Catching up

[From Bill Powers (980209.2010 MST)]

Turns out I was kicked off the list some time last week. I'm signed up
again, and will just reply to a few highlights.

Martin Taylor 980201 10:40

The "observed relationship between inputs and outputs" is the action
of the subject that indicates the subject saw or did not see (hear, taste,
smell...) something when a real physical "something" is known to have
been available to see (hear, taste, smell...). You seem to be saying that
whether the subject can taste salt at a concentration of x per million
is a characteristic of the environment, not of the subject. If that is
so, you might explain what it is about the environment that does the
tasting and allows the subject to act according to whether the environment
tasted the salt or not.

There is an observable relationship between the concentration of sodium
chloride and a statement "yes/no, I can/can't taste something salty."
However, we do not know what, in the environment, corresponds to the taste.
Whatever it is, it is also affected by magnesium chloride and several other
substances, and the presence of sugar can alter it, too. The missing factor
here is the relationship between salt concentration and whatever it is that
corresponds to the salt-sensation.


---------

A comment on Standard HPCT and perceptual functions.

According to "Standard" HPCT, the hierarchy consists of a series of levels
of elementary control units (ECUs). Each ECU contains an input function
that accepts possibly many inputs from the sensors or from the outputs
of lower level ECUs. The output of an input function is a perceptual
variable. The sensors transform some physical variable in the environment
into a signal value suitable for input to an ECU input function.
Perceptual variables are therefore functions of physical variables.

This is true, but they are not _single-valued_ functions. You may be
shooting for an 8, but your reference-level for eight-ness can be satisfied
by 6 and 2, 5 and 3, 4 and 4, 3 and 5, or 2 and 6 on the dice.
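As a minimal sketch, assuming a toy perceptual function that simply sums the
two die faces (the function and values are illustrative only, not anything
from a PCT model), the many-to-one character of the mapping can be made
explicit:

from itertools import product

def perceptual_function(die1, die2):
    # Toy perceptual input function: perceive only the sum of the two dice.
    return die1 + die2

reference = 8  # the "eight-ness" being shot for

# Every physical state (pair of die faces) that satisfies the reference:
satisfying_states = [(a, b) for a, b in product(range(1, 7), repeat=2)
                     if perceptual_function(a, b) == reference]

print(satisfying_states)   # [(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)]
# Five distinct physical states yield the same perceptual value, so the
# perception cannot be inverted to recover the physical state that produced it.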

According to "Standard" HPCT, the form of a perceptual input function is
determined by reorganization, which in turn depends on the degree to which
intrinsic variables are controlled to be near their inborn reference
levels. This implies that the perceptual input functions of (at least)
low-level ECUs change slowly, if at all, over time. Hence, again according
to "standard" HPCT, there should exist functions that relate the values
of perceptual signals to the values of physical variables.

Yes, but I repeat that these functions need not be single-valued. Neither
do they have to be stable over time. Furthermore, the relationship between
physical variables in the environment and intrinsic variables is highly
variable -- how many ways are there to stay warm? The environment has far
more degrees of freedom than our perceptions do, and far more than are
monitored by the reorganizing system. So there is no unique relationship
between intrinsic variables and the perceptual variables we learn to
control, or between either class of variables and those we call "physical"
variables.

I happen to believe that Standard HPCT is inadequate in this linkage
between perceptual and physical variables, and I said so. There are many
reasons why I think it inadequate, based on the results of "conventional
research", some of which I have mentioned in previous messages. Bill P
agrees with me that it is inadequate, for the same reasons (980130.0517
MST) "I can reproduce the same perception I had before, but there is
no guarantee that in doing so I am producing the same situation as a
physicist would deduce it."

I don't see how this comes out with Standard HPCT being inadequate. For me,
this is an insight into the nature of perception. We make our perceptions
repeat, but this does not imply that we make the physical reality which
underlies them repeat. Why do you see this as an inadequacy in HPCT? What
would an adequate theory say? And when did I agree that this observation
represented an inadequacy of HPCT?

Most psychologists know this, from the results of "conventional"
psychophysical studies, but so far as I can interpret Bill's message,
his agreement is based on his powerful intuition, since to him the
results of non-PCT experiments are inadmissible.

How do they know this? Psychophysical studies can't tell you the
relationship between a physical stimulus and a given perceptual signal.
They can tell you that when a given physical stimulation gets small enough,
a person will report that some aspect of experience has disappeared. They
can't tell you what perception is being caused by the physical stimulation,
or whether it is that stimulation or some other variable that depends on it
that is involved.

Since I don't think I ever agreed that the statement you cite constitutes
an inadequacy of HPCT, this paragraph is meaningless.

I don't think I ever said that the results of non-PCT experiments are
inadmissible simply because of not being PCT experiments. Surely you can
see why I would be skeptical if someone said he had measured the response
to a stimulus, if no attempt had been made to see if the subject was in
fact perceiving that stimulus or only something dependent on it. As far as
I know, the Test is the only way to do this, so any experiment that doesn't
include something equivalent to the Test would be inadmissible -- in any
court, not just the court of PCT.

The ways in which a fixed function relating a perceptual signal to
physical variables is inadequate (according to "conventional" studies)
can be summarized in the oversimplification that the output of the
sensors (and by extension the output of any perceptual function) is
dependent on the context extended over time.

This depends entirely on what you think of as the perceptual function and
its mathematical representation. If the mathematical representation is a
constant multiplier, but the actual physical function adapts over time or
exaggerates changes, then the mathematical representation is incorrect. A
correct definition of the perceptual function would include time-dependent
parameters. Likewise, if context matters, the perceptual function has to
include terms that change when context changes: in other words, the initial
definition of the perceptual function did not include enough input variables.

I have added that it is
possible that the output of the perceptual input function of an ECU
may depend on whether that ECU is actually controlling the perception
at the tested moment--but this addition is purely intuitive, and is made
largely to warn people further against taking seriously the results
of experiments that purport to "measure" the magnitudes of perceptual
variables.

It may depend on the phases of the moon, too. If it turns out that other
variables influence the measure of the controlled variable, then obviously
those other variables have to become part of the definition.

Your definition of a "fixed" function is what is creating the problems you
mention. It was never my intention to define functions that way.

Anyway, the upshot is that I think that (as a minimum) experiments that
put a bound on the ability of people to perceive changes in a perceptual
variable are useful to PCT.

Yes, that would be useful -- provided you can show what perception is
influenced by changes in a physical variable. My point is that
psychophysical experiments do not do this. You still have to ask the
subject what is being perceived, and the subject can only try to tell you
in words. At least, using the Test, you don't have to use words.
------------------------------
Later, replying to Perper:

What you are pointing out is what I have pointed out elsewhere, and Bill P
has pointed out many times--one cannot define a present set of physical
variables that specify a present perception, except in the simplest cases.
History matters, and current context matters.

What I believe is that neither history nor context matter, unless the
perceptual function is specifically organized to take them into account. My
point is that you don't _need_ to define a present set of physical
variables that specify a present perception. All that matters to the
organism is controlling the perception. Even the variables controlled by
the reorganizing system are not related uniquely to any physical state of
affairs. The reorganizing system wants to stay warm; it doesn't care
whether this is done by moving to Southern California, burning the
furniture, buying Inuit clothing, or sitting next to a nice warm spent
reactor rod. There are far more ways to correct errors than there are
errors to be corrected (a few of which prove to be mistakes, of course).
---------------------------------------------------------------------------
Jeff Vancouver 980202.15:30 EST--

(Commenting to Martin Taylor)

You (or others) may object to my use of the word prediction in introducing
this last paragraph, but that seems an apt word as the gunner (or radar
system) predicts where the target will be when the shell is on a plane
perpendicular to the target. Prediction is required to deal with the lag
in the system (due to the physics of shell travel). And this discussion is
required to counter the "too slow" argument.

The world needs to be "predictable" in the sense that the same action will
have the same direction of effect on the controlled variable, and any
disturbances will be limited in magnitude and speed. This predictability,
however, is not involved in "making predictions." It's merely a statement
of what is required for a control system (or any fixed design) to continue
working in a given environment. It's necessary to be able to predict that a
meteor is not going to destroy the control system before it can act, but
this doesn't mean that the control system works by calculating the
probability that a meteor will strike in the next x seconds.

As to whether any prediction actually occurs in firing a gun at a moving
target, this depends on the individual or on the designer of the radar
control system. When I learned anti-aircraft gunnery in the Navy, I was not
told to make any predictions of target movements. I was told to try to get
the stream of tracers to intersect the target, or alternatively (when not
using tracers) to keep the target on the "appropriate" ring of the sight.
Both strategies involved control of present-time perceptions, not prediction.
Fire-control radar (in WW 2, where most current myths about control theory
originated), however, can't see the speeding projectiles, so it can't
estimate miss distances. The only possible improvement over blind firing,
then, is to try to estimate an aiming direction that will end up in a
collision between the projectile and the target. If this really worked
well, there would be a ratio of one projectile to one target, but of course
the actual ratio is thousands, or hundreds of thousands, of projectiles to
one near-enough miss.

What needs to be predictABLE is the effect of action on input, although
this doesn't mean that this effect must actually be predicted by the
control system. What does NOT need to be predictable OR predicted is the
presence, amount, or direction of disturbances that can alter the
controlled variable independently of the system's action. Those, provided
they are not too violent, can be taken care of without prediction. Of
course if they are too violent, the control system will fail, however it is
designed.
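To see this concretely, here is a minimal simulation sketch (an illustration
only, with arbitrary gain, reference, and step values, not a model of any
particular experiment): a simple integrating control system holds its input
near the reference while an irregular disturbance pushes on it, and nothing in
the loop represents or forecasts the disturbance.

import math
import random

dt = 0.01          # time step, arbitrary
gain = 50.0        # output gain, arbitrary illustrative value
reference = 10.0   # reference level for the controlled perception

qi = 0.0           # controlled input quantity
output = 0.0       # system output
errors = []
random.seed(1)
phase_drift = 0.0

for step in range(5000):
    t = step * dt
    # An irregular disturbance the controller never sees or anticipates:
    phase_drift += random.uniform(-0.02, 0.02)
    disturbance = 5.0 * math.sin(0.7 * t + phase_drift)

    perception = qi                   # perceptual function: identity here
    error = reference - perception    # comparator
    output += gain * error * dt       # integrating output function
    qi = output + disturbance         # environment: action plus disturbance

    errors.append(abs(error))

print("mean |error| over the run:", sum(errors) / len(errors))
# The error stays small even though the loop knows nothing about the
# disturbance waveform; all that must remain regular is the effect of the
# output on the input (qi = output + disturbance).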
---------------------------------------------------------------------------
Dan Miller (980204.1645) --

For example, there exists a statistical relationship
between amount of reading (IV) and political progressivism (DV)
(r = 0.62 and it has been replicated). To me, this is very
interesting. I wouldn't want to argue from this statistical
relationship to a full-fledged theory, but how do we explain it?

How do you explain what? The fact that the correlation is only 0.62? That
one's easy: this fact is false for almost as many people as it is true. The
underlying statement (more reading goes with progressivism)? That's even
easier: the statement of fact is wrong too often to allow it into any
scientific discussion. If your conclusions depended on two facts like that,
they would most probably be false.

So, how do we make sense of this intriguing finding?
What is it about the act of reading (and reading a lot) that creates
a context within which progressive political ideas can generate and
thrive?

The problem here is that you're trying to generate a general reason for
this finding when it's not generally true. Also, you're doing exactly what
users of statistics claim they're not doing: assigning causality to a
correlation. How do you know it's not progressivism that leads to more
reading? How do you know that this trend doesn't simply reflect the fact
that there are more avid readers of the New Republic than avid readers of
Soldier of Fortune?

No matter what explanation for this "fact" you come up with, it's going to
predict incorrectly in a very large number of cases -- not just now and
then, but almost half of the time. In my book that makes it pretty useless.
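For anyone who wants to put a number on how often such a prediction fails,
here is a small sketch, assuming bivariate-normal data with r = 0.62 and a
simple above-or-below-average prediction rule (both assumptions are
illustrative, not anything from the study Dan cites): it counts how often
predicting the DV from the IV goes wrong for an individual.

import math
import random

r = 0.62           # the reported correlation
random.seed(0)
n = 100_000
misses = 0

for _ in range(n):
    reading = random.gauss(0.0, 1.0)   # standardized "amount of reading"
    progressivism = r * reading + math.sqrt(1 - r * r) * random.gauss(0.0, 1.0)
    predicted_above_average = reading > 0
    actually_above_average = progressivism > 0
    if predicted_above_average != actually_above_average:
        misses += 1

print("fraction of individuals for whom the prediction fails:", misses / n)
# Under these particular assumptions the simple prediction is wrong for
# roughly a quarter to a third of the individuals -- often enough that an
# explanation built on the correlation fails for a large share of cases.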
----------------

Some of the discussion on this list is about control systems and not
living control systems. A little engineering can go a long way, but
it is ONLY a metaphor, right?

No. Absolutely not. It is no more metaphorical than saying that the heart
is a pump. The heart IS a pump: it pumps. People ARE control systems: they
control. The engineering analysis of closed-loop systems tells us
literally, not metaphorically, the laws governing such organizations.
Control theory deals with facts of nature on the same levels that physics
deals with them.
All metaphors are equal, but some are more equal than others.

A few of us are
interested in social interaction. It's not always easy to do the
test in such circumstances. It's not always clear how it is done in
your controlled situations.

If you understand how to do the Test, it's very easy to do under any
circumstances. You don't have to create big disturbances; all you want to
know is what changes in the environment result in actions that oppose the
changes (successfully). You don't have to keep anyone from succeeding at
control. The test works best when the person controls as perfectly as
possible all during it. This is why ALL pct experiments involve the use of
disturbances, and why the disturbances are always constructed so they are
easy to resist. Every PCT experiment is a continuous Test for the
Controlled Variable.
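As a minimal sketch of that logic (illustrative names and thresholds only,
not a published procedure), the core of the Test can be written as a
comparison between how much the disturbance varied and how much the candidate
variable was allowed to vary while it was being disturbed:

def run_test(candidate_values, disturbances, stability_threshold=0.2):
    # candidate_values[i] is the observed value of the candidate controlled
    # variable at step i, while disturbances[i] was being applied. If the
    # variable is controlled, it varies far less than the disturbance would
    # make it vary unopposed.
    disturbance_range = max(disturbances) - min(disturbances)
    observed_range = max(candidate_values) - min(candidate_values)
    if disturbance_range == 0:
        raise ValueError("The Test needs a nonzero disturbance.")
    stabilization = 1.0 - observed_range / disturbance_range
    controlled = stabilization > (1.0 - stability_threshold)
    return stabilization, controlled

# Example: a gentle disturbance sweeping over several units while the
# candidate variable barely moves -- evidence that it is under control.
disturbances = [5 * (i % 20 - 10) / 10 for i in range(200)]
candidate = [0.1 * d for d in disturbances]   # actions cancel ~90% of each push
print(run_test(candidate, disturbances))      # high stabilization -> controlled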

For example, can your experiments and
demonstrations be done without prior instruction (that damned social
stuff, again)? Is there a need to inform people what they are going
to be perceiving, and what they want to perceive? This is a very
troubling question, is it not?

Not to me. We don't need to communicate verbally with anyone to do the
Test. If we have enough time and equipment, we can wait for natural
disturbances to occur, and make and test reasonable guesses about
controlled perceptions with no interaction between us and the observed
person. In fact it's through watching what people naturally control that we
get our ideas for instrumented experiments.

If we wanted to, we could just go around observing people controlling
things. But when we think we see something being controlled, we naturally
want to get a closer look, and that calls for instrumentation and
simplification of the circumstances. Of course then we have to explain what
we are asking the person to control. The instructions are usually very
brief: use that mouse to keep that object on that other object, or to keep
this sound the same, or to make this object look like that object. We don't
tell anyone HOW to do that. How they do it is what we want to measure.

The only behavior that people can produce under any conditions is very
simple: push, pull, twist, and squeeze. That's it, there ain't no more. All
the rest of what they do is concerned with controlling perceived
consequences of pushing, pulling, twisting, and squeezing. So a simple
tracking experiment has all the same elements of behavior we see in any
context; the main difference is in the kinds of perceptions we see being
controlled. For practical reasons we have to use simple perceptions that we
can show on a computer screen (and that, with our limited abilities, we
know how to program). Plus, of course, the fact that we're limited to the
visual mode with a few forays into sound, and one or two controllers at a
time. But that's not a limit in principle, only in practice. Give me a
virtual-reality setup and I'll give you control experiments with much more
complex environments.

Instructions simply describe to the other person a temporary goal which we
ask be adopted. Their exact wording is unimportant; we can quickly see if
the person understood what to control, which almost all participants do. We
could dispense with instructions about what to do by asking the person to
discover what the apparatus does. But that would be studying something else.

We could also ask the person to experiment with the apparatus and pick out
something to make it do. It wouldn't matter to us what the person selected;
we can measure the parameters of any control behavior that we have
quantitative data about. It's not as if some things people would do would
be control behaviors, while the rest would be something else. Everything
the person does would be control behavior. We don't have to tell people to
perceive or control; they never do anything else.
-------------------------------------------------------------------------
Martin Taylor 980206 00:20 --

Now I play you a snippet of Mozart and then a snippet of Hendrix. You say
"the first was Mozart." I do it again, and you say "the second was Mozart".
After I have played you 100 pairs of snippets, sometimes playing Mozart
first and sometimes Hendrix first, you have told me 100 times
correctly which one was Mozart. I conclude that you can discriminate
pretty well between Mozart and Hendrix with the duration of snippet I used.

This tells us that you can give responses that agree with our opinions
about the source of the music. It doesn't, however, tell us anything about
what you're actually perceiving that allows you to give these "correct
responses." The distal stimulus is identified; the proximal stimulus
remains unknown.

Best,

Bill P.

[from Jeff Vancouver 980210.10:10 EST]

[From Bill Powers (980209.2010 MST)]

What needs to be predictABLE is the effect of action on input, although
this doesn't mean that this effect must actually be predicted by the
control system. What does NOT need to be predictable OR predicted is the
presence, amount, or direction of disturbances that can alter the
controlled variable independently of the system's action. Those, provided
they are not too violent, can be taken care of without prediction. Of
course if they are too violent, the control system will fail, however it is
designed.

I think the basic point that I am trying to make is that complex systems
(i.e., humans) _can_ make predictions and that the operationalization of
that prediction is perfectly compatible with PCT. I am _not_ trying to say
that prediction is inherent in all control systems; that control systems
work by prediction. This seems to be the perception of my point that
others are attempting to control.

I have suggested that predictability is available in many (but certainly
not all) disturbances and actions. My first experiment (the so-called
Vancouver experiment, or spiral experiment) attempted to show that
humans could do better than random when controlling a predictable
disturbance even if the effect of that disturbance on the variable is no
longer available for perception. The counter-argument I got was that much
better control was had when the perception of the variable was available
(i.e., prediction of the disturbance was unnecessary). But that was not my
point. I did not doubt that on-line control is better than control based
on predictable disturbances.

Bill P. also constructed a set of ECUs that acted "as if" they were
predicting the disturbances. This is not a counter-argument to my
argument. I am saying ECUs are involved. However, my follow-up is to show
how their organization only makes sense when prediction of disturbances is
required. Hence, the systems were constructed (i.e., organized) because of
the predictability and would have been organized differently if there were
no predictability; also, the reorganizing systems required to make
predicting set-ups are not so simple that a simple organism could do it. I
have not figured out how to do that yet (also, these are tangential issues
to the main one that systems can act on anticipated perceptions).

The second type of predictability, the one to which Bill refers here, is
the predictability of the action on the variable. It means that the system
can (but it need not, and often does not) predict the results of
actions on the variable (actually on the perception) because the world holds
this feature. Thus, if the perception of the current state is unavailable,
or lags in the physical or social systems involved require the control
system to commit scarce resources (i.e., actions, arrows), it will be more
efficient if it can make predictions of the effects of those actions than a
system that cannot make predictions. My second experiment (the Star Trek
experiment) was designed to show that humans have this capacity. I am not
sure that I have seen arguments that suggest the results of that experiment
could be interpreted in another way. So far, the arguments seem to be that
I have made it more complex than it needs to be. The other argument (or
manifestation of the same argument) is that a perfectly legitimate PCT
explanation is available, which is that the system has a reference signal
which causes the head to turn down to look when it does not hear the sound
of the hammer drop. But my point is that prediction is a perfectly
legitimate PCT concept. Specifically, that the reference signal which
causes the head to turn is the manifestation of that prediction algorithm,
and that the prediction algorithm is probably in the input function.

Your above statement about predictability indicates that you do not see the
environment as completely random (I never thought you did). The question
is, can the complex system, built of nothing but control systems (and
memory stores, if they are separate entities), take advantage of that
predictability? My answer is yes. We see it all the time. Writing this
passage is full of prediction. I am predicting that the reader can
understand English and that they can understand what I am trying to say. I
revise passages that I think are unclear, based on my model of the reader.
That is, I control the perceptions of the clarity of my writing without
actually including the audience in the loop (again, I do not say I am or
anyone is particularly good at this when compared to on-line interaction,
but we are better than random, which would look like this:
ksahfdoshoighjkhgaf'dfjwpo aahdsofht).

What I get back is that the appearance of predictability in the system is
an illusion. I take the reaction to my comments as a manifestation of
the respondents' models of what others mean by prediction. Others may very
well mean something different than I do.

Later

Jeff
A great many people think they are thinking when they are merely
rearranging their prejudices.
                -- William James

[From Rick Marken (980210.0815)]

Now that Bill Powers (980209.2010 MST) is back it might be
a tad more difficult for Martin Taylor, Bruce Abbott, Dan
Miller, et al to blame what they don't like about PCT on me.
I realize that many of the implications of PCT, such as the
behavioral illusion, are very unpleasant for those who have
a stake in conventional social science research and pure
"observation". But don't blame me for this; I'm just the
messenger. If you don't like the message of PCT, please
blame PCT, not me.

Thanks

The messenger


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[Martin Taylor 980211 0:21]

Rick Marken (980210.0815)

Now that Bill Powers (980209.2010 MST) is back it might be
a tad more difficult for Martin Taylor, Bruce Abbott, Dan
Miller, et al to blame what they don't like about PCT on me.

Huh? What is it about PCT I am supposed not to like? I guess you will have
to use the method of levels on me to make me aware of something that I
perceive but am unaware of.

Sorry to be so unknowledgeable about my likes and dislikes.

... But don't blame me for this; I'm just the
messenger. If you don't like the message of PCT, please
blame PCT, not me.

The message of PCT is a wonderful message. It is the translation of that
message into the language of the messenger that sometimes is troublesome.

Martin

[From Bill Powers (980210.1559 MST)]

Jeff Vancouver 980210.10:10 EST --

I think the basic point that I am trying to make is that complex systems
(i.e., humans) _can_ make predictions and that the operationalization of
that prediction is perfectly compatible with PCT.

Would you explain what operationalization means?

I am _not_ trying to say
that prediction is inherent in all control systems; that control systems
work by prediction. This seems to be the perception of my point that
others are attempting to control.

I have suggested that predictability is available in many (but certainly
not all) disturbances and actions.

I think you need to be more precise about your terms. To say that something
is predictable entails the following assumptions, more or less cumulatively:

1. The phenomenon is regular and lawful rather than random. Since this
applies to everything above the level of quantum mechanics, predictability
is essentially always present.

2. Enough information is available to allow making a prediction. If not
enough information is available, one cannot predict even a predictable
(that is, regular) phenomenon.

3. Given that enough information is available from which to make a
prediction, the capacity exists for generating a prediction from the
available information. Assuming this implies also that the information can
be perceived with sufficient accuracy. In principle, from seeing a piece of
paper blowing across the road, the wind velocity and its effect on a moving
car could be calculated. However, it is unlikely that this information
could be sensed or used with enough accuracy to be useful.

4. Given the ability to sense accurately enough and apply all the physical
laws that are relevant, the brain can compute fast enough and accurately
enough to provide a prediction before the data become obsolete. Also, it is
assumed that if an accurate prediction can be made, it can be converted
accurately into a prescribed behavior, through sending signals to muscles.

5. Prediction always relies on the assumption that all else (not taken into
account by the predictive method) is equal. This rules out the use of
predictions in any case where independent disturbances can affect the
outcome on a time-scale comparable to the time needed to update the
prediction.

I think we have to ask ourselves why prediction seems so important to some
people. I submit that one reason is that many people can't imagine how
control could possibly work without it. They use prediction as a blanket
explanation of control -- a spurious explanation, offered because it's the
only one they know. Thus we see explanations of catching a baseball that
rely on computing a trajectory and predicting where the ball will come
down, then running to that place. As we know, there is a far simpler
explanation that doesn't require any predictions, a model that works with
marvellous reliability even though the computations involved are very crude.

Another reason that prediction seems important is that we all wish we could
predict the future. Someone who could predict only 20 seconds into the
future could make a fortune at Las Vegas. A prediction good for two minutes
would bankrupt the racetracks. If we could predict the market one day in
advance we could reap billions of dollars. The ability to predict would
save us from countless accidents, and allow us to be ready in time when an
unexpected danger arose. It's human nature, I suppose, to conclude that
because prediction would be very important and useful, we must be able to
do it.

Clearly, people _do_ make predictions, and try to use them as the basis for
control. However, many cases (like catching the baseball) are not only
possible to explain in other ways, but are better explained in other ways,
and better DONE in other ways. An outfielder who had to stop and calculate
the ball's trajectory before moving would miss a lot of easy catches. He
could try it, but just entering the numbers in his pocket calculator would
probably take up more time than is available.

Anyway, enough for now. Your turn.

Best,

Bill P.

My first experiment (the so-called
Vancouver experiment, or spiral experiment) attempted to show that
humans could do better than random when controlling a predictable
disturbance even if the effect of that disturbance on the variable is no
longer available for perception. The counter-argument I got was that much
better control was had when the perception of the variable was available
(i.e., prediction of the disturbance was unnecessary). But that was not my
point. I did not doubt that on-line control is better than control based
on predictable disturbances.

Your point was that acting on the basis of predictions, however poor the
result, is better than acting at random. My objection was that the achieved
quality of behavior is so far short of the threshold needed for a viable
system that we can dismiss it as being unimportant. The result, I agree,
isn't completely random. But it might as well be.

Bill P. also constructed a set of ECUs that acted "as if" they were
predicting the disturbances. This is not a counter-argument to my
argument. I am saying ECUs are involved. However, my follow-up is to show
how their organization only makes sense when prediction of disturbances is
required.

Can you state the conditions under which prediction of disturbances is
required?

Hence, the systems were constructed (i.e., organized) because of
the predictability and would have been organized differently if there were
no predictability; also, the reorganizing systems required to make
predicting set-ups are not so simple that a simple organism could do it. I
have not figured out how to do that yet (also, these are tangential issues
to the main one that systems can act on anticipated perceptions).

You're speaking as if predictability -- that is, regularity of physical
laws -- is unusual. In cases where there is no predictability, there can be
no control. But this does not mean that where predictability exists,
control is best achieved through making predictions.


The second type of predictability, the one to which Bill refers here, is
the predictability of the action on the variable. It means that the system
can (but it need not, and often does not) predict the results of
actions on the variable (actually on the perception) because the world holds
this feature. Thus, if the perception of the current state is unavailable,
or lags in the physical or social systems involved require the control
system to commit scarce resources (i.e., actions, arrows), it will be more
efficient if it can make predictions of the effects of those actions than a
system that cannot make predictions. My second experiment (the Star Trek
experiment) was designed to show that humans have this capacity. I am not
sure that I have seen arguments that suggest the results of that experiment
could be interpreted in another way. So far, the arguments seem to be that
I have made it more complex than it needs to be. The other argument (or
manifestation of the same argument) is that a perfectly legitimate PCT
explanation is available, which is that the system has a reference signal
which causes the head to turn down to look when it does not hear the sound
of the hammer drop. But my point is that prediction is a perfectly
legitimate PCT concept. Specifically, that the reference signal which
causes the head to turn is the manifestation of that prediction algorithm,
and that the prediction algorithm is probably in the input function.

Your above statement about predictability indicates that you do not see the
environment as completely random (I never thought you did). The question
is, can the complex system, built of nothing but control systems (and
memory stores, if they are separate entities), take advantage of that
predictability? My answer is yes. We see it all the time. Writing this
passage is full of prediction. I am predicting that the reader can
understand English and that they can understand what I am trying to say. I
revise passages that I think are unclear, based on my model of the reader.
That is, I control the perceptions of the clarity of my writing without
actually including the audience in the loop (again, I do not say I am or
anyone is particularly good at this when compared to on-line interaction,
but we are better than random, which would look like this:
ksahfdoshoighjkhgaf'dfjwpo aahdsofht).

What I get back is that the appearance of predictability in the system is
an illusion. I take the reaction to my comments as a manifestation of
the respondents' models of what others mean by prediction. Others may very
well mean something different than I do.

Later

Jeff
A great many people think they are thinking when they are merely
rearranging their prejudices.
               -- William James

[From Rick Marken (980211.0840)]

Martin Taylor (980211 0:21) --

Huh? What is it about PCT I am supposed not to like? I guess you
will have to use the method of levels on me to make me aware of
something that I perceive but am unaware of.

The method of levels is not necessary. The Test is appropriate
here. One thing you don't like about PCT is the implications
of the _behavioral illusion_ for conventional psychological
research; you act as though the implications of this illusion
(the existence of which is known only because of PCT) are a
disturbance to some perception you are controlling. It seems to
me that the perception you are controlling is rather obvious:
"the merits of conventional psychophysical research" (I've
been Testing this hypothesis about what you are controlling
for several years now and it has yet to fail the Test).

There is nothing "wrong" with controlling for this variable,
by the way. I just wish you would stop blaming one source
of disturbance to it (me;-)) for the fact that the behavioral
illusion is a disturbance to what you are controlling.

Thanks again

The messenger


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[from Jeff Vancouver 980211.10:30 EST]

[From Bill Powers (980210.1559 MST)]

Jeff Vancouver 980210.10:10 EST --

I think the basic point that I am trying to make is that complex systems
(i.e., humans) _can_ make predictions and that the operationalization of
that prediction is perfectly compatible with PCT.

Would you explain what operationalization means?

In this case operationalization means a model, in the sense of the simulations
that you, Rick, Tom, etc. build.

I think you need to be more precise about your terms. To say that something
is predictable entails the following assumptions, more or less cumulatively:

For sure.

1. The phenomenon is regular and lawful rather than random. Since this
applies to everything above the level of quantum mechanics, predictability
is essentially always present.

2. Enough information is available to allow making a prediction. If not
enough information is available, one cannot predict even a predictable
(that is, regular) phenomenon.

These two premises, which I would agree are true, are prerequisites to the
capacity of prediction in complex systems. I rarely see them in threads on
this list. Instead, the assumption is usually that the disturbances are
random.

3. Given that enough information is available from which to make a
prediction, the capacity exists for generating a prediction from the
available information. Assuming this implies also that the information can
be perceived with sufficient accuracy. In principle, from seeing a piece of
paper blowing across the road, the wind velocity and its effect on a moving
car could be calculated. However, it is unlikely that this information
could be sensed or used with enough accuracy to be useful.

When you say "capacity" in point 3, do you mean a system can be built which
has that capacity (that any system can exist with this property), or are
you saying human systems have that capacity (that a particular system has
the capacity)? Obviously, if you mean the latter, the former is true.
Given what you say later, I am assuming you mean the latter.

The second half of the statement is where we start getting into trouble.
You are making an evaluation of the usefulness of the capacity. It is here
that we disagree (but perhaps not as dramatically as it would seem at
first). That is, I would say that the prediction capacity would be very
useful in some circumstances. However, I think one point that you have
made very well in the past is that it is less useful than many would think.
This was a critical selling point for your theory - that individuals could
act well in random (unpredictable) circumstances. You have made that
point with me; anyway, I am moving on.

The issue here may be that on a continuum of usefulness, you would put
prediction way down on the low end of the scale. But would you put it at
the zero point (or less)? I think the perspective matters greatly here.
When talking about the lower-level systems, prediction is worse than
useless. However, when talking about program and higher levels, prediction
is more prevalent (although how much more is clearly debatable).

4. Given the ability to sense accurately enough and apply all the physical
laws that are relevant, the brain can compute fast enough and accurately
enough to provide a prediction before the data become obsolete. Also, it is
assumed that if an accurate prediction can be made, it can be converted
accurately into a prescribed behavior, through sending signals to muscles.

This point clearly (to me anyway) implies the lower (faster) levels of the
hierarchy. But more problematic is the idea embedded here that prediction
"prescribes behavior" all the way down to the muscles. What I mean by
prediction ends with a perception. The system acts on that perception. So
if I predict that a meteor is about to fall on my desk, I will use my
on-line senses to navigate my way out of the room.

5. Prediction always relies on the assumption that all else (not taken into
account by the predictive method) is equal. This rules out the use of
predictions in any case where independent disturbances can affect the
outcome on a time-scale comparable to the time needed to update the
prediction.

This merely makes it stochastic, not useless. But it is this concern that
drives our attempts to understand the moderators - the contingencies that
are at play.

I think we have to ask ourselves why prediction seems so important to some
people. I submit that one reason is that many people can't imagine how
control could possibly work without it. They use prediction as a blanket
explanation of control -- a spurious explanation, offered because it's the
only one they know. Thus we see explanations of catching a baseball that
rely on computing a trajectory and predicting where the ball will come
down, then running to that place. As we know, there is a far simpler
explanation that doesn't require any predictions, a model that works with
marvellous reliability even though the computations involved are very crude.

Although it seems like a reasonable question, the answer you give here
speaks to the danger of the question. This first answer, which I think is
true, speaks more toward your motivation than mine. You and Rick and
others have been fighting this belief for so long that other reasons, which
I will get to in a moment, pale in comparison. Given this understanding
(i.e., model) of the audience, you naturally attempt to argue on the side
of prediction's uselessness and lack of parsimony.

Another reason that prediction seems important is that we all wish we could
predict the future. Someone who could predict only 20 seconds into the
future could make a fortune at Las Vegas. A prediction good for two minutes
would bankrupt the racetracks. If we could predict the market one day in
advance we could reap billions of dollars. The ability to predict would
save us from countless accidents, and allow us to be ready in time when an
unexpected danger arose. It's human nature, I suppose, to conclude that
because prediction would be very important and useful, we must be able to
do it.

By the last sentence I am taking you to mean that you believe prediction
could be important and useful. I take from the first part that you
think we greatly overestimate our ability to do it. This speaks to an
earlier point you made, which I paraphrase: "assuming that we had the
information ...." One clear phenomenon to me is that people make
predictions without the correct information or by applying the information
incorrectly (i.e., poor models), or without any applicable information at all
(i.e., it is not predictable given the measures one is using, or at all).
This is the question of the accuracy of the prediction.

Clearly, people _do_ make predictions, and try to use them as the basis for
control. However, many cases (like catching the baseball) are not only
possible to explain in other ways, but are better explained in other ways,
and better DONE in other ways. An outfielder who had to stop and calculate
the ball's trajectory before moving would miss a lot of easy catches. He
could try it, but just entering the numbers in his pocket calculator would
probably take up more time than is available.

So here is the crux paragraph. You acknowledge that human systems are
capable of making predictions and, in fact, do make them. This is what I am
trying to understand. You all have been so strident about the latter point in
this paragraph (that most behavior can be explained without prediction or that
many predictions are not worth the effort) that you have ignored the idea
that prediction is a phenomenon of human systems. Hence, you have argued
against attempts to model that phenomenon using PCT (or is it HPCT?). Let
me define some terms. By prediction I mean the anticipated levels of
perceptions of controlled variables. By behavior I want you to be
sensitive to the level of explanation. That is, any behavior is probably
the result of several levels of control systems operating. Hence, my
leaving the room is a behavior driven by on-line (non-predicting) control
systems. But the reason I leave the room is because of a prediction - the
anticipation of a meteor crashing into my office.

Given the kinds of issues I deal with (e.g., job performance of managers),
the higher levels are my major concern. The models that are used to create
predictions are critical, I believe, to predicting the behavior of the
person. And interventions designed at improving those models, or reducing
their use are important. For example, the models that result in bias and
discrimination. There are even cases in which increasing their use is
important ("this is what could happen if you drive drunk"). Hence,
understanding the nature of the process is important to me. I think PCT
offers a fresh way, because you are right about many other psychologists,
to these types of issues.

Finally, don't ignore the idea that predictions are useful. I would rather
work for a business manager who has pulled out a calculator to make a
business plan (with its projections) than one who has not. Likewise, I
would guess (i.e., predict) that you would rather we interact as a PCT
model would imply, so that we can predict that trying to control another is
not easy.

Anyway, enough for now. Your turn.

Ball's in your court.

Sincerely,

Jeff

[Martin Taylor 980211 17:30]

Bill Powers (980209.2010 MST)

Turns out I was kicked off the list some time last week. I'm signed up
again, and will just reply to a few highlights.

Good to have you back. I was getting a bit worried that something might
have happened to you.

Martin Taylor 980201 10:40

The "observed relationship between inputs and outputs" is the action
of the subject that indicates the subject saw or did not see (hear, taste,
smell...) something when a real physical "something" is known to have
been available to see (hear, taste, smell...). You seem to be saying that
whether the subject can taste salt at a concentration of x per million
is a characteristic of the environment, not of the subject. If that is
so, you might explain what it is about the environment that does the
tasting and allows the subject to act according to whether the environment
tasted the salt or not.

There is an observable relationship between the concentration of sodium
chloride and a statement "yes/no, I can/can't taste something salty."
However, we do not know what, in the environment, corresponds to the taste.
Whatever it is, it is also affected by magnesium chloride and several other
substances, and the presence of sugar can alter it, too. The missing factor
here is the relationship between salt concentration and whatever it is that
corresponds to the salt-sensation.

What is "missing" about the observation (about the subject) that the subject
reports that s/he can detect a salt concentration difference of x% at a
concentration of y parts per million but not at (x/2)%?

What possible relevance could you attach to the fact that magnesium
chloride might have been used in a similar experiment, or, for that matter,
that a small electrical current in the tongue can also give a salty taste
sensation?

What we know from the experiment is that if the subject wants to control
whatever perception corresponds to sodium chloride concentration, acting
so as to affect sodium chloride concentration ought to work if the
disturbance is as much as x%, but may not if the disturbance is (x/2)%.
Perhaps the subject could counter the disturbance equally by altering
magnesium chloride concentration by z%. So what?
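To make the link between a detection threshold and control concrete, here is
a small sketch with toy numbers throughout (nothing taken from the
psychophysical literature or any CSG demo): a control loop whose perceptual
function cannot register deviations smaller than a threshold opposes a large
disturbance down to roughly that threshold, but lets a sub-threshold
disturbance pass entirely uncorrected.

def uncorrected_error(disturbance_size, threshold, steps=2000, gain=0.5):
    # Toy loop whose perceptual function ignores deviations smaller than
    # 'threshold' (a stand-in for a discrimination limit). Returns the error
    # left uncorrected at the end of the run. All values are illustrative.
    reference, output, qi = 0.0, 0.0, 0.0
    for _ in range(steps):
        qi = output + disturbance_size
        deviation = qi - reference
        perception = deviation if abs(deviation) >= threshold else 0.0
        output -= gain * perception
    return qi

threshold = 1.0   # detection limit ("x"), illustrative units
for d in (2.0, 0.5):
    left = uncorrected_error(d, threshold)
    print(f"disturbance {d}: corrected {d - left:.2f}, left uncorrected {left:.2f}")
# The loop opposes the large disturbance down to about the detection limit,
# while the sub-threshold disturbance ("x/2") passes through untouched.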

---------

A comment on Standard HPCT and perceptual functions.

According to "Standard" HPCT, the hierarchy consists of a series of levels
of elementary control units (ECUs). Each ECU contains an input function
that accepts possibly many inputs from the sensors or from the outputs
of lower level ECUs. The output of an input function is a perceptual
variable. The sensors transform some physical variable in the environment
into a signal value suitable for input to an ECU input function.
Perceptual variables are therefore functions of physical variables.

This is true, but they are not _single-valued_ functions. You may be
shooting for an 8, but your reference-level for eight-ness can be satisfied
by 6 and 2, 5 and 3, 4 and 4, 3 and 5, or 2 and 6 on the dice.

I was always under the impression that if C = A+B, C was indeed a
single-valued function of A and B. You must be thinking of C = sqrt(A+B), which
is not single-valued. If A+B=4, C could be either 2 or -2.

According to "Standard" HPCT, the form of a perceptual input function is
determined by reorganization, which in turn depends on the degree to which
intrinsic variables are controlled to be near their inborn reference
levels. This implies that the perceptual input functions of (at least)
low-level ECUs change slowly, if at all, over time. Hence, again according
to "standard" HPCT, there should exist functions that relate the values
of perceptual signals to the values of physical variables.

Yes, but I repeat that these functions need not be single-valued.

Pointless to repeat _that_. Perhaps you mean that a scalar single-valued
function of several arguments usually has a constant result over a
hypersurface of dimensionality N-1, where N is the number of arguments.
Perceptual Input Functions are like that, as a rule. There may be
exceptions, but I don't think they would be very interesting. So why
harp on what everyone takes for granted?

Neither
do they have to be stable over time.

True. I'm wondering where this is leading.

Furthermore, the relationship between
physical variables in the environment and intrinsic variables is highly
variable -- how many ways are there to stay warm? The environment has far
more degrees of freedom than our perceptions do, and far more than are
monitored by the reorganizing system. So there is no unique relationship
between intrinsic variables and the perceptual variables we learn to
control, or between either class of variables and those we call "physical"
variables.

True again. I wonder whether "motherhood" is at issue somewhere?

I happen to believe that Standard HPCT is inadequate in this linkage
between perceptual and physical variables, and I said so. There are many
reasons why I think it inadequate, based on the results of "conventional
research", some of which I have mentioned in previous messages. Bill P
agrees with me that it is inadequate, for the same reasons (980130.0517
MST) "I can reproduce the same perception I had before, but there is
no guarantee that in doing so I am producing the same situation as a
physicist would deduce it."

I don't see how this comes out with Standard HPCT being inadequate.

As a straight quote, it doesn't. But remember the context. You were agreeing
with the following:

+Martin Taylor 971231 17:45
+The question arises as to whether the perceptual input functions operate
+the same way when the resulting perception is being controlled as when it
+isn't. This issue is not ordinarily considered within HPCT, since normally
+the perceptual input function is taken to be whatever it is, and only the
+magnitude of its output is controlled. But it is an issue, one that might
+invalidate the uncritical use of the results of psychophysical studies
+to assess the elements of related control loops.

You said it cast doubt on the psychophysics, not on PCT. I wrote what you
quoted to point out a possible deficiency in the psychophysics, so of
course I agree with you that far. What I also believe is that it renders
inadequate the version of HPCT in which the _nature_ of a perceptual
input function is unaffected by whether the perception is controlled. And
that in standard HPCT, perceptual input functions exist. They are changeable
by reorganization, but not by whether their output is a controlled
versus an uncontrolled perception. But this isn't the core reason. See below.

... Psychophysical studies can't tell you the
relationship between a physical stimulus and a given perceptual signal.
They can tell you that when a given physical stimulation gets small enough,
a person will report that some aspect of experience has disappeared. They
can't tell you what perception is being caused by the physical stimulation,
or whether it is that stimulation or some other variable that depends on it
that is involved.

Quite so. It's the same when you manipulate an environmental variable in
"the Test". You can't tell whether the controlled perception is being
caused by the physical stimulation or by some other variable that depends
on it. All you can tell is that something highly correlated with the
variable you try to disturb is being controlled (or perhaps less highly
correlated if control appears to be poor). Relevance to the point?

I don't think I ever said that the results of non-PCT experiments are
inadmissible simply because of not being PCT experiments.

Good. Others have made that claim most forcefully. I'm glad you don't.

Surely you can
see why I would be skeptical if someone said he had measured the response
to a stimulus, if no attempt had been made to see if the subject was in
fact perceiving that stimulus or only something dependent on it. As far as
I know, the Test is the only way to do this, so any experiment that doesn't
include something equivalent to the Test would be inadmissible -- in any
court, not just the court of PCT.

I suppose you'll have to explain once more for the brain-addled how the
Test can distinguish whether the subject is controlling the variable
the experimenter tries to perturb or only something that depends on it.
I don't see how you could possibly tell, if the two have 100% correlation.

The ways in which a fixed function relating a perceptual signal to
physical variables is inadequate (according to "conventional" studies)
can be summarized in the oversimplification that the output of the
sensors (and by extension the output of any perceptual function) is
dependent on the context extended over time.

... if context matters, the perceptual function has to
include terms that change when context changes: in other words, the initial
definition of the perceptual function did not include enough input variables.

I thought you disallowed inputs that are the outputs of perceptual functions
at the same level, at least in "standard HPCT." Yet we know that the
neural signals at even the lowest levels do change as a function of the
levels of other neighbouring signals of the same kind. This is the same-time
context I am talking about. Either I don't understand "standard HPCT"
in that it _does_ allow the connections you objected to so strongly in our
"category perception" discussion, or you don't understand the kind of
context I adduce in saying that the perceptual function is affected by it.

I have added that it is
possible that the output of the perceptual input function of an ECU
may depend on whether that ECU is actually controlling the perception
at the tested moment--but this addition is purely intuitive, and is made
largely to warn people further against taking seriously the results
of experiments that purport to "measure" the magnitudes of perceptual
variables.

It may depend on the phases of the moon, too. If it turns out that other
variables influence the measure of the controlled variable, then obviously
those other variables have to become part of the definition.

Without changing the fundamental structure of the PCT hierarchy? How?

Your definition of a "fixed" function is what is creating the problems you
mention. It was never my intention to define functions that way.

Fine. I am henceforth assuming that standard HPCT includes the following:
(1) inputs to perceptual functions from the outputs of same-level perceptual
functions; (2) inputs that change the form of perceptual functions depending
on whether the output is a controlled variable; (3) perceptual functions
that reduce their output over time as a function of a constant input, and
that show "contrast" effects when their outputs change abruptly (in this
latter I am assuming they act as the neurophysiological measurements often
do).

I hope you agree that these are parts of standard HPCT, and that I
was wrong in thinking they were not:-)

-------------------------------------------------------------------------
Martin Taylor 980206 00:20 --

Now I play you a snippet of Mozart and then a snippet of Hendrix. You say
"the first was Mozart." I do it again, and you say "the second was Mozart".
After I have played you 100 pairs of snippets, sometimes playing Mozart
first and sometimes Hendrix first, you have told me 100 times
correctly which one was Mozart. I conclude that you can discriminate
pretty well between Mozart and Hendrix with the duration of snippet I used.

This tells us that you can give responses that agree with our opinions
about the source of the music. It doesn't, however, tell us anything about
what you're actually perceiving that allows you to give these "correct
responses." The distal stimulus is identified; the proximal stimulus
remains unknown.

Agreed. Relevance? The observation is (say) that with a clip lasting
0.3 seconds, the subject guesses correctly about 60% of the time. With
a clip of 1.5 seconds, 80%. With a clip of 10 seconds, 95%. Or something
like that. The natural suggestion to take away from that is that the
subject finds it easier to tell the two apart if given a longer sample,
which leads to the suspicion (if that's what the experimenter is interested
in) that some structural aspect of the music is important, rather than the
momentary sound quality (which is easily discriminated at 0.3 seconds, in
most cases). (I have no idea whether these numbers are realistic for any
particular study, but they make the point that perceptions on which you
can do psychophysics can occur at _any_ level of the hierarchy).

Glad to have you back. In your catching up, do you have any comments on
my Editorial for the special issue?

All the best.

Martin

[From Bill Powers (980212.0403 MST)]

Jeff Vancouver 980211.10:30 EST--

These two premises, which I would agree are true, are prerequisites to the
capacity of prediction in complex systems. I rarely see them in threads on
this list. Instead, the assumption is usually that the disturbances are
random.

That is because when we use regular disturbances, there's always someone
who says "Oh, the person is predicting the disturbance." So we use random
disturbances to show that the behavior doesn't depend on prediction.

In fact, a model constructed to fit behavior with a random disturbance
acting also predicts behavior just as well when the disturbance is a
regular sine-wave, without any change in parameters. People don't control
any better when the disturbance is regular, despite what theoreticians who
never do experiments may tell you. You might, with a bit of imagination,
find a slight improvement of performance with a slow regular sine-wave
disturbance. But not enough to get excited about.
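Here is a minimal sketch of what I mean (Python, purely illustrative; the
"fit" just picks loop parameters that handle the random disturbance well,
standing in for fitting the model to a real subject's handle record). The
same parameters are then run, unchanged, against a slow sine-wave disturbance:

import numpy as np

def smoothed_random(n, seed=0, width=200):
    # Low-pass filtered random numbers, roughly like the disturbance tables
    # used in tracking experiments.
    rng = np.random.default_rng(seed)
    raw = rng.standard_normal(n + width)
    kernel = np.hanning(width)
    return 100.0 * np.convolve(raw, kernel / kernel.sum(), mode="same")[:n]

def run_model(disturbance, gain, slowing, dt=1/60, reference=0.0):
    # Simple tracking loop: cursor = handle + disturbance; the handle is a
    # slowed (leaky) integration of gain * error.
    handle = 0.0
    cursor_trace = np.zeros_like(disturbance)
    for i, d in enumerate(disturbance):
        cursor = handle + d                  # controlled (perceived) variable
        error = reference - cursor           # comparator
        handle += dt * (gain * error - handle) / slowing
        cursor_trace[i] = cursor
    return cursor_trace

n = 3600                                     # one minute at 60 samples/sec
random_d = smoothed_random(n)
sine_d = 50.0 * np.sin(2 * np.pi * 0.2 * np.arange(n) / 60)  # slow regular sine

# Pick parameters on the random disturbance (a stand-in for fitting to a real
# subject's handle record), then run the SAME parameters on the sine wave.
rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
best = min(((g, s) for g in (5, 10, 20) for s in (2, 4, 8)),
           key=lambda p: rms(run_model(random_d, *p)))
for name, d in (("random", random_d), ("sine", sine_d)):
    print(f"{name:6s} disturbance: RMS tracking error {rms(run_model(d, *best)):.2f}")

The point is only structural: nothing in the loop predicts the disturbance,
so it makes no difference to the model whether the disturbance is random or
regular.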

When you say "capacity" in point 3, do you mean a system can be built which
has that capacity (that any system can exist with this property), or are
you saying human systems have that capacity (that a particular system has
the capacity). Obviously, if you mean the latter, the former is true.
Given what you say later, I am assuming you mean the latter.

I adopted a poor structure for this argument. Each point was supposed to be
understood as starting with a potentially (and probably) invalid assumption
about the human ability to predict. But the statements were made as
positive assertions of truth, so the intent gradually got lost as the
points built up.

The issue here may be that on a continuum of usefulness, you would put
prediction way down on the low end of the scale. But would you put it at
the zero point (or less)?

That depends. I think that some people, for psychological reasons, are very
concerned about predicting what's going to happen, so they can make
inflexible plans and stick to them no matter what happens. This gets
countless business managers in trouble, not to mention people who are
responsible only for their own welfare.

A simple prediction made under clear-cut circumstances, or made when the
consequences of being wrong are not severe, can be very useful. If it looks
like rain, you take an umbrella with you; if it doesn't rain, you haven't
incurred much of a cost even if you're wrong 75% of the time. But a complex
prediction that tries to take every factor into account can be worse than
useless. Having gone to the expense of gathering the data and working out
the prediction, you can hardly abandon it at the first sign of inaccuracy.
This means you will behave as if the prediction is true long past the point
where it has obviously (to anyone else) failed. And that is worse than
useless.

It's far better to make contingency plans. This means considering
everything you can think of that _could_ happen, and working out what to do
in each case, or at least which direction to begin acting. When you've done
this, you no longer have to predict what IS going to happen, because no
matter what happens, you're covered.

Carried to a fine level of detail in time and space, contingency planning
becomes control. If the cursor goes a little left, move the handle a little
right, and so on for all degrees and directions of possible errors. Now you
no longer have to predict what errors will happen, because for each amount
and direction of error you know what output to produce. When you boil the
contingency plans down to their most efficient form, you have an output
function and a control system. No prediction needed.

Prediction can be useful and is sometimes essential. But in the Big Picture
of human behavior, it is much less important than simply having purposes
and dealing with errors as they come up.

Best,

Bill P.

[from Jeff Vancouver 980212.09:10 EST]

[From Bill Powers (980212.0403 MST)]

That is because when we use regular disturbances, there's always someone
who says "Oh, the person is predicting the disturbance." So we use random
disturbances to show that the behavior doesn't depend on prediction.

Exactly what I was saying. To make one, clearly important, argument you
have constructed a certain type of study. But it is only one argument in
the larger question of how complex cybernetically constructed systems
operate. Other arguments require constructing other types of studies. You
know this: the TEST is a different type of study.

In fact, a model constructed to fit behavior with a random disturbance
acting also predicts behavior just as well when the disturbance is a
regular sine-wave, without any change in parameters. People don't control
any better when the disturbance is regular, despite what theoreticians who
never do experiments may tell you. You might, with a bit of imagination,
find a slight improvement of performance with a slow regular sine-wave
disturbance. But not enough to get excited about.

I never suggested they would (well never is a long time, so don't hold me
to it). But the condition to which you refer is one where information
about the current state is available and the system can act to correct the
error that occurs at the time (i.e., has the requisite variety and is fast
enough).

Each point was supposed to be
understood as starting with a potentially (and probably) invalid assumption
about the human ability to predict.

Prediction can be useful and is sometimes essential. But in the Big Picture
of human behavior, it is much less important than simply having purposes
and dealing with errors as they come up.

I am confused by these two statements. They seem to contradict. You seem
to be saying (see paragraph below) that humans do predict, but that they
should not. Thus, they have the capacity, but it is a foolish human that
uses it.

That depends. I think that some people, for psychological reasons, are very
concerned about predicting what's going to happen, so they can make
inflexible plans and stick to them no matter what happens. This gets
countless business managers in trouble, not to mention people who are
responsible only for their own welfare.

I agree that this is an issue. But PCT cannot understand it if it does not
acknowledge its existence. It seems we are getting into the area of
prescription and out of the area of description.

It's far better to make contingency plans. This means considering
everything you can think of that _could_ happen, and working out what to do
in each case, or at least which direction to begin acting. When you've done
this, you no longer have to predict what IS going to happen, because no
matter what happens, you're covered.

I agree with you on this prescriptive piece of advice. Taking it even
further, I would say that the planning should focus on the kinds of
information that need to be collected so that on-line control can be
achieved. Hence, these views of human behavior are not contradictory, but
complementary. Unfortunately, due to your first point, camps have formed
and conflicts have ensued. This is the stuff of Kent McClelland's work.

Carried to a fine level of detail in time and space, contingency planning
becomes control. If the cursor goes a little left, move the handle a little
right, and so on for all degrees and directions of possible errors. Now you
no longer have to predict what errors will happen, because for each amount
and direction of error you know what output to produce. When you boil the
contingency plans down to their most efficient form, you have an output
function and a control system. No prediction needed.

If it takes a heavy, expensive piece of equipment to move the handle a
little to the right, and you anticipate that you might need to move the
handle to the right, you might commission the heavy piece of equipment.
Without that prediction, you have no control.

Later,

Sincerely,

Jeff

[From Bill Powers (980212.0820 MST)]

Martin Taylor 980211 17:30 --

The missing factor here is the relationship between salt concentration and
whatever it is that corresponds to the salt-sensation.

What is "missing" about the observation (about the subject) that the subject
reports that s/he can detect a salt concentration difference of x% at a
concentration of y parts per million but not at (x/2)%?

What is missing is the actual proximal variable that is affected by various
substances and that is sensed as saltiness. This variable might, in fact,
be a function of several environmental variables, any one of which can
give rise to the same perceptual signal. You can state the threshold in
terms of the distal variable, but this doesn't tell you what the threshold
of the proximal variable is. And it doesn't tell you the threshold for
reporting the presence of a neural perceptual signal.

What possible relevance could you attach to the fact that magnesium
chloride might have been used in a similar experiment, or, for that matter,
that a small electrical current in the tongue can also give a salty taste
sensation?

The relevance is that the person reports the _same_ experience even though
_different_ substances are involved. This says that there is, in fact, no
receptor for salt per se (as NaCl). Whatever is being detected is something
that is affected by various salts, or as you say even by an electric
current. It is possible that the threshold for the actual substance is
zero, but that the effect of salt on it does not begin until the salt
concentration has risen to x%.

I don't know what all the fuss is about, here. Obviously we can't measure
the perceptual signal directly; we can only infer it on the basis of some
model of a perceptual function. What you say about the perceptual signal
depends on what you assume about the perceptual function, and there's no
way to prove you're right through any strictly behavioral experiment. Do we
have some investment in measuring thresholds of detection (or at least of
reporting)? I don't see how it matters to PCT.

Best,

Bill P.

[From Bill Powers (980212.0847 MST)]

Jeff Vancouver 980212.09:10 EST--

Prediction can be useful and is sometimes essential. But in the Big Picture
of human behavior, it is much less important than simply having purposes
and dealing with errors as they come up.

I am confused by these two statements. They seem to contradict. You seem
to be saying (see paragraph below) that humans do predict, but that they
should not. Thus, they have the capacity, but it is a foolish human that
uses it.

What I'm saying is that people predict a lot less than they are imagined to,
and most of the time simply control without predicting.

I agree that this is an issue. But PCT cannot understand it if it does not
acknowledge its existence. It seems we are getting into the area of
prescription and out of the area of description.

I've never denied that people _can_ predict, or that sometimes they _do_
predict, or that sometimes prediction is the best way to go. But the
capacity to predict is already in PCT in the form of logical manipulations
(level 9), so nothing needs to be added to the model to encompass
prediction (except more detail!). In a given situation, one person might
try to generate a prediction while the person next to him doesn't. That's
only a matter of how you've learned to deal with situations -- there's
nothing fundamental about human nature involved. I wouldn't include
prediction as a feature of a model of behavioral organization, for the same
reason I wouldn't include praying or speaking French. Nothing that's
optional belongs in a basic model of the brain. PCT isn't about specific
behaviors people learn to do; it's about how they can do any behaviors at all.

It's far better to make contingency plans. This means considering
everything you can think of that _could_ happen, and working out what to do
in each case, or at least which direction to begin acting. When you've done
this, you no longer have to predict what IS going to happen, because no
matter what happens, you're covered.

If it takes a heavy, expensive piece of equipment to move the handle a
little to the right, and you anticipate that you might need to move the
handle to the right, you might commission the heavy piece of equipment.
Without that prediction, you have no control.

Right. That's good practical advice -- but it has nothing to do with a
model of the brain.

Best,

Bill P.

[from Jeff Vancouver 980212.14:35 EST]

[From Bill Powers (980212.0847 MST)]

I've never denied that people _can_ predict, or that sometimes they _do_
predict, or that sometimes prediction is the best way to go. But the
capacity to predict is already in PCT in the form of logical manipulations
(level 9), so nothing needs to be added to the model to encompass
prediction (except more detail!).

Excellent! I am working on the detail.

Later,

Sincerely,

Jeff

[From Bill Powers (920718.1600)]

Back again after 5 days. Gave a talk (part of a panel) to the International
Society for Systems Science, called "Information: a matter of perception."
It was well received; a couple of people asked for more info on the CSG.
I'll post the text in a day or so. Mary and I camped one night on the way
there and another on the way back, and saw a lot of the interior of
Colorado including some prepossessing dirt roads. Fun.

···

----------------------------------------------------------------
Naturally, there was 100K of mail when I got back. I'm not going to answer
it all directly, particularly as the discussions that went on were pretty
fruitful. I'll just focus on what for me are the biggest error signals.
------------------------------------------------------------------------
As I had figured, there are some dissenters from my concept of the
hierarchy, and my discussion of implicit versus explicit functions turned
up some more.

Bruce Nevin:

I take it you mean orders of control systems (not to be confused with
levels of control within one hierarchical control system).

I meant levels of control within one hierarchy, whether it be the CNS
hierarchy or the biochemical one, or whether (as Martin Taylor proposes)
one considers the whole thing just one big hierarchy (I don't).

Suppose you have a set of control systems of the same level within a
hierarchy. These systems will be controlling for specific levels of
specific perceptions. If there are no higher levels, the reference signals
can only be random or fixed; there is no systematic means of coordinating the
independent controlled variables.

Let's suppose that these controlled variables are vector forces being
generated by a set of brainstem or cerebellar control systems that control
approximately along the axes of the mechanical degrees of freedom that are
available. Martin's comments suggest that these systems would implicitly
produce a vector in this space. I agree, they do. In fact, ANY combination
of reference signals for these systems will result in a vector force. All
possible resultant vectors are IMPLICIT in this set of n component vectors.

However, neither the individual systems nor the set of all systems at the
same level control EXPLICITLY for any specific vector in this space. In
order to produce control of a specific vector (such as "twist-and-push"),
some higher-level system must perceive Av1 + Bv2 + ... + Zvn, where the v's
are the individual vectors. The reference signal for that system then
specifies that the component of the total vector in that direction be
maintained at a given level. Now the lower-level world is represented, in
this one higher-level system, as a component of force in a specific
direction. There is a neural signal that represents the magnitude of this
force; the direction is set by the perceptual weightings. No other
component of the force is controlled. The actual multidimensional force can
vary in many ways, but this control system will see to it that the
magnitude of the component in one direction matches the reference signal
that this new system receives (whether from a higher system, from a random
process, or from a fixed property of a "floating" comparator).
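To make the arithmetic concrete, here is a sketch (Python; the weights, gains,
and the idealization that each lower-level system makes its force equal its
reference are all invented for illustration). The higher-level system controls
only the projection of the lower-level forces onto the direction defined by
its perceptual weightings; the orthogonal components are left free to vary:

import numpy as np

rng = np.random.default_rng(1)
n = 6                                # lower-level force systems, one per degree of freedom
weights = rng.standard_normal(n)
weights /= np.linalg.norm(weights)   # the "direction" defined by the perceptual weightings

reference = 4.0                      # desired amount of force along that direction
gain, slowing, dt = 50.0, 5.0, 0.01
lower_refs = np.zeros(n)             # reference signals sent to the lower-level systems
output = 0.0

for _ in range(2000):
    # Idealization: each lower-level system makes its force equal its reference,
    # plus some drift it is not asked to control.
    forces = lower_refs + 0.3 * rng.standard_normal(n)
    perception = float(weights @ forces)   # higher-level perceptual signal (one number)
    error = reference - perception
    output += dt * (gain * error - output) / slowing
    lower_refs = weights * output          # output distributed as lower-level references

print("controlled component:", round(float(weights @ forces), 2), "(reference was 4.0)")
print("uncontrolled part   :", np.round(forces - float(weights @ forces) * weights, 2))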

When one thinks of a specific higher-level variable, it's easy to see that
this variable is implicit in a set of lower-level variables. But it's also
easy to miss the fact that SO ARE ALL OTHER POSSIBLE HIGHER-LEVEL VARIABLES
OF THE SAME HIGHER TYPE. Without a specific higher-level control system to
define a projected direction and to control for the amount of a perception
in that direction, there is no reason to think that the lower-level
variables will spontaneously take on just the magnitudes needed to produce
the state of the higher variable you have in mind.
------------------------------------------------------------
When I said in BCP that higher levels in the CNS hierarchy are physically
distinct from lower levels, I said this not for any abstract reasons or
principles, but because it seems that the CNS is physically connected in
exactly this way. The lowest level of behavioral control consists of the
stretch and tendon "reflexes," with reference signals arriving via the
alpha and gamma efferent signals. This level of spinal-cord control systems
forms a package, fully functional without the remainder of the CNS. What it
does is make sensed effort depend reliably on certain reference signals,
with the internal connections serving also to create dynamic stability. It
doesn't matter what supplies the reference signals. The first level of
control is a physical entity.

The second level of control receives many sensory signals including copies
of at least the tendon signals and possibly the stretch signals as well
(the bifurcation of the dorsal roots). It perceives, through functions
embodied in the sensory nuclei of the brain stem, variables that are
functions of many of the signals arriving from the first level, both those
under control and those not under control at the first level. Its outputs,
which come from the motor nuclei of the brain stem, are identically the
alpha and gamma reference signals reaching the first level of systems. Thus
the second level is physically distinct from the first level, and acts
exclusively through setting reference signals for the first level.

The third level is found in the thalamic regions, the midbrain. The sensory
nuclei here receive signals coming from the brainstem sensory nuclei; the
motor nuclei around the thalamus send their outputs to the motor nuclei of
the brainstem (where comparison takes place -- at all levels, there are
"collaterals" that carry sensory information into the motor nuclei, and they
synapse with a sign opposite to that of the signals entering those nuclei
from higher systems; comparators of the second level are physically located
in the motor nuclei).

All of these first three levels of systems perceive and control variables
that seem consistent with my definitions of the first three levels in the
HPCT model. In addition, the perception and control of higher levels, as I
have defined them, seems consistent with the kinds of functions that have
been found at higher and higher layers in the physical organization of the
brain. In every case, moving to a higher level of control in the hierarchy
seems to go with moving to new collections of neurons distinct from those
concerned with lower levels of control.

This basic progression isn't quite exact, because the second and third
levels appear to be repeated in the motor cortex and in the cerebellum. If
you count synapses, however, the systems arrange themselves physically into
levels even though a given level may have components in the brainstem, the
motor cortex, and the cerebellum. A signal that passes through two
intermediate nuclei, wherever it ends up, is involved in the perception and
control of configurations, and so on.

While I can't prove this general theorem, therefore, I think there is
excellent reason, based on neuroanatomy, to say that different levels in
the human hierarchy correspond to physically distinct neural functions. Of
course without the constraints under which I worked, it's possible to
define functions in such a way that this wouldn't appear to be true. One
can invent all sorts of abstract hierarchies with different internal
connectivities, with levels representing abstract concepts. Most such
inventions, if not constrained to correspond to the organization of real
brains and the organization of real behavior, would show no relationship to
neuroanatomy and would suggest no intuitively-pleasing ways of parsing
experience. If you simply look at hierarchies as mathematical constructs,
anything becomes possible -- but if mathematics is the only constraint, the
chances of describing the actual human hierarchy of organization are, to my
mind, negligible.
----------------------------------------------------------------------
Allan Randall comments:

There are two ways one can talk about different "levels":
(1) Conceptual: perceived levels.
(2) Physical (architectural): perceiving levels.

In (1) the "levels" are not actually in the control system under
discussion, but are in the type (2) perceiving levels in the mind of >the

scientist building the model. The scientist is, hopefully, >controlling for
these perceptions to square with reality (or to get him >grant money,
whatever). Type (1) perceived levels are IMPLICIT in the >control system
under study, while type (2) are EXPLICIT.

What I have attempted to do with my proposed hierarchy is to make (1) and
(2) the same thing. In the course of developing the levels, I tried to
catch myself using perceptual capacities that were not yet in the model. It
took me about 35 years of observation to arrive at 11 levels. I think that
you will find in this model all the kinds of perceptual functions that a
scientist uses in conceptualizing levels (even if the conceptions aren't
the levels I define, or even if the scientist doesn't believe there are any
levels at all). If my project has succeeded, you will find the same things
in the model that you find in the observer of the model.

Observers and theoreticians who do not use my levels as a description of
brain organization nevertheless take the same perceptual elements that I
propose for granted in their arguments. They all speak of objects
undergoing transitions and making patterns we call events. They all explore
relationships among these things. They all categorize. They all consider
sequence or ordering significant. They all use some form of rule-driven
logic in developing their symbolic arguments. They all derive general
principles that guide their specific programs of reasoning. They all have
coherent system concepts that give form to the collections of principles,
programs, and so on that they treat in their investigations.

My claim is that these types of perceptions, and the systems that control
in these terms, represent the real basic organization of the human brain.
The contents of brain activity at these levels -- for example, specific
kinds of taxonomies, specific mathematical analyses or verbal arguments,
specific principles -- do not represent basic organization. They are simply
examples of what this kind of organization can produce. Their main
significance is in their existence, not in what they appear to say. They
are themselves evidence about how the brain is organized -- any brain,
including the brain of a theoretician.

To me, the task of understanding how human beings work depends on putting
direct experience, anatomical and functional knowledge, and observations of
behavior together into a single self-consistent model that looks the same
from any of these viewpoints. It even depends on producing mathematical and
functional analyses -- but not just any old analyses. Whatever is analyzed,
it must be consistent with the model in all respects, not just internally
consistent. For example, many people are analyzing perception as if it
consisted of chaotic oscillators. I don't object to exploring chaotic
oscillators for the sake of their own fascination as phenomena, but how is
this concept consistent with the way the world looks to direct experience,
with simple facts of neural function, with the architecture of the brain,
with the kinds of control processes in which we see people engaged?

I have tried to develop a model that says the same things about all
phenomena no matter what point of view you take toward them. There's a lot
left out of this model, but as far as it goes, I believe that it adheres to
these principles.

You say, "In a distributed connectionist system, a single node can
participate in the (non-localised) representation of more than one concept,
depending on the global dynamical activation of the network."

But what good does it do the brain to have the theoretician know this? I've
heard this view before, and have always wondered how it connects to the
fact that we perceive specific things separately -- what is doing that
perceiving? I think this approach is the ultimate in implicit functions. As
long as one doesn't require the brain actually to do something, such as
reach out and press a specific button of a specific color, it seems to hang
together. But I predict that this concept of perception will prove
completely useless when it comes to trying to get such a system to act, to
control specific variables relative to specific reference states. When a
distributed perceptron has to produce an actual output, something has to
recognize the distributed state of the system, and conclude that it is one
state rather than a different one. All that's accomplished by this idea is
to postpone the day when a specific perceiver has to be designed and built,
saying "this is an 'A'.

I guess talk of explicit hierarchies just strikes me as wrong.

It's wrong in relation to what some people believe about distributed
functions. It's not wrong in relation to neuroanatomy.
-----------------------------------------------------------------------
Best to all,

Bill (unbeliever) P.


[From Bill Powers (930831.1410 MDT)]

Ft. Lewis forgot how to connect to a modem for about four days.
I'm pleased if not alarmed to see how nicely the net went without
me. I sent off a few suspended posts which I think didn't get
through, but everything I had to say has already been said well.

Michael Fehling (930829) --

I'll restate how one demonstrates that sensations, beliefs,
desires, intentions, and actions are distinct functional
categories. For example, one can say 'I perceive p' and yet,
freely say either (a) 'But I don't believe p' or (b) 'And I do
believe p.' This demonstration simply refutes the claim that
_belief_about_p_ and _perception_of_p are the same. I am
employing here a very basic method of rational analysis that
your remarks seem to pass over--if one can find a feature
present in one case and absent in another, then the two cases
are distinct (up to that feature).

This puts into words one basis on which I tried to develop
definitions of levels of control. It's essentially what I tried
to say about belief in p, too. One level is concerned with the
perception; a higher level is concerned with something ABOUT that
perception, or to which that perception contributes, or of which
that perception is an attribute. The trick is to separate ideas
into foreground and background, isn't it?

One may use the same method to distinguish sensation from
perception. This time, I will borrow your own distinctions to
do the work. In an earlier post you discussed how an input, i,
is transformed into a percept, p, by a function, f--i.e., p =
f(i). The sensation is the input to f. It better be "in" the
organism, else f couldn't operate on it. And, unless f(i) = i,
it must be that i is different from p. So, sensations are
distinct from perceptions.

Yes, indeed. Do this 11 times, starting with raw sensory
intensities, and you'll have my hierarchy: intensities,
sensations, configurations, transitions, events, relationships,
categories, sequences/ordering, programs, principles, and system
concepts. Or something similar. A hierarchy of "PCT-perceptions".

This knocks out recursion, too: configurations of configurations,
sequences of sequences, etc. If p = f(i), then f(p) = f(f(i))
isn't the same thing at all and probably won't even make sense.
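A toy way to see the difference (Python; the functions are made up and stand
for nothing real): each level applies a different kind of function to the
outputs of the level below, so the hierarchy is a composition of distinct
functions, not one function applied over and over to its own output.

# Toy only: three "levels," each a different kind of function.
intensity     = lambda i: i                     # level 1: raw magnitude
sensation     = lambda x: x ** 0.5              # level 2: a compressive function of intensity
configuration = lambda xs: sum(xs) / len(xs)    # level 3: a pattern over several sensations

inputs = [4.0, 9.0, 16.0]
p = configuration([sensation(intensity(i)) for i in inputs])
print(p)    # 3.0 -- a level-3 perception built out of level-2 perceptions

# Recursion would be something else entirely: sensation(sensation(4.0)) is the
# square root of a square root, not a new level of perception.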

"Propositional attitudes" all have something in common: they are
perceived at the levels where propositions are perceived. They
are basically sequences of words/symbols in a logical or program-
like structure of higher order. Two levels out of the 11. There's
only so much behavior you can account for in words.

These various types of propositional attitudes define
perceptual categories whose distinct properties and functions
are evidently critical in fully explaining the structure and
function of perceptual control systems.

I wouldn't say "define" so much as "refer to." The perceptual
categories are created (or so I propose) directly out of lower-
level perceptions, and are labelled at that level with names or
symbols (other perceptions). I've always had difficulties in
finding a way to think about the levels at which we form
statements (sequence level) and give them logical values and
manipulate them by logical rules (program level), because these
are the same levels we have to use in thinking propositionally
about propositions and logic. I don't believe in paradoxes or
recursion, so there's clearly something missing here. I have long
suspected that it is an underlying level of perception to which
the words merely refer, and we keep getting tangled up between
the words and that which they are supposed to mean.

When you get past those two levels, to the principle level, it
gets easier again, because principles are clearly direct
perceptions that are only exemplified by the words we use to
illustrate them. "Safety first" is a rule that exemplifies a
principle, but is not the principle. A mathematician once told me
that he could describe a proof to me but he couldn't show it to
me, because a proof is a perception, not a set of symbols or
operations. If you don't perceive the principle, the symbols and
operations won't help. And system concepts are system concepts...

I suggest that these propositional attitudes represent things we
can do WITH these higher-level systems, but themselves don't
capture what those systems do. We can create propositions and
reason about them ad infinitum; no one string of propositions or
manipulations explains the process by which we create it. You
can't explain a radio by demonstrating the music, news, and
weather that the radio is producing.

I think there's some confusion in the category-sequence-program
region of my definitions; maybe you can sort it out.

... challenges to this Popperian view by Feyerabend and others
as an antidote to the assumption that falsifiability as a
cornerstone methodological principle stands unchallenged.

I take a normative view here: if falsifiability isn't in fact the
essential criterion, it ought to be. If you can't tell when a
theory is wrong, you can't tell when it's right, either. From my
very spotty reading in such areas, I got the impression that many
philosophers were really trying to excuse the poor showing of
psychological and social theories, as if their use by scientists
made them good theories no matter how sloppy they were and how
poorly they predicted. I'm sure that falsifiability doesn't stand
unchallenged, but I'm ready and willing to debate the
challengers.

How would you compare/contrast PCT with the following major
psychological theories?:

       1. William James' ideomotor theory of voluntary
           control
       2. Edward Chace Tolman's S-O-R version of behaviorism
       3. J. J. Gibson's ecological realism

First, what do I get if I pass?

William James got as close as you can get to inventing PCT
without actually writing down the equations. All he lacked was
control theory. We PCTers cite James regularly: "The hallmark of
a living organism is the ability to achieve regular ends by
variable means," which is probably a garble but is essentially
what he said. He also said that we don't intend actions or
objective events, but perceptions. PCT all the way.

All I remember about Tolman is that aside from being Don
Campbell's mentor (BCP is dedicated to, among others, Don
Campbell), he promoted something called "purposive behaviorism."
My opinion, if I remember right, was that he was trying to have
his cake and eat it, too.

J.J. Gibson seems to be a complicated case; others on CSGnet tell
me he is really saying the same things we are. My own reading
comes out a little different: I see Gibson as taking one kind of
perception for granted in explaining other kinds. Of course we
all have to do this and I do it myself, but my question has
always been whether Gibson KNEW he was doing it. Some Gibsonians
seem to cite him as a way of justifying some sort of realism or
objectivity. But others say "Oh, no, he understood that it's all
perception." I can't fit the notion of "affordances" to the
latter.

While coming to grips with PCT, I've become quite curious about
your knowledge of, and reaction to, these approaches to
psychology. Have they had any influence (positive or negative)
on your development of PCT?

My main influences were Wiener and Ashby, although I had read
(with skepticism) a good bit of psychology. I'm basically a
physicist-engineer, although still a Mister, with the usual
disdain for theories that don't work all the time. Psychology,
you might say, had a distinctly negative influence on me, in that
at the age of 26 or so I was sure I could do better than THAT.
Now, of course, I have different kinds of ambitions: I think WE
can do better than that.

I think you're definitely beginning to make noises like a PCTer.

···

---------------------------------------------------------------
Bruce Nevin (various posts) --

Bruce, I have taken great pleasure from all of your recent posts.
As Dag says, "No error, no comment," but I really should have
been at least sending a :-).
-------------------------------------------------------------
Martin Taylor (various posts) --

ditto.
--------------------------------------------------------------

Avery Andrews (more various posts) --

More ditto.
-------------------------------------------------------------
Who else? The hell with it. You're all great.
-------------------------------------------------------------
Best to all,

Bill P.

[Michael Fehling 930831 2:52 PM]

In re Bill Powers 930831.1410 MDT --

Bill,

Thanks for still more thoughtful comments. I am particularly intrigued with
(to put it in my terms) your insistence on distinguishing logical
_description_ of the agent from the _functions_ that that logic describes.
E.g., you say

        "I suggest that these propositional attitudes represent things we
        can do WITH these higher-level systems, but themselves don't
        capture what those systems do. We can create propositions and
        reason about them ad infinitum; no one string of propositions or
        manipulations explains the process by which we create it. You
        can't explain a radio by demonstrating the music, news, and
        weather that the radio is producing."

I agree. I have long fought with the "logicists" in AI who claim that the
functions of agents simply _are_ logical operations on representations in
logical form. I subscribe to an alternate view; namely, that we can use
logical structures and operations to "refer to" (as you say) aspects of the
cognitive/conative/affective mechanisms, but that such logical descriptions
are not to be confused with the mechanisms themselves. In AI this latter
stance is often called "proceduralism." My own form of proceduralism begins
with the observation that, whatever else those mechanisms might be, they must
account for how the organism manages its closed-loop coupling with its
perceived environment. However, I am very interested in whether/how/why it is
that logical descriptions actually help us to understand this control system.
So, if I can help with the "category-sequence-program" region of PCT, I'll be
quite happy to do so. First, however, I must gain more fluency with your theory.

  Added thanks for the remarks about James, Tolman, and Gibson. I intended no
test. As it happens I am doing some research with Bernard Baars that builds
upon James' ideomotor theory, precisely because it seeks to "close the loop"
between organism and environment. I mention Tolman because what most
call "cognitive science" today seems to me to be a conceptual and
methodological rehash of Tolman's invention of "intervening variables." (I've
recently written about this in a paper in the journal Artificial Intelligence
in which I comment on the work of the late Allen Newell.) Finally, I suggest
that Gibson's theory focuses nicely on the relationship between percepts and
environmental states that might (or might not) be affected by control of these
percepts. I'll think about how "affordances" might fit into the PCT picture.

  Wiener and Ashby are too often overlooked by students of intelligent
systems. I was lucky to have done my studies as a student of Ross Ashby's in the
biophysics program at U. of Illinois. (Ashby was also in EE at UofI.) I
still urge my students to read Ashby's "Intro to Cybernetics" and "Design for
a Brain" along with Wiener's "Cybernetics" and "God and Golem," among other
such references. I'll also confess to having been influenced by Heinz von
Foerster, who was also on the biophysics faculty at that time. Wiener's and
Ashby's ideas offer an instructive counterpoint to the far more prevalent
proclamations of mainstream AI and cognitive science.

- michael -

[Martin Taylor 930903 10:40] The 54th anniversary of the (UK) start of WWII
(Bill Powers 930831.1410 MDT)

Like Bill, I've been cut off from the Internet for three days.

"No error, no comment," Eh!

I take a normative view here: if falsifiability isn't in fact the
essential criterion, it ought to be. If you can't tell when a
theory is wrong, you can't tell when it's right, either.
...
I'm sure that falsifiability doesn't stand
unchallenged, but I'm ready and willing to debate the
challengers.

Hold the debate until October, then, because I'm a challenger, and I'm gone
until then.

My position is that ALL theories that we can specify in a public way are
false in detail. The world is more complex than we can specify in a finite
way. Any theory we can state will have a range over which it claims some
validity, and the better the theory, the closer it will predict within
that range of claim. But it will not predict with infinite precision
even in its range of claim, so it is FALSE, there. Also it will not predict
at all outside its range of claim, except by chance, so it is FALSE there.
It is, a priori, false, by virtue of having been stated.

So the question of the validity of theories cannot be decided on the basis
of falsifiability. It must be decided on the relative range and accuracy
of claim, as compared to other theories that have so far been considered.
The only time a theory can definitively be falsified is if it states
that an event MUST happen and the event does not (equivalently, that it
MUST NOT happen and it does). I'd change Bill's second quoted sentence to
"If you can't tell when and by how much a theory is wrong, you can't tell
when it will be nearly enough right to be useful, either."

I got the impression that many
philosophers were really trying to excuse the poor showing of
psychological and social theories, as if their use by scientists
made them good theories no matter how sloppy they were and how
poorly they predicted.

That's the right way to approach the evaluation of theories.

Continue debate in October, if you want.

Martin

PS. The CROWD demo arrived, works, and has intrigued a few people. I'll
try to use it in Holland.

[From Bill Powers (950925.1615 MDT)]

Back again, after 7 days off the net, with 40 posts to read (nothing
like Martin Taylor's 600+, but enough). Welcome to David Wolsk!

I'll just try to hit some highlights.

···

-----------------------------------------------------------------------
Jeff Vancouver (950919.0850) --

     Now a little more about the biologists. So you are saying (with
     appropriate reservation) that many biologists question the control
     process notion because of their inability to find a reference
     signal. Let me go out on a limb here and say that I am not
     necessarily surprised. I suspect that the reference signal will
     end up like the particle in physics.

Actually "finding" a reference signal -- locating the exact physical
variable that plays that role -- is probably beyond biologists or anyone
else at this time. What's more important is to be able to recognize one
when you see one. This, as Martin Taylor points out, is primarily a
matter of understanding how control works.

Consider the "satiety system" that Bruce Abbott brought up. An animal
eats until its stomach is full, then stops. If you understand how
control systems work, you will see this as a one-way system -- but not
the kind of system Bolles imagined. Bolles sees the stomach as just sort
of getting filled because of eating, with no particular control going on
until a state of fullness is reached. Then a control system kicks in to
prevent overfilling (Bruce presented much the same picture for a fly).

I would propose a one-way control system as working on the other side of
the fullness level: the reference level is for a specific amount of
fullness of the stomach, and if the stomach is not that full the
resulting error signal contributes to lower-level reference levels for
ingesting food. Eating stops when the stomach is full not because
satiety triggers off some new control system, but because the error in
the first control system is reduced to zero. No second control system is
required.

The fullness reference signal would be the output of another level of
control concerned with (perhaps) the level of nutrients in the
bloodstream. As soon as food starts entering the stomach, digestion
starts and nutrients begin entering the bloodstream. This reduces the
error in the nutrient control system, lowering the reference level for
stomach fullness. Eventually, as the nutrient concentration rises toward
its reference level, the actual stomach loading approaches the declining
reference level for fullness (the output of the nutrient control
system); the error signal goes to zero and the eating control systems
that fill the stomach receive zero reference signals, stopping the
eating. Digestion continues, continuing (at first) to raise the level of
circulating nutrients and further reducing the nutrient error and hence
the reference level for stomach fullness, perhaps even to zero.
Eventually the nutrient level begins to decline, the reference level for
stomach fullness rises again; when it exceeds the actual fullness,
eating commences again.
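To make the arrangement concrete, here is a minimal simulation sketch (Python;
all the constants are invented, and the smooth "digestion" term is only a
stand-in for real physiology). The point is the structure: the nutrient system
sets the fullness reference, both systems are one-way, and eating falls off as
the fullness error goes to zero, with no separate satiety device. With these
smooth toy dynamics the model eats hard at first, stops when the fullness
error reaches zero, and eventually settles to a low rate that just offsets the
metabolic drain; meal-like bouts would need more realistic gut and absorption
dynamics.

# All constants invented; only the structure matters.
nutrient, fullness = 0.2, 0.0
NUTRIENT_REF = 1.0
dt = 0.01
eating_log = []

for _ in range(6000):
    # Higher level (one-way): nutrient error sets the reference for stomach fullness.
    fullness_ref = max(0.0, 5.0 * (NUTRIENT_REF - nutrient))

    # Lower level (one-way): fullness error drives the eating systems.
    eating_rate = max(0.0, 2.0 * (fullness_ref - fullness))

    # Toy physiology: eating fills the stomach; digestion empties it and raises
    # circulating nutrients; metabolism drains nutrients at a constant rate.
    digestion = 0.3 * fullness
    fullness += dt * (eating_rate - digestion)
    nutrient += dt * (0.5 * digestion - 0.1)

    eating_log.append(eating_rate)

print("eating rate: start", round(eating_log[0], 2),
      "| minimum", round(min(eating_log), 2),
      "| settled", round(eating_log[-1], 2))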

I mention this simple model mainly to show that the biologists trying to
figure out the control processes haven't really tried many different
models to see if any of them might work better than the simple one they
started with. If you give up after trying one very simple-minded model,
you're not really justified in concluding that control models can't
handle the phenomenon.
-----------------------------------------------------------------------
Hans Blom (950919e) --

     Where is the reference signal in an ordinary thermostat? Can you
     point at it? Is it the wiper of that pot? Sure, you may be able to
     measure a voltage somewhere. But that voltage's value in itself is
     meaningless. In a thermostat of a slightly different brand, the
     corresponding voltage may be 1.35 Volts less or a factor of 23.7
     lower. Or whatever. It is the FUNCTION of the voltage that makes it
     a reference signal, the fact that it is compared with something
     else, which comparison is acted upon.

Well said. The important thing is to understand the _function_ of a
reference signal, as one element of a comparison. What really matters
about a reference signal (or condition or state) is that it specifies a
particular state to which another signal (variable, condition, state)
_is to be brought by the action of the system_. Just how this
specification is implemented in hardware is unimportant. The lack of
this concept is what kept life scientists from understanding the nature
of purposive or goal-directed behavior -- they thought that a goal was a
future state of something, so to speak of goals as directing anything
was to claim that the future could have an effect on present-time
processes. The concept of a reference signal brings the goal into
present time, where it can determine the outcome of actions without ever
having to work retroactively and in fact without bringing the future
into the discussion at all.
-----------------------------------------------------------------------
Bruce Abbott (950919.1235 EST) --

     Bolles starts by imagining a simple system in which a "gustatory
     stimulus," a sensory quality correlated with nourishment, turns on
     a consummatory mechanism, which produces eating.

This is obviously an S-R mechanism, which we can replace entirely by a
control mechanism. One reason that gustatory stimuli (and palatability
stimuli) have an effect on eating is that animals have to control not
only _how much_ they eat, but _what_ they eat. The fact that something
looks or feels like food does not automatically guarantee that it _is_
food. A perception of food is made up of many variables, and the more
variables that can be combined to produce the perception "food," the
more likely it is that eating a particular substance that matches a
reference level for foodness will be beneficial and not harmful.

Bolles is not imagining in enough dimensions. He is also thinking of
control in such black-and-white terms that it's no wonder he is
dissatisfied with his model:

         The first question is whether such a feeding system would be a
     regulatory system. The answer, curiously, does not seem to be so
     clear. The satiety mechanism does act something like a regulator in
     that it limits meal size. And certainly there is an obvious
     negative feedback loop that accomplishes stability. On the other
     hand, there is surely nothing in the system that is being held
     constant; there is no semblance of homeostasis.

You can see here that he's hung up on the idea that homeostasis implies
a fixed reference signal. He sees control as being strictly an on-off
mechanism. And he imagines only one level of control. I wouldn't be
satisfied with this model, either. What Bolles doesn't realize is that
the defects are not those of control theory, but of his own abilities as
a control system designer.

He goes on:

      The second question is whether such a primitive system could
     possibly serve the nutritional needs of a real animal. The answer
     appears to be no; the system is much too simple to work in the real
     world. There is nothing in the system that makes contact with
     motivation.

This is the old straw man, isn't it? Propose an inadequate model, then
criticize its inadequacies. If Bolles had any conception of hierarchical
control, he wouldn't have to worry about making "contact with
motivation."

      But the truth is that it is viable; it is, in fact, the system
     used by a great multitude of animals. The most familiar example is
     the fly (Dethier, 1976). . . . There are several aspects of the
     fly's situation that make such a simple feeding system workable.

Does he really know that this is true, or is he just making it up
because it sounds believable? The whole rationalization that follows
these sentences is based on very loose interpretations of what flies do,
followed by a rhetorical exposition based on nothing whatsoever. In
fact, flies might have very selective and competent control systems, but
organized in a way that Bolles would not be equipped to recognize.

You say

     This is the conception I talked about in my first post describing
     Bolles's view of regulation. If it is important to prevent a
     quantity from exceeding certain values, a mechanism will have
     evolved to limit its excursion.

But suppose that there is no such mechanism in the first place. Suppose
that "satiety" is just a made-up concept that has nothing to do with the
actual control processes. If in fact the appearance of satiety is
created simply by an error signal going to zero, you would then have to
propose that because this is important, a mechanism will have evolved to
do it this way instead of the other way. The argument from evolution is
completely empty, because if you happen to have guessed wrong about what
the mechanism is, the same argument will work just as well for any other
mechanism. As Bob Clark used to say, this way of invoking evolution is
just a ploy for "adding an air of verisimilitude to an otherwise bald
and unconvincing narrative."

     In the case of the fly, only the excursion of gut content above a
     certain volume is limited; excursions below a certain volume are
     passively limited by the fact that the fly is always on the move
     and, given an environment sufficiently rich in fly food, the fly
     will always initiate feeding in time to save itself.

But suppose the control is one-way in the sense that it tends to raise
the gut content up to a reference level rather than limiting excesses.
Then all the rhetoric designed to make the "limit" postulate seem
reasonable has to be redone so it makes the "control up to reference
level" postulate seem equally reasonable. And the whole explanation of
how the fly manages to stay alive even if it doesn't have a reference
for eating has to be junked, along with the added rhetoric intended to
show that the fly's reproductive strategy is sufficient to overcome the
disadvantages of being hatched without any appetite. The problem with
throwing postulates around like this is that you keep seeing
difficulties and patching them with more postulates, which lead to their
own difficulties, and so on without end.

     I really have no quarrel with most of Bolles's analysis. It comes
     down to this: in general one should not expect to find simple
     control systems directly regulating some fundamental quantity
     (e.g., energy balance, body weight) via the classic mechanism
     embodied in the standard diagram.

Well, I really do have a quarrel, as I've been trying to say. Every
single factual statement that Bolles makes needs to be thought about and
challenged. He says entirely too many things that are based on his own
imagination and his lack of experience with control systems. I'm not
going to give up on the simple model as easily as Bolles did.

     ... it is easy to conclude from Bolles's article that classical
     control principles simply do not apply to real organisms; that
     regulation is accomplished in other ways, usually more aptly
     described as equilibrium processes. What seems to be lacking in
     Bolles's exposition is any recognition that the analytic tools of
     control theory apply just as well to the kinds of systems he
     describes as they do to systems that physically parallel the
     classic diagram. Even the fly's little feeding mechanism is a
     perfectly competent control system that attempts to keep the
     perception of gut distension between certain limits despite the
     disturbances acting upon it, in almost exactly the way that a
     flush-toilet maintains its perception of water-level in the storage
     tank nearly constant. Yet for Bolles, only the satiety mechanism,
     acting to shut off feeding, qualifies as a "control device."

I'm glad to see that you haven't been entirely taken in by Bolles'
tricks of exposition. And I hope you realize that NOTHING Bolles said
comes from an analysis of real systems -- most of his "facts" are
imagined as needed to fit his conclusions.
-----------------------------------------------------------------------

Rick Marken (950919.1330) --
Hans Blom (950920) --

Thanks for your efforts in making the parallels between my model and
Hans' clearer. Bruce Abbott helped, too. The two models really are quite
close in basic organization, I think.

I hope we're not all getting down on our hands and knees and examining
the models with a microscope. What's important is that Hans' model and
mine both can keep the controlled variable at better than 99% of the
reference level at almost all times (with the continuous disturbance),
which makes them equivalent, performancewise, in my terms. They can both
control better than real people do in a tracking situation.

What I'm looking for are situations in which one model clearly controls
well (error only a couple of percent of the reference signal magnitude,
or less) and the other doesn't (significant and repeated errors of 50%
of the reference signal or so, or even better, complete failure of
control). If we have two models that control about equally well, as we
do, all that remains is to test them against real behavior and see if
there's any way to choose between them.

Both models, I'm sure, could be adjusted to control as well as, but not
better than, a real person would control the same variables given the
same disturbances we've used so far. So until there's some reason to
pick one over the other, we might as well keep them both around.

What I would like to do now, if everyone is agreed that the two models
do in fact control well against additive disturbances, is to see how
they perform with "parametric" disturbances of the environmental
feedback function -- that is, with variations in k. I propose to let k
vary between 1 and 5 in a way determined by one of our smoothed random
disturbances. I will set up an "E. coli" adaptive method for finding the
best output sensitivity factor. Hans can again use any method he likes
to determine the parameters for his model (either adjusting them by hand
or using an adaptive program of his choice).

To determine k, I suggest that we add another disturbance table scaled
to 1000, and make

k = 3 + 2*d2(t)/1000.0.

The reason I would like to do this test is that the PCT model should be
very "robust" for changes in k, but I think the adaptive model may have
problems adjusting it fast enough. On the other hand, Hans has done very
well in producing working models, so I will just wait to see how he
solves this problem (if you want to tackle it, Hans).
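As a sketch of the setup (Python, with invented loop parameters; neither the
E. coli adaptation nor Hans's model is shown), this is roughly what I have in
mind: a second smoothed table scaled to 1000 drives k between 1 and 5, and a
fixed-parameter control loop runs against the varying feedback function.

import numpy as np

def table(n, seed, scale, width=300):
    # A smoothed random "disturbance table" scaled so its extreme value is +/- scale.
    rng = np.random.default_rng(seed)
    kernel = np.hanning(width)
    raw = np.convolve(rng.standard_normal(n + width), kernel / kernel.sum(),
                      mode="same")[:n]
    return scale * raw / np.max(np.abs(raw))

n, dt = 6000, 1 / 60
d1 = table(n, seed=0, scale=20.0)        # ordinary additive disturbance
d2 = table(n, seed=1, scale=1000.0)      # second table, scaled to 1000, drives k
k = 3.0 + 2.0 * d2 / 1000.0              # k wanders between 1 and 5

gain, slowing, reference = 40.0, 4.0, 10.0
output, errors = 0.0, []
for i in range(n):
    cv = k[i] * output + d1[i]           # environmental feedback function with varying k
    error = reference - cv
    output += dt * (gain * error - output) / slowing
    errors.append(error)

print("RMS error with k varying between 1 and 5:",
      round(float(np.sqrt(np.mean(np.square(errors)))), 3))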
-----------------------------------------------------------------------
Hans Blom (950920b) --

     Bill, there was an error in your program. As I told you for MY
     program, the computation of u which is done at iteration i, is
     actually done for time i+1, and therefore needs to refer to r [i+1]
     rather than r [i]. The same thing is necessary in your program.
     After making this change in your program, you get better results
     (but not yet as good as my program's :-).

My model is strictly a present-time model; it doesn't know what's going
to happen in the next iteration, so I don't use any i+1 indices. But
thanks. I could probably also get somewhat better performance by adding
some derivative of the error signal to the error signal. But we're both
so close to perfect control that we're already doing better than a
person would do, so I don't think that the remaining differences are
important.

Incidentally, since you do use i+1, you should probably cut off the
iterations one before the end of the tables. You could get some very
large numbers on the last iteration which would make your model's
performance look slightly worse.

In your model, the first use of u is really the value of u from the
previous iteration. Don't know if this makes any difference. It's hard
to keep these subscripts straight.
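For anyone following along, the loop-bound point in generic form (this is not
Hans's actual code; in Pascal the out-of-range read just returns whatever
happens to be in memory, while a bounds-checked language would raise an error,
but the fix is the same):

# Generic loop-bound illustration, not the actual program:
r = [0.0] * 1000                 # reference table
d = [0.0] * 1000                 # disturbance table

for i in range(len(r) - 1):      # stop one iteration before the end of the tables...
    r_next = r[i + 1]            # ...because the computation for time i+1 needs r[i+1]
    # ... compute u for time i+1 from r_next, d[i], and the model state ...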
-----------------------------------------------------------------------
Martin Taylor (950920 11:30) --

Nice addition to the literature on conflict theory. The output conflict
does indeed exist. In the middle of the hierarchy, of course, it would
look like conflicting reference signals.
-----------------------------------------------------------------------

From Brian D'Agostino (950920) --

     Well, I guess I was naive to think that the CSG could learn from
     experience regarding this particular problem. My original
     complaint was that the CSG stereotyped my work without even reading
     it. Now, here you are doing exactly the same thing again, as if I
     had said nothing whatsoever. (It is clear that you didn't read it,
     because otherwise you would not attribute to me the foolish
     proposition that self image should explain all or even most of
     policy preference variance and then proceed to criticize my
     statistical model because it does not have an R-square larger than
     .9).

You're incorrect in assuming that I didn't read your paper. Perhaps I
said something that implied I thought you claimed that self-image should
explain policy preference variance; in any case, I just looked at my post
and I can't find any statement about explaining policy preference in
terms of self-image. Did I really say that? You're going to have to point
out where, because I can't find it. Sorry if I'm being dense.

     When you critique something you _haven't_ read, like my research or
     the literature I reviewed in my article, you usually blow hot air
     and make a contribution to intolerance and the cult of William T.
     Powers. (Who else but an omniscient guru can know the errors in
     something he hasn't even read?)

But I have read your paper. I didn't have much time to comment, as I was
leaving the next morning for Connecticut, so I simply mentioned that
there isn't much support here for the statistical methods you used, such
as factor analysis (if that's the right term).

     In your posted message, your stereotype of my research was linked
     with the patronizing belief that I am a passive product of my
     professional training and that in order to succeed in that training
     and be awarded a Ph.D. I had to produce work that was incompatible
     with control theory.

I wasn't referring to your use of control theory, but to your use of
adjective lists, Q-sorts, and various statistical ways of getting meaning
out of data. These are, as I understand them, normal methods which your
peers and judges would find perfectly acceptable (even if what you used
them for was not). I assumed that you had to use some such methods to
produce work that would be acceptable to your committee. If you had used
methods more like what I would use -- disturbing variables that are
supposedly under control and looking for countermeasures, for example --
I assumed that those who judged your work would have found the methods
very strange and would have given you even more trouble than they did.
That's what I was referring to. And I _thought_ I was showing some
understanding of your position in this regard, not being condescending.

     This really hurts, Bill, because I have paid more of a price than
     you can possibly know for thinking my own thoughts and refusing to
     play the academic game. Belatedly getting my Ph.D. and getting my
     research published in a leading academic journal is a personal
     triumph for me precisely because I did not sell my soul in order to
     achieve these results.

And we all admire you for that, the more so because so many of us have
had similar experiences with the academic game.

     Of course, you are free to conclude that my admittedly exploratory
     and preliminary application of control theory to the psychology of
     militarism is really an unacceptable bastardization of control
     theory. But you can only legitimately conclude this after you have
     read my work. This raises a far more serious problem, however. If
     you are willing to reject something even before you have read it,
     which you have now amply demonstrated, what good will it do for you
     to read it?

I can see that it's going to be hard to convince you that I have read
your paper, and more than once. The problem is that you're seeing
criticisms in the wrong places. Where I get hung up is not on the parts
where you talk about control theory, but on the parts where you try to find
out about "hawks" and "doves" (themselves stereotypes) by giving people
lists of adjectives and asking them to rate how those adjectives apply
to them. I don't consider that a very good way to find out about human
nature, no matter what theory you're testing. That's just my opinion,
but it is in fact my opinion.

     I will probably rest my case here, because I know a heresy trial
     when I see one, and I know that nothing I can say in this context
     can make a difference.

I repeat, I am the heretic, not you. I don't believe that Q-sorts and
adjective lists and factor analysis tell us anything useful about human
nature. When I said something like that at the CSG meeting, you became
infuriated with me and defended the use of these methods as something
any well-educated person would know about. So I stopped saying those
things. But I didn't stop thinking them.
-----------------------------------------------------------------------

Best to all,

Bill P.