[From Bill Powers (940722.0800 MDT)]
Bill Leach (940720.2208) --
What still bothers me a little is that in terms of the control loop, it
seems that the perception of the absolute cursor position is incidental
to the control action.
Think a little more physically about what is perceived. Is it really
possible for sensory nerves to register "relative position?" What lies
on the retina are images of objects in specific places. The distance
between the objects lies on retinal cells where there are no objects. In
other words, distance can't actually be perceived at the level of the
retina, or the signals that arise from the retina.
This is what hierarchical perception is about. Distance can, obviously,
be perceived. But before it can be perceived, there must be systems that
say "there is an object here on the retina, and there is an object there
on the retina (or the map)." Given signals representing the positions of
the objects, it is then possible for a higher-level system to subtract
one position from the other and obtain a signal that varies with the
distance between the two objects. Now as the two objects move on the
retina, they can move so that this derived distance signal remains
constant, or so that it varies. If the control system can sense and
affect the position of one object, it can vary this position so the
variable at the next level, distance, remains constant even if the other
position-signal changes. Any desired distance, including zero, can be
maintained this way.
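The two-level arrangement just described can be sketched in a few lines of code. This is a toy illustration with gains and names I have assumed myself, not anything from the post: a lower loop controls the absolute position of one object, and a higher loop controls the perceived distance by continuously varying the lower loop's reference signal while the other object drifts away.

```python
# Toy two-level control sketch (all parameter values are illustrative
# assumptions). The lower loop controls the absolute position of object
# A; the higher loop perceives the distance B - A and controls it by
# adjusting the lower loop's reference signal, even while B drifts.

def simulate(steps=4000, dt=0.005):
    pos_a = 0.0            # position we can affect
    dist_ref = 2.0         # higher-level reference: desired distance
    pos_ref = 0.0          # lower-level reference, set by the higher system
    k_hi, k_lo = 20.0, 80.0  # integration rates (assumed)

    for t in range(steps):
        pos_b = 5.0 + 3.0 * (t * dt)   # disturbance: B drifts steadily away
        dist = pos_b - pos_a           # derived perception: one position
        #                                minus the other
        # Higher level: compare distance to its reference, adjust the
        # lower reference accordingly.
        pos_ref += k_hi * (dist - dist_ref) * dt
        # Lower level: perceive absolute position, act to match pos_ref.
        pos_a += k_lo * (pos_ref - pos_a) * dt
    return pos_b - pos_a, dist_ref

final_dist, dist_ref = simulate()
# The controlled distance stays close to the reference even though B
# never stops moving.
```

Note that the higher system never acts on the world directly: its only output is the lower system's reference signal, which is exactly the relationship described above.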
So even though relative position is being perceived and controlled,
lower-level perceptions of absolute position are required. And the
control of relative position requires the ability to _vary_ at least one
of the absolute positions in a controlled way (particularly, as in our
tracking experiments, when disturbances are being applied to that
position).
Jeff Vancouver (940721) --
In studies of control by psychologists, the main missing ingredient is
an understanding of what happens when outputs affect inputs at the same
time that inputs are affecting outputs. You are perfectly correct in
seeing that a control system is just an S-R system with a reference
signal that introduces a bias. In fact, you could probably find S-R
studies in which the reference signal was recognized, as the "effective
zero" of the stimulus. Way back when, I was convinced that this was the
wedge that would get control theory into psychology; in fact, it turned
out to be a "wedge" more in the modern usage of a sticking place.
There are two reasons for this wedge. One is that psychology is heavily
biased toward seeing stimuli and responses as discrete events: first the
stimulus goes "ping" and then the response goes "pop." I think that one
reason for doing this (and designing experiments this way) was precisely
to keep responses from interfering with the administration of stimuli.
If you could get the stimulus over with quickly enough, the response
couldn't modify it before you were through manipulating it. That meant
that the stimulus could be considered an independent variable and the
response a dependent variable, as required by the statistical analyses
(and the theories) that were used. Also, for those who did recognize the
closed loop, this meant that they could treat the loop as a sequence of
events: S-R-S-R- and so on.
The second reason is that psychologists knew nothing about control
theory and therefore didn't understand that closed loops of causality
would have emergent properties that were not obvious from the sequential
event-oriented viewpoint. As rumors from engineering began to spread
into psychology, in the 1950s, psychologists picked up a few ideas, such
as the fact that control systems tended to be slow and unstable, and
would run away if they were too sensitive. This smattering of ignorance
convinced them that feedback phenomena couldn't be very important in
behavior, because organisms could act quickly and stably, and didn't
show runaway behavior.
PCT is based on the actual properties of sensitive closed-loop systems.
Most psychological attempts to analyze goal-seeking systems are based on
incorrect rules of thumb, or complete misunderstandings of how such
systems work.
I think your description of E.coli _as you model it_ can be interpreted
as reinforcement theory.
... "E. coli senses the time rate of change of concentration >[stimuli]
of the attractant [reinforcer]..., and varies the delay [a
response] before the next tumble according to whether the _current_
sensed rate of change is above or below a reference setting."
The problem with saying it this way is that it isn't reinforcement
theory. What is happening is that the _current_ time rate of change is
affecting the _current_ delay, thus varying the time of the _next_
tumble. According to Skinner, reinforcements are consequences of _past_
behaviors, and when they occur, they tend to increase the probability
that the same behavior will occur again in the future.
When the next tumble occurs, the result is a new time-rate-of-change of
attractant. That is the consequence of the _current_ amount of delay
being generated -- as you say, "the response". Unfortunately, the next
time-rate-of-change is completely unrelated to the current one. Whether
the current response be a short delay or a long one, the chances that
after the next tumble the concentration of attractant will be increasing
are equal to the chances that it will be decreasing. Long and short
delays are not differentially rewarded by the results of the next
tumble.
There is in fact no strategy based on past rewarding outcomes that can
be used as a way of selecting a better future response. Each segment of
E. coli's travel is an entity unto itself; whatever results in
systematic progress up the gradient must happen during each segment
independently of all others before or after.
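A rough simulation makes the point concrete. The parameter values and names here are my own assumptions, not Powers's code: the current rate of change of attractant sets the current tumble probability, each tumble picks a random new direction, and yet the cell makes systematic progress up the gradient.

```python
# Sketch of the E. coli logic described above (parameters are
# illustrative assumptions). Concentration of attractant is taken to
# increase with x, so "up the gradient" means increasing x.
import math
import random

def run(steps=20000, dt=0.01, seed=1):
    random.seed(seed)
    x = y = 0.0
    heading = random.uniform(0.0, 2.0 * math.pi)
    speed = 1.0
    prev_conc = x                      # concentration sensed last step
    for _ in range(steps):
        conc = x
        rate = (conc - prev_conc) / dt  # sensed time rate of change
        prev_conc = conc
        # Reference setting is zero: if the sensed rate is above it,
        # delay the next tumble (low tumble probability); if below,
        # tumble soon (high tumble probability).
        p_tumble = 0.01 if rate > 0.0 else 0.2
        if random.random() < p_tumble:
            # The new direction after a tumble is completely random:
            # nothing about the previous delay is "reinforced".
            heading = random.uniform(0.0, 2.0 * math.pi)
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x

final_x = run()
# final_x ends up well above the starting point: net progress up the
# gradient, produced entirely within each run segment.
```

Each segment of travel is indeed an entity unto itself here: the tumble outcome is statistically independent of the delay that preceded it, yet control of the sensed rate of change produces the drift.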
I don't object to the words of reinforcement theory. If you want to call
input variables "stimuli" and "reinforcers", and outputs "responses,"
that's OK with me. What I object to is the _organization_ of
reinforcement theory, which claims that past rewards determine future
behaviors: that behavior is controlled (meaning determined) by its
consequences, to put the thesis exactly in Skinner's words.
The problem of "reinforcement theory" in your models is important
(to me) because of Locke's claim that PCT is neobehaviorism.
How about quoting us some quotes? I can see that this might give your
editor some problems. From what I've seen of Locke, he attributes
characteristics to control theory that just aren't true; maybe we can
find some specific statement to refute. One of Locke's complaints about
control theory is that it isn't based on any experimental data! I guess
Locke carries a lot of weight in your field.
Maybe I have been too generous in thinking neobehaviorism includes O.
If it does, then I can accept that, argue that PCT is neobehaviorism,
so what? It still serves functions goal theory and social cognitive
theory do not.
Attaboy. It doesn't matter what you call it. Except, of course, to your
editor.
As I understand it, controlling a perception variable occurs in two
ways. Once the appropriate lower-order reference signals have been
discovered, the loop simply sends those signals.
Actually, the higher level of control is continuously monitoring its own
controlled variable, and continuously varying the reference signal of
the lower system as a means of controlling the higher variable. The
reference signals are not just sent as blind outputs to the lower
system. The result of sending them is always being reflected in the
state of the higher perception, so control is continuous. This is true
at all levels. The higher system can't just decide on a good output and
stick with it, because there are always many influences tending to alter
its perception. It has to vary the output that becomes the lower-level
reference signal according to the current state of error in the higher
system.
No, the real problem is the method of reference signal selection. Let
us not consider hardwired, which some consider, but has not been
modeled. Hardwired will not work for controlling most complex
perceptions anyway, so let's only concern ourselves with the only other
process considered by PCT advocates - random.
Careful, here. In PCT, control systems are what you might call soft-
wired. That is, reorganization can slowly and randomly change the
wiring, but on the time-scale of ordinary behavior the system is, for
all practical purposes, hard-wired. We handle control of complex
perceptions by dividing them into orthogonal sets, each of which varies
in magnitude only, in one dimension only. We see a separate control
system for each possible dimension of variation (that is under control).
What is normally treated as a single complex perception then becomes a
collection of perceptions, each representing one dimension of the
whole.
This seems very wasteful, but actually you end up with extremely simple
control systems for any one dimension, simple enough that they could be
implemented with a few neurons. The other way of approaching it looks
more compact, but requires an enormous number of computations, so I
don't think you end up saving any neural capacity.
Reorganization involves a random component, but the operation of the
control systems in the main hierarchy doesn't.
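The "one simple control system per dimension" idea can be sketched as follows. All names and gains are my own illustrative assumptions: a two-dimensional cursor position is handled by two independent one-dimensional loops, each trivial on its own.

```python
# Sketch of orthogonal one-dimensional control (illustrative
# assumptions throughout): two independent loops, one per dimension,
# rather than one system computing in both dimensions at once.

def control_1d(perception, reference, gain=10.0, dt=0.01):
    """One-dimensional control step: return the output increment that
    reduces the error in this dimension only."""
    return gain * (reference - perception) * dt

cursor = [0.0, 0.0]
target = [3.0, -2.0]      # reference signals, one per dimension
for _ in range(1000):
    for dim in range(2):
        cursor[dim] += control_1d(cursor[dim], target[dim])
# cursor converges to target, each dimension handled independently.
```

Neither loop needs to know the other exists, which is what keeps each one simple enough to implement with a few neurons.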
Locke & Bandura are arguing that reference signal selection is our
source of free will. We consciously choose our goals (reference
signals).
Well, that's a feeble step toward a hierarchical model, isn't it? After
they have pushed this idea of internal goal-selection for a few years,
maybe one of them will wonder _why_ a particular goal is chosen -- what
does it do for the person to freely choose that goal instead of another?
Then they might realize that there is another level of goals and
perceptions, which are achieved by selecting the lower level of goals.
Free will isn't as simple a concept as Locke and Bandura make it out to
be. I'm perfectly free to move my hands any way I please, until I am
using them to steer a car. Then I must vary my hand position as required
by physical laws to keep the car where I have freely chosen it to be on
the road. If I wish to avoid running into a culvert, on the other hand,
then I must choose positions for the car on the road that do not
intersect the culvert, and if I choose to leave one road and turn onto
another, I have to choose positions for the car that will achieve that
goal -- a very limited set of positions, in comparison to what free
choice might allow.
So every goal is chosen as a means of achieving a higher goal, and as
soon as that is recognized, the lower-level goal can no longer be chosen
freely. It has to be chosen so that achieving it will achieve the higher
goal. Is there a highest-level goal? Where does it come from? Questions
to be answered experimentally, not by philosophy.
Specifically, Bill P. says "beliefs about one's actual effectiveness in
achieving a given goal [Bandura's self-efficacy]" is a perception
(1991). By that do you mean self-efficacy is just another controlled
variable? If it is a controlled variable, then 1) what is "F" in your
model?
F is the set of lower-level perceptions on which you base your
perception of self-efficacy. They might consist of such perceptions as
one's own degree of skill, other's opinions about one's effectiveness,
memories of successes and failures, and so forth. You can control the
perception of self-efficacy toward a high level by developing more
lower-level skills, by persuading others to admire you, or whatever
means will alter the perceptions on which you based the perception of
self-efficacy. You can also control for a low degree of self-efficacy:
you can make clumsy mistakes, antagonize others, refuse to learn any
skills, and so forth (depending on what perceptions add up to self-
efficacy for you). Why would you choose a low level of perceived self-
efficacy? Perhaps to avoid being given responsibilities -- after all,
nobody gives the hard important work to a klutz.
If a function type for F2 includes using an external address for a
reference signal, then outside influences are available for
constructing a hierarchical control system. Is it possible?
Uh-uh. Reference signals in the hierarchy are strictly the outputs of
higher-level systems. There is no way the environment can directly set
any reference signal inside the person. You can set up circumstances in
which a person might well choose to set a given reference signal in a
given way, but that is always up to the person, not the environment. You
can be told "Ride this horse to lose the race or I will kill your
daughter." After weighing your goals regarding losing races and losing
your daughter, you might decide to ride the race to lose. But you might
also weigh other goals, and kill the person who is threatening you on
the spot. Or you might decide that you can always make another daughter,
but you have only one reputation as Dick Francis to lose. Higher
considerations always come into play, and they always come into play
inside the person actually doing the controlling. All the outside world
can do is present circumstances and connections. It can't force a person
to choose any setting for any reference signal.
I would appreciate Bill P.'s sanction on the manuscript to assure
our respective views are properly represented.
At your service.
In my field we develop tests of cognitive ability and other predictors
of job performance. Focusing on job samples, which rarely have adverse
impact (where scores differ depending on race or sex), these tests can
predict job performance at around .30 to .60, depending mostly on the
job. The alternative, doing nothing, would correlate .00 with job
performance, or the job interview (before we improved it) .11. The
differences might seem trivial to those who look for correlations in
the upper 90s, but the difference can save a company hundreds of
thousands of dollars (we have data on this).
Now this gets ticklish, because if I don't just say that your field is
wonderful and useful, you will get all prickly and start thinking up
defenses, and we won't get anywhere. So try to remain calm.
Why do you suppose it is that companies and other large organizations
are willing to put out serious money to get potential employees tested,
while job applicants fear these tests and avoid them wherever possible?
The answer lies in the correlations you cite above. Obviously, if a
company uses screening tests, even with as low a correlation as 0.3, it
will in the long run avoid hiring quite so many unsuitable people. That
can save it a lot of money and grief.
But now consider the testing procedure from the standpoint of the person
taking the test. If the correlations are as high as 0.6, this means that
the "coefficient of uselessness" (sqrt(1-r^2), more traditionally called
the coefficient of alienation) is 0.8. If I remember Gary Cziko's
explanation correctly, this means that you would do 80% as well in
predicting performance simply by taking the mean of the group
performance. If the correlation is as low as 0.3, the coefficient is
0.95. In any event, this means that many people who test low actually
belong in the high group and vice versa.
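The arithmetic behind those figures is just sqrt(1 - r^2):

```python
# Coefficient of alienation sqrt(1 - r^2) for the correlations cited.
import math

def alienation(r):
    """Coefficient of alienation for a correlation coefficient r."""
    return math.sqrt(1.0 - r * r)

print(round(alienation(0.6), 2))   # 0.8
print(round(alienation(0.3), 2))   # 0.95
```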
From the individual's point of view, this means that there is a very
high probability of being misjudged -- either being accepted for a job
at which you will fail, or being rejected from employment which you
could easily handle. Where the large company can even out the statistics
by using the test on hundreds of people per year, the individual
applicant gets only one chance every five or ten years. Furthermore, the
payoff matrix for the company is weighted oppositely to that for the
individual applicant. If the company makes an occasional mistake, it
loses little, and occasionally gets even more than it bargained for. The
individual, however, is faced with the alternative between making a good
living and a poor living (or none at all). A misjudgement is far more
serious for the individual than for the company. The usual justification
for using these tests is that "over the long run" they are quite
reliable. But for the individual, there is no "long run."
I realize that I am taking on a multi-billion-dollar industry here, and
have about as much chance of reforming it as the proverbial snowflake of
surviving in hell. But am I not speaking the truth? The harm done by
psychological testing in industry is not to its beneficiaries, the
companies who commission such testing. It is to those who are tested.
You say " From the individuals stand point it will improve the fit
between their skills and their job, which usually make the individual
happier and more secure." But that is a myth. Over the long run, what
you say is true from the company's standpoint -- but it is false for a
very large proportion of the individuals, particularly if you include
all the individuals tested, not just those selected.
Best to all,