[From Bruce Abbott (981021.1340 EST)]
Bill Powers (981021.0740 MDT)
Bruce Abbott (981020.2200 EST)
When you think a remark is so stupid that it deserves a reply dripping with
sarcasm, it probably isn't. What did you miss?
Nothing. Your generalization is not 100% accurate.
Guess again. (Later on you're going to tell me that my point is valid,
hardly deserving of the sarcastic reply it received initially.)
You're still forgetting something. Shall I tell you what it is, or
would you rather figure it out for yourself?
I'm not assuming that people are perfect controllers; in fact I'm well aware
that they are not. When you set up a task in which you tell a person to
keep the cursor over the target and show the person how to move the mouse to
do so, then _to the extent_ that the person controls the cursor's position
accurately, there are simply zero degrees of freedom in how that will be
accomplished.
Oh, well -- that wasn't it. What you're forgetting is that added to the
effect of the mouse on the cursor is a LARGE disturbance derived from a
random-number generator. Sometimes moving the cursor to the right requires
a mouse movement to the right, sometimes a movement to the left, and
sometimes no movement, depending on how the disturbance is changing. If you
look at the pattern of mouse movement you will find essentially no
relationship to the pattern of cursor movement or target movement. So how
are you going to tell a person "how to move the mouse?"
I can't think of a case in which "moving the cursor to the right requires a
movement to the left" -- can you? One might do that to _slow_ its movement
to the right (which suggests that subjects may use rate as well as
positional control; is that in your model?). There's no need to help the
cursor along if it's going where you want it to go on its own. All a person
need do to accomplish the task is watch the cursor, and adjust the direction
and rate of mouse movement so as to null out the cursor's motion (and if
necessary bring it back to target).
As just noted, a model that does exactly what the instructions say will
already account for most of the variance in mouse movement. That is the
point I was making. I did not and do not dispute that a model with proper
parameters may do slightly better.
"Slightly better" = RMS prediction error of 10% reduced to 2%.
Let's not quibble over loosely quantitative statements. Slightly better
compared to RMS prediction error of 99% absent knowledge of the task
requirements (i.e., randomly guessing where the cursor will be at any given
moment). Going from 10% to 2% is only an 8-point further improvement, which
is indeed slight compared to the 89-point improvement of going from 99% to 10%.
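The arithmetic behind those percentages can be laid out explicitly (a small check of the figures quoted in the exchange; the variable names are mine):

```python
# RMS prediction error as a percentage of the range of movement,
# using the three figures from the text:
no_knowledge = 99.0   # random guessing, no knowledge of the task
task_analysis = 10.0  # "ideal controller" from task requirements alone
fitted_model = 2.0    # control model with fitted parameters

first_step = no_knowledge - task_analysis   # improvement from task analysis
second_step = task_analysis - fitted_model  # further improvement from fitting
print(first_step, second_step)  # 89.0 8.0
```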
As for the disturbances, they are introduced by the experimenter and do not
need to be predicted in order to predict the behavior.
In order to predict the _outcome_, cursor over target. There is no way to
predict where the mouse will move. The experimenter does not determine or
know in advance what disturbance pattern will be applied to the live
person; that is left up to a random number generator.
The model has to be given the same disturbances in order to predict what the
ideal control system _must_ do to keep the cursor over target. Thus, since
the experimenter knows what the disturbances will be, he or she can predict
what movements of the mouse must occur in the ideal case. The subject doesn't
have any choice but to move the mouse in the same way if he or she is to
control perfectly; it is what the task demands.
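The ideal-case prediction being described can be sketched in a few lines (my illustration, not the actual tracking program; it assumes an integrating output function and a random-walk disturbance). Because cursor = handle + disturbance, a system that nulls the error must produce a handle trace that mirrors the negative of the disturbance:

```python
import random

# Minimal sketch of an "ideal" negative feedback control system given
# the same disturbance the subject would get.  The handle (mouse)
# trace it generates is the ideal-case prediction discussed above.

random.seed(1)

gain, dt, target = 50.0, 0.01, 0.0
h = 0.0                    # handle (mouse) position
d = 0.0                    # disturbance from the random number generator
hs, ds, errs = [], [], []
for _ in range(2000):
    d += 0.05 * random.uniform(-1, 1)   # random-walk disturbance
    c = h + d                           # cursor position
    e = target - c                      # error signal
    h += gain * e * dt                  # integrating output function
    hs.append(h); ds.append(d); errs.append(e)

rms = (sum(e * e for e in errs) / len(errs)) ** 0.5
print(round(rms, 3))  # tiny: the handle mirrors -d almost exactly
```

With a high loop gain the cursor barely moves, so the handle trace is determined by the disturbance, which is the point at issue: knowing the disturbance, one can compute what the ideal mouse movements must be.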
Heck, I can do even better than that. I can predict that no matter what
happens for the next 30 years (as long as you stay alive), you're going to
be breathing oxygen. If you die, you violated my instruction to stay alive.
Now you're getting the idea. Given the conditional, there's not much choice
in what must be done.
If you pick the right conditional, you can make your prediction as trivial
as you please. But why make trivial predictions?
I think you're misunderstanding me. I'm not saying that the details needed
to get the model to match the person's _mistakes_ are unimportant; in fact I
think they are central to the enterprise. But when you ask a person to
control a cursor position, and they do, most of the accuracy of prediction
comes from the fact that there just isn't any other way to successfully
complete the task given. You can explain 90% of the variance in mouse
position without knowing anything about the person; just analyze the task
requirements.
The extraordinary precision is largely due, I will admit, to the mere
realization that the person is organized as a good tight negative feedback
control system. You're saying that if a person is organized as a good
negative feedback control system (the only way this task could possibly be
done well, or at all), then assuming perfect control will predict the
results almost as well as measuring the actual parameters of control which
are somewhat less than optimum. That says no more than that this person is
acting as an almost perfect control system. What the instructions actually
say is "be a control system." But isn't the whole point to explain how that
could be done -- HOW the instructions could possibly be followed?
If the person performs the task well, then the person has organized himself
or herself as a good negative feedback control system. It shows that a
person _can_ do that, if need be. If a task demands that a person rapidly
and accurately do mental arithmetic, and the person succeeds in doing the
task well, it shows that the person _can_ organize himself or herself as a
good $4.00 calculator and, given that, I can predict what number the person
will announce as the answer to a particular problem by computing the correct
answer to the problem -- accounting for 98% of the variance in numbers
announced. (I'm assuming that the person makes one or two mistakes.)
The point is, if the subject is performing the task as
directed, he or she has essentially _no_ degrees of freedom, and this
accounts for the predictive accuracy of the analysis.
This clearly shows the point you want to make, but your concept of
accounting for something is pretty weird. What PCT explains is how the
subject could be performing the task as directed. The directions describe
only what outcome is to be produced. They don't explain how, given the
circumstances, it is possible for any organism to produce such outcomes.
For that you need a control system model; no other model, reinforcement or
otherwise, can explain it.
Of course. If the task requires that you behave as a control system and you
succeed at it, then a control system model will be required to explain how
you did it. A good reinforcement model _might_ succeed at explaining how
you came to be organized that way, I don't know. Control theory might also,
but that would require another model to explain the existence of the control
system that explains the behavior. Now _that_ would be impressive. I don't
think that random reorganization would be sufficient, in most cases.
If you know what's involved in performing the task -- what the cv is going
to be, and its means of control, you can model that without knowing anything
about the person, and get very close to a person's performance, if the
person does well on the task.
That is to say, if you know the subject is organized as a good control
system with an integrating output function, you can predict that the person
will control well. Does that sound like a great revelation to you?
It looks rather obvious to me. The important question is, what is the actual
nature of that system? The actual system may be very different from the
simple control model that explains 98% of the variance in the person's
actions, and yet behave nearly identically in the test situation.
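The "98% of the variance" figure corresponds to the standard variance-accounted-for computation between the model's trace and the person's. A sketch with made-up traces (the formula, not the data, is the point here):

```python
def variance_accounted_for(person, model):
    """Percent of variance in the person's trace captured by the model:
    100 * (1 - residual sum of squares / total sum of squares)."""
    mean = sum(person) / len(person)
    ss_data = sum((p - mean) ** 2 for p in person)
    ss_resid = sum((p - m) ** 2 for p, m in zip(person, model))
    return 100.0 * (1.0 - ss_resid / ss_data)

# Hypothetical traces: the model matches the person's ramp except for
# a small alternating wobble (the person's "mistakes").
person = [0.1 * t + 0.4 * ((-1) ** t) for t in range(100)]
model = [0.1 * t for t in range(100)]
print(round(variance_accounted_for(person, model), 1))  # 98.1
```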
The range of movements is close to 50 times the
standard deviation of the mismatch if you assume that the person does as
_required_ by the task -- perfectly. What do you think the probability is
that you will admit that I have a valid point here? (I'm afraid it's
vanishingly small . . .)
Your statement is correct, and trivial.
Well, I'm making progress. My point has moved from being absurdly incorrect
to correct but trivial.
Of course if the organism is a very
good control system, it will control in nearly the same way that a perfect
control system will control. One of the main points of these experiments is
to verify that the organism is, indeed, acting as a negative feedback
control system rather than some other kind of system.
My point is that if the task is such that no other kind of system can do it,
then there is no need for such a test, except to determine whether the
person can do the task. If the person can do the task, it's the only kind
of system he or she can be acting as.
But once that
has been established, a more advanced consideration is to measure the
parameters of control, to see how one person differs from another and how
stable the parameters are over time. The same model we get from these
experiments can be used to predict behavior when control is not so good --
when the disturbances are larger and faster, so the tracking errors are
much larger. And they can predict what will happen to control accuracy when
the situation is changed -- when, for example, the proportionality constant
in the environmental feedback function is doubled or halved, or when the
EFF is changed from a proportional relationship to one involving one or two
integrations (as in controlling the position of a mass on a spring with
damping, or flying an airplane). One question of great interest is whether
the parameters of control remain the same when the EFF changes, or if they
change, and if they change, with what gain, delay, and integration factor
the changes take place. If all you know is that the instructions were
followed, there's no way to answer such questions.
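Measuring those parameters amounts to fitting the model to the recorded handle trace. A toy sketch of the fitting loop (my construction, not the actual analysis software; the "subject" here is synthetic data generated from a known gain, and only gain is fitted, not delay or integration factor):

```python
import random

random.seed(2)

def simulate(gain, disturbance, dt=0.01, target=0.0):
    """Integrating control model; returns the handle trace."""
    h, trace = 0.0, []
    for d in disturbance:
        e = target - (h + d)        # error signal
        h += gain * e * dt          # integrating output function
        trace.append(h)
    return trace

# Synthetic "subject": a control model with a known gain of 40.
dist, d = [], 0.0
for _ in range(1500):
    d += 0.05 * random.uniform(-1, 1)
    dist.append(d)
subject = simulate(40.0, dist)

def rms(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Grid search: pick the gain whose model trace best matches the subject.
best = min(range(5, 100, 5), key=lambda g: rms(simulate(g, dist), subject))
print(best)  # prints 40: the generating gain is recovered
```

The same fitted parameters can then be checked against new conditions, e.g. a changed environmental feedback function, which is the kind of question raised above.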
I've never disputed that. That's why I said that I was not belittling
control theory or its predictive accuracy, and it's why Rick Marken's
example of the fielder is no challenge to my position. In those situations
there are many possibilities as to what is actually controlled, and how.
But PCT doesn't predict which variables are under control in those
situations (although one can come up with good hypotheses by considering the
task requirements and the actual performances of the individuals (e.g.,
fielders) in question). It can predict only conditionally -- _if_ the
subject is controlling X, then the model predicts such and such. I don't
see this as a shortcoming of PCT (there's enough to do for now just figuring
out how the system must be organized given that it _is_ controlling X) so
much as an area much in need of future development. But it's prediction
under severe constraint, and that accounts for most of the accuracy of the
predictions.
But I think VI schedules add
something besides noise; the rat adapts to them and performs differently than
it does under, for example, ratio schedules.
You'll have to prove that. It remains possible that the rats are organized
in exactly the same way they are under all other schedules, and it is the
random variations in interval that create apparent differences in the rats'
behavior.
The rat is the same rat under all schedules, but I suspect that it learns
things when exposed to one schedule that it cannot learn when exposed to
another, and to that extent becomes "organized" differently. I'm just
starting to think about how to model what I suspect might be going on.
Regards,
Bruce