[From Rick Marken (960804.1200)]
Hans Blom (960803) --
> This leads to the following proposal: Implement a cursor tracking task based
> on the above "world" equation. In each trial, r is kept constant. In each
> new trial, r has a new (random) value. This can be implemented by keeping r
> constant for e.g. 50 time intervals of 0.1 seconds; this is a trial. The
> action a will, I assume, stabilize within that period. After 50 time
> intervals, a new r is presented for, again, 50 time intervals.
Hooray. A real experimental proposal! This sounds fine. But I have a
couple of questions before I set it up.
First, I think of r as something inside the subject; it sounds to me like
your r is a "target" on the screen. Is this correct? If so, then the
experiment you describe is one where the target in a tracking task moves
to a new (randomly selected) location every 5 seconds. The connection
between the subject's action (a, which I take to be a one-dimensional
measure of mouse position) and cursor position (p) is p = x + y * a. I
presume that the values of x and y would be limited so that the subject
can generate a value of a that brings p to the target position (r) on
every trial. Is that correct?
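To make sure we are talking about the same setup, here is a minimal sketch of the task as I understand it: the world is p = x + y * a, r jumps to a random value each trial and is held for 50 steps of 0.1 s, and the "subject" is a simple PCT-style integrator acting on the error r - p. The gain value and the controller form are my assumptions, not part of your proposal.

```python
# Sketch of the proposed tracking task with a simple PCT (integrating)
# controller. World equation p = x + y * a; r held constant within a trial.
import random

def run_trial(r, x, y, a0=0.0, steps=50, dt=0.1, gain=5.0):
    """One trial: hold reference r fixed; action integrates the error."""
    a = a0
    for _ in range(steps):
        p = x + y * a              # cursor position (the "world" equation)
        a += gain * (r - p) * dt   # PCT: output is the integrated error
    return a, x + y * a            # final action and final cursor position

random.seed(0)
x, y = 1.0, 2.0                    # feedback function, constant in experiment 1
a = 0.0
for trial in range(3):
    r = random.uniform(-10.0, 10.0)    # new random target each trial
    a, p = run_trial(r, x, y, a0=a)
    print(f"trial {trial}: r = {r:+.2f}, final p = {p:+.2f}")
```

With these (assumed) values the integrator settles well within the 50 steps, so the final cursor position matches the target on every trial.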
> In a first experiment, x and y are kept constant throughout all trials.
> If learning is present, tracking will improve over time. If not, then not.
How do we measure "tracking"? I suspect that the subject will be able to
make p = r on every trial. The time to bring p to r will depend largely on
how far the new r (visible target) is from the prior r. So how do you
measure "tracking" so that you can see an improvement over trials (if
improvement occurs) despite random variations in r?
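One candidate measure that discounts the random variations in r: take the RMS deviation of p from r over the trial and divide it by the size of the target jump. The normalization is my suggestion for discussion, not something from your proposal.

```python
# A possible per-trial "tracking" score: RMS error over the trial,
# scaled by how far the new target is from the prior one, so trials
# with large target jumps are not penalized automatically.
import math

def tracking_score(p_series, r_new, r_old):
    """RMS of (p - r_new) over the trial, divided by |r_new - r_old|."""
    rms = math.sqrt(sum((p - r_new) ** 2 for p in p_series) / len(p_series))
    jump = abs(r_new - r_old)
    return rms / jump if jump > 0 else rms

# Example: a cursor that closes half the remaining distance each step
p, r_old, r_new = 0.0, 0.0, 8.0
p_series = []
for _ in range(50):
    p += 0.5 * (r_new - p)
    p_series.append(p)
print(tracking_score(p_series, r_new, r_old))
```

A score like this could then be plotted across trials: a downward trend would be the "improvement" signature, independent of where r happens to land.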
> A second experiment could be a sequence of the above multi-trial experiments,
> with new values of x and y in each element of the sequence. We expect control
> to deteriorate when new values of x and y are installed, but only if a
> "theory" has been built.
How do you measure "control" in order to see whether or not it
"deteriorates"? If you measure control as the RMS deviation of p from r
over time then, again, this measure of control will depend on how far the
new r is from the prior r. It will also depend on the nature of the changes
in x and y. If y changes sign, for example, the sign of the feedback loop
changes. This will produce a brief deterioration of subject performance (as
shown in Marken and Powers, "Mind Readings", p. 109), and it will produce
failure of the simple, single-loop PCT model (there must be another loop to
detect the positive feedback effect of the change in the sign of the loop
gain in the "tracking" model).
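The sign-reversal point is easy to demonstrate numerically: the very same fixed-sign control law that converges for y = +2 runs away for y = -2, because its corrective action now feeds the error. Gains and values here are my assumptions, chosen only to make the effect visible.

```python
# Why a sign change in y breaks a fixed-sign single-loop controller:
# the loop becomes positive feedback and the error grows instead of
# shrinking, until some second loop detects the reversal.
def step(a, r, x, y, gain=2.0, dt=0.1):
    p = x + y * a              # world: cursor position
    return a + gain * (r - p) * dt   # integrate the error (sign fixed)

r, x = 5.0, 1.0
for y in (+2.0, -2.0):
    a = 0.0
    for _ in range(30):
        a = step(a, r, x, y)
    print(f"y = {y:+}: final error = {r - (x + y * a):.2e}")
```

With y = +2 the error shrinks toward zero; with y = -2 it grows without bound, which is the runaway the single-loop model cannot recover from on its own.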
> Would this be a set of experiments that could decide whether we build
> internal models? If not, what would you suggest?
I think this set of experiments could decide whether or not we need a model
(like yours) that builds internal models. If your "model based" control
model mimics the data even slightly better than a PCT model, I would readily
admit that you have provided evidence that people build "internal models".
If, however, we can account for the results of this experiment using a
simple PCT model, and we cannot improve our account of the data using
your "model based" model, would you agree that there is no evidence of
"model based" control in this situation? I bet I can predict (based on
PCT, no less) the result of _that_ aspect of the experiment;-)
Best
Rick