Analysis Bugs Fixed; Disturbances

[From Bruce Abbott (950122.1340 EST)]

Bruce Abbott (950120.2030 EST)

Bill P.'s PCT analysis routine just blew up while trying to converge on K on a
run completed by one of my colleagues. It has worked fine up to now. Guess
it's debug time.

The main problem, if that's what it is, has to do with the initial values of
k and delta in the FITMODEL function:

k := 0.01; delta := 0.05;

With these initial values, under some conditions, k becomes negative. Is
this supposed to happen? The routine increments k by delta until the error
between the model's and the subject's handle positions goes up. It then
divides delta in half and reverses its sign, so that k is decremented by
delta/2 until, again, the error goes up between fits. Again, delta := -delta/2,
and the process continues until delta becomes small enough in magnitude
(|delta| < .001).
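A minimal sketch of that search in Python (the names are mine, not the
original Pascal; `error_of` stands in for whatever computes the model-vs-
handle fit error for a given k):

```python
def fit_k(error_of, k=0.01, delta=0.05, tol=0.001):
    """Step k by delta while the fit error keeps falling; when it
    rises, halve delta and reverse its sign; stop once the magnitude
    of delta drops below tol."""
    best_err = error_of(k)
    while abs(delta) >= tol:
        trial_err = error_of(k + delta)
        if trial_err < best_err:      # still improving: accept the step
            k += delta
            best_err = trial_err
        else:                         # overshot the minimum: turn around
            delta = -delta / 2
    return k
```

Notice that nothing here keeps k inside any particular range: hand it an
error surface whose minimum lies below zero and it will happily walk k
negative, which looks like exactly what happened with the failing data set.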

In the particular data set in which the analysis failed, k reached -0.15.
When the model was run with this value, the predicted mouse position quickly
went exponentially negative, overrunning the permitted range of the
variables (real OR integer) and halting the program.

If k exceeds 1.0, it can produce some strange results, too (e.g., a double
curve of alternately high and low points), so I assume that k should remain
between 0 and 1.0. You can avoid most difficulties by choosing better
starting values for k and delta, but then the model occasionally gets fit
with k = 0.0. I've modified the code to do the "delta := -delta/2" thing if
k + delta will produce a k > 1.0 or <= 0. Perhaps you will think of a
better solution, but this seems to work on the data tested thus far. I use
initial values of k := 1.0 and delta := -k/2.0.
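In Python terms, the guard I added amounts to something like this (a
sketch with my own naming, not the actual Pascal):

```python
def guarded_step(k, delta):
    """Before taking a step, check whether k + delta would leave the
    interval (0, 1]; if it would, do the "delta := -delta/2" thing
    until the step stays in bounds, then take it."""
    while not (0.0 < k + delta <= 1.0):
        delta = -delta / 2
    return k + delta, delta
```

Since each reversal also halves delta, the loop always terminates as long
as k itself starts inside (0, 1], which the initial values k := 1.0 and
delta := -k/2.0 guarantee.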

Another small bug is that the final display of the model versus handle
positions calls FITMODEL instead of MODELRUN. As a result, the whole
fitting process repeats, and any user-entered alternative k value gets
replaced with the fitted value. Calling MODELRUN instead at this point just
runs the model with the current k value, which is, I think, what was
intended. I've modified the code to do this.

It's interesting to try other values of k than the one that gives the best
fit. When k = 1.0 the model reproduces the disturbance curve (but with
opposite sign). (To make this easier to see, I have added the inverted
disturbance curve to the display.) As you reduce k, the amplitude of the
excursions goes down and the curve begins to lag behind the disturbance
curve. By some strange coincidence [ ;-> ] these changes seem to be exactly
what is needed to match the model's curve to the curve of the participant's
handle positions.
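These effects fall out of even the simplest discrete control loop. The
following is my own minimal formulation, not the THREECV1 code: each step
the model perceives cursor = handle + disturbance and moves the handle by
k times the error from a reference position of zero.

```python
import math

def run_model(disturbance, k):
    """One-level control loop: cursor = handle + disturbance; the
    handle moves by -k times the cursor's error from reference 0."""
    handle, positions = 0.0, []
    for d in disturbance:
        cursor = handle + d
        handle -= k * cursor
        positions.append(handle)
    return positions

dist = [math.sin(0.1 * t) for t in range(200)]
full = run_model(dist, 1.0)   # k = 1: handle mirrors the inverted disturbance
soft = run_model(dist, 0.3)   # k < 1: smaller excursions, more lag
```

With k = 1.0 the handle trace is the disturbance with opposite sign, as
described above; dropping k to 0.3 shrinks the amplitude and lags the
curve behind the disturbance.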

Bill Powers (950121.2045 MST)

There is a real question in my mind whether we can deduce all the
properties of a perceptual control system in cases where the disturbance
or disturbances acting on the controlled variable are and remain unknown
to the experimenter. The question is whether we can get a correct model
strictly by passive observation, or whether we _must_ experimentally
manipulate disturbances applied to the controlled variable in order to
eliminate ambiguities. I don't believe that there is a way to separate
the effect of a constant disturbance from an equivalent shift in the
person's constant reference level. At the moment, it seems to me that we
must establish conditions where we know that the controlled variable is
subject ONLY to the disturbances we deliberately apply and is shielded
from all others.

Strange that you bring this up at this moment, Bill. Just prior to reading
your post I had been thinking about the situation in which the disturbance
is added to the target position rather than to the cursor position. Now the
reference position we ask the participant to maintain keeps moving around
and it is the participant's job to keep the cursor moving so as to stay on
top of the target. In this situation, the controlled cursor and target
positions would be highly and positively correlated, and the disturbance
would differ from the target position only by a constant. If the three
targets were
being separately affected by different disturbances, the controlled cursor
would be the one with the HIGHEST positive correlation with its target. All
three cursors would correlate perfectly with the handle. This would seem to
be a situation far easier to analyze using a conventional regression
approach than the compensatory tracking task.

Now consider the situation in which separate disturbances are being applied
BOTH to the cursors AND to the targets. Is there any way to separate the
effects of the two disturbances on the controlled cursor-target system? My
guess is that there isn't, unless you have a measurable external target and
can assume that the target position IS the subject's reference.

The compensatory tracking task causes problems for a traditional analysis
because it is the participant's job on the task to _eliminate_ variance in
the position of the selected cursor. The analyst has to recognize that it
is the disturbance--the movement of the cursor in the absence of handle
movement--that needs to be examined, and that the disturbance can be
recovered from the difference between cursor position and handle position.
On the tracking task involving a disturbance applied to the target rather
than to the cursor, the disturbance to the target is visible regardless of
handle movements, thus posing an easier (if still misleading) problem for
the analyst. For either task, what is easy to miss is the important
thing--that control acts to minimize the error between target (assuming
target = reference) and cursor, whether the target is stationary or moving.
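For the compensatory case that recovery is just subtraction. A sketch,
assuming cursor position = handle position + disturbance at each sample:

```python
def recover_disturbance(cursor, handle):
    """The unseen disturbance, point by point, is the part of the
    cursor's position not accounted for by the handle."""
    return [c - h for c, h in zip(cursor, handle)]
```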

Rick Marken and Tom Bourbon have both indicated that the most common
response of colleagues to this demo is "So what?". Once you have identified
the correlation between disturbance and handle position, you would seem to
have demonstrated a simple case of cause-and-effect, so I can understand why
they would respond in this way. Nothing earth-shaking here: when the cursor
goes that way, you move the mouse in the opposite direction. So what? How
do you respond to that? Get out your coin and rubber bands?

What about the following demo? You have your colleagues watch someone
tracking a moving target around the screen. The mouse controls a small
white circle; the target is a same-size red circle. Your colleagues are
watching the screen while wearing red goggles, which renders the target
circle invisible to them (but not to the participant, who is not wearing
goggles). All they see is the white circle dancing around the screen. At
the end of the task the screen clears and the colleagues are allowed to
remove their goggles. You then ask them to explain the participant's
cursor-moving behavior.

After I've had a chance to "clean them up," I'll post the revised THREECV1
and PCT analysis programs. Perhaps others will want to get in on the fun.
At this point, I've gotten five of my colleagues to run through the task and
have given them a copy of their own data to analyze as they see fit.