Analyzing SDTEST3 Data

[From Bruce Abbott (950311.1415 EST)]

We've been on spring break this week, so I took a day off yesterday and
visited my Dad in Toledo. Got back late last night to discover that the PCT
elves have been busy in the workshop, generating data and an analysis
program. This morning I compiled Bill's program and ran some of my old
SDTEST3 data through it; I also created a new data file to analyze, but
more about that later.

Bill Powers (950310.1130 MST)

Appended below is a version of the SD3an analysis called SD3anp1 ("P1" for
Powers modification 1). This version has two main new features.

First, the parameters of the tracking loop are calculated using data
starting some time after the condition switch, so the data during the delay
and switchover transient are excluded from the calculation. The model is
run as usual, but the squared error is accumulated only for the periods not
excluded. The k parameter is adjusted to minimize this squared error
instead of the squared error over the whole run.

I had been thinking of doing this myself to get k for the low-level cursor
position control system sans switching error, but you've beaten me to it.
That's what I get for taking a day off. (;->
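
For anyone following along, here is a rough sketch of what I take the
modified fit to be doing. The actual SD3ANP1 is a Pascal program; this is my
own Python rendition, and every name and detail in it is mine, not Bill's:

import numpy as np

def rms_fit(k, lag, ref, dist, handle, include):
    """Run the one-level tracking model over the whole run, but score it
    only on the samples flagged True in include[] (i.e., the samples not
    excluded around each condition switch)."""
    n = len(handle)
    model = np.zeros(n)
    for t in range(1, n):
        s = max(t - lag, 0)                    # transport-lagged perception
        error = ref[s] - (model[s] + dist[s])  # reference minus perceived cursor
        model[t] = model[t - 1] + k * error    # integrate the output
    resid = (model - handle)[include]
    return np.sqrt(np.mean(resid ** 2)), model

def best_k(lag, ref, dist, handle, include):
    """Crude scan for the k that minimizes RMS error on the included samples."""
    ks = np.linspace(0.01, 0.20, 96)
    return min(ks, key=lambda k: rms_fit(k, lag, ref, dist, handle, include)[0])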

This brought the RMS error for my second run down from 24 to 6 pixels, and
the correlation of model and real mouse position up from 0.978 to 0.999
(for the nonexcluded periods).

Running SD3ANP1 "out of the box" (i.e., set to lag 40) on my SDDATA.008 run
brought the RMS error down from 21.59 pixels (at lag 40) to 6.43 pixels, and
the correlation up from 0.983 to 0.999. Note, however, that as written,
SD3ANP1 reports the fit only for the non-excluded data; as a result, changing
the lag parameter, which mainly improves the fit during the transitions, now
has very little effect (beyond slightly shifting the range of excluded
points). As for k values: including all the data (the old analysis) gave
k = 0.0693 at the best-fit lag of 30; using the steady-state data only gave
k = 0.0712 at the same lag.

The 008 run included a random disturbance to the cursor position. I
wondered whether I could get a better look at the switching transition by
eliminating the disturbance, so the SDDATA.009 data were collected without
it. I first analyzed these data using SD3AN (starting with the 008 optimal
lag of 30), with the following results:

Lag     k        RMS     r       RMS

30      0.6377   22.98   0.976   22.76
25      0.2764   18.25   0.985   18.13
20      0.1416   14.69   0.990   14.62
15      0.0889   13.21   0.992   13.13
10      0.0596   14.05   0.991   14.05
16      0.0967   13.30   0.992   13.30
14      0.0791   13.22   0.992   13.13
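
For concreteness, the search behind a table like this amounts to refitting k
at each candidate lag and comparing the resulting fits. A sketch in the same
hypothetical Python terms as above (for plain SD3AN, the include mask would
simply cover the whole run):

def sweep_lags(ref, dist, handle, include, lags=range(10, 41)):
    """Refit k at each candidate lag (in 1/60-second frames) and return
    the best row; rms_fit and best_k are from the sketch earlier in this
    post."""
    rows = []
    for lag in lags:
        k = best_k(lag, ref, dist, handle, include)
        rms, model = rms_fit(k, lag, ref, dist, handle, include)
        r = np.corrcoef(model[include], handle[include])[0, 1]
        rows.append((lag, k, rms, r))
    return min(rows, key=lambda row: row[2])  # row with the lowest RMS error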

I was quite surprised to find that the optimal lag had been cut in half,
from 30 to 15 (from 1/2 to 1/4 second). One possible explanation is that
eliminating the disturbance freed me from having to correct for it
constantly, so that I no longer had to, in effect, suspend one task
(disturbance correction) in order to take up another (target seeking).
Perhaps 1/4 second is how long it takes me to switch tasks; by eliminating
task switching I save that 1/4 second.

In effect, the change in cursor color signals a switch from compensatory
tracking to pursuit tracking, although I realize that "pursuit" is something
of a misnomer here, since the target positions are actually fixed. Yet the
two situations do seem comparable: it is as if the target suddenly jumped to
a new (but predictable) position and the participant then had to adjust the
cursor rapidly to catch up. During this rapid adjustment the disturbance
would have little influence on the cursor position (at least given its
current parameters) relative to the change being produced by mouse movement,
and so would be essentially "invisible" to the participant. Once the new
target had been acquired, however, the disturbance's effect would again
become apparent, and the participant would return to the compensatory
tracking mode.

Once we have a measure of k from the initial fit of the model to the data
from the periods between switching transients, we can use the original
experimental handle positions to compute a table of reference-signal values
for the whole run (in the array calcref^).

In the present application, this method is used to deduce the behavior of
the reference signal entering the control system that is doing the
tracking.
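
If I follow the method: each model step is delta-h = k*(r - c), so given the
recorded handle and cursor one can solve for the implied reference signal,
r = c + delta-h/k. In the same sketch terms as above (my reconstruction --
the actual calcref^ computation may treat the lag and smoothing differently):

def deduce_reference(k, handle, cursor):
    """Invert the tracking law dh = k * (r - c) to recover the reference
    signal implied by the recorded handle movements:  r = c + dh / k."""
    dh = np.diff(handle, append=handle[-1])  # per-frame handle velocity
    return cursor + dh / k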

When you hit the escape key, the reference-signal table is calculated, and
then is plotted over the existing display. You will see that the computed
reference signal jumps very rapidly, although not instantly, to its new
value, with a pronounced overshoot on most transitions. It then comes to a
nearly constant value centered on the yellow line representing the ideal
setting of the reference signal. You will also see that this transition
begins after the ideal reference signal jumps to the other position, which
jump happens exactly at the moment the conditions change. So the delay can
be measured here, too.

I don't see where the program reports the fit produced OVERALL, which
would provide a basis for choosing an optimum delay.

This method asserts that all errors not accounted for by the basic tracking
model come from variations in the reference signal. . . So to some extent
the reference signal array probably includes some variations that belong
inside the tracking control system.

Looking at my own data, I would suggest that almost all the errors represent
tracking errors as opposed to variations in the reference signal. These
tracking errors are not currently represented by the model, which does not
include provision for such things as inertia of the arm/mouse system, rate
control, mouse "stickiness" and oversensitivity, and so on. Most of the
errors I can see in my performance appear to result from overcorrection
(overshoot/undershoot), and all of these show up in the estimates of
reference signal value. In the 009 no-disturbance run I still had trouble
getting the cursor on target--it was difficult to make the small adjustments
necessary because the mouse has a certain amount of static friction that
must be overcome to get it moving, which then disappears once movement
begins, producing overshoot.

However, it's not a bad guess to say that the big overshoot in the
reference signal is a logical way of designing a system so its output will
produce a quick change when used to drive a system that is somewhat
sluggish.

This is an excellent observation and worth pursuing. But the first step
should be to take inertial effects into consideration in the model. To
switch targets rapidly over the distance involved, one has to accelerate the
arm/mouse system and then begin braking to avoid overshoot. Braking
control would appear to require a perceptual input consisting of the
distance yet to travel AND the current velocity of the cursor. Don't
commercial controllers often include such systems? How are they designed?
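
I believe the standard industrial answer is proportional-derivative (PD)
control: the D term of the familiar PID controller opposes the approach
velocity, which is exactly the braking described above. A toy Python sketch,
with made-up parameters, of an inertial load that settles cleanly when the
velocity term is included:

def pd_step(target=100.0, kp=0.02, kd=0.3, n=300):
    """Step response of an inertial load under PD control: acceleration is
    proportional to remaining error (kp) minus current velocity (kd).
    The kd term is the 'braking'; with kd = 0 this loop oscillates
    around the target indefinitely instead of settling on it."""
    pos, vel, trace = 0.0, 0.0, []
    for _ in range(n):
        accel = kp * (target - pos) - kd * vel
        vel += accel
        pos += vel
        trace.append(pos)
    return trace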

Samuel Saunders (Fri, 10 Mar 1995 18:22:07)

Sam, your best lag of 27/60ths puts you squarely between Rick Marken and me.
I know there's potential confounding with such factors as practice and
individual differences, but I'm wondering if the ordering of delays might
reflect the differing ages of the participants involved. It'll be
interesting to see how your other "participants" do...

By the way, the best delay for my first run was:

Lag     k        r       RMS
45      0.0439   0.990   15.826

My guess is that this task requires a good deal of concentration (high-level
"conscious attention") until the associative connections begin to form
("habit") so that the perceptual computations become automatic. The
high-level system is extremely slow compared to the lower-level one. To put
it another way, reorganization is in progress to establish the parameters of
the program-level (?) system. What is going on during acquisition is an
important question we can return to once we have a reasonable steady-state
model in hand.

You are no more an apprentice at this than I am. While I have built some
hierarchical control models (like the spreadsheet hierarchy), this was done
mainly to show how hierarchical perceptual control works in principle; I
have not had much experience doing what we are doing with the SDTEST3 task --
building a hierarchical model that fits real data and then, presumably,
testing and revising that model. So go forth confidently and know that your
guesses about how to model this task are as good -- and probably better --
than mine.

Well, you've at least had the experience of creating hierarchical control
models that work! The models I've produced thus far have all been pretty
simple affairs, and there's still a lot I don't know about modeling even
one-level control systems, like handling transport delay or dealing with
stability.
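
As far as I can tell, the transport delay itself is easy to represent -- it
is just a delay line on the perceptual input -- and the stability problem
arises because gain and delay trade off against each other. A toy sketch of
my own (parameters purely illustrative):

from collections import deque

def delayed_tracker(ref=100.0, k=0.07, lag=15, n=600):
    """One-level integral control acting on a perception that is lag
    frames old.  With lag = 15 (1/4 s at 60 frames/s) this k still
    converges, with some ringing; raise k to about 0.15 and the loop
    goes unstable -- roughly, trouble starts when k * lag exceeds pi/2."""
    percept = deque([0.0] * lag, maxlen=lag)  # the transport delay line
    pos, trace = 0.0, []
    for _ in range(n):
        pos += k * (ref - percept[0])  # act on the oldest (delayed) sample
        percept.append(pos)            # newest sample enters the line
        trace.append(pos)
    return trace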

So I am very interested in seeing where SDTEST3 takes us; I'm learning
hierarchical modeling on the fly, too.

I think this task will prove to be an excellent research vehicle. It's
already raised a wealth of interesting questions to pursue, and I expect
there will be many more.

Perhaps the most significant general aspect of what we're doing here is that
we've been able to set up an experimental procedure at no fewer than four
locations across the country, run it on several participants (ourselves,
mostly), compare results, demonstrate the basic reliability of the observed
phenomena through both intra- and inter-subject replication, produce and
test computer models, and, on the basis of these analyses, propose testable
explanations and begin to explore the phenomena the analyses revealed. It
strikes me that this is an extremely powerful way to do science.

Regards,

Bruce