Analyzing SDtest3 data

[From Bill Powers (950311.1540 MST)]

Bruce Abbott (940311.1415 EST) --

     Running SD3ANP1 "out of the box" (i.e., set to lag 40) on my
     SDDATA.008 run brought the RMS error down from 21.59 (at lag 40) to
     6.43 pixels, and the correlation from 0.983 to 0.999. However, it
     should be noted that as written, SD3ANP1 gives the fit only for the
     non-excluded data; for this reason changing the lag parameter, which
     improves the fit of the transition data, has very little effect (by
     slightly altering the range of excluded points).

Jeez, you're smart. That's exactly what happens and why it happens. I
didn't go on to try the model fit with the deduced reference signal -- it
should be essentially perfect because we're absorbing all the errors into
the reference signal (whether they should be put there or not).

     Regarding k values, including all data (old analysis) gave a value of
     0.0693 for the best-fit lag of 30; using the steady-state data only
     gave a k-value of 0.0712 using the same lag.

That's probably within the scatter of repeats of the same experiment, and
is probably due to the different sampling region as you noted. I'm happy to
get agreement within 3% using different programs.

Incidentally, the very high correlations we're getting are due mostly to
the large difference in target positions. If the total handle excursion
were only that due to the disturbances, the correlations would be lower. On
the other hand, the SD3an correlations include the switching transients
where the difference between model and real handle positions is much larger.

RE: disturbance-free task

     I was quite surprised to find that the optimal lag had been cut in
     half, from 30 to 15. One possible explanation is that eliminating the
     disturbance, by freeing me from having to constantly correct for the
     disturbance, meant that I did not have to, in effect, suspend one task
     (disturbance correction) in order to take up another (target-seeking).
     Perhaps 1/4 second is how long it takes me to switch tasks; by
     eliminating task-switching I save the 1/4 second.

This is indeed interesting. Without the disturbance acting, the cursor
stops moving before the next SD. It's possible to select the next target by
looking at it or at least attending to it in peripheral vision. If you're
in the middle of tracking, perhaps you can't do this selecting of the off-
axis target at the same time. Maybe it takes 1/4 sec to locate an off-axis
target. It would be terribly nice to have an eye-position measurement.

     I don't see where the program reports the OVERALL fit, which would
     provide a basis for choosing an optimum delay.

It doesn't. The next thing would perhaps be to look at the excluded data
region and do the best-delay determination using that region only (because
that's where the transient occurs).

     Looking at my own data, I would suggest that almost all the errors
     represent tracking errors as opposed to variations in the reference
     signal.

But isn't this because we're assuming a constant reference signal of
exactly the right magnitude in the model?

Try this. Somewhere in the middle of the run, wait until just after you've
corrected the error, and deliberately insert an excursion of the cursor
away from the target and back again. When you run SD3anp1 all the way to
the display of the calculated reference signal, you'll see that excursion
in the white reference-signal trace. In terms of the task definition that's
a tracking error, but in fact the error signal remains small, as you would
see if you used the deduced reference signal in a run of the model, rather
than the assumed ideal reference signal.

Probably a useful step would be to run the deduced reference signal through
a low-pass filter. This would separate fast tracking errors from slow
variations in the reference signal, which seems a reasonable separation
since the higher system is going to be slower and the lower system's
tracking errors are largest for the most rapid disturbances. This idea,
plus your remarks, makes me want to retract my observation that the
reference signal overshoots are real. They're too fast.
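A minimal sketch of such a filter, in case anyone wants to try it. The names are my inventions: calcref^ and NPoints are assumed to be the deduced-reference array and sample count from SD3anp1, and slowref^ is a hypothetical output array of the same type.

```pascal
{ One-stage low-pass (leaky integrator) applied to the deduced
  reference signal.  The gain k sets the cutoff: smaller k means a
  lower cutoff, so only the slow reference variations survive.
  If calcref turns out to be a real array rather than integer,
  drop the round(). }
procedure filterref;
const k = 0.05;                { smoothing gain per sample }
var i: integer;
begin
 slowref^[1] := calcref^[1];
 for i := 2 to NPoints do
  { move a fraction k of the way toward each new sample }
  slowref^[i] := slowref^[i-1] +
                 round(k * (calcref^[i] - slowref^[i-1]));
end;
```

The fast tracking errors would then just be calcref^[i] - slowref^[i].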

I agree that we should look into representing the physics of the external
part of the loop. The problem here is that there are really one or two
levels of control below the visual-motor loop we're concentrating on,
kinesthetic levels. These will greatly modify effects of inertia and slip-
stick friction (as seen from the visual level). What we need is the ability
to insert known mechanical disturbances, and that would take some equipment
and funds.


I have now done eight runs. Here are the best-fit results using SD3an.

Run      k      correlation   RMS error   delay

007    0.0952      0.989       17.173      33
008    0.1182      0.982       20.777      34

Mary has done three runs, 009 through 011:

009    0.1021      0.989       14.83       41
010    0.0692      0.988       18.697      30
011    0.0716      0.950       35.579      28

Note that on Mary's third run (011), the interval happened to be very
short, so the transient errors constituted a much larger proportion of the
total error. Also note (re remarks on age): I am 68, Mary is 64. On the
other hand, we all know that women are superior.

I think that you intended the intervals between switches to be randomized.
However, a new random interval is not selected after the count is reset. I
suggest it would be a good idea to put in that randomization; I think we
will want to study predictable switching separately. Your call.
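For the record, here is the sort of thing I mean, as a sketch only -- count, interval, MinIntv, and MaxIntv stand for whatever SDtest3 actually uses for the switch counter and the interval bounds.

```pascal
{ Pick a fresh random interval each time the counter resets,
  instead of reusing the one chosen at the start of the run. }
if count >= interval then
 begin
  count := 0;
  interval := MinIntv + random(MaxIntv - MinIntv + 1);
  { ... switch the target state here ... }
 end;
```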

     Perhaps the most significant general aspect of what we're doing here
     is that we've been able to set up an experimental procedure at at
     least four locations across the country, run it on several
     participants (ourselves, mostly), compare results, demonstrate the
     basic reliability of the observed phenomena through both intra- and
     inter-subject replication, produce and test computer models and, on
     the basis of these analyses, propose testable explanations and begin
     to explore the phenomena these analyses revealed. It strikes me that
     this is an extremely powerful way to do science.

Yes indeed, and I'm very happy to see Samuel Spence joining in! I hope Tom
Bourbon manages to get his net connection straightened out soon; then it
will be five. A note to all lurkers -- anyone who can compile and run Turbo
Pascal 7.0 programs is welcome to join in, either helping with the programs
or just running the experiments and reporting results. Actually, we're not
using anything fancy like objects; everything from TP 3.0 on up ought to
work without difficulty. Chuck Tucker, how are you doing on this?

Oh, yes, I've been meaning to say that I changed the approximation
criterion from abs(delta) < 0.001 to abs(delta) < 0.0001. It seems to make
a slight improvement.



Bill P

Enclosed is a procedure "showdelay" which plots the calculated reference
signal starting with each change of state. The two directions of change are
plotted separately. This is meant to be inserted into SD3anp1.pas.
Following it is a segment from the final begin..end part, showing how and
where it is called.

procedure showdelay;
var i,j,x,delta,side,oldstate: integer;
begin
 for side := 1 to 2 do
  begin
   x := 0;
   oldstate := state^[1];
   for i := 1 to NPoints do
    begin
     { state 1 plots on the left half-screen, state 2 on the right }
     if state^[i] = 1 then delta := 0 else delta := maxx div 2;
     if oldstate <> state^[i] then
      begin                          { change of state: restart the trace }
       oldstate := state^[i];
       x := 0;
       moveto(delta,maxy div 2 - calcref^[i]);
      end;
     if oldstate = side then
      if x < 60 then                 { show first 60 samples after a switch }
       begin
        lineto(4*x + delta,maxy div 2 - calcref^[i]);
        putpixel(4*x + delta,maxy div 2,white);   { zero-reference line }
        if (x mod 5) = 0 then                     { tick every 5 samples }
         for j := -2 to 2 do
          putpixel(4*x + delta,maxy div 2 + j,white);
        inc(x);                      { advance along the time axis }
       end;
    end;
  end;
end;
{ How the above procedure is called: }

  ... if Success then PCTmodel else halt;
  repeat
    ch := readkey;
  until ch = #27;
  ModelRun(0);   { note: this run is not shown or analyzed }
  repeat
    ch := readkey;
  until ch = #27;
  showdelay;     { <<================= }
  repeat
    ch := readkey;
  until ch = #27;