[From Bruce Abbott (950331.1050 EST)]

I've tried a few runs of the compensatory tracking task (CTRACK1) using a square-wave disturbance in order to have some data for a time-domain analysis. The period and amplitude of the square wave were varied across runs (only a few values tested thus far). Below are given some preliminary results. The variables listed are as follows:

period  square-wave period in 1/60th sec
amp     half-wave amplitude of disturbance in pixels (60 = 120 rail-to-rail)
RTh     rise time in 1/60th sec: the time required for the response to go
        from 10% to 90% of the distance between rails (the conversion to
        seconds is given in brackets)
slew    rate of cursor movement computed from rise time, pixels/sec
lag     best-fit lag after disturbance change
k       integration coefficient for simple one-level control model
rms     root mean square error, handle vs model handle
r       correlation, handle vs model handle
RTmh    rise time for best-fit model

                  -----response----   ----------model fit----------
Run  period  amp   RTh  [ sec]  slew   lag    k     rms     r    RTmh
000    150    60  28.5  [0.48]   227    12  0.085  22.2  0.914  32.0
001    600    60  27.9  [0.47]   232    19  0.010   9.3  0.988  27.0
002    600    60  27.2  [0.45]   238    20  0.116  14.7  0.971  23.0
003    600    60  28.1  [0.47]   231    21  0.092   9.1  0.988  30.0
004    300   120  23.7  [0.39]   548    20  0.108  21.6  0.983  25.0
005    300    60  23.8  [0.40]   273    19  0.132  12.6  0.977  20.0
006    300   120  25.2  [0.42]   515    21  0.119  18.5  0.987  26.0
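For reference, a 10%-90% rise time like RTh above can be extracted from a recorded response trace along the following lines. This is a sketch I wrote for illustration, not the actual CTRACK1 analysis code, and the synthetic trace at the bottom is made up:

```python
import math

def rise_time(trace, lo_rail, hi_rail, dt=1/60):
    """10%-90% rise time (sec) of a response moving between two rails.

    trace: samples of the response, one every dt seconds, following a
    rail-to-rail step in the disturbance.
    """
    span = hi_rail - lo_rail
    i10 = next(i for i, x in enumerate(trace) if x >= lo_rail + 0.10 * span)
    i90 = next(i for i, x in enumerate(trace) if x >= lo_rail + 0.90 * span)
    return (i90 - i10) * dt

# Synthetic check: a first-order approach to the far rail with a
# 12-frame time constant (made-up trace, not CTRACK1 data).
trace = [120 * (1 - math.exp(-t / 12)) for t in range(300)]
rt = rise_time(trace, 0, 120)   # close to 12*ln(9)/60, about 0.44 sec
```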

Although there are not enough runs here to get a fix on the stability of these numbers, these results are at least suggestive of some possible relationships (or lack thereof). The three runs at a period of 600 (10 sec) produced fairly consistent values of around 0.46 sec for the rise time.

According to Malvino's _Electronic Principles_, 3rd Ed., the rise time of a dc amplifier subjected to a step change can be related to the upper cutoff frequency of the amplifier, which is the frequency of a sine-wave input that will reduce the output voltage amplitude by 3 dB. The formula is

    fc = 0.35/rise time.
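Applied to the rise times above, the formula is simple arithmetic; shown here just for checking:

```python
def cutoff_hz(rise_time_s):
    # Malvino's single-pole relation between the 10%-90% rise time of a
    # step response and the -3 dB upper cutoff frequency.
    return 0.35 / rise_time_s

fc = cutoff_hz(0.46)    # cutoff for the 600-period runs, about 0.76 Hz
period = 1.0 / fc       # sine-wave period at the cutoff, about 1.3 sec
```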

If it does not make sense to apply this formula here, I'm sure Bill will let me know (I feel a bit like the sorcerer's apprentice here). For a rise time of 0.46 sec, this corresponds to an upper cutoff frequency of 0.76 Hz, or a sine-wave period of about 1.3 sec. Thus far I have not dared to try the sine-wave disturbance at that period, but judging from what happened at 1.25 sec I'd say this estimate is not too far off the mark.

The difference between my performance and that of the simple one-level model is that the model always approaches the new disturbance value following a step-change as a negative exponential, like a capacitor charging or discharging, whereas I tend to switch faster, overshoot, and then correct (although I can force myself to act more like the capacitor by limiting my slew rate). Fitting the model without the perceptual delay following a step produces model slew rates that are far slower than the actual ones, but introducing a delay factor generally produces a model whose slew rates are not far from those actually observed.
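For concreteness, one way to sketch that one-level model in code: a pure integrator acting on the lag-delayed error, with cursor = disturbance + handle and a reference of zero (cursor on target). This is my own reconstruction, not the CTRACK1 fitting code, and the k and lag values below were chosen for a stable illustrative run rather than taken from the table:

```python
def run_model(disturbance, k, lag):
    """One-level control model for compensatory tracking.

    Each 1/60-sec frame the model perceives the cursor position as it
    was `lag` frames ago and moves the handle by k times the error.
    """
    handle = 0.0
    seen = [0.0] * lag            # lag-frame perceptual delay line
    handles = []
    for d in disturbance:
        cursor = d + handle
        perceived = seen.pop(0)   # cursor as it looked lag frames ago
        seen.append(cursor)
        handle += k * (0.0 - perceived)   # integrate the error
        handles.append(handle)
    return handles

# Step disturbance of +60 pixels: the handle settles toward -60 as a
# lagged negative exponential (the "charging capacitor" shape).
h = run_model([60.0] * 300, k=0.03, lag=10)
```

With a delayed pure integrator, k and lag trade off against stability: keeping k*lag below about 1/3 gives the smooth capacitor-like approach, larger products begin to overshoot and ring, and beyond about pi/2 the loop runs away. That may be part of why adding the delay lets a fitted model reach the faster observed slew rates.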

In an earlier post, Bill Powers (950312.1715 MST) noted the following:

    On this latter subject, I have seen a number of papers by Kelso and by some MIT types who have investigated this "constant-time" phenomenon: moving the left hand to a target 20 inches away takes the same time as moving the right hand to a target 5 inches away. All sorts of conjectures were given about how the computations are done to make sure the times come out the same. In fact, if you have a control system with fixed dynamic characteristics, and it is reasonably linear, this constant transition time falls out of the system properties. All you're doing is scaling up the y-axis of the time plots. If the reference position is abruptly changed by 20 units, the initial error is 4 times what it is when the reference position changes by 5 units, so the entire motion occurs 4 times as fast. Four times as fast to go four times as far: Voila! Equal times. There is no system computing how to make the times come out the same; what would be hard would be to make them different.
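Bill's point is easy to verify numerically with any reasonably linear system. Here is a minimal check using a first-order lag; the time constant is arbitrary, chosen only to illustrate, not a model of the tracking data:

```python
def ten_ninety_time(step, tau=0.2, dt=1/60, n=600):
    """Simulate a linear first-order response x' = (step - x)/tau to a
    step change of the given size; return the 10%-90% time in sec."""
    x, trace = 0.0, []
    for _ in range(n):
        x += (step - x) * dt / tau
        trace.append(x)
    i10 = next(i for i, v in enumerate(trace) if v >= 0.1 * step)
    i90 = next(i for i, v in enumerate(trace) if v >= 0.9 * step)
    return (i90 - i10) * dt

t_small = ten_ninety_time(5.0)    # 5-unit change
t_large = ten_ninety_time(20.0)   # 20-unit change: same time, 4x the speed
```

Because the system is linear, the 20-unit trajectory is exactly four times the 5-unit trajectory at every sample, so the 10% and 90% crossings land on the same frames and the transition times come out identical.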

Compare the three runs done at a period of 300 and at amplitudes of 60 and 120 (runs 004, 005, & 006). The rise times are virtually identical whether the amplitude of the disturbance was 60 (120 pixels rail-to-rail) or 120 (240 pixels rail-to-rail): about 0.4 sec. Given this constant rise time, a doubling of the disturbance amplitude implies a doubling of the slew rate (pixels per second of cursor movement) required to correct for the change in disturbance value, just as Bill indicated.
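The arithmetic behind that doubling can be made explicit. The 0.9 fraction below is my assumption about what distance the table's slew column is computed over (it reproduces the tabled values fairly closely), not something taken from the CTRACK1 source:

```python
def required_slew(rail_to_rail_px, rise_time_s, fraction=0.9):
    # Average cursor speed (px/sec) needed to cover the given fraction
    # of the rail-to-rail distance within the rise time. The default
    # fraction of 0.9 is an assumption, not from the CTRACK1 code.
    return fraction * rail_to_rail_px / rise_time_s

s60  = required_slew(120, 0.40)   # amp 60: about 270 px/sec
s120 = required_slew(240, 0.40)   # amp 120: about 540 px/sec
# Same rise time, doubled amplitude: the required slew rate doubles.
```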

The rise times were about 0.1 sec faster at period 300 than at periods of 150 or 600; I'm not sure whether this is real or just uncontrolled random variation (some runs were done on different days, etc.). However, I have a feeling that the change is a real effect of period. At 600 I had plenty of time following a step before the next one would occur, and my attention ("readiness") seemed to flag somewhat. At 150 I felt "rushed"--it seemed I was still trying to make final corrections from the previous transition when the next one would occur. (Pushing it may also be the reason the optimal lag was smaller at 150 than at the longer periods.) So 300 may have been in some sense closer to optimal. Note, however, that the effect here is small.

Regards,

Bruce