Stability factor

<Martin Taylor 940209 14:00>

Rick Marken (940202.1330)

Rick performed a nice experiment with compensatory and pursuit tracking
combined, and showed that he could model the human behaviour with a
single-level control system if he faked a higher-level control that
provided a "predictor" reference signal. He asked me whether I could
predict the time advance of the predictor waveform. I gave a quick answer
based on intuition and a bit of calculation (180 msec) that I realized
the next morning had an algebraic error. Fixing the error led to a
calculated result of 340 msec. Rick let me know that 180 was right,
so I went back to see whether there was any further obvious error in the
information-theory part of the argument. In doing so, I came up with
3 problems. One is very simple. I had made a second algebraic error
(something CSG-L readers know I am prone to do). I had assumed that
Rick's "stability factor" represented the ratio between the squares
of the actual disturbance and the (actual-prediction) equivalent
disturbance, rather than what he had written it to be. When this
error is corrected, the prediction lead comes out to about 230 msec. Still
wrong, but wrong on more legitimate grounds.

The main point of the exercise was to go back to first principles of
information theory to see whether I could get rid of the arbitrary assumption
that there is an "equivalent disturbance" formed by subtracting the
prediction waveform from the actual disturbance waveform. In doing so,
I wanted to use the assumption that the human is operating as well as
possible, and that my old theorem was valid:

log (maximum usable Gain) = (1 - Bp/Bd) log ((D+r)/r)
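
As a minimal numerical sketch of what the theorem claims (the numbers here are
my own illustrative assumptions, not Rick's data): if the perceptual bandwidth
Bp is half the disturbance bandwidth Bd, and the ratio (D+r)/r is 100, the
maximum usable gain comes out to 10.

    # Illustrative sketch of the theorem above; Bp, Bd, D, r are assumed values.
    import math

    def max_usable_gain(Bp, Bd, D, r):
        # log(Gmax) = (1 - Bp/Bd) * log((D + r) / r), solved for Gmax
        return math.exp((1.0 - Bp / Bd) * math.log((D + r) / r))

    print(max_usable_gain(Bp=1.0, Bd=2.0, D=99.0, r=1.0))  # about 10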

It turns out that I can't get there from here without knowing the effective
bandwidth of the random case, from which it should be possible to compute
an equivalent bandwidth for the sinusoidal case (see below for further
discussion of this concept). I've asked Rick to let me know that random
bandwidth, even though I'm not sure it is sufficient information, because
there are other assumptions involved, specifically about the spectrum of
the error signal. That leads to the next issue.

Before I realized that the theorem relation led to a single equation in
two unknowns, I computed the gain of Rick's model systems from the
reported stability factors, on the assumption that the stability factor
was all based on control -- the opposition of the output to the disturbance.
The expression is a quadratic:

G^2 + G + (1-F)/2 = 0

where G is the gain and F the stability factor, F = [var(dt)+var(c)]/var(t-c).
Here is the algebra:

In the simple control loop, D/P = 1+G, and O/P = -G
F = (O^2 + D^2)/P^2 by definition (assuming a perceptual input function
of unity).
Hence F = G^2 + (1+G)^2 = 1 + 2G + 2G^2, which rearranges to the quadratic
above.

The results for Rick's values of F (15.85 and 6.26 for the conditions
of interest) are absurdly low: gains of 2.25 and 1.2 respectively, unless I have
made more algebraic errors. Control systems are usually (according to
Bill Powers) not even called control systems unless their gains are more
than 5 or 10.
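
For anyone who wants to check the arithmetic, here is a minimal sketch of the
computation (my own code, not anything from Rick's experiment), taking the
positive root of the quadratic above:

    # Sketch only: recover G from F via G = (-1 + sqrt(2F - 1)) / 2,
    # the positive root of G^2 + G + (1-F)/2 = 0.
    import math

    def gain_from_stability(F):
        return (-1.0 + math.sqrt(2.0 * F - 1.0)) / 2.0

    for F in (15.85, 6.26):
        print(F, round(gain_from_stability(F), 2))  # gains in the low single digits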

So what's wrong? (Assuming it isn't algebra, and even if it is, the following
is still valid).

In any normal tracking study, if one looks at the error (as represented by
the deviation between cursor and target on screen), it consists very largely
of high-frequency jiggles. There is not much error at low frequencies.
Rick's sine wave disturbance had a frequency of 0.3 Hz, more than 3 seconds
per cycle. If one analyzes the error into frequency bands, say 0-1 Hz, 1-2Hz,
and so forth (or even better, by 0.1 Hz bands), the variance will largely
be concentrated in the higher bands, no matter what the disturbance. I've
never done a spectral analysis on tracking error (though that will change
soon), but I hazard a guess that it will look a bit like this:

   [sketch: variance in each band (vertical axis) plotted against
    frequency band (horizontal axis)]

and this shape will change only a little if there is good control, regardless
of the nature of the disturbance. The same shape, or something like it,
will be observed in the perceptual signal and the output signal, with the
addition of whatever variance is associated with the disturbance and the
actual control behaviour. Accordingly, if the disturbance is narrow-band,
as with Rick's sine wave, only the variance in its close spectral region
should be used in computing the real stability factor associated with
control. What you get by using the overall RMS error always incorporates
this largely high-frequency variance, reducing the contribution of the
actual control, and causing a considerable underestimation of the system
gain.
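
To make this concrete, here is a rough sketch (my own, not Rick's analysis
code) of a band-limited stability factor, reading Rick's var(dt) as the
variance of the disturbance and using d, c, t for the disturbance, cursor,
and target samples; the 0.2-0.4 Hz band mentioned below is an assumed choice
for the 0.3 Hz sine-wave condition:

    # Sketch only: band-limited version of F = [var(d) + var(c)] / var(t - c).
    import numpy as np

    def band_variance(x, sample_dt, f_lo, f_hi):
        # Variance of x contributed by frequency components in [f_lo, f_hi] Hz.
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        n = len(x)
        X = np.fft.fft(x)
        f = np.fft.fftfreq(n, d=sample_dt)
        in_band = (np.abs(f) >= f_lo) & (np.abs(f) <= f_hi)
        # Parseval: var(x) = sum(|X|^2) / n^2, so restrict the sum to the band.
        return np.sum(np.abs(X[in_band]) ** 2) / n ** 2

    def stability_factor(d, c, t, sample_dt, band=None):
        # Whole-spectrum F when band is None, otherwise F within band=(f_lo, f_hi).
        var = np.var if band is None else (lambda x: band_variance(x, sample_dt, *band))
        return (var(d) + var(c)) / var(t - c)

For Rick's 0.3 Hz sine-wave disturbance, something like
stability_factor(d, c, t, sample_dt, band=(0.2, 0.4)) would exclude most of
the high-frequency jiggle from the denominator, and the gain recovered from
the quadratic given earlier should come out correspondingly higher.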

In auditory psychophysics, there is a concept called a "critical band,"
which is the narrowest band of frequencies from which extraneous noise
can be excluded. Such a construct seems useful in this situation. It
could perhaps be determined from a knowledge of the best-fit model gain,
or from the bandwidth of the "random" disturbance, but any such estimate
would depend to some extent on having a correct form for the spectrum of
the error signal.

This is all rather more vague than I would like, but it seems a promising
avenue to investigate. In any event, the use of the "stability factor"
based on total RMS values should be viewed with suspicion until it is
demonstrated to measure something truly related to control.

Martin