SD3TEST data analysis

[From Bill Powers (950312.1715 MST)]

Bruce Abbott (950312.1410 EST)--

     I've been trying out procedures to determine the switching lag and
     have come up with two so far, neither of which I am completely
     satisfied with but which give fairly comparable results.

Good, we need something to save all this hand-adjustment and
recompiling. If you've tried my procedure for plotting the transitions,
and if your data are similar to mine (as they seem to be), you will have
found that there is a lot of variation in the post-switch delay from one
transition to the next. In my plots, the delays can vary between 23 and
37 in one run. The "average delay" therefore doesn't mean much. If we
were to add all the transitions to get an average transition, it
wouldn't look like any individual transition.

I notice that the spread of delays for your runs with a disturbance was
27-34 and 28-36 under the two methods, and without a disturbance was
11-21 and 12-22. Considering these ranges, I wonder whether using the mean
delay will really be useful. The standard deviation greatly understates
the range of the delays.

In terms of trying to deduce hierarchical levels of control, I think
that what matters is the _least_ delay that is consistently repeatable.
The longer delays can be attributed to other control activities that
temporarily pre-empt resources (probably perceptual resources) or
distract attention (which is ANOTHER factor we'll have to get into
eventually).

Since in theory higher systems use lower systems as their output
functions, you should _never_ see an adjustment by a higher-level system
occurring within the delay-time of a lower-level system. Some processing
time is required by the higher system; there should be some minimum
increment in delay time involved in going up a level, with NO reactions
occurring in less than the total delay time.

This, by the way, is a pressing reason to randomize the intervals
between switches. When the intervals are regular and short enough, a
person will start anticipating the next switch, which can result in
delay times declining toward zero and even going negative. In the jargon
of engineering psychology this has been called a "non-determinate
response", meaning a response that occurs before its stimulus. If you do
the switching at a regular once per second, I think you will soon run
into this phenomenon.

So to sum up, what we need is a way of determining the beginning of
_each_ transition after a delay, and probably a way of determining its
initial slope. In fact, the most accurate way of estimating the start of
the transition would be to fit a line to the slope and extrapolate
backward to the intersection with the ideal reference signal. I wouldn't
use the whole slope because sometimes the transition has oscillations in
it (for me, anyway) and may stop short or overshoot. I think it would be
safe to pick one point near the midpoint and work backward from there.
This would give us the slope and intercept-time as two parameters of the
transition.
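The back-extrapolation can be sketched in a few lines. This is my own illustration, not code from the SDTEST3 analysis: the function name and the synthetic cursor track are invented, and a real track would need smoothing before the local slope is taken.

```python
# Sketch: estimate transition onset by back-extrapolating the slope.
# Synthetic example; in practice 'cursor' would be the recorded track
# and 'old_ref'/'new_ref' the two target positions.

def onset_by_extrapolation(cursor, old_ref, new_ref, dt=1.0):
    """Return (slope, onset_time) for one transition.

    Finds the sample nearest the midpoint between the two reference
    levels, estimates the local slope there, and extrapolates that
    line back to its intersection with the old reference level.
    """
    mid = 0.5 * (old_ref + new_ref)
    # index of the sample closest to the midpoint of the transition
    i = min(range(1, len(cursor) - 1), key=lambda j: abs(cursor[j] - mid))
    # local slope from the neighboring samples (central difference)
    slope = (cursor[i + 1] - cursor[i - 1]) / (2 * dt)
    # the line through (i*dt, cursor[i]) crosses old_ref at:
    onset = i * dt - (cursor[i] - old_ref) / slope
    return slope, onset

# Synthetic transition: flat at 0, then a ramp toward 100 starting at t=29.
track = [0.0] * 30 + [min(100.0, 4.0 * k) for k in range(1, 40)]
slope, onset = onset_by_extrapolation(track, old_ref=0.0, new_ref=100.0)
```

The two returned numbers are exactly the slope and intercept-time parameters described above, one pair per transition.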

If we plot number of responses for each delay time, we should get
histograms in which the minimum delay times under each condition are
clearly distinct and non-overlapping. Unfortunately, as we work with
higher-level systems, taking data will require multiple runs, so the
whole thing is going to go more slowly. The general principle of
non-overlap in delay times applies, of course, only within an
individual, so this means multiple runs with very patient participants.
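Building the per-condition histograms is easy to automate. A minimal sketch, with invented delay lists standing in for the measured onset times:

```python
from collections import Counter

# Hypothetical per-transition delays (in frames) under two conditions;
# real values would come from the per-transition onset estimates.
delays_no_dist = [12, 14, 13, 15, 12, 18, 21]
delays_dist    = [28, 30, 27, 33, 29, 36, 31]

hist_no_dist = Counter(delays_no_dist)   # delay value -> count
hist_dist    = Counter(delays_dist)

# The telling statistics: the least repeatable delay per condition,
# and whether the two distributions overlap at all.
min_gap = min(delays_dist) - min(delays_no_dist)
non_overlapping = max(delays_no_dist) < min(delays_dist)
```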


------------------------
     In military terms I think this would be referred to as "acquiring
     the target." Perhaps one way to get at this would be to look at
     different target separations to see what effect the separation has
     on the delay. (It would also affect the cursor transit time, but
     that could be analyzed separately.)

I did a quickie try with the targets moved much closer together (about
3/4 inch) and got the same delays as before, within my current accuracy.
And as far as I could see from my new plot of the transitions, the
transition-time was about the same (although we need a more quantitative
measure of transition time).

On this latter subject, I have seen a number of papers by Kelso and by
some MIT types who have investigated this "constant-time" phenomenon:
moving the left hand to a target 20 inches away takes the same time as
moving the right hand to a target 5 inches away. All sorts of
conjectures were given about how the computations are done to make sure
the times come out the same. In fact, if you have a control system with
fixed dynamic characteristics, and it is reasonably linear, this
constant transition time falls out of the system properties. All you're
doing is scaling up the y-axis of the time plots. If the reference
position is abruptly changed by 20 units, the initial error is 4 times
what it is when the reference position changes by 5 units, so the entire
motion occurs 4 times as fast. Four times as fast to go four times as
far: Voila! Equal times. There is no system computing how to make the
times come out the same; what would be hard would be to make them
different.
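The scaling argument can be checked with a toy simulation. Assuming nothing but a fixed, linear control loop (a simple first-order lag of my own choosing, not the SDTEST model), the time to cover 95% of the step comes out the same for a 5-unit and a 20-unit reference change:

```python
def settle_time(step, gain=0.2, dt=0.1, frac=0.95, t_max=100.0):
    """Time for a linear control loop (output rate proportional to
    error) to cover `frac` of a reference step of size `step`."""
    o, t = 0.0, 0.0
    while t < t_max:
        o += gain * (step - o) * dt   # output moves in proportion to error
        t += dt
        if o >= frac * step:
            return t
    return t_max

t5 = settle_time(5.0)    # 5-unit reference change
t20 = settle_time(20.0)  # 20-unit reference change: 4x the initial error
```

Because the system is linear, the normalized trajectory o/step is identical for both runs; only the y-axis scale differs, so the settle times agree to within one time step.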

     We might also try allowing the unselected target to drift and see
     if that affected the delay.

This is a good idea: disturb the targets as well as the cursor. For one
thing, this will eliminate all ideas about memorizing target positions.
I'll wait for you to come out with SDTEST4.

     But I'm wondering whether it doesn't make sense to have a POSITION
     control system at the bottom level rather than a DISTANCE control
     system, which is what the first proposal amounts to. In this case
     the reference is indeed the position of the currently active
     target. And this position is, of course, a RELATIVE position whose
     frame of reference is the screen. Is there a particular advantage
     of one conception over the other?

Yes: it will become apparent when you disturb the target positions. If
the basic perceptions were relative to laboratory space, then you would
have to predict that in a blacked-out room with only a target and a
cursor visible, the person would be unable to make the cursor track a
moving target. I think there's enough data floating around from
experiments with autokinesis and other situations like this to tell us
that a person COULD easily track the target. The simplest hypothesis
that will work both on the computer screen and in the dark is that the
cursor position is perceived relative to the target position (or vice
versa). That makes the most likely controlled variable the separation.

     I envision a component of the model that selects left-target when
     the cursor is green and right-target when the cursor is red. In
     this case the identity of the target is conferred by its relative
     position on the screen, but it could just as well be conferred by
     some other property (say, shape).

It could also be conferred, as I said a few posts back, in relative
terms: which target is to the left or right of the other.
Mathematically, it makes no difference where you anchor your frame of
reference. Whenever that's the case, I prefer the simpler computation.

Using targets of different shapes would resolve the ambiguity that
occurs when identical moving targets can cross over each other. Good
idea.

     Unless the participant is able to accurately track all targets,
     identifying the active target would have to be followed by a
     search, whose result would be the target's position (and probably
     more, e.g., velocity, angle of movement, etc.). Our current task
     with its stationary targets simplifies the participant's job, but
     it would be well to develop a more general model that would work in
     these other situations as well.

I think our present model will work in more cases than is now apparent,
because some of the seemingly different cases are mathematically
equivalent and need no change in system organization. However, we do
need to model some process called "designating a target". My thoughts on
this matter are still in a confused state.

     1. Keep cursor on the active target (compensatory tracking) while
        attending to cursor color.

     2. If cursor color changes, translate this into a change of
        target.

     3. Acquire the target (identify the new target and find its
        position).

     4. Switch the compensatory tracking reference to the new target
        position.

        or Switch the target location in the perceptual input function
        of the compensatory tracking system to that of the new target.

As an outline of the general strategy, fine. I should mention that there
is really no significant difference between "pursuit" and "compensatory"
tracking, as far as modeling goes. The only difference is whether the
target position is constant or changing, and that is a matter of how the
environment is set up, not the control system. We use exactly the same
model in either case.

This subject came up early in engineering psychology. There was a debate
about whether pursuit tracking required an additional delay of 0.1 sec
in the model, or no difference in delay. The data couldn't resolve the
issue, which tells me that there is no difference. I have never seen any
difference between continuous pursuit tracking, continuous compensatory
tracking, or continuous pursuit tracking with a continually disturbed
cursor.

The biggest difference has to do with tracking that entails a _sudden
change in conditions_, such as commencement of a disturbance where there
was no previous disturbance, or a sudden spike in a disturbance. In this
case there seems to be an added onset delay. I think we're seeing this
in SDTEST3.

     I don't mean to imply by this that compensatory tracking is
     suspended while the switch to the other target is being processed
     (it's an empirical question).

I think our data show the answer: tracking of the wrong target continues
after the signal to switch, and is interrupted by the start of the
transition to the other target. This shows that the first sign of the
transition, as far as the tracking system is concerned, is the change in
reference signal (or target component of the perceptual signal). We'd
have to isolate those little segments to make sure, of course. Notice
how in some cases the transition actually creates a cusp in handle
position: the handle is moving up to counteract an increasing
disturbance and almost instantaneously begins its rapid movement toward
the new position.

     This suggests that the initial mechanism might be feedback
     regulated but that sufficient practice might convert this into an
     automatic mechanism in which feedback plays a minimum role (i.e.,
     active only when errors occur elsewhere).

The simplest way to test this idea is to apply a disturbance. If the
mechanism is not feedback regulated, the effect of the action will be
changed exactly as the disturbance dictates. If there is feedback
control, the disturbance will be resisted.
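A toy simulation makes the contrast concrete (the loop constants and names are mine, chosen only for illustration): with the loop closed, a step disturbance is mostly cancelled; with the output frozen, it passes through in full.

```python
def run(closed_loop, d=10.0, gain=5.0, slow=0.1, steps=500):
    """Controlled variable c = o + d. Closed loop: output o moves to
    oppose the error. Open loop: o stays at its pre-disturbance value,
    so the disturbance has its full effect."""
    o, r = 0.0, 0.0          # output quantity and reference
    for _ in range(steps):
        c = o + d            # disturbance adds directly to the variable
        if closed_loop:
            o += slow * (gain * (r - c) - o)   # leaky-integrator output
    return o + d             # final value of the controlled variable

resisted = run(closed_loop=True)     # settles near d/(1+gain)
unresisted = run(closed_loop=False)  # shifted by the full disturbance d
```

The closed-loop run leaves only d/(1+gain) of the disturbance uncorrected; raising the gain shrinks that residual further.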

     There likely is a comparison being made between cursor color and
     current cursor position (alternatively, target selected?), but I'm
     wondering whether this system doesn't begin to behave more like a
     switch once the associative pathway between cursor color and target
     ID becomes habitual.

OK, that's good to wonder about, and worth testing. Speaking of
switching, however, there's one aspect of the SD that needs
investigation. On my monitor screen, the red and green colors are rather
dim (or my eyes are). There's a question as to how long a changed color
has to be present before the perceptual signal rises or falls enough to
count as a change. I'm going to try flashing a small WHITE rectangle in
the middle of the cursor to signal a change. If this reduces the spread
in delay times, some credence will be lent to the idea that some of our
uncertainties are due to perceiving a rather small change in a
perceptual signal, requiring some integration time.

     Actually, I'm not too happy with the current procedure as it wastes
     time "revisiting" k-values it has already tried.

I came across a method of saving that time while exploring the k/delay
plane for Martin's sleep data. What you do is make an array (in our
case, one-dimensional) initialized to -1. Before running the model for a
value of k, you look in the table to see if there is already a value of
sumsq for that k; if so, you use it instead of running the model again.
If not, you run the model and fill in the table entry for that k. If you
use a table of 1000 reals, you can convert k to an integer index from 0
to 999. Of course you have to decide in advance what resolution of k you
will use and what the largest likely value will be. I haven't run across
any k greater than 1.0, so 1000 cells would resolve three decimal
places, more than enough. I hope that's clear.
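Powers is describing a Pascal array; a minimal Python sketch of the same lookup table follows. The placeholder cost function and all the names are mine. Note that -1 works as a "not yet computed" sentinel precisely because sumsq can never be negative.

```python
CELLS = 1000          # resolution 0.001 for k in [0, 1.0]
table = [-1.0] * CELLS
calls = {"model": 0}  # counts actual model runs, for demonstration

def model_sumsq(k):
    """Stand-in for the expensive model run (a placeholder quadratic,
    not the SDTEST3 model)."""
    calls["model"] += 1
    return (k - 0.437) ** 2

def cached_sumsq(k):
    i = min(int(round(k * (CELLS - 1))), CELLS - 1)  # quantize k to a cell
    if table[i] < 0.0:          # miss: run the model and store the result
        table[i] = model_sumsq(k)
    return table[i]

a = cached_sumsq(0.25)
b = cached_sumsq(0.25)  # hit: the model is not run a second time
```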

Actually, if you change sign and halve delta each time the error
increases, I think you're doing a binary search, aren't you? In that
case there would be no duplicate values of k.
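The step-halving procedure itself can be sketched as follows (my reading of the description; the quadratic test function again stands in for the model's sumsq):

```python
def halve_step_search(f, k0=0.5, delta=0.25, iters=40):
    """Minimize f in one dimension: step k by delta; whenever the
    error increases, reverse the sign of delta and halve it."""
    k, best = k0, f(k0)
    for _ in range(iters):
        trial = k + delta
        err = f(trial)
        if err > best:
            delta = -delta / 2.0   # overshot: back up with a smaller step
        else:
            k, best = trial, err   # accept the step and keep going
    return k

k_hat = halve_step_search(lambda k: (k - 0.437) ** 2)
```

Strictly speaking this search can still revisit a k value while it oscillates around the minimum, so it is not quite duplicate-free; the lookup table earns its keep even here.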
---------------------------------
I received the two papers; thanks. Will comment after they've had a
chance to simmer on the back burner for a while.
--------------------------------------------------------------------
Best,

Bill P.