modeling SDTEST3 performance

[From Bill Powers (950309.2055 MST)]

Bruce Abbott (950309.2135 EST) --

     I've been thinking about perceptions

So you have: that is how we get there. Once you start seeing everything
as perceptions, you notice that the world is chock full of them from
edge to edge. And not only intensities and sensations, but
configurations and transitions and events popping up wherever you look
for them. And the relationships spread throughout the whole field, like
a web connecting everything to everything. A World-Wide Web.

The problem in modeling is to figure out just which of these perceptions
are actually under control by your actions. As you're discovering, the
possibilities seem endless. Adding to the complications is the way your
conscious verbal-logical systems chatter away, interpreting and
commenting and acting very busy, when in fact (as you discover when you
find the right model) they have minimal effect on what is happening. In
fact you learn to control quite well and quite automatically at the
lower levels, while your upper levels are trying to put the situation
into symbolic terms and reason about them.

A large part of the process of modeling is doing the task over and over
and over, paying close attention to everything you perceive. The basic
tracking task is fairly easy to grasp, although its details are
happening a good deal faster than you can pay attention to them.

While you're tracking, you may be thinking "Oops, went too far that
time, got to try to eliminate those overshoots -- hey, I started the
wrong way -- now I'm stopping too short, overcompensating -- this is a
terrible run!"

When the run is finished and you display the results, you find that you
were tracking pretty much the way you usually do. All that cognitive
action comes far too late to have any effect on current events during
tracking. It has about as much effect on the actual tracking as a sports
commentator's exclamations have on a football game. This is what the
Eastern gurus call "monkey-chatter."

Interestingly, there are things you can do consciously that will have an
effect on the tracking parameters. It is not hard to make your tracking
worse; you can, for example, pretend you're moving the mouse through a
thick cold syrup. This will lower the k factor (I've never looked for an
effect on the delay, but I suppose that could be lengthened, too). But
as far as I know, there is nothing you can do consciously that will make
the tracking _better_ once you've had sufficient practice. Maybe a
little tiny bit, but not much. The main thing that helps is to turn off
the commentary and stop trying to run everything from a high level. That
only makes the tracking worse. Zen and the art of tracking.

--------------------------------------
     Cursor position control is already being modeled. This one-level
     control system does a nice job, but its reference levels (and
     switching delay) are being arbitrarily set by programming rather
     than being generated by a higher-level control system. The program
     SIMULATES the presumed output of the higher-level system.

Right. But look at what we have established. We have shown that at one
level, the model can be set up as a normal tracking model, and it will
reproduce the actual behavior very accurately, assuming only that the
reference level switches instantly and _exactly_ from one target to the
other after a fixed delay. In other words, the control is of the target-
cursor distance, and what is selected is only _which_ target. The
reference level doesn't change smoothly from one target to the other (I
think -- if we put in a smooth change I would expect the fit of the
model to get worse; it's worth checking, however).
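
To make the point concrete, here is a minimal sketch of that one-level
loop in C. Everything in it is illustrative -- the names, the frame
rate, and the values of K and DELAY stand in for the parameters we
actually fit to each subject's data:

  /* one-level tracking model with an instantly-switched reference */

  #define STEPS  3600
  #define DT     (1.0 / 60.0)   /* frame time, s (assumed)          */
  #define K      8.0            /* integration gain, 1/s (fitted)   */
  #define DELAY  30             /* switching lag in frames (fitted) */

  double target[2][STEPS];      /* left and right target positions  */
  int    selected[STEPS];       /* which target the color selects   */
  double disturbance[STEPS];    /* optional cursor disturbance      */

  void run_model(double *handle_out)
  {
      double handle = 0.0;
      for (int t = 0; t < STEPS; t++) {
          /* the new selection takes effect only after a fixed lag */
          int sel = selected[(t >= DELAY) ? t - DELAY : 0];

          /* controlled variable: cursor relative to the selected
             target, with the distance reference fixed at zero      */
          double cursor = handle + disturbance[t];
          double error  = target[sel][t] - cursor;

          handle += K * error * DT;   /* integrating output         */
          handle_out[t] = handle;
      }
  }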

Assuming that we have modeled the tracking aspect of this behavior
correctly and that the reference level switches instantly from one
target to the other (after a delay), we now have a smaller modeling
problem: how to turn the available sensory information into a switch of
the target selection. If we solve that, we will automatically have
solved the rest of the problem because we have already modeled how the
cursor gets moved to the selected target.

We have a choice of hypotheses to test. The simplest one is to say that
"red" means select the left target, and "green" means select the right
target. A slightly less simple one is to say that any color transition
means select the "other" target. That is less simple because we have to
add a transition-detector and a toggling output (with the problem of
starting it out in the right state or resetting it if it gets into the
wrong state).
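
In code, the difference between the two hypotheses might look like
this (names are illustrative only):

  enum { LEFT, RIGHT };

  /* Hypothesis 1: the color itself names the target.              */
  int select_by_color(int is_green)
  {
      return is_green ? RIGHT : LEFT;
  }

  /* Hypothesis 2: any color TRANSITION toggles the selection.
     Note the extra machinery: a transition detector plus
     remembered state, which must be started (or reset) in the
     right state.                                                  */
  int select_by_transition(int color_now)
  {
      static int last_color = -1;   /* -1 = no color seen yet      */
      static int selection  = LEFT; /* initial state is a guess    */

      if (last_color >= 0 && color_now != last_color)
          selection = (selection == LEFT) ? RIGHT : LEFT;
      last_color = color_now;
      return selection;
  }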

Another approach would be to treat the higher system as a program-level
system instead of a logic-level system. In my hierarchy these are the
same level with different kinds of rules in effect. So we're supposing
that the rules are those of programming rather than Boolean logic alone.

In that case we can say

  IF green AND NOT red THEN select right target
  ELSE IF red AND NOT green THEN select left target
  ELSE IF green AND red THEN ??? (to be determined from experiments)
  ELSE ??? (to be determined from experiments)

Maybe that's the simplest of all. Of course we would have to put in a
delay to represent the time it takes for the sensations of green and red
to be translated into a logical signal, and for the program to execute
and produce the appropriate selection -- evidently, on the order of half
a second. The delay would be another parameter to be determined from the
data.
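
One simple way to realize that delay is a fixed-length FIFO that
releases each logical selection about half a second after the color
sensation that produced it; the length of the FIFO is the free
parameter (constants here are illustrative):

  #define SD_DELAY 30     /* ~0.5 s at 60 frames/s (assumed)       */

  int delayed(int selection_now)
  {
      static int fifo[SD_DELAY];    /* starts as all zeros         */
      static int head = 0;

      int out = fifo[head];         /* value from SD_DELAY frames ago */
      fifo[head] = selection_now;
      head = (head + 1) % SD_DELAY;
      return out;
  }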

Now there is one level left, the level that wants the score to increase.
The means of making the score increase is to set a reference level that
selects the program-level system above. We have to modify the program-
level system to allow this reference signal to have an effect:

IF RunProgram THEN
  { program defined above }

RunProgram is the output of the system that senses the score (or
increments in the score) and, as long as it is desired that the score
or its rate of change be positive, sets RunProgram to TRUE.

If we want to make this look more like a comparator, we would say

IF (RunProgram AND NOT ProgramRunning) THEN
  { ProgramRunning = TRUE;
    execute above program
  }
ELSE IF (NOT RunProgram AND ProgramRunning) THEN
  { ProgramRunning = FALSE;
    deselect right and left targets
  }

The state of ProgramRunning is the perception (for the higher system to
receive) that the program is running.

That gives us a complete model that we can run and fit to data.
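
Assembled per frame, the three levels might look like the following
sketch. Every function named here stands for one of the pieces
discussed above; none of it comes from an existing program:

  int  want_score_to_rise(void);        /* level-3 output          */
  int  current_color(int t);            /* 1 = green, 0 = red      */
  int  select_by_color(int is_green);   /* level-2 program         */
  int  delayed(int selection);          /* ~0.5 s transport lag    */
  void track_toward(int target, int t); /* level-1 tracking loop   */
  void deselect_targets(void);

  void model_step(int t)
  {
      static int ProgramRunning = 0;    /* perceived by level 3    */
      int RunProgram = want_score_to_rise();

      if (RunProgram && !ProgramRunning) {
          ProgramRunning = 1;
      } else if (!RunProgram && ProgramRunning) {
          ProgramRunning = 0;
          deselect_targets();
      }

      if (ProgramRunning)
          track_toward(delayed(select_by_color(current_color(t))), t);
  }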
--------------------------------------------
Now, how are we to find out if this is the wrong model? We have
constructed it to fit the data in a particular experiment, and it is
almost sure to fit. What we must now do is to see what the model as it
stands predicts about changes in the conditions. Then we must put those
changed conditions into the experiment and see what really happens.

We will know we are finished when all changes of conditions affect the
model and the real person the same way. That's as far as we can go short
of dissecting the person.

We have already put in one change of conditions -- the disturbance of
the cursor. The basic construction of the model does not require that a
disturbance be present; it will work without a disturbance or with one.
Fortunately, the person's mouse behavior will be the same as that of the
model with or without the disturbance, within our measurement accuracy.
So we have, in effect, shown that the model survives one possible change
in conditions.

Another change, also easily introduced, is to disturb the positions of
the targets. As long as the positions never overlap, the model and the
real person should again behave the same way.

However, as the experiment is now set up, if the positions of the
targets cross over each other, there can be a point where the targets
are in exactly the same position, and when they depart from that
position there is no way to tell visually which one is which. The model,
however, will always know which target is which (the way we have it set
up), so where the person can make a mistake, the model cannot. That is
an error in the model.

We can see now that the model needs to be specific about the way it
perceives the "left" and "right" targets. If there is no way to
distinguish the targets except by position, then the model must perceive
leftness and rightness as the person would have to -- by relative
location. The model would then make the same KIND of mistake that the
person would make, although we couldn't be sure that the model would
make the same mistake exactly at the times the person would. We might
get the model to be more like the person if we made its perceptions more
sophisticated, so it took into account directions of movement as well as
instantaneous positions. Then the mistakes would tend to be made only
when the targets cross at a very low velocity, or actually stop
momentarily at the point of coincidence. We could fool both the model
and the person by having the targets merge and then bounce back, so they
appear to cross but actually do not.
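
A sketch of what position-based target identity might look like
(again, illustrative names only):

  /* With only instantaneous positions, "left" is simply whichever
     target has the smaller x; identity is lost whenever the two
     coincide, just as it is for the person.                       */
  int left_target(double x0, double x1)
  {
      return (x0 <= x1) ? 0 : 1;    /* ambiguous when x0 == x1     */
  }

  /* Adding direction of movement: when the positions coincide,
     project a moment ahead using the velocities.  This version is
     fooled only when the targets cross at very low speed, stop at
     coincidence, or merge and bounce back -- exactly the cases
     that would fool the person.                                   */
  int left_target_v(double x0, double x1, double v0, double v1)
  {
      if (x0 != x1)
          return (x0 < x1) ? 0 : 1;
      return (v0 <= v1) ? 0 : 1;    /* whichever will be left next */
  }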

We can suppose that the "selection" process involves something like
pointing to a particular place in a map, but the system that does the
selecting must now depend on perceptions of position in that map, so as
to be able to give meaning to "left" and "right" either absolutely or
relatively. We are definitely missing a level in our model, and this
will show up when we disturb the target positions enough that they can
reverse their left-right relative locations.
----------------------------------------
That's just a sample of where I see us being headed. We're talking about
a systematic process of hypothesizing models, then varying conditions to
bring out differences between the model and the real behavior, then
figuring out how to change the model to eliminate those differences,
then changing the conditions some more. I imagine that this procedure
will sometimes go smoothly, and sometimes get itself trapped in blind
alleys, the way the solution to a crossword puzzle can make complete
sense until a particular definition absolutely requires a word that
doesn't fit what's already there, and the puzzle solver has to erase a
whole string of words and start from an earlier stage of the solution.
-----------------------------------------
     How do we model the switching delay? Does it fall out naturally
     from the slower time-constant of the second-level system, or does
     it emerge from perceptual processes whose computations require time
     to complete? (Interesting--timing of mental processes was among the
     first experimental results out of Wilhelm Wundt's lab at the
     inception of psychological science.)

It falls out of whatever you decide to put into the model. You may find
that putting the delay in with time-constants gives results that are
indistinguishable from putting it in with transport lags. In that case
you just have to keep both solutions as possibilities until some way
turns up to pick one over the other. In the present case, I don't think
that putting in a time-constant delay will give as good a fit to the
data, but what the hell, that's why we write programs, to test different
models against the data. You might want to test the model by looking at
the data only during the transition -- the rest of the data won't tell
much about the way the delay is implemented. As a starter, you could
make the transition of reference positions follow an exponential-decay
curve, varying the time constant for best fit. If you can get a higher
correlation or lower RMS error that way, great: that tells us something
about the output of the higher control systems. Of course you'd want to
put a delay into the tracking loop too, as we do in ordinary single-
target tracking. As a matter of fact we should probably be doing that
now, because some of the transport lag -- even if just a little bit --
is surely in the tracking loop. I think it's reasonable to assume that
the tracking subsystem is the same one we see in single-target tracking;
it's just being used differently by higher-level systems.
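
For testing the two implementations of the transition, something like
the following would do; DT and TC are illustrative, with TC the extra
parameter to vary for best fit:

  #include <math.h>

  #define DT  (1.0 / 60.0)    /* frame time, s (assumed)           */
  #define TC  0.10            /* time constant, s (to be fitted)   */

  /* transport lag only: the reference jumps to the new target     */
  double r_step(double new_target)
  {
      return new_target;
  }

  /* exponential decay: the reference approaches the new target
     along a first-order curve with time constant TC               */
  double r_exp(double r_prev, double new_target)
  {
      return r_prev + (new_target - r_prev) * (1.0 - exp(-DT / TC));
  }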
----------------------------------------------------------------------
Best,

Bill P.

[From Rick Marken (950310.0815)]

Bruce Abbott (950309.2135 EST) --

     Trying to develop a reasonable PCT model for the SDTEST3 task
     shows that I am, after all, still very much the apprentice at
     this, especially once we get beyond the single level.

You are no more an apprentice at this than I am. While I have built some
hierarchical control models (like the spreadsheet hierarchy) this was done
mainly to show how hierarchical perceptual control works in principle; I
have not had much experience doing what we are doing with the SDTEST3 task --
building a hierarchical model that fits real data and then, presumably,
testing and revising that model. So go forth confidently and know that your
guesses about how to model this task are as good as -- and probably
better than -- mine.

I have not done much hierarchical modelling because I find it intimidating;
models of perceptual processes, in particular, can become very complex. So
what have I been doing with PCT for the last 15 years or so? Mainly showing,
through demonstration and modeling, what I think are the important lessons to
be learned about behavior from a detailed look at the operation of the
simplest control loop. When it comes to modelling, I have not written a
control model that is much more complex than the one you wrote within the
first five minutes of being on CSG-L (although I still recommend that you
take a look at my spreadsheet hierarchy -- and the accompanying paper
available on the CSG server -- if you haven't already done so).

So I am very interested in seeing where SDTEST3 takes us; I'm learning
hierarchical modelling on the fly, too.

Best

Rick