rules of the game; Tucker's data

[From Bill Powers (950105.2100 MST)]

Bruce Abbott (950105.1930 EST) --

Re: rules of the threecv1 program:

I think you could make any rules you please. The only touchy point
concerns the disturbance tables, because if they're just taken as data
(without any consideration of the physical situation) one of them will
prove to be highly correlated with mouse position and a pure
methodologist might call that one (but not the other two) "the stimulus"
-- in spite of the fact that it's invisible.

I suppose even that would be an interesting result. Why don't we just
say that the experimenter can have access to all of the data, and can do
all of the experimenting with the handle that he wants. After all, if a
conventional psychologist were to come up with the correct analysis, the
next move would be for you to recruit him or her.

One thing we have to insist upon is that after all the
exploration and experimenting are done, the analyst be required to
explain the performance during the stated control task after enough
practice for the participant to be reasonably skilled: keeping one of
the cursors as close to its target as possible for one minute. That's
where the statistical surprises will show up.

     The problem for conventional analysis here is that the "researcher"
     has no way to actually manipulate what he or she might take to be
     the causal variables (i.e., display variables) and observe the
     effects of those manipulations on the participant's behavior. The
     current version suffers the same defect.

I don't know of a way to use two mouses at the same time. It's possible
to use two joysticks. We could give the experimenter one joystick that
affects the display while the test subject uses the other in the control
task. That would just be another disturbance. I don't see any reason not
to let the experimenter manipulate the mouse; in fact, why not just lay
out a sketch of how the program works? We don't need to hide anything.
Here are the three disturbances, here are the three cursors, here is the
handle that affects all three cursors, try it out.
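
For concreteness, here is a minimal sketch of that layout in Python. The disturbance generator, sampling rate, and the simulated participant's loop gain are all my assumptions, not the actual threecv1 code; the point is just the structure: three invisible disturbance tables, one handle, and three cursors that each sum the handle with their own disturbance.

```python
import random

random.seed(1)
STEPS = 3600  # roughly one minute of samples (the rate is an assumption)

def smooth_disturbance(n, smoothing=0.99, scale=5.0):
    """A slowly varying random waveform standing in for a disturbance table."""
    values, v = [], 0.0
    for _ in range(n):
        v = smoothing * v + (1.0 - smoothing) * random.gauss(0.0, scale)
        values.append(v)
    return values

# Three independent disturbance tables -- never shown on the screen.
d1, d2, d3 = (smooth_disturbance(STEPS) for _ in range(3))

handle = 0.0
gain = 0.2          # assumed loop gain for the simulated participant
c1_hist, c2_hist = [], []

for t in range(STEPS):
    # The one handle affects all three cursors; each cursor also
    # gets its own invisible disturbance added in.
    c1 = handle + d1[t]
    c2 = handle + d2[t]
    c3 = handle + d3[t]
    c1_hist.append(c1)
    c2_hist.append(c2)
    # The participant opposes error in the chosen cursor (cursor 1) only.
    handle += gain * (0.0 - c1)
```

Run it and cursor 1 stays near its target while cursors 2 and 3 wander with their disturbances -- just what the test subject would see.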

Actually, it would be extremely interesting to be able to hit a key so
the researcher's joystick had exclusive control over the cursors. This
of course would break the feedback loop and the test subject would lose
control. There would probably be loud complaints and arguments: "How the
hell can I control the cursors when you're controlling them and my mouse
has no effect on them?@$%!".

     As an ol' operant guy, I would want to plot the individual data
     points in various ways. It would be natural to suppose that the
     drift (relative to target) of the cursor chosen for control is the
     cause and that mouse movement is the effect, so these variables
     would be plotted one against the other to determine their
     relationship. If the trend appeared to be linear, I would probably
     do the regression analysis and report the stats relating to
     goodness of fit. If permitted to experiment, I would examine the
     available variables when the participant was attempting to keep the
     cursor on target, when the participant simply moved the mouse to
     various positions, and when the mouse did not move at all. To
     gather these data would require more than one run of the demo. Is
     this permitted?

Absolutely! Exactly what I hoped you would do. Do as many runs as you
like, use any method of data reduction that seems appropriate. All I ask
is that you remember that the actual values of the disturbing variables
are NOT represented on the screen, so treating them as stimuli would be
no fair. Also, one point of the experiment is to see what the analysis
says about what is happening -- including seeing whether it will reveal
the controlled variable or come up with the other two as the important
variables. So the analysis has to be done as if you didn't know which
cursor the subject intended to control. If we're talking _really_
conventional psychology, what the _subject_ is controlling wouldn't come
up anyway.
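
A quick simulation shows how an analysis done blind can point at the wrong variables. Here a simulated subject controls cursor 1 (the loop gain, disturbance waveform, and sampling rate are my assumptions, not the real program), and we then compute the correlations an analyst restricted to the visible variables would get:

```python
import math
import random

random.seed(2)
STEPS = 3600

def smooth_disturbance(n, smoothing=0.99, scale=5.0):
    """Slowly varying random waveform; plays the role of an invisible disturbance table."""
    values, v = [], 0.0
    for _ in range(n):
        v = smoothing * v + (1.0 - smoothing) * random.gauss(0.0, scale)
        values.append(v)
    return values

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

d1, d2, d3 = (smooth_disturbance(STEPS) for _ in range(3))
handle, gain = 0.0, 0.2  # assumed participant loop gain
h_hist, c1_hist, c2_hist, c3_hist = [], [], [], []

for t in range(STEPS):
    c1, c2, c3 = handle + d1[t], handle + d2[t], handle + d3[t]
    h_hist.append(handle)
    c1_hist.append(c1)
    c2_hist.append(c2)
    c3_hist.append(c3)
    handle += gain * (0.0 - c1)      # the subject controls cursor 1 only

r_c1 = pearson(h_hist, c1_hist)  # controlled cursor: near zero
r_c2 = pearson(h_hist, c2_hist)  # uncontrolled cursor: substantial
r_c3 = pearson(h_hist, c3_hist)  # uncontrolled cursor: substantial
r_d1 = pearson(h_hist, d1)       # invisible disturbance: strongly negative
```

The controlled cursor shows almost no correlation with the handle, the two uncontrolled cursors correlate strongly with it, and the one variable that tracks the handle almost perfectly -- the disturbance -- never appears on the screen.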

     So tell me the rules, and I'll try to get a few of my control-naive
     colleagues to give it a try. They're not a random sample of all
     research psychologists, but they do offer a good cross section of
     specialties, ranging from neuroscience to personality.

This is beginning to sound like real fun. Any way you could record the
explanations, or get them written down? I would truly like to be a fly
on the wall while you do this.

     Despite my total inability to understand the principles of the
     behavioral control of perception ...

You have my deepest sympathy.


Chuck Tucker:

I received your data some time ago and looked them over, but just never
got around to organizing comments. From just the correlations, it's hard
to be very exact about the performance, but it does seem that some
people were good controllers and some were not, and that some people
controlled relatively better on some tasks than others. A few people
didn't seem to get the idea at all -- I'm surprised that the analysis
program didn't just blow up. When you get _positive_ correlations
between handle and disturbance, the person isn't controlling at all --
just moving the handle at random.

Among the people who did control reasonably well, you'll find that as
the disturbance-handle correlation goes up, the cursor-handle
correlation goes down (that is, if you rank people on the same task by
the disturbance-handle correlation, you'll find that the handle-cursor
correlation has the opposite trend). That's as it should be: the better
the control, the lower the correlation between handle and cursor should
be.
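
That trade-off can be reproduced with a toy tracking loop. Here two simulated subjects differ only in how much random movement gets added to the handle -- a crude stand-in for skill, and an assumption of mine, as are the gain and the disturbance waveform:

```python
import math
import random

STEPS = 3600  # samples per run (the rate is an assumption)

def smooth_disturbance(n, smoothing=0.99, scale=5.0):
    """Slowly varying random waveform standing in for a disturbance table."""
    values, v = [], 0.0
    for _ in range(n):
        v = smoothing * v + (1.0 - smoothing) * random.gauss(0.0, scale)
        values.append(v)
    return values

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def run_subject(motor_noise_sd, seed=3):
    """One tracking run: returns (disturbance-handle r, handle-cursor r)."""
    random.seed(seed)              # same seed -> same disturbance for both subjects
    d = smooth_disturbance(STEPS)
    handle = 0.0
    h_hist, c_hist = [], []
    for t in range(STEPS):
        cursor = handle + d[t]
        h_hist.append(handle)
        c_hist.append(cursor)
        # oppose the error, plus some random handle movement (the "unskill")
        handle += 0.2 * (0.0 - cursor) + random.gauss(0.0, motor_noise_sd)
    return pearson(d, h_hist), pearson(h_hist, c_hist)

r_dh_good, r_hc_good = run_subject(motor_noise_sd=0.01)
r_dh_poor, r_hc_poor = run_subject(motor_noise_sd=0.2)
```

The steadier subject shows a disturbance-handle correlation near -1 and almost no handle-cursor correlation; the noisy subject shows a weaker disturbance-handle correlation and a stronger handle-cursor one -- the opposite trends, just as in the ranked data.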

As Rick noted, there are people who learn position control and other
tasks very easily and do very well at them, but can't do pitch control
at all even with practice. I've run into a surprising number of them.
They can't sing, either.

What was it about the data that presented severe problems for the model?
I didn't notice anything.

Best to all,

Bill P.