Keep On Trackin'

[From Bruce Abbott (971209.1010 EST)]

Rick Marken (971207.1230) --

Bruce Abbott (971206.0310) tries (once again) to minimize the
importance of the PCT-based behavioral research that has been
done to date by making the following comments about tracking
studies:

When you know that the person is controlling a particular
variable...and observe the person attempting to comply...it is
not terribly surprising (to me, at least) that one can predict
his behavior with high accuracy.

Bill Powers (971206.0459 MST) tries (once again) to explain the
importance of such research results by saying (among other
things):

Why not say instead that the reason for success in all these
experiments and demonstrations was that we did, in fact, have
a good idea of what the people were controlling and their means
of control, and a model that explained how they did this?

Rick confuses my assertion that the general results of the typical tracking
experiment are unsurprising with another assertion I did _not_ make, which
is that PCT research is trivial.

If a person is asked to keep a cursor aligned with a target while
disturbances continuously vary the position of the cursor on the screen,
there is one (and only one) way in which the participant can succeed. There
are no degrees of freedom. The participant _must_ move the mouse so as to
bring the cursor to the target and then keep moving the mouse so as to hold
the cursor's rate of movement at zero. A person
doing a good job of this cannot help but generate data in which a very high
correlation exists between the effect of the disturbance on cursor position
and the effect of mouse position on cursor position. The same high
correlation would exist whether you have a model of this system or not.

What the control model adds is a predicted behavior, based on a _generative_
model, which can be compared to (and correlated with) the observed behavior.
This model may include physical lags, integration, and so on, that arise in
a logical way from physical system properties, and which may allow the model
to account not only for the high correlation between disturbance and cursor
observed in the data but for specific deviations (e.g., overshoot,
undershoot, oscillation) from a perfect linear relationship. That is the
power of the model, not the ability to get "100% correlations" in the data.
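
For concreteness, here is a minimal Python sketch of the kind of generative
model being discussed. The loop parameters, the smoothed random disturbance,
and the one-step timing are illustrative assumptions, not the program used in
any of the experiments; the point is only that a simple control loop, run
forward, reproduces the near-perfect disturbance-output relationship described
above and yields a cursor trace that can be compared with a participant's.

import math
import random

# Minimal generative control model: a leaky-integrator output function
# driving a cursor that is also pushed around by a disturbance. All
# parameter values below are arbitrary illustrations.
random.seed(1)
gain, slow = 100.0, 120.0        # assumed loop gain and slowing factor
reference = 0.0                  # "keep the cursor on target"

# Slowly varying random disturbance
disturbance, d = [], 0.0
for _ in range(3000):
    d += 0.02 * (random.gauss(0.0, 30.0) - d)
    disturbance.append(d)

output, outputs, cursors = 0.0, [], []
for dist in disturbance:
    cursor = output + dist                     # cursor = handle effect + disturbance
    error = reference - cursor                 # perception taken to be the cursor itself
    output += (gain * error - output) / slow   # leaky-integrator output
    outputs.append(output)
    cursors.append(cursor)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# The handle movements mirror the disturbance almost perfectly (r near -1);
# it is the model's cursor trace that gets compared, point by point, with
# the real participant's cursor trace.
print(corr(disturbance, outputs))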

My argument is that a situation in which high correlations in the data exist
(i.e., little variation unaccounted for by the linear relationship between
disturbance and cursor) is a situation in which it is easy to get an
excellent fit between model and data. Such good fits are less likely to be
obtained when, as is often the case, important extraneous variables cannot
be sufficiently controlled, so that the data are contaminated by relatively
high "noise." No model, not even the right one, will generate high
correlations between predicted and observed values under those conditions.
The high correlations with which so many PCTers are justly impressed are due
not only to the application of an excellent model, but also to the
fortuitous circumstance that high correlations between disturbance and
cursor are easy to obtain in the typical tracking task, there being little
interference from extraneous variables in this task. Getting such clean
data is not always so easy.

Clean data permit one to evaluate a model better than messy data do, and
thus constitute a stronger test of the model. That the PCT model is able to
pass such a stringent test, accounting even for many apparently minor
details of performance, is certainly much to its credit. For this reason I
would judge that the tracking demonstrations were just the opposite of
trivial. But getting those high correlations required more than a good
model; it required a fortuitous experimental arrangement in which the impact
of uncontrolled extraneous variables was low.

Regards,

Bruce

[From Rick Marken (971210.0950)]

Bruce Abbott (971209.1010 EST)--

Clean data permit one to evaluate a model better than messy data do,
and thus constitute a stronger test of the model... But getting
those high correlations required more than a good model; it required
a fortuitous experimental arrangement in which the impact of
uncontrolled extraneous variables was low.

Bill Powers (971209.2023 MST) --

The "fortuitous" part of the tracking experiments consisted of
realizing that it was the participant's perception that was under
control and not some objective "error."

Right. It all comes down to recognizing those pesky _controlled
perceptual variables_ -- the ONE kind of variable that is persistently
ignored by conventional methodology.

There has been only one person who has studied this article
["experiment with volition" paper in Wayne Hershberger's
_Volitional Action_ book] thoroughly enough to realize what it
shows (Isaac Kurtzer). Nobody else has made any comments on it,
other than "nice article."

See, Bruce A. This is what I get for disagreeing with Bill [Rick
Marken (971208.1410)] ;-) In fact, I said considerably more than
"nice article" about this paper; I talked with Bill at some
length about applying this kind of methodology to recovering
references for the two-dimensional position of a cursor whose
position is being varied over time. The recovered references
would represent the pattern that the subject intends to trace. The
only thing that prevented me from carrying out this project was
my own lack of smarts. That's why Bill and I are so anxious for
you to make "a complete commitment to the PCT model". With your
obvious intellectual abilities, you could actually do the research
that I know _should_ be done but am not smart enough to do. You
could be a PCT star, Bruce. Just drop the statistics text, forget
the methods text, forget all that junk that you used to call
research and start studying control. Just do it!

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (971210.1147 MST)]

Rick Marken (971210.0950) --

See, Bruce A. This is what I get for disagreeing with Bill [Rick
Marken (971208.1410)] ;-) In fact, I said considerably more than
"nice article" about this paper; I talked with Bill at some
length about applying this kind of methodology to recovering
references for the two-dimensional position of a cursor whose
position is being varied over time.

Sorry, Rick -- that makes 2. I had forgotten that discussion.

It's not that hard to apply the method. Suppose you have tracking data (one
or two dimensions) in which the model is of the form

output(new) = output(old) + (gain*error - output(old))/slow

Solve this for the error:

error(old) := [slow*(output(new) - output(old)) + output(old)]/gain    or

error(old) := slow*[output(new) - output(old)*(1 - 1/slow)]/gain

On the right you use the _observed_ output, and the gain and slowing
factors you evaluated for the model. To determine these factors for a given
person you need a calibration run.

The perceptual signal is either just cursor position or cursor position at
some previous time if you're using a perceptual delay.

Add the perceptual signal to the computed error signal for each data point;
the result is the deduced reference signal for each data point. Put in hot
oven for five minutes, plot, and there you have it.
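
In Python, the recipe reads roughly as follows; the array names, the data
layout, and the delay handling are my own assumptions about the data format,
and gain and slow are the values obtained from the calibration run:

def deduce_reference(cursor, output, gain, slow, delay=0):
    """Deduce the reference signal from observed cursor and output series,
    assuming output(new) = output(old) + (gain*error - output(old))/slow."""
    ref = []
    for t in range(1, len(output)):
        # invert the output equation to recover the error at the previous step
        error_old = (slow * (output[t] - output[t - 1]) + output[t - 1]) / gain
        # perception = cursor position, optionally at an earlier time
        p = cursor[max(t - 1 - delay, 0)]
        ref.append(p + error_old)          # reference = perception + error
    return ref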

Best,

Bill P.

[From Bill Powers (971209.2023 MST)]

Bruce Abbott (971209.1010 EST)--

Clean data permit one to evaluate a model better than messy data do, and
thus constitute a stronger test of the model. That the PCT model is able to
pass such a stringent test, accounting even for many apparently minor
details of performance, is certainly much to its credit. For this reason I
would judge that the tracking demonstrations were just the opposite of
trivial. But getting those high correlations required more than a good
model; it required a fortuitous experimental arrangement in which the impact
of uncontrolled extraneous variables was low.

The "fortuitous" part of the tracking experiments consisted of realizing
that it was the participant's perception that was under control, and not
some objective "error." Externalizing and objectivizing the error was what
kept the early researchers into tracking phenomena from seeing the right
solution to the problem. That didn't keep them from making models, but it
kept them from seeing what they had.

What makes it possible to get good data (or know when you haven't got it)
in PCT experiments is the idea of the controlled variable. Once you begin
to see what a controlled variable is, you can start to focus on it,
protecting it from disturbances other than those you want to test, and
refining its definition. You can't do this in conventional experiments
where there is no reason to suppose that _ANY_ environmental effect doesn't
matter. If you think that behavior is simply the resultant of all the
forces in the universe converging on the organism and pushing it around,
your chances of protecting against irrelevant disturbances are nil. All you
can hope to do is average them out.

A systematic approach to PCT in which you start by identifying variables
that are clearly under control will get you clean data. If you define a
variable and want to know if it's clean, you prevent the organism from
affecting it and observe it for a while, to see what else can affect it
besides the organism's behavior. As you identify the other influences, you
arrange to eliminate or avoid their effects. When you have a variable that
will sit still for as long as you care to observe it, you can then let the
organism's behavior affect it again, and you will know that what you
observe is due entirely to the organism and the known disturbance you're
applying, and not to any other causes.
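
Sketched in Python, the quiescence check might look like this; the function
name, the tolerance, and the pass/fail form of the result are illustrative
choices of mine, not a prescribed procedure:

def sits_still(samples, tolerance):
    """Return True if a candidate controlled variable, recorded while the
    organism cannot affect it and no deliberate disturbance is applied,
    stays within tolerance of its initial value for the whole observation."""
    return all(abs(s - samples[0]) <= tolerance for s in samples)

# Only after the variable passes this check is the organism allowed to act
# on it again, so that any remaining variation can be attributed to the
# organism plus the disturbance being deliberately applied.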

After that it's up to the model. I tried to show how we would proceed in
the "experiment with volition" paper in Wayne Hershberger's book. Here we
matched a model to tracking behavior for the first and last thirds of a
run, to set its parameters. In the middle third, the model was used to
deduce the reference signal as the participant tried to move the cursor to
a series of different positions in a staircase pattern relative to the
stationary target. The ubiquitous disturbance made the cursor deviate
considerably from the desired staircase pattern, so it was barely visible.
However, the deduced reference signal showed a much clearer picture of the
staircase, particularly when the highest-frequency noise was smoothed out.

I have attached a scan of figure 6 from this article, called Fig6vol1.gif.
The top trace is the real cursor behavior. Using the model that was matched
to the first and third segments of the data, I determined the integrating
constant in the output function. Applying the inverse of this function (the
first derivative) to the observed control-handle behavior yielded the
deduced error signal in the model. Since e = r - p, r = e + p. The
perceptual signal was assumed proportional to the cursor position and
delayed by a time that was evaluated from the data, so adding the observed
cursor position to the deduced error signal gave the deduced variations in
the reference signal. Running the model with that deduced reference signal
gave the second trace down (for some reason I called it the "center"
trace). The model's cursor position now exactly matched the real cursor
position, as it would have to do. That is a check that the calculation of
the deduced reference signal was done without errors.
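
The same calculation, sketched in Python: the array names, the time-step
conventions, and the forward check are my reconstruction from the description
above, with k the integrating constant and delay the perceptual delay, both
obtained from fitting the first and last thirds of the run.

def deduce_and_check(handle, cursor, disturbance, k, delay):
    """Deduce the reference signal, assuming a pure-integrator output
    function handle[t+1] = handle[t] + k*(ref[t] - cursor[t - delay]),
    then rerun the model to confirm it reproduces the observed cursor."""
    n = len(handle)
    ref = []
    for t in range(n - 1):
        error = (handle[t + 1] - handle[t]) / k   # inverse of the integration
        p = cursor[max(t - delay, 0)]             # delayed perception of cursor
        ref.append(p + error)                     # r = e + p

    # Forward check: driving the same model with the deduced reference and
    # the recorded disturbance must reproduce the observed cursor exactly,
    # as in the second trace of Figure 6.
    h, sim_cursor = handle[0], []
    for t in range(n - 1):
        sim_cursor.append(h + disturbance[t])
        p = sim_cursor[max(t - delay, 0)]
        h += k * (ref[t] - p)
    return ref, sim_cursor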

The third trace shows the deduced reference signal. There is clearly
high-frequency noise in this trace that is not present in the top curve.
Smoothing this trace just enough to eliminate those high-frequency
oscillations produced the fourth trace, the smoothed trace of the deduced
reference signal. I have, by the way, always had a bit of hand tremor. The
high-frequency oscillations in the reference signal seem to be near the
tremor's frequency, which might indicate that the tremor originates in higher
control systems in my brain. If anyone else replicated this experiment, they
might find that the high-frequency oscillations are not there in most people.
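
The smoothing itself can be as simple as a short centered moving average; the
window length below is an arbitrary illustration, not the value used for
Figure 6:

def smooth(series, window=9):
    """Centered moving average: long enough to remove the tremor-frequency
    ripple, short enough to preserve the staircase steps."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(i - half, 0): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out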

As you can see, the staircase pattern of intended cursor positions is very
clear. There are three steps up and four steps down. The steps are much
clearer in the reference signal trace than they are in the trace of actual
cursor positions (even if the same smoothing is applied to the cursor
positions -- not shown). If there were no variable disturbance being
applied, the cursor position would reflect the reference settings, but with
the disturbance present the participant had difficulty in making the cursor
actually move as intended. Nevertheless, this method of analysis gives a
clear quantitative picture of the intended positions -- clearer than what
could be discerned in the record of the actual positions.

What we have here is the beginning of a method for building up a picture of
hierarchical control. If we can clearly identify controlled variables of a
simple sort, we can deduce the variations in the reference signal for the
associated control system. Those reference signal variations would
presumably be the output of the next level of control, and the "cursor
position" would be one element of the controlled variable at the next
level. When we have identified all the elements of the next level of
controlled variable, and have, with this method, deduced the reference
signal variations for each lower-level control system, we can then proceed
to identify the next level of controlled variable, fit a model to the
second level of control, and deduce the reference signal variations at the
second level.

While this method is not without its difficulties, it is clearly capable of
being iterated again and again, to build up a picture of increasingly
complex multidimensional control processes at higher and higher levels.
Obviously it is necessary to have a constant reference signal only at the
highest level being explored, and then only while the baseline model is
being evaluated. Even that might not be necessary if suitable multivariate
statistical analyses can be used to provide a first approximation to the
required form of the model. I don't know how to do that, but other people do.

There has been only one person who has studied this article thoroughly
enough to realize what it shows (Isaac Kurtzer). Nobody else has made any
comments on it, other than "nice article." I started to show you how to use
this approach in analyzing your rat data, but you didn't seem very
interested.

The reason I bring this up, Bruce, is to show what can be accomplished if
one makes a complete commitment to the PCT model and begins to apply it
seriously. This sort of thing can't be done if you waste your time trying
to fit control theory with some other approach to behavior analysis -- and
what's the point? No other theory comes anywhere near this ability to make
deductions about organization and test them. Why waste your time on
anything but PCT? Everything else is just a diversion from the efforts that
are needed to carry the PCT model further.

Best,

Bill P.

(Attachment Fig6Vol1.gif is missing)