[From Bill Powers (2003.05.08.0945 MDT)]
Marc Abrams (2003.05.08.0016)--
>Plain and simple I very much want to empirically test the HPCT model. I
>want to provide descriptions of everyday phenomena in PCT talk and back it
>up with empirical evidence.
This is what PCT modeling has been about since the beginning. PCT research
is more than just thinking up after-the-fact descriptions of ordinary
behavior using PCT language. Anybody can do that -- just substitute what
you think is the nearest technical-sounding word from PCT for an
ordinary-language term, and there you are. Don't say "I'm bothered about
that" -- say "I have an error signal about that." Don't say "I'd like to
finish this book before we go," say "I have a reference signal for
finishing this book." Don't say "I'd like to know what you think of the
paper I just wrote," say "How about some feedback on my paper?" People love
trendy, insider's language, especially when all you have to do is remember
one simple word-association (like a Little Orphan Annie substitution code).
You don't even have to learn anything new, because you can keep the meaning
you had for the ordinary-language term and just transfer it to the
technical-sounding term. Then you sound like you're talking technically,
but you're really talking (and thinking) the way you always have. That's
why it's so easy.
In PCT modeling, the objective is to observe a natural or nearly-natural
behavior closely enough to be able to measure it. Then you try to set up a
control-system model to reproduce various parts of the observed situation,
and set it in motion to see if it will, out of its own properties, produce
simulated behavior that is very close to the observed behavior.
Tracking experiments, or more generally experiments that use a simple
action to affect a perceptual variable that's easy to display on a computer
screen, show how this process works. Demo 1 is actually a series of tests
of the hypothesis that a particular example of behavior is in fact a
control process. Most of the examples require the human participant to
maintain some condition on the screen, while the program measures the
behavior of the manipulated objects as well as the actions by which they
are manipulated.
If a simple control-system model is sufficient, in each case we should see
a general pattern (shown as a data plot for the actual test run) and
certain quantitative relationships between variables (filled in from the
data as correlation numbers that appear in the text description on the
screen after the run). Each experiment is actually a 30-second run with
the participant free to move the stick or mouse in any way whatsoever.
The plot shows how an invisible disturbance varied, and how the control
handle position varied, with the numerical state of the controlled variable
also plotted. In every case, we see the handle position changing equally
and oppositely to the magnitude of the disturbance, and the controlled
variable remaining roughly constant (as instructed). I would guess that I
have looked over the shoulders of many hundreds of people doing thousands
of runs in Demo 1, and I have yet to see any example where the plot did not
show this form (if the person kept controlling until the end of the
30-second run). This ought to be more impressive than it is, because there
is absolutely nothing to constrain the person to move the control handle
in any particular way. Yet they all -- _all_ -- move it as the control
model quantitatively predicts that they will move it.
The numerical results filled into the text are (1) the correlation between
the controlled variable and the handle positions, and (2) the correlation
between the disturbing variable and the handle positions. These are
obtained from the experimental data just acquired during each
demonstration. The text surrounding the correlation numbers does not
change, which shows, to those who notice, how confident we are that the
prediction will be verified. The former correlation is low (usually below
0.3), and the latter is high (usually above 0.95). The control model
predicts that these correlations will be low and high, respectively, as
observed. This is
astonishing to a conventional scientist when he first understands what
these results are saying. An old psychology professor of mine shouted at me
and walked angrily out of the room when I described these results to him
and drew the relationships on the blackboard. He understood them, all
right, but accused me of trying to make a fool of him. I did NOT say that
he didn't need any help. The controlled variable that is kept nearly
constant by the actions of the person shows only a very small correlation
with those actions. At the same time, a variable that is invisible to the
participant, the disturbance, shows a ridiculously high correlation with
the actions of the person. Under conventional cause-effect concepts, that
is impossible.
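A rough simulation can show why the numbers come out this way. The sketch below is my own illustration, not the Demo 1 program; the gain, disturbance bandwidth, and amplitudes are all made up for the example. It runs a simple integrating control system against a slowly varying disturbance for a simulated 30 seconds and then computes the two correlations:

```python
import math
import random

# Illustrative sketch only -- not the Demo 1 code. All parameters
# (gain, noise bandwidth, amplitudes) are made up for the example.
random.seed(1)
dt = 1.0 / 60.0                 # 60 samples per second
n = int(30 / dt)                # a 30-second run
gain = 50.0                     # output gain of the model controller
reference = 0.0                 # "keep the cursor centered"

# A slowly varying, invisible disturbance: low-passed random noise.
disturbance = []
d = 0.0
for _ in range(n):
    d += 0.05 * (random.uniform(-100.0, 100.0) - d)
    disturbance.append(d)

# Simple integrating control system: the output opposes whatever error
# appears in the controlled variable (cv = handle + disturbance).
handle = 0.0
handles, cvs = [], []
for d in disturbance:
    cv = handle + d              # controlled variable
    error = reference - cv       # error signal
    handle += gain * error * dt  # integrating output function
    handles.append(handle)
    cvs.append(cv)

def corr(x, y):
    """Pearson correlation, written out to keep the sketch dependency-free."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r_cv_handle = corr(cvs, handles)            # small, as the model predicts
r_dist_handle = corr(disturbance, handles)  # near -1: handle mirrors disturbance
```

With these made-up parameters the disturbance-handle correlation comes out near -1 (high in magnitude, negative in sign because the handle opposes the disturbance), while the correlation between the controlled variable and the handle stays small -- the same pattern described above.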
To do an experiment of this kind, you must first observe natural behavior
in the experimental situation very closely, closely enough that you can get
some numerical or at least quantitative measures of the variables involved.
Then you must construct a specific control-system model in which the same
variables are included, as well as any hidden variables and relationships
that are being hypothesized. Disturbances must be defined so they can be
applied in the same way to the real behavior and to the simulated behavior
of the model.
Testing the model then involves three main stages. First, using data from
one experiment with the real system, you adjust the parameters of the model
for best fit. The parameters are things like masses and inertias of
physical moving parts, gains and bandwidths of amplifiers, and sensor and
actuator characteristics. Often you can get away with simple
approximations. Then, using a completely new pattern of disturbances, you
run the model and record its behavior. This amounts to a prediction of the
behavior of the real system under new conditions. The final step is to use
the same new pattern of disturbances with the real system and record its
behavior for comparison with the record of the model's behavior. The amount
of difference between the model's behavior (now amounting to a prediction)
and the real behavior is a measure of the remaining discrepancy between the
model and the real system.
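The three stages can be sketched in code. In the toy version below (my own illustration, not anyone's actual experiment), the "real system" is itself a simulated controller with a hidden gain of 8.0 plus motor noise, standing in for a human participant; the model's single parameter is fitted to one run, then used to predict a run under a brand-new disturbance:

```python
import math
import random

# Illustrative sketch of the fit-then-predict procedure. The "real
# system" here is a simulated controller (hidden true gain 8.0 plus
# noise) standing in for a participant; all numbers are made up.
dt = 1.0 / 60.0
n = int(30 / dt)

def smooth_noise(seed):
    """A slowly varying disturbance pattern (low-passed random noise)."""
    rng = random.Random(seed)
    out, d = [], 0.0
    for _ in range(n):
        d += 0.03 * (rng.uniform(-60.0, 60.0) - d)
        out.append(d)
    return out

def run_system(gain, disturbance, noise=0.0, seed=0):
    """One 30-second run of an integrating controller; returns the handle trace."""
    rng = random.Random(seed)
    h, handles = 0.0, []
    for d in disturbance:
        cv = h + d                        # controlled variable
        h += gain * (0.0 - cv) * dt       # integrating output function
        h += noise * rng.gauss(0.0, 1.0)  # motor noise (real system only)
        handles.append(h)
    return handles

def rms_diff(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Stage 1: fit the model's one parameter (gain) to a run of the real system.
d1 = smooth_noise(10)
real1 = run_system(8.0, d1, noise=0.1, seed=1)    # hidden true gain = 8.0
best_gain = min((g / 2.0 for g in range(2, 41)),  # grid search, 1.0 .. 20.0
                key=lambda g: rms_diff(run_system(g, d1), real1))

# Stage 2: with parameters frozen, predict behavior under a NEW disturbance.
d2 = smooth_noise(99)
prediction = run_system(best_gain, d2)

# Stage 3: run the real system under the same new conditions and compare.
real2 = run_system(8.0, d2, noise=0.1, seed=3)
prediction_error = rms_diff(prediction, real2)
```

The remaining `prediction_error` is the measure of discrepancy between model and real system described above; in this toy case it is small because the model family contains the "real" system by construction, which real experiments cannot guarantee.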
That's the basic procedure for model-based analysis of behavior. There are,
of course, varying degrees of precision with which we can carry it out,
depending on how closely we can measure the aspects of the real system that
are relevant. The conclusions have a corresponding degree of certainty,
ranging from pretty sloppy to pretty good. It's not easy to do good
experiments with higher orders of behavior, in part because the only
available definitions and descriptions are based on ordinary language and
common sense, both of which can steer us pretty far off base.
The basic pattern in more detail is:
1. Observe the behavior of the real system that is of interest.
2. Measure or otherwise record the states of the relevant variables.
3. Construct a specific control-system model in terms of the same variables.
4. Subject the model (in simulation) and the real system to the same
environmental disturbances.
5. Adjust details of the model to get the best fit of model behavior to the
real behavior.
6. Alter the experimental conditions, an easy way being to alter the
disturbance or add new ones. Leaving the model parameters at the values
found in step 5, repeat the run of the simulation with the model.
7. Run an experiment with the real system under the new conditions, and
compare the model's behavior (now a prediction of the real behavior) with
that of the real system.
By repeating this 7-step pattern, interspersed with periods of evaluation
and revision of the model, one can eventually arrive at a model that fits
the behavior under a wide range of circumstances without any change in its
characteristics.
You will find on careful examination that all of Rick's published models
follow this pattern, although some of the steps are carried out offstage,
as it were. Even my Little Man follows it, though you don't see anything
but the model's behavior. All during the development of the Little Man, I
was watching how I move my arms, and the Little Man models both contain
provisions for adjusting model parameters to get the best fit between model
and observation. What I was lacking there, of course, was the VERY
expensive equipment for accurately measuring how real arms move, instead of
just estimating the behavior. The physical parameters of the arm, such as
moment of inertia, were calculated from measurements of the dimensions of my
own arm and application of some basic physical laws. Limits of muscle
strength were estimated from seeing how much weight I could support at
arm's length. And so on -- measure what you can, calculate what you can,
and make educated guesses about the rest.
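A back-of-the-envelope calculation in that spirit: treat the forearm plus hand as a uniform rod pivoting at the elbow. The mass and length below are illustrative guesses of mine, not measured values from the Little Man work.

```python
# "Measure what you can, calculate what you can": approximate the
# forearm plus hand as a uniform rod pivoting at the elbow. The mass
# and length are illustrative guesses, not measured values.
forearm_mass_kg = 1.8      # assumed mass of forearm + hand
forearm_length_m = 0.30    # assumed elbow-to-knuckle length

# Uniform rod rotating about one end: I = (1/3) * m * L^2
moment_of_inertia = forearm_mass_kg * forearm_length_m ** 2 / 3.0
# roughly 0.054 kg*m^2
```

A value of this order is good enough to set the inertial load in a simulation; the fit of model to observation then tells you whether the guess needs refining.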
When you get into experimental explorations of the control-system model, I
strongly recommend sticking to this pattern at least as a general outline.
Remember that the point is not just to _explain_ an observed pattern of
behavior after it has happened, but far more important, to _predict_
behavior when conditions are changed. Explanations after the fact are
cheap. Predictions under changed conditions put a theory to a real grown-up
test.
Best,
Bill P.