[From Bill Powers (930914.1415 MDT)]
Michael Fehling (930914.1028 PDT) --
Yes, theory defines the data that one wants to take. When you're
starting from scratch, however, the theories aren't formal.
Control theory started from observing behavior informally and
noting that people seem to do something that we call
"controlling." They act on certain aspects of their environments
and force them to apparently preselected states, keeping them in
those states despite disturbances. Noticing control was the first
theory, in that it brought in an organizing concept that gave us
a new way to look at the details involved in behavior. A more
formal version of this theory resulted when engineers decided to
build electromechanical devices that could do the same kind of
thing in a similar way (involving sensors, actuators, and signal-
handling circuits). Then, of course, we took this theory and
turned it back on its original subjects, human beings doing
things, and used it to bring still more order into observations
of the details of behavior. This theory tells us to look for
things that would not be given any special status under other
theories: variables affected equally and oppositely by
disturbances and by the active system's outputs, for example.
Those variables, precisely because they are stabilized by
behavior, would be discarded by any methodology that tries to
find high correlations between causes and effects. So those
theories, in effect, say DON'T look for certain data because it's
meaningless. This is one reason that it has proven almost
impossible to use control theory to reinterpret data previously
used under another theory. The right data was simply not
recorded.
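The point about equal-and-opposite effects can be illustrated with a
minimal simulation (my sketch, not anyone's published model; the names
run_loop, qi, and the gain and step values are all assumptions). A
simple integrating control loop holds its input variable near the
reference while its output mirrors the disturbance, so disturbance and
controlled variable end up nearly uncorrelated:

```python
import math

def run_loop(steps=2000, gain=50.0, dt=0.01, ref=0.0):
    """Simulate a one-level integrating control loop against a
    slowly varying disturbance; return (d, qi, o) per step."""
    o = 0.0                      # system output
    hist = []
    for t in range(steps):
        d = math.sin(t * dt)     # slowly varying disturbance
        qi = o + d               # controlled variable: output + disturbance
        e = ref - qi             # error signal
        o += gain * e * dt       # integrating output function
        hist.append((d, qi, o))
    return hist

hist = run_loop()
d, qi, o = hist[-1]
# After transients die away, o tracks -d closely, so qi stays near ref
# even though d swings over its full range.
```

A cause-effect methodology correlating d with qi here would find almost
nothing, precisely because the loop is working.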
On this last note, let me remind you of a bit of history. Back
in the math modeling days, the "one-element" model was used to
predict some features of verbal learning experiments. However,
the real contribution of this model was not in the first- or
second-order facts that it described. (These, after all, were
the data that the modelers were driven by in formulation.)
In fact, the model was sufficiently rigorous that one could
derive predictions about relatively complex
_contingent_sequence_effects_ that, prior to model analysis,
no one had thought to look for. They were found.
I receive such claims with extreme skepticism. When you say "they
were found," WHAT were found? Were precisely predicted
"contingent sequence effects" found in every subject, to exactly
the predicted degree? Or was this a statistical study, in which
multiple data sets were combined to show that there was a
statistically significant trend in the direction indicated by the
hypothesis? If the latter, chances are that I wouldn't accept the
result as supporting any theory, because the number of
counterexamples to be found in the data, person by person or
trial by trial, would be far from insignificant. I take
counterexamples very seriously. To me they say that something is
basically wrong with the theory. A theory should explain ALL of
the data, not just some of it some of the time.
As for your "anchor points in observation" of higher level
phenomena, I am suggesting the competence methodology as
playing that role. In particular, competencies give
_invariants_ of higher order behavior that are approximated in
performance.
This also rouses that nasty suspicious demon in my head. Another
way to say "invariants of higher order behavior that are
approximated in performance" is to say "performance that is only
approximately describable as a higher-order invariant." Which do
we treat as the exact observation: the observed performance, or
the theory that is being fit to it? I say that it is the observed
performance that shows us exactly what actually happened; the
theory has to be altered until it predicts what was in fact
observed. If the proposed invariants do not appear exactly as
predicted in observed behavior, then the theory from which those
invariants are predicted is incorrect. Maybe the invariants
aren't actually invariants of behavior, even though they may be
invariants of the theory.
When I talk about an anchor point in observation, I mean a match
between the model and the real behavior within very narrow error
limits -- ideally, within the limits of measurement. That's what
the anchor points of physical theories are like, and I accept
that as my goal in theorizing about behavior. If we're going to
build a new science of life, it has to be anchored extremely
well. If we sacrifice that requirement, all we do is lower the
standards and add one more dollop of mush to the art of
psychology.
Finally, in re wheels again, I fully understand the role of a
static analysis of control. But, as you showed in BCP, it's
necessary but not sufficient. Part of the job of control
organization must inevitably be in getting the time constants
right. Dynamics must be modeled to understand this. Statics
_assumes_ stability. Reorganization must _achieve_ it.
Perhaps you overlooked my justification for the quasi-static
analysis given in "Quantitative analysis of purposive systems."
The quasi-static analysis is a simplification that is permissible
when you know that the differential equations that actually
describe the system converge rapidly to a steady state. You can
know this on two kinds of grounds: first, that the actual
differential equations are solvable, have been solved, and show
that transients quickly die away; and second, that the real
system being described is in fact free of protracted transient
behaviors after a perturbation.
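The rapid-convergence condition is easy to check numerically. Here is a
sketch (my own, with assumed names and parameter values) that Euler-
integrates the loop equation do/dt = g*(r - (o + d)) after a step
disturbance; the error decays exponentially with time constant 1/g,
which is what licenses treating the system quasi-statically:

```python
def settle_time(g=10.0, d=1.0, r=0.0, dt=0.001, tol=0.05):
    """Time for the error to fall within tol of its initial value
    after a step disturbance d, by Euler integration."""
    o, t = 0.0, 0.0
    e0 = r - (o + d)                 # initial error after the step
    while abs(r - (o + d)) > tol * abs(e0):
        o += g * (r - (o + d)) * dt  # do/dt = g * error
        t += dt
    return t

t_settle = settle_time()
# Roughly ln(1/tol)/g = 3/g = 0.3 s for g = 10: transients die away in
# a few time constants, so the quasi-static analysis applies.
```

If the measured settling time were long compared with the time scale of
the disturbances, the simplification would not be permissible.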
In fact, the basic model we use for analyzing simple tracking
behavior is a first-order integral equation (which,
differentiated, becomes a differential equation). All parts of
the system so modeled are simultaneously active in the way I've
been describing. Our simulation programs are deliberately set up
to compute the activities of each component effectively in
parallel, not in sequence. Simcon, the control-system simulation
program written by Wolfgang Zocher, has this same property: all
components are computed as if simultaneously active, by giving
special attention to the order in which calculations are done.
All calculations are completed before the "old" values of
function outputs are replaced by "new" values for the next
iteration.
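The old-values/new-values bookkeeping can be shown in a few lines. This
is only a sketch of the scheme as described above (the structure and
names are assumed, not Simcon's actual source): every component's new
output is computed from the old outputs, and the old values are replaced
only after all components have been computed, so all components act as
if simultaneously:

```python
def step(state, d, g=50.0, dt=0.01, r=0.0):
    """One synchronous iteration: read only 'old' values, then
    commit all 'new' values at once."""
    qi_new = state["o"] + d                    # input function
    e_new = r - state["qi"]                    # comparator
    o_new = state["o"] + g * state["e"] * dt   # integrating output
    return {"qi": qi_new, "e": e_new, "o": o_new}

state = {"qi": 0.0, "e": 0.0, "o": 0.0}
for _ in range(1000):
    state = step(state, d=1.0)
# With a constant disturbance of 1.0 the output settles near -1.0 and
# the controlled variable near 0.0.
```

Updating components in sequence instead (each reading its neighbor's
already-updated value) would smuggle in an ordering the real, parallel
system does not have.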
The static analysis we often use in discussions is merely a way
of summarizing the basic relationships in the loop without
getting into the complication of dynamics. The actual models are
based on continuous differential equations with dynamics taken
into account except for external dynamic effects like mass. Those
effects are taken into account in Little Man Version 2, but are
unnecessary in the tracking models because the un-modeled lower-
order systems eliminate almost all the effects of physical
dynamics. In fitting the models to behavior, we do adjust the
time constants. In our fussiest modeling we adjust both time
constants and transport lags for best fit to the data. These
parameters are then left unchanged in using the model to predict
the behavior of the real system under new dynamic conditions --
in some cases, conditions involving new kinds of disturbances
that were not present during the original experiment, and in all
cases at least a new randomly generated pattern of a former
disturbance.
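The fit-then-freeze procedure can be sketched as follows. This is an
illustration under stated assumptions, not our actual fitting code: the
model, the grid search, and the "subject" (here itself simulated, with
known gain 8 and transport lag 6) are all stand-ins. The point is only
the shape of the procedure: adjust gain (inverse time constant) and lag
for best least-squares fit, then hold them fixed for prediction:

```python
import math

def simulate(d_series, gain, lag, dt=0.01):
    """One-level tracking model: integrated error with a transport
    lag of `lag` iterations; returns the handle-position series."""
    o = 0.0
    buf = [0.0] * lag                 # delay line for the error signal
    out = []
    for d in d_series:
        e = -(o + d)                  # error, reference = 0
        buf.append(e)
        o += gain * buf.pop(0) * dt   # act on the lagged error
        out.append(o)
    return out

# Stand-in for a real subject's record, generated with gain=8, lag=6.
d_series = [math.sin(0.01 * t) for t in range(2000)]
observed = simulate(d_series, gain=8.0, lag=6)

# Grid search for the (gain, lag) pair minimizing squared error.
best = min(
    ((g, L) for g in range(4, 13, 2) for L in range(2, 11, 2)),
    key=lambda p: sum((m - y) ** 2
                      for m, y in zip(simulate(d_series, *p), observed)),
)
# best recovers (8, 6); those values are then frozen and the model is
# run against a new disturbance pattern to test prediction.
```

In real use the observed series comes from the subject, not from the
model, and the fit is judged by how well the frozen parameters predict
behavior under disturbances never used in fitting.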
All this is quite aside from reorganization. Reorganization would
be used to explain how a control system comes to have the dynamic
properties that it has when we observe and model it. We have not
yet tried to compare models of reorganization with real people
learning a new tracking task for the first time. Our quantitative
models apply only to well-practiced behavior. A real study of
reorganization would be difficult, because you need a pool of
truly naive subjects, which is hard to find for simple tasks. And
you have to record every bit of data from the first exposure to
the experiment: unless you had the procedures very well
organized, it would be exceedingly easy to screw up the data. We
just don't have the resources to tackle such a project. Yet.
----------------------------------------------------------------
Best,
Bill P.