[From Bill Powers (971211.1552 MST)]
Bruce Abbott (971211.1420 EST)--
Why do you want to eliminate the control systems to account for the initial
action with an open-loop system, and then re-install the control systems
immediately afterward to provide directional control?
There doesn't seem to be any directional control in the initial response to
a sudden stimulus-change of this type.
There isn't any directional control in the initial response of ANY control
system: it responds in the direction that errors cause its output to
change. If six control systems at once are suddenly disturbed, the initial
response is whatever direction is involved in all six of them responding at
once to sudden error signals.
What you're saying is that the fly's simple control system that is used to
launch it from a stationary platform can be used in only one way when it
becomes part of a way of escaping from a large swiftly-approaching object.
This makes the fly somewhat vulnerable to being swatted, but so what? It
needs that control system to take off and fly and there isn't any other
control system available. Maybe a larger smarter bug would have multiple
systems for launching it in any direction, and could start evading the
object sooner, but the fly doesn't have one. Does that make the fly into an
S-R system?
I don't see why you use "but" in saying "But the nature of this initial
action is completely stereotyped." The nature of the initial action by a
control system is _always_ completely stereotyped unless the system is
reorganizing. If I push on you while you're standing up, the nature of your
initial reaction is totally stereotyped; you lean _into_ the push. In fact
the reaction to any disturbance of a controlled variable is completely
stereotyped; its effects are equal and opposite to the disturbance.
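The "equal and opposite" point can be put in a minimal numerical sketch (my own illustration, not anything from the original exchange; the gains and names are arbitrary): a single control loop whose perception is the sum of its own output and a disturbance. Push in either direction and the output settles near the equal-and-opposite value -- the stereotyped opposition described above.

```python
def run_loop(disturbance, gain=50.0, leak=0.1, steps=2000, dt=0.01):
    """One control loop: perception = output + disturbance.
    The output function is a leaky integrator driven by error."""
    reference = 0.0
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance   # effects add in the environment
        error = reference - perception
        output += dt * (gain * error - leak * output)
    return output

# Disturb from either side; the action is "stereotyped" in exactly the
# sense described: whatever the sign of the push, the output opposes it.
for d in (+1.0, -1.0):
    print(d, round(run_loop(d), 3))
```

Note that nothing in the loop encodes a "direction of response"; the direction falls out of the sign of the error signal.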
To make this example equivalent to my fly example, you would have to assert
that, whether you push me from the left, right, front, or back, I will
always lean backward. I don't think you want to do this.
You happen to be able to control in two dimensions. You don't spread your
wings and fly away to avoid the push; does that make your 2-D response to
the push into a stereotyped S-R action?
All I'm saying is that under the circumstances, the fly does not have time
to begin evasive maneuvers, to adjust its actions on the basis of feedback.
For that short moment its only options are to leap or not to leap, and the
leap is completely ballistic. It's not that the directional control systems
"go away," it's that they don't have time to act.
More to the point, they don't have time to _change_ their action. Are you
saying that there is a completely separate parallel circuit hooked up
directly from sensors to muscles, which takes over when the control system
is unable to change its initial action fast enough? (And what makes you
think that would be any faster, if that is what you're saying?) If the fly
had two more control systems, so it could leap not only "up" but east-west
and north-south, it would probably use them. But it doesn't have them.
However, it does have a control system for controlling its distance from a
stationary surface. So it uses that one. If it didn't have that one (like a
caterpillar) it would just get squashed.
You also bring up the example of the fast typist or pianist. The
implication is that while there's a control system operating when the
finger movements are slow, somehow it disappears and is replaced by a
straight-through system when the movements are fast.
No, Bill, I didn't say or imply anything of the sort. On the contrary, I
explicitly stated that the lower-level control systems that move the fingers
to their required positions and apply the necessary forces can and do
control these things in the ordinary way. But the dynamic changes in
reference levels that specify these acts appear to be "scheduled" well ahead
of completion of immediately prior acts: the higher-level system doing the
"scheduling" isn't waiting for feedback from proprioceptive and other inputs
indicating the successful carrying-out of each keystroke before initiating
the next, and quite a number of them may be executed before the higher-level
system comparing specification to performance detects the error and halts
play.
The higher system doesn't control the lower sensations at all: it controls
higher-level variables. It's not controlling proprioceptive sensations.
When Art Tatum played a run at 90 notes per second, he controlled the speed
of the run and the shading of loudness as it proceeded, but the individual
changes in finger position reference levels were going much too fast to be
controlled, even though they were being produced by the very same control
systems that operated when the fingers were moved more slowly.
Look at the diagram. When you vary the reference signal faster than the
perceptual signal can keep up, the changes in reference signal are turned
directly into error signals which operate the outputs, as if the feedback
connection were not there. There is no new wiring, no different
organization involved. The output is indeed "ballistic," because the
feedback can't keep up with it. Every control system has a maximum speed of
operation for good control. You can force it to perform faster, but control
will not be as good. By the time Art Tatum realized that he had hit a wrong
note (as he often did when going at top speed), his hands would be half a
keyboard past the wrong note.
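The "reference changes become error signals" point can be seen in a toy simulation (a sketch of my own; the gain, frequencies, and time step are arbitrary assumptions): an integrating control loop tracking a sinusoidal reference. Below the loop's bandwidth the tracking error stays small; well above it, the error is essentially a copy of the reference itself, and the output is driven "ballistically" by the reference changes, just as described.

```python
import math

def worst_error(freq_hz, gain=20.0, dt=0.001, cycles=5):
    """Track a sinusoidal reference with a pure-integrator output
    function; return the largest |error| over the final cycle."""
    output = 0.0
    worst = 0.0
    steps = int(cycles / (freq_hz * dt))
    for i in range(steps):
        t = i * dt
        reference = math.sin(2 * math.pi * freq_hz * t)
        error = reference - output       # perception = output (no disturbance)
        output += dt * gain * error
        if t > (cycles - 1) / freq_hz:   # ignore the start-up transient
            worst = max(worst, abs(error))
    return worst

print(round(worst_error(0.5), 2))    # slow reference: good control
print(round(worst_error(20.0), 2))   # fast reference: feedback can't keep up
```

No rewiring is involved between the two cases: the same loop runs both times, and "ballistic" behavior is just what the loop does when driven past its bandwidth.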
It's not that the pattern of reference signals gets "scheduled ahead." It's
that the output pattern generator runs as usual, and the lower-level
control system runs behind. It runs so little behind, however, that the
system perceiving and controlling the pattern through the generator can't
perceive the lag, and anyway that system will still perceive the right
pattern over some range of mistakes at the lower level that come and go too
fast to be perceived at the higher level.
Please note, also, that you have to be a pretty good pianist to execute
these runs, meaning among other things that you are able to both execute
actions and hear the results faster than ordinary mortals can. Dinu
Lipatti, who had forearms like steel cables, was heard to mutter (in my
pig-French) "Droight de macaron!" (macaroni-fingers) after executing a
rapid passage that his listeners thought dazzlingly perfect. My friend Sam
Randlett, who taught concert pianists, could repetitively raise and lower
the little finger of his left hand on the back of my hand hard enough to
hurt. Or soft enough so I could barely feel it -- and just as fast, or so
it seemed to me.
I'm not arguing otherwise. But you appear to be making my case for me. If
there is no time to correct an error when it occurs, then the scheduling of
keys to press must be going on independently of immediate feedback as to the
successful completion of the previous keystroke(s). What is generating this
sequence of references (somehow)?
It's called an "event control system" or maybe a "sequence control system."
It's a system that perceives aspects of ongoing patterns, adjusting an
output pattern generator to control the perceived pattern. It can correct
perceived errors in the pattern by adjusting the pattern generator, but
this happens considerably more slowly than the individual changes in the
output pattern occur.
I don't know what case you're trying to make, Bruce. It seems to me that if
you took HPCT seriously, you'd be answering your own questions, not tossing
these examples up as if they somehow invalidated control theory.
The rate at which the program is executed
is controlled (as are certain other properties), and the performance is
being _monitored_ for accuracy of execution (inputs compared to stored
perceptions), but monitoring is not the same as control.
Monitoring is the essential aspect of control. You can control only what
you can "monitor" (i.e., perceive).
Once a mistake has
been made while playing a piece, it cannot be corrected. Errors can lead to
control action (interruption of play, repetition of the bar in which the
error occurred) but this is a different matter.
Yes, a matter concerning the higher-level control systems that are
controlling what they _can_ perceive in real time. This is true of any
control system. If you vary the reference signal faster than the control
system can maintain control, the perceptual signal will start to differ
significantly from the reference signal -- the system will "make mistakes."
And there is nothing that can be done about them once they have occurred
but to go back and do it again, if possible -- and slower, if you're smart.
Bruce, all the examples you bring up, and many more I have heard, all seem
to be designed to preserve some vestige of the old stimulus-response or
planned-output models of behavior, and show that control systems don't work
in those situations.
What I've done here is not to "think up reasons to reject" the HPCT model,
but to question the model. I want to know how such observations are
explained under HPCT. The fact that a system controlling at the upper end
of its bandwidth makes "mistakes" may explain the lower-level errors, but
for me at least, it does not explain how the required reference
manipulations are organized, or how such organized output is the result of
control action rather than a preorganized sequence being executed open loop
with respect to the keystrokes themselves.
The output function -- all output functions -- operate open-loop. They
convert errors into changes in reference signals for lower systems. Higher
systems that perceive and control dynamic patterns necessarily convert
their error signals into temporal patterns of output; this can't be done
with static (simple proportional or integrating) output functions like
those that would work at the configuration or lower levels. The output
pattern generators produce approximately the right kind of patterns; the
variations in the driving error signals see to it that the _perceived_
pattern is as close to what is desired as possible, even when disturbances
are present.
What is decidedly missing from
your reply is the requested explanation as derived from HPCT. I'm not
looking for reasons to reject, but I'm certainly interested to know whether
it can handle such common observations. Why should I be any less critical
of HPCT than you are of other theories?
I've said all these things before. I think they're implicit in HPCT. If you
simply treated these phenomena as problems and looked for the obvious
control-theoretic solution, we wouldn't be having this discussion. All
these examples and objections were dreamed up 30 or 40 years ago, not by
people who were simply curious as to how they could be explained, but by
people with an actively hostile hope of eliminating control theory from any
need to be considered. This "feedback is too slow" argument wasn't dreamed
up yesterday or a decade ago. When engineers talked about "hunting" in a
control system (before the days when that became a rarity), those who
wanted to do away with control theory seized on it as a reason to reject
the whole idea. When engineers remarked that long delays in a control
system could make it unstable (if uncompensated), the hostile listeners
took this to mean that feedback is too slow to work when there are any
delays at all, and furthermore that nothing could be done about it -- both
completely erroneous ideas. Erroneous -- but very useful if your aim is to
remove a threat.
You could say that these were simple errors of understanding. But most
people, when faced with a new idea that they aren't immediately equipped to
grasp, will avoid trying to act like experts and shooting their mouths off
about things they know they don't understand yet. For one thing, they know
that if they start objecting and obstructing before they know what's what,
their behavior will become suspect; people will conclude that they're
trying to protect something, such as their images as experts. The old
hidden agenda. I've seen plenty of that in my long experience with
psychologists resisting the idea of control theory. I'm very tired of it,
tired to death. I don't want to deal with it any more.