Still Thinking of You

Chris:

It seems like a couple of weeks must have now passed since your return to
UIUC to find your e-mailbox stuffed with messages I sent your way from the
Control Systems Group Network. I can understand that you must be very
busy, but I hope you haven't completely forgotten about them and will
eventually be able to provide a response.

I must admit that I have not been terribly successful in getting motor
control and human factors researchers to interact with Powers and company
on CSGnet. So when the name of a UIUC professor (who also happens to be my
next-door neighbor's advisor!) came up, I thought that I might finally get
some informative interaction between some of the more radical
negative-feedback aficionados of CSGnet and someone with an apparently more
eclectic approach to human behavior.

I must also admit that I find the arguments and computer-modelling research
of the closed-loop gang quite impressive and therefore find it hard to
understand why their approach to modelling behavior as the control of
perception is not more widely known and used in mainstream psychology,
particularly in the motor control and human factors areas.

Again, as I said last time, there is no big hurry about this. But I would
be quite disappointed if I were unable to obtain any response from you on
these issues.

And just to show you that the open-loop vs. closed-loop discussion is still
alive and strong on CSGnet (and to help keep your e-mailbox full), here is a
recent exchange between Bill Powers and Martin Taylor.

--Gary

···

===========================================================
[From Bill Powers (931112.2330 MST)]

Martin Taylor (931112.1750) --

I would think that *a priori* the question is to ask whether
something COULD be done open loop, and invoke feedback control
only if it can't.

Interesting: exactly the opposite of what I would have said!

Your proposal rests on the assumption that accomplishing any
given behavior open-loop is somehow simpler, faster,
computationally or energetically less expensive, evolutionarily
more likely, or just better, than accomplishing it with a closed-
loop control system. From all the proposals I have seen
concerning open-loop models of behavior, the opposite seems to be
the case: what looks easy to do open-loop proves difficult to handle
with an open-loop model. And what is hard to handle open-loop
usually turns out to be simple for a control system of
_relatively_ simple construction to handle.

If the simplest assumption is that a given behavior is that of a
control system as I claim, then the open-loop model must always
be the second choice, for it requires far more complex machinery
to achieve the same results. It's very easy to say that simple
well-learned behaviors are carried out as a blind response to a
command, but if you look REALISTICALLY at what is entailed in
obeying the command -- exactly what it is that must be learned --
you find that there is simply no plausible kind of preset command
that can have the observed effects in the kind of environment
that actually exists, using the kind of neuromuscular apparatus
that organisms actually have. Descriptions of open-loop behavior
sound simple and straightforward, but that is only because they
gloss over the difficulties: they avoid the question of HOW the
outcome is produced. They simply assert that it IS produced.
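The contrast can be sketched with a toy simulation. This is an
illustrative model, not anything from the exchange: the integrating
output function, the gain, and the single additive disturbance are
all assumed for the sake of the example.

```python
def closed_loop(reference, disturbance, gain=5.0, dt=0.01, steps=1000):
    """Control system: the output is driven only by the perceived error."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance  # the system senses the outcome itself
        error = reference - perception
        output += gain * error * dt        # integrate the error signal
    return output + disturbance            # the controlled outcome

def open_loop(reference, disturbance):
    """Preset command: correct only if the disturbance was predicted."""
    command = reference                    # computed assuming no disturbance
    return command + disturbance           # the outcome drifts with the disturbance

print(closed_loop(10.0, disturbance=3.0))  # ~10.0: reference reached anyway
print(open_loop(10.0, disturbance=3.0))    # 13.0: misses by the disturbance
```

The closed-loop system needs no knowledge of where the disturbance
came from; the open-loop command would need a sensor and a
quantitative model for every such influence.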

Now your preceding sentence:

Forget for the moment that (almost?) all behaviour occurs in a
closed-loop structure, and ask whether in a particular
situation an open-loop path (if it could work) is simpler or
more complex than a closed loop.

As I said above, I claim the open-loop system is more complex if
it is actually capable of producing what we observe. But you
raise another point: could not behavior that is normally closed-
loop change into open-loop behavior when the situation permits --
for example, when disturbances are absent?

I reject this idea on grounds of parsimony. If, for example,
reaching out to pick up an erratically moving bug requires
closed-loop control of the relationship between hand and bug, we
would necessarily propose a control-system model capable of doing
that. If the bug then stops moving, are we to say that the
control system ceases to have physical existence, and is
instantly replaced by an open-loop system of totally different
organization? Are we to replace the closed-loop system even
though THE SAME CONTROL SYSTEM CAN JUST AS EASILY, AND WITH NO
CHANGE OF ORGANIZATION, pick up the stationary bug?

To me, a control system is a physical organization brought about
by a slow process of reorganization (where it is not inherited
intact). Neurons become committed to certain pathways and
connections, with the final result being a physical control
system. To dismantle this organization and replace it by a
different one would require either a great deal of time and
experimentation, or a very complex superordinate switching system
connected both to the neurons that would serve as a control
system and to those that would carry out the necessary functions
for an open-loop organization. Whatever imaginary gains might be
realized by using the open-loop system, they are more than
cancelled by the complexity of the system needed to detect the
need for the changeover and effect it.

The functions of an equivalent open-loop system are necessarily
different from those of a closed-loop system. The feedback
signals in the closed-loop system do most of the shaping of the
output waveforms; without the feedback, those same waveforms must
be generated somehow without the use of feedback. The feedback
control system deals with external disturbing influences in the
simplest possible way: it monitors the controlled outcome itself
and cares nothing for what caused the fluctuations. The open-loop
system must be provided with sensors for every possible external
influence, and must be capable of anticipating their effects with
quantitative accuracy, at the same time taking into account
possible variations in the properties of the effectors
themselves, and the physical laws that intervene between the
effectors and the outcome being produced.
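The point about variable effector properties can also be put in toy
form (again an illustrative sketch with assumed gains, not a model
from the post): think of the effector gain as muscle strength that
has changed since the command was learned.

```python
def run_closed(reference, effector_gain, loop_gain=5.0, dt=0.01, steps=2000):
    """Closed loop: only the outcome is monitored, so effector changes are absorbed."""
    command = 0.0
    for _ in range(steps):
        outcome = effector_gain * command  # the effector scales the command
        error = reference - outcome        # sense the controlled outcome itself
        command += loop_gain * error * dt
    return effector_gain * command

def run_open(reference, assumed_gain, actual_gain):
    """Open loop: the command is computed from an internal model of the effector."""
    command = reference / assumed_gain
    return actual_gain * command           # wrong whenever the model is stale

print(run_closed(10.0, effector_gain=0.5))                # ~10.0 despite the weak effector
print(run_open(10.0, assumed_gain=1.0, actual_gain=0.5))  # 5.0: misses the target
```

The closed-loop version never represents the effector gain anywhere;
the open-loop version must know it exactly, in advance, forever.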

If a given kind of behavior is clearly best modeled by a control
system, then we should seriously question switching to a
different model just because disturbances happen, for the moment,
to be absent. The ONLY time we should seriously consider an open-
loop explanation is when we see behavior being carried out in
circumstances where a diligent search fails to reveal any
feedback pathways. If we assume that the default model is the
control system, then we don't need to invoke any special
superordinate switching to explain why the system appears ready
to oppose disturbances no matter when they appear, or how often.

Is control ALWAYS required? I think we have to say that if it is
EVER required, it is ALWAYS required, because there is no way to
predict when a momentarily disturbance-free environment will
return to normal. Only in cases where feedback is clearly absent
should we fall back on an open-loop model.
------------------------------
I think there is a deeper problem here. It stems from the fact
that the first model to have a large influence in the life
sciences was an open-loop model. That model has been around for
so long that most scientists consider it "normal." That is, any
deviation from that model is considered suspect, a special case,
something that is to be viewed with suspicion.

We know now that in most cases of real behavior, that open-loop
model fails miserably. Yet the proposal that control behaviors
should be modeled with control systems still seems like a special
case to many people, as if it is a departure from normal simple
explanations, something more complex than the usual input-output
explanation. It seems that the causal model is more natural,
somehow. But it is not. It is only more familiar.

One consequence of considering the cause-effect model the
simplest and most natural one is that behavioral scientists have
ceased to think of behavior as something precise. They can say,
as you have said, that at least an open-loop behavior might get
you closer to the desired outcome. Behavioral scientists have
become accustomed to predictions that are extremely fuzzy and
unreliable except over masses of observations. They therefore
don't think it unusual that organisms would be just as satisfied,
in general, with a move slightly toward a goal instead of
actually reaching it. As a result, behavioral science in general
has failed to notice all the extreme regularities of behavior,
the incredible skill with which organisms manage to control what
happens to them, to make the world be as they want it to be. They
simply don't see these regularities; they look instead for the
obscure effect, the slight tendency, the trend over a population.
And for every such slight regularity they are able to see, a
thousand highly repeatable and reproducible consequences of
behavior take place under their noses, unceasing and unremarked.

Suppose we were to start with the control-system premise instead
as being the simplest and most natural. We would expect behavior
to have highly regular effects, once we learned what to look for.
We would consider an organism that consistently failed to reach
its goals as primitive or damaged. If we saw an organism, even a
human being, using methods of control that were unreliable and
imprecise, we would conclude that this organism was physically or
mentally handicapped, or caught in some reorganizational impasse.
As control theorists, we wouldn't excuse the obviously poor
control by saying it is the best that can be done. We would ask
why the unreachable goal is still being sought, why the
ineffective method is still being used, despite clear
demonstration of its failure. In short, we wouldn't be looking
for a theoretical way to justify the status quo; we would see it
as pathology.

From this standpoint we would see open-loop behavior as the
special case, the abnormal situation. We would see failure to
control precisely as indicating something wrong, not as further
illustration of the theory. We would not be looking for the
subtle, the hidden, the occult, but for the obvious. And if we
failed to understand the behavior we saw, we would blame the
failure not on some inherent unpredictability of organisms, but
on our own ignorance.
-------------------------------------------------------------
Something Mary said, and some of Tom Bourbon's remarks, brought
to mind a very simple explanation of how a control system might
continue to be used despite losing its feedback signal. All we
have to remember is that there are higher-level systems, too. The
output of the higher-level system would still vary the reference
level for the system that has lost its feedback, and the output
would still affect systems lower in the hierarchy. The main
difference is that the loop gain of the higher system would have
greatly increased, because now the entire reference signal is the
error signal in the lower system, instead of only the difference
between the reference signal and the perceptual signal.

The higher system, which also receives inputs from other lower-
level systems, would still operate, but it would soon learn to
reduce the gain of the signal going to the system that had lost
its feedback. So we would see what is observed in deafferentation
experiments: an initial exaggeration of behavior, followed by a
reduction in gain and a restoration of more normal behavior.

In cases where feedback is regularly lost, as when we move from
light to dark areas, this reduction in gain might become part of
the higher control process, and take place almost immediately
when the lower-level feedback is lost. The loss of control
disrupts behavior at one level, but the next level up adjusts to
minimize the effect of the loss. Nothing but closed-loop behavior
ever occurs.
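The gain argument can be made concrete with a one-function sketch
(the gain value and signal numbers are illustrative assumptions):

```python
K_L = 10.0  # lower-level loop gain (assumed value)

def lower_output(reference, perception, feedback_intact):
    """Output of the lower system: gain times its error signal."""
    error = reference - (perception if feedback_intact else 0.0)
    return K_L * error

# Intact: the perceptual signal nearly matches the reference,
# so the error signal (and hence the output) stays small.
print(lower_output(1.0, 0.95, feedback_intact=True))    # ~0.5

# Feedback lost: the ENTIRE reference becomes the error signal,
# so the same reference drives an exaggerated output.
print(lower_output(1.0, 0.95, feedback_intact=False))   # 10.0

# The higher system compensates by attenuating the reference it
# sends down -- a gain reduction -- restoring normal output.
print(lower_output(0.05, 0.95, feedback_intact=False))  # ~0.5
```

This reproduces the sequence seen in deafferentation: the same
downgoing signal that once produced a small correction suddenly
produces a large output, until the gain above it is turned down.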
------------------------------------

If you know where "you" are, that's one perception. Knowing
where "it" is is another, which can come only from the model.

Both come from the model: where you are is known only relative to
where it is. You and it are parts of the same model. The space in
which you maneuver either in the light or in the dark is not
visual space, but a constructed space to which all relevant
perceptions contribute a part. Vision, kinesthesia, touch,
temperature, and sound all contribute to placing objects
(including your body) in this amodal space. That's where the
Little Man model would eventually go, if I were to live so long.
-------------------------------------

I agree with you, and disagree with that also. I'd hate to
think that there was *nothing* in your posting for me to
disagree with. How boring!

Well, this post should prove more interesting.
-------------------------------------------------------------
Best,

Bill P.