Adaptive control and PCT

[From Bill Powers (931125.0730 MST)]
Happy USA Thanksgiving, as appropriate.

Hans Blom (931124b) --

I want to transmit some of the findings of the science that
calls itself "adaptive control theory", in the hope that some
of its aspects may be helpful to you and provide you with an extra
perspective on things that PCT may leave in the dark (shadow?).

I have no doubt that adaptive control theory will illuminate many
dark places. I have been more concerned with architecture than
with many of the concepts you mention. One of my hopes has always
been that real control theorists, once they grasp the
architecture, will put their giant brains to work on the problem
of human behavior from this point of view. Your willingness to do
so is welcome.

There are some departures of PCT from ACT that are apparent only;
they arise from a different approach to analyzing behavior -- a
different way of constructing a model to do the same things. I
will give an example below.



In my diagram below, I do not presuppose a hierarchy but
parallel (vector) processing.

The reason I use a hierarchical model is that the human organism
appears to be organized hierarchically, at least at the lower
levels and quite plausibly at the higher levels, too. Consider
the "spinal reflexes," which are physically organized as control
systems. The shortest loop is the tendon reflex. This reflex was
once thought to exist merely to limit muscle forces, but it is
now known that the composite tendon signal begins to rise when
force levels are 0.0001 kilogram (compared with several hundred
kilograms of maximum force applied to a tendon). This control
loop makes the force sensed at the tendon accurately track the
net reference signal entering a spinal motor neuron, even when
muscle sensitivity to driving signals changes. In limbs, muscle
forces are applied across joints to create a couple, so the
tendon reflex is a torque servo (regulator?). It is somewhat
nonlinear because mechanical advantages change as joint angles
change. The nonlinearities tend to cancel changes in moment of
inertia as joint angles change.
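To make the loop concrete, here is a minimal sketch in code. This is
entirely my own illustration, not anatomy or any published model: the
integrating controller, gains, and names are assumptions. The point it
shows is the one above: the sensed tendon force tracks the reference
signal even when muscle sensitivity to the driving signal changes.

```python
# Illustrative sketch (not from the Little Man model): a discrete-time
# force servo. The comparator and integrating output stand in for the
# spinal motor neuron; "muscle_gain" is the muscle's (variable)
# sensitivity to its driving signal.

def force_servo(ref, muscle_gain, steps=200, k=0.1):
    """Drive accumulates the force error; force = gain * drive."""
    drive = 0.0
    force = 0.0
    for _ in range(steps):
        error = ref - force          # reference minus sensed tendon force
        drive += k * error           # integrating output function
        force = muscle_gain * drive  # muscle converts drive to force
    return force

# Sensed force converges on the reference for widely different gains:
for gain in (0.5, 1.0, 4.0):
    print(round(force_servo(10.0, gain), 3))  # each prints 10.0
```

The loop's output automatically compensates for the gain change: a less
sensitive muscle simply ends up with a proportionally larger driving
signal, with no recalibration computed anywhere.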

The next loop involves the muscle length receptors, which are
mechanically biased by gamma efferent reference signals operating
polar muscles in the spindles; these receptors combine sensing
and comparison (the comparison is mechanical). The resulting
error signal enters the spinal motor neuron in the excitatory
sense, and contains a large rate-of-change component. The
remainder of the loop inverts the effect to produce negative
feedback; the strong rate feedback component puts in damping.
These two reflex arcs together enable reference signals
descending the spinal cord to produce very rapid (less than 0.1
sec) step-changes in angular limb position without oscillations,
effectively taking care of limb dynamics and making the overall
position response look like a simple fast exponential. This
system can produce reasonably fast response over a range of mass
loads of 20:1 or greater. It is an approximate position (angle)
controller, which operates by varying the reference signal for
the torque servo.
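The role of the strong rate-of-change feedback can be shown with a toy
second-order simulation. This is my construction, not the model's code;
the mass, gains, and time scale are arbitrary assumptions. With no rate
feedback a step in the reference makes the "limb" overshoot and
oscillate; with rate feedback the response settles quickly and smoothly.

```python
# Toy illustration: position loop on a unit mass, with and without
# rate (velocity) feedback standing in for the spindle's strong
# rate-of-change component.

def step_response(k_rate, k_pos=100.0, mass=1.0, dt=0.001, t_end=0.5):
    x = v = 0.0
    ref = 1.0          # step change in the position reference
    peak = 0.0
    for _ in range(int(t_end / dt)):
        force = k_pos * (ref - x) - k_rate * v  # error minus damping term
        v += (force / mass) * dt
        x += v * dt
        peak = max(peak, x)
    return x, peak

x_u, peak_u = step_response(k_rate=0.0)
x_d, peak_d = step_response(k_rate=25.0)
print(round(peak_u, 2))  # without rate feedback: large overshoot, oscillation
print(round(peak_d, 2))  # with rate feedback: no overshoot at all
```

The damping is supplied by the feedback signal itself, not by any
computed model of the limb's dynamics.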

When opposed muscles are involved, the square-law response of
muscles to driving signals results in the ability to vary loop
gain simply by raising the reference tension in opposing tendons
by the same amount (muscle tonus). Over the middle of the range
of unbalanced forces, the combined muscle response remains
linear, the opposing parabolic nonlinearities canceling. So we have
a system for making limb angle, as measured by muscle length,
proportional to a reference signal, with loop gain being variable
through common-mode reference signals sent to opposing muscles
and setting total scalar sensed force.
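The square-law cancellation is easy to verify algebraically: if each
muscle's force goes as the square of its driving signal, the net force
of an opposed pair is (t+d)^2 - (t-d)^2 = 4td, exactly linear in the
difference signal d, with gain proportional to the common-mode tonus t.
A few lines (my sketch, with arbitrary numbers) make the point:

```python
# Square-law muscle pair: net force is linear in the difference
# signal, with loop gain set by the common-mode level (tonus).

def net_force(tonus, diff):
    agonist = (tonus + diff) ** 2      # square-law muscle response
    antagonist = (tonus - diff) ** 2
    return agonist - antagonist        # algebraically equals 4 * tonus * diff

for tonus in (1.0, 2.0):
    print([net_force(tonus, d) for d in (0.0, 0.5, 1.0)])
# prints [0.0, 2.0, 4.0] then [0.0, 4.0, 8.0]:
# doubling the tonus doubles the gain, and the response stays linear.
```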

The combined tendon-stretch system does not have high overall
loop gain; its principal function seems to be to nullify arm
dynamics, allowing fast near-proportional control.

The joint angles are also directly sensed, the signals ascending
to the brain stem where they join descending "command" (actually,
reference) signals in brainstem motor nuclei, with an excitatory
effect (the position reference signals are inhibitory). The
difference or error signals descend into the spinal cord, where
they vary both the alpha and gamma reference signals for the
spinal control systems. In this descending path there are neural
integrators, making this third loop very accurate but somewhat
slower. This is the main source of proprioceptive body
configuration control. When a step-disturbance is applied to a
limb, the lower two systems account for the initial fast (but
approximate) resistance, while the third level accounts for the
final correction that occurs soon after (for small disturbances,
within 0.16 sec).
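The division of labor described above -- a fast but approximate inner
loop, trimmed by a slower loop with an integrator in its output path --
can be sketched in miniature. This is my toy illustration, not the
actual circuit; the plant, gains, and disturbance are assumptions.

```python
# Two-level cascade: the inner proportional loop alone leaves a small
# residual error under a constant disturbance; the outer loop's
# integrator adjusts the inner reference until the error vanishes.

def settle(outer_on, ref=1.0, disturbance=0.3, gain=10.0, k_i=2.0,
           steps=5000, dt=0.01):
    inner_ref = ref
    x = 0.0
    for _ in range(steps):
        x += dt * (gain * (inner_ref - x) + disturbance)  # fast inner loop
        if outer_on:
            inner_ref += dt * k_i * (ref - x)  # slow integrator trims reference
    return round(x, 3)

print(settle(outer_on=False))  # prints 1.03: fast but approximate
print(settle(outer_on=True))   # prints 1.0: slower, but accurate
```

The outer system never acts on the world directly; it only varies the
reference signal of the system below it, which is the hierarchical
arrangement being described.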

The above functional and anatomical facts come from my readings
in the literature, some of which may be out of date, and from
performance of the model when optimally adjusted.

These three levels are modeled in the Little Man model, version 2
(except for the square-law muscles). Step-changes in the
reference signals for the third level produce the limb movements
we compared with those of real subjects making fast movements.

So the first three levels of control conform to the architecture
of the HPCT model (or rather, the model conforms to the actual
functional and anatomical arrangement of control systems). This
arrangement could, of course, be represented mathematically as a
single very complex transfer function involving vector quantities
and matrix operators, but that would defeat the purpose of the
HPCT model, which is to be both functionally and, as far as
possible, anatomically correct.

Notice that there is no separate provision for compensating arm
dynamics. The very simple hierarchical arrangement automatically
takes care of all dynamical effects, including Coriolis effects
and changes in moments of inertia. There is no need to compute
inverse kinematics or dynamics to produce the required driving
signals. These levels of control do not require any internal
models, and indeed if feedback is lost they simply fail,
immediately and sometimes dangerously, because of the great
increase in the ratio of motor output to reference input. Through
learning (as after deafferentation), higher systems can
compensate by reducing the driving signals, but the limb without
proprioceptive feedback remains dynamically unstable and highly
susceptible to mechanical disturbances. Learning to compensate
for loss of feedback takes several weeks, and the original degree
of skill is never again approached.

The Little Man model includes a fourth level of control, a visual
level which controls the seen position of a fingertip relative to
a movable visual target in three dimensions (ray-tracing is used
to simulate vision). This level is not as well implemented. Its
error signals are converted to reference signals for the third
level of proprioceptive or kinesthetic control, which varies the
joint angles. Again the control systems themselves are very
simple. However, adding the visual control greatly improves the
accuracy of pointing because the visual resolution is much higher
than the kinesthetic, and is directly related to spatial
relationships. Visual control can be made faster when a map is
used to convert visual errors into more appropriate changes in
proprioceptive reference signals at the third level. In my
writeups, by the way, I combined the first two levels into one,
so I refer to the "second" level of kinesthetic control instead
of the third; the visual level is called the third level.

As I said, you could represent this entire hierarchical control
system, containing three systems at each of four levels (counting
the visual level), as a single overall control system of the same
form you use to represent all control systems: a parallel vector
processing model. It would, however, be very complex. Also, there
would be no indication of the clever tricks nature has found for
stabilizing the limb in a very simple way; chances are that some
very elaborate schemes would result, involving feedforward
compensation and inverse dynamics. There is more than one way to
skin a cat, but the hierarchical approach seems to be the
simplest way.

In fact, at the lower three levels at least, I don't see anything
but a hierarchical model as being justifiable.

Incidentally, a model expressed as vectors related through
matrices is mathematically convenient, but in the actual system
the vector notation has no advantages. The actual system must
perform every individual calculation implied by the compressed
notation.

I have not included in this model any provision for adaptation.
That will be a useful addition. A few years ago I did develop a
method of adaptation that used real-time construction of a
compensating transfer function employing a neural delay network
and local feedback based on the error signal only. But that
project languished as more pressing matters came along.

Regulators and servos:

As I understand it, servos are basically controllers with
variable reference signals, while regulators have fixed
reference signals.

In the HPCT model, this distinction is not really relevant,
because the only disturbances that matter are those that act
directly on the controlled variable, and all reference signals
are variable even if, for the moment, they may be constant.

In HPCT I don't believe this distinction is necessary. You can
have both. In the Little Man model, the first two levels would
probably be classed as a position "regulator," because they are
most effective at eliminating inertial disturbances but are not
highly accurate at servoing position. The joint-angle control
system at the next level, however, is an excellent position
servo; it is slower, but fast disturbances are removed by the
first two levels and it does not need to be faster. I don't see
how the same effect could be created by a single system, but
maybe it could. The HPCT model handles this situation by
controlling different perceptual representations of the same
physical situation.

Also, I'm dubious about changing the basic organization of the
model just because the reference signal happens to be steady for
a while.

RE: disturbances

Yet, if there is little disturbance, "control" might be good
enough for extended periods of time.

How long is an extended period of time? You mention the effects
of meteorite impacts on a planet as an example, saying that their
effects will average to zero. But that is not exactly the time
period I'm thinking about.

Consider driving a 1500-Kg car along a highway in a crosswind. To
drive in a straight line, the front wheels would have to be
cocked enough to offset the force of the crosswind. Let the wind
force be 200 Newtons, or about 20 Kg force. Suppose the wheels
were adjusted so that the net sideward force was 2 newtons (about
200 grams), so the steering force remains accurate to 1% (highly
improbable, but let's consider an extreme case). As a moving car
has no lateral stability, this unbalanced force would accelerate
the car to one side by 2/1500 meter per second per second. How
long would it take for the car to be centered on the boundary of
its 5-meter-wide lane, starting in the middle?

The formula is s = 1/2*a*t^2, or t = sqrt(2*s/a), with a being
2/1500 and s being 2.5 meters. Solving for time, we get about
61 seconds.

Suppose, however, that the crosswind force increased by 20% while
the car was being steered open-loop. This would mean a 40-newton
unbalanced force. The car would begin to leave its lane in about 13.7
seconds. Clearly, the length of time that a car could be driven
open-loop is quite short, even if you allow the car to wander all
the way from one lane stripe to the other, which would be pretty
poor driving.
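The arithmetic in the two paragraphs above can be checked in a few
lines (figures are the ones in the text; the function name is mine):

```python
# Time for an unbalanced lateral force to carry a 1500 kg car from
# the middle of a 5 m lane to its edge: s = a*t^2/2  =>  t = sqrt(2s/a).
from math import sqrt

mass = 1500.0      # kg
lane_half = 2.5    # m, lane centre to lane edge

def time_to_edge(unbalanced_force):
    a = unbalanced_force / mass        # lateral acceleration, m/s^2
    return sqrt(2 * lane_half / a)

print(round(time_to_edge(2.0), 1))    # 1% steering trim error: ~61.2 s
print(round(time_to_edge(40.0), 1))   # wind up 20% (40 N unbalanced): ~13.7 s
```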

My point is that disturbances can be very small and still have
large effects in a short time, when integrations are involved. In
a previous post you seem to have misunderstood where the
integrations I'm talking about are ("Cumulative effects do not
occur in stable systems."). I'm not referring to integrations
inside the control system, but in the environment. Many of the
effects we control result not directly from positioning our
limbs, but from integrations of movements, like walking, to
produce positions in space. The final positions of one movement
are the starting positions for the next, so in a long series of
actions all errors, most dramatically systematic errors,
accumulate with time unless feedback is used to correct them. The
environment is not generally "stable" in the sense of having
preferred states to which it tries to return. It usually stays in
the same state where you left it.
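A small numerical illustration of this point (my construction, with an
assumed 2% systematic movement error): open-loop, the error grows with
every movement; even crude feedback after each movement keeps it
bounded.

```python
# Each "movement" is meant to advance the position by one unit, but
# systematically overshoots by 2%. Open-loop the errors accumulate;
# a partial correction after each movement holds them constant.

def final_error(n_steps, step=1.0, bias=0.02, feedback=False):
    target = position = 0.0
    for _ in range(n_steps):
        target += step
        position += step * (1 + bias)            # slightly too long, every time
        if feedback:
            position -= 0.5 * (position - target)  # crude partial correction
    return round(position - target, 3)

print(final_error(50))                 # prints 1.0: error grows without bound
print(final_error(50, feedback=True))  # prints 0.02: error stays one step's worth
```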

As Avery Andrews and Chris Malcolm have pointed out, in many
cases of loss of feedback, control is not lost but is simply
transferred to other modalities. If you forget about the other
modalities, you can get a wrong impression about the
effectiveness of open-loop control. Walking in the dark,
especially for a person with a lot of practice as in Chris'
anecdote, doesn't entail loss of position information, because
touch and sound are not lost, and kinesthetic sensations provide
immediate information about velocity and direction. Given an
initial estimate of distances, the perceived distances can be
corrected by reliance on kinesthetic information alone. If the
bed is seen about 5 steps away before the lights go out, and you
take 5 steps in the dark, you perceive yourself as near the bed.
Natural disturbances and uncertainties, I believe, are much more
common and much larger than what you seem to be imagining,
perhaps because of not taking into account other sources of
control information.

Also, I think you're minimizing some other problems:

That "open-loop" character shows itself in the way you walk,
the way you talk, the way you think, your habits and your
character structure, in your idiosyncrasies, in the differences
between us rather than in the similarities, in the WAYS IN
WHICH we attempt to realize our goals rather than in the fact
that both of us CAN realize explicitly stated goals.

In a hierarchical model, the "way in which" you try to reach a
goal expresses the characteristics of lower levels of control.
These can, as you say, be quite different from one person to
another and even one culture to another (when I get to England
next Spring I will have to learn to get to my goals by driving, I
hope in control, on the wrong side of the road). When you
represent behavior as a single parallel vector control system,
these differences are buried in the whole model; in HPCT they can
be attributed to specific differences at specific levels.

When you characterize "walking" as open-loop, I don't think
you're really considering what is entailed in walking. I believe
that walking requires very close to continuous control; if
proprioceptive and tactile feedback vanish for even a moment, the
walking systems will simply cease to work. All experiments with
deafferentation support that statement; operated animals aren't
even expected to function for 16 days, and even after much
practice their functioning is pretty pitiful. I doubt that a
biped with deafferented legs could walk at all.

One general point. I think that applying concepts of "optimal"
control to living control systems is questionable. What is
optimal depends greatly on the criteria you adopt. Do you want
minimum-time control? Minimum-energy? Minimum overshoot? Minimum
cost? Minimum need for reorganization? Minimum complexity of
circuitry? Minimum demand for precision of components? Minimum
requirement on computing power? Minimum reliance on memory?
Minimum mechanical stress? Every optimization problem is
different; the concept of "the" optimal control system is not
well-defined.

You've speculated that living control systems must be optimal
because nature has had so long to evolve them. According to
evolutionists, however, the only criterion for optimality that
has any evolutionary meaning is survival of members of a species to the
age of reproduction. That is a very broad criterion, and it
doesn't imply optimality of any other kind. We're not talking
about fine-tuning here, but just about acquiring control that's
good enough to bring reproduction to its maximum rate. Evolution
stops with good enough, not optimal, organization. And even that
evolutionary criterion is suspect; the nations doing best on this
earth are not the ones with the highest birth-rates.

When you consider the difference between Olympic athletes and
paunchy stockbrokers, it's pretty clear that there is an enormous
range of control capabilities in the human race, in one culture,
or even in one family. They can't all be matched by one
theoretically optimal control model. Even when you include the
ability to reorganize, the rate and limits of reorganization vary
radically over the population. I have practiced for many years,
and still can't play the piano from printed music -- a skill one
of my daughters (but no other child) acquired at the age of 5.

My own suspicion is that optimal control is something that
engineers invented, for the same reason that people keep trying
to break records or cut costs of production by another penny.
Given any one aspect of a living control system, a persistent
engineer could probably improve on it significantly. I don't
think we need to model optimal control, although the methods of
optimal control theory that apply to acquiring and improving
control probably will be part of the ultimate model.

I think "adaptive" control is a better term, because it doesn't
imply anything about reaching perfection.

Bill P.