Adaptive Control; Revising Bruce's book

[From Bill Powers (951114.1032 MST)]

Richard Plourde 951113 1:46AM EST --

     >[Avery Andrews 951114]

Avery lives in Oz (Australia), across the international date line. He
often responds to posts before they are sent!

     >In the Little Man Demo, it took Bill Powers quite a long time to
     >figure out how to tune the various systems...

     There's something that might be useful in such an attempt. A
     couple of years ago I went to a Texas Instruments seminar on the
     application for DSPs (Digital Signal Processors). They're pretty
     fast (30E6+ instructions/second) and pretty cheap, and self-
     contained little beasts. One application demonstrated was for a
     control system, where the DSP, in addition to performing the rather
     standard Proportional/Integral/Differential 'loop tuning', also
     figured out what the parameters had to be for stable behavior.

There are many ways to design control systems that tune themselves for
stability. Hans Blom has reported on a model-based Kalman Filter method
for doing this. I have played around with an "artificial cerebellum"
that works; there are methods using E. coli random adaptation. Our
problem is not just to find a handy off-the-shelf device that will
produce stable control systems, but to figure out how human and animal
systems create and stabilize _their own_ control systems. We're not in
the position of designers who simply want to produce a certain result
using any method that works. We're studying a system that already
exists, and trying to figure out how it does what it does.

The design approach is useful because it shows us various methods that
will work at least in some circumstances and _might_ be used by a living
control system. But given any proposed design, we then have to work out
ways to test the real system to see if it actually uses that design. An
E. coli learning method would have a certain signature; the parameters
of control would change the wrong way (by small amounts) almost as often
as they change the right way, with the final solution being approached
as in a biased random walk. A systematic way of adjusting parameters, on
the other hand, would seldom make an adjustment the wrong way, but would
require a lot more information about the local environment than the E.
coli method does, would be more prone to getting stuck in local minima,
and would fail if the assumed model was too far from the real
environment's properties. Our criterion is not which method is
objectively the best, the quickest, or the most general, but which
method produces signatures of learning behavior most like those of the
real organism under study. It's easy to get side-tracked by the question
of which design is best, and forget about the real criterion: which
design works the most like the living control system.
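For concreteness, the E. coli signature described above can be sketched in a few lines. This is a toy illustration, not any actual reorganization code: the error surface, step size, and parameter values are all invented.

```python
import random

def ecoli_tune(error_fn, params, steps=2000, step_size=0.05):
    """E. coli-style adaptation: keep stepping in the current random
    direction while error keeps falling; 'tumble' to a fresh random
    direction whenever a trial step makes error worse."""
    direction = [random.uniform(-1, 1) for _ in params]
    best = error_fn(params)
    for _ in range(steps):
        trial = [p + step_size * d for p, d in zip(params, direction)]
        err = error_fn(trial)
        if err < best:
            params, best = trial, err          # good step: keep the direction
        else:
            direction = [random.uniform(-1, 1) for _ in params]  # tumble
    return params, best

# Invented error surface: best control at gain = 2.0, damping = 0.5.
random.seed(0)  # for a repeatable run
error = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
tuned, final_err = ecoli_tune(error, [0.0, 0.0])
```

Note the signature: many trial steps go the wrong way and are simply not kept, so progress looks like a biased random walk rather than a smooth, systematic descent.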

     ... as it turns out, for some robotic applications, abilities
     similar to this prove quite desirable. For example, the control-
     loop 'constants' for a robotic arm are not constants at all -- have
     the arm pick up a swinging basket, and what the loop needed for
     stability with just the arm isn't even close to what it needs with
     a reactive load.

This is strictly true but may be practically insignificant. The Little
Man model works with a physically-modeled arm in which the various
moments of inertia and "cross moments" (if that is a real term) are
functions of joint angles. As it turns out, the hierarchical approach
suggested by the organization of the real spinal reflexes produces
stability with only the crudest of adjustments of the parameters, with a
wide tolerance for varying properties of the load and configurations of
the arm. Picking up a swinging basket of moderate mass would have very
little effect on the Little Man's arm. I have only the crudest ways of
changing loads (since I'm not proficient with the dynamics equations),
but by changing the mass of the forearm (for example) I can show that
the effect on dynamic behavior is minor -- and quite like the effect on
a real human arm of putting a mass in its hand. An engineer could
probably design this system so it would swiftly and automatically adapt
to produce optimum control for loads of all kinds -- but that would not
be a model of a human arm, because the performance of a human arm is NOT
optimal for all loads and dynamic conditions. An engineer could probably
design a better arm, given comparable components to work with. On the
other hand, the required computer probably wouldn't fit in the spinal
cord.

     A classical servo-system with fixed parameters boggles at such a
     job -- or it's hopelessly slow.

Some adjustment of parameters, particularly damping, can certainly help.
But it's not as bad a problem as you portray it. Remember that this
isn't engineering, where you try to squeeze the last micron of
performance out of the system as a matter of professional pride. All an
arm has to do is to reach out, pick something up, and put it where you
want it. If it's a little inaccurate, higher systems can easily adjust
the reference signals to keep the overall effect going as it should.
When a lecturer picks up a stick and moves it so the shadow of its tip
indicates something on a projected slide, he doesn't have to measure the
length of the stick carefully, adjust for its mass, and compute the ray-
tracing for the shadow. He just alters the reference signals for arm and
hand configuration until the shadow of the tip is where he wants it.
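That reference-adjusting strategy can be sketched numerically. Assume, purely for illustration (these are not the Little Man's equations), a lower-level proportional controller whose gain is miscalibrated so it settles short of its reference, plus a slow higher-level integrator that just nudges the reference until the outcome lands on target:

```python
def simulate(target=10.0, steps=200, dt=0.1):
    """Two-level sketch: the higher level never models the arm; it only
    shifts the lower level's reference until the outcome is on target."""
    reference = target   # higher level's initial request to the arm system
    position = 0.0       # actual arm position (the 'shadow')
    for _ in range(steps):
        # Lower level: proportional control plus an uncorrected load term,
        # so with a fixed reference it would settle at only 80% of it.
        position += dt * (0.8 * (reference - position) - 0.2 * position)
        # Higher level: slowly adjust the reference, not the arm itself.
        reference += dt * 0.5 * (target - position)
    return position, reference

position, reference = simulate()
```

With a fixed reference of 10 the lower loop alone would stop at 8; the higher loop ends up asking for about 12.5, and the shadow lands on 10 without anyone computing the arm's true gain.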

What's your situation regarding running PC programs? Can you program in
Pascal or C? The best way to see where we're coming from is to run our
demos and try some predicting of simple behaviors yourself. We're
severely undermanned when it comes to people who can construct and run
models.


Bruce Abbott (951113.0855 EST) --

     In the first case, the mechanism is going to continue timing until
     time X elapses, then initiate the next blast:

           if T = X then fire;

     In the second case, the mechanism is going to continue monitoring
     pressure until it reaches level Y:

           if P = Y then fire;

Any system that acts through time can be described as if it were a
sequence generator. Some systems really are sequence generators, but
others are really program generators (and still others are neither). The
first case might be a programmed device of this form:

1. Increment the first counter.

2. Is the counter = 100?

3. If No, go to 1; else go to 4.

4. Issue the first command, set the second counter to 0.

5. Increment the second counter.

  ... and so on through the whole sequence.
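As one hypothetical rendering of the counter scheme above (the stage limits and command labels are invented):

```python
def run_sequencer(stage_limits):
    """Counter-driven sequencer: for each stage, count ticks until that
    stage's limit is reached (steps 1-3), then issue the stage's command
    and start the next counter (steps 4-5)."""
    issued = []
    for stage, limit in enumerate(stage_limits, start=1):
        counter = 0
        while counter < limit:   # keep incrementing until counter = limit
            counter += 1
        issued.append("command %d" % stage)
    return issued

run_sequencer([100, 50])  # → ['command 1', 'command 2']
```

Nothing here tests information from outside: given the limits, the output is fully determined in advance, which is what makes this a sequence generator rather than a program.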

The second kind of system is best exemplified by a Rube Goldberg device.
One event reaches a certain state which starts the second event going,
and so on until the mouse is dumped into the bucket of water.

To decide whether you're looking at a program or a sequence, you have to
see whether there are choice-points at which the outcome could go either
way, or any of n ways. This generally involves checking on information
coming from outside the program. For example, the program might say "If
the time of day is between 6 AM and 11 PM, set the reference
temperature to 72 F, otherwise set it to 65 F." Just looking at the
program, you can't tell which reference temperature will be set unless
you can see the input from the clock. Another example is "If the car in
the rear view mirror is a sedan with a light bar across the top,
accelerate away from the stop sign to 35 miles per hour; otherwise
accelerate to 45 miles per hour." Which path the program will take
depends on a perceptual input that can't be predicted.
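The thermostat example, written out (the function name is invented, and the boundary treatment of 11 PM is an assumption):

```python
def reference_temp(hour):
    """Choice point: which branch runs depends on an input (the clock)
    that cannot be read off the program text itself."""
    if 6 <= hour < 23:   # between 6 AM and 11 PM
        return 72        # daytime reference temperature, deg F
    return 65            # nighttime setback, deg F

reference_temp(14)  # 2 PM → 72
reference_temp(3)   # 3 AM → 65
```

Looking only at the function body, you cannot say which reference will be set; you also need the clock reading that arrives at run time.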

     To make the parallel even clearer, assume that the first mechanism
     does its job by generating a pressure pulse, which travels down a
     long "delay line." When the pulse reaches the end of the line, T =
     X and the mechanism fires the next cap.

The critical consideration is not the delay, but whether something else
can happen beside firing the next cap. There might, for example, be a
light signal that indicates a malfunction, so after the delay, the
program says "IF the next light is off, fire the next cap; otherwise
skip the firing step and proceed." It's the IF and the test of an
unpredictable variable against a criterion that makes this into a
program rather than a fixed sequence.

     It seems to me that both mechanisms produce a sequence and that
     neither (or both) involve a decision, depending on how one defines
     "decision." Evidently I'm still unclear about your distinction.

Sequences are involved in both cases, but when a program is involved,
_which_ sequence is followed by which other sequence depends on
unpredictable data from outside the program (unpredictable by the
program). Does this make my distinction clearer?

Rick Marken (951113.0900) --

I'm not sure what you're asking of Bruce:

     If you are convinced that people do control perceptual variables,
     then I have to ask "why will you not describe methods of testing
     for controlled variables in your methods text?".

Are you asking that Bruce call up his publisher and halt production and
sales of his new book, so he can go back and revise it to include the
Test for the Controlled Variable? If that's what you're after, I might
ask if you have taken the same steps regarding YOUR book on statistical
methods. If I remember right, Bruce sent in the final galleys many
months ago, and the book is as good as published (Bruce?). Aren't you
being a bit unreasonable?

     Why do you insist that the methods described in your text include
     only those that, as you know, don't allow the experimenter to
     determine what a person is controlling?

If the methods in that book are in fact only those that don't allow
determination of a controlled variable, then too bad, that is what is in
the book. It's a done deal, history. What are you trying to get Bruce to
do? I think you're picking at a scab.

     My explanation was just a guess. It is ruled out by Bruce's answer
     to your question about whether one can determine which knot is
     controlled by examining the S-R correlation. Since Bruce knows that
     controlled variables cannot be detected by examining IV-DV
     relationships, then his mistakes must have been due to something
     other than "clinging to S-R viewpoints".

That's a more likely guess. After all, your mistakes on the same set of
questions couldn't be accounted for by saying that you were clinging to
S-R viewpoints, could they?

Do you have a reference condition that says "Bruce Abbott doesn't
understand PCT and he never will because he believes in S-R?" And I do
mean a reference condition and not a perceptual interpretation.

     But then I still have a problem understanding why Bruce insists
     that his methods book describe only methods for determining IV-DV
     relationships.

One last time: as far as I know, Bruce isn't insisting on writing a
proposed book containing only these methods; the book was finished long
ago and sent to the publisher. It is TOO LATE TO DO ANYTHING ABOUT IT
even if Bruce now decided he wanted to. If you keep picking at that scab
it's going to get infected.

Best to all,

Bill P.