adaptive control

[Hans Blom, 930907]

To Bill Powers, Rick Marken and Tom Bourbon

Thank you for your thoughtful remarks concerning the adaptive control
diagram that I sent as a reply to Rick. Bill: it remains a problem for
me to use the orthodox CSG notation for an "elementary" control system
because of the constraint that only such systems may be used in
hierarchies. I will try
to convert my diagram -- as much as is possible -- toward the scheme
that you prefer. That will have to wait, however. For the next few weeks
I have other (teaching) duties. You deserve an extended reply, however.

Tom, thank you for your point of view. I need to take more time for your
extended reply than I can spare at the moment. Just a short remark,
however: adaptation in a control system can be realized in a multitude
ways. Sometimes one method is better than another, sometimes it is the
other way around. Much depends on the stability and orderliness of the
"outside" world that the organism interacts with. It is very
difficult, for instance, to find the highest peak in an Alpine landscape
if you do not have eyes (long range sensors). It is even more difficult
to find the highest peak on earth. Getting stuck in a local optimum is
the problem here. Another, major problem is, strange as it may sound,
FORGETTING. Learning is a lot easier than forgetting. The problem of when
to discard outdated knowledge has not yet been rigorously solved. Only
some kludges are known, such as "leaky" integrators rather than perfect
ones. Your approach looks promising. You may have discovered a new
adaptation method. I cannot say yet; for now I lack the time to give
your reply the attention it deserves.
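The leaky-integrator "kludge" mentioned above can be sketched in a few
lines. This is a generic illustration, not code from the discussion;
the decay constant `leak` is an assumed parameter:

```python
# Leaky vs. perfect integrator: a minimal sketch of the "forgetting"
# kludge mentioned above. The leak constant is an assumed parameter.

def perfect_integrator(state, x, dt=0.01):
    # Accumulates its input forever: old information is never discarded.
    return state + x * dt

def leaky_integrator(state, x, dt=0.01, leak=0.5):
    # A fraction of the stored state decays each step, so outdated
    # information gradually fades instead of persisting indefinitely.
    return state + (x - leak * state) * dt

s_perfect = s_leaky = 0.0
for step in range(2000):
    x = 1.0 if step < 1000 else 0.0   # input switches off halfway
    s_perfect = perfect_integrator(s_perfect, x)
    s_leaky = leaky_integrator(s_leaky, x)

# After the input vanishes, the perfect integrator still holds its
# accumulated value, while the leaky one has "forgotten" most of it.
```

The leak makes forgetting automatic but indiscriminate: recent and
outdated knowledge decay at the same fixed rate, which is why it is a
kludge rather than a solution.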

So I am not the only one who thinks that adaptation -- of the non-random
type -- is an important issue for research. Bill, your recent reply to
Gary Cziko is very thoughtful. Your approach looks, at first impression,
promising, but I need time to translate words into working models and see
whether they work. It seems that -- apart from surface distinctions -- my
approach is similar to yours.

Just a short question. Do you think that, physiologically, effector
information might be available to higher levels of the nervous system?
Does the information that the brain has available
include what I call the output of the organism, i.e. its actions? If not,
physiology would rule against my "input-output" identification scheme.

I envy all of you who can devote so much time to these fascinating
discussions.



[From Bill Powers (951115.0945 MST)]

Hans Blom, 951114b --

I wasn't reading closely enough: you said

     First, a kind of averaging (correlation) is necessary if there is
     noise. Second, in order to estimate N parameters, at least N
     observations are required, simply because algebra shows that N
     equations are required to solve for N unknowns.

I commented that you seemed to be assuming serial measurements. Now I
see that you weren't; you were saying that each variable has to be
sampled at least as many times as there are variables, to establish the
system of N equations in N unknowns. I agree with that.
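The algebraic point can be illustrated with a tiny least-squares
sketch. The linear model, its "true" parameter values, and the noise
level are invented purely for illustration (N = 2 unknowns):

```python
# Estimating N parameters needs at least N observations; with noise,
# many more (averaging) are needed. The model y = a*x + b and the
# "true" parameters below are invented purely for illustration.
import random

random.seed(0)
a_true, b_true = 2.0, -1.0            # the N = 2 unknown parameters

def observe(x, noise=0.0):
    return a_true * x + b_true + random.gauss(0.0, noise)

# Noise-free case: exactly N = 2 observations solve the 2x2 system.
x1, x2 = 1.0, 3.0
y1, y2 = observe(x1), observe(x2)
a_hat = (y2 - y1) / (x2 - x1)
b_hat = y1 - a_hat * x1

# Noisy case: least squares over many samples averages the noise out.
xs = [i * 0.1 for i in range(200)]
ys = [observe(x, noise=0.5) for x in xs]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a_ls = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_ls = (sy - a_ls * sx) / n
```

With noise, two observations would give a wildly unreliable estimate;
the 200-sample fit recovers the parameters only because they stay
constant over the whole measurement interval, which is exactly the
assumption at issue.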

The problem you seem to be addressing is how we can solve for the
parameters of N control systems if they are changing their parameters
during the time it takes to get the N independent measurements needed.
The answer I've been proposing is that in practice, the parameters
don't seem to change too fast for stable estimates to be obtained, so
that even though your point is theoretically correct, it may have no
practical importance. However, see below -- this is a function of the
experimental conditions.

Another answer is that if we knew or proposed the organization of the
adaptive part of the system, we could simply add as many more parameters
as required to take its operation into account. We would end up with a
larger set of nonlinear equations, but still in principle solvable --
and the length of time needed to evaluate the parameters would no longer
be a problem.

I think we would expect rapid adaptation to be going on only when the
system encounters an environment that is changing its characteristics
rapidly. When we do experiments, however, we can prepare environments
that retain the same structure for long enough for adaptations to come
to asymptote. This is what we did in testing your model-based control
system in a simulated tracking situation. The adaptive processes
assumed a particular structure for the environment, and after tuning,
the system could operate with constant parameters.

In PCT tracking experiments this is essentially what we do. We set up a
situation with fixed environmental parameters, and let the controller
practice for as long as necessary to get stable performance. So we
aren't studying adaptation -- we don't try to account for the process by
which the learning takes place. This doesn't take care of the problem,
of course; we only postpone it. However, when we do change parameters in
the environment, we find that the control system parameters needed to
handle the new situation can often remain essentially constant -- in
other words, there is a range of environmental conditions over which the
same control organization continues to work well enough that no
reorganization is needed. This is valuable knowledge, because it tells
us something about what is required of a reorganizing system (using the
term generically to cover all methods of altering organization).

What I've had in mind for a long time is to study control processes in
behavior under fixed environmental conditions to get an idea of what the
final, fully-adapted, organization of the system will be in various
situations. The next step, which I have scarcely touched, would be to
look at the time-course followed by the system parameters _during_ the
changes (immediately after a change in environmental conditions that are
large enough to call for a change in control parameters). This would
tell us something about the action of the adaptive system, the
reorganizing system. If we could see how the system parameters changed,
we might be able to guess at the kind of perceptual information that is
being controlled by the changes. I have always thought that this would
lead us to higher-level control systems that work by varying the
parameters of lower-level systems. The E. coli type of reorganizing
system is only one of the possibilities.

In one of my Artificial Cerebellum simulations, I use an arm operating
in the vertical plane, with two degrees of freedom (using proper
physical dynamics to model the arm). Each of the two control systems
has two levels: velocity control and position control, with the position
error signal varying the velocity reference signal. The output functions
of the two levels are the adaptive elements (four independent adaptive
elements altogether), with the error signal being the only criterion of
adaptation in each system. The simulated environment consists of a mass
on a spring with viscous damping; the mass, the spring constant, and the
damping coefficient can be changed over a wide range from the keyboard
while the model is running.
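The two-level arrangement described above can be sketched with fixed
(non-adaptive) gains. This is not the Artificial Cerebellum code; the
gains `kp` and `kv`, the plant constants, and the target are assumed
values chosen for illustration:

```python
# A minimal sketch of the two-level arrangement described above:
# position error sets the velocity reference, and velocity error
# drives the output. NOT the Artificial Cerebellum code; the gains
# and plant constants below are assumed values for illustration.

dt = 0.001
mass, spring_k, damping = 3.0, 50.0, 10.0    # environment parameters
kp, kv = 20.0, 40.0                          # assumed control gains

pos, vel = 0.0, 0.0
pos_ref = 1.0                                # target position

for _ in range(20000):                       # 20 s of simulated time
    vel_ref = kp * (pos_ref - pos)           # position level
    force = kv * (vel_ref - vel)             # velocity level
    # The environment: a mass on a spring with viscous damping.
    acc = (force - spring_k * pos - damping * vel) / mass
    vel += acc * dt
    pos += vel * dt
```

With purely proportional levels the spring leaves a small residual
position error at steady state; the adaptive output functions in the
actual simulation are what remove such residuals.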

The state of each output function is shown on the screen as a
continuously-updated impulse response curve. What's interesting is that
as the environmental parameters are changed, there is only a very slight
change in the impulse responses, almost invisible to the eye. The
differences may be larger mathematically; I don't know, because this
model doesn't use a mathematical algorithm that would show such changes
(it works directly in terms of the shape of the impulse response curve).
All I know now is that when the mass is changed over a range of 3 to
30 units, when the spring constant is changed in the range from zero to
50 units, and when the damping coefficient is changed in the range from
0 to 100 units, the output functions seem to change only in small and
subtle ways, while position continues to track a randomly varying
reference position very closely. When all the impulse responses are
zeroed out (using the "z" key), control is momentarily lost, but returns
to its former effectiveness in a few seconds of running time.
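The text does not give the Artificial Cerebellum's update rule, so the
following is only a generic error-driven sketch in the same spirit: a
function stored as an impulse-response curve, each point nudged by the
error, then wiped (like the "z" key) and allowed to re-adapt. The
"plant" kernel, probe signal, and step size `mu` are all assumed:

```python
# Generic sketch, NOT the Artificial Cerebellum algorithm: a stored
# impulse-response (FIR) curve adapted by an error-driven (LMS) rule.
# The plant kernel, probe signal, and step size are assumed values.
import random

random.seed(1)
plant = [0.5, 0.3, 0.15, 0.05]        # assumed "true" impulse response
weights = [0.0] * len(plant)          # the adaptive curve
mu = 0.1                              # assumed adaptation step size

def run(steps):
    history = [0.0] * len(plant)
    err = 0.0
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)            # broadband probe
        history.insert(0, x)                     # newest sample first
        history.pop()
        desired = sum(p * h for p, h in zip(plant, history))
        actual = sum(w * h for w, h in zip(weights, history))
        err = desired - actual
        # Nudge each point of the curve in proportion to the error
        # and the input that acted at that lag:
        for i in range(len(weights)):
            weights[i] += mu * err * history[i]
    return abs(err)

run(3000)                              # adapt from scratch
adapted = list(weights)                # curve now matches the plant
weights[:] = [0.0] * len(weights)      # like the "z" key: wipe curve
recovery_err = run(3000)               # curve re-adapts on its own
```

As in the simulation, zeroing the curve destroys performance only
momentarily; the same error-driven process rebuilds it.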

This tells me that adaptation, even to drastic changes in environmental
characteristics, can take place in a continuum, and that the control
process itself can handle such a wide range of conditions that the load
on the adaptive process can be relatively small. This says that studying
behavior in constant environments is probably a valid step toward
solving the whole problem.



Bill P.