Adaptive model; World Futures

[From Bill Powers (950510.1100 MDT)]

Hans Blom (950509) --

     This is a very valuable insight. For me, the term "random" stands
     for completely unpredictable, such as white noise, whereas
     "arbitrary" would then stand for partially predictable, such as low
     pass filtered white noise. But the latter can be modelled as white
     noise that has passed through a filter; knowledge of the filter
     coefficients would provide some degree of prediction!

OK, "arbitrary" it is from now on.

I think it possible that you have been working with a particular model
of control for so long that you've become rusty on the properties of
ordinary negative feedback control systems. This shows up in your post
in several places.

I said


Here is an example of an arbitrary disturbance waveform:

                   *
            * *
        * * * *
      * **** * *
    * * * * **********
    * * *
***** * *
                                   * *

... and you said

     The first increase is a step and hence cannot have been predicted,
     nor controlled away. But once it has been observed, it could be the
     basis of further predictions. A model can be compared to a succinct
     description/resume of past observations: chances are that if
     something was true in the past, it will be true in the future.

You are assuming that this same waveform will occur again in the future.
But when I speak of an "arbitrary" waveform, I mean a non-repeating
waveform as well as one occupying a low-pass bandwidth. When I say
"unpredictable" I don't mean unpredictable in the manner of white noise,
but unpredictable in the sense that the waveforms are non-repeating --
as far as any observer (or the control system itself) is concerned,
patternless.

The fact is that a negative feedback control system can oppose the
effects of nonrepeating arbitrary disturbances, with a quite high degree
of accuracy, without any prior experience with them. As you say, the
step in the above waveform could not be exactly opposed, but the action
of the system would follow a very similar waveform, with only small
differences in the regions of finite slope. Actually, in putting that
step in, I violated my own condition that the waveform be low-pass --
a true low-pass waveform could contain no steps.

I am sure that as a control engineer you know this, or once knew it. It
is not necessary that the control system have any information about the
fluctuations in disturbances that are going to affect xt. Suppose that
the above disturbance waveform represents five seconds of time. A
control engineer could design a control system that would oppose the
effects of that disturbance (or any other pattern with the same
bandwidth) on a controlled quantity (xt) so closely that on the scale of
the above plot, no difference between action (u) and disturbance (other
than the opposite sign) could be detected. Even the rise-time at the
step could be made short enough to lie within the width of the *
symbols.

Just to make sure you are reminded of exactly what could happen with a
proper design:

If the reference signal xopt is constant, and if the above waveform
enters as an additive term in the real system so as to contribute to the
state of xt, the resulting behavior of xt could be made constant enough
(through fluctuations of u that oppose the disturbance) that the effects
of the disturbance are undetectable. This is not a matter of just
cancelling out the "predictable part" of the disturbance. The entire
disturbance would be cancelled.
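
To make this concrete, here is a minimal Pascal sketch (not your
program or mine; the gain, time step, and disturbance waveform are
purely illustrative) of an integrating controller cancelling a
disturbance it has never seen before and does not model:

program CancelDisturbance;
{ a pure integrating controller opposing an arbitrary low-pass
  disturbance; the system has no access to d and no model of it }
const
  dt   = 0.001;            { integration step, seconds }
  gain = 50.0;             { loop gain of the integrating output }
var
  t, xt, y, u, d, xopt, err: real;
  i: integer;
begin
  xopt := 0.0;             { constant reference signal }
  u := 0.0;
  for i := 1 to 5000 do    { five seconds of simulated time }
  begin
    t := i * dt;
    d := 3.0 * sin(0.7 * t) + 2.0 * sin(1.9 * t + 1.0);
    xt := u + d;           { disturbance adds to the controlled quantity }
    y := xt;               { perception of xt (f = identity here) }
    err := xopt - y;
    u := u + gain * err * dt;    { integrating output function }
    if i mod 1000 = 0 then
      writeln(t:5:2, '  d =', d:8:3, '  u =', u:8:3, '  xt =', xt:8:4)
  end
end.

Run it and you will see u mirroring -d while xt stays within about a
tenth of a unit of the reference, with no prediction anywhere in the
loop.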
----------------------

I said:

However, this would be relatively pointless in this model because
the true system output, xt, is not being compared with the
reference signal; only the model's output is.

     I had presumed you would be happy with this. The "world out there"
     (xt) is inaccessible and all we know about it is our internal
     representation (x). Does this remark mean that you DO think that
     the external world is directly accessible?

But the world out there is not inaccessible to your system: you have a
variable y which represents (is a perception of) the real-world variable
xt. True, from the standpoint of the system, there is no way for it to
know of xt directly; all it knows is y. Therefore all it can control is
y. But if y = f(xt), then in the real world something is being
controlled that is the inverse f-function of y.

In a negative feedback control system, it is y that is actually being
controlled (behavior is the control of perception). But we can presume,
and a third party observer who can see both y and xt can confirm, that
maintaining y in a given state entails maintaining xt in some
functionally-related state. You have considered an f(xt) which only adds
noise, but other forms of f are possible. For example, if f is d(xt)/dt,
then when y is maintained at a specific reference level, xt is
maintained at a constant rate of change (tachometer feedback, for
example).
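
To illustrate that case, a minimal Pascal sketch (the gain, the
plant, and the reference value are illustrative assumptions):

program RateControl;
{ the f(xt) = d(xt)/dt case: the system perceives and controls the
  rate of change of xt, so holding y at its reference makes xt
  change at a constant slope }
const
  dt   = 0.001;
  gain = 20.0;
var
  xt, xprev, y, u, yref: real;
  i: integer;
begin
  yref := 2.0;             { desired rate of change, units per second }
  xt := 0.0;  u := 0.0;
  for i := 1 to 3000 do
  begin
    xprev := xt;
    xt := xt + u * dt;          { u sets the rate of change of xt }
    y := (xt - xprev) / dt;     { perception: derivative of xt }
    u := u + gain * (yref - y) * dt;
    if i mod 1000 = 0 then
      writeln('t =', i * dt:5:2, '  y =', y:7:3, '  xt =', xt:7:3)
  end
end.

The perception y settles at 2.0 while xt itself climbs in a steady
ramp: y is controlled, and xt is thereby kept at a constant rate of
change.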

I suppose I shouldn't go further with this until you respond. The
impression you give is that you think the disturbance must somehow be
modeled inside the control system if its effects are to be opposed
precisely. What I am saying is that this is not the case. I may have the
wrong impression of your opinion; if so, tell me.

     Where in my program do you see a covariance matrix being computed?

Here:

procedure step_model;
{compute the predicted value of the parameter vector and
  its covariance matrix at the next sample interval}
...
end;

I can only assume that all the computations in your program, other than
initializations and presentation routines, are part of the model.

     All you can see are additions, subtractions and multiplications,
     things that the nervous system is well capable of.

Yes, but we have to consider complexity, too. While complexity itself is
not an absolute argument against any proposed model, it is a relative
argument: if the same effects can be generated using simpler
computations, we are more or less constrained to prefer the simpler
computations. And the more elaborate the required computations, the
harder it is to stretch the imagination enough to suppose that the
nervous system just happens to perform them in real time. It is not the
elementary operations that are hard to conceive of; it is their
organization.
----------------------------------
Second post:

     I have stated in the past, and repeat it now, that any fixed design
     controller depends upon knowledge of its world, of the thing to be
     controlled, even if that knowledge is hidden and not even visible
     to the designer of the system.

But WHAT knowledge of its world? The sign, as you say, is important for
some designs, but it does not have to be known for all of them. A design
in which a runaway condition can be detected and corrected by reversal
of an internal sign does not have to know in advance what the external
sign is.
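
A minimal sketch of such a design (the environment's sign, the
runaway threshold, and the gain here are all illustrative
assumptions):

program SignFlip;
{ detect a runaway and reverse an internal sign, so the controller
  need not know the external sign in advance }
const
  dt = 0.01;  gain = 5.0;
var
  xt, y, u, r, err, preverr: real;
  isign, i: integer;
begin
  isign := 1;              { internal sign; wrong at first }
  r := 1.0;  u := 0.0;  preverr := 0.0;
  for i := 1 to 2000 do
  begin
    xt := -u;              { environment sign is negative, unknown }
    y := xt;
    err := r - y;
    { runaway: error large and still growing }
    if (abs(err) > 10.0) and (abs(err) > abs(preverr)) then
    begin
      isign := -isign;     { reverse the internal sign }
      u := 0.0             { and restart the output }
    end;
    preverr := err;
    u := u + isign * gain * err * dt
  end;
  writeln('internal sign =', isign:3, '  final error =', err:8:4)
end.

The system starts with the wrong sign, runs away, flips its internal
sign, and then controls normally -- without ever being told what the
external sign was.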

But we are talking here specifically of disturbances, external variables
which add their influence to the system's controlled variable xt. A
closed-loop negative feedback control system does not have to have any
knowledge of these external variables, nor how they are going to behave
in the future. It can generate output that effectively cancels almost
all of the effects of such disturbances, even "nonrepeating arbitrary"
disturbances, as long as the disturbance is within the control
bandwidth. There is simply no need to predict the behavior of such
disturbances.

In other respects, what you say is true: the design of the control
system reflects properties (particularly dynamic properties) of its
environment, and in discussing learning we have to try to guess how that
comes about.

Actually, your model contains an ability to deal with disturbances
without predicting how they are going to change. This is found in the
way your learning method deals with the additive parameter c. If, in the
real system, c were to change slowly (very slowly) in any arbitrary
pattern, your adaptation method would be able to change the world-
model's corresponding parameter in the same way, just by examining the
long-term differences between y and the world-model's output x. This is
a very slow way to accomplish compensation for changes in c, but it
works and it does not rely on your model's predicting the values that c
is going to have. The bandwidth of control would be very narrow because
so many iterations have to be performed to compensate for a change in c,
but this is a method that opposes the effects of those changes without
predicting c.
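
In sketch form (the drift waveform and the adaptation rate are only
illustrative; this is the principle, not your program):

program TrackC;
{ the world-model's additive parameter (cmodel) is nudged by the
  long-term difference between observation y and model output x,
  and thereby tracks a slow arbitrary drift in the real c without
  predicting it }
const
  dt   = 0.01;
  rate = 0.0005;     { small rate: many iterations per change in c }
var
  c, cmodel, u, y, x, t: real;
  i: longint;
begin
  cmodel := 0.0;  u := 0.0;
  for i := 1 to 100000 do
  begin
    t := i * dt;
    c := 1.0 + 0.5 * sin(0.002 * t);    { very slow arbitrary drift }
    y := u + c;          { real system: action plus additive c }
    x := u + cmodel;     { world-model's corresponding output }
    cmodel := cmodel + rate * (y - x);  { follow the difference }
    if i mod 20000 = 0 then
      writeln('t =', t:7:1, '   c =', c:7:4, '   cmodel =', cmodel:7:4)
  end
end.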
----------------------------------
     OK. The basic procedure is simple: use the model-based controller
     where it works, and use a conventional controller where it does
     not. Implementing this procedure might not be so
     straightforward, but the basic idea would be to split the system to
     be controlled into two parts, the part that is modelled and the
     part that is not, and to apply predictive control to the modelled
     part PLUS conventional control to the unmodelled part.

This is essentially what we are thinking of, too. Actually, if we can
rely on a conventional controller to take care of disturbances within
some reasonable control bandwidth, the burden placed on the learning
algorithm will be considerably reduced, won't it?

This is where I think a hierarchical model can greatly improve matters.
If we start with the lowest level of systems that control muscle
tension, we can reduce the parameter adjustments to just those that
determine gain and damping; the compensation for dynamics, as I have
shown with Little Man 2, falls out of this process at no extra cost.
Then a higher level of control can produce arm configurations just by
setting a few reference signals, with dynamical considerations out of
the way. The adaptation process at the second level then has a greatly
simplified world to deal with -- and so on, level after level.
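
A bare-bones sketch of the idea, two levels only (this is an
illustration with an assumed plant and gains, not Little Man 2):

program TwoLevels;
{ a lower-level loop controls velocity, absorbing the plant
  dynamics, so the higher level controls position simply by
  setting the lower level's reference signal }
const
  dt = 0.001;
var
  xt, v, u, r1, r2: real;
  i: integer;
begin
  r2 := 5.0;               { higher-level reference: position }
  xt := 0.0;  v := 0.0;  u := 0.0;  r1 := 0.0;
  for i := 1 to 10000 do
  begin
    { plant: force u drives velocity v, which drives position xt }
    v := v + u * dt;
    xt := xt + v * dt;
    { lower level: fast control of velocity }
    u := 100.0 * (r1 - v);
    { higher level: controls position by setting r1, with the
      dynamics already out of the way }
    r1 := 2.0 * (r2 - xt)
  end;
  writeln('final position =', xt:8:4, '   (reference =', r2:5:2, ')')
end.

The higher level never deals with forces or damping at all; it only
adjusts r1 and watches its own perception approach its reference.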

     If you want to work on this here is the idea: subtract the
     predictions of the model from the observations. What is left is the
     "unmodelled dynamics" plus noise. Compute a "classical" control
     action to control the unmodelled dynamics away. Add this action to
     the action computed by the model-based controller. This is ALMOST
     what you do in your next mail:

Yeah, but it fell apart at the last step.
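
For the record, here is a minimal sketch of that recipe as I read it,
with the model's prediction covering only the response to the
model-based action (the plant, the model, and the gain are all
illustrative assumptions):

program SplitControl;
{ two-part control: a model-based term handles the modelled part,
  and a classical integrating loop drives the residual -- the
  unmodelled dynamics plus disturbance -- to zero }
const
  dt = 0.001;  gain = 20.0;
var
  xt, y, r, umodel, uclass, u, resid, d, t: real;
  i: integer;
begin
  r := 1.0;  uclass := 0.0;
  for i := 1 to 5000 do
  begin
    t := i * dt;
    d := 0.5 * sin(2.0 * t);    { unmodelled disturbance }
    umodel := r;                { model-based action: the model says
                                  the plant passes u straight through }
    u := umodel + uclass;
    xt := u + d;                { true plant: unity gain plus d }
    y := xt;
    resid := y - umodel;        { what the model cannot account for }
    uclass := uclass - gain * resid * dt    { integrate it away }
  end;
  writeln('final xt =', xt:8:4, '   (reference =', r:5:2, ')')
end.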

I'm very glad that we agree on the general approach. I have a few other
methods of learning that you should consider; I'll communicate them to
you in program form some day soon. The Kalman Filter approach is not the
only possible one. You might like my Artificial Cerebellum method, which
does not use any analytical forms or specific parameter adjustments, but
directly constructs a transfer function based only on the error signal
in the control system (it doesn't directly use the input perceptual
information). And there is always the E. coli method, which does its
hill-climbing without using a fixed algorithm. We may find that one
method or the other is appropriate in different circumstances.
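
To give the flavor of the E. coli method, a minimal sketch (the
tuned parameter, the test episode, and the step size are only
illustrative):

program EcoliTune;
{ E. coli style hill climbing: keep changing a parameter in the
  current random direction as long as performance improves, and
  "tumble" to a new random direction when it worsens }
var
  g, step, cost, prevcost: real;
  trial: integer;

function TestRun(gain: real): real;
{ run a short control episode; return the total squared error }
var
  xt, u, err, sum: real;
  i: integer;
begin
  xt := 0.0;  u := 0.0;  sum := 0.0;
  for i := 1 to 200 do
  begin
    err := 1.0 - xt;                { reference is 1.0 }
    u := u + gain * err * 0.01;     { integrating controller }
    xt := u;                        { trivial plant }
    sum := sum + err * err
  end;
  TestRun := sum
end;

begin
  randomize;
  g := 1.0;
  step := random - 0.5;             { initial random direction }
  prevcost := TestRun(g);
  for trial := 1 to 100 do
  begin
    g := g + step;                  { keep going the same way... }
    cost := TestRun(g);
    if cost >= prevcost then        { ...until things get worse }
      step := random - 0.5;         { then tumble: new direction }
    prevcost := cost
  end;
  writeln('tuned gain =', g:8:4)
end.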
--------------------------------------------------------------------
F. Heylighen:

Thanks for the info on the upcoming special issue of World Futures. I
assume it will be all right to reproduce the projected table of contents
for the benefit of CSG-L:
--------------------------------------
"World Futures: the Journal of General Evolution": Special Issue on "The
Quantum of Evolution: toward a theory of metasystem transitions", F.
Heylighen, C. Joslyn, V. Turchin (eds.), 1995 or 1996.

V. Turchin: "A Dialogue on MST" (63 p.)
F. Heylighen: "(Meta)Systems as Constraints on Variation" (30 p.)
C. Joslyn: "Semantic Control Systems" (25 p.)
W.T. Powers: "The Origins of Purpose: the first MST's" (14 p.)
J. Umerez & A. Moreno: "Origin of Life as the First MST. Control
Hierarchies and Interlevel Relation" (20 p.)
C. Francois: "An Integrative View of MST" (13 p.)
E. Moritz: "MST's, Memes, and Cybernetic Immortality" (33 p.)
F. Heylighen & D.T. Campbell: "Selection of Organization at the Social
Level: obstacles and facilitators of metasystem transitions" (39 p.)
R. Glueck & A. Klimov: "Metasystem Transition Schemes in Computer
Science and Mathematics" (40 p.)
---------------------------------------------------------------------
Best to all,

Bill P.