everything under control?

[Hans Blom, 960705]

(Bill Powers (960702.1930 MDT))

Hans Blom, 960702 --

     Again, we have the problem of scales, now in the form of accuracy
     of prediction. You miss the target point by a few millimeters or a
     few centimeters, depending upon whether you use your eyes or not.
     Not by a few miles, not by a few lightyears. Your finger does not
     end up in some random location somewhere in the universe.

If all proprioception is lost as well as vision, you will be lucky to
reach within 45 degrees of the target.

My point was that knowledge about WHO is reaching and his/her physical
properties in itself gives some information about where the reach
will end up, even if no control takes place. But much of this type of
knowledge ends up in control systems as constants (e.g. the length of
a stick) or as known relations (e.g. addition rather than division),
and is therefore important.

I disagree with you about evolution; you maintain that evolution
"optimizes" behavior, a claim for which I can see no justification. The
link between specific aspects of behavior and fitness is vague to start
with; I don't think that a 10% improvement in performance is going to
make any demonstrable difference in fitness.

Well, start at basics: a 10% improvement in fitness WILL make a
significant difference over just a few generations. That is the top
level. Then go down a level: how can fitness be improved? Maybe by
getting to the food fastest. Maybe by getting to the lady fastest. Maybe
by climbing a tree faster upon discovery of a tiger. Maybe by taking
better care of the kids. Lots of things, but all of them have this
"better" in them, somehow. It's not one single unique aspect that
will improve fitness, probably -- humans and bacteria have done it
and do it very differently but just as well -- but it obviously is
related to doing things better.
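The compounding of even a modest fitness advantage can be sketched
with a small replicator-style calculation (the 10% figure is from the
discussion above; the starting fraction and generation counts are
invented for illustration):

```python
# Illustrative sketch: how a variant with a 10% fitness advantage
# spreads through a population. A standard replicator update; the
# starting fraction (1%) is an invented number.

def variant_fraction(generations, advantage=1.1, start=0.01):
    """Fraction of the population carrying the fitter variant."""
    f = start
    for _ in range(generations):
        mean_fitness = advantage * f + 1.0 * (1.0 - f)
        f = advantage * f / mean_fitness   # share of next generation
    return f
```

Starting from 1% of the population, the variant becomes the majority
within a few dozen generations and is near fixation within a hundred.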

     Yes. That might imply that a human control system might want to do
     some real time parameter adjustment. Is that an issue that you have

Certainly. I simply haven't done it your way. Have you studied my
Artificial Cerebellum, which can automatically adjust the dynamic
response of a two-level control system controlling the position of a
damped mass on a spring, adapting within a few seconds to arbitrary
changes in mass, spring constant, and damping coefficient while making
the position follow an arbitrarily-changing reference signal? This
adaptation covers a range of 50-1 in mass, 50-0 in spring constant, and
100-0 in damping coefficient.

Where can I find it? Sounds very interesting!

                      I don't believe that your Kalman filter
model can do this, since it requires continual calculation of the
inverse of the transfer function of the mass on a spring.

That is not the reason. Kalman filter based control is optimizing
control. It works best when a relatively high quality model exists
already. If it is on a well-defined slope, it will rapidly find the
top of the hill. Its performance is not so good when it finds itself
in an area with lots of small hills and no mountain in the
neighborhood. Lots of methods cannot handle that situation. Not that that's
an excuse, but everybody is still searching for a good solution to
this problem. Genetic algorithms, which have some of the mechanisms
described by evolution theory as their basis (acquired experience plus
controlled randomness), do this well, but only because they use a
large population of searchers.

Calculation of the inverse of dynamic functions has always been
the main problem of control engineering, hasn't it?

No, definitely not. Calculations are never the problem; they are
easy. The only problem might be that they take too much time. In that
case, good approximations can be used. But time is less of a problem
as well. Wait three years and computers are ten times faster...

The main problem of control engineering was and always will be
discovering good models of what is "out there". Not so simple that
they do not adequately represent reality, not so complex that they
become incomprehensible. It is this compromise between simplicity and
faithfulness that will always be the most pressing problem, I think.
In a PCT model, that would mean a model of the world (the "world"
part in your simulations: what is the effect of an action on the
ensuing perception) that is chosen in such a way that the
"disturbance" is neither too large (this indicates an insufficiently
faithful model) nor too small (this indicates an unnecessarily
complex model).

What is "out there" includes, of course, the human body.
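The simplicity/faithfulness compromise can be illustrated with a toy
system-identification sketch (the "world" here, a quadratic plus
noise, and all numbers are invented): fit world-models of increasing
complexity and watch the unexplained residual, i.e. the "disturbance".

```python
# Toy illustration: polynomial "world models" of increasing order are
# fitted to data from a quadratic world plus a little noise. The
# residual (the unmodeled "disturbance") is large for too-simple
# models and stops shrinking once the model is faithful enough;
# further complexity then buys nothing.
import numpy as np

def model_residuals(max_order=4, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, 50)
    y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.01 * rng.standard_normal(x.size)
    rms = []
    for order in range(max_order + 1):
        coeffs = np.polyfit(x, y, order)
        resid = y - np.polyval(coeffs, x)
        rms.append(float(np.sqrt(np.mean(resid**2))))
    return rms
```

The residual drops sharply up to order 2 (the true structure) and
then levels off at the noise floor: order 2 is the model to pick.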

What do you do when there isn't any inverse, or when there's no
analytical method for finding it?

There is always a "best" inverse. Mathematics calls it a "generalized
inverse" or a "pseudo-inverse". You can think of this as finding the
major factors, as in factor analysis, or the major eigenvalues, as in
matrix analysis, and considering the remainder as "noise", in the
sense that its contribution is small and can be disregarded.

My artificial cerebellum would work just fine with moderate
nonlinearities in spring constant and damping coefficient. How
would your model handle them?

If nonlinearities are so moderate that the mountain still slopes
upwards, the top will be found. It is local extrema that are the
problem; you might get stuck there if you don't know whether a higher
peak exists. In biology, such a local extremum is called an
ecological niche. An extra problem with the ecological landscape is,
of course, that it keeps shifting. Relocation of a top forces
evolution; a major earthquake causes extinction. In psychology, one
could call such an extremum a habit or a character trait...

>> The short answer is that behaviour can change all over the lot
>> for any value of a reference condition.

     And I said: ... the perception must track the effects of the
     actions in order for feedback to be possible. So if action changes
     all over the place, chances are that perception will do the same.
     ... If there is circular causality (and a constant reference value,
     as is stressed time and again), you can express any variable as a
     function of the others. That is simple math. But the interpretation
     of the math is clear as well: if behavior [action] can change all

Remarks like this are what make me wonder who taught you control theory.

I've pondered this outburst of mine more closely in an attempt to
find out where it comes from. One problem is that control theory
taught me, let's say, how to use a hammer, NOT to define which tool
may legally be considered to be a hammer and which not. But our
misunderstanding goes much deeper. I think that every control
engineer would say that every variable that changes in exactly the
right way is "under control" or "controlled", whether it is compared
to some physically existing (reference) signal or not, whether in a
loop or not.

But let's consider control loops only, for now. Let me give you an
example in which we have a slightly more complex control loop than
what you normally model. You are undoubtedly familiar with the
"inverted pendulum": balancing a stick on a finger. Now consider the
control of a threefold inverted pendulum: you have to balance three
sticks on top of each other, where the sticks are connected to each
other by (infinitely compliant) flexible connections. Yes, that can be
done, at least by a control system; the control engineering group
that I am part of now (again, since the re-fusion of the medical
engineering group with the control engineering group) employs this
setup as a demonstration and as a testbed to try out different
control approaches. The goal is, of course, to keep the whole assembly
vertical.

What is action and what is perception? The only allowed action is the
movement of your finger (in a demonstration, an X-Y-recorder is used)
in the horizontal plane. What you need to control is verticality, but
what that is must be better specified, i.e. you need to construct an
input function. But how? What _can be_ perceived? Let's be extremely
generous here: your allowed perceptions are the positions of all
molecules on the surfaces of all three sticks. But what _really_ must
be controlled is the position of the top of the highest stick: it
must be as high as possible (in stabilizing control, that is; it is
also possible to prescribe, e.g. that it must move in a circle). So
one approach might be to set a reference for that position higher
than it could physically be, and to minimize the error. But how?

To make a long story short, that doesn't work. You need to analyze
the problem in a more profound way. Since you know that each stick is
cylindrical and rigid, you also know that all the perception that is
really needed is that of the end points of the three sticks. And if
you know the solution of the single stick problem, you can chain
three controllers in series (or in parallel, or whatever
configuration appeals to you) and have the problem solved.

It turns out, if you look at the resulting equations -- or if you
consult your intuition -- that it is not just the end point (the top
of the highest stick) that must be controlled (compared to the reference
of being at its highest position). The end points of ALL sticks must
be controlled SIMULTANEOUSLY. Not at THEIR highest possible positions,
but at those positions where they can effectively control the
position of the stick above them.

We can replace "loop" by "chain" here, and in this feedback chain,
ALL the links need to be controlled, not only the most proximal one
whose position corresponds with the specified perception (reference).
And a chain is only as strong as its weakest link: if one of the
links cannot be controlled, the whole chain breaks down.

I guess it was (mainly unconscious) considerations like these that
gave rise to my reluctance to accept that, in a world with sometimes
very flexible linkages, only the final effect (perception) is
"controlled" and to insist that the values of ALL variables in the
whole loop (chain) must be "under control". Not necessarily at
PRESCRIBED positions; only the top of the highest stick has an
explicitly prescribed position. But also, for intermediate links in
the chain, at (explicitly or implicitly) COMPUTED (or otherwise
derived) positions.

A next question is whether this can be done if some perceptions about
intermediate links are missing. If we can only observe the uppermost
stick, can we still control the verticality of the whole assemblage?
That is, in your terminology: can we construct an OUTPUT function
that works correctly, despite the fact that some observations are
missing? I have only limited experience here, but something like this
is implemented in my blood pressure controller. On discovery of a
sudden drop in blood pressure -- which could have a variety of causes
-- act on the assumption that it has one specific (worst case) cause.
In practice, this assumption works well. It is 100% appropriate when
the most serious problem arises, and unharmful when the cause is
something else. Such a solution approaches the stimulus-response
paradigm: if you see such and such, act in so and so way, NOT because
you have accurate feedback but because this solution is the best that
you can come up with given the circumstances.

I hope that this explains some of my reactions. Most of it comes down
to language problems, it seems: when is something called "under
control" or "controlled"? We do not, of course, have
misunderstandings about how control mechanisms basically work.