Everything under control?

[From Bill Powers (960708.0930 MDT)]

Hans Blom, 960705 --

Lots of good comments in your series of posts. We definitely have a
different approach to control theory. Mine is single-mindedly aimed at
explaining control phenomena in the behavior of organisms, while yours
is more general, dealing with what is possible to engineer into control
systems. I'm sure the two approaches overlap and that in the end yours
will be the more powerful. But because of the very general nature of
your approach, you include a lot of considerations that I think are not
relevant to the behavior of organisms, which were not designed but
evolved. You take a more externalized view than I do -- to you, a "goal
of the system" is simply some end that the system can be shown to
accomplish (an example is the goal of "surviving"). When I speak of
goals, I am always speaking of something that is explicitly represented
in the system as a target toward which perceptions are to be adjusted. I
doubt whether there is any goal of survival explicitly represented in a
rat, such that the rat could recognize survival as either a perception
or a goal.
Yet the goals which are explicit in the rat (such as filling its belly,
maintaining a specific body temperature, achieving sexual satisfaction),
if achieved, result in the rat's survival. From your point of view that
is a goal; from mine it is only a consequence. It may be a goal of the
breeder of the rats, but in my view it is not a goal of the rats.

I said:

I don't think that a 10% improvement in performance is going to
make any demonstrable difference in fitness.

You reply

     Well, start at basics: a 10% improvement in fitness WILL make a
     significant difference over just a few generations. That is the top
     level. Then go down a level: how can fitness be improved? Maybe by
     getting to the food fastest. Maybe getting to the lady fastest.
     Maybe by climbing a tree faster upon discovery of a tiger. Maybe by
     taking better care of the kids. Lots of things, but all of them
     have this "better" in them, somehow. It's not one single unique
     aspect that will improve fitness, probably -- humans and bacteria
     have done it and do it very differently but just as well -- but it
     obviously is related to doing things better.

Yes, an actual improvement in fitness will, by definition, make a
difference over the generations. But I was speaking of a different
claim. The assumption you offer is very popular: that if there is an
improvement in
performance (however small), it will create an improvement in fitness.
But this is more a statement of faith than a report on the facts. Does
it really matter to a rat's fitness if it becomes able to reach the food
in 10 seconds rather than in 20 seconds? Under some circumstances,
perhaps so -- perhaps the rat gets so little food on each trial that
this improvement would effectively double its intake. But in other
circumstances, the rat would simply start eating 10 seconds sooner, and
obtain all the food it needs in either case. The coupling between most
specific performances and the ability to reproduce one's kind is, in
most cases, very weak. A mildly hungry rat can copulate just as
enthusiastically as a well-fed rat. And the available females, if
impregnated, are just as pregnant if the male rat took 10 minutes or 20
minutes to find them, and are unaffected for the next x weeks (whatever
the gestation period and time to recovery of fertility).

My point is that the relationship between performance and fitness is
buried in a sea of uncertainty, and is probably significant only when
the relationship is very obvious. I don't think that it (normally)
matters whether a perception is brought within 5% or 0.05% of its
reference level. There is a point where improving the quality of control
could not make any difference in fitness over 10,000 or 100,000
generations. In effect, there must be a region of variation in quality
of control over which it simply makes no difference to survival of
species exactly how good the control is. If the predator can bring its
body within reliable claw-range of the prey, controlling that
relationship is being done as well as it needs to be done. Giving the
predator the ability to control the relationship 10 times as well would
make no difference in its ability to grab the prey, and thus to feed
itself until it is past the age of reproduction or influencing
reproduction. Whether a person walks through the exact center of a
doorway or six inches to either side of center makes no difference in
getting through the doorway. If a driver keeps the car within 0.1 mm of
the center of the road, he will wear out his front tires very quickly.
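The "good enough" region being argued for here can be sketched numerically. This is purely illustrative: the `fitness` function and its threshold are invented, not taken from any biological data.

```python
# Illustrative sketch (invented numbers): a saturating fitness
# function. Below some threshold of control error, further
# improvement in control quality adds nothing to fitness.

def fitness(control_error, threshold=0.05):
    """Reproductive fitness as a function of control error.

    Errors under `threshold` are "good enough": the predator is
    within claw range, the walker clears the doorway. Larger
    errors degrade fitness smoothly.
    """
    if control_error <= threshold:
        return 1.0                      # flat region: no selection pressure
    return threshold / control_error    # falls off once control is poor

# Tightening control from 5% error to 0.05% error changes nothing:
print(fitness(0.05), fitness(0.0005))   # both 1.0
print(fitness(0.5))                     # 0.1: here quality of control matters
```

On the flat region, selection cannot distinguish a 5% controller from a 0.05% one, which is the point being made above.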

RE: artificial cerebellum

     Where can I find it? Sounds very interesting!

I'll send you the code. The basic scheme was reported in the Proceedings
of the first European CSG meeting in Wales, two years ago (or is it
three now?).

···

subject: the effect of a change in performance on fitness
----------------------------

Calculation of the inverse of dynamic functions has always been
the main problem of control engineering, hasn't it?

     No, definitely not. Calculations are never the problem; they are
     easy. The only problem might be that they take too much time. In
     that case, good approximations can be used. But time is less of a
     problem as well. Wait three years and computers are ten times
     faster...

Are you saying that finding the inverse of differential equations is no
longer a problem?

     The main problem of control engineering was and always will be
     discovering good models of what is "out there". Not so simple that
     they do not adequately represent reality, not so complex that they
     become ununderstandable. It is this compromise between simplicity
     and faithfulness that will always be the most pressing problem, I
     think. In a PCT model, that would mean a model of the world (the
     "world" part in your simulations: what is the effect of an action
     on the ensuing perception) that is chosen in such a way that the
     "disturbance" is neither too large (this indicates an
     insufficiently faithful model) nor too small (this indicates an
     unnecessarily complex model).

This may be a problem for the modeler, but it is a problem for the
organism ONLY if the organism works according to your model, which
requires internal world-models. In my Little Man model of motor control,
there is no internal world-model, so the organism being modeled does not
have to know anything about arm dynamics. If motor control worked as
your model says it does, the spinal cord would have to contain
calculations of the inverse kinematics and dynamics of the arm. The only
place where kinematics and dynamics enter in my model of motor control
is in converting output torques into the resulting accelerations,
velocities, and positions (in angle) of the joints. In the real
organism, this "calculation" would be done by the physical properties of
the arm. Have you looked at this model enough to understand how it
works?

What do you do when there isn't any inverse, or when there's no
analytical method for finding it?

     There is always a "best" inverse. Mathematics calls it a
     "generalized inverse" or a "pseudo-inverse". You can think of this
     as finding the major factors, as in factor analysis, or the major
     eigenvalues, as in matrix analysis, and considering the remainder
     as "noise", in the sense that its contribution is small and can be
     disregarded.
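For readers unfamiliar with the term, the "pseudo-inverse" mentioned here is a standard construction (the Moore-Penrose pseudo-inverse). A minimal sketch using NumPy's `pinv`; the matrix and vector are invented examples:

```python
import numpy as np

# A "best" inverse when no exact inverse exists. For an overdetermined
# system A x = b with no exact solution, pinv(A) @ b gives the
# least-squares x, in line with the quoted point that the small
# remainder is treated as "noise" and disregarded.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 4.0])       # inconsistent: no exact solution

x = np.linalg.pinv(A) @ b           # least-squares "best" solution

residual = A @ x - b
# The residual is orthogonal to the column space of A -- the defining
# property of the least-squares solution:
print(np.allclose(A.T @ residual, 0.0))   # True
```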

I thought that one of your big selling-points for your model was that
(in contrast to the PCT model) it could produce the EXACT output needed
to make the perception match the reference. If the environment function
is f, then the model function, f', is the same except for adjustment of
the parameters. After the parameters converge, the inverse of f', driven
by the reference signal, will produce an output from the plant that
exactly matches the reference signal. Are you now saying that there is
some limitation on this exactness? And if that is true, then why object
to the PCT model on the basis that it can't produce an EXACT match? If
two models can produce essentially the same performance, don't we have
to choose between them on the basis of their complexity?
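The adaptive scheme under discussion can be sketched minimally. This is an assumption-laden toy, not Hans's actual model: a scalar plant y = k·u, an LMS update for the single model parameter, and the model's inverse driven by the reference. All values are invented.

```python
# Minimal sketch (assumptions: scalar plant, noise-free world, LMS
# parameter update): fit a model f' of the plant, then drive the
# inverse of f' with the reference. The output matches the reference
# only to the precision to which the parameter has converged.
k_true = 2.5        # the "world": unknown to the controller
k_hat = 1.0         # the model's parameter estimate
r = 4.0             # reference signal
mu = 0.05           # adaptation rate

for _ in range(200):
    u = r / k_hat                        # inverse of the model f', driven by r
    y = k_true * u                       # the world responds
    k_hat += mu * (y - k_hat * u) * u    # LMS update toward the true gain

print(round(k_hat, 3), round(k_true * (r / k_hat), 3))
```

After convergence the plant output tracks the reference closely, but never EXACTLY: the match is only as good as the parameter estimate, which bears on the question asked above.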

My artificial cerebellum would work just fine with moderate
nonlinearities in spring constant and damping coefficient. How
would your model handle them?

     If nonlinearities are so moderate that the mountain still slopes
     upwards, the top will be found. It is local extrema that are the
     problem; you might get stuck there if you don't know whether a
     higher peak exists.

Without nonlinearities, the "plant" in the A.C. demonstration is
described by

   m(d^2x/dt^2) - k1(dx/dt) - k2(x - x0) = [driving force].

What is the inverse of this equation that you would use in your model?
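For contrast, the plant can be run forward with no inverse at all, which is the sense in which the physics itself does the "calculation." A sketch with invented constants; note that with the sign convention of the equation above, negative k1 and k2 correspond to ordinary damping and a restoring spring.

```python
# Forward (not inverse) integration of the plant as written:
#   m*x'' - k1*x' - k2*(x - x0) = F
# i.e.  x'' = (F + k1*x' + k2*(x - x0)) / m.
# All constants are invented for illustration.
m, k1, k2, x0 = 1.0, -0.5, -2.0, 0.0
x, v = 1.0, 0.0          # start displaced, at rest
F = 0.0                  # no driving force
dt = 0.01

for _ in range(8000):    # 80 time units, semi-implicit Euler
    a = (F + k1 * v + k2 * (x - x0)) / m
    v += a * dt
    x += v * dt

print(abs(x - x0) < 1e-3)   # the mass has settled near x0
```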

     I think that every control engineer would say that every variable
     that changes in exactly the right way is "under control" or
     "controlled", whether it is compared to some physically existing
     (reference) signal or not, whether in a loop or not.

I would like to hear from some other control engineers on this point.

You may not realize it, but you're actually including the engineer in
this definition of control. What is "exactly the right way" for the
variable to change? Is that not defined by a reference condition in the
engineer's brain? And isn't the engineer comparing a perception of the
way the variable DOES behave with the way that is "exactly right," as a
basis for adjusting the parameters of the system that is producing those
variations? Could you design an open-loop "control" system so that
nobody would ever have to monitor its output and adjust it to behave in
the right way?

All you're doing is substituting a control system in the engineer for an
automatic control system that can do its own adjusting. But this was the
original point of inventing automatic control systems: to have the
system itself make the adjustments needed to maintain variables in a
predetermined state, despite changes in the environment and even in the
system itself. This is a particularly important point in regard to
living control systems, where there is no engineer standing by who knows
the "right" value of the variables affected by behavior -- where only
the control system itself has any basis for judging the right value.

RE: balancing a jointed stick

     What is action and what is perception? The only allowed action is
     the movement of your finger (in a demonstration, an X-Y-recorder is
     used) in the horizontal plane. What you need to control is
     verticality, but what that is must be better specified, i.e. you
     need to construct an input function. But how? What _can be_
     perceived?

What can be perceived? The positions of the various parts of the control
stick, given the means of sensing. The perception that is wanted is for
the three sticks to be aligned and vertical, as nearly as possible. My
solution would be much like yours: to construct a system that would
control the vertical angle of one stick in the presence of disturbances,
and then to build two more systems on top of it. The output of the first
system would be the position of the support; the output of the second
would be the position of the top of the first stick, and so on. Just as
you say.

I would not control for the "highest position" of the end point, because
the error in vertical position is ambiguous. If the actual position is
less than the highest position, there is no indication of which way to
move the support. I would substitute four controlled perceptions: the
lateral displacement of the top in x and y, and the angle with the
vertical in x and y. We want all of these to be zero. Now the sign as
well as the magnitude of each error is useful in telling us which way to
move the support.
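The two-level arrangement described above can be sketched with a toy kinematic model. The gains and the disturbance are invented, and there are no stick dynamics; the point is only the structure: the higher system's output is the lower system's reference.

```python
# Toy two-level cascade (invented kinematics and gains). The higher
# system controls the perceived top position by setting the reference
# for the lower system, whose own output moves the support.
support = 0.0
offset = 0.7        # disturbance: lean of the stick, unknown to the controller
top_ref = 0.0       # we want the top over the origin

for _ in range(500):
    top = support + offset                           # higher-level perception
    support_ref = support + 0.5 * (top_ref - top)    # higher output = lower reference
    support += 0.2 * (support_ref - support)         # lower loop moves the support

print(abs((support + offset) - top_ref) < 1e-3)
```

Neither loop knows the value of the disturbance; the top nevertheless ends up over the reference because each level corrects only its own error.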

If I had the equations of motion of the jointed stick, I'd love to try a
PCT model of this. I'm not very handy with dynamical equations -- I'd
have to have them given to me.

     I guess it was (mainly unconscious) considerations like these that
     originated my reluctance to accept that, in a world with sometimes
     very flexible linkages, only the final effect (perception) is
     "controlled" and to insist that the values of ALL variables in the
     whole loop (chain) must be "under control". Not necessarily at
     PRESCRIBED positions; only the top of the highest stick has an
     explicitly prescribed position. But also, for intermediate links in
     the chain, at (explicitly or implicitly) COMPUTED (or otherwise
     derived) positions.

This is related to the concept of hierarchical control. It is not only
the top goal that has to be under control; all intermediate goals must
also be under control for this scheme to work. The analogy with the
stack of jointed sticks is appropriate, if not exact. And the
intermediate goals each (in general) serve more than one higher goal; in
effect the whole system has to solve a set of simultaneous equations so
that all variables at one level are brought to the states that minimize
error in all the higher-level control processes to which they
contribute. Rick Marken's spreadsheet demos show how this works. The
basic method is the method of implicit solutions invented (or at least
widely used) long ago by analogue computing people. There is never any
actual solution of the simultaneous equations; the system simply settles
to the state nearest to a solution.
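The "settling" idea can be sketched as several controllers, each reducing only its own error, jointly converging on a solution of p = A·o that none of them ever computes explicitly. The matrix, references, and gain here are invented for illustration.

```python
import numpy as np

# Implicit solution by settling (invented example): each control
# system nudges its outputs to reduce its own error; collectively
# the outputs drift to a solution of the simultaneous equations.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])    # how outputs combine into perceptions
r = np.array([1.0, -2.0, 0.5])    # reference for each controlled perception
o = np.zeros(3)                   # outputs, initially zero
k = 0.05                          # loop gain (small, for stability)

for _ in range(5000):
    p = A @ o                     # perceptions
    e = r - p                     # each system's own error
    o += k * A.T @ e              # each output nudged to reduce the errors

print(np.linalg.norm(r - A @ o) < 1e-6)
```

Nothing in the loop inverts A; the system simply settles, in the spirit of the analogue-computing method mentioned above.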

RE: chance

A supercomputer simulation given precise starting
conditions could probably predict the coin's final position with
considerable success.

     The problem of a simulation is that it is only that: it
     incorporates all of our (inexact) "knowledge" (obtained through
     "averaging" previous results of the same class) about the laws that
     apply, but this knowledge is just an approximation. And that in two
     ways: each individual law will be inexact, and we won't know that
     we have taken all applicable laws into consideration. In practice,
     only validation of the supercomputer's program would indicate how
     good it is.

When I spoke of "considerable success" I meant success relative to a
validation. Given sensors to measure the initial position and angular
velocity of the coin, and a good model of the dynamics of air resistance
and interactions with the landing surface, the computer should be able
to predict heads or tails where a person would guess wrong close to half
the time. So what would seem a chance outcome to the person would be
highly predictable to the computer programmer.

     It is -- after a great many supercomputer hours -- possible to
     compute an electron's orbit around a single proton with good
     (although far from perfect) accuracy, yet accurate computations of
     orbits in larger atoms are still impossibly time-consuming.

So at what point do we say that the electron orbit is "probabilistic"?
Is this only a matter of how fast our computers are? Will we some day be
able to predict the moment at which a radioactive decay of a single atom
will occur? Is there truly any such thing as _objective_ probability?

The biggest objection I find to probabilistic treatments comes from
the use of a model that is obviously simpler than the observed
processes it is supposed to represent.

     Then you will object to ALL applications of probability theory. Its
     approach is to consider some things as "identical", because only
     then can we average and predict some partial results with good
     accuracy, although decidedly not everything to full accuracy.

Not at all. I think applications of probability theory are a valid
substitute for quantitative modeling, when we have no way to generate a
quantitative model. Probability theory, as I see it, is simply a
formalized way of using past experience to predict what will happen
next, when we have no better basis for making a prediction. The basic
limitation of the statistical approach is that it never offers an
explanation for why some antecedent situation leads to a particular
consequence. It offers no link to the next level of explanation (up or
down), since the most it can offer is a description of what leads to
what. Statistically, we can show that when animals are deprived of food
for varying periods, their activities increase as the time since the
last feeding increases. But this will never lead us to suspect that the
animals are hungry -- that being without food has any relation to
processes inside the animal.

     Where you say this, you come remarkably close to my own position,
     which is not to simply accept what others call "chance" or
     "disturbance" but to try to model it as well as we can. Yet, at
     some point we must throw in the towel and work with inaccurate
     models, if only
     because we have to control NOW, when our knowledge is still
     imperfect.

I agree completely. When we have no good model, we can't just throw up
our hands in surrender. At least we can extrapolate from past
experience, which is a lot better than behaving at random if we have
enough consistent experiences.
-----------------------------------------------------------------------
Dag Forssell (960708) --

Dag, your post came through in some strange alphabet-soup encoding,
miles and miles long.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Hans Blom, 960709d]

(Bill Powers (960708.0930 MDT))

Lots of good comments in your series of posts. We definitely have a
different approach to control theory. Mine is single-mindedly aimed at
explaining control phenomena in the behavior of organisms, while yours
is more general, dealing with what is possible to engineer into control
systems. I'm sure the two approaches overlap and that in the end yours
will be the more powerful.

No, I think they complement each other. Your approach looks at
a certain level from a position at that level, while I have more
of a helicopter view, looking down on those lower levels from a high
level perspective. Neither is better or more powerful than the other.
We need to see things from _all_ possible perspectives, I think.

But because of the very general nature of
your approach, you include a lot of considerations that I think are not
relevant to the behavior of organisms, which were not designed but
evolved.

Whereas from my point of view there is little difference between a
design process and the evolutionary process. In fact, I consider
every design process to be _part of_ the evolutionary process.
Evolution has not stopped, as many people think; it proceeds through
everything that happens in the world.

You take a more externalized view than I do -- to you, a "goal
of the system" is simply some end that the system can be shown to
accomplish (an example is the goal of "surviving"). When I speak of
goals, I am always speaking of something that is explicitly represented
in the system as a target toward which perceptions are to be adjusted.

Maybe only an externalized view allows one to see certain things. For
instance, even if goals are explicitly represented in an organism
(which I doubt as much as the concept of the "grandmother cell"), it
will be practically impossible to pry one goal apart from all the
others. At all times, we have a great many concurrent goals, and The
Test would only be possible in an experimental context in which we
can keep all other goals constant -- which is impossible. (The
alternative would be to know those other goals, so that we can
compensate for their effects, but that would produce an insoluble
circularity.) That does not mean, of course, that we cannot suggest
plausible goals. We can and we do. But then survival is as good (and
as bad) a suggestion as keeping a cursor on a target. After all,
normally we pretty well defend against attempts to finish us off.
Without exception? No. But exceptions also occur in cursor tracking.

But this is more a statement of faith than a report on the facts. Does
it really matter to a rat's fitness if it becomes able to reach the food
in 10 seconds rather than in 20 seconds? Under some circumstances,
perhaps so -- perhaps the rat gets so little food on each trial that
this improvement would effectively double its intake. But in other
circumstances, the rat would simply start eating 10 seconds sooner, and
obtain all the food it needs in either case.

I think that you misunderstand the concept of fitness. An organism's
actual fitness can only be established _after the fact_, after a
(large) number of its generations have been observed, at which time
the organism itself has been dead for a long time. So it is with a
control process in general: how well (or whether) something controls
can only be established after we have observed it for some time. At
no single point in time is it meaningful to ask: Am I in control now?
That is undecidable. Yes, my point of view is different ;-).

generations. In effect, there must be a region of variation in quality
of control over which it simply makes no difference to survival of
species exactly how good the control is.

In biology, this phenomenon is called genetic drift. It is assumed to
be the motor behind speciation. In daily life, it causes different
people to reach different conclusions or to perform different actions
even if their knowledge and their perceptions would be almost
identical. So you are right, the phenomenon is very important. Often,
enough degrees of freedom are left so that different ways of
controlling are possible which are equally good. All depends on the
flatness of what is called the fitness landscape.

Are you familiar with "genetic algorithms" and their (control)
properties? They are a must when studying "social" control.
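For readers who have not met them, a genetic algorithm can be sketched in a few lines. Everything here (the fitness function, population size, mutation scale) is invented for illustration.

```python
import random

# Minimal genetic-algorithm sketch (all parameters invented): a
# population of candidate "settings" evolves toward the maximum of a
# fitness function by selection, crossover, and mutation alone --
# no individual ever computes the optimum explicitly.
random.seed(1)

def fitness(x):
    return -(x - 3.0) ** 2        # peak at x = 3, unknown to the population

pop = [random.uniform(-10, 10) for _ in range(40)]

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                        # selection: keep the fitter half
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0                 # crossover: blend two parents
        child += random.gauss(0.0, 0.1)       # mutation: small random change
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(abs(best - 3.0) < 0.2)    # the population has found the peak
```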

I'll send you the code. The basic scheme was reported in the Proceedings
of the first European CSG meeting in Wales, two years ago (or is it
three now?).

Thanks.

Are you saying that finding the inverse of differential equations is no
longer a problem?

No. I was saying that computations are not the main problem;
conceptualizations are.

This may be a problem for the modeler, but it is a problem for the
organism ONLY if the organism works according to your model, which
requires internal world-models. In my Little Man model of motor control,
there is no internal world-model, so the organism being modeled does not
have to know anything about arm dynamics.

From my point of view, your Little Man model _must_ have an internal
world-model or it would not be able to control. The model may be
implicit, however, rather than explicit. I have no time right now to
expand on this.

If motor control worked as
your model says it does, the spinal cord would have to contain
calculations of the inverse kinematics and dynamics of the arm.

Why the spinal cord alone? And wouldn't you agree that a nerve cell
performs hardwired "calculations"? About inverse models, see my
earlier post.

I thought that one of your big selling-points for your model was that
(in contrast to the PCT model) it could produce the EXACT output needed
to make the perception match the reference.

Never EXACT. To any desired precision (in a predictable world)
depending on the model's complexity. In a demo, we can make the
structure of the world-model identical to that of the world. Only
then can the result be exact. And that only if the laws of the
artificial world are so simple that they can be discovered by the
controller's internal mechanisms.

Without nonlinearities, the "plant" in the A.C. demonstration is
described by

   m(d^2x/dt^2) - k1(dx/dt) - k2(x - x0) = [driving force].

What is the inverse of this equation that you would use in your model?

I don't understand the question. Do you want an explicit expression
for x as a function of the driving force? Solve the differential
equation. It's a simple second order system. What's the problem?

You may not realize it, but you're actually including the engineer in
this definition of control.

I realize that perfectly well. It's all perception, remember? One
result of this profundity is that there is no "objective" definition.
I'm willing to accept that. Are you?

If I had the equations of motion of the jointed stick, I'd love to try a
PCT model of this. I'm not very handy with dynamical equations -- I'd
have to have them given to me.

Each stick is described by a formula like the one you give above.

That's all for now.

Greetings,

Hans