[Hans Blom, 931207]
Let me try to catch up on some questions and remarks, old and new.
(Bill Powers (931122.1315 MST))
My models don't do any
predicting (except, perhaps, for functions which you "could see
as predictions" if that's what you wanted to see). Maybe I'm
missing your point.
You may be. Your models do what I call a prediction as soon as you
introduce a "slowing factor" or some such, so that your system contains
formulae like
x [k] := f (x [k-1])
This is a "prediction" from time k-1 to time k, using "predictor" f.
By "prediction" do you mean the analyst's ability to predict the
state of the controlled system at some time into the future? Or
the ability of the control system itself to do this prediction?
These predictions are purely done by/inside the system itself.
In a closed-loop system, prediction isn't a problem; it isn't
even necessary. The reference signal provides the only
"prediction" necessary.
Prediction is not necessary, but only if you continuously have and process
all the perceptual information that is needed as feedback for all your
simultaneous goals. I contend that at all times most of that information
is missing and that we, moreover, would not have the means to process all
that information in real time. Why do you think we make errors?
It's funny. In some ways, you seem to see the process of control
as involving a lot of noise, so control is always uncertain and
involves statistical predictions. Yet in other ways, you seem to
assume that the environment is so free of disturbances that open-
loop behavior is the norm.
I see no conflict here. Luckily, our goals need not be realized EXACTLY.
We seem to be quite robust against disturbances, and we create our world
that way. The lane in which I drive is much wider than necessary, were I
able to drive exactly in its middle. Here, control is allowed to be
uncertain and involve statistical variations. Yet, on the Autobahn I assume
-- thus far correctly -- that I do not suddenly encounter a large pothole.
Here I rely on prediction in order to be able to drive at the speed that I
do; feedback would be too late. Am I missing your point?
It seems to me that this approach tries to achieve control
through measuring global stochastic properties of the signals
instead of acting moment by moment on the basis of the perceived
situation. Maybe there are situations in which this kind of
control is called for, but I'll bet that the control that is
achievable in this way would hardly merit the term "control" in
comparison with other kinds.
That is a matter of how you define control. In my view, control is still
control if it includes episodes of going ballistic. Are YOU controlling
your way to Paris on board your plane? Or are you just part of the plane's
cargo during that part of the trip? What is the "correct" view?
Are you saying that ALL perception is imagined?
Basically, yes. As you may recall from my diagram, all control is based on
the internal map. Perceptions, in my view, are used to tune that inner map
on which control, in turn, is based. For a long time, I have been
surprised about how little influence "objective truths" have on people. A
long-standing example from science is how Aristotle, and after him two
millennia of others, "saw" an object fall at constant velocity. It seems
that inner maps may be much more important than moment-to-moment
perceptions. For more examples, check your encyclopedia for "delusion" and
"prejudice".
PCT models this too, I think, but in a less explicit way. In a PCT model, a
perception is used in order to satisfy a goal. Perceptions that have
nothing to do with goals are simply discarded as "noise". How can you have
accurate perceptions as long as you have goals? Sounds like Zen? You're
right...
Sometimes we seem to be converging, and then you say something
like this that makes it seem that we are visualizing two entirely
different kinds of systems having nothing to do with each other.
I like the metaphor of both of us watching a statue, both from our
individual perspectives. We both see the same thing, at one level, yet we will
violently disagree about the particulars. That is, as long as we remain so
immobile that we cannot change positions...
Optimal control theory describes how "Inner Reality" can be
calibrated using "real true Boss Reality". It also describes
the limitations of this process.
So far I don't see how "real true Boss Reality" even gets into
the picture. Is it assumed that the internal x[k] is a real true
representation of Boss Reality?
Assume that you have what control engineers call a "black box", a system
with an input and an output, and that it is your goal to "get to know" the
object. If the "black box" is a hi-fi amplifier, you know how to do this:
offer a test signal to the input and measure the output. You can get to
know the amplifier's frequency transfer function, the amplifier's
distortion at different frequencies and at different power output levels, its
music and sine wave output power, and a lot more, if you're really
interested. But if you are forbidden to open the box, you will not be able to
know its schematic.
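If you want the procedure spelled out, here is a sketch (Python; the box's
innards and the test frequencies are invented stand-ins, since by
assumption we may only watch its input and output):

import numpy as np

def black_box(u):
    # stand-in for the unknown amplifier: a simple low-pass gain stage
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 2.0 * u[k]     # hidden from the experimenter
    return y

fs = 1000.0                                     # sample rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
for f in (10.0, 50.0, 200.0):                   # test frequencies, Hz
    u = np.sin(2 * np.pi * f * t)               # test signal at the input
    y = black_box(u)
    gain = np.max(np.abs(y[len(y) // 2:]))      # crude steady-state amplitude
    print(f, gain)                              # points on the transfer function

Three frequencies, three points on the transfer function; add more test
signals and power levels if you are really interested.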
If you do NOT know that the black box is an amplifier, the problem is much
harder. What is it exactly that you need to establish about the box's
transfer function? Only if you know what it does will you be able to use
it at a later time. Your initial guess will probably be that it is an
(almost) linear system, if only because you have the tools to analyze
those, and you proceed in much the same way as with a power amplifier. If
it is not linear, you might want to establish the types of its non-
linearities, a much harder problem. And if the black box contains memory,
you will need not only to establish its transfer function for each
permutation of memory bits (assuming binary logic, to make it easy), but also
how the memory changes state, an even harder problem. The latter problem
cannot be solved in practice, even for moderately complex systems.
Combinatorial explosion, they call it. Imagine being handed a 160-pin IC of
unknown function and without data sheet...
"Life is a game; its goal is to discover its rules." Who said that?
I see our coming to grips with the world in much the same way. Evolution
has given us some of the tools to come to a (partial) solution of how the
world works, i.e. how to use it to achieve our purposes. All these tools
are finely honed to the one task of transmission of our genes.
Do not misunderstand me: no INDIVIDUAL has as his/her task to transmit
his/her genes to the next generation. We are left great freedom to
choose our individual goals (or does it only seem that way?). Evolution
does not depend on individuals. Nor is this a description of how HUMANS
function, for it is valid for anything alive. I am as interested as you
are in what makes a human a human, me me, and you you. But I only have
some very, very tentative and crude answers.
(Rick Marken (931124.1100))
The "error" that resulted was an irrelevant side effect of this
FEEDBACK control process.
I can get mightily annoyed by the term "irrelevant side effect". If this
is what PCT leads to, I will henceforth consider PCT an evil. I also do
not believe in this term, nor do I accept it as an excuse. First, the
"irrelevant side effect" that you introduce into the world through your
actions may be something major for somebody else. Second, when you notice
the harm that your "irrelevancy" has done, you yourself might be very
sorry for having done so. In that way, your act would be neither
irrelevant to the other nor, now, to you. Rather than an "irrelevant side
effect", I call it BAD (predictive) control. The law of karma: your deeds
come back to you, whether you intended them or not. You cannot simply
shrug off "unintended" byproducts of your actions as "irrelevant side
effects", as if you were responsible only for the INTENDED parts of your
actions.
It is not that you perpetrated evil in this case. Your mistake did not
hurt me. Your attitude did. I know people here in the Netherlands who
would surely have been greatly hurt by your remark. That war, you know, has
struck some very deep wounds.
(Bill Powers (931125.0215 MST))
Notes from an observer: ...
This piece is quite a literary achievement, Bill! Have you ever considered
literary art as a vehicle for your thoughts? Your "flow of consciousness"
essay reminded me of Pirsig's "Lila: An Inquiry into Morals". Lila is not
quite the literary masterpiece that his "Zen and the art of motorcycle
maintenance" is, but great nonetheless. Have you read it? When I did, I
was tempted to -- and did -- translate whole sections into PCT jargon,
equating "quality" with "zero error". Art often makes as much an impact on
how people think/perceive reality as science. More, maybe. Your fiction
would be good, in several meanings of the word: great style, exciting
narrative, accurate observations, very plausible philosophy in the
background. No smiley, this is serious.
Some remarks about the contents, nonetheless.
Can Hans' model have a conflict?
No, it cannot. And neither can people, in a way. We CAN have incompatible
goals, however. We want to eat our cake and have it. That is human, all
too human.
Can it know it has a conflict?
Sure, why not? But then it will consider the different goals' priorities,
do some machine arithmetic as to future outcome and discard one goal or
find a best compromise, very much the way people -- or chess computers --
do.
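If you want that "machine arithmetic" spelled out, here is a trivial sketch
(Python; the weights are invented): for two incompatible goals for one and
the same variable, the weighted-least-squares compromise even has a closed
form.

def best_compromise(goal_a, goal_b, weight_a=1.0, weight_b=2.0):
    # minimizes weight_a*(x - goal_a)^2 + weight_b*(x - goal_b)^2 over x
    return (weight_a * goal_a + weight_b * goal_b) / (weight_a + weight_b)

print(best_compromise(0.0, 3.0))    # part of the cake eaten, part of it kept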
Half an hour in the life of a hierarchical perceptual control
system, babe, that's really what it is. All you have to do is pay
attention.
That's really WHAT IT LOOKS LIKE, babe. It's all perception, you know. Or
can you REALLY calibrate against Boss Reality?
(Bill Powers (931125.0730 MST))
The reason I use a hierarchical model is that the human organism
appears to be organized hierarchically, at least at the lower
levels and quite plausibly at the higher levels, too.
I do not care much for a hierarchical model, for several reasons. First,
there are the things that McCulloch pointed to and why he talked about
"heterarchies": it just isn't always true that if we prefer A to B and B
to C, that it follows that we prefer A to C. We are just not that logical,
or logic isn't that human. Then there is the time aspect: sometimes my
hunger is stronger than my thirst, sometimes it is the other way around.
Priorities vary. Third, the hierarchical level seems to be very much in
the eye of the beholder. Assume that it is my goal to live for the next
ten years. Then it must be one of my subgoals to keep my heart beating.
But now assume that it is my highest goal to keep my heart beating for the
next ten years. Then I must have the subgoal of staying alive all that
time, with all that it implies: getting food, the money for it, a job, ad
infinitum. Which is the "higher" goal? What are the criteria? I'm stuck in
circles. That is why I prefer to talk about multiple parallel/cooperating
goals. Why not, after all? There is multi-processing parallel computer
hardware, and "concurrent cooperating processes" is a buzzword in software
circles. Nice metaphors!
You can see the same parallelism in a PCT model, if you care to interpret
it that way. All connections at higher levels of the hierarchy can be seen
as paths going up from some goals and then down to others, which transport
information that keeps the low level goals aligned and cooperating in
simultaneously reaching all goals despite disturbances in the outside
world that no single elementary control system can tackle. See how much
you can read into a PCT model? :->
This cooperation works at the lowest levels as well. I take it as
established, for instance, that one of the functions of the muscle spindle
sensors is to keep the muscle's force properly distributed over all motor
units of that muscle (re Guyton, one of the best modelers in physiology).
There is no need to compute
inverse kinematics or dynamics to produce the required driving
signals.
Predictive control can do without inverse kinematics as well. In effect,
it has a simulation running in parallel to the real thing. Predictive
control provides a prediction of where the system would go if a certain
control trajectory were applied. The error between the prediction and the
desired position is then "controlled away". In the meantime, measurements
keep the model -- and hence the prediction -- as accurate as possible.
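A bare-bones sketch of what I mean (Python; the plant, the model and the
learning rate are invented toy numbers, not a claim about any particular
physiology):

def plant(x, u):
    return 0.95 * x + 0.50 * u        # "the real thing"; parameters unknown to us

a, b = 0.80, 0.30                      # internal model, deliberately wrong at first
x, goal = 0.0, 10.0
for k in range(100):
    u = (goal - a * x) / b             # choose u so the MODEL predicts the goal
    x_pred = a * x + b * u             # prediction of where the system will go
    x_next = plant(x, u)               # measurement of where it really went
    err = x_next - x_pred              # prediction error
    a += 0.001 * err * x               # the measurements keep the model --
    b += 0.001 * err * u               # and hence the prediction -- accurate
    x = x_next
print(round(x, 2), round(a, 2), round(b, 2))

The measurement is used twice: once as the new state, once to tune the
model. No inverse kinematics anywhere.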
As I said, you could represent this entire hierarchical control
system, containing three systems at each of four levels (counting
the visual level), as a single overall control system of the same
form you use to represent all control systems: a parallel vector
processing model. It would, however, be very complex.
Why would it be more complex? It DOES the same thing...
Also, there
would be no indication of the clever tricks nature has found for
stabilizing the limb in a very simple way ...
That may be true, but you have to decide whether you want a "performance
model", i.e. a model that shows the same outward behavior only, or a model
that is as accurate a replication as possible of the full human
physiology. A "performance model" of a hi-fi amplifier may have very different
entrails. If you want to accurately model the entrails as well, you ought
to study physiology. The problem is that we want both in our models. All
too human...
In fact, at the lower three levels at least, I don't see anything
but a hierarchical model as being justifiable.
YOUR lower three levels, the choice of which forces you to leave out the
cooperation between muscle spindles in force distribution, for instance.
Are you sure that disregarding this "detail" is allowed/has no effect?
Incidentally, a model expressed as vectors related through
matrices is mathematically convenient, but in the actual system
the vector notation has no advantages. The actual system must
perform every individual calculation implied by the compressed
notation.
Sure, but the convenient mathematics provides the "helicopter view" that I
need to "understand" the system despite its complexity, much like you
choose a homogeneous hierarchy in order to understand your model.
As I understand it, servos are basically controllers with
variable reference signals, while regulators have fixed reference
signals.
In the HPCT model, this distinction is not really relevant,
because the only disturbances that matter are those that act
directly on the controlled variable, and all reference signals
are variable even if, for the moment, they may be constant.
Huh?
Regulators can be considered to be servos that are "tuned" based on the
additional (feedforward, as one could call it) knowledge that the
reference signal will remain constant for a very long period of time. Servos
can be considered to be regulators that are "tuned" based on the
additional (feedforward, as one could again call it) knowledge that the reference
signal can change any moment now. Basically, they are the same thing,
except that their tuning is based on different "knowledge" (assumptions)
and hence different.
In HPCT I don't believe this distinction is necessary. You can
have both.
Sure you can have both. But then you have neither a good regulator nor a
good servo.
Also, I'm dubious about changing the basic organization of the
model just because the reference signal happens to be steady for
a while.
The basic organization need not change. Just adjusting a few parameters in
some functions (the output function only?) might do the trick. A
suggestion would be to do this in a feedforward way based on the momentary error:
tune as a servo when the error is large, because you want servo (rapid)
behavior; as a regulator when the error is small, because you want
regulator (stable) behavior; and as something in between on intermediate
errors. I used this approach in my blood pressure controller. I can assure
you that it works.
After a moment's reflection: since the error already enters the output
function, you need not change the organization at all. Choosing a somewhat
more complex output function would suffice!
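For instance (a sketch only; the gains and the blending width are numbers I
made up), the output function could blend a fast "servo" gain and a
sluggish "regulator" gain according to the momentary error:

def output(error, servo_gain=5.0, regulator_gain=0.5, width=1.0):
    # the weight shifts smoothly from regulator-like to servo-like
    # as the magnitude of the error grows
    w = abs(error) / (abs(error) + width)
    gain = w * servo_gain + (1.0 - w) * regulator_gain
    return gain * error

Same organization, same single output function; only its shape is a little
less trivial.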
If the
bed is seen about 5 steps away before the lights go out, and you
take 5 steps in the dark, you perceive yourself as near the bed.
What do you mean by "perceive" in this context? Actual, moment-to-moment
perceptions or perceptions that are provided by an "imagination loop" or
through integrations that take place in what I call an "inner map", much
like in an inertial navigation system?
When you characterize "walking" as open-loop, I don't think
you're really considering what is entailed in walking.
I did not characterize "walking" as open loop. What I characterized as
open loop is determining, while walking, my position relative to an object
without having sensory access to that object. No feedback information, no
feedback, yet I do something. I call this non-feedback something
"feedforward". Call it whatever you want to. "Predictive control" is not quite
it; that term is most often used when a given reference trajectory must be
maintained for some time, such as when a pilot must follow a given
trajectory when landing, and not in the situations that I refer to, when during
an instrument landing in heavy fog, three feet off the ground, suddenly
all cockpit instruments explode.
By the way, I frequently use walking and chess as examples for my students
when I talk about our intuitions about what is easy and what is difficult.
Chess is easy: machines now beat upwards of 99 percent of all human chess
players. Walking is difficult: no robot is as yet able to perform anything
near adequate two-legged motion. Most people's intuition is the other way
around.
This is a real eye-opener for many students.
I believe
that walking requires very close to continuous control; if
proprioceptive and tactile feedback vanish for even a moment, the
walking systems will simply cease to work. All experiments with
deafferentation support that statement; operated animals aren't
even expected to function for 16 days, and even after much
practice their functioning is pretty pitiful.
You choose a harsh example. A pitiful but soft enough crash after an
explosion of all cockpit instruments would be enough for me in the
situation above!
One general point. I think that applying concepts of "optimal"
control to living control systems is questionable. What is
optimal depends greatly on the criteria you adopt. Do you want
minimum-time control? Minimum-energy? Minimum overshoot? Minimum
cost? Minimum need for reorganization? Minimum complexity of
circuitry? Minimum demand for precision of components? Minimum
requirement on computing power? Minimum reliance on memory?
Minimum mechanical stress? Every optimization problem is
different; the concept of "the" optimal control system is
meaningless.
Correct. A control engineer can specify what is optimal when designing a
control system. What interests me is what evolution imparted to humans as
"optimal", much like your search for controlled variables. Very few
psychologists have studied "optimal" behavior in humans; most consider the
other side of the spectrum. Maslow comes to mind as one of the exceptions.
You've speculated that living control systems must be optimal
because nature has had so long to evolve them. According to
evolutionists, however, the only criterion for optimality that
has any evolutionary meaning is survival of a species to the
age of reproduction. That is a very broad criterion, and it
doesn't imply optimality of any other kind. We're not talking
about fine-tuning here, but just about acquiring control that's
good enough to bring reproduction to its maximum rate. Evolution
stops with good enough, not optimal, organization. And even that
evolutionary criterion is suspect; the nations doing best on this
earth are not the ones with the highest birth-rates.
Evolution does not stop; it does not have a GOAL, it is a PROCESS. As to
the nations doing "best", notice how culturally biased and shortsighted
that notion is. What would a historian say about that a millennium from
now? Or an Alpha Centaurian social scientist now, for that matter?
My own suspicion is that optimal control is something that
engineers invented, for the same reason that people keep trying
to break records or cut costs of production by another penny.
Given any one aspect of a living control system, a persistent
engineer could probably improve on it significantly.
Don't count on it. There are so many different, unpredictable (unmodelled)
interactions in an organism that changing one tiny aspect (as in just one
defective gene on conception) may be disastrous.
I don't
think we need to model optimal control, although the methods of
optimal control theory that apply to acquiring and improving
control probably will be part of the ultimate model.
Huh?
(Avery Andrews 921127.1400)
Short term is the operative word here - I don't see why people are
bothering to mention that open-loop driving would have you off the road
within 10 seconds or so -- I'd think of 1/4 - 3/4 sec. as an appropriate
time range for that.
Have you noticed how some people -- especially mothers with children, it
seems -- frequently look into the back compartment of their car while
driving? That behavior must be important for them -- why else would they
run the risk? So feedforward must have its uses, even if you'd better not
keep it up for too long.
(Bill Powers (931127.1230 MST))
Incidentally, in your model, what determines the setting of the
reference signal (the goal)?
In adaptive control models, there is always one "prewired" goal, a
variable that has to be minimized. This goal may be composed of several
subgoals that have different weights. Subgoals are most often expressed as
errors squared, such as in the expression (x [i] - xopt [i])^2, where x
may be anything, a direct perception, an internal variable of the model
such as a prediction, and sometimes a parameter that needs to be kept
within a range. The goal is also usually a time integral; few are the
occasions where you need to have a goal realized at one moment only. This
description of goals is different from PCT, where you would see xopt [.]
as the goal.
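In code, the kind of criterion I mean looks like this (a sketch; which
variables enter, and with what weights, is up to the designer):

def cost(x_trajectory, xopt_trajectory, weights):
    # time integral (here: a sum over samples) of weighted squared
    # subgoal errors (x [i] - xopt [i])^2
    J = 0.0
    for x, xopt in zip(x_trajectory, xopt_trajectory):
        J += sum(w * (xi - xi_opt) ** 2
                 for w, xi, xi_opt in zip(weights, x, xopt))
    return J

The one prewired goal is to make J small; the xopt [.] that PCT would call
the goals are just entries inside this expression.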
(Martin Taylor 931129 12:30)
A student of dynamics
has a set of perceptions of the world that work together very nicely.
That posting was a nice review, Martin. For me, systems dynamics is most
fruitfully applied to the question where goals come from. In one of my
earliest contributions I touched on this topic of how a system "acquires"
a "goal" from nothing more than the laws of nature i.e. the properties of
matter. I modelled a microscopic particle, moved about by Brownian motion,
with the single property that its diameter varied as a function of the
concentration of a chemical. I showed that, given a well-chosen relation
between diameter and concentration, the particle "sought" -- purely
passively, of course; it had no means of propulsion -- the maximum
concentration of the chemical, whereby it was quite resistant to disturbances.
The model was just a set of equations/relations that modelled physical
properties. Yet what emerged was this remarkable "purposive" behavior.
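A toy reconstruction from memory (Python; all numbers invented, and the
original was a proper set of physical equations rather than this caricature):

import random

def concentration(pos):
    return max(0.0, 1.0 - abs(pos - 5.0) / 5.0)      # the chemical peaks at pos = 5

pos, visits_near_peak = 0.0, 0
for step in range(200000):
    size = 0.5 * (1.0 - 0.9 * concentration(pos))     # "diameter" shrinks the steps
    pos += random.choice((-1.0, 1.0)) * size          # Brownian kick, no propulsion
    pos = min(10.0, max(0.0, pos))                    # keep it in a finite bath
    visits_near_peak += abs(pos - 5.0) < 1.0
print(visits_near_peak / 200000.0)                    # well above the 0.2 of a uniform walk

The particle spends far more of its time near the peak than a uniform
wanderer would, although nothing in it "wants" anything.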
PCT does not touch upon where the organism's highest level goals come
from. The theory of dynamic systems does just that. I, too, think that a
synergy is called for.
(Bruce Nevin (Mon 931129 16:21:33 EST))
A technical definition of feedforward control is the following: "The
reason for applying control to a process is that there are inherent
disturbances in the process. When none of these disturbances can be
observed at their source, then their joint effect at the output can be
characterized by some stochastic model and a feedback controller which
uses the output error signal to make compensatory control action can
be developed. However, when some of these disturbances can be measured
prior to reaching the output it will usually be more desirable to use
these measurements to make some of the appropriate control action.
This is referred to as feedforward control. More generally, since
feedforward control will rarely eliminate all the sources of
disturbance, some combination of feedforward and feedback control will be
necessary". Quoted from: John MacGregor and George E.P. Box. Technical
Report no 308, "Topics in Control; Chapter 3. Feedforward Control".
As I recall, the "output" in engineering control theory corresponds to
the perceptual input in PCT. But if measurements are made prior to the
erstwhile perceptual input (aka "output"), and if those measurements are
used in some way "to make some of the appropriate control action," then
those measurements or their transforms within the system are additional
perceptual input signals coming in to the control system from the
environment by way of its sensors (the things that accomplish the
measurements). Looks like parallel feedback control to me. Am I
hopelessly lost?
The diagram is as follows:
feedforward signal      ---------------             ----------
---------------------->|?             |             |        |
reference level        |              |             |        |  feedback signal
---------------------->|+  controller |------------>| system |--+------->
           +---------->|-             |             |        |  |
           |            ---------------             ----------  |
           |                                                    |
           +----------------------------------------------------+
The feedforward signal(s) somehow influence the controller in such a way
that control is better with them than without. The way in which the controller
is influenced can vary. The feedforward signal could adjust internal
parameters, or the reference level, or the feedback signal, or any
combination of these. It is parallel control all right, in that more
perceptions are used than the feedback signal alone. The signal(s) is (are)
called feedforward because no attempt is made to control them; they just
provide extra information about how to control the feedback signal better.
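In its simplest form (a sketch only; the gains are invented, and the
measured disturbance need not enter this linearly):

def control_step(perception, reference, measured_disturbance,
                 kp=2.0, kff=1.0):
    error = reference - perception
    # ordinary feedback term, plus a feedforward term that pre-compensates
    # the part of the disturbance we happen to be able to measure
    return kp * error - kff * measured_disturbance

The feedback part still mops up whatever the feedforward part gets wrong.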
(Bill Powers (931120.0850 MST))
... the ... principle that pilots use in landing airplanes,
and that is used in automated systems for hands-off landing. I
believe that the customary term is "predictive control." The
perception you are controlling is a _present-time_ perception
derived by an iterative prediction from other _present-time_
perceptions.
Correct.
Predictive control is a form of model-assisted control, but not
quite the kind Hans Blom has been talking about. In Hans'
approach, the model runs concurrently with the real environment.
In iterative predictive control, the model runs faster than real
time, over and over, so the outcome predicted by the model is
compared with the reference level again and again. The actions
based on the error alter the perceived present-time environment,
which alters the outcome the next time the internal model is run,
which alters the error, which alters the action, and so on.
What you can do with an internal model is a subject that I had
postponed for the future. But now that you mention it: it can indeed be
used to rapidly precalculate "the most probable future" given a reference
trajectory or, the other way around, to precalculate a control trajectory
given the most desirable future. People do this kind of simulation all the
time: "what if ...". Chess computers practice nothing else.
In practice, you do it this way: compute the complete control trajectory
from now until the goal is reached. Assume sampled control. Apply only the
first control value (or vector, if you need to control more than one
variable) of all those computed, i.e. the one computed for the next
instant. Terrible overkill, it seems. But indeed, predictions become
less accurate as time proceeds. The prediction error is used to adjust the
model and hence improves the next predictions. One sample instant later
all is repeated. This is how autopilots do it.
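Schematically (a toy sketch in Python; a real autopilot optimizes the whole
control trajectory and adjusts its model from the prediction error, both of
which I leave out here):

def simulate(a, b, x0, controls):
    x = x0
    for u in controls:
        x = a * x + b * u              # precalculate "the most probable future"
    return x

def plan(x0, goal, horizon=10, a=0.90, b=0.50):
    # crude search for a constant control over the horizon; a real planner
    # would optimize every sample of the trajectory
    best_u, best_err = 0.0, float("inf")
    for u in (i * 0.1 for i in range(-100, 101)):
        err = abs(simulate(a, b, x0, [u] * horizon) - goal)
        if err < best_err:
            best_u, best_err = u, err
    return best_u

x = 0.0
for k in range(20):
    u = plan(x, goal=10.0)             # compute the whole trajectory...
    x = 0.92 * x + 0.48 * u            # ...apply only the first value, repeat

Compute everything, use only the first sample, then do it all over again
one sample later.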
Because predictive control of this kind is iterative, the method
of extrapolation is not critical; even if the prediction is in
error for long times into the future, as the critical event
nears, the extrapolations become shorter and shorter, so errors
of prediction make less and less difference. After the final
prediction, control merges into normal present-time control.
Errors of prediction apply to the whole flight path, not just the final
destination. If the pilot deviates from the flight path ever so slightly,
he is not allowed to land but must make another circle. Pilots hate that.
Modeling this kind of control for human behavior would require a
lot of stipulations about perceptual functions that we don't know
how to model.
Be assured, we know how to model this -- in an autopilot. It is not a
matter of perceptual functions, but of maintaining, adjusting and using an
internal model. In humans, it ought to have more to do with brain function
than with perceptual function. Of course, what we do NOT know is whether
the brain uses a mechanism even remotely similar to the ones control
engineers design. But at least we have an existence proof.
(Martin Taylor 931203 14:45)
Rocks don't
involve negative feedback systems (at the level of _being_ rocks)
Let me nitpick a little with you. When I try to pull the rock apart, the
rock resists with equal force. The elongation that I can produce is slight.
It is as if the rock's length change is a feedback signal for it to pull
back. Physics has something to say about the force that the rock reacts
with when you try to pull it apart. That has to do with a rock being a
rock. Water doesn't quite pull back that way.
A little anecdote. A colleague, an electromechanical engineer, caught me
red-handed reading my CSG mail. So I had to explain. Then he replied: "So
an electrical motor is a control system? I apply a voltage to a small DC-
motor. That determines the number of revolutions that will result. Then I
try to disturb its rotational speed by imposing a frictive load. The
motor's back-EMF now starts to deviate from the applied voltage. This
difference causes the motor to develop a lot more energy, which fights the
slowing down effect of the load. I see all components of a control system
there. Is a motor a control system?"
I was flabbergasted and did not have an answer. Do you?
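(Writing his loop down did not settle it for me either. A crude sketch, all
constants invented: the back-EMF k*w is subtracted from the applied voltage,
and that difference drives the torque that restores the speed.)

V, k, R, J, dt = 12.0, 0.1, 1.0, 0.01, 0.001    # volts, EMF const., ohms, inertia, s
w, load_torque = 0.0, 0.0
for t in range(5000):
    if t == 2500:
        load_torque = 0.05                       # the frictive disturbance
    current = (V - k * w) / R                    # applied voltage minus back-EMF
    w += ((k * current - load_torque) / J) * dt
print(round(w, 1))    # heads for V/k = 120 rad/s; the load pulls it down only a little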
Greetings,
Hans