Model-based control

I'm not sure what you mean by imagined perceptions since in my post I was
suggesting that all perceptions are imagined unless sense data constrain
them. But perhaps I need to think about which levels possess this
characteristic--maybe these seemingly unimagined imagined perceptions
exist at and above one level and not below.

But anyway, I'm not sure why it's a problem updating real-time
disturbances. Perhaps I don't understand what you are referring to here,
but it seems that updating often does NOT occur in real neural systems.
For instance, the world can change quite a lot during saccadic eye
movements without one noticing it at all. Maybe that's not relevant
here, but it came to mind.

So does model-based control "change the diagrams" or can it all be clumped
into the Input box? I think you said the former, but I want to be clear.

Thanks

Mark Olson

···

On Wed, 2 Feb 1994, William T. Powers wrote:

[From Bill Powers (940202.0915 MST)]
Mark Olson (940201.1752) --

Mark, you're talking about model-based control, an elaboration of
controlling imagined perceptions. This may be appropriate for
some of the higher levels of control. The problem at lower levels
is updating the model rapidly enough to handle real-time
disturbances (which the model can't anticipate). I haven't yet
seen any demos of this type of control, and haven't tried any
myself.
---------------------------------------------------------------

[From Bill Powers (960228.0600 MST)]

I've been trying to think about model-based control in the context of
PCT, and the difference between symbolic and analog computation. I'm
quite sure that model-based control does happen, and that in general
outline it happens more or less as Hans Blom and the "modern control
theorists" propose. I doubt rather seriously, however, that it is
implemented in a brain in the same way Hans implements it in a computer.
I also doubt that it exists at the lower levels of control.

As a method of designing artificial computer-driven control systems to
operate "plants" of known nature, model-based control seems to be a
powerful method (although I don't know how well it works in practice --
is this how most contemporary control engineers now design their control
systems?). The question is whether it is also a general model of control
processes in living systems (from the bacterium to the human being).

We need to know how long a person or animal can go without sensory
feedback before losing control. If this turns out to be only a very
short time for a specific control task, then there are much simpler
arrangements than the one Hans proposes that could give the appearance
of continued control for a brief period after loss of sensory
information. If loss of sensory feedback in a particular task leads to
an immediate runaway effect, then clearly there is no model-based
control associated with that task. The basic factual question is, "Do
people act as if they are using model-based control, and if so, under
what circumstances?" The answer can be provided only by experimentation;
there is no way to answer the question on hypothetical grounds, or
through qualitative and essentially data-free anecdotes.

In a comparison of a very simple PCT model and Hans' very elaborate
model-based control model of a simple tracking experiment, Hans' model
controlled slightly better than the PCT model did. But both models
controlled two orders of magnitude better (in terms of remaining error)
than real people do in the same task: a tenth of a percent as opposed to
10 percent RMS error (at the level of difficulty used). Of course Hans'
model was superior in one regard to the PCT model in that it adapted
itself to the situation while the PCT model was simply given fixed
parameters. This advantage was somewhat limited, in that when feedback
was interrupted, Hans' model lost control within a fraction of a second,
essentially just as fast as the PCT model did. But I would like to put
that consideration aside for now, because there are other simpler
methods of adaptation that would also work, particularly if the overall
model has to imitate only the imperfect control that a real person shows
rather than reaching some ideal degree of control.
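For readers who want something concrete, here is a minimal sketch of the kind of elementary PCT loop being compared. The gain, time step, and disturbance are illustrative assumptions of mine, not the parameters of the actual comparison runs:

```python
import math

def pct_track(reference, disturbance, gain=50.0, dt=0.01):
    """Integrating output function: output changes in proportion to error."""
    output, errors = 0.0, []
    for r, d in zip(reference, disturbance):
        perception = output + d      # controlled variable = output + disturbance
        error = r - perception
        output += gain * error * dt  # integrator: high gain at low frequencies
        errors.append(error)
    return errors

# Constant reference, slow sinusoidal disturbance (both assumed for the demo).
n = 2000
ref = [1.0] * n
dist = [0.5 * math.sin(2 * math.pi * 0.2 * i * 0.01) for i in range(n)]
errs = pct_track(ref, dist)
rms = math.sqrt(sum(e * e for e in errs[n // 2:]) / (n // 2))
print(f"steady-state RMS error: {rms:.4f}")
```

Even with fixed parameters, this loop keeps the residual error a small fraction of the disturbance amplitude, which is the sense in which both models control "too well" compared with a real person.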

The basic principle of control that Hans proposes is a method of
adjusting a model and of producing an output that affects both the real
world and the internal model. The internal model receives the output
signal, and responds to it by producing a perceptual signal. For
adaptation, this synthetic perceptual signal is compared with one
derived by sensing the real world, and the difference is used to modify
the model.
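The adaptation principle can be sketched in a few lines. The "real world" here is an assumed one-parameter plant (y = b*u) and the learning rate is illustrative; Hans' actual implementation uses a more elaborate estimator, but the logic -- synthetic perception compared with real perception, difference modifies the model -- is the same:

```python
import random

b_true = 2.5    # unknown real-world gain
b_model = 0.5   # model's initial (wrong) estimate
lr = 0.05       # adaptation rate (an assumption)

random.seed(1)
for _ in range(500):
    u = random.uniform(-1.0, 1.0)   # output signal sent to both systems
    y_real = b_true * u             # real perception, from the real world
    y_synth = b_model * u           # synthetic perception, from the model
    # the difference between real and synthetic perception modifies the model
    b_model += lr * (y_real - y_synth) * u

print(f"adapted model gain: {b_model:.3f}")
```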

With adaptation complete, the model is used by the system in two ways.
First, it produces a synthetic perceptual signal for comparison against
a real one. This involves running the model _forward_. Second, the same
model is also used in its inverse form: its inverse is used to convert
the reference signal, open-loop, into an output signal. This is
something I have only recently realized. When I originally drew a
diagram of Hans' system, I drew an internal feedback loop, as if the
synthetic perceptual signal were being compared with the reference
signal, and the error was producing the output. Hans drew his diagram in
a similar way. But in fact, there is no internal feedback loop of that
kind. The synthetic perceptual signal is not compared with the reference
signal.

What is done instead is to let the adaptation procedure adjust the
parameters of the model; then the model's inverse, with those same
parameters, is used open-loop to convert the reference signal into an
output signal. The internal "loop" is closed only through transferring
the parameters from the forward model to the inverse of the model.
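With a toy plant the open-loop use of the inverse is almost trivial to illustrate. The linear form y = b*u is my assumption, chosen so that the inverse exists in closed form:

```python
b = 2.5   # parameter supplied by the adaptation process

def inverse_output(reference, b):
    """Inverse of the forward model y = b*u, used open-loop:
    no comparator, no error signal, just reference -> output."""
    return reference / b

r = 1.2
u = inverse_output(r, b)
y = b * u   # what the real world does with that output
print(f"reference={r}, output={u:.3f}, result={y:.3f}")

# Perfection depends on the inverse being exact: if the real gain drifts
# to 2.6 while the model still says 2.5, nothing corrects the error.
y_drifted = 2.6 * u
print(f"after drift: result={y_drifted:.3f}")
```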

I think this is a correct diagram of the organization of Hans' model,
where previous diagrams have been, at best, misleading. Notice that the
synthetic perceptual signal is NOT part of an internal loop that
includes the output path. It is used only in adjusting the parameters of
the model.

In a computer program, it is no problem to compute the forward form of
the model and also its inverse. The designer of the program knows the
general form of the real-world system (with parameters of the model
being left for the adaptation to adjust), and therefore can also compute
the inverse form (assuming that one exists). The parameters can be
stored in memory or a register, then recalled for use when evaluating
the output of the inverse function.

In one sense, a nervous system can do the same thing. In Hans' computer
program, we have the evidence that a nervous system (Hans') can set up
symbolic computations, work the math with pencil and paper to compute
the forward and inverse models, and write a computer program to
implement the computations. However, one is permitted to wonder whether
the nervous system has any _other_ way to do the same thing, without
going through the step of conversion to symbols and the execution of
procedures that one has to go to school to learn.

In particular, I am struck by the practical difficulties involved in
having both a forward function and an inverse function, with the
parameters from one being transferred for use in the other. Much has
been made of the perfectness of this method, since in principle the
controlled variable can be made to match the reference exactly, whereas
the PCT model must always contain some error to make it act. But now we
see that this perfection depends critically on being able to compute one
function that is the exact inverse of another. In symbolic mathematics
this can often be done, but if we require that these functions be
represented by neural networks that operate without using discrete
symbols, then not only must the inverse be perfectly computed, but it
must be computed by a different network from the network computing the
forward function. Further, the changing parameters of the forward
function (represented, no doubt, by synaptic weightings) must be
continually and perfectly transferred (somehow) from the forward to the
inverse neural network.

I'm not trying to shoot down the whole concept of adaptive control here;
only the unrealistic method used to implement it, and the spurious claim
that this method is inherently perfectible. The most serious defect of
this implementation is the duplication of functions: the necessity of
doing the computations of the model twice, once forward and once in
reverse. This duplication is unnecessary.

There is, in fact, a way to achieve the same result within any
_practical_ limits of perfection without this duplication of functions.
All that is required is to set up an internal loop that works as a PCT-
style control system. The adaptive part of the model can be left exactly
the same.

                                 ref signal
                                      |
                                      v
            --------------->---- COMPARATOR
           |                          |
           |                          | error signal
           |                          v
           |                     OUTPUT FUNCTION
           |                          |
           |                          | output u
synthetic  ---<--- FORWARD MODEL <----+
perception |  x'        ^             |
           |      param | adjust      |
            \           |             |
              -> adaptive process     |      system
            /                         |
real       |                          |
perception | y                        |
- - - - - -|- - - - - - - - - - - - - + - - - - - - -
           |                          |      environment
           x <----- REAL WORLD <------+ u
                       f(u)

Now there is an output function of the same kind that would be used in
any PCT model. The loop inside the controlling system is a PCT loop. Its
dynamic characteristics are adjusted for stability, and its gain is made
as high as possible or as high as needed (an integrator would produce
effectively infinite gain at low frequencies). The synthetic perception
is directly compared against the reference signal, so only the error
signal enters the output function. This means that small errors in the
functions affect the difference between the synthetic perception and the
reference signal, and are strongly corrected. Also, the same output
function can work over a wide range of parameters of the forward model,
now the only form of the model used by the system. A simple adaptive
function associated with the output function (not shown) and using only
the error signal as input can maintain stability of the system. This
adaptation does not have to be explicitly coordinated with the main
adaptive process.

Notice that if the synthetic perception x' is maintained near the value
of the reference signal, then the output u is effectively the inverse
model function of the reference signal. The difference between this and
the original diagram is that the inverse is achieved by feedback rather
than open-loop computation. It is achieved by what is known in analog
computing as an "implicit" computation.
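The implicit computation can be sketched as follows: the integrating output function drives the forward model until the synthetic perception matches the reference, so u converges to what the inverse model would have produced, without any inverse ever being computed. The plant form and gains are assumptions:

```python
def implicit_inverse(reference, forward, gain=100.0, dt=0.001, steps=5000):
    """Drive u until forward(u) matches the reference; u is then the
    'inverse model' output, computed implicitly by feedback."""
    u = 0.0
    for _ in range(steps):
        error = reference - forward(u)   # synthetic perception vs reference
        u += gain * error * dt           # integrating output function
    return u

b = 2.5
u = implicit_inverse(1.2, lambda v: b * v)
print(f"u = {u:.4f}; explicit inverse r/b = {1.2 / b:.4f}")
```

The same loop works for any monotonic forward function (try `lambda v: b * v**3`), which is the sense in which feedback computes the inverse "implicitly" with no second network and no parameter transfer.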

One last point. A simple switch can now change the system in diagram 2
back and forth between real-time and model-based control. The switch
simply determines whether the perceptual signal entering the comparator
comes from the real perception or the synthetic one. When the connection
is to the real perception, the adaptation of the model continues as
before; the model continues to be updated even though not being used for
control. The obvious advantage is that there is no need to anticipate
disturbances in the real-time mode, and control can be maintained over a
range of parameter variations in the real world. When real-time
perception is lost, this loss can be detected and the switch can be
thrown to the model-based control mode, so that pseudo-control can be
continued at least for a short time. Even the Extended Kalman Filter
model requires a means of detecting that the real-time input has been
lost, so that the rapid adaptations can be stopped.
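A toy version of this switch, using the same kind of one-parameter plant as before (all gains, the adaptation rate, and the loss time are assumptions):

```python
def run(loss_start=300, n=600, gain=20.0, dt=0.01, b=2.0):
    """Real-time control with the model adapting in the background;
    at t = loss_start the comparator's input is switched to the model."""
    u, b_model = 0.0, 1.0
    trace = []
    for t in range(n):
        y_real = b * u           # real perception
        x_synth = b_model * u    # synthetic perception
        if t < loss_start:
            perception = y_real  # switch position: real-time control
            # model keeps being updated even though not used for control
            b_model += 0.1 * (y_real - x_synth) * u
        else:
            perception = x_synth # switch position: model-based pseudo-control
        u += gain * (1.0 - perception) * dt   # fixed reference of 1.0
        trace.append(perception)
    return trace, b_model

trace, b_model = run()
print(f"model gain when input is lost: {b_model:.3f}")
print(f"pseudo-controlled value at the end: {trace[-1]:.3f}")
```

Because the model was kept up to date while unused, pseudo-control continues smoothly after the switch; how long it stays appropriate depends on how fast the real world drifts away from the frozen model.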

···

-----------------------------------------
By investigating human control processes with the above models in mind,
we should be able to say which version of adaptive control is the better
one. If the second model is the better one, we should find that upon
loss of input, the first thing that happens is what would happen in a
real-time control model, probably a tendency for the output to start
changing rapidly. But after a short delay (when some higher system
realizes that the input has been lost) the input switch will be thrown
to the other position, and the model-based control mode will be seen, at
least for some short time. The person will control an imagined
perception, with the output actions becoming progressively less
appropriate to the actual state of the environment. Eventually, the loss
of actual control will be detected through errors in other systems, and
higher systems will cease to use the affected control system.

The same investigations, I predict, will show that many control tasks do
not involve model-based control at all. When the input is lost, control
is lost and is never restored even approximately until the input
returns.

Without experiments, there is little reason to debate the relative
merits of these models. They are all plausible, but they are not all
defensible by data about real behavior.
-----------------------------------------------------------------------
Best to all,

Bill P.

[From Bill Powers (960303.0830 MST)]

Martin Taylor 960303 01:12 --

You have two perceptual signals, one from the model and one from the
real world. How do you combine them to yield "the" perceptual signal
that is compared with the reference signal?

     I hadn't conceived of either summation or averaging, but of a
     merging of information to provide a result that might be more
     accurate than either.

What is this operation you call "merging?" That's a new one on me.

One of the perceptions is the real one, and one is the output of a model
that approximates or idealizes the real perception and is derived from
the real perception. The model is based on past experience with the real
perception. If the modeled and real perceptions were based on
independent samples of the same external situation, then statistically
averaging them together might create a more accurate representation of
the external situation. Of course then you have to ask, "More accurate
compared to what?"

Anyway, the model's output is _not_ an independent measure of the same
external situation. The only information in either signal comes from the
real perceptual signal.

     The genesis of my "same problem" is the transient that occurs at
     the switching moment. The transient is inevitable if the real-time
     signal is suddenly lost, but that is often not the case. In dealing
     with the world, one may well be using sight, sound, and touch on
     the same object.

Now you're talking about a different problem: using information from
several different independent inputs, each giving a different view of
the external situation.

I think the "transient" problem is taken care of by higher systems that
anticipate the loss of information from one perceptual channel (when
this is possible). As your car approaches the black mouth of the tunnel,
you note the position of the steering wheel, and set that up (rather
than steering effort) as the reference position of the wheel just before
you lose the visual information. You rely on the tunnel engineers not to
have altered the curvature of the road suddenly just after the entrance.
The only alternative is to stop the car and wait for your eyes to adapt.
Traffic may remove that alternative.

The only serious problem (model-wise) arises when the real perception
disappears without warning. There is no system that can handle this
problem, unless it's a model-based system that always works off the
model instead of the real perception, as Hans Blom proposes. But the
tradeoff here is how rapidly the parameters of the model (and modeled
disturbances) are allowed to vary. If they can vary rapidly, then you
lose the advantage that the model ignores meaningless variations in the
perceptual signal; also, the time over which the model can continue to
operate properly after loss of input becomes very short. In fact, you
end up with control that's hardly better than what you can achieve with
much simpler real-time control and suitable phase-advances using
derivative feedback. If you adjust the adaptation time constants to
permit the model to operate for a significant period with no real input
data, then you lose the ability to counteract rapid changes in
parameters or additive disturbances. As I pointed out in the "challenge"
series of programs, Hans' model with a modeled disturbance did very well
(slightly better) in comparison with the elementary PCT model -- but it
also lost control within a fraction of a second when the real perceptual
input was unexpectedly cut off. With respect to prediction, it was not
significantly better than the simple PCT model.
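The phase-advance alternative mentioned above is easy to sketch: add a derivative term to the perceptual signal so that it leads the raw signal, with no internal model at all. The derivative weighting k_d is an illustrative assumption:

```python
import math

def phase_advanced(samples, k_d=0.1, dt=0.01):
    """Add k_d times the rate of change (backward difference) to each sample."""
    out, prev = [], samples[0]
    for p in samples:
        out.append(p + k_d * (p - prev) / dt)
        prev = p
    return out

# A 1 Hz sine sampled at 100 Hz: the advanced signal leads the raw one.
sig = [math.sin(2 * math.pi * 1.0 * i * 0.01) for i in range(200)]
adv = phase_advanced(sig)
print(f"at the raw zero-crossing: raw={sig[50]:.4f}, advanced={adv[50]:.4f}")
```

At the instant the raw signal crosses zero, the advanced signal is already well past it: the lead compensates for transport delay in the loop, much as a modeled disturbance would.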

     Perhaps this kind of thing happens only at higher levels. But I'm
     always a bit wary of structures that inherently introduce
     transients into the system operation.

Transients happen. But I agree with your main point: model-based control
might be useful when the environment changes slowly and is highly
regular (although that makes ordinary control more effective, too).
But these characteristics exist only for the environment as it is
represented by higher-level perceptions, or where very regular non-
living systems are involved. You can model another person with respect
to personality, aspirations, patterns of speech and comprehension, and
so forth, because these things are acquired only slowly and change only
slowly. You can model the effect of boiling water on how rapidly your
eggs will reach the correct state of solidification, because water boils
at a relatively fixed temperature and eggs are relatively standardized
-- and you can use an egg-timer.

The disadvantage of model-based control is that it commits you to
assuming a world with characteristics that change slowly. The more you
rely on model-based control, and the longer you expect the model to
remain valid, the more often you will find your actions inappropriate to
the actual situation. If you speed up your adaptation to reduce the
amount of inappropriate action, the length of time the model can be used
without input information shortens. If you want to operate for long
periods without input information and slow the adaptation process
accordingly, you become less able to handle the present environment. For
lower-level control processes like hitting a tennis ball or dancing,
you're better off using present-time control and not trying to base
present actions on past experience.

···

------------------------------
I wonder whether you (and Hans) have tried the regular-square-wave
tracking experiment yet. A great deal has been said about how control
would work in this situation, but none of it has been based on data
about how people actually do square-wave tracking. What we have found so
far is that while it is possible to go into a synchronized-tracking
mode, the actual tracking is not remarkably good, nor is it better in
terms of total error than ordinary tracking. The only obvious difference
is that the _average_ delay can be brought approximately to zero,
varying plus and minus around the ideal delay of zero. The error due to
delay is replaced by an error due to reacting too soon, with the average
error being about the same as in ordinary tracking. And the synchronized
tracking works best only over a narrow range of square-wave frequencies.
At frequencies higher than about one hertz, tracking accuracy falls off
rapidly, and at frequencies lower than about 1/2 hertz, timing errors
become larger and larger. At a frequency of 1/4 Hz, for example, I do
better by just waiting for the target to jump and correcting the error
as fast as I can. As Bill Leach (I believe) pointed out, a musician
would probably do better.

Of course a computer program, using a crystal oscillator as its time
base and doing its calculations with 80-bit precision, could probably be
written to do this task very well at any square-wave frequency. But
we're supposedly trying to model human beings, not computers. I think
it's time to bring some facts into this discussion.
-----------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 960304 10:45]

Bill Powers (960303.0830 MST)

Martin Taylor 960303 01:12 --

You have two perceptual signals, one from the model and one from the
real world. How do you combine them to yield "the" perceptual signal
that is compared with the reference signal?

    I hadn't conceived of either summation or averaging, but of a
    merging of information to provide a result that might be more
    accurate than either.

What is this operation you call "merging?" That's a new one on me.

It depends on the situation. In some cases it is the process that goes on
when one gains precision by observing something a little longer, in some cases
it is the process that allows one to use:

... information from
several different independent inputs, each giving a different view of
the external situation.

Which is _not_ "a different problem," as you claim it to be.

Anyway, the model's output is _not_ an independent measure of the same
external situation. The only information in either signal comes from the
real perceptual signal.

That is true, but the real perceptual signal in question may have occurred
minutes, hours, days, or years in the past.

The model's output is not a measure of the current external situation at
all. It doesn't pretend to be. It's a measure of what the current
external situation may be if the external world behaves the way it used
to do, given what the observed external situation was a short while ago.
It's only useful insofar as current observations are an imprecise measure
of the current external situation, or when the effects of current output
will only be reflected in perception after some time delay in the external
world. (And I'm not forgetting the Artificial Cerebellum approach, which
works on the output rather than the perceptual input).

I think the "transient" problem is taken care of by higher systems that
anticipate the loss of information from one perceptual channel (when
this is possible).

I have no problem with that except for the generic problem that it is hard,
if not impossible, to tell the difference between the behaviour of a complex
model-based system and a hierarchic one. My personal bias is toward the
simple hierarchy at the lower levels and toward models at higher, especially
symbolic/logical levels. But I've yet to hear of any evidence that it
is possible to tell the difference without actually going in and dissecting
the control system's physical/physiological structure. Indeed, you recently
told someone (Shannon, was it?) that there was no way to tell the difference.

The only serious problem (model-wise) arises when the real perception
disappears without warning. There is no system that can handle this
problem, unless it's a model-based system that always works off the
model instead of the real perception, as Hans Blom proposes.

Well, if this is true, it contradicts what I just said. But I don't think
it is true. I have two problems with it. Firstly, I'm always sceptical
when someone says "There is no X that Y except this one." All perceiving
systems everywhere combine information from various times and places, and
I see no reason why one of those sources should not be a model of the
perception to be expected in some situation. Secondly, I may be wrong,
but I think that in a simple hierarchy the reference signals from higher
levels even after a "loss of input accident" will still have the appropriate
lead times that have all along been reducing the "ad-hoc" error corrections
of the lower levels--think of Rick Marken's demonstration of two-level
fit with simple sine-wave tracking.

The main problem in this picture is the problem that the output of a simple
scalar perceptual function cannot distinguish between a loss of input and
a zero-level signal. If you are doing a tracking task on a computer screen,
there is a difference in what you do if the screen goes blank and what you
do when the cursor suddenly starts magically matching the target exactly.
That difference cannot be due to variation in the value of the perceptual
signal for magnitude of tracking error. It must lie elsewhere.

The disadvantage of model-based control is that it commits you to
assuming a world with characteristics that change slowly.

Isn't that a good assumption much of the time? Isn't it also true of the
behaviour of a hierarchy? The "characteristics" in question represent the
environmental feedback function. However, if the disturbance influence has
had a coherent temporal pattern over time, that pattern cannot be separated
from the temporal characteristics of the environmental feedback function
unless it can be separately observed. The effects of either will show up
in a model, and Artificial Cerebellum, or an effectively phase-advancing
higher-level control system.

For
lower-level control processes like hitting a tennis ball or dancing,
you're better off using present-time control and not trying to base
present actions on past experience.

A while ago, I presented a counter-example to that from personal experience.
Playing table-tennis by the light of the full moon, I found it necessary
to hit the ball when visually the ball had only reached the net. The delay
in the visual system at low light is that long. I had to use deliberate
phase-advance (or model) to hit the ball. Since my opponents took longer
to realize what was happening, I won a lot of games.

The key here, I think, is to take seriously the two aphorisms: that you can
control only perception, and that it's not the perceived world but the
real world that hits you. Useful control improves your chances of not
getting hit. It's almost always true that this implies good perceptual
control, but that's a matter of empirical truth rather than of necessity.

If I seem to disagree with you, it's not on the question of preferred
interpretation. It's on the necessity of one interpretation rather than
another.

Martin

[Hans Blom, 960305d]

(Bill Powers (960303.0830 MST))

You have two perceptual signals, one from the model and one from
the real world. How do you combine them to yield "the" perceptual
signal that is compared with the reference signal?

I hadn't conceived of either summation or averaging, but of a
merging of information to provide a result that might be more
accurate than either.

What is this operation you call "merging?" That's a new one on me.

I described and demonstrated "merging" in my adaptive control demo.
In one dimension, one can think of merging of having two (or more)
independent measurements of one and the same thing and combining
those into one measure, which is more accurate than either one
individually.

In more than one dimension, it is combining two multi-dimensional
"views" from different perspectives into one overall picture. The
latter cannot, of course, physically exist, but it _will_ be a more
accurate description of "the real thing" than either perspective
alone. It does not matter much whether one perspective includes
dimensions which are lacking in the other one.
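One standard way to make the one-dimensional case concrete (an assumption on my part about what the demo does; it is the scalar special case of Kalman-style estimation) is inverse-variance weighting:

```python
def merge(x1, var1, x2, var2):
    """Inverse-variance weighted combination of two independent measurements
    of the same quantity; the result has lower variance than either alone."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)

# Two noisy views of the same thing: a vague one and a sharper one.
x, v = merge(10.2, 4.0, 9.6, 1.0)
print(f"merged estimate: {x:.2f}, merged variance: {v:.2f}")
```

The merged variance (0.8) is smaller than either input variance, which is the sense in which the combined measure "is more accurate than either one individually."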

If you're familiar with artificial neural networks, the mechanics of
how this is done will be familiar to you. Offer one "example" and see
how the weights of the network change. Offer the second "example" and
see how the weights improve even more. But note that the standard NN
learning techniques are not very suitable for small numbers of
examples. There are (much) better techniques.

If the modeled and real perceptions were based on independent
samples of the same external situation, then statistically averaging
them together might create a more accurate representation of the
external situation.

That's the idea, yes. "Independent" is in the sense of statistically
independent -- the two are "really" of the same thing but have
different disturbances associated with them.

Of course then you have to ask, "More accurate compared to what?"

More accurate in the sense of being a better description of what is
"out there". In real life, of course, we cannot know what is "out
there", but in simulations where we create it ourselves, we can.

Anyway, the model's output is _not_ an independent measure of the
same external situation. The only information in either signal comes
from the real perceptual signal.

No, no, no. The model's prediction _is_ an independent measure of the
actual observation. That is all that prediction is about! The
information in a prediction does not come from the _momentary_
perceptual signal but is an extrapolation of previous values.

I think the "transient" problem is taken care of by higher systems
that anticipate the loss of information from one perceptual channel
(when this is possible).

There is a "transient problem" only if you use a hard switch. Using a
"soft" switch -- i.e. processing both prediction and current observ-
ation at all times, depending upon how useful / certain either is --
does not introduce switching transients.
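A soft switch of this kind might be sketched as a continuous blend, with the certainty weighting as an illustrative assumption:

```python
def soft_blend(prediction, observation, obs_certainty):
    """obs_certainty in [0, 1]: 1.0 = trust the observation fully,
    0.0 = input lost, rely entirely on the model's prediction."""
    return obs_certainty * observation + (1.0 - obs_certainty) * prediction

# As certainty degrades, the perceptual signal slides smoothly from the
# observation toward the prediction -- no discontinuity, so no transient.
for c in (1.0, 0.5, 0.0):
    print(c, round(soft_blend(prediction=0.9, observation=1.1,
                              obs_certainty=c), 3))
```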

The only serious problem (model-wise) arises when the real
perception disappears without warning.

This is truly a problem in one-dimensional systems, yes. In systems
of very high dimensionality where only part of the dimensions are
unobservable at any time, I expect much less of a problem.

The disadvantage of model-based control is that it commits you to
assuming a world with characteristics that change slowly.

Yes. I do assume that the laws of physics, which underlie everything,
do change slowly ;-).

I wonder whether you (and Hans) have tried the regular-square-wave
tracking experiment yet.

I haven't yet, but the frequency range that you mention where "zero
delay times" can occur does not match what I remember from the
literature. I wonder whether the relative weight and clumsiness of
operating a mouse influence the results. Have you tried a joystick?

At frequencies higher than about one hertz, tracking accuracy falls
off rapidly, and at frequencies lower than about 1/2 hertz, timing
errors become larger and larger. At a frequency of 1/4 Hz, for
example, I do better by just waiting for the target to jump and
correcting the error as fast as I can. As Bill Leach (I believe)
pointed out, a musician would probably do better.

Anyone probably can, after extensive training, but maybe not with a
mouse...

Greetings,

Hans

···

================================================================
Eindhoven University of Technology Eindhoven, the Netherlands
Dept. of Electrical Engineering Medical Engineering Group
email: j.a.blom@ele.tue.nl

Great man achieves harmony by maintaining differences; small man
achieves harmony by maintaining the commonality. Confucius

[From Bill Powers (950508.1700 MDT)]

Bruce Abbott (950508.1110 EST)--

Actually I forgot to mention that you had already suggested a switch
from real-time to model-based control -- sorry.

     Of course, at this point we aren't even _considering_ how we are
     able to construct the world-model and then "play" it in real time,
     but it seems apparent that we are indeed able to do so.

There is still room for considering "whether." In the sketchy
experimental program I sent, I find that I switch from watching the
cursor (when I can see it) to concentrating on the kinesthetic sense of
where my hand is (when I can't see the cursor). What I seem to do is to
try to make the felt hand position (also glimpsed vaguely in peripheral
vision) follow a path that is parallel to, or analogous to, the path of
the target. When I'm watching the cursor, I would normally pay no
attention to my hand. However, when I know that there's a run coming up
in which I won't be able to see the cursor, I try to split off some
attention so I can get a feel for how the hand is moving, because that
is what I am going to have to control. On the "blind" runs, I find that
I also am paying attention to the target velocity, and consciously
moving my hand faster when the target is moving faster; my best "blind"
runs occurred when I was attending to both target and hand amplitude and
speed.

The main reason I put in the mouse calibration routine was that without
it, the hand movements were only about 1/4 of the screen movements, and
my "blind" runs were lousy. It seems helpful to have the cursor
movements on the screen close in size to the hand movements that cause
them.

It's possible that when we lose the original controlled variable, we
switch to controlling another variable that is related to it, still
controlling real-time perceptions and not a world-model. We'll have to
come up with some better experiments to rule the world-model idea in or
out.

···

-----------------------------------------------------------------------
Best,

Bill P.

<[Bill Leach 950509.01:42 U.S. Eastern Time Zone]

[From Bill Powers (950508.1700 MDT)]

Forgive my intrusion into this discussion as I have not really been
following net happenings much recently.

I am experiencing some difficulty even imagining what "model based"
control could mean (in terms of human behaviour). I admit that since
coming to some measure of understanding of PCT I have trouble
"challenging" the concept that anything _done_ by humans IS done through
control system action.

The example of walking through a dark room seems like a description of
what one would think of as "model-based control"; however, even for those
of us who "briskly" walk through pitch-dark places, I believe that is
still an example of PCT. A "model" is indeed involved, but the model is
setting references for current perceptions.

These references are set "blindly," that is, open loop with respect to
the "ultimate goal" of traversing the room without mishap, but the lower
level control actions are still PCT. It does seem to me, however, that I
really did "think of myself" as walking through the room as opposed to
Rick's inching along. Though I rarely do such things today (and my house
is such a disaster that "inching along" is the appropriate way to
maneuver in bright light, much less in the dark), I do still traverse
short distances with what I can only describe as complete confidence
(maybe blind stupid confidence would be a better term, but that is not
how it 'feels').

Even thinking about the rarer situation (at least for me) where I would
walk to and descend stairs without light (yeah, I was young and stupid
before I became older and stupid)... There was ALWAYS a current
perception to "resync" the model.

There undoubtedly IS something to this "open loop" aspect of human
control system operation, and I believe that it is evidenced in examples
where a single step in a series of stairs is "off" by as little as 1/8
of an inch, or where a floor has a non-visible sharp variation in height
(again, as little as 1/8 inch). People really do stumble quite reliably
when encountering such anomalies.

To me, that means that the control system sets motion references based
upon a specific AMOUNT OF MOVEMENT rather than a current perception of
something like relative position between the foot and the stair or floor.
We do, OTOH, have an almost(?) amazing capacity to "learn" about such
anomalies and correct for them without conscious effort.

-bill

[From Bruce Abbott (960716.1255 EST)]

For about six years now there has been a troll living beneath the shed in my
back yard, despite my best efforts to remove it. (I finally gave up.) Each
day I make amulets which my local sorcerer guaranteed would keep this troll
at bay. I know it works, because after six years, not a single one of my
children has been eaten. Now _that's_ control!

The question is, what kind of control is it?

Regards,

Bruce

[Bill Leach (960716.2150 EDT)]

[From Bruce Abbott (960716.1255 EST)]

For about six years now there has been a troll living beneath the shed in my
back yard, despite my best efforts to remove it. (I finally gave up.) Each
day I make amulets which my local sorcerer guaranteed would keep this troll
at bay. I know it works, because after six years, not a single one of my
children has been eaten. Now _that's_ control!

The question is, what kind of control is it?

Model based control of perception (keeping the troll at bay) using control
of current perception (making amulets).
-bill
b.leach@worldnet.att.org
ars: kb7lx

[From Rick Marken (950921.1200)]

Bruce Abbott (950920.1425 EST) --

doesn't Hans's model benefit from the same addition [of the first derivative
of the controlled variable]?

Yes. Probably true.

And Rick, the first derivative is hardly what I would call "noise."

I use the term "noise" to mean "unnecessary" rather than "random". My rewrite
of Hans's model

u := u - x + (u - x + dold)

shows that the model is an integral control system (u := u - x). That is,
Hans's model is a system that will control its perception (x) even if the
term (u - x + dold) were not added. So the term (u - x + dold) is not a
necessary aspect of the control process. The addition of this "noise" term
does improve control when the disturbance is low pass noise and the feedback
function is linear; but the addition of this term reduces control under many
other circumstances, such as when the disturbance is a square wave or the
feedback function is non-linear.

Hans Blom (950921) also had a problem with my use of the term "noise" to
describe the term (u - x + dold). He says:

I wouldn't call something that systematically improves things "noise", not
even "intelligent noise".

Again, I call it "noise" because the model controls without it and because,
when added, it doesn't necessarily improve control.

I said:

It is "noise" in the sense that it is an independent addition to the
error signal (u-x) that drives the integrated output (u).

Hans replies:

Independent? No! Extra.

Right. I was wrong. It's not independent; it is unnecessary and it doesn't
necessarily improve the control system's ability to control.

Me:

if the disturbance is not smooth (if, for example, it is a square wave),
this noise term degrades control (relative to that produced by the basic PCT
model).

Hans:

Have you verified this last assertion?

Yes. With a square wave disturbance, the RMS error for your model is nearly
twice what it is for the control model without the (u - x + dold) term.

It is incorrect, except under very exceptional conditions

Then I found some exceptional conditions.

What kinds of square wave did you test with?

Starting with d := 25 outside the do loop (indexed by i) I used a square wave
defined by the following code:

if i mod 25 = 0 then d := -d

How often do you estimate that the conditions that make the basic PCT model
better exist in practice?

I don't know. Square wave disturbances may be rare but non-linear feedback
functions are not.

I agree that your model can control better than the basic PCT model in
many circumstances. My points regarding your model are simply 1) your model
controls perception, so it is a version of the basic control-of-perception
model; 2) your model contains an added term that can lead to better or worse
control, depending on circumstances (nature of disturbance, nature of
feedback function); and 3) the behavior of your model does not match the
behavior of human subjects.

Me:

Although Hans' model based addition to the PCT model can improve
control under certain circumstances, there is no evidence that this
addition is needed to explain anything about the behavior of living
control systems.

Hans:

If there is no evidence, let's try to obtain some.

I tried. I tested a human in a situation where your model behaves quite
differently from the simple PCT model: the behavior of your model did not
match that of the human; the behavior of the PCT model did. That doesn't
prove that the PCT model is correct, but it does rule out the simple model-
based controller that you posted as a model of human controlling.

They [evidence of model-based control] must be there, in my opinion

It seems odd to be arguing (if you are) for a model-based control model of
human behavior when you have absolutely no evidence that such a model is
necessary.

Best

Rick