"bare bones" MCT-PCT comparison

[Hans Blom, 970327d]

Last post for a week. This extremely simple program compares the
simplest possible MCT and PCT controllers in the simplest possible
environment. Its goals?

1. To demonstrate that the reference prescribes the value of x in
   the NEXT iteration. Just compare xref and x for each time!

2. To demonstrate that for perfect performance the PCT controller
   also needs a "model" (a G equal to 1/K). Try and change G to a
   different value and see what happens!

Bill, what do I do wrong? ;-)

Greetings,

Hans

program proportional_control;
const
  K = 0.25; {or any value}
  G = 4.0; {1/K for proper operation}
var
  x, xref, umct, upct: real;
  t: integer;
begin
  t := 0;
  umct := 0.0;
  upct := 0.0;
  repeat
    x := K * upct; {environment equation}
    {specify either umct or upct in the previous line}
    xref := t; {reference for x}
    umct := xref / K; {MCT u}
    upct := upct + G * (xref - x); {PCT u}
    writeln (t:3, xref:12:6, x:12:6, umct:12:6, upct:12:6);
    t := t + 1
  until t > 23
end.
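
For reference, worked out by hand from the listing, the output begins
and ends as follows; x always repeats the previous step's xref (one
iteration behind), while umct and upct coincide at 4*t:

  0    0.000000    0.000000    0.000000    0.000000
  1    1.000000    0.000000    4.000000    4.000000
  2    2.000000    1.000000    8.000000    8.000000
  3    3.000000    2.000000   12.000000   12.000000
 ...
 23   23.000000   22.000000   92.000000   92.000000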

[From Bill Powers (970327.1051 MST)]

Hans Blom, 970327d--

> Last post for a week. This extremely simple program compares the
> simplest possible MCT and PCT controllers in the simplest possible
> environment. Its goals?

> 1. To demonstrate that the reference prescribes the value of x in
>    the NEXT iteration. Just compare xref and x for each time!

Yes. Of course xref prescribes the next value to be achieved. You will
notice that x is always one iteration behind xref, in either model.

> 2. To demonstrate that for perfect performance the PCT controller
>    also needs a "model" (a G equal to 1/K). Try and change G to a
>    different value and see what happens!

Performance is not perfect; x is always one iteration behind xref. The
performance is simply as good as you can get with a discrete controller.

Your model is silly. You say u = xref/4, and x = 4*u, which is simply saying
x = xref.

If some nasty person comes along and adds a disturbance that nobody told us
was going to happen, the PCT model continues to work, while the MCT model
fails. Substitute this line:

   x := K * upct + 10; {environment equation}
   {specify either umct or upct in the previous line}

After the last writeln, the MCT model comes up with x = 32, while the PCT
model comes up with x = 22 as before. NOTHING ABOUT THE PCT MODEL HAS TO BE
CHANGED TO HANDLE THIS DISTURBANCE. In the MCT model you'd have to change
the model so as to subtract out that disturbance (after somehow deducing
that it's there).
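
To run both controllers against the same disturbance side by side,
each can be given its own copy of the environment. A minimal sketch of
the modified program (the split into xmct and xpct is not in the
original listing):

program disturbed_comparison;
const
  K = 0.25;
  G = 4.0;  {1/K}
  d = 10.0; {the unannounced disturbance}
var
  xmct, xpct, xref, umct, upct: real;
  t: integer;
begin
  umct := 0.0;
  upct := 0.0;
  for t := 0 to 23 do
  begin
    xmct := K * umct + d; {environment acting on the MCT output}
    xpct := K * upct + d; {environment acting on the PCT output}
    xref := t;
    umct := xref / K;                 {MCT u, ignorant of d}
    upct := upct + G * (xref - xpct); {PCT u, error-driven}
    writeln (t:3, xref:12:6, xmct:12:6, xpct:12:6)
  end
end.

The last line printed shows xmct = 32 and xpct = 22, as stated above.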

> Bill, what do I do wrong? ;-)

Right now, Hans, I'm tired of trying to tell you.

Best,

Bill P.

[Hans Blom, 970407b]

(Bill Powers (970327.1051 MST))

>> 1. To demonstrate that the reference prescribes the value of x in
>>    the NEXT iteration. Just compare xref and x for each time!

> Yes. Of course xref prescribes the next value to be achieved. You
> will notice that x is always one iteration behind xref, in either
> model.

Do we all agree now? Martin?

>> 2. To demonstrate that for perfect performance the PCT controller
>>    also needs a "model" (a G equal to 1/K). Try and change G to a
>>    different value and see what happens!

> Performance is not perfect; x is always one iteration behind xref.

We just established in (1) that x being one iteration behind xref is
necessarily the case; there is no escape from causality in a control
system. Any control engineer knows that x _follows_ xref. This being
so, it appears strange to me to call this inescapable property "less
than perfect behavior", unless it is based on a rather Calvinist
philosophical perspective that even the best we could possibly do is
not perfect. In a control context, my perspective is that "perfect
control" is to be defined as that type of control which cannot be
improved on, even in theory. Your definition relegates "perfect" to
the domain of the imaginary hoped-for-but-impossible, where it loses
all practical meaning.

> The performance is simply as good as you can get with a discrete
> controller.

Being the practical person that I am, I define "simply as good as you
could possibly get" as "perfect". Not that this type of perfection is
easy to achieve; it is not -- where the rubber meets the road. Yet it
_is_ something that's worth striving for, in contrast with your type
of unobtainable perfection.

And, much to our relief, in the world as it exists a discrete
controller is not inferior to a continuous controller, due to the
regularities ("smoothness") of the world. Computer programs such as
Tutsim and Simcon are testimony; they just choose dt judiciously.

> Your model is silly. You say u = xref/4, and x = 4*u, which is
> simply saying x = xref.

Bill, please follow the basic MCT argument. It consists of two steps:

1) modeling; this is concerned with the system discovering what you
call the "environment equation". In our example case I was
specifically allowed to have a "perfect model", so my controller does
not model; it just knows that x := 4*u;

2) control; this is concerned with finding a u such that x = xref;
u := xref/4 follows trivially (in this simple case!).

These are two basic properties of an MCT controller. I have no idea
what's silly about that.

> If some nasty person comes along and adds a disturbance that nobody
> told us was going to happen, the PCT model continues to work, while
> the MCT model fails.

Whether the PCT model continues to work (or work better than the MCT
controller) depends on the _type_ of disturbance and is an empirical
question. There is a difference in behavior, to be sure. Anyway, in
case of a disturbance the environment equation would be

  x := 4*u + d

which the MCT controller would need to know, in addition to any
properties of the disturbance (in terms of probability distribution)
that might be known. MCT considers the disturbance to be generated
somehow by the environment. Thus d belongs to the environment
equation. And it is the model's task to describe the environment,
_including d_!

> Substitute this line:
>
>    x := K * upct + 10; {environment equation}

and you have a different environment equation, so a new model is
needed, different from the previous x := K * upct. In the "bare
bones" case, where the MCT controller lacks a model-building or
-updating facility, MCT control would obviously not be good. But your
suggested substitution violates your own allowance that the MCT
controller could use a "perfect model". And it will be obvious to you
that the MCT control law, after the new environment equation has been
learned, would be a simple

    u := (xref - 10)/K

Thus the failure would simply be due to having a bad model. Note,
however, that exactly this case is handled well (as well as possible,
I dare say ;-)) in my theodolite controller, which _does_ model this
type of disturbance.

> NOTHING ABOUT THE PCT MODEL HAS TO BE CHANGED TO HANDLE THIS
> DISTURBANCE. In the MCT model you'd have to change the model so as
> to subtract out that disturbance (after somehow deducing that it's
> there).

In an intact MCT controller -- one that includes a model-builder --
nothing has to be changed either. My MCT theodolite controller _did_
routinely change the (disturbance) model "after somehow deducing that
it's there", which was equally simple. But we'd better reserve
model-building for later, when it is fully clear how the control part
of an MCT controller works. Or _is_ that clear by now?
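
For concreteness, the simplest possible model-updater for this example
fits in two lines. A minimal sketch, assuming a constant disturbance
(this estimator illustrates the idea; it is not the actual theodolite
code):

program mct_with_disturbance_model;
const
  K = 0.25;
  d = 10.0; {present in the environment, unknown to the controller}
var
  x, xref, u, d_est: real;
  t: integer;
begin
  u := 0.0;
  d_est := 0.0;
  for t := 0 to 23 do
  begin
    x := K * u + d;          {environment equation, d included}
    d_est := x - K * u;      {whatever x exceeds K*u by must be d}
    xref := t;
    u := (xref - d_est) / K; {control law using the updated model}
    writeln (t:3, xref:12:6, x:12:6, d_est:12:6)
  end
end.

After a single observation d_est equals d, and from then on x again
tracks xref with the usual one-iteration lag.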

Greetings,

Hans

[Martin Taylor 970407 13:30]

(Hans Blom, 970407b)

> (Bill Powers (970327.1051 MST))

>>1. To demonstrate that the reference prescribes the value of x in
>> the NEXT iteration. Just compare xref and x for each time!

>Yes. Of course xref prescribes the next value to be achieved. You
>will notice that x is always one iteration behind xref, in either
>model.

> Do we all agree now? Martin?

Since you ask, I guess I ought to answer.

But my answer consists of another question: "Agree on what?" Am I supposed
to agree that the controller being simulated has its variable that corresponds
to "x" following its variable that corresponds to "xref" by one iteration?
If so, I can't agree in the case of a PCT controller. But if I'm supposed
to agree that the effects of a change in reference value at time t0 cannot
be observed in a simulation until one iteration later, of course it's true.
It's true of all variables in the simulation that a change imposed at time
t0 has effects that cannot be observed until one iteration later--and may
not have their full effects until much later than that. In fact, they
_should not_ have their full effects until quite a bit later than one
iteration after t0. If they do, then one cannot believe the results to
represent what would happen in the continuous world being simulated.

> In a control context, my perspective is that "perfect
> control" is to be defined as that type of control which cannot be
> improved on, even in theory. Your definition relegates "perfect" to
> the domain of the imaginary hoped-for-but-impossible, where it loses
> all practical meaning.

In many contexts, the word "ideal" is used, rather than perfect. An
"ideal observer," for example, extracts all the information that is
mathematically available, but is not "perfect" in being able to reproduce
exactly the source of the observations. "Perfect" does normally mean
the "imaginary hoped-for-but-impossible", whereas "ideal" means that
which cannot be improved upon.

Martin

[Hans Blom, 970408c]

(Martin Taylor 970407 13:30)

> Am I supposed to agree that the controller being simulated has its
> variable that corresponds to "x" following its variable that
> corresponds to "xref" by one iteration? If so, I can't agree in the
> case of a PCT controller.

Your position translates to: It is not true that the [PCT] controller
being simulated has its variable that corresponds to "x" following
its variable that corresponds to "xref" by one iteration.

May I understand this to mean that there are (1) a "real" [PCT]
controller and (2) that same controller being simulated? And that the
"real" [PCT] controller is a continuous one? In that case, the above
statement is not _untrue_; it is _meaningless_, since the notion of
iteration does not apply to continuous systems.

> But if I'm supposed to agree that the effects of a change in
> reference value at time t0 cannot be observed in a simulation until
> one iteration later, of course it's true.

This is what I meant all along, of course.

>> Your definition relegates "perfect" to the domain of the imaginary
>> hoped-for-but-impossible, where it loses all practical meaning.

> In many contexts, the word "ideal" is used, rather than perfect.

Yes, I know. Informally, "ideal" has the same connotations as
"perfect", so this change of terminology doesn't seem to help much. I
did, however, define what I meant by "perfect", so we're once again
-- as so often -- stuck with the inadequacy of words, or the naive
usage thereof.

Anyway, what I'm not willing to discuss with you is imaginary
controllers with imaginary properties -- such as I understand you to
mean when you talk about "the controller being simulated". I need to
ground what I say in reality -- real, existing implementations whose
operation can be investigated unambiguously. Both you and I lack an
analog computer, alas ;-).

Greetings,

Hans

[Martin Taylor 970408 13:45]

(Hans Blom, 970408c)

> Anyway, what I'm not willing to discuss with you is imaginary
> controllers with imaginary properties -- such as I understand you to
> mean when you talk about "the controller being simulated". I need to
> ground what I say in reality -- real, existing implementations whose
> operation can be investigated unambiguously. Both you and I lack an
> analog computer, alas ;-).

I take this to mean that you have no interest in whether the properties of
the non-imaginary controllers that exist only in the software bear any
relevance to controllers that might be hitched up to real people.

So be it.

But I'd prefer to deal in simulations that have at least the property that
the behaviour of the system does not change appreciably when you observe
it at ever decreasing time intervals. The behaviour of the system being
tested should not depend on the attentiveness with which it is observed.

And I would even more prefer to deal in simulations for which the results
have at least a decent chance of representing what happens in a real-world
controller made of physical molecules that has properties matched as best
they can be to those assigned to the non-imaginary controller that exists
only in software and the imagination of an experimenter.

But you don't want to do that, so I'll settle for second-best: A system
whose behaviour is not critically dependent on how often it is observed.

Martin

[Hans Blom, 970410c]

(Martin Taylor 970408 13:45)

>> Anyway, what I'm not willing to discuss with you is imaginary
>> controllers with imaginary properties ...

> I take this to mean that you have no interest in whether the
> properties of the non-imaginary controllers that exist only in the
> software bear any relevance to controllers that might be hitched up
> to real people.

Not at all. It's the comparison between a non-imaginary and an
imaginary controller that bothers me. It just can't be done. Give me
two non-imaginary controllers and I can compare things. Only then.
That's why I put so much emphasis on computer implementations/models
-- and even more on true-to-life physical controllers that actually
do something in the real world -- and so little on words or their
equivalent, block diagrams ;-).

> But I'd prefer to deal in simulations that have at least the
> property that the behaviour of the system does not change
> appreciably when you observe it at ever decreasing time intervals.

How do you think the behavior of a theodolite controller (a real one
or a computer simulation program) is changed by it being observed?
That may be a concern in quantum mechanics, but surely not here...

> The behaviour of the system being tested should not depend on the
> attentiveness with which it is observed.

Exactly. A control system is an "autonomous agent" which does its
thing whether it's observed or not.

> And I would even more prefer to deal in simulations for which the
> results have at least a decent chance of representing what happens
> in a real-world controller made of physical molecules that has
> properties matched as best they can be to those assigned to the
> non-imaginary controller that exists only in software and the
> imagination of an experimenter. But you don't want to do that ...

I have no idea how you've come to induce that. If there ever is a
case of "going beyond the data", this must be it.

> so I'll settle for second-best: A system whose behaviour is not
> critically dependent on how often it is observed.

For the moment, I prefer to discuss only things far above the level
of quantum mechanics ;-). So I dare say -- with minuscule
reservations -- that the behavior of an autonomous agent can only be
dependent on whether it is being observed if the thing is aware of it
and includes it in the way it acts. That's pretty far beyond anything
we have discussed so far...

Greetings,

Hans

[Martin Taylor 970410 12:20]

(Hans Blom, 970410c)

This one can be answered more quickly than your others of today.

> (Martin Taylor 970408 13:45)

>> But I'd prefer to deal in simulations that have at least the
>> property that the behaviour of the system does not change
>> appreciably when you observe it at ever decreasing time intervals.

> How do you think the behavior of a theodolite controller (a real one
> or a computer simulation program) is changed by it being observed?
> That may be a concern in quantum mechanics, but surely not here...

It is changed if the value of the simulated forces, or the speed with
which the controller achieves its reference, is affected by the value of dt.
That is an immediate concern "here". The shorter dt, the quicker the
theodolite reaches its reference position and the larger the simulated
forces upon it.
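
This criterion can be met in the bare-bones setting by expressing the
integration gain per unit time and scaling it by dt. A minimal sketch
(Gr and all the numbers are chosen for illustration only):

program dt_invariant_loop;
const
  K    = 0.25;
  Gr   = 16.0; {loop gain per second, so K*Gr = 4 per second}
  dt   = 0.01; {try 0.005 or 0.02: x(tend) barely changes}
  tend = 2.0;
var
  x, xref, u, t: real;
begin
  x := 0.0;
  u := 0.0;
  t := 0.0;
  xref := 1.0;
  repeat
    u := u + Gr * dt * (xref - x); {gain scaled by dt}
    x := K * u;                    {static environment}
    t := t + dt
  until t >= tend;
  writeln ('x(', tend:4:2, ') = ', x:10:6)
end.

With the gain written this way, halving dt doubles the number of
iterations but leaves the trajectory x(t) essentially unchanged, which
is the dt-independence asked for above.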

Hitherto, I've tried to get you to contemplate the GENERAL statement that
it is impossible to take any significant event that happens in one sample
iteration of the simulation as representing anything that might happen in
a corresponding continuous (or differently sampled) system. You seem
suspended in mid-air between saying your theodolite moves according to
equations valid for continuous systems and saying that it can see its
world only when the experimenter-observer can, and that it is therefore
a discrete (and non-imaginary) theodolite.

Rather than beat upon this inconsistency now, let me ask you if you
could code your theodolite in such a way that its (non-imaginary)
sampling time is some value Dt (referenced to what, if there is no
imagined continuous equivalent?) and the experimenter observes at
some interval dt, where Dt is not an integer multiple of dt, nor dt
of Dt.

(Parenthetically, I may note an additional confusion, which is that I am
quite unclear what dt could possibly mean if, as you insist, there is no
underlying conception of a tangible theodolite in a continuous world to
which this simulation is implicitly referenced.)

Martin

[From Bill Powers (970411.1148 MST)]

Hans Blom, 970407b--

> (Bill Powers (970327.1051 MST))

>> Performance is not perfect; x is always one iteration behind xref.

> We just established in (1) that x being one iteration behind xref is
> necessarily the case; there is no escape from causality in a control
> system. Any control engineer knows that x _follows_ xref. This being
> so, it appears strange to me to call this inescapable property "less
> than perfect behavior", unless it is based on a rather Calvinist
> philosophical perspective that even the best we could possibly do is
> not perfect.

You've changed your tune, considering that it was you who initially pointed
out the supposed advantage of the MCT model over the PCT model -- the MCT
model could control "perfectly," while the PCT model, because it was
error-driven, could not. Now, it seems, perfection is to be defined as
whatever the MCT model is actually capable of doing, however imperfect that
may be in Calvinistic terms. I'm happy to go along with this, provided you
will revise your previous remarks and admit that the PCT model controls just
as pseudo-perfectly as the MCT model does, given the proper parameters as in
theo6MCT or in your bare-bones model.

>> Your model is silly. You say u = xref/4, and x = 4*u, which is
>> simply saying x = xref.

> Bill, please follow the basic MCT argument. It consists of two steps:
>
> 1) modeling; this is concerned with the system discovering what you
> call the "environment equation". In our example case I was
> specifically allowed to have a "perfect model", so my controller does
> not model; it just knows that x := 4*u;
>
> 2) control; this is concerned with finding a u such that x = xref;
> u := xref/4 follows trivially (in this simple case!).
>
> These are two basic properties of an MCT controller. I have no idea
> what's silly about that.

I knew I shouldn't have said "silly." I'll go along with "trivial."

>> If some nasty person comes along and adds a disturbance that nobody
>> told us was going to happen, the PCT model continues to work, while
>> the MCT model fails.

> Whether the PCT model continues to work (or work better than the MCT
> controller) depends on the _type_ of disturbance and is an empirical
> question. There is a difference in behavior, to be sure.

Were you ever in Public Relations before you took up control engineering? I
would agree that there is a "difference in behavior." The disturbance of 10
units is not resisted by the MCT model, and x departs significantly from the
reference value. That is, indeed, different.

> Anyway, in
> case of a disturbance the environment equation would be
>
>   x := 4*u + d
>
> which the MCT controller would need to know, in addition to any
> properties of the disturbance (in terms of probability distribution)
> that might be known. MCT considers the disturbance to be generated
> somehow by the environment. Thus d belongs to the environment
> equation. And it is the model's task to describe the environment,
> _including d_!

So this is NOT a bare-bones model, is it? How is the model going to describe
the environment, including d? It seems to me that the only way this can be
done is for the _modeler_ (you) to rewrite the program specifically to
calculate d.

>> Substitute this line:
>>
>>    x := K * upct + 10; {environment equation}

> and you have a different environment equation, so a new model is
> needed, different from the previous x := K * upct. In the "bare
> bones" case, where the MCT controller lacks a model-building or
> -updating facility, MCT control would obviously not be good.

OK, we agree on that. I repeat the question in my previous post: doesn't it
seem odd to you that the "bare-bones" MCT model needs all this added
information and computing ability to deal with the disturbance, while the
"bare-bones" PCT model does not?

>> NOTHING ABOUT THE PCT MODEL HAS TO BE CHANGED TO HANDLE THIS
>> DISTURBANCE. In the MCT model you'd have to change the model so as
>> to subtract out that disturbance (after somehow deducing that it's
>> there).

> In an intact MCT controller -- one that includes a model-builder --
> nothing has to be changed either. My MCT theodolite controller _did_
> routinely change the (disturbance) model "after somehow deducing that
> it's there", which was equally simple. But we'd better reserve
> model-building for later, when it is fully clear how the control part
> of an MCT controller works. Or _is_ that clear by now?

Yes, it's clear. It always has been clear to me. An MCT controller needs to
be told what the form of the environment equation is, and what the inverse
of that equation is, and where disturbances might act in the environment --
all of this it must be told by the designer of the controller, in order for
it to work. There is a great deal of hidden machinery in the MCT controller,
work that is done for the controller by its human designer. This is OK when
we're talking about engineers designing artificial systems to be used by a
customer to meet predefined specifications. It is not OK when we are talking
about how living systems work. Living systems do not have the benefit of a
control system engineer/mathematician standing by to suggest suitable forms
for the world-model, or to do the mathematical manipulations required to
derive the inverse of the world-model and implement it as a neural
calculation. And nervous systems do not have infallible floating-point
digital computers available, or sensors that remain perfectly calibrated, or
output actuators that respond to driving signals with absolute precision and
reliability.

--------------------------------------
We have yet to take up the subject of the effects of non-ideal computations
and functions in the MCT and PCT controllers. The only hint of this subject
came a month or so ago, when I raised the subject of accuracy in computing
the inverse of the world-model. You indicated that this subject was new to
you, and that it needed some thought. When we get into this in more detail,
the main advantages of the PCT controller will become more evident. We can
see them even now.

Consider your MCT bare-bones model (without the disturbance, to make the
point simpler). The environment function is

x = K*umct

and the control law -- the inverse of the controller's world-model -- is

umct := xref/K.

Suppose the world-model is a little bit in error -- K has a value that is 10
percent too high. Clearly, the MCT model will simply produce a value of x
that is low by a factor of 1/1.1 (about 9 percent). Of course the adaptive
process, if included, would eventually correct that error, but let's see how
the PCT model deals with it.

The PCT controller is

upct := upct + G*(xref - x),

and the equivalent error would be for G to be 10% too large.

If you run the bare-bones model using 1.1*G instead of just G, the PCT model
produces a final value of 22.0909..., compared with 22.000... when the
optimum value of G is used. The result is off by less than 1/2 percent. The
MCT model, given 1.1*K in its world-model, settles at 20.909, about 9 percent
below the final reference of 23 (the best possible value, achieved with the
optimum value of K in the world-model, being 22).
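
Those figures follow from the tracking lag of the discrete loop: with
a loop gain of K*G per iteration, x lags a unit-slope ramp by 1/(K*G)
iterations, so 1.1*G gives a lag of 1/1.1 = 0.909 and a final x of
23 - 0.909 = 22.0909. A minimal sketch that reproduces both numbers (a
variant of the bare-bones program; the variable names are new):

program sensitivity_comparison;
const
  K  = 0.25;  {true environment gain}
  Km = 0.275; {MCT world-model gain, 10 percent too high}
  Gp = 4.4;   {PCT gain, 10 percent above 1/K}
var
  xmct, xpct, xref, umct, upct: real;
  t: integer;
begin
  umct := 0.0;
  upct := 0.0;
  for t := 0 to 23 do
  begin
    xmct := K * umct;                  {true environment}
    xpct := K * upct;
    xref := t;
    umct := xref / Km;                 {inverse of the miscalibrated model}
    upct := upct + Gp * (xref - xpct); {error-driven update}
    writeln (t:3, xref:12:6, xmct:12:6, xpct:12:6)
  end;
  xmct := K * umct; {one more environment update}
  writeln ('MCT settles at ', xmct:10:6) {20.909091}
end.

The last loop line shows xpct = 22.0909; the MCT output settles at
23/1.1 = 20.909 once its final u has taken effect.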

So the PCT model is affected by this error in the output sensitivity only
about 1/20 as much as the MCT model is affected. In order just to _match_ the
performance of the PCT model, the MCT model would need an adaptive process
that could adjust K to the optimum value with an accuracy of 0.5%. The
closed-loop PCT approach can handle variations in the output sensitivity
with the same accuracy, but without requiring any adaptation.

In a digital computer, achieving an accuracy of one-half percent is no
problem. But in a nervous system, where most processes are analog and JNDs
run to 5% of the signal magnitude, it is highly unlikely that ANY output
computation (aside from those using symbols and pen-and-paper) can approach
an accuracy of one-half percent. Thus with respect to handling variations in
output sensitivity, the PCT model does about as well as possible without
requiring any adaptive mechanisms at all. And if the inherent errors in the
nervous system computations are any larger than one-half percent, the PCT
model can do _better_ than the _adaptive_ MCT model, for this simple case.

The MCT model, because it is basically open-loop, is inherently sensitive to
computational errors and miscalibrations. This is why it MUST contain an
adaptive process. Without the adaptation, the MCT model would drift away
from optimum performance rather quickly, if it had to be built with real
components and work in the real world.

The PCT model, on the other hand, is inherently _insensitive_ to many
computational errors and changes in environmental parameters. It is
commonplace for real-time negative feedback systems to reduce errors due to
disturbances and parameter variations by a factor of 20 and much more,
without any adaptive methods or world-models being needed.

It is perfectly possible to design a PCT system with adaptation. We have
tried two such designs, both of which work well. Others are quite possible
-- including ones that use the Kalman filter approach but do not need a
model of the environment. But because the PCT system is inherently far less
sensitive to computation errors and changes of system parameters, the amount
and precision of adaptation required to reach a given level of performance
are far less than in the MCT model.

The MCT approach is ONE approach to adaptive control. Because it can be made
to work, many people have accepted it and devoted themselves to elaborating
on it. But there are other approaches, and if the same time and effort had
been devoted to them, they would probably look just as good. The PCT
approach obviously needs attention from mathematicians and just plain
engineers; it is much too early to give up on it.

Best,

Bill P.

[Hans Blom, 970415b]

(Martin Taylor 970410 12:20)

> Hitherto, I've tried to get you to contemplate the GENERAL statement
> that it is impossible to take any significant event that happens in
> one sample iteration of the simulation as representing anything that
> might happen in a corresponding continuous (or differently sampled)
> system.

You reverse the problem that I am concerned with, and that I'm sure
you are familiar with: given a continuous, bandwidth-limited system
(as all causal physical systems presumably are), how can that system
be modeled/represented as a sampled system? Shannon has told us to
sample at at least twice the highest frequency that occurs in the
system. In practice, there is no single "highest frequency", of
course. Yet ever higher frequencies contribute ever less to the
signals, and if one samples at, say, 10 times the highest significant
frequency, one has a pretty good approximation, which defines a
practical value for dt. At that dt, one can reconstruct the original
continuous signal from just the samples with very small distortion.
And that's "good enough". Well-chosen approximations (approximate
models with the right properties) suffice for control, as practice
demonstrates.
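
In code the rule of thumb is a one-liner. A sketch, with fmax standing
for whatever one takes as the highest significant frequency:

program choose_dt;
{Shannon requires fs >= 2*fmax; in practice fs of about 10*fmax is
 comfortable, giving dt = 1/fs.}
var
  fmax, dt: real;
begin
  fmax := 10.0;              {highest significant frequency, Hz (illustrative)}
  dt := 1.0 / (10.0 * fmax); {sampling interval: here 0.01 s}
  writeln ('dt = ', dt:8:5)
end.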

> You seem suspended in mid-air between saying your theodolite moves
> according to equations valid for continuous systems and saying that
> it can see its world only when the experimenter-observer can, and
> that it is therefore a discrete (and non-imaginary) theodolite.

If we want a computer to control a continuous system, we _have to_
translate the continuous system to a discrete one (samples). We then
compute the controller's outputs in the discrete domain and apply
that output back to the physical thing, which presumably operates in
the continuous domain. What makes this possible is the fact that the
translation continuous <--> discrete can be made arbitrarily good by
an appropriate choice of sample frequency/sample interval dt. What's
new?

> Rather than beat upon this inconsistency now, let me ask you if you
> could code your theodolite in such a way that its (non-imaginary)
> sampling time is some value Dt (referenced to what, if there is no
> imagined continuous equivalent?) and the experimenter observes at
> some interval dt, where Dt is not an integer multiple of dt, nor dt
> of Dt.

Sure, and I've demonstrated the method. The method presupposes that a
dt-independent prediction algorithm is available. Bill Powers was so
kind as to make one available. That algorithm, in turn, presupposes
that the controller's output is piecewise constant. If it is (in
practice, this is an approximation), there is open-loop behavior
during the sample interval dt, so the open-loop equations apply.
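
The "open loop during dt" step can be written out exactly for a
first-order environment. A minimal sketch, with the leaky integrator
dx/dt = -a*x + b*u standing in for the actual theodolite equations:

program zoh_step;
{Exact update of dx/dt = -a*x + b*u (a > 0) over one sample interval,
 assuming u is held constant during the interval (piecewise-constant
 output, i.e. a zero-order hold).}
const
  a  = 2.0;
  b  = 1.0;
  dt = 0.1;
var
  x, u, decay: real;
  n: integer;
begin
  x := 0.0;
  u := 1.0; {held constant during each interval}
  for n := 1 to 20 do
  begin
    decay := exp (-a * dt);
    x := x * decay + (b * u / a) * (1.0 - decay); {exact open-loop solution}
    writeln (n:3, x:12:6)
  end
end.

Because each update is the exact solution of the differential
equation, changing dt changes only how often one looks, not where the
state ends up at a given time; this is the dt-independence discussed
earlier in the thread.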

> (Parenthetically, I may note an additional confusion, which is that
> I am quite unclear what dt could possibly mean if, as you insist,
> there is no underlying conception of a tangible theodolite in a
> continuous world to which this simulation is implicitly referenced.)

I don't see your problem: dt is simply the sampling interval. What am
I missing?

Greetings,

Hans

[Hans Blom, 970415d]

(Bill Powers (970411.1148 MST))

> Now, it seems, perfection is to be defined as whatever the MCT model
> is actually capable of doing ...

It frequently helps, in a design process, to have an objective norm
that specifies what _any_ controller could at best achieve, whether
MCT or PCT or what have you. Whether we call such an absolute norm
"perfection" or something else is hardly interesting. What _is_
interesting is whether we can find such a norm at all. That is, when
we can stop the design process because we know that no better design
is possible.

> I knew I shouldn't have said "silly." I'll go along with "trivial."

Great, that helps. Once one truly understands something, it has
become trivial. Does that mean I can stop talking now about how an
MCT controller _uses_ its model?

> Were you ever in Public Relations before you took up control
> engineering?

Regrettably not. It sure would have helped to make myself better
understood :-).

Going up a level: have you ever considered why we humans so often
attempt to "explain" what we perceive by some hypothesis, as in this
case? We also tend to accept the first explanation that comes to
mind. We seem to try to identify a "cause" for a perception. An
inverse model?

> So this is NOT a bare-bones model, is it?

You introduce a disturbance, whereas the controller "knows" there
isn't one.

> How is the model going to describe the environment, including d? It
> seems to me that the only way this can be done is for the _modeler_
> (you) to rewrite the program specifically to calculate d.

Is this going to be the next issue: how to model/describe the
environment? If so, there are two approaches: (1) the designer can
discover what the environment equation is and impart that knowledge
to the controller, as I did in the "bare bones" simulation for both
the MCT and the PCT controller. Or (2) the designer can give the
controller a mechanism that will let it discover a model of the
environment all by itself, which it can subsequently use in its
control, whether it takes the form of an MCT or a PCT control law.
Note that in the "bare bones" demo the PCT controller also requires
knowledge of the environment's gain (and its inverse!). So model
formation and model tuning are rather separate subjects, independent
of the form the controller takes.
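
Approach (2) can be illustrated in bare-bones form with a
one-parameter estimator. A minimal sketch (the normalized-LMS-style
update and the step size mu are illustrative assumptions, not the
estimator of any program discussed here):

program self_modeling_control;
const
  K   = 0.25;   {true environment gain, unknown to the controller}
  mu  = 0.5;    {estimator step size, 0 < mu < 2 for stability}
  eps = 1.0e-6; {guards the division when u is near zero}
var
  x, xref, u, Kest: real;
  t: integer;
begin
  u := 1.0;    {a nonzero probe, so the first observation is informative}
  Kest := 1.0; {initial guess, four times too high}
  for t := 0 to 23 do
  begin
    x := K * u; {environment equation}
    Kest := Kest + mu * u * (x - Kest * u) / (eps + u * u); {model update}
    xref := t;
    u := xref / Kest; {control using the current model}
    writeln (t:3, xref:12:6, x:12:6, Kest:12:6)
  end
end.

Each informative step halves the remaining error in Kest, and x comes
to track xref with the usual one-step lag, without any
designer-supplied value of K.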

> I repeat the question in my previous post: doesn't it seem odd to
> you that the "bare-bones" MCT model needs all this added information
> and computing ability to deal with the disturbance, while the
> "bare-bones" PCT model does not?

No, that does not seem odd to me; it seems tautological: a model-
based controller is based on and needs an explicit model of its
environment, including the disturbance -- which is, after all, also
generated by the environment.

> It always has been clear to me. An MCT controller needs to be told
> what the form of the environment equation is, and what the inverse
> of that equation is, and where disturbances might act in the
> environment -- all of this it must be told by the designer of the
> controller, in order for it to work. There is a great deal of hidden
> machinery in the MCT controller, work that is done for the
> controller by its human designer.

My "bare bones" demo shows that its PCT controller, too, must be told
what the form of the environment equation is and what its inverse --
all of this by the designer of the controller. This is not different
from the MCT controller.

What _is_ different in a full-blooded adaptive controller is that it
can discover (an approximation to) the environment equation
independently of any designer. That would be useful in a PCT
controller such as the "bare bones" one as well, I believe.

What is also different is that, depending on how good a model can be
discovered, an MCT controller might find a better control quality (in
terms of integrated square error, for instance) whereas a PCT
controller is more robust in the face of modeling errors. In other
words: an MCT controller may be better if an accurate model is
available; a PCT controller may be better if no accurate model can be
discovered. Note that it has never been my claim that an MCT
controller is _always_ better...

> This is OK when we're talking about engineers designing artificial
> systems to be used by a customer to meet predefined specifications.
> It is not OK when we are talking about how living systems work.
> Living systems do not have the benefit of a control system
> engineer/mathematician standing by to suggest suitable forms for the
> world-model, or to do the mathematical manipulations required to
> derive the inverse of the world-model and implement it as a neural
> calculation.

Living systems, on the other hand, have had the benefit of an
evolutionary process that has finely tuned them to their environment.
And, if that environment is variable, they have "operational laws"
that seem to match the variations of the environment. The better a
living system's modus operandi accommodates the variations of its
environment, it seems to me, the higher its fitness will be.

> And nervous systems do not have infallible floating-point digital
> computers available, or sensors that remain perfectly calibrated, or
> output actuators that respond to driving signals with absolute
> precision and reliability.

That's a different point: how much error can be tolerated in the
system's various components before control quality deteriorates
beyond survivability. Or the other way around: how can nature (or we)
design components or complete systems that are as robust to errors
(but which?) as possible.

Greetings,

Hans

[From Bill Powers (970415.1524 MST)]

Hans Blom, 970415d--

> What is also different is that, depending on how good a model can be
> discovered, an MCT controller might find a better control quality (in
> terms of integrated square error, for instance) whereas a PCT
> controller is more robust in the face of modeling errors. In other
> words: an MCT controller may be better if an accurate model is
> available; a PCT controller may be better if no accurate model can be
> discovered. Note that it has never been my claim that an MCT
> controller is _always_ better...

I agree that the basic issue here is "how good a model can be discovered."
Remember that "discovering" a model itself requires complex computations
(according to your descriptions of proposed methods), and these computations
have to be counted as part of the model. In artificial systems they are
provided by the modeler; they are not themselves built up from experience.

Going along with that issue is, for various control systems low in the
nervous system, "How much computing capacity is available for constructing
models, and how fast and accurate are the computers?" The great advantage of
the PCT model is that it requires very little computation, and very little
accuracy of computation on the output side at least, yet it can achieve
performance that is very close to that of an ideal MCT model, with the
adjustment of only a few parameters.

>> This is OK when we're talking about engineers designing artificial
>> systems to be used by a customer to meet predefined specifications.
>> It is not OK when we are talking about how living systems work.
>> Living systems do not have the benefit of a control system
>> engineer/mathematician standing by to suggest suitable forms for the
>> world-model, or to do the mathematical manipulations required to
>> derive the inverse of the world-model and implement it as a neural
>> calculation.

> Living systems, on the other hand, have had the benefit of an
> evolutionary process that has finely tuned them to their environment.
> And, if that environment is variable, they have "operational laws"
> that seem to match the variations of the environment. The better a
> living system's modus operandi accommodates the variations of its
> environment, it seems to me, the higher its fitness will be.

Exactly the same argument can be made in favor of the PCT model, although I
hate to rely on "evolution took care of all that" as a real argument. The
question is, how much detail must evolution take care of to produce a
workable system? The simpler the system, the more likely it is to be a
feasible product of evolution.

>> And nervous systems do not have infallible floating-point digital
>> computers available, or sensors that remain perfectly calibrated, or
>> output actuators that respond to driving signals with absolute
>> precision and reliability.

> That's a different point: how much error can be tolerated in the
> system's various components before control quality deteriorates
> beyond survivability. Or the other way around: how can nature (or we)
> design components or complete systems that are as robust to errors
> (but which?) as possible.

In the present context, I think the real point is how much error there can
be in the system's various components before the MCT model's imperfections
exceed those of the simpler PCT model. As I hinted in a previous post, we do
not need the errors to be very large before the PCT model performs better
than the MCT model. "Near-perfection" is a condition that occurs only in a
very narrow region, and it requires components and computations with
exceedingly precise characteristics to be achieved. When you start comparing
the PCT and MCT models using _realistic_ ideas about precision and
complexity, the picture begins to look very different.

Best,

Bill P.

[From John Anderson (970416.1030 EDT)]

> [From Bill Powers (970415.1524 MST)]

> The simpler the system, the more likely it is to be a
> feasible product of evolution.

I agree with your assessment that MCT seems too complicated to apply to
living organisms. But be careful here. Because evolution always has to
work with what's already there, what it produces need not always turn
out to be the simplest design.

John

--
John E. Anderson
jander@unf.edu

[From Bill Powers (970416.1118 MST)]

John Anderson (970416.1030 EDT)--

>> The simpler the system, the more likely it is to be a
>> feasible product of evolution.

> I agree with your assessment that MCT seems too complicated to apply to
> living organisms. But be careful here. Because evolution always has to
> work with what's already there, what it produces need not always turn
> out to be the simplest design.

You're right. But I think that even if evolution were to come up with a
complex way of solving a problem, the likelihood of accurate replication
would go down as the complexity goes up. While we can imagine a complex
system that computes inverse dynamics and kinematics to control limb
movements, the likelihood that all the necessary computations would be
passed without change to succeeding generations would seem to be much
smaller than if only a few computations were involved. And if we imagine
populations in which these two organizations compete, the one that is less
vulnerable to natural variations would inevitably come to predominate.

We don't have to think in terms of the simplEST design; just the simplER design.

Best,

Bill P.