# MCT controller code

[Hans Blom, 970324]

Here is the code of the MCT controller that is the result of my
earlier analysis. It realizes its control objective (x=r) perfectly,
and its reconstruction of the disturbance d is as good as can be. The
only complaint could be that it is TOO good: the control sequence u
-- although exactly such that x is made equal to r -- oscillates. But
that is necessary: with any other control sequence, control would be
worse. Moreover, we didn't specify that such "nervous" behavior was
not allowed.

On time indices: it is usual to suppose that the calculation of u
takes negligible time compared to dt. Thus the program assumes that,
immediately after x(t) has been observed, u(t) will be applied. If
the computation of u takes more time, the computation becomes more
complex because an additional delay must be dealt with.

Greetings,

Hans

program theodolite_model_1;

const
  J = 1.0;              {theodolite characteristic}

var
  x,                    {perceived position}
  v,                    {perceived velocity}
  u,                    {controller's output}
  r,                    {reference level}
  d,                    {disturbance estimate}
  p,                    {prediction of next observation}
  dt,                   {time increment; var so it can be changed}
  t: real;              {time; for display/loop control purposes}

procedure observe;
{this procedure implements the theodolite motion equations; it
 generates the NEXT observation(s) x(t+dt) and v(t+dt), given u(t)
 and d(t); thus it also shifts the time by dt; note that aa, vv and
 xx are LOCAL variables, not visible outside this procedure; declaring
 them as typed "const"s saves their values from run to run; aa, vv and
 xx are initially known to be zero}

  function disturbance: real;
  {this function implements the disturbance, which is known only
   to procedure observe, the theodolite's motion simulator}
  begin
    if (t < 4.0) or (t > 5.0) then
      disturbance := 0.0
    else
      disturbance := 10.0
  end;

const
  aa: real = 0.0;
  vv: real = 0.0;
  xx: real = 0.0;

begin
  aa := (u + disturbance) / J;
  xx := xx + vv*dt + 0.5*aa*dt*dt;
  vv := vv + aa*dt;
  t := t + dt;
  x := xx; v := vv      {the controller may use these}
end;

function reference (t: real): real;
{this function specifies the reference level r(t) for x(t)}
begin
  if t < 2.0 then
    reference := 0.0
  else if t < 6.0 then
    reference := (t - 2.0)
  else
    reference := 4.0
end;

procedure control;
{this procedure implements the controller; it knows the motion equa-
 tions and J, is allowed to observe x(t) and v(t), and delivers u(t)}
begin
  u := 2.0 * J * (reference (t+dt) - x - v*dt)/(dt*dt) - d;
end;

begin
  dt := 0.1;            {or any other value}
  t := 0.0;             {start at t=0}
  r := 0.0;             {initially zero}
  x := 0.0;             {initially zero}
  v := 0.0;             {initially zero}
  d := 0.0;             {estimate of disturbance}
  repeat
    writeln (t:12:3, r:12:6, x:12:6, v:12:6, d:12:6, u:12:6);
    r := reference (t+dt);  {specify desired x}
    control;                {controller computes u}
    {predict next observation}
    p := x + v*dt + 0.5 * (u + d) * (dt*dt) / J;
    {time shifts by dt here}
    observe;                {observe x (and v?)}
    {update disturbance estimate}
    d := d + 2.0 * J * (x-p) / (dt*dt)
  until t > 9.0
end.
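As a cross-check, the loop above can be transcribed into Python (a sketch only, not part of the original program; same constants and same update order as the Pascal). Running it confirms both claims: x matches r to rounding error, except for one step after the disturbance changes, while u alternates between large positive and negative values.

```python
# Python transcription of the Pascal MCT loop above; same constants, dt = 0.1.
J = 1.0    # theodolite characteristic

def disturbance(t):
    # 10.0 on the interval [4, 5], zero elsewhere (as in the Pascal)
    return 0.0 if (t < 4.0) or (t > 5.0) else 10.0

def reference(t):
    # 0 until t = 2, then a unit ramp up to t = 6, then constant 4
    if t < 2.0:
        return 0.0
    if t < 6.0:
        return t - 2.0
    return 4.0

dt, t = 0.1, 0.0
x = v = u = d = 0.0   # controller side
xx = vv = 0.0         # theodolite side
peak_u = 0.0          # track the oscillation amplitude

while True:
    # controller computes u from the observed x, v and the estimate d
    u = 2.0 * J * (reference(t + dt) - x - v*dt) / (dt*dt) - d
    peak_u = max(peak_u, abs(u))
    # predict the next observation
    p = x + v*dt + 0.5 * (u + d) * (dt*dt) / J
    # "observe": the theodolite moves under u plus the REAL disturbance
    aa = (u + disturbance(t)) / J
    xx = xx + vv*dt + 0.5*aa*dt*dt
    vv = vv + aa*dt
    t = t + dt
    x, v = xx, vv     # error-free observation
    # update the disturbance estimate from the prediction error
    d = d + 2.0 * J * (x - p) / (dt*dt)
    if t > 9.0:
        break
```

At the end of the run x equals reference(t) to rounding error, while peak_u comes out in the tens: the ramp alone forces |u| = 2*J/dt = 20, so the "nervousness" grows as dt shrinks.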

[From Bill Powers (970324.2125 MST)]

Hans Blom, 970324 --

I've just got back from a nightmarish weekend of air travel and I probably
shouldn't comment on anything in this state of mind, but when I saw your
posts tonight -- the program and your comments back and forth with Martin
Taylor, I found that it's either going to be insomnia or getting this off my
chest.

That is the sloppiest piece of modeling I have seen in a long time, Hans.
You are using the same variable names for different variables; you are
assuming that the two models behave identically and then deriving the values
of variables to "discover" that the assumption is true; you are letting the
controller know the future value of the reference signal; and, worst of all,
your program generates violently oscillating forces that would quickly
destroy anything attached to it if it didn't blow its motor out first.

On top of that you're fencing with Martin about whether the theodolite
"really exists." I've been trying to bring you down to earth to design
something that will actually control the ding an bloody sich, and you come
back with this sophomoric claptrap. I think your bluff is being called,
Hans: you haven't shown that you have the faintest idea how to design a real
controller for a real physical object.

And that, my friends, is what is known on the net as a FLAME.

Worst,

Bill P.

[Hans Blom, 970325]

(Bill Powers (970324.2125 MST))

I've just got back from a nightmarish weekend of air travel and I
probably shouldn't comment on anything in this state of mind ...

Well, nice to get to know you in this state of mind as well. Thanks!
(I'm in doubt whether to finish this sentence with a ;-).

That is the sloppiest piece of modeling I have seen in a long time,
Hans.

Thanks, Bill ;-).

You are using the same variable names for different variables ...

I was very careful to avoid that, and I'm sure I did. I used names
like aa, vv and xx for the simulation, for instance, and a, v and x
for the controller. And explicit assignments x:=xx and v:=vv to
indicate what the controller is able to observe. I even hid the code
that generates the disturbance in a place where it cannot be accessed
by the main program. Tell me where you (erroneously) think I went
wrong...

... you are assuming that the two models behave identically

Yes, we agreed that the MCT controller could use a perfect model.
Remember? Your PCT model is allowed to do the same, by the way...

... and then deriving the values of variables to "discover" that
the assumption is true;

This I don't understand.

... you are letting the controller know the future value of the
reference signal ...

You do, too: I'll give an analysis of _your_ code in the next post.

which generates violently oscillating forces that would quickly
destroy anything attached to it if it didn't blow its motor out
first.

No, I'm not proud of that. What I was showing was that a standard
"cookbook" design process would lead to a controller that fulfills
its design objective (x=r) perfectly. If this perfection is wanted,
the "violently oscillating forces" are a necessary result, and
analysis will show that this is true. By the way, didn't I mention
that the controller was _too_ good?

Note that you implicitly say here that x=r is not the _only_ design
requirement. There are additional (unspecified) ones, obviously,
because you don't accept the result of this design. That is normal,
Bill. We frequently discover that the design result is unacceptable
and thus the design process is incomplete, simply because we forgot
to specify certain requirements that the controller should fulfill.
Such discoveries actually unearth additional requirements that a
subsequent, better design should incorporate. In AI circles such an
iterative design process is even given an honorable name: the rapid
prototyping method. Implement the user's wishes, note that the user
doesn't like the result, and hear him complain (flame) about what is
wrong; his complaints will most likely carry information about
missing requirements, however uncivilly uttered. This iterative
design method is meant to arrive, step by step, at the exact
specifications of a design given initially fuzzy or incomplete
knowledge or information. As a result, I can now design a new
controller that takes this additional constraint -- quiet operation
-- into account. Does your state of mind at this moment allow you to
see that?

You may, however, be able to see that, whatever its bad properties,
the first design _does_ fulfill its only design requirement (x=r).
Hans: you haven't shown that you have the faintest idea how to
design a real controller for a real physical object.

Can you tell me which reasoning led you to this conclusion? And what
is a "real" controller? One that _you_ could design?

And that, my friends, is what is known on the net as a FLAME.

And this, my friends, as using a flame constructively, as a step in
getting to know the ultimate requirements of the system to be built.

Greetings,

Hans

[From Bill Powers (970324.0100 MST)]

OK, after a few hours' sleep let's see if we can approach the problem
systematically.

[Hans Blom, 970324]

If you want to be obtuse and go on arguing about the Ding an Sich, then I'm
going to call a halt to this discussion on the grounds that you're not
serious about it. You, the engineer, can analyze the physical theodolite,
and you, the engineer, can analyze the controller. If you can't see any
difference between them, we have no basis for talking to each other.

Assuming that you're interested, I will go through your program.

program theodolite_model_1;

const
  J = 1.0;              {theodolite characteristic}

var
  x,                    {perceived position}
  v,                    {perceived velocity}
  u,                    {controller's output}
  r,                    {reference level}
  d,                    {disturbance estimate}
  p,                    {prediction of next observation}
  dt,                   {time increment; var so it can be changed}
  t: real;              {time; for display/loop control purposes}

This list of variables is unacceptable: the controller's internal model of
the physical theodolite is not shown separately, as it must be. What we need
are the following:

x {actual position}
xmod {modeled position}
v {actual velocity}
vmod {modeled velocity}
d {actual disturbance}
dmod {modeled disturbance}

I am substituting "dmod" for what you call "d", and "d" for what you call
"disturbance," for uniformity of notation.

I have agreed that the controller's internal model is to be accurate, but
this doesn't mean it behaves the same as the real theodolite. The FORM of
the model is accurate, and the PARAMETERS are accurate, but the internal
model is driven by u plus the MODELED disturbance, while the theodolite is
driven by u plus the ACTUAL disturbance. If the modeled disturbance differs
from the actual disturbance, there may be a difference (however momentary)
between x and xmod, and between v and vmod. That difference will propagate
to the next iteration, with results to be discovered.

When you do calculations inside the controller, the values of position and
velocity you use MUST be xmod and vmod, not x and v. By using x and v, you
are substituting the actual values of these variables for the modeled
values. This prevents any differences from developing between the model and
the actual theodolite. This strategy allows using the assumed perfection of
the model as a premise in proving that the model is perfect.

The following procedure is acceptable, using x and v for the actual values
of position and velocity:

procedure observe;
{this procedure implements the theodolite motion equations; it
 generates the NEXT observation(s) x(t+dt) and v(t+dt), given u(t)
 and d(t); thus it also shifts the time by dt; note that aa, vv and
 xx are LOCAL variables, not visible outside this function; declaring
 them as "const" saves their value from run to run; aa, vv and xx are
 initially known to be zero}

  function disturbance: real;
  {this function implements the disturbance, which is known only
   to procedure observe, the theodolite's motion simulator}
  begin
    if (t < 4.0) or (t > 5.0) then
      disturbance := 0.0
    else
      disturbance := 10.0
  end;

const
  aa: real = 0.0;
  vv: real = 0.0;
  xx: real = 0.0;

begin
  aa := (u + disturbance) / J;
  xx := xx + vv*dt + 0.5*aa*dt*dt;
  vv := vv + aa*dt;
  t := t + dt;
  x := xx; v := vv   {the controller may use these}
end;

The following procedure is not acceptable: it uses the values of x and v
where it should use xmod and vmod. Also, the value of the reference signal
is taken from the future, at time t+dt, which has not occurred yet:

procedure control;
{this function implements the controller; it knows the motion equa-
 tions and J, is allowed to observe x(t) and v(t), and delivers u(t)}
begin
  u := 2.0 * J * (reference (t+dt) - x - v*dt)/(dt*dt) - d;
end;

The body of this procedure should read, in my notation,

u := 2.0 * J * (reference (t) - xmod - vmod*dt)/(dt*dt) - dmod;

Now for the main loop:

repeat
writeln (t:12:3, r:12:6, x:12:6, v:12:6, d:12:6, u:12:6);

The following line is superfluous, since you use the "reference" function
inside the next line, the "control" function. Its main purpose here is to
shift the next value of the reference signal down one line in the printout,
to make it line up with the values of x. This is what creates the false
appearance of x matching the reference signal perfectly.

r := reference (t+dt); {specify desired x}

The controller, next, must use the corrected values given above:

control; {controller computes u}

Now comes the prediction step: the controller's internal model is used to
predict the next value, not of x but of xmod:

{predict next observation}
p := x + v*dt + 0.5 * (u + d) * (dt*dt) / J;

xmod := xmod + vmod*dt + 0.5*(u+dmod)*(dt*dt)/J

The next statement computes the actual position and velocity based on the
same u as in the "prediction" statement, but using the actual disturbance.
Note that time does not shift by dt where your comment is, but AFTER the
computations in "observe" have been done (check the "observe" function):

{time shifts by dt here}
observe; {observe x (and v?)}

The next statement is now incorrect, since it fails to take into account
possible differences between x and xmod, and v and vmod. And in my notation,
the "d" should be "dmod."

{update disturbance estimate}
d := d + 2.0 * J * (x-p) / (dt*dt)

To derive the correct form, you must use the equations for the predicted and
actual values of x(t+dt):

xmod(t+dt) = xmod(t) + vmod(t)*dt + 0.5*(u(t)+dmod(t))*dt*dt/J, and

x(t+dt) = x(t) + v(t)*dt + 0.5*(u(t)+ d(t))*dt*dt/J

Subtracting the first equation from the second and solving for dmod(t), we
get the difference between the predicted and actual values of x(t+dt), which
is the error in d. Adding this difference to d gives us a corrected value of
d based on the _current_ iteration (the best we can do). This leads to the
expression I used in my program:

dmod :=
dmod + 2*J/dt/dt*(x-xmod-(oldx-oldxmod)-(oldv-oldvmod)*dt);
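This correction can be spot-checked numerically. The values below are arbitrary, chosen only for illustration; after one plant step and one model step, the update recovers the true disturbance exactly, because the inherited position and velocity mismatches are subtracted out:

```python
# Numeric check of the dmod update above, with arbitrary illustrative values.
J, dt = 1.0, 0.1
u = -2.0                        # controller output at time t
oldx, oldv = 0.30, 1.10         # actual state at time t
oldxmod, oldvmod = 0.25, 1.00   # modeled state at time t
d, dmod = 10.0, 4.0             # actual and modeled disturbance

# one step of the actual theodolite and one step of the internal model
x    = oldx    + oldv*dt    + 0.5*(u + d)*dt*dt/J
xmod = oldxmod + oldvmod*dt + 0.5*(u + dmod)*dt*dt/J

# Bill's correction: subtract out the inherited position/velocity mismatch
dmod = dmod + 2*J/dt/dt*(x - xmod - (oldx - oldxmod) - (oldv - oldvmod)*dt)
```

After the update, dmod equals the actual d, since the only remaining difference between x and xmod is the disturbance term.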

In your computation of d, a subtle trick occurs:

d := d + 2.0 * J * (x-p) / (dt*dt)

What is "x-p" as you have computed it? x comes from the "observe" function,
where (substituting for the temporary variables) we have

x := x + v*dt + 0.5*(u + disturbance)*dt*dt/J,
where "disturbance" means the true disturbance.

The expression for p as you wrote it is

p := x + v*dt + 0.5 * (u + d) * (dt*dt) / J;
where "d" is the modeled disturbance.

Note that x and v in this equation are the _actual_ values, and so are
identical to the same values in the other equation. If we now compute x - p,
the first two terms disappear, leaving

(x-p) = 0.5*dt*dt/J*(u + disturbance - u - d)

Substituting this into your

d := d + 2.0 * J * (x-p) / (dt*dt)

we get

d := d + disturbance - d, so

d := disturbance.
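That this reduction holds for any state whatsoever can be spot-checked numerically (the values below are arbitrary; the first two terms cancel algebraically, so only the disturbance terms survive):

```python
# Numeric check that Hans's update reduces to d := disturbance when the
# ACTUAL x and v appear in both the plant step and the prediction.
J, dt = 1.0, 0.1
x0, v0, u = 1.23, -0.45, 6.7   # arbitrary shared state and output
d, true_d = 2.0, 10.0          # modeled and true disturbance

x = x0 + v0*dt + 0.5*(u + true_d)*dt*dt/J   # from "observe"
p = x0 + v0*dt + 0.5*(u + d)*dt*dt/J        # the prediction
d = d + 2.0 * J * (x - p) / (dt*dt)         # Hans's update: d becomes true_d
```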

Of course, if the controller's xmod and vmod actually did equal x and v
respectively, then this would be true: the correction of the modeled
disturbance would be perfect. So your development uses the following logic:

1. Assert that xmod = x and vmod = v.

2. Compute the value of d that will make xmod = x (which you have already
caused to be true by substituting x for xmod).

If xmod is NOT equal to x, and vmod is NOT equal to v, then the correction
of d done above will be wrong. And of course they are not equal; if they
were, d would already be correct!

To sum up:

In your development, you assume from the start that the controller's model
and the theodolite will behave identically, simply because the same equation
is used. But this assumption overlooks the fact that in the controller, the
modeled disturbance can differ from the real one, for at least one iteration
after the real one changes. This assumption effectively forces position and
velocity in the model to match the real position and velocity when they do
not and cannot match. Thus the effects of this momentary mismatch are not
caught by your model; they do not propagate around the loop as they should.

The true effect of the momentary mismatch is that there is always a small
unbalance between the disturbance and the output that is supposed to cancel
it. This shows up in the velocity term, which instead of going to zero, goes
to a small nonzero value that causes a continual departure of position from
the reference position. This value gets smaller as dt gets smaller, but it
never becomes zero, and the disturbance always causes a continuing and
unlimited departure of x from the reference value.

Your model (even in the corrected form) produces an output that alternates
on every iteration between extreme values, with an amplitude that grows as
dt decreases. As dt decreases toward zero, the behavior of your model does
not approach some asymptote, but diverges toward infinity. This is clearly
not a well-behaved controller, if one can call it a controller at all.
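The divergence claim can be isolated in a few lines. On the unit ramp (the program's reference between t=2 and t=6), with zero disturbance and x held exactly at r, the control law reduces to u = 2*J*(rdot - v)/dt and the velocity recursion v := 2*rdot - v, which alternates forever. A sketch under those assumptions:

```python
# Peak |u| on a unit ramp as a function of dt; the recursion follows from
# substituting x_k = r_k into the control law (no disturbance, d = 0).
J, rdot = 1.0, 1.0

def peak_u(dt, steps=50):
    v, peak = 0.0, 0.0
    for _ in range(steps):
        u = 2.0 * J * (rdot - v) / dt   # control law on the ramp
        v = v + u * dt / J              # = 2*rdot - v: alternates forever
        peak = max(peak, abs(u))
    return peak
```

Halving dt doubles the peak output, so u grows without bound as dt goes to zero.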

If there is one thing I am sure of in addition to death and taxes, it is
that you will reject my analysis and assert that yours is correct. I assure
you that if this is what happens, I will be finished with this discussion.
So I look forward to getting more sleep in the future.

Best,

Bill P.

[Hans Blom, 970326b]

(Bill Powers (970324.0100 MST))

Assuming that you're interested, I will go through your program.

Fine. You express some realistic concerns, but I will demonstrate
that I have foreseen them. It seems to be just the naming of the
variables that introduces the confusion.

This list of variables is unacceptable: the controller's internal
model of the physical theodolite is not shown separately, as it must
be. What we need are the following:

my names:

x {actual position} xx
xmod {modeled position} x
v {actual velocity} vv
vmod {modeled velocity} v
d {actual disturbance} disturbance
dmod {modeled disturbance} d

Except for name changes, I have already done what you require here.

The problem may be a misunderstanding: I use the statements x := xx
and v := vv to indicate that the modeled v and x are direct copies
(observations without noise or error) of the "true" variables. That
we agreed upon. The model then works with these "internal variables"
x and v. Just change the names in the program to the ones you suggest
and you will see that the theodolite (aa, vv, xx, disturbance) and
its model (a, v, x, d) are fully separate, except for the explicit
link through the observations x := xx and v := vv.

If the modeled disturbance differs from the actual disturbance,
there may be a difference (however momentary) between x and xmod,
and between v and vmod. That difference will propagate to the next
iteration, with results to be discovered.

Actually, TWO variables stand for the model-x. One is called x, and
it stands for the model-x as it exists directly after an observation.
Since the observation is error-free, xx is copied directly into x. In
technical notation this x would be called x (t | t), that is: the
best estimate of x at time t given observations up to and including
time t. Due to the fact that the measurement is error-free, this
"best estimate" is perfect. The other model-x is called p, and it
stands for the best current estimate of x as it exists _before_ the
next observation is performed. In technical notation it would be
called x (t+dt | t), that is the best estimate of x at time t+dt
given observations up to and including time t (and thus a prediction
or extrapolation). It is only _this_ model-x (called p) that will
deviate from the real x, the _next_ (at time t still unavailable)
observation. Moreover, this deviation carries information: it is used
to compute (model) the disturbance (with a one-sample delay).

And the difference _does not_ cause "integration error" when it
propagates to the next iteration, because it is immediately used to
update the estimate of the disturbance d _right after the next
observation_. At that moment, the model is perfectly in line with
reality again, due to the fact that the observation is error-free. So
there are no "results to be discovered"...

When you do calculations inside the controller, the values of
position and velocity you use MUST be xmod and vmod, not x and v.

My calculations do exactly that (with a change of names).

The following procedure is not acceptable:

u := 2.0 * J * (reference (t+dt) - x - v*dt)/(dt*dt) - d;

The body of this procedure should read, in my notation,

u := 2.0 * J * (reference (t) - xmod - vmod*dt)/(dt*dt) - dmod;

With the name changes mentioned above, I do exactly as you want
(except for the time index of reference, which specifies the value x
_is to have_ in the next iteration).

This is what creates the false appearance of x matching the
reference signal perfectly.

r := reference (t+dt); {specify desired x}

Do you agree that the reference specifies the _desired_ value of x,
and not its _current_ (uncontrollable) value?

Now comes the prediction step: the controller's internal model is
used to predict the next value, not of x but of xmod:

{predict next observation}
p := x + v*dt + 0.5 * (u + d) * (dt*dt) / J;

xmod := xmod + vmod*dt + 0.5*(u+dmod)*(dt*dt)/J

Except for name changes, this is exactly what I do.

Note that time does not shift by dt where your comment is, but AFTER
the computations in "observe" have been done ...

Right. I meant to say that time _is going to_ shift by dt in the next
line's statement, where we "wait for" the next observation.

In your computation of d, a subtle trick occurs:

d := d + 2.0 * J * (x-p) / (dt*dt)

What is "x-p" as you have computed it?

This is why I need an extra variable p, which stands for x (t+dt|t).
In this line, I need both x AND p; I cannot use the same variable for
two things. x-p stands for x(t+dt|t+dt) - x(t+dt|t), that is the
difference between the true observation at time t+dt and its
predicted value.

Note that x and v in this equation are the _actual_ values, and so
are identical to the same values in the other equation.

The reason is that the observations are noiseless/error-free. The
controller was allowed to have a "perfect model", so it knows this
and makes use of the fact.

Of course if the controller's xmod and vmod actually did equal x and
v respectively, then this would be true: the correction of the
modeled disturbance would be perfect.

This _is_ true because the observations are noiseless. Therefore the
model-x and the true x coincide directly after the observation -- but
not necessarily in the prediction p.

So your development uses the following logic:

1. Assert that xmod = x and vmod = v.

Yes: once again, the observation is error-free, so the best estimates
(x and v) of the model coincide exactly with the measurements (of xx
and vv). What better estimates could one think of?

2. Compute the value of d that will make xmod = x (which you have
already caused to be true by substituting x for xmod).

Slightly more subtly: the predicted value of the next observation is
called p or x(t+dt|t). This value p will only coincide with the next
observation called x or x(t+dt|t+dt) if the disturbance is modeled
correctly, i.e. if d = dmod (using your names). If not, the
difference tells us what the true deviation has been (one sample
after the fact).

If xmod is NOT equal to x, and vmod is NOT equal to v, then the
correction of d done above will be wrong. And of course they are
not equal; ...

Once again: directly after an observation the controller knows (due
to its knowledge of the fact that the measurement is noise-free) that
the best estimate x (which you call xmod) it could ever have of xx
(which you call x) is the measured value. It is only the _prediction_
that can be in error -- because d <> disturbance (or d <> dmod, in
your notation).

Please analyze the program once again after you have substituted the
names of the variables to your liking (or do you want me to do
that?). You will see that all is as you would want it, except maybe
where the "meaning" (or time index) of the reference level is
concerned.

What may appear strange to you is the fact that the model uses the
observations directly as its "state variables". But on second
thought: what else can the model's internal variables be based on
than on (maybe suitably processed) observations? It's all perception,
you know ;-).

Greetings,

Hans

[From Bill Powers(970326.0849 MST)]

Hans Blom, 970326b--

You express some realistic concerns, but I will demonstrate
that I have foreseen them. It seems to be just the namegiving of
variables that introduces confusion.

This list of variables is unacceptable: the controller's internal
model of the physical theodolite is not shown separately, as it must
be. What we need are the following:

my names:

x {actual position} xx
xmod {modeled position} x
v {actual velocity} vv
vmod {modeled velocity} v
d {actual disturbance} disturbance
dmod {modeled disturbance} d

Except for name changes, I have already done what you require here.

The problem may be a misunderstanding: I use the statements x := xx
and v := vv to indicate that the modeled v and x are direct copies
(observations without noise or error) of the "true" variables.

But that is exactly what I am objecting to here: you are forcing the
world-model's variables to have exactly the same values that the real
variables have, outside the controller. These values are not being generated
by the world-model, but simply by substitution from the real values. So the
controller is not generating its own predicted value of the theodolite's
position -- it's simply using the theodolite's position directly. There can
never be any difference between the model's output and the real position of
the theodolite, when you do it this way.

If the modeled disturbance differs from the actual disturbance,
there may be a difference (however momentary) between x and xmod,
and between v and vmod. That difference will propagate to the next
iteration, with results to be discovered.

Actually, TWO variables stand for the model-x. One is called x, and
it stands for the model-x as it exists directly after an observation.
Since the observation is error-free, xx is copied directly into x.

When you simply copy one variable into another, you do not have two
variables. You have one variable with two names. The only variable is then
the _actual_ position of the theodolite. The _modeled_ position is not being
used for control.

In the Kalman Filter approach, the observation may be without noise, but it
is not compared directly with the reference signal. It is compared with the
output of the controller's internal model, and that model is then adjusted
to bring the model's output closer to the observed value. But the _model's_
output is what is compared with the reference signal, if we are truly
doing model-based control.

This adjustment takes time and is never exact; it can't be, because the
adjustment is done on the basis of a _difference_ between x and xmod (in my
notation). And if your estimate of d is incorrect even on a single
iteration, the next values of vmod and xmod will _differ_ from the values of
x and v -- but your approach prevents this difference from occurring!

In
technical notation this x would be called x (t | t), that is: the
best estimate of x at time t given observations up to and including
time t. Due to the fact that the measurement is error-free, this
"best estimate" is perfect. The other model-x is called p, and it
stands for the best current estimate of x as it exists _before_ the
next observation is performed. In technical notation it would be
called x (t+dt | t), that is the best estimate of x at time t+dt
given observations up to and including time t (and thus a prediction
or extrapolation). It is only _this_ model-x (called p) that will
deviate from the real x, the _next_ (at time t still unavailable)
observation. Moreover, this deviation carries information: it is used
to compute (model) the disturbance (with a one-sample delay).

This is completely specious reasoning. You may be calculating a predicted p,
but what you are using for control is the observed value of the actual x,
using that value instead of the value generated by the controller's internal
model. You are comparing a perception of the actual x against the reference
signal and using the error to generate u. That is exactly the PCT model, and
it is not model-based control at all! If the input is lost, your model will
immediately lose control, because control is now based on the real-time
observation, and not on the model.

Hans, think this over. If you can't see that what you've done here violates
the architecture of your own model, we really can't go any further.
Everything in your post that follows this point is a rationalization;
instead of just following the mathematical argument through to its
conclusion, you're inserting all sorts of interpretations and extraneous
ideas to justify your results and support your conviction that you've done
nothing wrong. The basic error is in saying x = xx and v = vv. Doing this
bypasses the internal world-model, turning your whole model into a real-time
negative feedback controller with some hybrid notions in common with the MCT
model. But it's not the MCT model that you disclosed to us so many months
ago. It's neither fish nor fowl.

Best,

Bill P.

[Hans Blom, 970327]

Two paradigms converse with each other. Says one to the other: "I
don't understand a word you say". Responds the other: "What did you
say? I don't understand you at all".

(Bill Powers(970326.0849 MST))

The problem may be a misunderstanding: I use the statements x := xx
and v := vv to indicate that the modeled v and x are direct copies
(observations without noise or error) of the "true" variables.

But that is exactly what I am objecting to here: you are forcing the
world-model's variables to have exactly the same values that the
real variables have, outside the controller. These values are not
being generated by the world-model, but simply by substitution from
the real values. So the controller is not generating its own
predicted value of the theodolite's position -- it's simply using
the theodolite's position directly. There can never be any
difference between the model's output and the real position of the
theodolite, when you do it this way.

That is correct, and it is due to the fact that the measurements are
error-free. In fact, the Kalman Filter equation of what a model-based
controller does when _processing an observation_ is

x-model := w1 * x-model + w2 * x-observation

where w1 and w2 are _weights_ which depend on the accuracies (known
or estimated in/by the model) of both x-model and x-observation. Thus
x-model is a weighted average of the internally generated prediction
x-model and the observation. This is the general formula, but we can
distinguish two extremes:

1) x-observation is error-free. In this case, w1=0 and w2=1. This is
the case that obtains in the MCT theodolite controller. The equation
thus reduces to x-model := x-observation, which you will recognize in
my code as x := xx. I could instead have written a code line
somewhere in my program such as

x := 0.0 * p + 1.0 * xx

where in my code p is the predicted model-x and xx the direct
observation. Would that have helped?

2) x-observation is or can be extremely erroneous. This is the
"missing observation" or "walking in the dark" case. The above
formula now has w1=1 and w2=0 and reduces to x-model := x-model,
resulting in open loop behavior.

In intermediate cases, e.g. w1=0.5 and w2=0.5, the new x-model after
an observation uses both the prediction of the model itself AND the
observation, suitably weighted. In that case, the model-x will differ
from the observed x. But not in our example.
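The weighted update and its two extremes can be written out directly. In this sketch w1 and w2 are hand-picked to show the limiting cases; a full Kalman filter would compute them from the accuracies (variances) of the prediction and the observation:

```python
# The measurement-update rule Hans describes, with hand-picked weights.
def update(x_model, x_obs, w1, w2):
    # w1 weights the internal prediction, w2 the observation; w1 + w2 = 1
    return w1 * x_model + w2 * x_obs

p, xx = 3.7, 4.0   # a prediction and an (error-free) observation

x_perfect_obs = update(p, xx, 0.0, 1.0)   # case 1: x := xx, as in the program
x_no_obs      = update(p, xx, 1.0, 0.0)   # case 2: open loop, x := x-model
x_blend       = update(p, xx, 0.5, 0.5)   # intermediate: weighted average
```

Only in the intermediate case does the model-x visibly differ from the observed x, which is why the weighting never shows up explicitly in the noiseless program above.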

The confusion may arise because you cannot see the model "in" the
computer code. That is right: it is "behind" the code. It will need
to explicitly appear _in_ the code only when measurements are noisy.
But that is not the case in our example. And it would make the PCT -
MCT comparison far more difficult to investigate -- except in terms
of performance...

When you simply copy one variable into another, you do not have two
variables. You have one variable with two names. The only variable
is then the _actual_ position of the theodolite. The _modeled_
position is not being used for control.

Is this clear now? The _modeled_ position _is_ being used for
control. It just happens to coincide with the measured position. And
that "coincidence" happens to exist because the measurement is known
to have no error -- whereas the prediction might.

This is completely specious reasoning.

Bill, I'm just rehashing textbook stuff, simplified to the extreme.
And you still don't get it. What am I doing wrong?

Hans, think this over. If you can't see that what you've done here
violates the architecture of your own model, we really can't go any
further.

Bill, what gives you the perception that I'm a blathering idiot? I do
know what I'm talking about. And it's not your lack of knowledge that
worries me, but your attitude. When you say things like this in your
posts, that is -- _not_ in your attempts to reach understanding...

Yet, greetings!

Hans

[From Bruce Gregory (970327.0525 EST)]

Hans Blom, 970327

The confusion may arise because you cannot see the model "in" the
computer code. That is right: it is "behind" the code. It will need
to explicitly appear _in_ the code only when measurements are noisy.
But that is not the case in our example. And it would make the PCT -
MCT comparison far more difficult to investigate -- except in terms
of performance...

Bill, I'm just rehashing textbook stuff, simplified to the extreme.
And you still don't get it. What am I doing wrong?

Einstein is reputed to have said that things should be made as simple
as possible -- but no simpler. It appears that an error-free MCT model
has crossed this line. If the model does not appear in the code, the
example fails to be instructive. This seems to be the reason that Bill
and Martin have such difficulty making sense of "textbook stuff."

Bruce Gregory

[Hans Blom, 970327d]

(Bruce Gregory (970327.0525 EST))

Einstein is reputed to have said that things should be made as
simple as possible -- but no simpler. It appears that an error-free
MCT model has crossed this line. If the model does not appear in the
code, the example fails to be instructive. This seems to be the
reason that Bill and Martin have such difficulty making sense of
"textbook stuff."

Thanks, Bruce. I'll change the line

x := xx

into

x := 0.0 * p + 1.0 * xx

I somehow doubt that this is the problem. Maybe Kuhn is more
applicable here than Einstein ;-).

Greetings,

Hans

[From Bill Powers (970327.0655 MST)]

Hans Blom, 970327--

There can never be any
difference between the model's output and the real position of the
theodolite, when you do it this way.

That is correct, and it is due to the fact that the measurements are
error-free.

That is irrelevant to the point I am making. Suppose that at time t, the
disturbance, which has been zero for several iterations, suddenly becomes
50 units, as in our program. On that iteration, the real theodolite moves to

x(t+dt) = x(t) + v(t)*dt + 0.5*(u+50)*dt*dt/J,

and its velocity becomes

v(t+dt) = v(t) + (u+50)*dt/J.

The world-model, even though it is perfect, moves to

xmod(t+dt) = xmod(t) + vmod(t)*dt + 0.5*(u + 0)*dt*dt/J,

and its velocity becomes

vmod(t+dt) = vmod(t) + (u + 0)*dt/J,

for the simple reason that it cannot know yet that the disturbance has
occurred. It can detect that the disturbance has become nonzero only by
looking at the difference between the model's behavior and the real behavior.

Clearly, at the end of this iteration, the world-model's position and
velocity are different from the real position and velocity. That difference
can then be used to calculate a corrected disturbance for the _next_
iteration, but nothing can be done about the _current_ iteration which has
just finished. You can't go back in time and fix it.

If, however, you use v and x in place of vmod and xmod, this difference is
not allowed to occur; you are forcing the world-model's variables to have
exactly the same values as the real variables, and that greatly changes the
outcome.

The Kalman filter approach cannot do away with this one-iteration lag. It
can make the FORM of the world-model perfect, in principle, but it can't
make the _values of the variables in the model_ perfect. You can't get the
information about the disturbance into the model until one iteration after
the disturbance has occurred. Your interpretation of "perfect model" is
incorrect: it does NOT mean that the values of the variables are identical.
The step by which you make them identical is not defensible.
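The one-step lag can be checked numerically (a sketch with assumed values J = 1, dt = 0.1, u = 0, and the disturbance stepping to 50 on this iteration; variable names follow the equations above):

```python
J, dt = 1.0, 0.1
u, d = 0.0, 50.0         # disturbance steps to 50 on this iteration
x, v = 0.0, 0.0          # real theodolite state
xmod, vmod = 0.0, 0.0    # world-model state (perfect in form)

# the real plant feels the disturbance ...
x = x + v*dt + 0.5*(u + d)*dt*dt/J
v = v + (u + d)*dt/J

# ... but the model must still use d = 0, since it cannot know yet
xmod = xmod + vmod*dt + 0.5*(u + 0)*dt*dt/J
vmod = vmod + (u + 0)*dt/J

# the discrepancy is what reveals the disturbance -- one iteration late
print(x - xmod, v - vmod)  # 0.25 5.0
```

The nonzero differences at the end of the iteration are exactly the information the controller can use to correct the *next* iteration, but not this one.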

I notice, by the way, that you have changed dt to 0.1 sec, which makes the
maximum u less than the limit I specified (100 n-m). This permits you to
ignore this limit in your subsequent programs, which in turn permits you to
say that this program works for whatever dt is chosen.

I'm sorry, Hans. We have gone as far as we can go.

Best,

Bill P.

[Hans Blom, 970407]

(Bill Powers (970327.0655 MST))

Picking up on an old discussion:

Suppose that at time t, the disturbance, which has been zero for
several iterations, suddenly becomes 50 units, as in our program. On
that iteration, the real theodolite moves to

x(t+dt) = x(t) + v(t)*dt + 0.5*(u+50)*dt*dt/J,

and its velocity becomes

v(t+dt) = v(t) + (u+50)*dt/J.

The world-model, even though it is perfect, moves to

xmod(t+dt) = xmod(t) + vmod(t)*dt + 0.5*(u + 0)*dt*dt/J,

and its velocity becomes

vmod(t+dt) = vmod(t) + (u + 0)*dt/J,

for the simple reason that it cannot know yet that the disturbance
has occurred. It can detect that the disturbance has become nonzero
only by looking at the difference between the model's behavior and
the real behavior.

Clearly, at the end of this iteration, the world-model's position
and velocity are different from the real position and velocity. That
difference can then be used to calculate a corrected disturbance for
the _next_ iteration, but nothing can be done about the _current_
iteration which has just finished. You can't go back in time and fix
it.

Right. But in MCT terminology, several things are said differently.

First, "disturbance" is called "unmodeled dynamics", and things that
are not modeled cannot be controlled -- as you have frequently noted.
You consider this to be a defect of MCT, but you forget (or don't
want to hear) how well a disturbance can often be modeled. The best
one could possibly do is to model the disturbance as well as
possible. Usually this succeeds to a large extent, but
things that are fully unpredictable can of course not be modeled at
all. Nor "controlled away", although their (later time) _effects_ may
be "controlled away". In my example, this is done by "reconstructing"
the disturbance (one sample after the fact; in fact, one cannot do
better).

Second, this "reconstruction" is a form of learning. Although it is
impossible to predict the unpredictable, once it has happened it can
be reckoned with.

Third, an "ideal model" is a model that captures everything that
_can_ be modeled. And this obviously does not include things that
cannot be modeled even in principle: fully unpredictable happenings.
My theodolite dynamics model was "ideal" in the sense that the
controller knew the exact equation that generated the observation
given the actions. And my "disturbance model" was ideal in the sense
that it modeled _what could be modeled_.
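The "reconstruction one sample after the fact" can be sketched as follows (an assumed illustration, not Hans's actual estimator: with error-free observations, the velocity equation can simply be solved for d once the next sample arrives):

```python
J, dt = 1.0, 0.1

def reconstruct_d(v_new, v_old, u_old):
    # invert v_new = v_old + (u_old + d)*dt/J for the disturbance d
    # that must have acted over the last interval
    return J * (v_new - v_old) / dt - u_old

# simulate one step of the real plant with a hidden disturbance
u, d = 0.0, 50.0
v_old = 0.0
v_new = v_old + (u + d) * dt / J

print(reconstruct_d(v_new, v_old, u))  # 50.0 -- recovered one sample late
```

Once recovered, the estimate can be fed into the model's prediction for the next interval, which is the "learning" referred to above.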

The Kalman filter approach cannot do away with this one-iteration
lag.

The Kalman filter is the part that builds the model, and we don't
consider it here: the model is just assumed to be available. You mean
that the _controller_ cannot do away with the delay. That is correct:
no model can predict (and control for) the unpredictable. So what's
the complaint? _No_ controller could do better!

Greetings,

Hans

[From Bill Powers (970411.1106 MST)]

Back from Green Valley, AZ, after getting my father moved into his new
apartment.

Hans Blom, 970407 --

(Bill Powers (970327.0655 MST))

Picking up on an old discussion:

Suppose that at time t, the disturbance, which has been zero for
several iterations, suddenly becomes 50 units, as in our program. On
that iteration, the real theodolite moves to

x(t+dt) = x(t) + v(t)*dt + 0.5*(u+50)*dt*dt/J,

and its velocity becomes

v(t+dt) = v(t) + (u+50)*dt/J.

The world-model, even though it is perfect, moves to

xmod(t+dt) = xmod(t) + vmod(t)*dt + 0.5*(u + 0)*dt*dt/J,

and its velocity becomes

vmod(t+dt) = vmod(t) + (u + 0)*dt/J,

for the simple reason that it cannot know yet that the disturbance
has occurred. It can detect that the disturbance has become nonzero
only by looking at the difference between the model's behavior and
the real behavior.

Clearly, at the end of this iteration, the world-model's position
and velocity are different from the real position and velocity. That
difference can then be used to calculate a corrected disturbance for
the _next_ iteration, but nothing can be done about the _current_
iteration which has just finished. You can't go back in time and fix
it.

Right. But in MCT terminology, several things are said differently.

First, "disturbance" is called "unmodeled dynamics", and things that
are not modeled cannot be controlled -- as you have frequently noted.

No, I have said that what is not _perceived_ cannot be controlled.
Perceiving is not modeling. In the terminology you have used, y is the
perception of x: the internal representation of the external variable to be
controlled. If the plant function is f, so that x = f(u), the model of the
plant is f', so that x' = f'(u). The real disturbance is not perceived;
there is no internal variable whose value is derived from sensing the real
disturbance.

You consider this to be a defect of MCT, but you forget (or don't
want to hear) how well a disturbance can often be modeled.

The problem is that you have to be told that a disturbance of a particular
kind is going to occur, so you can incorporate it into your controller's
world-model. The controller itself has no way to do this.

The best one could possibly do is to model the disturbance as
well as possible. Usually this succeeds to a large extent, but
things that are fully unpredictable can of course not be modeled at
all. Nor "controlled away", although their (later time) _effects_ may
be "controlled away". In my example, this is done by "reconstructing"
the disturbance (one sample after the fact; in fact, one cannot do
better).

Fine. And that is the only limitation on the PCT model's way of controlling
the (later) effect of the disturbance, as you can see by looking at the
performance of the PCT model in your "bare-bones" program. But what you
seem to be overlooking is that the PCT model does not need to be told about
the disturbance. The same model works whether that disturbance term is added
to the environmental part of the simulation or not, and it works just as
well as the MCT model _after_ you modify the MCT model to take the
disturbance into account. Doesn't it seem odd to you that the PCT model
counteracts the effects of the disturbance (per your comment, as well as
possible) without containing any term equal and opposite to the disturbance?

In the bare-bones program, the PCT controller is represented by the line

upct := upct + G*(xref - x)

and the MCT controller is represented by

umct := xref/K

The environment function (which converts the output of the controller to a
value of x) is

x := K * umct, or
x := K * upct

What I did was simply add a constant disturbance to the environment function:

x := K*umct + 10, or
x := K*upct + 10

This caused your bare-bones MCT model to compute a value of _x_ 9 units too
large, whereas without the disturbance it computed a value of _x_ 1 unit too
small (as the PCT controller does, and as any controller must). In order to
deal with the disturbance, you would have to add a term d to the MCT controller,

umct := (xref - d)/K,

and then you would also have to add a means for estimating d, implying
still more mechanisms needed in this "bare-bones" controller.

Note that with the disturbance present, the PCT controller is still the same
as it was:

upct := upct + G*(xref - x).

The PCT controller works with or without the added disturbance, and does not
need any modification or added mechanisms.
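The bare-bones comparison can be run directly (a sketch with assumed values K = 1, G = 0.5, xref = 10, and a constant disturbance of 10, following the lines quoted above):

```python
K, G, xref, d = 1.0, 0.5, 10.0, 10.0

# MCT: open-loop inverse of the plant model, with no disturbance term
umct = xref / K
x_mct = K * umct + d           # overshoots xref by the full disturbance

# PCT: integrating error-driven controller, never told about d
upct, x_pct = 0.0, 0.0
for _ in range(100):
    upct = upct + G * (xref - x_pct)
    x_pct = K * upct + d       # error shrinks each pass toward zero

print(x_mct)                   # 20.0
print(round(x_pct, 6))         # 10.0
```

With these assumed numbers, the error in the PCT loop is multiplied by (1 - K*G) = 0.5 on every pass, so x_pct converges on xref without the controller containing any representation of d.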

Other subjects in reply to later posts.

Best,

Bill P.