[From Bill Powers (970411.1148 MST)]

Hans Blom, 970407b--

(Bill Powers (970327.1051 MST))

Performance is not perfect; x is always one iteration behind xref.

We just established in (1) that x being one iteration behind xref is

necessarily the case; there is no escape from causality in a control

system. Any control engineer knows that x _follows_ xref. This being

so, it appears strange to me to call this inescapable property "less

than perfect behavior", unless it is based on a rather Calvinist

philosophical perspective that even the best we could possibly do is

not perfect.

You've changed your tune, considering that it was you who initially pointed

out the supposed advantage of the MCT model over the PCT model -- the MCT

model could control "perfectly," while the PCT model, because it was

error-driven, could not. Now, it seems, perfection is to be defined as

whatever the MCT model is actually capable of doing, however imperfect that

may be in Calvinistic terms. I'm happy to go along with this, provided you

will revise your previous remarks and admit that the PCT model controls just

as pseudo-perfectly as the MCT model does, given the proper parameters as in

theo6MCT or in your bare-bones model.

Your model is silly. You say u = xref/4, and x = 4*u, which is

simply saying x = xref.

Bill, please follow the basic MCT argument. It consists of two steps:

1) modeling; this is concerned with the system discovering what you

call the "environment equation". In our example case I was

specifically allowed to have a "perfect model", so my controller does

not model; it just knows that x := 4*u;

2) control; this is concerned with finding a u such that x = xref;

u := xref/4 follows trivially (in this simple case!).

These are two basic properties of an MCT controller. I have no idea

what's silly about that.

I knew I shouldn't have said "silly." I'll go along with "trivial."
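In outline (a bare-bones sketch with illustrative numbers, not anyone's actual program), the two steps amount to:

```python
# Bare-bones sketch of Blom's two steps (illustrative numbers).
# Step 1, modeling, is skipped here: the controller is granted a
# perfect model of the environment equation x := 4*u.
# Step 2, control: invert the model to get u := xref/4.

K = 4.0           # known environment gain
xref = 22.0       # reference value (illustrative)

u = xref / K      # control law: inverse of the world-model
x = K * u         # environment equation
print(x)          # 22.0 -- x equals xref exactly, the "trivial" case
```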

If some nasty person comes along and adds a disturbance that nobody

told us was going to happen, the PCT model continues to work, while

the MCT model fails.

Whether the PCT model continues to work (or work better than the MCT

controller) depends on the _type_ of disturbance and is an empirical

question. There is a difference in behavior, to be sure.

Were you ever in Public Relations before you took up control engineering? I

would agree that there is a "difference in behavior." The disturbance of 10

units is not resisted by the MCT model, and x departs significantly from the

reference value. That is, indeed, different.

Anyway, in

case of a disturbance the environment equation would be

x := 4*u + d

which the MCT controller would need to know, in addition to any

properties of the disturbance (in terms of probability distribution)

that might be known. MCT considers the disturbance to be generated

somehow by the environment. Thus d belongs to the environment

equation. And it is the model's task to describe the environment,

_including d_!

So this is NOT a bare-bones model, is it? How is the model going to describe

the environment, including d? It seems to me that the only way this can be

done is for the _modeler_ (you) to rewrite the program specifically to

calculate d.

Substitute this line:

x := K * upct + 10; {environment equation}

and you have a different environment equation, so a new model is

needed, different from the previous x := K * upct. In the "bare

bones" case, where the MCT controller lacks a model-building or

-updating facility, MCT control would obviously not be good.

OK, we agree on that. I repeat the question in my previous post: doesn't it

seem odd to you that the "bare-bones" MCT model needs all this added

information and computing ability to deal with the disturbance, while the

"bare-bones" PCT model does not?

NOTHING ABOUT THE PCT MODEL HAS TO BE CHANGED TO HANDLE THIS

DISTURBANCE. In the MCT model you'd have to change the model so as

to subtract out that disturbance (after somehow deducing that it's

there).

In an intact MCT controller -- one that includes a model-builder --

nothing has to be changed either. My MCT theodolite controller _did_

routinely change the (disturbance) model "after somehow deducing that

it's there", which was equally simple. But we'd better reserve

model-building for later, when it is fully clear how the control part

of an MCT controller works. Or _is_ that clear by now?

Yes, it's clear. It always has been clear to me. An MCT controller needs to

be told what the form of the environment equation is, and what the inverse

of that equation is, and where disturbances might act in the environment --

all of this it must be told by the designer of the controller, in order for

it to work. There is a great deal of hidden machinery in the MCT controller,

work that is done for the controller by its human designer. This is OK when

we're talking about engineers designing artificial systems to be used by a

customer to meet predefined specifications. It is not OK when we are talking

about how living systems work. Living systems do not have the benefit of a

control system engineer/mathematician standing by to suggest suitable forms

for the world-model, or to do the mathematical manipulations required to

derive the inverse of the world-model and implement it as a neural

calculation. And nervous systems do not have infallible floating-point

digital computers available, or sensors that remain perfectly calibrated, or

output actuators that respond to driving signals with absolute precision and

reliability.
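To summarize the disturbance case numerically (again a bare-bones sketch, not either of our actual programs):

```python
# Bare-bones sketch of the disturbance case: the environment is
#     x := K*u + d
# with an unannounced disturbance d = 10.  The MCT controller keeps
# using its old inverse model u = xref/K; the PCT controller is the
# error-driven rule u := u + G*(xref - x), unchanged from before.

K, d, xref = 4.0, 10.0, 22.0
G = 1.0 / K                # illustrative loop gain

# MCT, open loop, world-model unaware of d:
u_mct = xref / K
x_mct = K * u_mct + d
print(x_mct)               # 32.0 -- the disturbance passes straight through

# PCT, closed loop, iterated to steady state:
u_pct = x_pct = 0.0
for _ in range(50):
    u_pct += G * (xref - x_pct)
    x_pct = K * u_pct + d
print(x_pct)               # 22.0 -- the disturbance is opposed, no model change
```

With G = 1/K the loop settles in two steps; any G with 0 < G*K < 2 converges.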


--------------------------------------

We have yet to take up the subject of the effects of non-ideal computations

and functions in the MCT and PCT controllers. The only hint of this subject

came a month or so ago when I raised the subject of accuracy in computing

the inverse of the world-model. You indicated that this subject was new to

you, and that it needed some thought. When we get into this more in detail,

the main advantages of the PCT controller will become more evident. We can

see them even now.

Consider your MCT bare-bones model (without the disturbance, to make the

point simpler). The environment function is

x = K*umct

and the controller's control law, obtained by inverting the world-model, is

umct := xref/K.

Suppose the world-model is a little bit in error -- K has a value that is 10

percent too high. Clearly, the MCT model will simply produce a value of x

that is 10% too low. Of course the adaptive process, if included, would

eventually correct that error, but let's see how the PCT model deals with it.

The PCT controller is

upct := upct + G*(xref - x),

and the equivalent error would be for G to be 10% too large.

If you run the bare-bones model using 1.1*G instead of just G, the PCT model

produces a final value of 22.0909..., compared with 22.000... when the

optimum value of G is used. The result is off by 1/2 percent. The MCT model,

given 1.1*K instead of K, produces a value of 20.909, which is 10% low

relative to the best possible value of 22, achieved with the optimum value

of K in the world-model.

So the PCT model is affected by this error in the output sensitivity only

1/20 as much as the MCT model is affected. In order just to _match_ the

performance of the PCT model, the MCT model would need an adaptive process

that could adjust K to the optimum value with an accuracy of 0.5%. The

closed-loop PCT approach can handle variations in the output sensitivity

with the same accuracy, but without requiring any adaptation.
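In outline, the comparison runs like this (a bare-bones sketch, not the theo6MCT program itself; the exact final values quoted above depend on the details of that program):

```python
# Bare-bones sketch of the sensitivity comparison: each controller
# gets a parameter that is 10 percent off.  Illustrative numbers only;
# the figures quoted in the text come from the actual theo6MCT runs.

K, xref = 4.0, 22.0

# MCT: the world-model's gain is believed to be 1.1*K.
u_mct = xref / (1.1 * K)
x_mct = K * u_mct          # about 20.0 -- roughly 10% below xref
print(x_mct)

# PCT: the loop gain is 10% too large; the integrating controller
# still drives the error to zero.
G = 1.1 / K
u_pct = x_pct = 0.0
for _ in range(200):
    u_pct += G * (xref - x_pct)
    x_pct = K * u_pct
print(x_pct)               # converges to 22.0, the reference value
```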

In a digital computer, achieving an accuracy of one-half percent is no

problem. But in a nervous system, where most processes are analog and JNDs (just-noticeable differences)

run to 5% of the signal magnitude, it is highly unlikely that ANY output

computation (aside from those using symbols and pen-and-paper) can approach

an accuracy of one-half percent. Thus with respect to handling variations in

output sensitivity, the PCT model does about as well as possible without

requiring any adaptive mechanisms at all. And if the inherent errors in the

nervous system computations are any larger than one-half percent, the PCT

model can do _better_ than the _adaptive_ MCT model, for this simple case.

The MCT model, because it is basically open-loop, is inherently sensitive to

computational errors and miscalibrations. This is why it MUST contain an

adaptive process. Without the adaptation, the MCT model would drift away

from optimum performance rather quickly, if it had to be built with real

components and work in the real world.

The PCT model, on the other hand, is inherently _insensitive_ to many

computational errors and changes in environmental parameters. It is

commonplace for real-time negative feedback systems to reduce errors due to

disturbances and parameter variations by a factor of 20 and much more,

without any adaptive methods or world-models being needed.

It is perfectly possible to design a PCT system with adaptation. We have

tried two such designs, both of which work well. Others are quite possible

-- including some that use the Kalman-filter approach but do not need a

model of the environment. But because the PCT system is inherently far less

sensitive to computation errors and changes of system parameters, the amount

and precision of adaptation required to reach a given level of performance

are far less than in the MCT model.

The MCT approach is ONE approach to adaptive control. Because it can be made

to work, many people have accepted it and devoted themselves to elaborating

on it. But there are other approaches, and if the same time and effort had

been devoted to them, they would probably look just as good. The PCT

approach obviously needs attention from mathematicians and just plain

engineers; it is much too early to give up on it.

Best,

Bill P.