[Hans Blom, 970213]

(Bill Powers (970212.1815 PST))

I have no problem with modeling; it's usually possible to come up

with a reasonable model of a physical system, as long as you don't

try to use the model to predict a long distance into the future. The

issue here is control: using any means to make the output of some

physical process follow a preselected path.

However, it seems to me that the problem I proposed is simple

enough: to sketch in how the MCT model would be used to control

the value of x for a plant in which

dx/dt = A*u(t).

I have selected this simple plant because it involves one

integration: that is, x = integral( A*u(t)*dt). I believe that any

system with the architecture you are proposing will have difficulty

controlling this plant, because it will be unable to correct for

integration errors in the model.

However, if you are able to show that it doesn't have this problem,

I will have learned something of value, so I hope you'll address

yourself to laying out the MCT controller for this plant.

Will do. Reasonable enough. And I'll spell it out so clearly that

even Rick will be able to follow my argument ;-).

In view of your statement that you have no trouble with modeling,

let's first assume that a perfect model is available. A perfect model

is, of course, identical to the "system to be controlled":

dx/dt = a*u(t)

or

dx = a*dt*u(t)

[In a more formal derivation, I would need to use different symbols

for the real system and for its model; since we assume a perfect

model for now, the two coincide and we can dispense with this formal

nicety].

In order to avoid integrals, I'll reformulate this as a difference

equation

x(k+1)-x(k) = a*dt*u(k)

where dt is the sampling/control interval. Rewrite this to

x(k+1) = x(k) + a*dt*u(k)

and we have the standard difference equation form. Note that this

equation has the character of a prediction: at sample instant k, and

knowing only x(k) and u(k) [and, of course, a (given by the model)

and dt (any small enough value will do)], we also know x(k+1), the

value of x one sample time ahead.
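To make the prediction step concrete, here is a minimal sketch in
Python (the numerical values are mine, chosen only for illustration):

```python
# One-step prediction x(k+1) = x(k) + a*dt*u(k), with arbitrary example values
a = 2.0      # model parameter a (given by the model)
dt = 0.1     # sampling/control interval (any small enough value will do)
x_k = 1.0    # current state x(k), measured
u_k = 3.0    # current control u(k), known
x_next = x_k + a * dt * u_k  # predicted value of x one sample time ahead
print(x_next)  # 1.6
```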

Now, in the control law that I have used thus far, we desire the

following relationship

x(k+1) = r(k+1)

where r is the prescribed reference level, i.e. the value that x is

to have in the immediate future. It would make no sense to specify a

reference level r(k) for x(k) at sample instant k, where x(k) has a

known value, is fixed, and thus cannot be manipulated anymore. This

formula shows that control is all about manipulating the _future_.

By combining both formulas we get

r(k+1) = x(k) + a*dt*u(k)

and, solving for u, we have

u(k) = [r(k+1)-x(k)] / (a*dt)

This is the computation of u in case of a perfect model. The dt in

the denominator, by the way, tells us that this control law is

differentiator-like.

If we insert the control law into the model equation, we get

x(k+1) = x(k) + a*dt*u(k)

= x(k) + a*dt*[r(k+1)-x(k)]/(a*dt)

= x(k) - x(k) + r(k+1)

= r(k+1)

That is, the controlled variable goes to the specified reference

value (1) in one step and (2) without error and (3) even if the

reference value changes over time in whichever way (assuming no

limitations on u).
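This one-step, error-free tracking is easy to check numerically. A
minimal sketch (Python; the values of a, dt and the reference sequence
are made up for illustration):

```python
# Perfect-model case: plant and model coincide, x(k+1) = x(k) + a*dt*u(k)
a, dt = 2.0, 0.1
x = 0.0                          # initial state x(0)
refs = [1.0, 3.0, -2.0, 0.5]     # r(k+1): reference changing in whichever way
errors = []
for r_next in refs:
    u = (r_next - x) / (a * dt)  # control law u(k) = [r(k+1)-x(k)]/(a*dt)
    x = x + a * dt * u           # plant step
    errors.append(x - r_next)    # control error x(k+1) - r(k+1)
print(errors)  # all (numerically) zero: reference reached in one step, each step
```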

Let us now analyze what happens if we have an imperfect model. In

that case, we must select a suitable model structure, i.e. some

(regression) relationship between x(k+1) and previous values of x

and/or u that "explains" the data sufficiently accurately. Choosing

such a relationship belongs to the domain of modeling. Let's assume

that the modeling is well done and comes up with the simple structure

that best resembles reality:

x(k+1) = x(k) + a*dt*u(k) + e(k)

where the difference equation is of the same form but an additional

term e, the modeling error (also called "noise" or "disturbance"),

represents all deviations between model and reality. In the imperfect

model case, the model's a will also not coincide exactly with the

real a (this makes the analysis slightly more complex, so I'll skip

it for now). We compute u from the above expression and the control

goal x(k+1) = r(k+1):

r(k+1) = x(k) + a*dt*u(k) + e(k)

We do not know e(k) and assume it to be zero. Why? Well, if we don't

know anything about e, it could have any value at all. However,

modeling assures that e will be (close to) zero _on average_; in

fact, modeling has as its primary goal to pick such a model that e

is as small as possible, on average, in the data that were analyzed

when constructing the model.

Thus, solving for u, we get the same (on average "best") control law

as above:

u(k) = [r(k+1)-x(k)] / (a*dt)

Let's see how good control is, i.e. what actually happens to x(k+1), when we

use this control law in the noisy/imperfect model case:

x(k+1) = x(k) + a*dt*u(k) + e(k)

= x(k) + r(k+1) - x(k) + e(k)

= r(k+1) + e(k)

or

x(k+1) - r(k+1) = e(k)

Thus the control error at every sample/control instant is equal to

the modeling error. That is why we want good models...
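That the control error equals e(k) at every step, without accumulating,
can also be checked numerically. A sketch (Python; the noise
distribution is my own arbitrary choice for illustration):

```python
import random

random.seed(1)
a, dt = 2.0, 0.1
x = 0.0
for k in range(20):
    r_next = 1.0                         # constant reference, for simplicity
    e = random.uniform(-0.05, 0.05)      # modeling error e(k), unknown to the controller
    u = (r_next - x) / (a * dt)          # control law assumes e(k) = 0
    x = x + a * dt * u + e               # the real plant includes e(k)
    assert abs((x - r_next) - e) < 1e-9  # control error == e(k), step after step
```

Because each step's error is that step's e(k) alone, the error stays
bounded by the noise and does not grow over the 20 steps.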

Note that, in the above, x must be measurable at all (sample) times

(i.e. there is no "walking in the dark" in this derivation). Note

also that the derivation is valid for any and all sample instants.

Thus, in particular, the control error does not increase over time,

as seems to be your worry.

As the control law formula shows, the computation of u(k) depends on

knowledge of r(k+1), x(k), a (or rather its estimate by the model)

and dt (and on accurate subtraction, multiplication and division, of

course!).

All right so far?

Greetings,

Hans