[Hans Blom, 950927]

Here are the results of our new challenge, model-based versus PCT
control. As before, the smaller the RMS ERROR, the better the
controller.
The "real world" is:
x [i] := k [i] * u [i] + d1 [i]
where
k [i] := 3.0 + 2.0 * d2 [i] / 1000.0
and d1 [i] and d2 [i] are low pass filtered random sequences.
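
I have not listed how d1 and d2 are generated; any slowly varying
random signal will do. Here is a minimal sketch that assumes a
first-order low-pass filter driven by uniform noise -- the
coefficient flp, the amplitude amp and the value of maxtable are
illustrative, not necessarily what I used:

program makedisturbances;
const
  maxtable = 1000;          {number of iterations (assumed)}
  flp      = 0.95;          {low-pass filter coefficient (assumed)}
  amp      = 1000.0;        {noise amplitude (assumed)}
var
  d1, d2 : array [0 .. maxtable - 1] of real;
  s1, s2 : real;
  i      : integer;
begin
  randomize;
  s1 := 0.0; s2 := 0.0;
  for i := 0 to maxtable - 1 do
  begin
    {first-order low-pass filters driven by uniform random noise}
    s1 := flp * s1 + (1.0 - flp) * amp * (2.0 * random - 1.0);
    s2 := flp * s2 + (1.0 - flp) * amp * (2.0 * random - 1.0);
    d1 [i] := s1;
    d2 [i] := s2
  end
end.
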
My model of the "world" is:

   x-hat [i] := k-hat [i] * u [i] + d-hat [i]

where for the "noise" terms k-hat and d-hat I assume:

   k-hat [i] := k-hat [i-1] + m [i]
   d-hat [i] := d-hat [i-1] + n [i]

The suffix -hat denotes an estimate, and m and n are assumed to
be (unknown) random noise. The model's prediction is called x-hat,
and it is this value that determines how to control. The observed
value (the "real" x) is called y, and it is this value that is used
to update the model -- and thus only indirectly to control.

Note that this program models a simpler noise process than my
previous program did: no first-derivative term, just a zero-order
hold, i.e. both k and d are assumed to remain constant from
iteration to iteration. No doubt the derivative method would be
even better.
But the challenge results below show that the quality of control
does not depend on derivatives but on the fact that _knowledge
accumulation_ takes place, which makes _prediction_ possible. Even
this simplest possible kind of prediction provides quite an
improvement over the PCT method: the PCT controller's RMS error
is from 50% to more than 100% larger.
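
For completeness, this is roughly what a first-derivative version of
the model would look like -- a sketch of the idea only, not
necessarily the form my earlier program used; kdot-hat and mdot are
just names for the extra slope state and its driving noise:

   k-hat [i]    := k-hat [i-1] + kdot-hat [i-1] + m [i]
   kdot-hat [i] := kdot-hat [i-1] + mdot [i]

and similarly for d-hat. The estimator then has to track one extra
state, with its (co)variances, per parameter.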

              Bill/PCT control     Hans/model-based
   Slowing    RMS ERROR            RMS ERROR

    0.01          10.5                  7.5
    0.02          15.0                  7.4
    0.03          22.5                 10.7
    0.04          30.1                 14.2
    0.05          34.3                 15.6
    0.06          37.5                 18.5
    0.07          43.5                 20.8
    0.08          44.8                 20.7
    0.09          56.2                 30.2
    0.10          61.1                 31.3
    0.11          68.2                 36.0
    0.12          70.4                 37.2
    0.13          78.7                 40.1
    0.14          89.7                 44.1
    0.15          91.8                 45.5
    0.16          92.6                 49.1
    0.17         102.1                 59.5
    0.18         104.6                 60.0
    0.19         108.0                 58.0
    0.20         110.9                 63.9

As before, the first iteration's error is not added to the total RMS
error, because a model-based controller needs that iteration to
initialize itself; to be fair, I made the same change to Bill's
program. This is my code (except for some variable declarations):
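
For reference, declarations along these lines will do (the value of
maxtable here is arbitrary; it only sets the number of iterations):

const
  maxtable = 1000;                  {number of iterations}
var
  i : integer;
  r, d1, d2 : array [0 .. maxtable - 1] of real;
                                    {reference and disturbance tables}
  sum, u, x, y : real;              {error sum, output, prediction, observation}
  k, d, kold, dold : real;          {estimates and their previous values}
  pkk, pdd, pdk : real;             {(co)variances of the estimates}
  pxx, pxk, pxd : real;             {(co)variances of the prediction}
  pmm, pnn : real;                  {assumed variances of the m and n noise}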

sum := 0.0;
u   := 0.0;

{initial estimates; quite robust for different assumptions}
k := 0.1; kold := 0.1; pmm := 100.0;     pkk := 10000.0;
d := 0.0; dold := 0.0; pnn := 1000000.0; pdd := 10000.0;
pdk := 0.0;

{RUN THE MODEL MAXTABLE TIMES}
{============================================================}
for i := 0 to maxtable - 1 do
begin
  {control; compute u}
  if k = 0.0 then
    u := 0.0                      {cannot control, so don't}
  else
    u := (r [i] - d) / k;

  {prediction; at this moment, u is known}
  x   := k * u + d;
  pxx := pkk * sqr (u) + 2.0 * pdk * u + pdd;
  pxk := pkk * u + pdk;
  pxd := pdk * u + pdd;
  pkk := pkk + pmm;
  pdd := pdd + pnn;

  {get response of the "world"}
  y := (3.0 + 2.0 * d2 [i] / 1000.0) * u + d1 [i];

  {correction; at this moment, y is the response to u}
  k   := k + (y - x) * pxk / pxx;
  d   := d + (y - x) * pxd / pxx;
  pkk := pkk - sqr (pxk) / pxx;
  pdd := pdd - sqr (pxd) / pxx;
  pdk := pdk - pxd * pxk / pxx;

  {estimate of m and n noise variances}
  pmm := pmm + (sqr (k - kold) - pmm) / 20.0; kold := k;
  pnn := pnn + (sqr (d - dold) - pnn) / 20.0; dold := d;

  if i > 0 then                   {allow one iteration for initialization}
    sum := sum + sqr (r [i] - y); { ACCUMULATE FOR RMS ERROR }
end;
{============================================================}
sum := sqrt (sum / maxtable);
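
For those who want to see where the correction step comes from:
under the random-walk assumptions above it is, in effect, the
standard Kalman-filter measurement update for the two-element state
(k, d) observed through y = k * u + d. Written out with the names
used in the code (pxx is the predicted variance of x, pxk and pxd
its covariances with k and d, and Kk, Kd are shorthands for the two
gains):

   prediction error:  e   := y - x
   gains:             Kk  := pxk / pxx        Kd := pxd / pxx
   estimate update:   k   := k + Kk * e       d  := d + Kd * e
   variance update:   pkk := pkk - Kk * pxk   (similarly pdd, pdk)

The gains split the prediction error between k and d according to
how uncertain each estimate currently is; the correction shrinks the
(co)variances as evidence accumulates, while the prediction step
grows them again by pmm and pnn. The pmm and pnn updates at the end
of the loop adapt the assumed variances of m and n to the changes
actually observed in k and d. That is where the _knowledge
accumulation_ resides.
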
Bill, check and see. Satisfied? Rick?
Greetings,
Hans