[Hans Blom, 950928]
(Bill Powers (950927.1220 MDT))
Did you get my statement that it was _knowledge acquisition_ and
the subsequent _use_ of this knowledge that provides the
improvement in control?
Yes, but I don't see that this adds to the discussion. It's the
_specific_ knowledge that makes better control possible, not
knowledge in general. I can agree with your statement without
feeling any wiser. Is there some view opposed to this that I don't
know about?
The funny thing is that in model-based control there is no _specific_
knowledge. It's basically all curve fitting, and prediction is
basically nothing but extrapolation of the curve into the future.
Sounds PCT-like in spirit, if not in details, doesn't it?
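To sketch what I mean (not my actual code; just a toy in Python with
made-up numbers): fit a curve to the recent past and read it off one
step ahead.

  import numpy as np

  # Toy illustration of "prediction = extrapolation of a fitted curve".
  rng = np.random.default_rng(1)
  t = np.arange(10, dtype=float)                        # the past ten time steps
  x = 0.3 * t + 1.0 + 0.05 * rng.standard_normal(10)    # noisy observations
  coeffs = np.polyfit(t, x, 1)                          # curve fitting on the past
  x_next = np.polyval(coeffs, 10.0)                     # extrapolate one step ahead
  print(x_next)                                         # roughly 0.3*10 + 1.0 = 4.0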
I'll have to study your explanation for a while. One question that
comes to mind is how the system knows it should use a form for its
world model that is
x = a * u + b.
_I_ knew ;-). Generally, a system does not know the form of the
equation. It might be first order in the variables, or second order
or even more complex. Strangely enough, a first-order model usually
already works very well, and you almost never have to go beyond second
order. A general approach would be to try a sequence of expressions,
in order of increasing complexity, until the model is good enough:
x = constant
x = constant + a * u
x = constant + a * u + b * u^2    etc.
Well, you get the idea. As an alternative, you can run the models in
parallel and check which best "explains" the data. Another alternative
is to estimate only the parameters of the most complex model
and throw away parameters (and terms) that are not significantly
different from zero. But I bet that the latter is _not_ what we
humans tend to do...
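As a toy illustration of the first approach (made-up Python and a
made-up "good enough" threshold; this is not the code of my demo): fit
polynomial models of increasing order and stop at the first one that
explains the data well enough.

  import numpy as np

  def fit_increasing_order(u, x, max_order=3, good_enough=0.05):
      """Try x = c, x = c + a*u, x = c + a*u + b*u^2, ... and return the
      first model whose RMS residual is small relative to the data."""
      for order in range(max_order + 1):
          coeffs = np.polyfit(u, x, order)              # least-squares fit
          rms = np.sqrt(np.mean((x - np.polyval(coeffs, u)) ** 2))
          if rms <= good_enough * np.std(x):            # "good enough" test
              return order, coeffs, rms
      return order, coeffs, rms                         # fall back to the most complex

  # Data from a (noisy) first-order world, x = a*u + b:
  rng = np.random.default_rng(0)
  u = rng.uniform(-1.0, 1.0, 200)
  x = 2.0 * u + 0.5 + 0.01 * rng.standard_normal(200)
  print(fit_increasing_order(u, x))                     # typically selects order 1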
Considering all the different kinds of "plants" that may exist,
wouldn't the adaptive system have to be incredibly lucky to hit on
the right one with the first guess? I seem to recall your saying
something about the problem of "system identification," which is
probably what I'm talking about.
Luck is not required. A mechanism is. This is, indeed, the problem of
system identification, and more specifically the identification of the
order of the system. There, too, practice shows that you seldom need
models more complex than second order. Even if reality is very
complex, our models of it generally aren't. Surprised? Even extremely
exact physics equations seldom go beyond second order.
It seems to me that you're building the solution into the program
from the start.
No, no. Or only partly -- because I was too lazy to model the
"general" case. I could have tried to find the parameters of a higher
order equation, and would have found a bunch of zeroes for those
higher order parameters. Or maybe a higher order model _would_ have
worked as is. Besides, I _modelled_ the noises. I assumed, for
instance (incorrectly), that successive noise samples would have the
same value. Assumptions that are incorrect, but not _too_ incorrect,
usually do little harm, as you have noticed. The map _need not be_ the
territory if control, and not the "truth", is the objective. Although
getting as close to the truth as possible improves control...
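To make the noise assumption concrete (again a made-up sketch, with a
simplified world x = a*u + d and hypothetical names): whatever the
model did not explain at the last step is taken as the disturbance
estimate, and it is assumed to still hold at the next step.

  # Sketch of "successive noise samples have the same value":
  # the world is taken to be x = a*u + d, with a known and d slowly varying.
  a = 2.0                        # assumed known world gain
  x_ref = 1.0                    # reference value for x
  u, d_est = 0.0, 0.0

  for x_obs in [0.3, 0.7, 1.1, 0.9]:      # stand-in observations
      d_est = x_obs - a * u               # the part the model did not explain
      u = (x_ref - d_est) / a             # invert the model, assuming d stays put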
I'll probably have more elementary questions, as I'm not used to
dealing with variances and covariances this way.
Let me give you an analogy. Just like mathematics has invented
"imaginary" numbers -- which non-mathematicians think "really"
consist of two components -- statistics has invented "stochastic"
numbers -- which can also be thought of as consisting of two parts, a
known one and an unknown one. But it does take some getting used to...
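To make the analogy concrete (a sketch only; "Stochastic" is a made-up
class, not anything from a standard library): carry the known part and
the unknown part around together, and let simple arithmetic propagate
both.

  from dataclasses import dataclass

  @dataclass
  class Stochastic:
      mean: float        # the "known" part
      var: float         # the "unknown" part, as a variance

      def __add__(self, other):
          # for independent quantities, means add and variances add
          return Stochastic(self.mean + other.mean, self.var + other.var)

      def scale(self, c):
          # scaling by a constant c scales the variance by c squared
          return Stochastic(c * self.mean, c * c * self.var)

  x = Stochastic(1.0, 0.04)      # 1.0, give or take 0.2 (standard deviation)
  y = Stochastic(2.0, 0.09)
  print(x + y)                   # mean 3.0, variance 0.04 + 0.09
  print(x.scale(3.0))            # mean 3.0, variance 9 * 0.04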
(Bill Powers (950927.2100 MDT))
I rearranged the steps in my model so the output is calculated
first, then the feedback effects. This has the same effect as using
r[i+1] in the place where you said I had made a mistake.
Glad you did that ;-). Notice the improvement?
... ([after] me playing the part of E. coli), The result, for the
maximum slowing factor of 0.2, was
Bill's model, RMS error: 59.42
Hans' model, RMS error: 58.98
Not bad!
So your model still controls better than my model, but not so much
better.
So it's better than a coli? Or a better coli than you?
I also did a plot of the actual k and the computed k for your model.
Your model produces a fluctuating value of k that doesn't have much
resemblance to the actual values of k.
Yes, I've often noticed the same thing before. My blood pressure
controller, for instance: we initially thought of it as trying to
establish the patient's sensitivity to a drug. Now we think of the
estimated parameter more as a "kludge factor" that keeps the
controller in its best operating region.
Especially when the structure of a model is different from the
structure of the "world", the model parameters will have no "real
world" meaning. The map is not the territory. Yet, often control will
be fine.
[if ...] you aren't gaining much from the adaptation of k.
Yes, there are cases when adaptation doesn't help much because it may
not be required in the first place.
For the smallest slowing factor of 0.01 (smoothest changes in
disturbances), my (corrected) model actually controls better than
yours:
Bill's model, RMS error: 4.28
Hans' model, RMS error: 9.49
That is surprising, because with very smooth changes, i.e. if d[i] is
almost equal to d[i+1], the noise model ought to perform best. I must
have done something else wrong.
The two methods achieve control of almost identical quality over the
whole range of disturbance bandwidths.
Is that still the case when you let the "world gain" vary over a much
wider range?
The close similarity between our results makes me wonder whether
your approach isn't actually just some obscure transformation of
mine.
It's more like the other way around: your approach is more like a
subset of my method, or an approximation of it in a certain
(parameter) environment -- with the disadvantage that it does not
perform as well in the general case, and with the advantage that it is
much simpler. Much like a straight line segment can approximate any
smooth function in a limited region around an "operating" point.
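The straight-line remark in numbers (just an illustration with an
arbitrary smooth function): the tangent at the operating point matches
the function nearby and drifts away from it farther out.

  import math

  def f(u):
      return math.exp(u)               # an arbitrary smooth "world" function

  u0 = 0.0                             # operating point
  slope = math.exp(u0)                 # derivative of f at u0

  def linear(u):
      return f(u0) + slope * (u - u0)  # tangent-line approximation

  for u in (0.05, 0.5, 2.0):
      print(u, f(u) - linear(u))       # the error grows away from u0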
The great complexity of your program, however, defeats me; I'm not
enough of a mathematician to do a rigorous comparison.
Too bad. But can you do a less rigorous comparison?
(Bill Powers (950928.0900 MDT))
I found that your program was running into computational
oscillations for some lower values of the slowing factor. This was
apparently cured by making the corrections of pmm and pnn slower ...
With that change, your model controls almost exactly as well as mine
at the lower values of "slow" as well as the higher values.
Thanks for the troubleshooting. I delivered my product in haste,
without thorough checks. Can you be more precise about "almost
exactly as well"? Do you mean that your program now performs better?
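For what it is worth, the kind of change you describe can be sketched
generically (hypothetical names and numbers, not the actual program):
apply only a fraction of each computed correction to the covariance
terms per step, which damps this kind of numerical oscillation.

  # Generic sketch of "making the corrections slower": scale each update
  # of a covariance-like term by a factor below one.
  slow = 0.1                           # slowing factor for the corrections
  pmm, pnn = 1.0, 1.0                  # covariance-like quantities

  def slowed(p, correction, factor=slow):
      """Apply only a fraction of the computed correction per step."""
      return p + factor * correction

  pmm = slowed(pmm, -0.4)              # instead of pmm += -0.4
  pnn = slowed(pnn, +0.2)              # instead of pnn += +0.2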
Greetings,
Hans