[Hans Blom, 950831]
(From Rick Marken (950830.0830))
> I think it is this attitude that nearly killed the patient who was
> attached to your model-based blood pressure control system.
Accidents happen where the rubber meets the road, you know. Even
though you think a world-model is accurate and complete, it never is.
My world-model approach explains that as well, doesn't it, as you so
astutely noticed in my model-based control demo. And are you sure
that it was an ATTITUDE that nearly killed? Please explain what an
attitude is (this is not teasing; the question goes deeper than you
might think).
> A scientific theory can be completely internally consistent (no
> conflicts in the model) and completely wrong (inconsistent with
> external reality).
Huh? Wrong? Maybe inapplicable in the real world. But that has been
said of a great many scientific theories, initially.
And, pretty please, tell me how I can get to know "external reality",
so that I can verify that my theory is consistent with it.
> Bill Powers showed that your model-based controller controls no
> better (and in most cases far worse) than a simple perceptual
> control model of the same situation; ...
He did no such thing. He demonstrated that if the world-model doesn't
correspond to the world, control deteriorates or becomes impossible.
That is completely in line with my findings.
>> I am aware that, as any model, it is incomplete and susceptible to
>> change at any time upon new insights (perceptions).
> I guess "change" doesn't include "rejection".
Maybe, but I'm not sure whether that is humanly possible. It seems as
if, once a model has converged, it is very difficult to let go of it,
especially when one disregards outliers -- perceptions that are not
predicted by the model. I hope that I am not so narrow-minded.
> I have reported experiments on "predictive control" where the
> behavior is explained perfectly by a perceptual control system;
> apparent "anticipation" (cursor "leading" target) occurs when the
> perception controlled includes the derivative of target position.
Not "perfectly". Derivatives are a good start. Can we do even better?
Let us strive for perfection, not assume that we are there already.
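For the record, the derivative effect itself is easy to reproduce. What
follows is my own minimal Python sketch, not Rick's actual demo; the
sine-wave target, loop gain, and derivative weight `k_deriv` are all
illustrative assumptions:

```python
import math

def phase_metric(k_deriv, steps=4000, dt=0.01, gain=50.0, freq=1.0):
    """One-level perceptual control loop tracking a sine-wave target.

    The controlled perception is cursor - (target + k_deriv*d_target):
    target position plus a weighted derivative of target position.
    Returns a phase metric computed over the last target cycle:
    positive means the cursor LEADS the target, negative that it lags.
    """
    w = 2 * math.pi * freq
    cursor, prev_target, metric = 0.0, 0.0, 0.0
    last_cycle = int(1 / (freq * dt))           # samples in one period
    for i in range(steps):
        t = i * dt
        target = math.sin(w * t)
        d_target = (target - prev_target) / dt  # crude derivative
        prev_target = target
        perception = cursor - (target + k_deriv * d_target)
        error = 0.0 - perception                # reference is zero
        cursor += gain * error * dt             # integrating output
        if i >= steps - last_cycle:
            # correlate cursor with cos(w*t); for cursor ~ sin(w*t+phi)
            # this sum is proportional to sin(phi)
            metric += cursor * math.cos(w * t)
    return metric
```

With `k_deriv = 0` the metric comes out negative (the cursor lags the
target); with a modest positive `k_deriv` it comes out positive -- the
cursor "leads", with no mechanism beyond the choice of controlled
perception.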
> ... you assume that a controller must have a model of the feedback
> function in order to control.
I have said, several times -- and that surely isn't new for you --
that a controller must, in some ways, be matched to the thing that it
controls, if the controller is to be stable and control well. Now the
problem is how this matching comes about. Is it a creator (designer)
that does this, or can an innate (built-in) adaptation system provide
it? I think our difference in viewpoint is this: whereas you
demonstrated that you can create controllers that can control well, I
showed that it is possible to design a controller that initially
cannot control at all, but that can LEARN to control, and that can
relearn and regain control, even if the world changes in
unpredictable ways. Didn't the fact that such a controller can handle
sign reversals WITHOUT ANY ADDITIONAL MECHANISM impress you in the
least?
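To make concrete what I mean by learning to control, here is a minimal
Python sketch -- not my actual demo program; the one-parameter plant,
the normalized-LMS update, and all constants are illustrative
assumptions:

```python
def run(steps=100, flip_at=20, mu=0.6, g=0.5, ref=1.0):
    """Adaptive model-based control of an unknown one-parameter plant.

    Plant: x[k+1] = x[k] + b*u[k], with b unknown to the controller
    (and reversed in sign partway through the run).  The controller
    keeps a world-model estimate b_hat, computes its action from that
    estimate, and updates it from each observed state change.  Nothing
    extra is added to handle the sign reversal.
    """
    x, b_hat = 0.0, 0.3                     # deliberately wrong model
    history = []
    for k in range(steps):
        b = 1.0 if k < flip_at else -1.0    # the world changes sign
        u = g * (ref - x) / b_hat           # model-based action
        dx = b * u                          # what the world really does
        x += dx
        if abs(u) > 1e-12:
            # normalized LMS: move b_hat toward the observed gain dx/u
            b_hat += mu * (dx / u - b_hat)
        if -0.05 < b_hat < 0.05:            # keep u finite
            b_hat = 0.05 if b_hat >= 0 else -0.05
        history.append((x, b_hat))
    return history
```

The controller first converges on an estimate near +1 and brings x to
the reference; after the reversal the residuals drive b_hat to near -1,
and control is regained without any sign-reversal logic.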
>> I guess that I just want to say that it works, for me, as a high
>> level concept that is able to give an inclusive, though hazy,
>> picture.
> If it works for you, that's great.
Thank you.
> But I'm more interested in models that really work -- in the sense
> that they are "externally" consistent, i.e. consistent with what we
> actually observe.
But don't you see the basic problem? "We" actually observe different
things, due to our differing world-models. We project our model upon
the world, at least a little, all the time. And, since we cannot know
"external reality" except through our models of it, we will never be
able to verify whether a model is "externally" consistent.
Greetings,
Hans