[Hans Blom, 950622]
(Bill Powers (950621.1415 MDT))
I suggest that we end this discussion about my demo of
model-based control, partly because in the meantime we have lost
the interest of most, and partly because too many
misunderstandings still crop up -- and I simply don't have the
time or the inclination to go back to the basics of this approach
to explain everything. I will give some final comments, but first
let me summarize some of the conclusions:
- I have given a counter-example of "control of perceptions" by
demonstrating a controller that does not attempt to bring about a
match between a prescribed state (reference level; xopt) and its
perception (y), but rather between that prescribed state and a
properly filtered version of the noisy perception (model-x); this
filtering is based on knowledge that was collected about the
characteristics of the "world".
- I have demonstrated a mechanism for collecting internal
"knowledge" about the external "world", and a mechanism that can
keep the internal world-model in line with changing
characteristics of the "world".
- I have demonstrated a control method that is based on an
internal world-model, where the world-model AND NOT THE
PERCEPTIONS determines how the control action is computed; the
perceptions determine the control action only INDIRECTLY, because
they determine the world-model.
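To make the mechanism concrete, here is a minimal sketch in
Python. It is NOT the demo's code: the first-order "world"
x(t+1) = a*x(t) + b*u(t) with observation noise, the
normalized-LMS parameter update, the filter gain, and all
constants are illustrative choices of mine.

  import random

  A_TRUE, B_TRUE = 0.9, 1.0   # the "world": x(t+1) = a*x(t) + b*u(t)
  a_hat, b_hat = 0.5, 0.5     # deliberately wrong initial "knowledge"
  model_x = 0.0               # filtered internal state (model-x)
  x, xopt = 0.0, 5.0          # true state and reference level

  for t in range(100):
      # control action computed from the MODEL, not the raw perception
      u = (xopt - a_hat * model_x) / b_hat
      x = A_TRUE * x + B_TRUE * u          # the "world" moves
      y = x + random.gauss(0.0, 0.1)       # noisy perception
      pred = a_hat * model_x + b_hat * u   # the model's prediction of y
      err = y - pred                       # prediction error
      # keep the world-model in line with the "world" (normalized LMS,
      # a stand-in for the demo's estimator)
      denom = model_x * model_x + u * u + 1e-6
      a_hat += 0.5 * err * model_x / denom
      b_hat += 0.5 * err * u / denom
      if abs(b_hat) < 0.05:                # crude guard against blow-up
          b_hat = 0.05
      # "properly filtered version of the noisy perception"
      model_x = pred + 0.5 * err

  # x tracks xopt; the estimates need not equal (0.9, 1.0) exactly
  print(a_hat, b_hat, model_x)

Note that the perceptions y enter only through err, which updates
the model; the control computation itself uses model_x -- the
indirect route described above. Note also that once control is
good, err is small and the parameters stop moving, even if they
are still somewhat wrong.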
On "real" control systems (are mine "unreal"?):
So in practical terms, a real control system will beat the
performance of any compensating system by a large margin, and do
so using cheaper components. The theoretical perfection of the
compensating system is a mathematical fiction, and anyone who
believes such a system can outperform a -- pardon me -- real
control system is only betraying a lack of practical experience.
What is your practical experience with adaptive compensators? You
severely overstate the differences between "controllers" and
"compensators". An adaptive compensator IS a controller.
To me, this example indicates that we ought to find out how
this type of prediction [discovery of regularities and their
use in control] might come about.
But that's an old problem, solved long ago.
Solved? Where? When?
Have there been discussions about the importance of keeping
perceptions limited? I doubt that a subject who is happily
tracking away and hears the exclamation "fire!" coming from
somewhere will control his joystick well from then on. Even
"coffee!" would do it for me ;-).
I think that consideration arises from common sense.
Sure. But can we go beyond common sense?
This is a self-cancelling paragraph. Naturally, if you're not
controlling well, you need to learn.
I said something different: ONLY IF you're not controlling well
CAN you learn. And: If you're controlling well despite a wrong
world-model, the erroneous world-model will not be improved. In
the context of psychology, this has major implications: you
cannot know that your world-view is incorrect until you
experience major control problems (a crisis). Or, the other way
around, experiencing a crisis shows that your world-view is
incorrect.
You've lost me here.
I noticed that I lost you in many places. Sometimes I get the
impression that I come across perfectly, and then again I notice
that I didn't at all. I haven't commented on a great many remarks
where you essentially said something like "yes, but ...".
Sometimes I really cannot understand why you cannot understand,
as in the following:
the following:
Assume that u has been zero for some time, and (therefore) x as
well. Now set u=1 henceforth and see how x changes:
x(t+1) = 0.9 * x(t) + 1
x(0) = 0
x(1) = 1
x(2) = 1.9
x(3) = 2.71
...
approaching 1 / (1 - 0.9) = 10 exponentially
Interesting. I hadn't noticed that. So if you apply a step-
function as xopt, x will follow it in one iteration, but the
real system will approach the value of xopt exponentially?
What I did was to cut out the "world" from the feedback loop to
demonstrate how it behaves by itself when excited with a step
function. That was to show that the "world" of the demo has
dynamics, remember? And to show that the "world" will not
stabilize in one iteration if there is no control. It does NOT
show how the "world" behaves when the feedback loop is closed. So
the correct statements are:
1) if you apply a step-function u in the open-loop case, x will
approach a stable value exponentially.
2) in the closed-loop case, x will follow xopt in one iteration.
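In code, with the demo's a = 0.9 (and assuming b = 1 and no
noise, for clarity), the two cases look like this:

  a, b = 0.9, 1.0

  # 1) open loop: constant input u = 1
  x = 0.0
  for t in range(5):
      print(t, round(x, 2))      # 0, 1.0, 1.9, 2.71, 3.44, ... -> 10
      x = a * x + b * 1.0

  # 2) closed loop: u computed from the (here exact) model
  xopt = 5.0
  x = 0.0
  for t in range(3):
      u = (xopt - a * x) / b     # dead-beat control law
      x = a * x + b * u
      print(t, x)                # 5.0 from the first iteration on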
... The initial assumptions and the stochastic effects can't
explain this result, because the world-model comes to the SAME
final (wrong) values EVERY TIME.
In the case of unmodelled dynamics, yes. That is because the
world-model attempts to subsume those as well as it can, given
its limited number of degrees of freedom.
If you run a large number of trials with different noise
sequences, you will find that the converged model-parameters
have some probability distribution around the world-parameter
value.
No. That is not what happens. Try it yourself and see.
My remark applied to the case of no unmodelled dynamics. If there
ARE unmodelled dynamics, the model-parameters will generally NOT
converge to the world-parameters. I have mentioned that now and
again, I think.
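The distinction is easy to reproduce. Below is a hedged sketch: a
first-order model x(t+1) = a*x(t) + b*u(t) is identified by
ordinary least squares from input-output data; the second-order
term, the noise levels, and the identifier itself are
illustrative choices of mine, not the demo's estimator. With a
matched world the estimates scatter tightly around the true
(a, b) = (0.9, 1.0); with an unmodelled second-order term they
settle on the same biased values in every run.

  import random

  def identify(unmodelled, seed):
      rng = random.Random(seed)
      a, b, c = 0.9, 1.0, -0.2    # c is the unmodelled second-order term
      x = x_prev = 0.0
      sxx = sxu = suu = sxy = suy = 0.0
      for _ in range(5000):
          u = rng.uniform(-1.0, 1.0)   # persistently exciting input
          x_next = a * x + b * u + (c * x_prev if unmodelled else 0.0)
          x_next += rng.gauss(0.0, 0.05)
          # accumulate the normal equations of the first-order fit
          sxx += x * x;  sxu += x * u;  suu += u * u
          sxy += x * x_next;  suy += u * x_next
          x_prev, x = x, x_next
      det = sxx * suu - sxu * sxu
      a_hat = (suu * sxy - sxu * suy) / det
      b_hat = (sxx * suy - sxu * sxy) / det
      return round(a_hat, 3), round(b_hat, 3)

  for seed in range(3):
      print(identify(False, seed), identify(True, seed))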
RE: Anecdote about adaptive blood pressure control system
I was astonished by this tale. I think you gave up far too soon
on a non-adaptive controller ...
Oh no. I spent many years trying out all the methods that I knew
about, and then some more that I found in the literature. All
worked more or less, but none well enough in this critical
situation, where it is not enough to handle 95% of the cases, but
where you need to perform well -- or at least safely -- in ALL
cases.
... or at least on finding a method of adaptation that would
work.
Regrettably, no method of adaptation works if the amplitude of
the unmodelled dynamics is too large. For practical reasons, we
could not give the system eyes to see what the surgeon was doing,
nor a sensor for the degree of anesthesia -- such a thing simply
does not exist. So the perception was necessarily limited to the
invasive arterial blood pressure measurement only.
I'm glad you report this as an _initial_ set of trials, although
it seems to me that using a live patient before you knew that
the system would work was inexcusable. When I worked in medical
physics, I NEVER allowed a doctor to use a system with live
patients until I was sure, through extensive testing, that it
would function properly.
When do you "know" that the system works? That is exactly the
problem that I need to come to grips with. I can assure you that
the testing that I had done before, using simulations based on a
patient-model that was put together on the basis of literature
data and our own animal experiments, had been very, very
extensive. I can assure you also that the "patient" that I talked
about in my anecdote was a pig, not a human. And I can also
assure you that at all times, even in the animal experiments, the
behavior of the controller was monitored very closely by both an
MD and a control engineering student, and that switching back to
manual mode (manual setting of the infusion flow rate) or zeroing
the flow rate could be done by pressing a single key.
Yet there comes a time when all testing -- on simulations and on
animals -- has been done and the system has to be tried out on
the first real patient. That, now as ever, is a very electrifying
period, despite all the safety precautions that exist both in a
software safety shell around the controller and in a tight
protocol of monitoring the system by one or more persons. And
still you can be sure that, despite all testing, something has
been forgotten...
The control system as a whole, including all safety measures, has
been used on some 300 patients so far, without ever developing a
dangerous situation. But when can we be really, really sure that
it is full- and fool-proof? After our 1,000th patient? After
1,000,000 patients? What we DID demonstrate adequately was that
the
system controls better than a well-trained anesthesiologist is
able to do.
I can't think of a better illustration of the inadequacies of
this kind of control model. A properly stabilized negative
feedback control system would NEVER have created this kind of
dangerous problem.
A "properly stabilized" non-adaptive negative feedback control
system couldn't do the job. Moreover, even a human -- which you
tend to think of as a properly stabilized negative feedback
control system -- cannot do as good a job.
Just out of curiosity, what other kinds of control systems have
you designed and operated? I am interested in how this approach
works in other situations.
In the early stages of my career (I'm 51 now), I tried out many
different approaches, including the classical ones that your
simulations are based upon, but a more theoretical interest of
mine has always been how those different approaches are alike and
how they differ. The current version of the blood pressure
controller is a different approach again: basically a
PID-controller, supervised by an expert-system-based adaptation
mechanism and safety shell that incorporates rules that override
the controller in all those situations where a PID-controller
does not function well, for example when the feedback signal is
missing for some (limited) time.
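As a hedged sketch of that supervisory structure (all names,
gains, and thresholds below are illustrative assumptions, not the
published controller):

  U_MAX = 10.0                    # maximum infusion rate, arbitrary units

  class PID:
      def __init__(self, kp, ki, kd, dt):
          self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
          self.integral = 0.0
          self.prev_err = 0.0

      def step(self, setpoint, measurement):
          err = setpoint - measurement
          self.integral += err * self.dt
          deriv = (err - self.prev_err) / self.dt
          self.prev_err = err
          u = self.kp * err + self.ki * self.integral + self.kd * deriv
          return min(max(u, 0.0), U_MAX)   # actuator limits

  class SafetyShell:
      """Rule-based supervisor: pass the PID output through while
      the measurement is valid; hold, then zero, the output when
      it is not."""
      def __init__(self, pid, max_missing_steps=30):
          self.pid = pid
          self.max_missing = max_missing_steps
          self.missing = 0
          self.last_u = 0.0

      def step(self, setpoint, measurement):
          if measurement is None:              # feedback signal missing
              self.missing += 1
              if self.missing > self.max_missing:
                  self.last_u = 0.0            # missing too long: zero flow
              return self.last_u               # briefly missing: hold output
          self.missing = 0
          self.last_u = self.pid.step(setpoint, measurement)
          return self.last_u

  shell = SafetyShell(PID(kp=1.0, ki=0.1, kd=0.0, dt=1.0))
  print(shell.step(setpoint=100.0, measurement=120.0))

The real system adds adaptation rules (retuning the PID gains)
and many more override rules; this shows only the "missing
feedback signal" case mentioned above.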
... When you use an overcomplicated model based largely on the
wrong conception of control, you get poor results and have to
make the system even more complicated to get it to work at all.
Huh? Give me a SIMPLE controller that can do the job and I'll be
eternally grateful! If you are interested in some of the control
problems, look up
J.A. Blom: Expert control of the arterial blood pressure during
surgery. International Journal of Clinical Monitoring and
Computing 8: 25-34, 1991.
Greetings,
Hans