[Hans Blom, 950606]
(Bruce Abbott (950605.1545 EST))
Bruce, this post of yours generated some thoughts that might offer
some additional insight into what the Kalman Filter does:
. . . A control system is, in the broadest sense, any
interconnection of components to provide a desired function.
This statement led me to the following train of thought. I have
always taken it to be the task of the basic PCT controller to bring
about a best match between perceptions AT THE LOWEST LEVEL, i.e. the
intensities that directly impinge upon our sensors, and internal
reference levels. Now that might not necessarily be true, but somehow
I have always thought so.
The "desired function" of a Kalman Filter based controller, on the
other hand, is not to control these lowest level perceptions, but to
control "filtered" or, in PCT terminology, "higher level" percept-
ions, where the filtering process can remove components that are
irrelevant to control from the lowest level perceptions. That would
be very much like a classical PCT controller that inputs a lowest
level perception that is e.g. contaminated somehow by a high fre-
quency sine wave whose origin is not in the outside world. Rather
than relying upon the output function (an integrator) to smooth out
the EFFECT of the extraneous sine wave, it would be better for the
input function to suppress it so that it would not contribute to
control at all. But the consequence would also be that the extraneous
sine wave is not controlled away from the lowest level perceptions.
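
To make the two options concrete, here is a minimal simulation sketch
(Python; every number is invented purely for illustration, this is not
anyone's actual model). In one run the contaminating sine enters the
comparator and only the integrating output function smooths its EFFECT;
in the other run a simple low-pass input function suppresses it before
it can contribute to control:

  import numpy as np

  dt = 0.001                                  # simulation step [s]
  t = np.arange(0.0, 4.0, dt)
  ref = 1.0                                   # internal reference level
  sine = 0.2 * np.sin(2 * np.pi * 5 * t)      # extraneous 5 Hz component, not in the world

  def run(filter_input):
      x, out, lp = 0.0, 0.0, 0.0              # environment variable, output integrator, input low-pass
      xs = []
      for k in range(len(t)):
          raw = x + sine[k]                   # lowest level perception, contaminated
          lp += (raw - lp) * dt / 0.1         # input function: 100 ms low-pass
          p = lp if filter_input else raw     # which perception reaches the comparator
          out += 5.0 * (ref - p) * dt         # integrating output function
          x += (out - x) * dt / 0.05          # environment: 50 ms lag
          xs.append(x)
      return np.array(xs)

  for filt in (False, True):
      ripple = run(filt)[len(t) // 2:].std()
      print("input filtering" if filt else "raw perception ",
            " ripple in x: %.4f" % ripple)

With the input filter the ripple that the loop injects into the
environmental variable x largely disappears, but, as said, the sine
itself is then no longer controlled away from the lowest level
perception.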
Could that be all that our differences of opinion are about: control
of lowest level versus control of higher level perceptions? Does such
a re-interpretation make sense?
As an example, suppose a gasoline engine is used to drive a large
pump. The carburetor and engine comprise a common type of control
system wherein large-power output is controlled with a small-power
input. The carburetor is the controller in this case, and the
engine is the plant. The fuel rate is the control input, and the
pump load is a disturbance signal. The desired plant output, a
certain engine shaft speed, may be obtained by adjusting the
throttle angle [using a calibration curve].
. . . This "calibration curve" gives the engine speed for a given
throttle setting, at constant load on the engine. . . . If the
engine should become untuned (a change in plant) or if the load
should change (a disturbance), the calibration curve would change,
and an 80 degree throttle angle would no longer produce a 2300-rpm
engine speed.
This is a great example of what the Kalman Filter approach in effect
does: as a higher level perception, it computes the (parameters of
the) calibration curve. But it computes those parameters ON LINE, so
when the calibration curve changes, this will be known and can be
used -- as long as there are observations. If not, the old (and now
outdated) calibration curve will still be used.
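
As a sketch of what computing the calibration curve ON LINE could look
like: a recursive least-squares fit of an (assumed) straight-line curve
rpm = a * angle + b, with a forgetting factor so that old observations
fade out. All numbers are invented for illustration; this is not
Bruce's engine and not my demo code:

  import numpy as np

  rng = np.random.default_rng(0)
  a_true, b_true = 25.0, 300.0      # the engine's real curve (taken linear here)

  theta = np.zeros(2)               # on-line estimate of [a, b]
  P = np.eye(2) * 1e4               # parameter covariance: large = "don't know yet"
  forget = 0.98                     # forgetting factor, so the estimate can keep tracking

  for k in range(400):
      if k == 200:                  # the engine becomes untuned halfway
          a_true, b_true = 20.0, 250.0
      angle = rng.uniform(10, 90)                           # throttle setting actually used
      rpm = a_true * angle + b_true + rng.normal(0, 30)     # observed speed, noisy

      phi = np.array([angle, 1.0])                          # regressor
      gain = P @ phi / (forget + phi @ P @ phi)             # RLS gain
      theta += gain * (rpm - phi @ theta)                   # correct estimate with the residual
      P = (P - np.outer(gain, phi @ P)) / forget            # covariance update

      if k in (50, 199, 250, 399):
          print("step %3d   a_hat = %6.2f   b_hat = %7.1f" % (k, *theta))

Because old observations are gradually forgotten, the estimated
parameters move to the new curve within a few dozen observations after
the change -- but only as long as observations keep coming in, exactly
as said above.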
The parameters of the calibration curve can be thought of as "higher
level perceptions". Bill Powers, do you agree?
(Bill Powers (950601.1120 MDT))
A very thoughtful post, that on first impression looks absolutely
right. I'll check all the details at a later time...
You have an idea:
... instead of acting to reduce the difference between an internal
model's output and the value of a controlled variable, the Kalman
process would be used to reduce the error signal of the control
system (by altering parameters in the output function).
The basic output function ... would be of a form that generally
resembles the complex complement or inverse of the environmental
feedback function, rather than that function itself.
In theory, that is perfectly possible: a "best" inverse of the
environmental feedback function could well be computed. In practice,
however, this might prove to be very difficult or impossible,
especially when the environmental feedback function contains a
significant delay; its inverse would then need to contain a perfect
predictor and thus is not physically realizable. A forward model does
not have those problems: a model of something physically existing IS
physically realizable.
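
A small sketch of that asymmetry, assuming (my numbers) a first-order
world with a pure transport delay of five steps:

  import numpy as np

  delay = 5                 # transport delay of the "world", in steps
  a, b = 0.9, 0.1           # world: y[k+1] = a*y[k] + b*u[k-delay]

  def forward_model(u):
      """Physically realizable: predicts the output from past inputs only."""
      y = np.zeros(len(u))
      for k in range(len(u) - 1):
          u_delayed = u[k - delay] if k >= delay else 0.0
          y[k + 1] = a * y[k] + b * u_delayed
      return y

  # The exact inverse would have to produce, at time k, the input that yields a
  # desired output at time k + delay + 1, i.e. it would have to predict the
  # future perfectly -- which is why it cannot be built, while the forward
  # model above can.
  print("steady state prediction:", round(float(forward_model(np.ones(50))[-1]), 3))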
I am sure this application of Kalman filtering must have occurred to
someone already. Any comments?
This is a fine approach if the response of the "world" (the
environmental feedback function) is invertible. Frequently, alas, it
is not.
... a "controller" is a (good) controller only in particular
environments. Or, equivalently, whether a system is a (good) con-
troller depends on the environment in which it finds itself.
This is true, but some controllers are able to work without change
in a wider range of environments than others.
Sure. The technical term is "robust". Some controllers are more
robust than others. The type that you mostly use is a PI-controller:
the I stands for the integrator in your output function, and the P
comes into effect at low frequencies through the integrator's "leak".
The literature (and my own research) suggests that such a PID-type
controller continues to function well if the parameters of the world
(delay time, gain, dominant time constant) change by a factor of at
most 2 to 3. More than that, and control becomes too sluggish or too
oscillatory. An optimal controller, on the other hand, is much less
robust: a change of the world-parameters by 30% might already be
disastrous. This changes, of course, if the optimal controller can be
made adaptive: that may make it as robust as desired.
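
To give a feel for those numbers, a rough sketch with an invented world
(first-order lag plus dead time) and a fixed "PI-like" output function,
i.e. an integrator with a small leak. Only the world's gain is varied
here; the controller is not retuned:

  import numpy as np

  def simulate(world_gain, k_out=1.5, leak=0.02, tau=1.0, delay=0.2, dt=0.01, T=40.0):
      n = int(T / dt)
      buf = [0.0] * int(delay / dt)                 # transport delay in the environment
      x, out, ref, errs, peak = 0.0, 0.0, 1.0, [], 0.0
      for _ in range(n):
          err = ref - x
          out += (k_out * err - leak * out) * dt    # leaky integrator ("PI-like") output
          buf.append(out)
          x += (world_gain * buf.pop(0) - x) * dt / tau
          errs.append(err)
          peak = max(peak, x)
          if abs(err) > 1e3:                        # the loop has gone unstable
              return float("inf"), float("inf")
      rms = float(np.sqrt(np.mean(np.square(errs[n // 2:]))))
      return rms, peak - ref

  for g in (1.0, 2.5, 8.0):
      rms, overshoot = simulate(g)
      print("world gain %.1f:   rms error %.3f   overshoot %.2f" % (g, rms, overshoot))

At a gain change of about 2.5 the loop still controls, but visibly
more oscillatory; well beyond a factor 3 it does not control at all
any more.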
On open loop control:
If the properties of the environment are well-enough known,
including effects of disturbances and forms of functions that lie in
the environment between the transfer function and the result, and if
those properties are taken into account in the transfer function,
this system [open loop "controller"] can in principle control the
result perfectly. However, the slightest change in any aspect of the
environment or in the transfer function will directly affect the
result, making it different from the desired result; control will
disappear.
Now make this open loop "controller" adaptive, so that it continuously
updates the "calibration curve" (or its equivalent for a dynamic
system): control will reappear.
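
Continuing the engine example with another rough sketch (again, all
numbers invented): an open loop "controller" that simply inverts its
current estimate of the calibration curve to pick the throttle angle,
while a small on-line correction keeps that estimate up to date. It
starts at the quoted operating point (80 degrees, 2300 rpm); at step 20
the engine becomes untuned, the speed drops, and then recovers as the
curve is re-estimated. With the adaptation switched off, the angle
would stay at 80 degrees and the error would persist forever:

  import numpy as np

  rng = np.random.default_rng(1)
  a_true, b_true = 25.0, 300.0     # the real engine: rpm = a*angle + b (taken linear)
  theta = np.array([25.0, 300.0])  # calibration curve the controller believes in: [a, b]
  target = 2300.0                  # desired engine speed [rpm]

  for k in range(60):
      if k == 20:                                        # the engine becomes untuned
          a_true, b_true = 20.0, 250.0
      angle = (target - theta[1]) / theta[0]             # open loop: invert the believed curve
      rpm = a_true * angle + b_true + rng.normal(0, 10)  # what the engine actually does

      phi = np.array([angle, 1.0])
      err = rpm - phi @ theta                            # observation minus prediction
      theta += 0.5 * err * phi / (phi @ phi)             # normalized LMS update of the curve

      if k in (0, 19, 20, 25, 40, 59):
          print("k = %2d   angle = %5.1f   rpm = %7.1f" % (k, angle, rpm))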
Now consider an operational amplifier as an element of an analog
computer:
                          -----[feedback element]-----
                          |                          |
                          |   |\                     |
input ---[series element] --->|- \___________________v____ output
                          --->|+ /
                          |   |/
                         ref
If you want to compare this op amp diagram with the functioning of an
organism, you should note that the feedback element models the
environment, which will generally not be a simple resistor. It might
contain a delay, an integrator, a varying gain or what have you.
Replace the feedback element by a complex impedance and calculate
under what conditions you still have a control system: only under
severe restrictions...
Things can be improved, however, if it is within your power to make
the series element a complex impedance as well, one whose
characteristics you can change. If so, you have implemented adaptive
control.
... The real point is that this "control system" manages to keep
its controlled variable, the voltage at its negative input,
precisely matching the voltage at the reference input in all these
cases, with no adaptation or other adjustment to compensate for the
radical changes in the series and feedback elements.
Analysis will show you that this is not true. A strict relationship
must exist between the series element and the feedback element if the
diagram above is to represent a control system rather than an
oscillator.
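
A crude numerical illustration of that relationship, treating the
diagram as a generic loop with the series element in the forward path
and the feedback element in the return path (all values invented):
amplifier gain 1000, feedback element consisting of two 10 ms lags.
With a fast series element the circuit is an oscillator; only a
sufficiently slow ("dominant pole") series element turns it back into
a control system:

  def simulate(series_tau, T=2.0, dt=1e-4):
      ref, out = 1.0, 0.0
      f1 = f2 = series = 0.0
      for _ in range(int(T / dt)):
          f1 += (out - f1) * dt / 0.01                          # feedback element, first 10 ms lag
          f2 += (f1 - f2) * dt / 0.01                           # feedback element, second 10 ms lag
          series += ((ref - f2) - series) * dt / series_tau     # series element, a single lag
          out = 1000.0 * series                                 # the amplifier's large gain
          if abs(out) > 1e6:
              return "oscillates and diverges"
      return "settles near %.3f" % out

  print("fast series element (1 ms):", simulate(0.001))
  print("slow series element (30 s):", simulate(30.0))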
And it does this with absolutely no advance information about what
voltage waveform will be applied to the input. In fact, that is the
point of this analog computing component: it will compute a
particular function of ANY input waveform, the value of the function
being continuously represented as its output voltage.
This is not true either. It may be that the response to certain
waveforms (say DC) is perfectly all right, whereas the response to
other waveforms (say square waves) shows tremendous over- or
undershoots. Have you ever calibrated the input section of an
oscilloscope?
... if you believe that there is NO kind of control system that
can resist disturbances without such an explicit model of them, I
wouldn't be able to help wondering if you have ever actually
understood negative feedback control.
I think that you might have lost sight (literally) of the general
problem through your ubiquitous use of an integrator in your output
function and the smoothing that it performs. But an integrating
controller can only "live" (control well) in certain restricted
environments -- and I doubt whether those environments (a unit
transfer function/environmental feedback function) are realistic
enough, as you attest yourself in a previous post:
If the environmental feedback function is something other than a
constant of proportionality, then we can't necessarily use the same
form of the controller function, because of the system dynamics.
You may want to check how well (white) noise can be "resisted" in the
case of an op amp. I have redrawn your diagram and added the noise as
an extra voltage source in series with the feedback element:
                                              noise
                                                |
                                                v
                                              -----
                          --[feedback element]| + |---
                          |                   -----  |
                          |   |\                     |
input ---[series element] --->|- \___________________v____ output
                          --->|+ /
                          |   |/
                         ref
The qualification "can resist" ought to be quantified. I maintain
that the better a disturbance can be modelled, the better it can be
resisted. Since white noise cannot be modelled, it is the base line.
If part of the noise can be modelled and used for control purposes,
this would enable us to place an extra (modelled noise) voltage
source in series with the series element, which partially compensates
the world-noise.
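
To put rough numbers on "can resist", here is a sketch (invented
values) in which the disturbance is white noise -- the unmodellable
base line -- plus a slow sine wave that we pretend CAN be modelled.
Feeding the modelled part in through an extra source alongside the
output function removes most of its effect on the controlled quantity;
the white part remains in both cases:

  import numpy as np

  rng = np.random.default_rng(2)
  dt = 0.001
  t = np.arange(0.0, 20.0, dt)
  white = rng.normal(0.0, 0.1, len(t))            # unmodellable white noise: the base line
  slow = 0.5 * np.sin(2 * np.pi * 0.5 * t)        # the part of the "noise" we can model

  def run(use_model):
      x, out, ref, devs = 0.0, 0.0, 1.0, []
      for k in range(len(t)):
          xt = x + white[k] + slow[k]                  # controlled quantity, disturbed
          out += 3.0 * (ref - xt) * dt                 # integrating output function (feedback)
          u = out - (slow[k] if use_model else 0.0)    # extra "modelled noise" source
          x += (u - x) * dt / 0.05                     # rest of the world: 50 ms lag
          devs.append(xt - ref)
      return float(np.std(devs[len(devs) // 2:]))

  print("feedback only             :", round(run(False), 3))
  print("feedback + modelled source:", round(run(True), 3))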
(Bill Leach 950601.21:46 U.S. Eastern Time Zone)
... convinced that some of the most fundamental ideas implemented
in your model probably are present in living systems ...
Yes, I think so too. In a primitive way, this is an approach to
modelling memory and what it is for.
... but also that it is highly unlikely that anything even remotely
similar to the methodology exists.
I'm not so sure that the basic properties are so different. But I
agree that my guesses aren't worth much. They do, however, for me at
least, give pointers into possible areas for research.
It is upon you (or someone else for that matter) to first identify a
behavioural phenomenon that can not be explained using either a
linear or non-linear straight closed loop feedback control system
_as_ extended by HPCT. I believe that this task is insurmountable
at the present state of knowledge concerning HPCT.
See above. At the very least I think that I have demonstrated a
method to implement the (soft) switching in and out of the
"imagination connection" through the manipulation of the single term
pvv.
Too bad that you cannot play around with my demo...
One particular disturbance that is quite common for living systems
(particularly human) is for the system to encounter another system
controlling some aspect of the same CEV. This would be an unmodeled
dynamic (initially for certain) but could remain unmodeled as each
system modified its own control methods to obtain improved control
of its own aspect.
Yes, cooperation between humans requires a model of the other.
Sometimes a generalized model will do, where the assumption is that
all humans are/act/respond alike. I have discovered that sometimes,
however, a particular person must be modelled in extreme detail in
order not to incur her wrath ;-).
Without trying to quote some more of your text... are you sure that
you could design a model based controller for one of the simpler PCT
tests and obtain similar results (including the sorts of changes
made to the typical experiment)?
Yes. Regrettably (for Bill Powers) the very exact specifications that
would be required to establish the QUALITY of control would result in
an equally exact model specification. And since optimal controllers
directly tune their parameters in such a fashion that optimal control
(i.e. control that could not be better given the control criterion
and the model) results, no other controller could be better.
But -- in the context of this discussion list -- such a contest would
be kind of silly: we are trying to establish correspondences between
how organisms and artificial controllers function, not which control
system designer builds the better controller given some artificial
criterion.
(Bill Powers (950604.0530 MDT))
A beautiful abstract! You get to the core and understand it, I think.
I'll check some details later...
Note that the value of pvv thus acts as a switch: pvv = 0 means
"control the actual value of xt" and pvv > 0 means "control the
output of the world model." This switch is actuated by an outside
intelligence.
This is the switch that toggles what you call the "imagination
connection". It is, however, a "soft switch", so that control can be
PARTIALLY based on both internal prediction and external observation.
That is how we humans often operate, I think: we have a coarse inner
model and use our observations only to fill in the moment-to-moment
details.
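
A scalar caricature of that soft switch (my numbers and
simplifications, not the code of the actual demo): the Kalman gain
p / (p + pvv) determines how much of each new observation y is blended
into the model value x. With pvv = 0 the model is slaved to the
observations and disturbances of the real xt are resisted; with pvv
huge (Bill's 10,000) the system runs on its inner model and xt drifts
away unresisted; in between we get the partial mixture described
above:

  import numpy as np

  rng = np.random.default_rng(3)

  def run(pvv, steps=200):
      xt, x, p = 0.0, 0.0, 1.0           # real value, model value, model uncertainty
      xopt = 1.0                         # reference
      errs = []
      for _ in range(steps):
          u = xopt - x                              # act so that the MODEL reaches xopt
          xt += u + rng.normal(0.0, 0.05)           # the real world, with its disturbance
          x += u                                    # the world model's prediction
          p += 0.05 ** 2                            # prediction uncertainty grows

          y = xt + rng.normal(0.0, 0.02)            # observation of xt, with sensor noise
          k = p / (p + pvv) if pvv > 0 else 1.0     # Kalman gain: the "soft switch"
          x += k * (y - x)                          # blend the observation into the model
          p *= 1.0 - k
          errs.append(xt - xopt)
      return float(np.sqrt(np.mean(np.square(errs[steps // 2:]))))

  for pvv in (0.0, 1.0, 10000.0):
      print("pvv = %7g   rms of (xt - xopt) = %.3f" % (pvv, run(pvv)))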
4. Independent disturbances applied to both y and xt. All relevant
parameters manually set to agree with disturbance amplitudes.
Now the control system makes x = xopt, and xt = xopt + (disturbance
of xt). There is no resistance to disturbances of xt. The model
output x follows the reference signal xopt, but the real system
output does not.
This is the most complex situation: the model attempts to discriminate
between what is "real" (in the outside world) and what is due to bad
sensors (inner defects). Sometimes such a discrimination is not
possible, e.g. when both disturbances covary. Much depends on the
amplitude of both disturbances and on the assumptions (pvv and pff)
about them. The model output x follows the reference signal xopt, but
the real system output follows something in between xt and y
(anywhere from 0 to 100% of the way).
The critical factor in this model is the manual setting of pvv to
tell the model whether variations in y that are uncorrelated with
xopt represent real variations in xt or are only noise in the input
function.
With pvv set nonzero, the model ignores the uncorrelated variations.
When pvv is set to zero, the model changes to an ordinary feedback
control system that resists disturbance-induced fluctuations in y
(and thus in xt) without assistance from the world-model. The only
effect of the world model (due to the way the output u is computed)
is to vary the gain of the external loop.
Yes, that's why I called it the "imagination connection" switch.
In fact, d(t) is a subtractive term in u, which simply cancels the
effect of the modeled d(t) on the world-model. At the same time,
since u is also an input to the real system, the modeled d(t)
cancels the real d(t) that is disturbing xt.
This shows that we have here not a feedback control model but a
compensating model. Somehow the controller must generate a d(t)
signal that is equal and opposite to the disturbance that is
affecting the controlled variable.
Yes. See my discussion about noise compensation in the op amp diagram
above.
This is essentially W. Ross Ashby's concept of a regulator,
described in _Design for a Brain_.
Thanks for the reference. I'll look him up.
If your control system had a sensor (or any other process) that
could represent the real d(t) as a d(t) inside the world-model, you
would have essentially Ashby's regulator. From what you have said so
far, this is essentially your concept of how all control works,
because you have said that ALL control systems required a model of
the disturbance in order to resist it. The only reason for needing a
model of the disturbance is in order to output a compensating effect
that will cancel the effect of the disturbance on the controlled
variable. Please tell me if I am assuming correctly.
Now you confuse/conflate two things. First, I do not claim that ALL
control works through adaptation of parameters in an explicit
internal model. Sometimes the control parameters can be chosen
_a priori_ and remain fixed, yet result in good enough control. But
only when the "world" does not change too much.
That is the other discussion: if the "world" changes significantly,
the controller must change significantly as well, i.e. it must be
adaptive. Refer back to my discussion about the op amp, where the
series element must change when the feedback element changes in order
for control to remain of sufficient quality. It is in this latter
sense that I say that any controller must have "knowledge" about (an
internal model of) the environment in which it operates.
(Bill Powers (950605.1415 MDT)) responding to (Martin Taylor,
950605.1100)
Martin:
My point was that there is a conceptual distinction between "having
a value of zero" and "having an unspecified value," and that the
"standard model" control loop has no way for that distinction to be
represented, either in the PIF or elsewhere.
Bill:
That's true, and neither does Hans' model. When the input is lost
in Hans' model, the expected variance pvv of the perceptual
variable y has to be set extremely large (10,000), to stop the
Kalman filter from continuing to make adjustments based on a
spurious value of y.
Isn't that exactly the CONCEPTUAL distinction between "having a value
of zero" and "having an unspecified value"? When pvv = 10,000, this
says that y may be in error by +/- 100 (the standard deviation being
the square root of pvv), so y is pretty much unspecified. If you find
the "extremely large" objectionable, manipulate 1/pvv rather than
pvv, and a zero will then mean "fully unspecified".
Greetings,
Hans