What is a control system?

[From Bill Powers (961016.0600 MDT)]

Hans Blom, 961015 --

The next question is whether both are _control_ systems. Now we don't
have an accurate delimitation of what control systems are -- see numerous
previous discussions -- but here we can assume that a control system is a
system that strives for constancy of some of its internal quantities in
the face of variations in the outside environment.

I have an accurate delimitation of what I mean when I say "control system."
I mean a system that can maintain some aspect of its input (as perceived)
near an arbitrarily specified reference level, despite a significant range
of unpredicted disturbances (which I have also defined). The model you call
a "model-based control system" is not a control system under this
definition, because it can't resist unpredicted (i.e., unmodeled)
disturbances. Such an arrangement might be quite useful, and probably exists
at least at some levels of the human system, but it is not what I mean when
I say "control system."

Note that I am not saying what a control system IS, objectively. I am only
stating as clearly as I can what I mean by that term. When you read anything
I say about control systems, you can be sure I am speaking only of systems
which can behave as defined above. If what I say seems to contradict what
you define as a control system, that is only because I am consistently using
my definition, not yours. Everything I say about control systems, as near as
I can manage, is consistent with my definition. I have offered in the past
to adopt some term other than "control" for the behavior of the kind of
system I have in mind, to avoid confusion with different organizations that
other people sometimes call control systems, but so far this offer has been
voted down.

When you ignore my repeated statements of the above policy, the result can
only be confusion. For example, you have repeatedly insisted that all
control requires prediction of disturbances and that disturbances that are
not predicted cannot be opposed. For a model-based system, that is certainly
true. But it is not true for the kind of system I call a control system. If
I say that control can be achieved without predictions, I mean that the
behavior I call control can be achieved by what I call a control system,
which contains no internal world model between its output and its input. I
do not mean that what you call control can be achieved without prediction in
such a world-model, or that what you call a control system can work without
predictive facilities. Your kind of control system requires the modeling of
disturbances; without that modeling it can work correctly only in a
disturbance-free environment (noise aside), as you have frequently admitted.
But the kind of control system I talk about does not have that limitation.
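
(A compact contrast, again as an illustrative sketch with made-up numbers:
an output computed purely from an internal model, with no comparison
against the real-time perception, passes an unmodeled disturbance straight
through to the controlled quantity, while the feedback loop sketched above
cancels it.)

    # Rough contrast, illustrative numbers only.
    # World: controlled quantity = output + disturbance (disturbance unmodeled).
    reference, disturbance = 10.0, 5.0

    # (a) Purely model-based output: the internal model says "to perceive 10,
    #     emit 10" and never looks at the actual perception.
    model_based_output = reference
    perception_a = model_based_output + disturbance      # ends up at 15

    # (b) Feedback control: integrate the difference between reference and
    #     the actual perception until it is gone.
    output = 0.0
    for _ in range(1000):
        perception_b = output + disturbance
        output += 0.5 * (reference - perception_b)       # integrating output
    print(perception_a - reference, round(perception_b - reference, 6))  # 5.0 vs ~0.0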

Note that prediction per se does not distinguish our conceptions of control.
If a control system (my definition) has a predictor in its perceptual
function, but no other world-model of the environment, then the outcome of
that prediction can be controlled in present time by a model of my type, as
in the example I gave recently of the predictive control used when the
Shuttle approached the Mir space station. Here the prediction occurs in a
different position in the control diagram -- not in an internal feedback
path from output to input, but in series with the input function alone. This
means that unpredicted disturbances of the input can be compensated by
ordinary feedback control, because they show up as deviations of the
predicted perception from the signal specifying the reference level. The
resulting error leads to corrective action even though the cause of the
disturbance (for example, leakage of pressurized fuel from a thruster)
remains unknown.
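
(An illustrative sketch of that arrangement; the names and numbers are
invented and are not the Shuttle's actual control law. The perceptual
function predicts the separation T seconds ahead, and an ordinary feedback
loop controls that predicted perception; drift from an unmodeled leak still
shows up as error and is opposed.)

    # Prediction placed in the perceptual input function, not in an internal
    # output-to-input model. All names and numbers are illustrative.
    dt, T, gain = 0.1, 20.0, 0.02
    reference = 0.0            # want the predicted separation to be zero
    x = 100.0                  # current separation
    u = -2.0                   # commanded closing rate (the output)
    for step in range(4000):
        drift = 0.3 if step * dt > 100 else 0.0   # unpredicted disturbance
        rate = u + drift                          # actual rate of change of x
        predicted_perception = x + rate * T       # predictor in series with input
        error = reference - predicted_perception
        u += gain * error * dt                    # integrating output function
        x += rate * dt
    print(round(x + (u + drift) * T, 2))          # ~0 despite the unknown drift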

You have occasionally referred to my type of control system as a "PID"
(proportional-integral-derivative) control system. That is a red herring.
The particular means used for stabilization is irrelevant; what matters is
that the output is based on the difference between a present-time perceptual
signal and the signal that specifies the desired level of the perception.
This is very different from a system of your type, in which there is never
any direct comparison of the perceptual signal with the reference signal. It
is that direct comparison that allows my type of system to resist
disturbances without having to predict them. And it is the lack of that
direct comparison that prevents your model from doing the same thing.

There are numerous ways of stabilizing a control system; one could even
imagine using a Kalman filter method to do it. But that is a side-issue of
no importance, considering the main difference between our conceptions of a
control system.

I have offered a compromise model which would seem to retain the best
features of both our conceptions. In this model, the comparator can receive
an input signal either from the sensory receptors or from the output of an
internal model, in the position you specify and updated in the same manner
you specify. On loss of input signal, the switch is thrown so that the
internal model is used instead of the real-time perception. Your model, as
it stands now, requires a switch to be thrown when input is lost, to
distinguish loss of signal from a real drop of the signal to zero. So my
compromise requires no more machinery, and no different machinery, than your
model needs. I don't recall any discussion of this alternative by you. Is it
that you don't understand my proposal, or that you don't wish to compromise?
Or is there some serious defect in my proposal that I haven't recognized?
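
(A concrete, hedged reading of the proposed compromise, in Python with
hypothetical names; the updating of the internal model is left out, since
it would proceed exactly as in the adaptive scheme under discussion.)

    # Sketch of the compromise: the comparator receives the real-time
    # perception when a sensory signal is present, the model output otherwise.
    def comparator_input(sensor_value, sensor_ok, model_output):
        """Switch: real-time perception when available, internal model otherwise."""
        return sensor_value if sensor_ok else model_output

    def control_step(reference, sensor_value, sensor_ok, model_output,
                     output, gain=0.5):
        perception = comparator_input(sensor_value, sensor_ok, model_output)
        error = reference - perception
        return output + gain * error      # integrating output, as in the sketches above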

Best,

Bill P.

[Hans Blom, 961023]

(Bill Powers (961016.0600 MDT))

Running far behind with my replies...

You have occasionally referred to my type of control system as a
"PID" (proportional-integral-derivative) control system. That is a
red herring.

Not to me. Because I know how to analyze, construct and tune PID
controllers, and because I recognize the basic PCT controller as one of
that type (or rather a subclass: in your simulations it is usually a pure
I-controller), I also know something about where such controllers are and
are not appropriate, and how they compare to other controller types.

The particular means used for stabilization is irrelevant; what
matters is that the output is based on the difference between a
present-time perceptual signal and the signal that specifies the
desired level of the perception.

The difference between controller types is not that large; other types can
be seen as generalizations of the one you describe. Rather than using _only_
the present-time perceptual signal and the present-time desired level of
the perception, other controller types add to these: besides the
present-time perceptual signal they can use functions of it, such as its
integral, its derivative, polynomials over time, or stored perceptions and
functions thereof such as (cross)correlations. Some controllers are even
robust when perceptions are occasionally missing. And rather than using
just a present-time reference level, some controllers can compute one for a
period of time, as in end-point control. In all these cases one could
consider the system a hierarchical controller, with additional levels and
functions built up above the ground level. But it is a hierarchy different
from the one in HPCT, with different functions, not just a repeat of the
lower levels.
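
(For readers unfamiliar with the jargon, a generic sketch in Python with
illustrative gain names only: a textbook PID output sums the weighted
present error, its integral and its derivative; setting the proportional
and derivative gains to zero leaves the pure integrating controller that
the PCT simulations typically use.)

    # Generic PID output function; kp = kd = 0 reduces it to a pure
    # integrating ("I") controller.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, reference, perception):
            error = reference - perception
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return (self.kp * error
                    + self.ki * self.integral
                    + self.kd * derivative)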

So we can talk about control in general and agree. Or about control
in particular, choose different controller types and disagree about
requirements, cost, optimality and such. No problem, at least not for
me.

I have offered a compromise model which would seem to retain the
best features of both our conceptions. In this model, the comparator
can receive an input signal either from the sensory receptors or
from the output of an internal model, in the position you specify
and updated in the same manner you specify. On loss of input signal,
the switch is thrown so that the internal model is used instead of
the real-time perception. Your model, as it stands now, requires a
switch to be thrown when input is lost, to distinguish loss of
signal from a real drop of the signal to zero.

My model does not have a switch but something akin to a voltage divider: it
always works with both the current-time perception and the current-time
prediction from the internal model. The "voltage divider" is set according
to the expected accuracies. Thus we have the best of both worlds: optimal
use of the sensory information that is there, and optimal use of the
internal predictions, which are always there. The weakness of my model -- in
this context -- is that it does not have a mechanism that can compute the
accuracy of the perceptions. Another weakness is that it cannot distinguish
between inaccurate perceptions and changes in the "laws" or parameters of
the world. These are real problems to which I do not see a ready solution.
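
(One way to render the "voltage divider" in code, as a hedged sketch: the
weight plays the role of an inverse-variance, Kalman-style gain set from
the expected accuracies; and, as just admitted, computing the accuracy of
the perceptions is exactly the open problem.)

    # Blend of the current perception and the current model prediction,
    # weighted by their expected accuracies (expressed here as variances).
    def blended_estimate(perception, prediction, var_perception, var_prediction):
        weight = var_prediction / (var_prediction + var_perception)
        return weight * perception + (1.0 - weight) * prediction

    # weight -> 1 when the sensor is much more accurate than the model;
    # weight -> 0 when the sensory signal is lost (var_perception -> infinity).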

So my compromise requires no more machinery, and no different
machinery, than your model needs. I don't recall any discussion of
this alternative by you. Is it that you don't understand my
proposal, or that you don't wish to compromise?

A switch is a fine first attempt. I myself implemented a switch that resets
the model parameters ("forgetting") and initializes relearning, rather than
tackling the more difficult problem of selective forgetting -- deciding what
needs to be forgotten and what can remain intact. So I have nothing against
switches, except for my opinion that they can only be a crude first
approximation.
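
(A crude sketch of what such a switch does, with invented thresholds rather
than the actual implementation: when the prediction error stays persistently
large, discard the learned parameters and their associated certainty
wholesale and relearn, instead of deciding selectively which parts of the
model are still valid.)

    # Wholesale "forgetting" switch for an adaptive model; thresholds are
    # illustrative only.
    def maybe_reset(params, uncertainty, recent_errors,
                    error_threshold=3.0, initial_uncertainty=1000.0):
        """Reset learned parameters when predictions have gone persistently bad."""
        mean_abs_error = sum(abs(e) for e in recent_errors) / len(recent_errors)
        if mean_abs_error > error_threshold:
            params = [0.0] * len(params)        # discard everything learned
            uncertainty = initial_uncertainty   # be maximally open to new data
        return params, uncertainty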

Go ahead with the implementation of an "imagination connection" and
you'll start modeling the "world"!

Greetings,

Hans