[From Bill Powers (970624.0645 MDT)]
Hans Blom, 970624--
>You capture the essential difference very well. In systems science,
>"goals" (reference levels) are, if they are considered at all,
>emergent phenomena, emerging from mutually opposing "forces" or
>"tendencies". In PCT it is the "goals" that are primary and the
>equilibria that "emerge". It appears to me that both cybernetics and
>PCT are concerned with the same things, but look at those things from a
>different "level" (ordering of levels?).
The essential point that is missed in what you call "systems science" is
that a control loop is NOT the same as two mutually opposing forces or
tendencies. Suppose you run water into a bucket with a small hole in its
bottom. The water level will rise until the pressure at the hole increases
enough to cause an outflow equal to the inflow. This is a pure equilibrium
system: causation runs from inflow to outflow.
But now add a control loop to this picture. Use a sensor which detects the
level of water without affecting it -- a small float, for example, or an
electronic scale that weighs the bucket. Compare the signal from the sensor
with an adjustable reference signal. Amplify the output signal to run a
motor and connect the motor to adjust the diameter of the hole in the
bottom of the bucket. Suddenly there is no longer the same equilibrium
point. By altering the reference signal, you can now cause the water level
to remain at any level you want, even if the inflow changes. The same
equilibrium process exists as before: the water level will rise or fall
until inflow equals outflow. But one of the factors influencing water level
is now very strongly affected by feedback from the sensor, so the
uncontrolled factor, the inflow, no longer can alter the water level (over
some range of inflow). Alternatively, you could connect the motor so it
varies the inflow; then changes in the size of the hole in the bottom of
the bucket would no longer be able to affect the water level, over some
range of sizes of the hole. And in both cases, if someone arbitrarily adds
water or removes some, the control system will quickly restore the original
water level -- as long as the reference signal remains the same. The
control system acts _by adjusting the equilibrium point of the passive
equilibrium system_.
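The bucket example can be put in code. The following is only a sketch of the argument, not anything from the letter: Torricelli-style outflow through a hole, Euler integration, and an optional integrating "motor" that adjusts the hole area so the sensed level matches a reference. All constants (outflow coefficient, motor gain, minimum hole area) are invented for illustration.

```python
import math

def simulate(inflow_fn, ref=None, t_end=300.0, dt=0.01):
    """Bucket with a hole in the bottom (illustrative constants).
    ref=None: passive equilibrium, fixed hole area.
    ref=x:    a sketched control loop; a 'motor' integrates the error
              between a level sensor and the reference, adjusting the
              hole area."""
    level, area = 0.0, 1.0           # water level and hole area (arbitrary units)
    K_OUT, K_MOTOR = 1.0, 1.0        # outflow coefficient, motor (integrator) gain
    t = 0.0
    while t < t_end:
        outflow = K_OUT * area * math.sqrt(max(level, 0.0))
        level += (inflow_fn(t) - outflow) * dt
        if ref is not None:
            error = ref - level                          # sensor vs. reference
            area = max(0.05, area - K_MOTOR * error * dt)  # too full -> open hole wider
        t += dt
    return level

# Passive bucket: the equilibrium level depends entirely on the inflow.
print(simulate(lambda t: 2.0))             # settles near 4.0
print(simulate(lambda t: 4.0))             # settles near 16.0
# Controlled bucket: the level sticks to the reference despite the inflow.
print(simulate(lambda t: 2.0, ref=3.0))    # settles near 3.0
print(simulate(lambda t: 4.0, ref=3.0))    # settles near 3.0
```

The passive equilibrium moves with the inflow; the control loop, by adjusting the hole area, moves the equilibrium point to wherever the reference says, which is exactly the relation described above.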
"Systems science" writers seem unaware of this difference between passive
equilibrium systems and active control systems.
>>This distinction is the essence of PCT. It is the reference signal
>>of a control system that gives it an _intrinsic_ purpose, as opposed to
>>a purpose that is only a use to which it could be put.

>You are right: this is the control theory perspective. When we
>construct a control system, we want to be able to specify the goal,
>from which the design follows. The design's goal is then to realize
>the specified goal as well as possible, and that normally implies a
>high "loop gain" (accurate realization of the goal).
The design has no goal; it is the designer who has the goal of seeing the
control system behave in a certain way. The control system, if it does not
meet the designer's goal, will do nothing to change its own design until
the goal is met. It is the designer who will change the control system.
Adaptive systems of the MCT type have been given (by the designer) the
ability to perceive aspects of their own performance -- that is, a new
control system has been added to those that already exist, which perceives
some consequence of the operation of the other control systems inside the
whole system. A reference signal is supplied to specify the (designer's)
intended state of this perception of performance, and the error is used to
alter the parameters of the other control systems until the reference
signal is matched by the perceived performance. Of course the control
system that is doing this does not modify its _own_ parameters -- it
modifies the parameters only of the _other_ control systems. If the
adaptive system is not adapting properly, the designer will see that it is
not satisfying the designer's goal, and will have to alter the adaptive
part of the system. And the adaptive part does not set its own reference
levels; the designer decides what is to be considered optimum performance
and sets the reference levels accordingly.
>Yet, this control system point of view leads to absurdity if the loop
>gain is low. A (much simplified) example: it seems that our human blood
>pressure "control system" has a "loop gain" of about 3. Say my blood
>pressure is 120 mm Hg. A rapid calculation shows that the "goal" of the
>system would be to have the blood pressure at 4/3 of the equilibrium
>value, that is at 160 mm Hg. Such a blood pressure is actually quite
>unhealthy. So we are "lucky" that the control system has a gain of only 3:
>if it had a much higher gain (something recommended in PCT and, of course,
>for many artificial control systems) we wouldn't be nearly as healthy as
>we actually are due to the control system's "deficiency".
There is another, and I think more convincing, interpretation of these
facts: blood pressure itself is not the only variable under control. Why is
there blood pressure at all? In order to move blood through the circulatory
system, to carry nutrients and oxygen to the tissues, move waste products
to the organs that get rid of them, and to get rid of excess heat. It is
more important that these factors be controlled than it is to control blood
pressure. In fact, for these factors to be controlled against disturbances,
blood pressure must _vary_. The blood pressure system might actually
involve very tight control, but its reference level may be variable
according to the _effects_ the blood pressure is having -- for example,
affecting the blood supply to the brain. In other words, the blood pressure
control system may be subordinate to other control systems.
There are clearly hierarchically-related control processes going on here,
with the higher level control systems being concerned with the effects of
heart rate and stroke volume much more than with the blood pressure
required to maintain the flows that are needed. It is possible that blood
pressure per se is not controlled at all -- it may simply be an output
variable that is altered by other systems as a way of controlling more
important variables. Of course there might be some degree of direct control
of blood pressure, because even a gain of 3 is better than no gain at all;
3/4 of the effect of any disturbance of blood pressure would be cancelled.
It doesn't matter if the reference signal is 160 mm Hg, because that reference
signal is varied by the systems regulating the _effects_ of blood pressure.
It is set to whatever level is required to keep those effects under
control. If any "unhealthy" effects were to occur, the reference signal
would be lowered.
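Both numbers here fall out of a minimal static model of a proportional negative-feedback loop. This is my sketch with illustrative figures, not physiology: the steady-state output q of a unit-feedback loop satisfies q = gain*(ref - q) + disturbance.

```python
def steady_state(ref, dist, gain):
    """Static steady state of a unit-feedback proportional loop:
    q = gain*(ref - q) + dist  =>  q = (gain*ref + dist)/(1 + gain).
    Illustrative numbers only."""
    return (gain * ref + dist) / (1.0 + gain)

G = 3.0
# Blom's arithmetic: an observed 120 mm Hg with loop gain 3 implies a
# reference of 4/3 * 120 = 160 mm Hg.
print(steady_state(160.0, 0.0, G))              # -> 120.0
# Powers's figure: at gain 3, 3/4 of a disturbance is cancelled;
# a 20 mm Hg disturbance shifts the output by only 1/4 of 20.
print(steady_state(160.0, 20.0, G) - 120.0)     # -> 5.0
```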
>This resulting absurdity (as in "it is the goal of the blood pressure
>control system to stabilize the pressure at 160 mm Hg") is the major
>reason why physiologists, for example, are naturally reluctant to talk
>about reference levels. Yet talking about loop gains is quite natural to
>them; see Guyton's textbook "Physiology", for example. The same argument
>seems to apply in the context of systems science/cybernetics, where loop
>gains are frequently low as well.
Physiologists would be quite right if they were being reluctant to talk
about _fixed_ reference levels. The absurdity is in trying to analyze a
physiological control system as if it exists in isolation. It is possible
that blood pressure per se may not be under control at all, but only
certain _effects_ of blood pressure. A test to determine loop gain of a
supposed blood pressure control system might actually be measuring the net
gain of several other control systems which vary blood pressure (among
other things) to control something else. Even if it is under control, you
can't measure the true loop gain unless you prevent other systems from
altering the reference signal. When you disturb blood pressure, you are
disturbing blood flow, and thus all the other variables that depend on
blood flow. If you don't understand the systems that are controlling those
other variables, you can't predict how the reference level for blood
pressure, or the signal driving heart rate and volume (if there is no
control system for blood pressure), will vary.
The problem with studying one system at a time is that there are really
many interacting systems, and unless you have some concept of how the whole
collection of systems works, you can't get a true picture of any one of them.
>>The confusion by de Rosnay of equilibrium processes with control
>>processes is a sure sign that he is ignorant of the real properties
>>of living control systems.

>There is nothing that prevents you from calculating reference levels
>in the kind of "equilibrium" systems that cybernetics is concerned
>with. Doing so in low loop gain systems, however, will show up the
>kind of absurdity I mentioned above. The concept of reference levels
>is quite valuable in high gain control systems, not in low gain ones.
Forget about the "absurdity." There is none. Reference levels exist
regardless of the gain: the reference level is that level of input at which
the output is just zero or neutral, and such a level can be defined quite
independently of loop gain. Varying the reference level will vary the
operating point of the control system, period. Even in a control loop with
a gain of only 3, variations in the reference signal will be tracked by
variations in the perceptual signal that are 3/4 as large. That size of
effect can't just be ignored.
Anyway, I suspect that if you measure a loop gain of only 3 in any
physiological control system, you have misidentified the controlled
variable, or have disturbed other systems that are altering the reference
signal of the control system.
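That pitfall can be shown with a toy static model (hypothetical numbers throughout): a very tight inner "pressure" loop whose reference is set by a higher-level system holding flow constant, with flow = pressure / resistance. An experimenter who varies vascular resistance and watches only pressure sees it swing widely and might conclude the gain is low, even though the inner loop tracks its (moving) reference to within a fraction of a percent.

```python
def inner_pressure(r, d, gain=200.0):
    """Tight inner loop: static solution of p = gain*(r - p) + d."""
    return (gain * r + d) / (1.0 + gain)

def outer_sets_reference(R, flow_ref=5.0):
    """Sketch of a higher-level system: it wants flow = flow_ref where
    flow = p / R, so it asks the inner loop for p = flow_ref * R."""
    return flow_ref * R

# Vary vascular resistance R (a disturbance to FLOW, not to pressure):
for R in (20.0, 24.0, 30.0):
    r = outer_sets_reference(R)
    p = inner_pressure(r, 0.0)
    # pressure swings roughly 100 -> 150 while flow stays near 5
    print(R, round(p, 1), round(p / R, 3))
```

The pressure "looks" uncontrolled only because its reference is being varied by the flow-controlling system; clamp that reference and the inner loop's true (high) gain would show.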
>A point of more interest, maybe: at which loop gain do we start to
>speak of a "control system" rather than an "equilibrium system"? Is
>the value of the loop gain what distinguishes between both, or is
>there something else (as well)? Usage of externally supplied energy
>maybe? If so, which kinds of energy are we allowed to consider? Does
>a falling ball consume "potential energy" when it realizes its "goal" of
>seeking the earth's center of gravity?
Loop gain is the critical factor; energetics follow. The loop gain in any
equilibrium system is at most 1, and in all real cases is less than 1. To
get any loop gain greater than 1, it is necessary to have an external power
source. This is because amplification involves an output that contains more
power than the input that drives it; that extra power has to come from
somewhere. It is this extra power that enables a control system to _resist_
disturbances, and _overcome_ resistance when it is given a varying
reference signal.
The falling ball does not involve any circular causation, so it's outside
the purview of this discussion. And even if you manage to see a closed loop
in it, the energy expended by the ball in falling to the ground is equal to
or less than the energy expended to raise it in the first place. Entropy
sees to it that any "loop gain" you can manage to see in the situation is
less than 1.
>In an equilibrium system the equilibrium exists between two (or more)
>"conflicting" tendencies (sorry, I cannot find a better word). In the case
>of the blood pressure control system, for instance, one tendency is the
>blood vessel walls' natural elasticity, which "attempts to" decrease the
>vessel's diameter and thus to increase the blood pressure.
The walls of the blood vessel (if not actively constricted by another
control system) have a diameter that depends on pressure. They don't
"attempt" to do anything. If the pressure rises, they expand.
>The other tendency is due to the baroreceptor control
>system's action, where increasing blood pressure increasingly relaxes the
>smooth musculature that surrounds the vessels, thus "attempting to"
>decrease the blood pressure.
This is what I mean by talking about systems as if they existed in
isolation. Vasoconstriction is also affected by many other control systems;
for example, temperature control systems. Since vasoconstriction affects
blood flow at constant pressure, it affects the flow of nutrients and
oxygen, and the rates of gas exchange in the lungs, and heat transport to
and from the periphery and the brain. All the control systems associated
with those variables also get into the act, among other things contributing
to the reference levels for heart rate and stroke volume. If you could
write the system equations for ALL of the control systems that are
involved, you would find no conflicts.
>PCT would, I guess, consider the
>first tendency passive, the second one active (a "one way" control
>system). Systems science mostly tends to disregard this distinction,
>and attempts to analyze in terms of "tendencies", whether active or
>passive. In the process, it eliminates a lot of problems in cases
>where it is simply unknown -- or considered unimportant -- whether a
>process is active (a "control system") or passive (resulting from
>material properties). For instance: is a cell's maintenance of its
>sodium concentration an active or a passive process? However
>interesting this question is, we would frequently want to skip it and to
>focus our attention on other issues, simply taking for granted _that_ a
>cell's sodium concentration normally does not vary much.
That seems legitimate to me, provided that the sodium concentration does
not actually vary enough to cause significant changes in whatever depends
on it.
>Moreover, an attempt to answer such questions leads one to ever
>deeper levels. A cell's sodium concentration is regulated by sodium
>pumps. But then what regulates the action of the sodium pumps? We
>finally arrive at an explanation in terms of physical forces between
>atoms. But not for all questions is it meaningful to give a solution
>in terms of elementary physical properties.
This is going in the wrong direction for a control-system analysis. The
lower-level variables will be much more variable than higher-level
variables: the lower-level variations are occurring _in order to_ keep the
higher level variables from changing, and they're being caused to vary by
the higher-level systems, not by causes at a still more molecular level.
This is in direct contrast to the usual biological or biochemical approach,
in which the lower-level processes are imagined to control the higher-level
processes. Reductionism doesn't work in control theory.
>Analysis can always be done in terms of equilibria, I guess.
Yes, it can, but when active control is involved, it is a mistake.
>PCT -- or control theory in general -- makes finer distinctions, and that
>may be an advantage. In PCT a passive tendency would be modelled as and
>called an environment function, an active one a control system. And PCT
>even considers equilibria resulting from the interaction of _two_ active
>systems, for which the term "conflict" was invented.
>Sometimes, however, these distinctions are unimportant. I can as
>easily manipulate a control system (say a room thermostat) as a
>passive system (say a ball). And in both cases the ease with which I
>can manipulate an object seems to depend on the complexities of its
>"laws of motion" or, as I would say, on how well my brain can "model"
>that object in terms of how it will behave as a result of my
>manipulations. It is easier to manipulate a thermostat than a human
>being, as it is easier to manipulate a soccer ball than a football or
>rugby ball.
This is true, and the reason it's true has to do with hierarchies of
control. When you manipulate the thermostat, you do so by altering its
reference signal, not by turning the furnace on and off. If you had to
operate the furnace directly, you'd have to take into account every heat
loss and heat gain from every part of the building and from every cause,
and you'd have to understand the physics of heat transport by conduction
and diffusion. In fact, all you have to worry about is your own skin
temperature. When you feel cold, you turn up the thermostat setting, and
the thermostat then automatically takes care of manipulating the furnace
output to keep its own sensed temperature at the specified level. You don't
have to "model" the physics of heating the room at all. That's all done for
you by the thermostat, a lower-level control system as it relates to you.
While the furnace ultimately causes your sensed skin temperature, your
desired skin temperature ultimately causes the furnace's action.
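The two levels in this example can be sketched in code. Everything here is invented for illustration (the comfort reference, the heat-loss constants, the gains); the point is only that the person's loop acts by adjusting the thermostat's reference and never touches the furnace directly.

```python
def run(hours=48.0, dt=0.01):
    """Two-level sketch: a thermostat (bang-bang) controls room
    temperature around its setting; a slow 'person' loop controls
    comfort by nudging only the SETTING. Made-up constants."""
    room, setting = 10.0, 18.0       # room temp and thermostat setting, deg C
    COMFORT = 21.0                   # skin-comfort reference (hypothetical)
    furnace_on = False
    for _ in range(int(hours / dt)):
        # Lower level: thermostat switches the furnace around its setting.
        if room < setting - 0.5:
            furnace_on = True
        elif room > setting + 0.5:
            furnace_on = False
        heat_in = 8.0 if furnace_on else 0.0
        room += (heat_in - 0.4 * (room - 5.0)) * dt   # losses toward 5 C outside
        # Higher level: the person only nudges the setting toward comfort,
        # knowing nothing about furnaces or heat transport.
        setting += 0.5 * (COMFORT - room) * dt
    return room, setting

room, setting = run()
print(round(room, 1), round(setting, 1))   # both end near the comfort point
```

The person's loop converges without any model of the building's physics, because the lower-level loop absorbs all the disturbances between the furnace and the room temperature.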
>All in all, I don't see "confusion" in the Macroscope, just a
>different perspective.
No, not just a different perspective -- what I see there is ignorance of
control theory, and an attempt to handle control processes without
understanding control processes.
>A final question: an _active_ control system can ultimately be
>analyzed in terms of physical forces between atoms, which we normally
>consider to be _passive_. At which level of analysis do we start to speak
>of that system being active? Why? In other words: what is the nature of a
>control system (in contrast to an equilibrium system), apart from control
>being a certain perspective on investigation?
When there is a closed causal loop with negative feedback and a loop gain of
magnitude greater than 1. To get a gain greater than one, many atoms in high
energy states have to
be reduced to low energy states while raising a few atoms to higher energy
states. The energy gain in the closed control loop is obtained at the
expense of a large energy loss in the flow of energy-bearing materials into
the system and the flow of degraded materials out of it.
Of course the _degree_ of control depends on the _amount_ of loop gain. But
the boundary between control systems and passive equilibrium systems is at
a loop gain of 1.
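The "degree of control" point can be stated in one line: in a static negative-feedback loop, the fraction of a disturbance that survives in the controlled variable is 1/(1+G). This one-liner is my summary, not a formula from the letter; it says a gain of 1, the most a passive equilibrium system can manage, leaves half the disturbance uncorrected, while gain 3 leaves a quarter (the 3/4 cancellation discussed earlier).

```python
def residual(G):
    """Fraction of a disturbance left uncorrected by a static
    negative-feedback loop with loop gain G: 1 / (1 + G)."""
    return 1.0 / (1.0 + G)

for G in (0.5, 1.0, 3.0, 10.0, 100.0):
    print(G, residual(G))   # passive systems (G <= 1) leave at least half
```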
Best,
Bill P.