macroscope

[Hans Blom, 970619]

(Ellery Lanier, 970619)

Macroscope looks exactly like what I have been searching for, but
using your http for an hour has failed to bring it up. Could there
be a spelling error? What search engine might I try?

The entry point is:

  http://pespmc1.vub.ac.be/macroscope/

After triple checking, I'm quite sure there is no typing error in
this address ;-). By the way, searching for "macroscope" using the
AltaVista search engine (by far my favorite) gave this address as
number 10 of its returned list, on the first page. Not bad!

Enjoy,

Hans

[From Bill Powers (960719.1609 MDT)]

This is a paragraph from _Macroscope_, by Joel de Rosnay, recommended for
our reading by Hans Blom:


==============================
The introduction of finality (the goal of the system) in
this definition may be surprising. We understand that the
purpose of a machine has been defined and specified by man;
but how does one speak of the purpose of a system like the
cell? There is nothing mysterious about the "goal" of the
cell. It suggests no scheme; it declares itself a
posteriori: to maintain its structure and replicate itself.
The same applies to the ecosystem. Its purpose is to
maintain its equilibrium and permit the development of life.
No one has set the level of the concentration of oxygen in
the air, the average temperature of the earth, the
composition of the oceans. They are maintained, however,
within very strict limits.

I have read quite a bit of this section, and have yet to see any mention of
reference signals. Of course the above paragraph has nothing to do with
reference signals; "goal" and "purpose" as discussed here are simply
metaphysical concepts, without explanatory power. There is no distinction
made between the purpose or goal of any mechanical device and the purpose
or goal of a living system -- that is, of a control system.

This distinction is the essence of PCT. It is the reference signal of a
control system that gives it an _intrinsic_ purpose, as opposed to a
purpose that is only a use to which it could be put. It is only the
reference signal that distinguishes an _outcome_ of behavior from a _goal_
of behavior. The purpose or goal of a system, if it has one, is to be found
through an analysis of the system, not through an inquiry into the effects
of that system on its environment, or through asking what the observer of
the system wants to use it for.

This point has come up a number of times in discussions with Hans Blom.
What seems to be clear from these discussions, and from reading the text
that Hans recommends so highly, is that the PCT understanding of reference
signals and their role in the organization of behavior is simply _not
understood_ in this way of thinking. Goals and purposes remain mysterious
abstractions; the idea that a goal or a purpose could have a physical
existence as a real neural signal is simply missing. And certainly the role
of reference signals in specifying perceptions that are to be brought about
and maintained is missing. The confusion by de Rosnay of equilibrium
processes with control processes is a sure sign that he is ignorant of the
real properties of living control systems.

Best,

Bill P.

Hi Rich,
Many thanks for the cautionary message.
Ellery

[From Rick Marken (970619.2010)]

Ellery Lanier --

Many thanks for the cautionary message.

With Hans loose on the CSGNet, your safety is our first concern;-)

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@leonardo.net
http://www.leonardo.net/Marken

[From Bruce Gregory (970620.1530 EDT)]

Bill Powers (960719.1609 MDT)

This point has come up a number of times in discussions with Hans Blom.
What seems to be clear from these discussions, and from reading the text
that Hans recommends so highly, is that the PCT understanding of reference
signals and their role in the organization of behavior is simply _not
understood_ in this way of thinking. Goals and purposes remain mysterious
abstractions; the idea that a goal or a purpose could have a physical
existence as a real neural signal is simply missing. And certainly the role
of reference signals in specifying perceptions that are to be brought about
and maintained is missing. The confusion by de Rosnay of equilibrium
processes with control processes is a sure sign that he is ignorant of the
real properties of living control systems.

I agree completely, but pick a small nit. As far as I know, the
atmosphere and the oceans are _not_ in equilibrium with their
sources and sinks. (The atmosphere is oxygen-rich and the land
surfaces are sub-oxidized.) These out-of-equilibrium systems are
apparently the result of the actions of living systems. Whether
they are intended results or side-effects is unclear. We seem to
be performing a massive experiment to see if the system
can successfully counter the disturbance. High rollers, we...

Bruce

[From Bill Powers (970620.1454 MDT)]

Bruce Gregory (970620.1530 EDT)--

I agree completely, but pick a small nit. As far as I know, the
atmosphere and the oceans are _not_ in equilibrium with their
sources and sinks. (The atmosphere is oxygen-rich and the land
surfaces are sub-oxidized.) These out-of-equilibrium systems are
apparently the result of the actions of living systems.

I think that's correct. However, if you look up Lovelock's stuff on Gaia,
you'll see that he doesn't understand control systems, either -- he thinks
there's just an equilibrium between oxygen output and CO2 intake. I
corresponded with him for a while, trying to convince him that if every
plant controlled the oxygen level _at its own leaves_, the result would be
strong control of atmospheric O2. But he doesn't believe in reference
signals -- apparently he made a reputation once by inventing a control
system that (he thinks) didn't have any reference signal in it, so from now
on, it's NO REFERENCE SIGNALS.

Best,

[From John Anderson (970622.1700)]

[From Bruce Gregory (970620.1530 EDT)]

I agree completely, but pick a small nit. As far as I know, the
atmosphere and the oceans are _not_ in equilibrium with their
sources and sinks. (The atmosphere is oxygen-rich and the land
surfaces are sub-oxidized.) These out-of-equilibrium systems are
apparently the result of the actions of living systems.

Let me pick an even smaller nit. It sounds like you are saying that the
atmosphere and the oceans are not in equilibrium because, in your
example, the atmosphere is oxygen-rich and the "land surfaces" are not.
But the system could be in equilibrium without the amount of oxygen
being uniform. All that's required is that the rate of oxygen movement
from the surface to the atmosphere be the same as the rate of movement
in the opposite direction. Unless the "land surfaces" just don't have
sites to bind to oxygen, some sort of active processes would be required
to maintain the lopsided equilibrium. As you suggest, these active
processes could certainly be due to living systems.

John


--
John E. Anderson
jander@unf.edu

[From Bruce Gregory (970623.1000 EDT)]

John Anderson (970622.1700)

But the system could be in equilibrium without the amount of oxygen
being uniform. All that's required is that the rate of oxygen movement
from the surface to the atmosphere be the same as the rate of movement
in the opposite direction. Unless the "land surfaces" just don't have
sites to bind to oxygen, some sort of active processes would be required
to maintain the lopsided equilibrium. As you suggest, these active
processes could certainly be due to living systems.

I believe the argument is that the sub-oxidized surface would
remove the oxygen from the atmosphere on a time scale of some
million years. The presence of oxygen over time scales of
billions of years indicates the presence of a non-equilibrium
net source.

Bruce

[Hans Blom, 970624]

(Bill Powers (960719.1609 MDT))

This is a paragraph from _Macroscope_, by Joel de Rosnay,
recommended for our reading by Hans Blom:

... how does one speak of the purpose of a system like the cell?
There is nothing mysterious about the "goal" of the cell. It
suggests no scheme; it declares itself a posteriori: to maintain
its structure and replicate itself. The same applies to the
ecosystem. Its purpose is to maintain its equilibrium and permit
the development of life.

I have read quite a bit of this section, and have yet to see any
mention of reference signals. Of course the above paragraph has
nothing to do with reference signals; "goal" and "purpose" as
discussed here are simply metaphysical concepts, without explanatory
power. There is no distinction made between the purpose or goal of
any mechanical device and the purpose or goal of a living system --
that is, of a control system.

You capture the essential difference very well. In systems science,
"goals" (reference levels) are, if they are considered at all,
emergent phenomena, emerging from mutually opposing "forces" or
"tendencies". In PCT it is the "goals" that are primary and the
equilibria that "emerge". It appears to me that both cybernetics and
PCT are concerned with the same things, but look at those things from
a different "level" (ordering of levels?).

This distinction is the essence of PCT. It is the reference signal
of a control system that gives it an _intrinsic_ purpose, as opposed
to a purpose that is only a use to which it could be put.

You are right: this is the control theory perspective. When we
construct a control system, we want to be able to specify the goal,
from which the design follows. The design's goal is then to realize
the specified goal as well as possible, and that normally implies a
high "loop gain" (accurate realization of the goal).

Yet, this control system point of view leads to absurdity if the loop
gain is low. A (much simplified) example: it seems that our human
blood pressure "control system" has a "loop gain" of about 3. Say my
blood pressure is 120 mm Hg. A rapid calculation shows that the
"goal" of the system would be to have the blood pressure at 4/3 of
the equilibrium value, that is at 160 mm Hg. Such a blood pressure is
actually quite unhealthy. So we are "lucky" that the control system
has a gain of only 3: if it had a much higher gain (something
recommended in PCT and, of course, for many artificial control
systems) we wouldn't be nearly as healthy as we actually are due to
the control system's "deficiency".
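
Hans's arithmetic can be checked directly. A proportional loop with gain
G settles at p = G/(1+G) * r, so an observed equilibrium of 120 mm Hg
and a gain of 3 imply a reference of 120 * (1+3)/3 = 160 mm Hg. A
minimal sketch of that calculation (the formula is the standard
proportional-loop steady state; nothing here is specific to the
physiology):

```python
# Steady state of a proportional control loop (standard textbook
# result): p = G/(1+G) * r, so the reference implied by an observed
# equilibrium p is r = p * (1+G)/G.

def implied_reference(p_equilibrium, loop_gain):
    """Reference level implied by an observed equilibrium value."""
    return p_equilibrium * (1 + loop_gain) / loop_gain

# Hans's numbers: blood pressure 120 mm Hg, loop gain 3.
print(implied_reference(120.0, 3.0))  # 160.0, i.e. 4/3 of the equilibrium
```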

This resulting absurdity (as in "it is the goal of the blood pressure
control system to stabilize the pressure at 160 mm Hg") is the major
reason why physiologists, for example, are naturally reluctant to
talk about reference levels. Yet talking about loop gains is quite
natural to them; see Guyton's textbook "Physiology", for example. The
same argument seems to apply in the context of systems science /
cybernetics, where loop gains are frequently low as well.

The confusion by de Rosnay of equilibrium processes with control
processes is a sure sign that he is ignorant of the real properties
of living control systems.

There is nothing that prevents you from calculating reference levels
in the kind of "equilibrium" systems that cybernetics is concerned
with. Doing so in low loop gain systems, however, will show up the
kind of absurdity I mentioned above. The concept of reference levels
is quite valuable in high gain control systems, not in low gain ones.

A point of more interest, maybe: at which loop gain do we start to
speak of a "control system" rather than an "equilibrium system"? Is
the value of the loop gain what distinguishes between both, or is
there something else (as well)? Usage of externally supplied energy
maybe? If so, which kinds of energy are we allowed to consider? Does
a falling ball consume "potential energy" when it realizes its "goal"
of seeking the earth's center of gravity?

In an equilibrium system the equilibrium exists between two (or more)
"conflicting" tendencies (sorry, I cannot find a better word). In the
case of the blood pressure control system, for instance, one tendency
is the blood vessel walls' natural elasticity, which "attempts to"
decrease the vessel's diameter and thus to increase the blood
pressure. The other tendency is due to the baroreceptor control
system's action, where increasing blood pressure increasingly relaxes
the smooth musculature that surrounds the vessels, thus "attempting
to" decrease the blood pressure. PCT would, I guess, consider the
first tendency passive, the second one active (a "one way" control
system). Systems science mostly tends to disregard this distinction,
and attempts to analyze in terms of "tendencies", whether active or
passive. In the process, it eliminates a lot of problems in cases
where it is simply unknown -- or considered unimportant -- whether a
process is active (a "control system") or passive (resulting from
material properties). For instance: is a cell's maintenance of its
sodium concentration an active or a passive process? However
interesting this question is, we would frequently want to skip it and
to focus our attention on other issues, simply taking for granted
_that_ a cell's sodium concentration normally does not vary much.
Moreover, an attempt to answer such questions leads one to ever
deeper levels. A cell's sodium concentration is regulated by sodium
pumps. But then what regulates the action of the sodium pumps? We
finally arrive at an explanation in terms of physical forces between
atoms. But not for all questions is it meaningful to give a solution
in terms of elementary physical properties.

Analysis can always be done in terms of equilibria, I guess. PCT --
or control theory in general -- makes finer distinctions, and that
may be an advantage. In PCT a passive tendency would be modelled as
and called an environment function, an active one a control system.
And PCT even considers equilibria resulting from the interaction of
_two_ active systems, for which the term "conflict" was invented.

Sometimes, however, these distinctions are unimportant. I can as
easily manipulate a control system (say a room thermostat) as a
passive system (say a ball). And in both cases the ease with which I
can manipulate an object seems to depend on the complexities of its
"laws of motion" or, as I would say, on how well my brain can "model"
that object in terms of how it will behave as a result of my
manipulations. It is easier to manipulate a thermostat than a human
being, as it is easier to manipulate a soccer ball than a football or
rugby ball.

All in all, I don't see "confusion" in the Macroscope, just a
different perspective.

A final question: an _active_ control system can ultimately be
analyzed in terms of physical forces between atoms, which we normally
consider to be _passive_. At which level of analysis do we start to
speak of that system being active? Why? In other words: what is the
nature of a control system (in contrast to an equilibrium system),
apart from control being a certain perspective on investigation?

Greetings,

Hans

[From Bill Powers (970624.0645 MDT)]

Hans Blom, 970624--

You capture the essential difference very well. In systems science,
"goals" (reference levels) are, if they are considered at all,
emergent phenomena, emerging from mutually opposing "forces" or
"tendencies". In PCT it is the "goals" that are primary and the
equilibria that "emerge". It appears to me that both cybernetics and
PCT are concerned with the same things, but look at those things from
a different "level" (ordering of levels?).

The essential point that is missed in what you call "systems science" is
that a control loop is NOT the same as two mutually opposing forces or
tendencies. Suppose you run water into a bucket with a small hole in its
bottom. The water level will rise until the pressure at the hole increases
enough to cause an outflow equal to the inflow. This is a pure equilibrium
system: causation runs from inflow to outflow.

But now add a control loop to this picture. Use a sensor which detects the
level of water without affecting it -- a small float, for example, or an
electronic scale that weighs the bucket. Compare the signal from the sensor
with an adjustable reference signal. Amplify the output signal to run a
motor and connect the motor to adjust the diameter of the hole in the
bottom of the bucket. Suddenly there is no longer the same equilibrium
point. By altering the reference signal, you can now cause the water level
to remain at any level you want, even if the inflow changes. The same
equilibrium process exists as before: the water level will rise or fall
until inflow equals outflow. But one of the factors influencing water level
is now very strongly affected by feedback from the sensor, so the
uncontrolled factor, the inflow, no longer can alter the water level (over
some range of inflow). Alternatively, you could connect the motor so it
varies the inflow; then changes in the size of the hole in the bottom of
the bucket would no longer be able to affect the water level, over some
range of sizes of the hole. And in both cases, if someone arbitrarily adds
water or removes some, the control system will quickly restore the original
water level -- as long as the reference signal remains the same. The
control system acts _by adjusting the equilibrium point of the passive
equilibrium system_.
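
The bucket example can be put in numbers. Below is a toy simulation
(all coefficients invented for illustration) using a linearized
outflow, outflow = c * level: with c fixed, the level settles wherever
the inflow dictates; with a proportional controller adjusting c, the
level settles near the reference whatever the inflow does.

```python
# Toy simulation of the bucket, with a linearized outflow = c * level.
# All numbers are illustrative, not from the original post.

def settle(inflow, c_fixed=None, reference=None, gain=50.0,
           steps=20000, dt=0.01):
    """Euler-integrate the water level until it settles.

    c_fixed:   fixed hole coefficient (passive equilibrium system), or
    reference: desired level; a proportional controller then adjusts
               the hole coefficient c (active control system).
    """
    level = 0.0
    for _ in range(steps):
        if reference is None:
            c = c_fixed
        else:
            # Controller: open the hole wider when the sensed level
            # exceeds the reference level.
            c = max(0.0, gain * (level - reference) + 1.0)
        level += dt * (inflow - c * level)
        level = max(level, 0.0)
    return level

# Passive bucket: the equilibrium level is inflow / c -- it goes
# wherever the inflow pushes it.
print(settle(2.0, c_fixed=1.0))    # ~2.0
print(settle(4.0, c_fixed=1.0))    # ~4.0

# Controlled bucket: the level stays near the reference even when the
# inflow doubles; the controller has shifted the equilibrium point.
print(settle(2.0, reference=1.5))  # ~1.5
print(settle(4.0, reference=1.5))  # ~1.5
```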

"Systems science" writers seem unaware of this difference between passive
equilibrium systems and active control systems.

This distinction is the essence of PCT. It is the reference signal
of a control system that gives it an _intrinsic_ purpose, as opposed
to a purpose that is only a use to which it could be put.

You are right: this is the control theory perspective. When we
construct a control system, we want to be able to specify the goal,
from which the design follows. The design's goal is then to realize
the specified goal as well as possible, and that normally implies a
high "loop gain" (accurate realization of the goal).

The design has no goal; it is the designer who has the goal of seeing the
control system behave in a certain way. The control system, if it does not
meet the designer's goal, will do nothing to change its own design until
the goal is met. It is the designer who will change the control system.

Adaptive systems of the MCT type have been given (by the designer) the
ability to perceive aspects of their own performance -- that is, a new
control system has been added to those that already exist, which perceives
some consequence of the operation of the other control systems inside the
whole system. A reference signal is supplied to specify the (designer's)
intended state of this perception of performance, and the error is used to
alter the parameters of the other control systems until the reference
signal is matched by the perceived performance. Of course the control
system that is doing this does not modify its _own_ parameters -- it
modifies the parameters only of the _other_ control systems. If the
adaptive system is not adapting properly, the designer will see that it is
not satisfying the designer's goal, and will have to alter the adaptive
part of the system. And the adaptive part does not set its own reference
levels; the designer decides what is to be considered optimum performance
and sets the reference levels accordingly.

Yet, this control system point of view leads to absurdity if the loop
gain is low. A (much simplified) example: it seems that our human
blood pressure "control system" has a "loop gain" of about 3. Say my
blood pressure is 120 mm Hg. A rapid calculation shows that the
"goal" of the system would be to have the blood pressure at 4/3 of
the equilibrium value, that is at 160 mm Hg. Such a blood pressure is
actually quite unhealthy. So we are "lucky" that the control system
has a gain of only 3: if it had a much higher gain (something
recommended in PCT and, of course, for many artificial control
systems) we wouldn't be nearly as healthy as we actually are due to
the control system's "deficiency".

There is another, and I think more convincing, interpretation of these
facts: blood pressure itself is not the only variable under control. Why is
there blood pressure at all? In order to move blood through the circulatory
system, to carry nutrients and oxygen to the tissues, move waste products
to the organs that get rid of them, and to get rid of excess heat. It is
more important that these factors be controlled than it is to control blood
pressure. In fact, for these factors to be controlled against disturbances,
blood pressure must _vary_. The blood pressure system might actually
involve very tight control, but its reference level may be variable
according to the _effects_ the blood pressure is having -- for example,
affecting the blood supply to the brain. In other words, the blood pressure
control system may be subordinate to other control systems.

There are clearly hierarchically-related control processes going on here,
with the higher level control systems being concerned with the effects of
heart rate and stroke volume much more than with the blood pressure
required to maintain the flows that are needed. It is possible that blood
pressure per se is not controlled at all -- it may simply be an output
variable that is altered by other systems as a way of controlling more
important variables. Of course there might be some degree of direct control
of blood pressure, because even a gain of 3 is better than no gain at all;
3/4 of the effect of any disturbance of blood pressure would be cancelled.
It doesn't matter if the reference signal is 160 mm, because that reference
signal is varied by the systems regulating the _effects_ of blood pressure.
It is set to whatever level is required to keep those effects under
control. If any "unhealthy" effects were to occur, the reference signal
would be lowered.
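
This arrangement is a cascade: a tight inner pressure loop whose
reference is set by an outer loop controlling an _effect_ of pressure.
A toy sketch (all gains and numbers invented; flow is modeled simply
as pressure divided by resistance): flow stays near its reference in
both cases, while the pressure reference, and hence the pressure,
becomes whatever the disturbed resistance requires.

```python
# Toy cascade: an outer loop controls FLOW by adjusting the reference
# of an inner loop that controls PRESSURE tightly.
# flow = pressure / resistance. All numbers are illustrative.

def run_cascade(resistance, flow_ref=10.0, steps=50000, dt=0.001):
    pressure = 0.0
    pressure_ref = 0.0
    for _ in range(steps):
        flow = pressure / resistance
        # Outer (slow) loop: raise the pressure reference when flow
        # falls short of the flow reference.
        pressure_ref += dt * 5.0 * (flow_ref - flow)
        # Inner (fast, high-gain) loop: make pressure track its
        # reference.
        pressure += dt * 200.0 * (pressure_ref - pressure)
    return pressure, pressure / resistance

# Flow is held near 10 in both cases; pressure varies with resistance.
print(run_cascade(12.0))  # pressure ~120, flow ~10
print(run_cascade(16.0))  # pressure ~160, flow ~10
```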

This resulting absurdity (as in "it is the goal of the blood pressure
control system to stabilize the pressure at 160 mm Hg") is the major
reason why physiologists, for example, are naturally reluctant to
talk about reference levels. Yet talking about loop gains is quite
natural to them; see Guyton's textbook "Physiology", for example. The
same argument seems to apply in the context of systems science /
cybernetics, where loop gains are frequently low as well.

Physiologists would be quite right if they were being reluctant to talk
about _fixed_ reference levels. The absurdity is in trying to analyze a
physiological control system as if it exists in isolation. It is possible
that blood pressure per se may not be under control at all, but only
certain _effects_ of blood pressure. A test to determine loop gain of a
supposed blood pressure control system might actually be measuring the net
gain of several other control systems which vary blood pressure (among
other things) to control something else. Even if it is under control, you
can't measure the true loop gain unless you prevent other systems from
altering the reference signal. When you disturb blood pressure, you are
disturbing blood flow, and thus all the other variables that depend on
blood flow. If you don't understand the systems that are controlling those
other variables, you can't predict how the reference level for blood
pressure, or the signal driving heart rate and volume (if there is no
control system for blood pressure), will vary.

The problem with studying one system at a time is that there are really
many interacting systems, and unless you have some concept of how the whole
collection of systems works, you can't get a true picture of any one of them.

The confusion by de Rosnay of equilibrium processes with control
processes is a sure sign that he is ignorant of the real properties
of living control systems.

There is nothing that prevents you from calculating reference levels
in the kind of "equilibrium" systems that cybernetics is concerned
with. Doing so in low loop gain systems, however, will show up the
kind of absurdity I mentioned above. The concept of reference levels
is quite valuable in high gain control systems, not in low gain ones.

Forget about the "absurdity." There is none. Reference levels exist
regardless of the gain: the reference level is that level of input at which
the output is just zero or neutral, and such a level can be defined quite
independently of loop gain. Varying the reference level will vary the
operating point of the control system, period. Even in a control loop with
a gain of only 3, variations in the reference signal will be tracked by
variations in the perceptual signal that are 3/4 as large. That size of
effect can't just be ignored.
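
Both fractions follow from the standard closed-loop algebra for a
proportional loop of gain G: the perception tracks G/(1+G) of a
reference change, and only 1/(1+G) of a disturbance gets through. A
two-line check:

```python
# Standard closed-loop fractions for a proportional loop of gain G
# (textbook control theory; nothing PCT-specific in the algebra):
#   reference tracking:       p/r = G / (1 + G)
#   disturbance transmission: p/d = 1 / (1 + G)

def tracking_fraction(G):
    return G / (1 + G)

def disturbance_fraction(G):
    return 1 / (1 + G)

print(tracking_fraction(3))     # 0.75 -- tracks 3/4 of a reference change
print(disturbance_fraction(3))  # 0.25 -- 1/4 of a disturbance gets through
```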

Anyway, I suspect that if you measure a loop gain of only 3 in any
physiological control system, you have misidentified the controlled
variable, or have disturbed other systems that are altering the reference
signal of the control system.

A point of more interest, maybe: at which loop gain do we start to
speak of a "control system" rather than an "equilibrium system"? Is
the value of the loop gain what distinguishes between both, or is
there something else (as well)? Usage of externally supplied energy
maybe? If so, which kinds of energy are we allowed to consider? Does
a falling ball consume "potential energy" when it realizes its "goal"
of seeking the earth's center of gravity?

Loop gain is the critical factor; energetics follow. The loop gain in any
equilibrium system is at most 1, and in all real cases is less than 1. To
get any loop gain greater than 1, it is necessary to have an external power
source. This is because amplification involves an output that contains more
power than the input that drives it; that extra power has to come from
somewhere. It is this extra power that enables a control system to _resist_
disturbances, and _overcome_ resistance when it is given a varying
reference signal.

The falling ball does not involve any circular causation, so it's outside
the purview of this discussion. And even if you manage to see a closed loop
in it, the energy expended by the ball in falling to the ground is equal to
or less than the energy expended to raise it in the first place. Entropy
sees to it that any "loop gain" you can manage to see in the situation is
less than 1.

In an equilibrium system the equilibrium exists between two (or more)
"conflicting" tendencies (sorry, I cannot find a better word). In the
case of the blood pressure control system, for instance, one tendency
is the blood vessel walls' natural elasticity, which "attempts to"
decrease the vessel's diameter and thus to increase the blood
pressure.

The walls of the blood vessel (if not actively constricted by another
control system) have a diameter that depends on pressure. They don't
"attempt" to do anything. If the pressure rises, they expand.

The other tendency is due to the baroreceptor control system's
action, where increasing blood pressure increasingly relaxes the
smooth musculature that surrounds the vessels, thus "attempting to"
decrease the blood pressure.

This is what I mean by talking about systems as if they existed in
isolation. Vasoconstriction is also affected by many other control systems;
for example, temperature control systems. Since vasoconstriction affects
blood flow at constant pressure, it affects the flow of nutrients and
oxygen, and the rates of gas exchange in the lungs, and heat transport to
and from the periphery and the brain. All the control systems associated
with those variables also get into the act, among other things contributing
to the reference levels for heart rate and stroke volume. If you could
write the system equations for ALL of the control systems that are
involved, you would find no conflicts.

PCT would, I guess, consider the
first tendency passive, the second one active (a "one way" control
system). Systems science mostly tends to disregard this distinction,
and attempts to analyze in terms of "tendencies", whether active or
passive. In the process, it eliminates a lot of problems in cases
where it is simply unknown -- or considered unimportant -- whether a
process is active (a "control system") or passive (resulting from
material properties). For instance: is a cell's maintenance of its
sodium concentration an active or a passive process? However
interesting this question is, we would frequently want to skip it and
to focus our attention on other issues, simply taking for granted
_that_ a cell's sodium concentration normally does not vary much.

That seems legitimate to me, provided that the sodium concentration does
not actually vary enough to cause significant changes in whatever depends
on it.

Moreover, an attempt to answer such questions leads one to ever
deeper levels. A cell's sodium concentration is regulated by sodium
pumps. But then what regulates the action of the sodium pumps? We
finally arrive at an explanation in terms of physical forces between
atoms. But not for all questions is it meaningful to give a solution
in terms of elementary physical properties.

This is going in the wrong direction for a control-system analysis. The
lower-level variables will be much more variable than higher-level
variables: the lower-level variations are occurring _in order to_ keep the
higher level variables from changing, and they're being caused to vary by
the higher-level systems, not by causes at a still more molecular level.

This is in direct contrast to the usual biological or biochemical approach,
in which the lower-level processes are imagined to control the higher-level
processes. Reductionism doesn't work in control theory.

Analysis can always be done in terms of equilibria, I guess.

Yes, it can, but when active control is involved, it is a mistake.

PCT -- or control theory in general -- makes finer distinctions, and
that may be an advantage. In PCT a passive tendency would be modelled
as and called an environment function, an active one a control
system. And PCT even considers equilibria resulting from the
interaction of _two_ active systems, for which the term "conflict"
was invented.

Sometimes, however, these distinctions are unimportant. I can as
easily manipulate a control system (say a room thermostat) as a
passive system (say a ball). And in both cases the ease with which I
can manipulate an object seems to depend on the complexities of its
"laws of motion" or, as I would say, on how well my brain can "model"
that object in terms of how it will behave as a result of my
manipulations. It is easier to manipulate a thermostat than a human
being, as it is easier to manipulate a soccer ball than a football or
rugby ball.

This is true, and the reason it's true has to do with hierarchies of
control. When you manipulate the thermostat, you do so by altering its
reference signal, not by turning the furnace on and off. If you had to
operate the furnace directly, you'd have to take into account every heat
loss and heat gain from every part of the building and from every cause,
and you'd have to understand the physics of heat transport by conduction
and diffusion. In fact, all you have to worry about is your own skin
temperature. When you feel cold, you turn up the thermostat setting, and
the thermostat then automatically takes care of manipulating the furnace
output to keep its own sensed temperature at the specified level. You don't
have to "model" the physics of heating the room at all. That's all done for
you by the thermostat, a lower-level control system as it relates to you.
While the furnace ultimately causes your sensed skin temperature, your
desired skin temperature ultimately causes the furnace's action.

All in all, I don't see "confusion" in the Macroscope, just a
different perspective.

No, not just a different perspective -- what I see there is ignorance of
control theory, and an attempt to handle control processes without
understanding control processes.

A final question: an _active_ control system can ultimately be
analyzed in terms of physical forces between atoms, which we normally
consider to be _passive_. At which level of analysis do we start to
speak of that system being active? Why? In other words: what is the
nature of a control system (in contrast to an equilibrium system),
apart from control being a certain perspective on investigation?

When there is a closed causal loop with a negative loop gain greater than
1. To get a gain greater than one, many atoms in high energy states have to
be reduced to low energy states while raising a few atoms to higher energy
states. The energy gain in the closed control loop is obtained at the
expense of a large energy loss in the flow of energy-bearing materials into
the system and the flow of degraded materials out of it.

Of course the _degree_ of control depends on the _amount_ of loop gain. But
the boundary between control systems and passive equilibrium systems is at
a loop gain of 1.
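The role of loop gain in the degree of control can be shown with the standard steady-state relation for a simple negative-feedback loop (a sketch for illustration, not from the original post): a disturbance d on the controlled variable is attenuated by the factor 1/(1 + G), where G is the static loop gain.

```python
# Illustrative: residual effect of a disturbance in a simple negative-
# feedback loop with static loop gain G. A gain near or below 1 barely
# opposes the disturbance; a large gain all but cancels it.

def residual_disturbance(d, loop_gain):
    """Steady-state deviation of the controlled variable from its
    reference, given disturbance d and static loop gain G."""
    return d / (1.0 + loop_gain)

for g in (0.5, 1.0, 10.0, 300.0):
    print(g, residual_disturbance(10.0, g))
```

At G = 1 the disturbance is merely halved; at G in the hundreds it is reduced to a fraction of a percent, which is what "control" looks like quantitatively.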

Best,

Bill P.

[Hans Blom, 970625]

(Bill Powers (970624.0645 MDT))

The essential point that is missed in what you call "systems
science" is that a control loop is NOT the same as two mutually
opposing forces or tendencies. Suppose you run water into a bucket
with a small hole in its bottom. The water level will rise until the
pressure at the hole increases enough to cause an outflow equal to
the inflow. This is a pure equilibrium system: causation runs from
inflow to outflow.

But now add a control loop to this picture. Use a sensor which
detects the level of water without affecting it -- a small float,
for example, or an electronic scale that weighs the bucket. Compare
the signal from the sensor with an adjustable reference signal.
Amplify the output signal to run a motor and connect the motor to
adjust the diameter of the hole in the bottom of the bucket.

Why so complicated? Why use a motor to control a water level? What
you propose is realized much more simply in every WC: use a float to
operate/control the water inflow into the "bucket". As simple -- and
as mechanical -- as Watt's steam engine regulator.

I maintain that such a system can be modelled both as a control
system AND as an equilibrium system. Explaining it to a layman would,
I think, be easier in terms of an equilibrium. Modeling the WC as a
control system introduces the absurdity I mentioned. What is the
"reference (water) level" of your WC?

"Systems science" writers seem unaware of this difference between
passive equilibrium systems and active control systems.

Since many "systems science" people have had a control engineering
education, I think that this remark is in error. Unless I interpret
your statement as a subjective one: "it seems _to me_". In that case I
can only counter that it doesn't seem so to me...

When we construct a control system, we want to be able to specify
the goal, from which the design follows. The design's goal is then
to realize the specified goal as well as possible, and that
normally implies a high "loop gain" (accurate realization of the
goal).

The design has no goal; it is the designer who has the goal of
seeing the control system behave in a certain way.

You fly off at a tangent. With "design" I meant the design process of
the system's constructor. That "design" certainly has a goal.

There is another, and I think more convincing, interpretation of
these facts: blood pressure itself is not the only variable under
control.

I fully agree: every control system is part of a larger whole. Now
apply your reasoning to cursor tracking. Having the cursor on the
target is not the only variable under control, as I experience daily.
As soon as I hear the shout "Coffee is ready", I abandon the computer
and control suddenly deteriorates tremendously ;-). I leave it up to
you to find the answer to: Why is there cursor tracking at all? ;-)

But I guess the answer would be, again paraphrasing you:

There are clearly hierarchically-related control processes going on
here, with the higher level control systems being concerned with the
effects of cursor tracking much more than with the joystick position
required to maintain the cursor positions that are needed.

And also:

It is possible that cursor position per se is not controlled at all
-- it may simply be an output variable that is altered by other
systems as a way of controlling more important variables.

And I would fully agree with that as well ;-).

Greetings,

Hans

[From Bill Powers (970625.1044 MDT)]

Hans Blom, 970625--

(Bill Powers (970624.0645 MDT))

The essential point that is missed in what you call "systems
science" is that a control loop is NOT the same as two mutually
opposing forces or tendencies. Suppose you run water into a bucket
with a small hole in its bottom. ...

Why so complicated? Why use a motor to control a water level? What
you propose is realized much more simply in every WC: use a float to
operate/control the water inflow into the "bucket". As simple -- and
as mechanical -- as Watt's steam engine regulator.

Sure, that would work, too. I don't have any hangup about seeing control
systems as electronic only.

I maintain that such a system can be modelled both as a control
system AND as an equilibrium system. Explaining it to a layman would, I
think, be easier in terms of an equilibrium.

It would also be untruthful. The WC control system contains power gain,
the excess power being tapped off the pressure and flow in the water
supply. (In fact my WC, which does not use a float, is equipped with a
small fluidics amplifier that makes the valve very sensitive to small
changes in water pressure at the bottom of the tank. It costs less than
the float-type controller. The reference pressure is set with a small
screw that changes the pressure at which the valve just closes.)

Modeling the WC as a
control system introduces the absurdity I mentioned. What is the
"reference (water) level" of your WC?

I defined the reference level for you yesterday: it is that level of the
controlled variable at which the output is just zero (or neutral). The
reference level on the float-type WC controller is adjusted by bending the
rod that supports the float. Ktesibios did it by varying the position of
one half of the float-operated valve, 2200 years ago. The float-operated
controller, too, contains power gain: the required energy is provided by
the work done on the float by the rising water. This is how the valve can
be shut all the way against the pressure of the water supply. It is an
integrating control system.
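The passive bucket and the controlled one can be contrasted numerically (a sketch under stated assumptions: the square-root outflow law, the gains, and the hole-size limits are all illustrative, not drawn from the original post). In the passive case the level settles wherever inflow happens to balance outflow; in the controlled case an integrating controller adjusts the hole size until the sensed level matches the reference, whatever the inflow.

```python
import math

def equilibrium_level(inflow, k=1.0, dt=0.01, steps=20000):
    # Passive bucket: level rises until outflow k*sqrt(level) equals inflow.
    level = 0.0
    for _ in range(steps):
        outflow = k * math.sqrt(level)
        level = max(0.0, level + dt * (inflow - outflow))
    return level

def controlled_level(inflow, reference=4.0, k_min=0.1, k_max=10.0,
                     dt=0.01, steps=20000):
    # Controlled bucket: an integrating controller adjusts the hole size k
    # to keep the sensed level at the reference, whatever the inflow.
    level, k = 0.0, 1.0
    for _ in range(steps):
        error = level - reference
        k = min(k_max, max(k_min, k + 0.05 * error))  # widen hole if too high
        outflow = k * math.sqrt(level)
        level = max(0.0, level + dt * (inflow - outflow))
    return level
```

Doubling the inflow quadruples the passive equilibrium level, but leaves the controlled level at its reference; that insensitivity to the disturbance is the signature of control.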

"Systems science" writers seem unaware of this difference between
passive equilibrium systems and active control systems.

Since many "systems science" people have had a control engineering
education, I think that this remark is in error. Unless I interpret
your statement as a subjective one: "it seems _to me_". In that case I can
only counter that it doesn't seem so to me...

That's because you, too, seem unaware of the difference, since you keep
wanting to lump equilibrium systems with control systems. I can only judge
what a writer knows by what he writes. I have spoken with many control
system engineers who feel that their educations left them unequipped to
deal with real control systems -- they had to learn that from experience
(those still capable of learning, that is).

When we construct a control system, we want to be able to specify
the goal, from which the design follows. The design's goal is then
to realize the specified goal as well as possible, and that
normally implies a high "loop gain" (accurate realization of the
goal).

The design has no goal; it is the designer who has the goal of
seeing the control system behave in a certain way.

You fly off at a tangent. With "design" I meant the design process of the
system's constructor. That "design" certainly has a goal.

No, the constructor has a goal which a given design satisfies to a greater
or lesser extent. You vary the design until it fits the reference criteria.
But you refuse to see the difference between a reference signal and an
outcome, so I don't know why I'm bothering to explain it.

There is another, and I think more convincing, interpretation of
these facts: blood pressure itself is not the only variable under
control.

I fully agree: every control system is part of a larger whole. Now
apply your reasoning to cursor tracking. Having the cursor on the
target is not the only variable under control, as I experience daily. As
soon as I hear the shout "Coffee is ready", I abandon the computer and
control suddenly deteriorates tremendously ;-). I leave it up to you to
find the answer to: Why is there cursor tracking at all? ;-)

Because there is a higher-level purpose that is satisfied by (among other
possible activities) doing the tracking task. When coffee is ready, we see
the relative priorities of the different higher-level purposes.

But I guess the answer would be, again paraphrasing you:

There are clearly hierarchically-related control processes going on
here, with the higher level control systems being concerned with the
effects of cursor tracking much more than with the joystick position
required to maintain the cursor positions that are needed.

Yes, that is what my answer would be. This does not mean that there is no
concern with varying the joystick position to control the cursor. There are
different levels of control acting at the same time. Also, if you
understood PCT, you would not have said that ANY control system is
"concerned with the joystick position required to maintain the cursor
positions that are needed." That's YOUR way of looking at control.

And also:

It is possible that cursor position per se is not controlled at all
-- it may simply be an output variable that is altered by other
systems as a way of controlling more important variables.

And I would fully agree with that as well ;-).

That is not what I would say, so I don't know with what straw man you are
agreeing. In fact, when we model cursor-control behavior, we find that the
best model is a control system with a leaky integrator and a static loop
gain in the hundreds. The control hypothesis is pretty convincing,
especially when we establish that changing the conditions has just the
effects that the model predicts. Of course there is the question of why a
person does the control task at all, and there the answer has to lie in
higher systems: to satisfy curiosity, to please an experimenter, and so
on. Those variables are controlled on a different time scale, and with
different control characteristics.


-------------------------
This is getting us nowhere, Hans. You are determined to show that there is
nothing new in PCT or HPCT, and that is what you think you are doing as you
pat yourself on the back for your clever responses. As long as you don't
want to see anything new in PCT, you will successfully avoid seeing it. You
once expressed (somewhat insulting) amazement that I should be able to
learn the MCT model; so far I am not amazed at your ability to grasp the
PCT model.

Best,

Bill P.

[From Rick Marken (970619.1245)]

Ellery Lanier (970619) --

Macroscope looks exactly like what I have been searching for

I think you'll be disappointed, Ellery. But it's worth a look.
There are some very cute cartoons.

I skimmed the discussion of economics in chapter 1; based on what I
read, it didn't seem too bad. The author does seem to get the basic idea
of circular flow between aggregate producer and consumer.

The big blunder is in the discussion of "The Living Organism" which
is in section 5 of chapter 1. Just look at the second cartoon in
that section (I've attached it to this post for people whose e-mail
systems can read GIFs); it's the one that has a pointing hand in
the upper left labeled "Start here".

The cartoon is a functional diagram of information and energy flow
in a living organism. The "Start Here" hand indicates where the
flow of information though the organism begins: information comes
into the organism from the environment; it goes through the
organism's transducers to the "Center for regulation, control and
communication" and then on to "Transmitters" (output devices) that
have "Effects on the environment". The crucial connection between
these output effects and the information entering the organism at
the "Start here" box is not shown. So the picture gives the strong
impression that the living organism is a one way, open-loop
information processing system. To a PCTer this picture of a
living organism is as quaintly wrong as a picture of the earth at the
center of concentric crystal spheres, each carrying a planet, with one
sphere carrying the sun.

Best

Rick

(Attachment MacroscFig30.GIF is missing)


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/