Tom Bourbon [941005.1615]
[Hans Blom, 941005]
(Tom Bourbon [941004.0945])
I am happy that all of you took this "inside" perspective: "irrelevant to
the ECU doing the controlling that leads to the side effects." But several
of you went further and noticed that, maybe, this picture was too limited.
Too limited? For what? To whom? Certainly not to the individual ECU doing
the controlling, and that's all that matters here.
Forgive me my background in control engineering, but what matters _there_ is
what is sometimes called "multi-sensor/multi-actuator integration".
No, please. You forgive _me_, for not recognizing that you wanted us to
talk about control engineering when we replied to the "assignment" you
posted on a net devoted to the study of living things as control systems.
The mistake is mine.
One-sensor/one-actuator systems with just one goal (reference level) hardly
occur (even the simplest ones, on further consideration, frequently contain
extra controls, even if those are just limits to be obeyed) and they aren't
interesting.
My mistake again. I thought you were interested in our ideas; otherwise, I
couldn't imagine why you would ask the questions you did. I didn't
realize you had already dismissed our ideas as not interesting. Have you
run all of the PCT demonstrations, or read our little collections of
published reports on modeling? If you did, did you notice how smoothly and
seamlessly the modeling progresses from the uninteresting (for you) case of
a single elementary perceptual control unit (ECU), to interactions between
two or more ECUs that share an environment and affect one another's
controlled variables, to multiple ECUs running in parallel, to hierarchical
ECUs often running in parallel at a given level in the hierarchy? All the
way through that work, the same uninteresting model continues to produce the
same kinds of results; thus far, we have found no reason to abandon that
model and look for others, perhaps in control engineering.
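That "uninteresting" starting case can be sketched in a few lines. The following is my own illustration in Python, not code from the PCT demonstrations, with assumed gain and slowing parameters; it shows the defining property of a single elementary control unit: it controls its perception, not its action, so whatever constant disturbance is applied, the perception is driven toward the reference.

```python
# A minimal single-ECU sketch (my own illustration, assumed parameters):
# p = o + d, e = r - p, and the output is a slowed integration of the error.
def run_ecu(reference, disturbance, gain=100.0, slowing=0.005, steps=200):
    output = 0.0
    p = disturbance
    for _ in range(steps):
        p = output + disturbance                      # perception of the controlled variable
        error = reference - p                         # comparison against the reference level
        output += slowing * (gain * error - output)   # slowed integration of the error
    return p

# Opposite disturbances, same result: perception ends near the reference of 3.0.
print(run_ecu(reference=3.0, disturbance=-7.0))
print(run_ecu(reference=3.0, disturbance=5.0))
```

Note that the two runs end with very different outputs but nearly identical perceptions; that asymmetry is the whole point of the elementary unit.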
An interesting control _system_ has more than one input and/or
output, and more than one goal.
Again, did you run the demonstrations and read the reports? What you call
"interesting" can be found there. I honestly don't know what you are
driving at, Hans. I have the feeling that we have been through this
discussion with you many times, but it seems that we have all failed to make
our ideas clear to one another.
The problem is then how to design a system in such a way that an "optimal
compromise" is reached among the goals, which often cannot all be realized at
the same time under all circumstances.
"Optimal," again. "Optimal" may be (no, I will go so far as to say it "is")
a legitimate term for control engineers, but it seems to have little if
anything to do with perceptual control systems.
Powers proposes a hierarchy as the mechanism to realize such a cooperation.
What is more common in complex control systems is a "society of mind" type of
organization: a number of parallel control loops that not only monitor their
own process but are also, somehow (through cross-connections), aware of what
other control loops are doing.
This sentence is loaded with terms that beg definition and that, as I
understand them, probably have little or nothing to do with the way living
systems work. All of our attempts to model interactions between controllers
involve "cross-connections," but the connections are through the environment,
where the actions of one controller may affect variables controlled by
others; the connections are not "somehow," but direct and physical. That
was my point in the post to which you are replying here.
The point is that each unit control system adjusts its own goal depending upon
the behavior of other unit control systems.
OK, and in our modeling thus far, the most common way for multiple
units to interact is that each unit "adjusts" its actions to eliminate
the disturbances produced by the actions of other units. I have some
adaptive PCT models (described long ago in discussions with you) in which
one model adjusts its own parameters. Bill Powers has some adaptive models
that are much more complex than mine; he has described them on this list.
Again, I'm not sure what point you are trying to make. The facts that many
control units might interact, and that when they interact they might disturb
one another, and that some of them might be adaptive, do not change our
answers on your "assignment." To the contrary, we gave our answers with
full knowledge of those facts.
A hierarchical organization does exactly the same, upon closer consideration.
Therefore, as a control engineer, it strikes me as funny to say that
all that matters to an ECU is what it itself does.
But that is not what we said, Hans. We have _never_ said that. We have
always said all that "matters" to an ECU is what it _perceives_. An ECU can
only "know" what it "perceives" and that is all the ECU can control. An ECU
might "do" many things "to itself," by way of unperceived effects, but those
things do not "matter" to the ECU. See my example of anaerobic life, in
the post to which you are replying.
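The distinction can be shown in code. In this sketch (mine, with assumed numbers, not taken from the anaerobe example or any published demo), the ECU's output also drives a second variable for which the unit has no sensor; that side effect never enters the loop and therefore cannot "matter" to the unit.

```python
# An ECU can only control what it perceives (my illustration, assumed values).
# The output also produces an unperceived side effect, which the unit neither
# monitors nor opposes.
def run_with_side_effect(reference=4.0, gain=100.0, slowing=0.005, steps=2000):
    output = 0.0
    p = side_effect = 0.0
    for _ in range(steps):
        p = output                          # the only variable the ECU perceives
        output += slowing * (gain * (reference - p) - output)
        side_effect = 0.3 * output ** 2     # unperceived consequence of acting
    return p, side_effect

p, side = run_with_side_effect()
print(p, side)   # p ends near the reference; the side effect is whatever it is
```

The side effect could be harmless or lethal to the system; either way, nothing in the loop represents it, so nothing in the loop responds to it.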
You might of course maintain the opposite position: since a multi-input multi-
output system is organized in such a way that the goal of each individual
control unit takes into account the behavior of other units, it needs only
pursue its own goal. Fine. What I wanted to make explicit are the linkages that
exist between ECUs.
I respect that goal, but you did not make it clear in your assignment. I do
wonder why you think it is necessary to make the environmental linkages
explicit to a group where people always make them explicit, anyway. Have we
communicated that fact so poorly that you did not notice it? If so, we must
find more effective ways to talk about interactions among ECUs.
. . .
What is a "side effect" for ECUa might well affect ECUb. Sure. But what I
wanted to emphasize is that ECUb's subsequent actions might in turn affect
ECUa again. That is, the result of ECUa's actions loops back to itself. It is
in this sense that I reject the "irrelevance" of the side effect.
That circumstance is obvious and it is discussed in our work on interacting
systems. When I read your assignment, I never guessed that you were trying
to make this point on which we all agree already. In my example, we are
talking about an effect of ECUa's actions that "comes back" in the form of
disturbances on ECUa's controlled variables, and we are right back to
garden variety PCT - nothing new there.
(I was really hoping you could be at the PCT workshop in Wales, Hans. I
wanted to show you some of my programs in which people and models and hands
interact in real time, producing all kinds of effects that "come back" to
each controller by way of the actions of other, disturbed, controllers. I
even made up some new programs for you; in them, the participating people
never would guess the paths through which effects travel during the
interactions, but everything can be duplicated by a few of the very
simplest ECUs from PCT. The "trick" is to keep the models simple and
dumb and make all of the complicated connections among variables in the
environment.)
An exercise that is often fruitful in control engineering is to consider what
an "infinitely intelligent" controller might achieve. That is: accept the
sensors and actuators exactly as they are, and consider which internal
organization or "algorithm" would provide optimal performance. This exercise
demands that you drop preconceptions about the controller's internal
organization, and often reduces to establishing what the system _can_ know about its
world given the perceptions that it can acquire through its sensors and the
"experiments" that it can perform upon its world through its actions. This
smacks of optimal control theory and also information theory, of course. The
point is, that an "infinitely intelligent" controller would frequently be able
to discover that other controllers exist in its environment, and hence, in its
own control actions, would be able to take account of the regularities in the
control of those "others". A more practical point is that frequently the
"intelligence" of the controller can be quite limited (because of its limited
number of sensors and actuators), yet result in (almost) the same optimal
performance.
It looks to me as though you are describing a hierarchical PCT model,
except that the ECUs in the PCT model would not require the sophistication
and "infinite intelligence" you seem to require in control engineering.
PCT has a different focus; it presumes a certain internal organization of the
system. This presumption may be so limiting that the above-described type of
"intelligence" becomes impossible. Maybe not. Maybe the strictly hierarchical
organization does not impose limits. But that remains to be shown.
Oops. I thought you were describing the kinds of results we _do_ obtain
with multiple PCT units in parallel, in a hierarchy, or in a combination of
parallel units in a hierarchy. Thus far, we have seen no need to include
all of the jazzy features that you control engineers seem to need.
The question of whether our simple model, which you describe as uninteresting
and perhaps too limiting, is adequate to our purposes is answered by whether
the model duplicates the behavior of people, acting alone, or while
interacting. So far, the PCT model has passed the tests.
Whether at one level, or in a hierarchy, it is conceivable for side effects
that are irrelevant to perceptual control by any of the ECUs to have
deleterious effects on the ECUs themselves, but that still leaves the side
effects irrelevant to the ECU's control of p relative to p*.
Wouldn't that be a "dumb" controller?
You bet it would, and that's exactly what we want. That's all we find in
living creatures -- lots of local, dumb controllers. Put a few of
those dumb controllers together, and watch the interesting, seemingly
"intelligent", things they can do.
Bill Powers' example was that when, at a football (was it football?) game, you
want to see better, stand up, and hit your head against an overhead steel
girder, the blow is an "insignificant" side effect of the wish to see the game
better. Not so, in my opinion: the resulting concussion may send you to the
hospital, so that you cannot see the remainder of the game at all. The
single-mindedness of the ECU has an effect not only on other systems, but also
on itself.
I'll let Bill talk to you about that example. I was hoping to see your
thoughts about the anaerobes that created oxygen, which oxidized them.
Speaking of Bill, I too had the impression he described in
(Bill Powers (941004.0700 MDT))
when he wrote:
Your argument depends not only on the false conclusion 2 or 2a above,
but on a gradual shift of the meaning of "relevant" during the
development. We progress from "relevant to the control system in
question" to "relevant to some control system" to "relevant in the sense
of having some objective effect on something else, whether this effect
is represented perceptually or not." Thus the term "irrelevant side
effects," which begins by meaning only side-effects unrelated to a
control system controlling a given variable, comes to mean "effects of
behaviors which have no effects," a null set.
I did not intend a shift of the meaning of "relevance". I always meant the
relevance of one control system's actions to the control of _that_ control
system, even if the relevance is indirect and effected through others. I
gladly agree with your "null set", however.
It certainly did look like a shift of meaning. I'm glad you cleared up that
point.