"irrelevant side effects", again

Tom Bourbon [941005.1615]

[Hans Blom, 941005]

(Tom Bourbon [941004.0945])

Hans:

I am happy that all of you took this "inside" perspective: "irrelevant to
the ECU doing the controlling that leads to the side effects." But several
of you went further and noticed that, maybe, this picture was too limited.

Tom:

Too limited? For what? To whom? Certainly not to the individual ECU doing
the controlling, and that's all that matters here.

Hans:

Forgive me my background in control engineering, but what matters _there_ is
what is sometimes called "multi-sensor/multi-actuator integration".

No, please. You forgive _me_, for not recognizing that you wanted us to
talk about control engineering when we replied to the "assignment" you
posted on a net devoted to the study of living things as control systems.
The mistake is mine. ;-)

One-sensor/one-actuator systems with just one goal (reference level) hardly
occur (even the simplest ones, on further consideration, frequently contain
extra controls, even if those are just limits to be obeyed) and they aren't
interesting anyway.

My mistake again. I thought you were interested in our ideas; otherwise, I
couldn't imagine why you would ask the questions you did. I didn't
realize you had already dismissed our ideas as not interesting. Have you
run all of the PCT demonstrations, or read our little collections of
published reports on modeling? If you did, did you notice how smoothly and
seamlessly the modeling progresses from the uninteresting (for you) case of
a single elementary perceptual control unit (ECU), to interactions between
two or more ECUs that share an environment and affect one another's
controlled variables, to multiple ECUs running in parallel, to hierarchical
ECUs often running in parallel at a given level in the hierarchy? All the
way through that work, the same uninteresting model continues to produce the
same kinds of results; thus far, we have found no reason to abandon that
model and look for others, perhaps in control engineering.
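
(For concreteness, here is roughly what one of those "uninteresting"
units amounts to in code. A toy sketch, not one of our published
programs; the gain, step size, and disturbance are made-up numbers.)

def run_ecu(reference, disturbance, steps=300, gain=50.0, dt=0.01):
    """Elementary control unit: act to keep perception p at p*."""
    output = 0.0
    for _ in range(steps):
        p = output + disturbance      # perception = own action + disturbance
        error = reference - p         # e = p* - p
        output += gain * error * dt   # integrate the error into the action
    return p

print(run_ecu(reference=10.0, disturbance=-3.0))   # p ends near p* = 10.0

That is the whole unit; everything else in our modeling is built from
combinations of units like this one.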

An interesting control _system_ has more than one input and/or
output, and more than one goal.

Again, did you run the demonstrations and read the reports? What you call
"interesting" can be found there. I honestly don't know what you are
driving at, Hans. I have the feeling that we have been through this
discussion with you many times, but it seems that we have all failed to make
our ideas clear to one another.

The problem is then how to design a system in
such a way that an "optimal compromise" is reached among the goals, not all of
which can often be realized at the same time under all circumstances.

"Optimal," again. "Optimal" may be (no, I will go so far as to say it "is")
a legitimate term for control engineers, but it seems to have little if
anything to do with perceptual control systems.

Bill
Powers proposes a hierarchy as the mechanism to realize such a cooperation.
What is more common in complex control systems is a "society of mind"-type of
organization, a number of parallel control loops that do not only monitor
their own process but are also, somehow (through cross-connections), aware of
what other control loops are doing.

This sentence is loaded with terms that beg definition and that, as I
understand them, probably have little or nothing to do with the way living
systems work. All of our attempts to model interactions between controllers
involve "cross-connections," but the connections are through the enviroment,
where the actions of one controller may affect variables controlled by
others; the connections are not "somehow," but direct and physical. That
was my point in the post to which you are replying here.

The point is, that each unit control
system adjusts its own goal depending upon the behavior of other unit control
systems.

OK, and in our modeling thus far, the most common way for multiple
units to interact is that each unit "adjusts" its actions to eliminate
the disturbances produced by the actions of other units. I have some
adaptive PCT models (described long ago in discussions with you) in which
one model adjusts its own parameters. Bill Powers has some adaptive models
that are much more complex than mine; he has described them on this list.
Again, I'm not sure what point you are trying to make. The facts that many
control units might interact, and that when they interact they might disturb
one another, and that some of them might be adaptive, do not change our
answers on your "assignment." To the contrary, we gave our answers with
full knowledge of those facts.
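
(Again a toy sketch, with an invented coupling constant: two such units
share an environment, each unit's action disturbs the other's controlled
variable, and each simply keeps controlling its own p.)

def run_pair(r_a, r_b, coupling=0.5, steps=500, gain=20.0, dt=0.01):
    out_a = out_b = 0.0
    for _ in range(steps):
        p_a = out_a + coupling * out_b    # b's action disturbs a's variable
        p_b = out_b + coupling * out_a    # a's action disturbs b's variable
        out_a += gain * (r_a - p_a) * dt  # each unit corrects its own error
        out_b += gain * (r_b - p_b) * dt
    return p_a, p_b

print(run_pair(r_a=5.0, r_b=-2.0))   # each p ends near its own reference

Neither unit perceives the other; each merely cancels the net disturbance
to its own controlled variable, and both succeed.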

A hierarchical organization does exactly the same, upon closer consideration.
Therefore, as a control engineer, it strikes me as funny to say that
all that matters to an ECU is what it itself does.

But that is not what we said, Hans. We have _never_ said that. We have
always said all that "matters" to an ECU is what it _perceives_. An ECU can
only "know" what it "perceives" and that is all the ECU can control. An ECU
might "do" many things "to itself," by way of unperceived effects, but those
things do not "matter" to the ECU. See my example of anaerobic life, in
the post to which you are replying.

You might of course maintain the opposite position: since a multi-input multi-
output system is organized in such a way that the goal of each individual
control unit takes into account the behavior of other units, it needs only
pursue its own goal. Fine. What I wanted to make explicit is the linkages that
exist between ECUs.

I respect that goal, but you did not make it clear in your assignment. I do
wonder why you think it is necessary to make the environmental linkages
explicit to a group where people always make them explicit, anyway. Have we
communicated that fact so poorly that you did not notice it? If so, we must
find more effective ways to talk about interactions among ECUs.

. . .
Hans:

What is a "side effect" for ECUa might well affect ECUb. Sure. But what I
wanted to emphasize is that ECUb's subsequent actions might in turn affect
ECUa again. That is, the result of ECUa's actions loops back to itself. It is
in this sense that I reject the "irrelevance" of the side effect.

That circumstance is obvious and it is discussed in our work on interacting
systems. When I read your assignment, I never guessed that you were trying
to make this point on which we all agree already. In my example, we are
talking about an effect of ECUa's actions that "comes back" in the form of
disturbances on ECUa's controlled variables, and we are right back to
garden-variety PCT -- nothing new there.

(I was really hoping you could be at the PCT workshop in Wales, Hans. I
wanted to show you some of my programs in which people and models and hands
interact in real time, producing all kinds of effects that "come back" to
each controller by way of the actions of other, disturbed, controllers. I
even made up some new programs for you; in them, the participating people
never would guess the paths through which effects travel during the
interactions, but everything can be duplicated by a few of the very
simplest ECUs from PCT. The "trick" is to keep the models simple and
dumb and make all of the complicated connections among variables in the
environment.)
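
(A toy version of that "trick," with an invented environment matrix E:
three identical dumb units, with all of the complexity in the
environmental connections. Not one of the Wales programs, just an
illustration.)

GAIN, DT, STEPS = 20.0, 0.01, 1000
E = [[1.0, 0.3, -0.2],     # row i: how every unit's action feeds unit i's p
     [0.4, 1.0, 0.5],
     [-0.3, 0.2, 1.0]]
refs = [4.0, -1.0, 2.0]    # each unit's own reference
outs = [0.0, 0.0, 0.0]

for _ in range(STEPS):
    ps = [sum(E[i][j] * outs[j] for j in range(3)) for i in range(3)]
    for i in range(3):
        outs[i] += GAIN * (refs[i] - ps[i]) * DT   # same dumb rule everywhere

print([round(p, 2) for p in ps])   # every p sits near its own reference

An observer watching the variables would be hard pressed to guess the
paths, but every unit is running the same two-line rule.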

An exercise that is often fruitful in control engineering is to consider what
an "infinitely intelligent" controller might achieve. That is: accept the
sensors and actuators exactly as they are, and consider which internal orga-
nization or "algorithm" would provide optimal performance. This exercise
demands you to drop preconceptions about the controller's internal organi-
zation, and often reduces to establishing what the system _can_ know about its
world given the perceptions that it can acquire through its sensors and the
"experiments" that it can perform upon its world through its actions. This
smacks of optimal control theory and also information theory, of course. The
point is, that an "infinitely intelligent" controller would frequently be able
to discover that other controllers exist in its environment, and hence, in its
own control actions, would be able to take account of the regularities in the
control of those "others". A more practical point is that frequently the
"intelligence" of the controller can be quite limited (because of its limited
number of sensors and actuators), yet result in (almost) the same optimal
behavior.

It looks to me as though you are describing a hierarchical PCT model,
except that the ECUs in the PCT model would not require the sophistication
and "infinite intelligence" you seem to require in control engineering.

PCT has a different focus; it presumes a certain internal organization of the
system. This presumption may be so limiting, that the above described type of
"intelligence" becomes impossible. Maybe not. Maybe the strictly hierarchical
organization does not impose limits. But that remains to be shown.

Oops. I thought you were describing the kinds of results we _do_ obtain
with multiple PCT units in parallel, in a hierarchy, or in a combination of
parallel units in a hierarchy. Thus far, we have seen no need to include
all of the jazzy features that you control engineers seem to need. :-)

The question of whether our simple model, which you describe as uninteresting
and perhaps too limiting, is adequate to our purposes is answered by whether
the model duplicates the behavior of people acting alone or interacting.
So far, the PCT model has passed the tests.

Tom:

Whether at one level, or in a hierarchy, it is conceivable for side effects
that are irrelevant to perceptual control by any of the ECUs to have
deleterious effects on the ECUs themselves, but that still leaves the side
effects irrelevant to the ECU's control of p relative to p*.

Hans:

Wouldn't that be a "dumb" controller?

You bet it would, and that's exactly what we want. That's all we find in
living creatures -- lots of local, dumb controllers. Put a few of
those dumb controllers together, and watch the interesting, seemingly
"intelligent", things they can do.

Bill Powers' example was that when, at a football (was it football?) game,
you stand up to see better and hit your head against an overhead steel
girder, the blow is an "insignificant" side effect of the wish to see the
game better. Not so in my opinion: the resulting concussion may result in
your being taken to the hospital, so that you cannot see the remainder of
the game at all. The single-mindedness of the ECU has an effect not only
on other systems, but also on itself.

I'll let Bill talk to you about that example. I was hoping to see your
thoughts about the anaerobes that created oxygen, which oxidized them.

Speaking of Bill, I too had the impression he described in

(Bill Powers (941004.0700 MDT))

when he wrote:

Your argument depends not only on the false conclusion 2 or 2a above,
but on a gradual shift of the meaning of "relevant" during the
development. We progress from "relevant to the control system in
question" to "relevant to some control system" to "relevant in the sense
of having some objective effect on something else, whether this effect
is represented perceptually or not." Thus the term "irrelevant side
effects," which begins by meaning only side-effects unrelated to a
control system controlling a given variable, comes to mean "effects of
behaviors which have no effects," a null set.

Hans:

I did not intend a shift of the meaning of "relevance". I always intended
the relevance of one control system's actions to the control of _that_
control system, even if the relevance is indirect and effected through
others. I gladly agree with your "null set", however.

It certainly did look like a shift of meaning. I'm glad you cleared up that
point.

Later,

Tom

[Hans Blom, 941005]

(Tom Bourbon [941004.0945])

I am happy that all of you took this "inside" perspective: "irrelevant to
the ECU doing the controlling that leads to the side effects." But several
of you went further and noticed that, maybe, this picture was too limited.

Too limited? For what? To whom? Certainly not to the individual ECU doing
the controlling, and that's all that matters here.

Forgive me my background in control engineering, but what matters _there_ is
what is sometimes called "multi-sensor/multi-actuator integration".
One-sensor/one-actuator systems with just one goal (reference level) hardly
occur (even the simplest ones, on further consideration, frequently contain
extra controls, even if those are just limits to be obeyed) and they aren't
interesting anyway. An interesting control _system_ has more than one input
and/or output, and more than one goal. The problem is then how to design a
system in such a way that an "optimal compromise" is reached among the goals,
not all of which can often be realized at the same time under all
circumstances. Bill Powers proposes a hierarchy as the mechanism to realize
such a cooperation. What is more common in complex control systems is a
"society of mind"-type of organization, a number of parallel control loops
that do not only monitor their own process but are also, somehow (through
cross-connections), aware of what other control loops are doing. The point
is, that each unit control system adjusts its own goal depending upon the
behavior of other unit control systems. A hierarchical organization does
exactly the same, upon closer consideration. Therefore, as a control
engineer, it strikes me as funny to say that all that matters to an ECU is
what it itself does.

You might of course maintain the opposite position: since a multi-input multi-
output system is organized in such a way that the goal of each individual
control unit takes into account the behavior of other units, it needs only
pursue its own goal. Fine. What I wanted to make explicit is the linkages that
exist between ECUs.

Each ECU (ECUa, ECUb, . . ., ECUn) controls its own perceptual signal (p)
keeping it at its own p* and in the process creating side effects irrelevant
to itself. If all of the ECUs are at the same level in a hierarchy, or if
each is an independent autonomous controller, then what is a side effect for
ECUa might well affect ECUb, by disturbing an environmental variable that
affects ECUb's controlled perception. If that happens, the additional
disturbance combines with all other disturbances on the variable
"controlled" by ECUb, which simply goes on controlling by eliminating the
effect of the net disturbance to its p. Nothing new there.

What is a "side effect" for ECUa might well affect ECUb. Sure. But what I
wanted to emphasize is that ECUb's subsequent actions might in turn affect
ECUa again. That is, the result of ECUa's actions loops back to itself. It is
in this sense that I reject the "irrelevance" of the side effect.

An exercise that is often fruitful in control engineering is to consider what
an "infinitely intelligent" controller might achieve. That is: accept the
sensors and actuators exactly as they are, and consider which internal orga-
nization or "algorithm" would provide optimal performance. This exercise
demands you to drop preconceptions about the controller's internal organi-
zation, and often reduces to establishing what the system _can_ know about its
world given the perceptions that it can acquire through its sensors and the
"experiments" that it can perform upon its world through its actions. This
smacks of optimal control theory and also information theory, of course. The
point is, that an "infinitely intelligent" controller would frequently be able
to discover that other controllers exist in its environment, and hence, in its
own control actions, would be able to take account of the regularities in the
control of those "others". A more practical point is that frequently the
"intelligence" of the controller can be quite limited (because of its limited
number of sensors and actuators), yet result in (almost) the same optimal
behavior.

PCT has a different focus; it presumes a certain internal organization of the
system. This presumption may be so limiting, that the above described type of
"intelligence" becomes impossible. Maybe not. Maybe the strictly hierarchical
organization does not impose limits. But that remains to be shown.

Whether at one level, or in a hierarchy, it is conceivable for side effects
that are irrelevant to perceptual control by any of the ECUs to have
deleterious effects on the ECUs themselves, but that still leaves the side
effects irrelevant to the ECU's control of p relative to p*.

Wouldn't that be a "dumb" controller? Bill Powers' example was that when,
at a football (was it football?) game, you stand up to see better and hit
your head against an overhead steel girder, the blow is an "insignificant"
side effect of the wish to see the game better. Not so in my opinion: the
resulting concussion may result in your being taken to the hospital, so
that you cannot see the remainder of the game at all. The single-mindedness
of the ECU has an effect not only on other systems, but also on itself.

(Bill Powers (941004.0700 MDT))

To show the truth of an arbitrary proposition is always possible...

Only if the proposition is based on, and can be derived from, the set of
axioms that you accept (re Gödel). There are, regrettably, infinitely many
true propositions whose truth cannot be proved, however many axioms you
introduce. Axiom-based "truth" is something like a spider web that tries to
cover a plane. The web's dimensionality just doesn't suffice for a covering.
Holes always remain, however fine the web.

... but seldom profitable (except in the sense of conning the suckers).

It is often profitable, I find, to explicitly draw attention to the things
that you accept as axioms. Frequently it leads to _thinking_, which is either
a consideration of which exactly are the private axioms that you base your
whole existence on (delving deep; analysis), or a consideration of what they
imply (building high; construction of new theorems based on those axioms).
Both are extremely important, I think. Suckers probably do neither.

Please let's not play the game of who is right. Rightness exists only relative
to a certain set of axioms. If yours and mine are different, we'll come to
different conclusions. Let's just try to understand each other, just as
Euclidean and non-Euclidean geometers can admire each other's constructions.
Let's not play tricks, nor accuse each other of doing so. Let's just go back
to basics and recognize that one person, just maybe, has one or two axioms
different from the other person.

This argument depends, in part, on sneaking in the premise that all
effects of action are disturbances of controlled variables. This is
equivalent to saying

1. Behavior is the control of perceptions; therefore
2. All perceptions are controlled by behavior.
or:
1a. for all X, X controls some Y, therefore
2a. for all Y, Y is controlled by some X (non-sequitur).

No, that is not quite what I want to say. My 2a would be

2a'. for all Y (perceptions), Y can be used by some (controller) X

which does not follow from, but is independent of, 1a. This does not say that
those perceptions are controlled; it says that those perceptions are (or may
be) used by some (complex) input functions. It is this that makes them
relevant.

In the initial usage of this term, relevance was defined with reference
to the controlling system: its actions produce many effects, but only
some of those effects show up as changes in the controlled variable. The
remainder of the effects may or may not affect variables controlled by
other systems (inside the same organism or in other organisms), but only
the effects that tend to alter the perception of the system in question
are relevant to the operation of that system. As far as that system is
concerned, all effects of its actions that do not show up as changes in
its own controlled variable are irrelevant; it does not even know about
them: they are not represented inside that control system.

Self-relevance is the usage of the term that I think I stuck with. What I
wanted to focus attention on is that the results of one's actions may
_indirectly_, through the actions of others, arrive back at oneself. It would
then be folly not to consider the perceptions of those loops from self to self
in one's own behavior.

The context in which this subject arose was that of intentional versus
accidental effects of behavior. In conventional behaviorism, where
internal phenomena such as goals and intentions are ruled out by the
requirement for direct observability, there is no way to distinguish an
intentional effect from an accidental effect of the same motor action.

My point was to show that some control systems might be better than others,
and that the way to achieve this might be, for a control system, to keep in
mind that there are other control systems as well that operate in the same
environment.

Your argument depends not only on the false conclusion 2 or 2a above,
but on a gradual shift of the meaning of "relevant" during the
development. We progress from "relevant to the control system in
question" to "relevant to some control system" to "relevant in the sense
of having some objective effect on something else, whether this effect
is represented perceptually or not." Thus the term "irrelevant side
effects," which begins by meaning only side-effects unrelated to a
control system controlling a given variable, comes to mean "effects of
behaviors which have no effects," a null set.

I did not intend a shift of the meaning of "relevance". I always intended
the relevance of one control system's actions to the control of _that_
control system, even if the relevance is indirect and effected through
others. I gladly agree with your "null set", however.

Such a [heating] system can be set up so the two control systems are in
conflict.

And frequently is. It often does not matter. Moreover, some systems are
better designed than others. When it is cold outside and my heating system
starts to heat up in the morning, it sometimes cycles on and off repeatedly
because of the limit imposed on the temperature of the circulating water.
That does not cause overt harm, just some noise from the small explosions
that occur when the gas that is allowed to flow ignites again. The noise
could have been prevented by a better design, but it doesn't greatly
bother me.
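
Roughly, in a toy simulation (every temperature, rate, and limit below is
an invented number, not a measurement of my actual heating system):

room, water, burner, ignitions = 15.0, 30.0, False, 0
for _ in range(600):
    want_heat = room < 20.0              # room thermostat, fixed setpoint
    if water >= 80.0:                    # water-temperature limit cuts the gas
        burner = False
    elif want_heat and water <= 70.0:    # re-ignite below the limit band
        if not burner:
            ignitions += 1               # each re-light: one small "explosion"
        burner = True
    water += (4.0 if burner else 0.0) - 0.05 * (water - room)
    room += 0.002 * (water - room) - 0.001 * room   # slow losses; 0 C outside
print(ignitions, "ignitions; room ended at", round(room, 1), "C")

The thermostat and the water-temperature limit are each satisfied in turn;
the repeated cycling is the compromise between the two goals.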

Under PCT we do not consider "the goals" of a single entity, the
organism. Only individual and independent control systems have goals.
The organism is a collection of such individual and independent control
systems.

I have trouble with the word "independent". In all (artificial) control
systems that I know of, each individual control system (if they can be
discriminated; that is difficult in many multi-input/multi-output control
systems, which are often designed as "vector" controllers, i.e. all "units"
are at the same hierarchical level but have cross-connections to others) takes
account of the fact that there are others. I might agree if you mean that one
system's goal is already also defined in terms of the behavior of other
controllers. But that would stretch the meaning of the word "independent", I
think.

The hierarchical form of the model says that if several goals
form an organized unit, they do so only because there is a higher-level
system perceiving all the related variables, organizing them into a
higher-level perception, and controlling the resulting perception with
respect to a single goal for that system only, by means of adjusting the
lower goals.

You say that a higher-level system adjusts the lower-level goals. Now assume,
just for a minute, that there is no higher-level system and that the highest-
level reference level is fixed. Are the lower-level reference levels now fixed
as well? If not, might we not say that it is the _perceptions_ that adjust the
lower goals?
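
To make my question concrete, here is a toy two-level loop (the gains and
the environment model are invented): the top reference is fixed, yet the
lower reference keeps being adjusted, because it is computed from the
higher-level perception.

import math

r_top = 5.0      # fixed highest-level reference
r_low = 0.0      # lower reference = the higher system's output
out_low = 0.0
for t in range(400):
    d = 2.0 * math.sin(t / 5.0)             # environmental disturbance
    p_low = out_low + d                     # lower-level perception
    p_top = 0.5 * p_low                     # higher perception built on lower
    r_low += 5.0 * (r_top - p_top) * 0.01   # higher loop adjusts the lower goal
    out_low += 20.0 * (r_low - p_low) * 0.01
    if t % 100 == 0:
        print(t, round(r_low, 2), round(p_top, 2))

Even with the top reference frozen, the lower reference moves; whether we
say that the higher system adjusts it or that the perceptions do seems to
be mostly a matter of wording.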

In the heating system, the more important goal, wired in by the
system's designer, is survival of the system.

This I doubt. Where will we find the reference signal that specifies
"survival?"

Let's not quibble about words. You know what I mean: the system is designed
in such a way that single-minded pursuit of one goal cannot destroy the
whole system.

Yes, such high-priority goals [in an organism] may "have to do with survival"
(in the eye of the external observer), but none of them says "survive." The
goal of maintaining the temperature of blood going to the brain at 37 C "has
to do with" survival, but this goal specifies temperature, not survival.

Allow me my own speculation about the highest reference level. In my
opinion it is, as evolution theory says, the transmission of genes to the
next generation. Survival is a large part of that; we have to grow old
enough to be able to have children, and older still to protect them enough
to have a good chance in life. Transmission of genes is not all, of course;
bacteria do it just as well. _The way in which_, our human way, is just as
important. But that is where the riddles start to crop up.

Also, don't forget that no organism achieves the goal of survival. We
all die.

Difficult to forget. See above.

. . . If you had stated your initial thesis with
this definition of "relevant" made clear from the start, you would have
received no objections -- nobody has ever said that irrelevant side
effects do not affect anything else, any other physical variable or
other control system.

Have I made myself more clear now?

Triggered by my note on Popper:

Rather than speaking of any rigid principle by which we accept and
reject theories, I think we should think in terms of always striving to
lower our thresholds of discomfort as far as possible, so we consider
progressively fewer failures of prediction as being acceptable...

As I write this, I'm coming to appreciate more what others have said
(including some of the same things I'm working out here).

I confess that my compulsive soul is offended by my own thoughts here.

There is much (Popper-like) wisdom in your small "essay" on science. I agree
that our endeavors in science are a paradox: we strive for perfection, yet
realize that we will never get there. The journey, not the destination.

By the way, how could I translate the previous sentence into PCT terminology?

Greetings

Hans

PS: Please take my own, infrequent, small "essays" (due to a lack of time,
I cannot offer more) in the Popperian sense. The basis of PCT is, in my
opinion, profoundly true. I am just trying to come to grips with it and
explore its consequences in order to enlighten myself. And to discover, in
the process, how difficult it is to make myself clear.