"irrelevant side effects"; Karl Popper

[Hans Blom, 941003]

SUBJECTS: "IRRELEVANT SIDE EFFECTS"; KARL POPPER

Class, now that you have all handed in your homework, let us see what your
thoughts were. As you know from previous assignments, the object is not to
show that you are right and someone else is wrong, but to carefully formu-
late your own position on the subject. Compare your assignments with an
exercise in Euclidean geometry: discover the implications of a few axioms.
Do not strive for exhaustiveness -- or exhaustion -- because the implica-
tions are essentially infinite; you could expand forever. But do try to
discover the most important ramifications that directly circle the givens.

Your tasks in this assignment were twofold:

1. to show that the term "irrelevant side effects of control" is, within
   the theory of B:CP, self-contradictory.

The hints were: a. irrelevant side effects can be perceived (from the
above sentence); b. irrelevant side effects are not controlled (by
definition); c. B:CP (axiom).

2. to give your personal interpretation of what the term "irrelevant side
   effects" could possibly mean in the above sentence.

Except for the few of you who reacted immediately -- a reflex? -- most of
you took sufficient time to think, taking my implicit suggestion that
there might be more to this task than meets the eye (figuratively speak-
ing, of course, in this context).

Most of you started with the second subtask and explained what is usually
meant by "irrelevant side effects". Several persons took the home heating
system of old as an example; it heats the room up to the setpoint tempera-
ture and does not care about -- nor does it perceive -- other effects of
its action, such as the room's humidity. All of you took "irrelevant" to
mean "irrelevant to the control system itself": all it cares about is
controlling the temperature at its sensor at the reference level. That is
correct. The system does not care that it burns natural gas (most of you
seem to have that kind of heating system), nor that it does so ineffi-
ciently, nor that an outside observer may think that the temperature else-
where in the room or in other rooms is too high or too low.

I am happy that all of you took this "inside" perspective: "irrelevant to
the ECU doing the controlling that leads to the side effects." But several
of you went further and noticed that, maybe, this picture was too limited:
even something as simple as the heating system has more goals than one.
When the reference level is moved up, say from 15 to 20 degrees
Centigrade, the sensor temperature does not immediately follow; trying to
force the sensor temperature to follow the reference level instantly would
cause the self-destruction of the system. One of the system's other goals
is to limit the temperature of the circulating water to below 90 degrees
Centigrade or so -- the system has a sensor to measure that temperature
and a magnet coil that switches off the gas flow to the burner when the
water gets too hot -- and obeying this goal is, not only at times but
always, even more important than obeying the other goal.
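
As an aside, here is a minimal sketch, in Python, of the two goals just
described. The constants and the crude room/boiler dynamics are invented
for illustration only; the point is simply that the survival goal (keep
the water below about 90 degrees) always overrides the comfort goal
(bring the room to its reference temperature).

  # Minimal sketch of the two-goal heating system described above.
  # All numbers and dynamics are invented for illustration.

  def simulate(room_ref=20.0, steps=300, dt=1.0):
      room, water, outside = 15.0, 40.0, 5.0
      for t in range(steps):
          want_heat = room < room_ref            # comfort goal: reach the setpoint
          water_safe = water < 90.0              # survival goal: protect the boiler
          burner_on = want_heat and water_safe   # the survival goal always wins

          # crude first-order dynamics, per time step
          water += dt * ((3.0 if burner_on else -1.0) - 0.01 * (water - room))
          room += dt * (0.004 * (water - room) - 0.004 * (room - outside))

          if t % 30 == 0:
              print(f"t={t:3d}  room={room:5.1f}  water={water:5.1f}  "
                    f"burner={'on' if burner_on else 'off'}")

  simulate()

Run it and you will see the burner switch off whenever the water
approaches 90 degrees, even while the room is still below its reference:
a wired-in version of the "patience" discussed a little further on.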

How do you interpret this? Some said that the system "temporarily lost
control" over its sensor temperature, due to a limitation of how it could
act. Others said that an "optimal compromise" was reached between the two
goals. Still others called exactly the same thing an "internal conflict"
between two goals. Let us not quibble about words. I think that we under-
stand and agree with each other, even though we may use different words to
describe the same thing. Finding the most appropriate word is not what
this assignment is about; it is about how to describe what happens so that
all can understand it. What happens is that the attempt to control one
variable may negatively influence or even be destructive to the control of
another variable. In the heating system, the more important goal, wired in
by the system's designer, is survival of the system. Although one goal
cannot be realized immediately, it will be realized a little bit later --
but only if _now_ the system controls for survival. We see exactly the
same thing in organisms, I think. The notion of "patience" may be linked
with this phenomenon.

Picture the whole control hierarchy. How many goals -- reference levels --
are there? B:CP enumerates the levels of the hierarchy, but is not expli-
cit about its width at the various levels. Are there hundreds, thousands,
millions of simultaneous goals? All those goals, most of them not avail-
able to consciousness, want to be realized, but it is a solid guess that
the goals that have to do with survival of the intact organism have a
pretty high priority.

From there, most of you could formulate an opinion much like: "variable X
may be called a side effect if a one-dimensional control system controls
variable Y, yet influences variable X; but that variable will very likely
_not_ be irrelevant to a complex organism as a whole". Very good. We would
not want to make the mistake of calling perceptions irrelevant when they
are important for some goal that consciousness cannot reach. At the least,
we have to consider that _maybe_ they are relevant for some as yet not
consciously discerned goal. As one person said: "Side effects are side
effects ONLY to
the controller with respect to which they are defined. Other ECUs may
well have perceptual functions that detect (some) of the side effects of
any one ECU's actions. It must be very rare that the actions consequent
on the output of an ECU affect ONLY those variables contributing to the
ECU's perceptual signal, and ONLY in the relationship represented in the
perceptual function of the ECU. Any other effect of those actions is an
"irrelevant side effect" to that ECU. Even though they might prove fatal
to the organism within which that ECU resides." Exactly. It is the per-
spective that you take. If you identify with one particular ECU -- if you
have one very important goal in mind -- you tend to forget all else. If
you identify with the multidimensional control system as a whole, what is
irrelevant for one ECU may, indeed, be fatal for another. We can call that
single-mindedness "the best possible control" for one ECU; we can also call
it "lethally bad control" for the organism as a whole.

There is another argument that some of you proposed, starting from the
hint that the side effect could be perceived. Why did evolution lead to
increasingly better perception in higher animals? In order to be better
able to control, of course. On this you were unanimous. Would organisms
then consider part of what they _can_ perceive as irrelevant? Maybe. I
have never seen or heard about even an ape enjoying a beautiful sunset.
Maybe they do, but probably not. But humans do. What is there to perceive
that humans can _not_ enjoy? Is there anything in creation that cannot be
contemplated out of curiosity, in enjoyment, or to further science? Is
there anything that cannot be used for some human goal? I think that I --
cautiously -- have to agree with this consideration: nothing that we can
perceive is a priori irrelevant. A sunset can be enjoyed; the stars at
night can be marveled at or studied; the moon can be made a goal to walk
on. Kind of poetic, but the more I think about it, the more irrefutable
it seems.
One person said: "B:CP means that all behavior controls perceptions. It
does not mean that behavior controls ALL perceptions." That is true. But
can we be certain that B:CP implies that there are "irrelevant", i.e.
useless, perceptions?

One person mentioned that when you walk, you generate turbulence in the
air far in excess of what a butterfly can, and hence your walking might
cause a hurricane in Miami, killing hundreds of people, among them one
of your children. That goes too far. Of course you set actions into the
world that have unpredictable and possibly disastrous effects. But you, a
mere human, cannot predict everything. You can only predict -- and control
-- as well as you can. That kind of humility is the basis of religion, and
of law: you are not responsible for disasters that you unwittingly caused
but could not possibly have prevented. However relevant your actions were,
to others and to you, that is not what this discussion is about.

Can we agree on a conclusion? The term "irrelevant side effects of con-
trol" is a good one when used for simple systems that do not have the
sensors to perceive the (indirect) consequences of their actions. It is in
all probability, I think, not a good term when the sensors are there and
the organisms are as complex as humans.

Greetings,

Hans

PS: How come sometimes I get compliments about how well I understand PCT
and sometimes complaints that I would flunk even kindergarten? Which is
it? Or do I understand some parts and not others?

PSPS: Two weeks ago, Karl Popper died. When I came across the first of his
works, I got the same gut feeling of "he's got it basically right" as when
I first read B:CP. Popper's "The Open Society and Its Enemies" [1945]
started what Popper was good at: refuting well-established opinions,
whether political, social or scientific. Popper was at his best when he
could demolish another philosopher's theses, and he did, continuously.
That, at the same time, was the core of what he proposed: one million
examples pro cannot prove a theory, whereas one con falsifies it. There-
fore, in his opinion, it is the greatest honor for a theory if people try
to prove that it is wrong. People have the natural inclination to seek
support for their theories, and of course they will find it. But never
will it be possible to prove a theory. The best support for a theory is
its survival, despite years or centuries of attack. Every scientific
theory is preliminary; certainty does not exist, although it is a goal to
strive for.

Popper has been incredibly influential in Western Europe, and his books
have greatly influenced me personally as well. He was celebrated by all
intellectuals who do not believe in grand theories or solutions. He was
anti-totalitarian and anti-dogmatic, for both scientific and moral
reasons, because everything should at all times remain subject to criti-
cism. He would not propose "final solutions"; he much preferred to mini-
mize misery, because that approach causes far fewer disasters. He dismissed
the scientific status of both psychoanalysis and Marxism, because both sought
only evidence that could support the theory and neglected all that could
contradict it. Progress in science is a trial-and-error affair, not the
result of utopian blueprints. Trying to create heaven on earth, he held,
always results in hell.

Popper was humble, scientifically and philosophically, but not as a
person: the words "I", "my" and "me" often occur in his writings. He had
opinions, which he could defend well; but he also liked others to have
different opinions, because only that would fire up his enthusiasm and
his creativity. He wanted to be right, of course -- a status that science
can never grant.

Popper was not always correct; he himself showed the dangers inherent in
"thought experiments" when he talked about subjects that he did not quite
grasp. I remember that, at least once, Einstein had to correct one of his
thought experiments on the subject of quantum mechanics, showing that
thinking itself is not infallible, even when one is very good at logic
and more familiar with the subject matter than most. But corrections
were welcome and described in detail in subsequent editions. Popper might
also change his opinion and be honest about it; he would then list all
arguments pro and con that he could think of, and reach a conclusion, well
defended, but always preliminary -- of course.

Popper's principle of falsification means trying to reject hypotheses
with a critical and rationalistic attitude. That implies daring to criti-
cally investigate your own conclusions and enjoying others' attacks on
them. Support of your theories may flatter your ego -- but does nothing
for the progress of science. Regrettably, as the Dutch philosopher-
logician-computer scientist Jaap van Heerden dryly understates, "the
demand for self-demolition is, from a psychological perspective, not an
academic reality".

"Popper", writes Dutch essayist Rudy Kousbroek, "may be difficult, he is
not impenetrable, as Wittgenstein is. He is more banal, but in the manner
that a scientific investigator is more banal than an oracle". Although I
do not quite agree that Wittgenstein is opaque, I do agree with the dis-
tinction: Popper's concepts can be explained in a few sentences.

If you cannot say something so simply and clearly that everyone can under-
stand it, you'd better remain silent, Popper would say; if you are not
understood, you are either not clear or not correct. And criticisms, what-
ever their motives -- neglect, frustration, obstruction -- had better be
taken seriously: it is not the motives but the solidity of the arguments
and propositions that matters.

What I like most about Popper is that he places his opponents' remarks in the
most favorable light. Even if their thoughts are badly expressed, he tries
to understand the meaning behind those words and to react accordingly.
This way, although he is ruthless towards arguments, he is lavishly gener-
ous towards the persons who utter them -- no ad hominems here.