[From Bill Powers (941004.0700 MDT)]
Hans Blom (941003) --
Your tasks in this assignment were twofold:
1. to show that the term "irrelevant side effects of control" is,
within the theory of B:CP, self-contradictory.
2. to give your personal interpretation of what the term "irrelevant
side effects" could possibly mean in the above sentence.
To show the truth of an arbitrary proposition is always possible, but
seldom profitable (except in the sense of conning the suckers). The way
this is done is to adjust definitions or other premises until perfectly
good logic will lead to the conclusion you want. People will often
accept innocent-sounding premises if they are presented with enough
confidence and forcefulness, or are mentioned so briefly in passing that
their importance is not appreciated. Because premises require some
logical process to produce a conclusion from them, they seem distant
from the conclusions. This apparent distance can be increased if you
focus on the logic, making sure each step is clear and demonstrating
that it is free of mistakes. If the logic is lengthy enough, you can
even introduce more premises as you go, steering the process toward the
desired conclusion. By the time you get to the conclusion, the person
you are trying to convince has forgotten about the arbitrariness of the
premises and can see only that you have made no logical errors.
Magicians employ this method of misdirection, making sure you follow
every step of their manipulations long after the rabbit has actually
been removed from the hat.
You begin with
The hints were: a. irrelevant side effects can be perceived (from the
above sentence); b. irrelevant side effects are not controlled (by
definition); c. B:CP (axiom).
The trick is this:
1. Irrelevant side effects of control can be perceived (because if
anyone knows about them, someone is perceiving them).
2. All perceived effects of the actions involved in control are
disturbances of controlled variables and hence are relevant to the
systems controlling those variables.
3. (Long chain of arguments)
4. Therefore, all effects of the action involved in a given control
process are disturbances of a variable controlled by some other control
system and hence are relevant to that control system. Q. E. D., the term
"irrelevant side effects of control" is self-contradictory.
This argument depends, in part, on sneaking in the premise that all
effects of action are disturbances of controlled variables. This is
equivalent to saying
1. Behavior is the control of perceptions; therefore
2. All perceptions are controlled by behavior.
1a. for all X, X controls some Y; therefore
2a. for all Y, Y is controlled by some X (a non sequitur).
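The slip from 1a to 2a can be exhibited with a tiny finite countermodel. The systems, variables, and "controls" relation below are invented purely for illustration:

```python
# A small finite countermodel showing the quantifier slip.  The sets
# and the "controls" relation are hypothetical, chosen only to make
# the non sequitur visible.

systems = {"A", "B"}                  # the X's: control systems
variables = {"p", "q", "r"}           # the Y's: perceivable variables
controls = {("A", "p"), ("B", "q")}   # which system controls which variable

# 1a: for all X, X controls some Y -- true in this model.
premise = all(any((x, y) in controls for y in variables) for x in systems)

# 2a: for all Y, Y is controlled by some X -- false here: "r" is a
# side effect that no system controls.
conclusion = all(any((x, y) in controls for x in systems) for y in variables)

print(premise, conclusion)  # True False: 1a does not entail 2a
```

Every system controls something, yet a variable remains that nothing controls -- which is exactly what an irrelevant side effect is.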
In the initial usage of this term, relevance was defined with reference
to the controlling system: its actions produce many effects, but only
some of those effects show up as changes in the controlled variable. The
remainder of the effects may or may not affect variables controlled by
other systems (inside the same organism or in other organisms), but only
the effects that tend to alter the perception of the system in question
are relevant to the operation of that system. As far as that system is
concerned, all effects of its actions that do not show up as changes in
its own controlled variable are irrelevant; it does not even know about
them: they are not represented inside that control system.
The context in which this subject arose was that of intentional versus
accidental effects of behavior. In conventional behaviorism, where
internal phenomena such as goals and intentions are ruled out by the
requirement for direct observability, there is no way to distinguish an
intentional effect from an accidental effect of the same motor action.
Rick Marken's mind-reading experiment, and my three-cursors experiment
in Demo 1, show that even the computer can make this distinction
correctly on the basis of applying PCT.
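A minimal sketch of how a program can make that distinction, in the spirit of the Test for the Controlled Variable: apply an independent disturbance to each of three cursors and see whose variance stays small. The proportional controller, gain, and noise levels here are my own illustrative assumptions, not the code of the actual demonstrations:

```python
import random

# Three cursors, each pushed by its own independent random disturbance.
# A simulated "subject" acts on cursor 1 only, opposing its disturbance.
# The computer identifies the controlled cursor as the one whose
# variance stays small even though it is disturbed like the others.

random.seed(1)
GAIN, DT, STEPS = 5.0, 0.1, 2000
REFERENCE = 0.0
positions = [0.0, 0.0, 0.0]
history = [[], [], []]

for _ in range(STEPS):
    # Independent disturbance on every cursor:
    for i in range(3):
        positions[i] += random.gauss(0, 1) * DT
    # The subject controls cursor 1 (assumed proportional control):
    action = GAIN * (REFERENCE - positions[1])
    positions[1] += action * DT
    for i in range(3):
        history[i].append(positions[i])

# Uncontrolled cursors random-walk away; the controlled one hovers
# near its reference despite the identical disturbance statistics.
variances = [sum(p * p for p in h) / len(h) for h in history]
controlled = min(range(3), key=lambda i: variances[i])
print(controlled)  # index of the low-variance, i.e. controlled, cursor
```

The intentional effect (cursor 1 near its reference) is picked out by its stability against disturbance, not by looking at the motor action itself.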
Your argument depends not only on the false conclusion 2 or 2a above,
but on a gradual shift of the meaning of "relevant" during the
development. We progress from "relevant to the control system in
question" to "relevant to some control system" to "relevant in the sense
of having some objective effect on something else, whether this effect
is represented perceptually or not." Thus the term "irrelevant side
effects," which begins by meaning only side-effects unrelated to a
control system controlling a given variable, comes to mean "effects of
behaviors which have no effects," a null set.
I am happy that all of you took this "inside" perspective: "irrelevant
to the ECU doing the controlling that leads to the side effects." But
several of you went further and noticed that, maybe, this picture was
too limited: that even something as simple as the heating system has
more goals.
"The heating system" is not "a" control system; it is composed of
independent control systems that control different aspects of the same
situation. The basic thermostat (the kind found in most homes) controls
sensed air temperature. Another control system, in your hypothetical
example, controls (one-way) for circulating water temperature to be at
or below an upper limit.
Such a system can be set up so that the two control systems are in conflict.
The water-temperature-limiting system's output would affect the same
furnace that the air-temperature-control system's output affects. The
two control systems would produce side-effects that strongly disturb
each other's controlled variables. But the system can be set up
hierarchically so there is no conflict; the water temperature control
system's output can add to the effect of the mechanical air-temperature
set point to reduce the net set point for air temperature. Now both
control systems can continue to keep their controlled variables in the
specified reference condition, and there is no conflict.
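The hierarchical arrangement just described can be sketched in a toy simulation. All plant constants, gains, and temperatures below are invented for illustration; the point is the structure, not the numbers:

```python
# Hierarchical, non-conflicting arrangement: the one-way water-
# temperature limiter does not fight the furnace directly; it LOWERS
# the net air-temperature set point.  Plant dynamics and gains are
# invented for this sketch.

AIR_SET = 20.0     # mechanical air-temperature set point (deg C)
WATER_MAX = 90.0   # upper limit for circulating-water temperature
OUTSIDE = -10.0    # a cold day, so the furnace must run hard
DT, STEPS = 0.005, 20000

air, water = 15.0, 40.0
for _ in range(STEPS):
    # One-way limiter: nonzero output only when the limit is exceeded.
    limiter_out = 20.0 * max(0.0, water - WATER_MAX)
    net_set = AIR_SET - limiter_out
    # Air-temperature control system drives the furnace:
    furnace = max(0.0, 2.0 * (net_set - air))
    # Toy plant: furnace heats water, water heats air, air leaks heat.
    water += DT * (furnace - 0.05 * (water - air))
    air += DT * (0.02 * (water - air) - 0.1 * (air - OUTSIDE))

# Water is held at its limit; air follows the lowered net set point.
print(round(water, 1), round(air, 1))
```

Because the limiter enters upstream of the air-temperature reference instead of acting on the furnace output, each system keeps its own controlled variable where its (net) reference specifies, and neither disturbs the other.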
Under PCT we do not consider "the goals" of a single entity, the
organism. Only individual and independent control systems have goals.
The organism is a collection of such individual and independent control
systems. The hierarchical form of the model says that if several goals
form an organized unit, they do so only because there is a higher-level
system perceiving all the related variables, organizing them into a
higher-level perception, and controlling the resulting perception with
respect to a single goal for that system only, by means of adjusting the
lower goals. We can refer to such higher and lower goals as goals of
"the organism" only in the sense that they are contained inside the same
organism, and on occasion we may be conscious of their existence.
In the heating system, the more important goal, wired in by the
system's designer, is survival of the system.
This I doubt. Where will we find the reference signal that specifies
"survival?" It can be said that the designer intended that this control
system should survive, but that is a goal inside the designer, not
inside the control system. In PCT, "goal" has just one simple and
uniform meaning: reference signal. A reference signal must exist
physically inside some physical system before a goal can be said to
exist. We do not find any reference signal inside the control system, or
collection of systems, to which a perception of survival is being
matched by means of variations in the action of the system. Therefore
that system does not have any goal of surviving. As you defined it, it
has only two goals: a particular air temperature, and a circulating
water temperature less than 90 C. If there is a goal of survival, that
goal will be a physical reference signal or set of signals in the
designer's brain, meaning "this control system survives for x years."
Given long enough observation of this control system and others designed
in the identical way, the designer can perceive the actual degree of
survival, compare it with the desired degree, and alter the design of
subsequent copies of the control system to achieve the designer's goal.
As we all know, the desired number of years of survival is not infinite;
it is the smallest number consistent with selling as many of these
systems as possible.
it is a solid guess that the goals that have to do with survival of the
intact organism have a pretty high priority.
Yes, such high-priority goals may "have to do with survival" (in the eye
of the external observer), but none of them says "survive." The goal of
maintaining the temperature of blood going to the brain at 37 C "has to
do with" survival, but this goal specifies temperature, not survival.
The same holds for all goals that "have to do with" survival: not one of
them specifies that survival should take place. Don't confuse the effect
of having a goal with the goal itself.
Also, don't forget that no organism achieves the goal of survival. We
all die in the end.
From there, most of you could formulate an opinion much like: "variable
X may be called a side effect if a one-dimensional control system
controls variable Y, yet influences variable X; but that variable will
very likely _not_ be irrelevant to a complex organism as a whole".
Very good: we have reached the conclusion you had in mind. But the
rabbit had been removed from the hat long before you revealed this
conclusion to show that the rabbit is gone. We are now talking about
"relevant in some objective way to the organism as a whole," whereas we
began talking about "relevant to effects on the controlled variable of
the control system in question," which is not the whole organism but
only one subsystem in it. If you had stated your initial thesis with
this definition of "relevant" made clear from the start, you would have
received no objections -- nobody has ever said that irrelevant side
effects do not affect anything else, any other physical variable or
other control system. You are reasoning the stuffings out of a straw
man, or do I mean trying to beat action into a dead horse?
Can we agree on a conclusion? The term "irrelevant side effects of
control" is a good one when used for simple systems that do not have the
sensors to perceive the (indirect) consequences of their actions. It is
in all probability, I think, not a good term when the sensors are there
and the organisms are as complex as humans.
Yes, we can agree on that conclusion, and could have agreed from the
beginning, had a specific meaning for "relevant" been specified.
"Relevant" means "related to", and to specify a relationship you need at
least two entities. If we consider the organism as a whole, clearly no
action one of its control systems can perform affects or disturbs just
one other of its component control systems. If we consider one control
system at a time, as was the original intent of remarks about irrelevant
side-effects, then we need consider only the effects of the control
system on its own controlled variable.
One person mentioned that when you walk, you generate turbulences in
the air far in excess of what a butterfly can, and hence your walking
might cause a hurricane in Miami, killing hundreds of people, among
them one of your children. That goes too far. Of course you set
actions into the world that have unpredictable and possibly disastrous
effects. But you, a mere human, cannot predict everything.
The "butterfly effect" is a questionable notion, because it ignores all
the other butterflies. When there are millions of variables all
contributing equally to a particular outcome, how can we speak of any
one of them "causing" the outcome? At the time one butterfly's wings
begin their downstroke, others are at different points within the wing-
flap cycle, each contributing in a slightly different way to the local
conditions and to their subsequent effects. Which one of them caused the
hurricane? We might as well speak of the "molecule effect," because the
outcome will be affected by the direction and speed of each air molecule
as the butterfly's wings encounter it, a factor over which the butterfly
has no control. There is no end to this but to say that the state of the
universe at any given instant is critically dependent on the state of
the universe at every instant in the past within the appropriate event
horizon. We may take that as a definition of determinism, but it is a
trivial definition. What actually matters the most are the largest and
nearest effects, not the smallest and most remote. This, to me, is the
real message of chaos theory.
The point of the butterfly effect is not that small events determine
later large events, but that there are genuine bifurcations of
causation, so that even if all butterflies were to flap their wings in
exactly the same way a second time (as nearly as could be measured), the
outcome would almost certainly not repeat. In other words, the butterfly
cannot decide whether it will cause a hurricane in the antipodes. The
bifurcations of causation are so finely balanced and so numerous that
the butterfly simply can't use its wings to create a preselected effect
on the weather halfway around the world, even if it can perceive that
weather. Nor can it refrain from having any particular remote effect.
We don't have to extend the horizon of effects of behavior to the
antipodes to make the point that is really at issue here. A sphere six
feet in diameter will do just as well. Hypersensitivity to initial
conditions starts with the muscle contractions, and even before, and multiplies
rapidly enough to make even the ensuing limb positions difficult to
predict for more than one second or so through ordinary causal
calculations. And if we then include interactions with physical objects
in the near vicinity, the hypersensitivity grows by leaps and bounds, to
the point where it would be impossible to predict the final position of
a half-empty cup, given only a record of the muscle contractions
involved in reaching out and picking it up.
This is the basic reason why the open-loop, computed-output models can't
possibly work. They all assume that output effects follow with the
precision of symbolic mathematics from the muscle contractions. This is
not true even if we consider only effects within arm's reach. Only a
control system, which monitors outcomes and varies outputs as required
to make the outcomes come to a preselected state, can work in the real
world, bypassing chaos.
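The contrast can be sketched numerically. The plant, gain, and disturbances below are invented for illustration; the point is only the structural difference between replaying a computed output and monitoring the outcome:

```python
import random

# Computed-output (open-loop) behavior vs. closed-loop control.
# All numbers here are illustrative assumptions.

random.seed(2)
GAIN, DT, STEPS = 10.0, 0.05, 1000
TARGET = 1.0

def simulate(disturbance, outputs=None):
    """If outputs is None, run closed-loop (output computed from the
    current error); otherwise replay the given outputs open-loop."""
    pos, recorded = 0.0, []
    for t in range(STEPS):
        out = GAIN * (TARGET - pos) if outputs is None else outputs[t]
        recorded.append(out)
        pos += DT * (out + disturbance[t])  # plant integrates output + push
    return pos, recorded

dist_a = [random.gauss(0, 2) for _ in range(STEPS)]
dist_b = [d + 1.0 for d in dist_a]   # the same world, slightly drifted

final_a, recorded = simulate(dist_a)         # closed-loop run; record outputs
final_same, _ = simulate(dist_a, recorded)   # replay under the SAME disturbance
final_open, _ = simulate(dist_b, recorded)   # replay when conditions drift
final_closed, _ = simulate(dist_b)           # closed loop under the drift

print(round(abs(final_open - TARGET), 1), round(abs(final_closed - TARGET), 2))
```

Replaying the recorded outputs reproduces the result only if the world repeats itself exactly; the moment conditions drift even slightly, the replayed outputs miss badly, while the system that monitors its outcome still reaches the target.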
RE: Popper, and some irrelevant considerations
Popper's principle of falsification means to try to reject hypotheses
using a critical and rationalistic attitude.
I think it means more than that: it means casting hypotheses in such a
way that it would be possible for a test to come out negative. If there
is no way for a result contrary to hypothesis to occur, and for the
experimenter to know that it has occurred, then no test has occurred.
All the experiments in Demo 1, as I mentioned recently, subject PCT to a
test within my understanding of the Popperian meaning. The subjects are
not constrained to move the control handle in any particular way, so
every run of every demonstration leaves the possibility open that the
actions will NOT oppose the disturbance. Occasionally, they don't, and
in such cases the only proper conclusion, according to Popper and, on
occasion, me, is that the subject was not behaving as the model
predicts. You can think up all sorts of reasons for the failure -- there
was a lapse of attention, the subject was experimenting to see what
would happen, the subject wasn't feeling cooperative, the subject hadn't
practiced enough -- but the fact remains that the test was failed and
the theory did not apply correctly.
This vitiates the principle that a single counterexample is enough to
destroy a theory. In fact, as Martin Taylor has pointed out, we don't
abandon PCT because of such failures, whether they be single instances
or repeated. Instead, we say that there are other factors involved not
taken into account in a simple model of a specific kind of behavior.
If these failures were frequent, of course, we would begin to get very
uncomfortable with accepting the model. The critical question is where
we set our thresholds of discomfort. If we set them at zero, so any
failure persuades us to abandon the model, then we will have no model of
anything. If we accept 50% failures, on the other hand, our ability to
make predictions will be crippled. At this point, matters of practical
and personal choice come into play. Am I willing to accept 50% wrong
predictions just to be able to say I have _some_ ability to predict
behavior, or should I lower that number to 40, 20, 10, or 5 percent so
that while finding an acceptable model becomes more difficult, my
predictions become more reliable?
I think we set our thresholds of discomfort to the lowest level we can,
given the contrary demand that we be able to make at least some
predictions. If the only tools we have available give us 50% errors, we
may set our threshold of discomfort to 49% errors, hoping for some small
improvement but not giving up if we don't achieve it. When better tools
become available, so that we find we can predict better than 50% of the
time, we may lower the threshold of discomfort further, perhaps
to 40%. The overriding consideration is always to be able to make
predictions as accurately as we can; only the most rigidly principled
scientist would make literal rejection of all theories with a single
counterexample his rule, and he would have to stop being a scientist in
that case because all theories would fail that test.
Rather than speaking of any rigid principle by which we accept and
reject theories, I think we should think in terms of always striving to
lower our thresholds of discomfort as far as possible, so we consider
progressively fewer failures of prediction as being acceptable. Perhaps
one of our difficulties with conventional scientists is that we know
that control theory often allows us to set the failure threshold at
something like 3 or 5 percent, while those to whom we talk have not
found it possible to accept their own theories without accepting a much
higher failure rate -- 50% and even higher. So they tend to look on our
claims as being completely unrealistic, and our demands for precision
unreasonably discouraging and even fantastical.
What we have to get across is not the most stringent requirements on
failure rate, but the process of continually lowering the failure rate
by iteratively testing and improving models. We have to communicate the
goal of reaching the lowest possible failure rate, while still accepting
that scientists will try to work with whatever failure rate is
attainable with the tools at hand.
As I write this, I'm coming to appreciate more what others have said
(including some of the same things I'm working out here). Those who use
statistics in their work have been criticized for accepting very high
failure rates (in comparison to those we accept in tracking
experiments). But in fact they're accepting the lowest failure rates
that are practical to accept, given the aims of their work and the tools
available. We -- I -- should not focus on the failure rate, but on
encouraging a process of progressively lowering it, improving the tools
so that those who use them can afford to reduce their own thresholds of
discomfort. We have our own problems in this regard, because so far we
have come up with very few applications of PCT to higher levels of
control, and by holding ourselves to the requirements that we can meet
in tracking experiments, we simply rule out any attempt to make
predictions at higher levels. We ourselves should learn how to do PCT
experiments with complex behaviors even if our predictions fail 50% of
the time, and accept that failure rate as the best we can do for the
time being. If we don't do the experiments, and respect them as genuine
tries at understanding, how can we ever start the process of iterative
improvement? And if we don't do that, how can we criticize others for
not doing it?
I confess that my compulsive soul is offended by my own thoughts here. I
want all experiments to be done RIGHT, and all models to predict
PERFECTLY, EVERY TIME. The idea of willingly accepting results that are
discernibly far from perfect sets my teeth on edge. I see the
alternative as license to be sloppy, to propose any wild idea for no
reason at all, to claim knowledge when all we have is little better than
a guess. But somewhere in me there is a little wrinkled kernel of wisdom
that is telling me that my compulsiveness is as foolish as my imaginings
about what will happen if we loosen the reins. It is telling me that
science is a process of improving our understanding, not of leaping to
perfection in a single bound. It is saying, "Put your own house in order
before you cast aspersions on others." It is a very annoying little
kernel of wisdom; no wonder it looks neglected.
Martin Taylor (941003.1745) -- (loose end)
Basically, I think "triangulation" is worth a lot more than
"replication." If a perception can be controlled with input from
either of two quite different sources, it is more likely to represent
some consistency of the world than if it continues to be controllable
with input from always the same single source.
You're veering off onto a different subject: discovering the meaning of
a perception, rather than verifying the ability to create a specific
perception. I don't disagree with you, but my discussion of replication
had a different aim.
Before you can triangulate meaningfully from two perceptions, you have
to be sure that each perception is not just a fluke. In surveying, you
take at least three readings on each target, and often many more.
When the measurement involved is itself very noisy, and is obtained in a
setting full of unknown variables, it is essential to have someone else
independently conduct the same experiment. This becomes even more
imperative when a result with p < 0.05 is considered acceptable, and one
with p < 0.06 is not. The conclusions drawn from experiments that yield
large amounts of noise may depend critically on small variations, and
may depend even more critically on conditions surrounding the
experiments that were not appreciated or even noticed.
The time when independent replication is most needed is when the
experiment gives you a result that logic and reason tell you is the one
that HAD to occur, so that if anything else had happened you would be
shocked. It is in precisely this situation that we are most likely to
deceive ourselves. And it is precisely by challenging what we believe
MUST happen that we create the potential for the most important
discoveries. It makes complete sense that the apparent velocity of light
should depend on the velocity of the observer, and indeed observation
upheld this logical conclusion. But those observations were careless or
insufficiently sensitive, as we now know. What HAD to be, was not. And
discovering what was not proved to be immensely more important than
simply verifying what reason proposed and science had, until then,
accepted.