[From Bill Powers (950906.0630 MDT)]
Martin Taylor (950905 19:00) --
In response to my two questions, I thank Bill and Rick for
responding, but I don't think that they really approached an answer.
Am I quite wrong in thinking that one can turn off the perceptual
input for a while in Rick's spreadsheet demonstration, and have it
lose track only slowly (as in a model-based controller)?
Gee, I don't know, Martin. I wonder if there's any way to find out what
would happen if you turned off the perceptual input in Rick's
demonstration. You know, I mean by actually running the demonstration
and actually turning the input sensitivities to zero and seeing what
would happen. Perhaps Rick would very kindly do this for you and report
on the result.
+ 2. Is it possible to show that there CAN BE any perceptual control
+ system whose behaviour cannot be reproduced in detail by some
+ model-based control system?
Yes. A perceptual control system that can counteract the effects of
arbitrary independent disturbances with an accuracy greater than the
theoretical accuracy with which the disturbance could be deduced and
extrapolated into the future by any means of prediction.
You are asserting here that the model must function without input.
I have read and re-read my paragraph just above and I can't see where I
made any such assertion.
You (and others) have claimed for a long time that arbitrary independent
disturbances are actually predictable, so it is possible to incorporate
models of them into a model-based control system and achieve control.
You asked if it is +possible+ to show a perceptual control system that
can't be imitated by a model-based system. My reply meant yes:
demonstrate a perceptual control system that works without prediction
yet controls better than a model-based control system using the best
possible method of predicting the future course of an arbitrary
disturbance.
I claim that the elementary model we use to predict tracking behavior
can control better than any control model that relies on a world-model.
The way to disprove this simple claim is to construct a program that can
control at least as well as the elementary model does, but using a
prediction of the disturbance and an internal world-model in the same
way that Hans Blom's model assumes can be done. Of course the conditions
have to be the same in both experiments: there must be no direct
indication of the magnitude or direction of the cause of the
disturbance; all information used to model the disturbance must be
derived from sensing the output of the plant.
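The claim can at least be illustrated, though not proved, by a minimal
simulation. The sketch below is my own toy version of the elementary
tracking arrangement (made-up gains, not Rick's spreadsheet or Hans's
program): an integrating output opposes a random-walk disturbance that
it never senses directly; only the controlled variable, output plus
disturbance, is sensed.

```python
import random

def simulate(steps=2000, gain=0.5, ref=10.0):
    """Integrating controller opposing an unpredictable disturbance.
    The system senses only cv = output + disturbance; the disturbance
    itself is never available to it, and is never predicted."""
    random.seed(1)                       # reproducible run
    o = d = cv = 0.0
    for _ in range(steps):
        d += random.uniform(-1.0, 1.0)   # arbitrary random-walk step
        cv = o + d                       # plant output plus disturbance
        o += gain * (ref - cv)           # act only on perceived error
    return cv

final_cv = simulate()   # stays within a unit or two of the reference
```

With these numbers the steady-state error is bounded near the size of
a single disturbance step, even though no prediction of any kind is
made.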
What I'm hoping for is some kind of a theorem, not verbal arguments
that can appear to make points when the hidden assumptions are not
examined. But verbal arguments are OK if they are explicit enough
to allow their truth to be judged.
Yes, I understood that you were hoping to find an answer without
actually putting any model to an experimental test. However, an
excellent way to test for hidden assumptions is to construct working
models and see what they actually do. When you commit your model to
hardware, your hidden assumptions have their effects whether you
anticipated them or not.
A model is supposed to duplicate the input-output relationships in the
real system, isn't it?
No. It is supposed to map them in some way. Just as your
Artificial Cerebellum does, in effect generating an inverse
function to the feedback function.
In that case my A.C. model is a failure, because all it does is generate
an output function that is sufficient to allow stable control. That
does not entail finding the exact inverse of the feedback function.
Moreover, the perceptual function of the control system can have
algebraic forms, such as the logarithm or square, which are not in the
feedback function; the controlled variable will then be the antilog or
the square root of the output of the feedback function.
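Here is a toy illustration of that point (an identity feedback
function and made-up gains, purely for the arithmetic): if the
perceptual function is a logarithm, the external variable is driven to
the antilog of the reference, not to the reference itself.

```python
import math

def control_log_perception(ref=2.0, gain=0.5, steps=2000):
    """Integrating controller whose perceptual function is log(cv).
    The perception log(cv) is brought to the reference, so the
    external variable settles at exp(ref), i.e. the antilog."""
    o = 1.0
    for _ in range(steps):
        cv = o                      # feedback function: identity
        p = math.log(cv)            # perceptual function: logarithm
        o += gain * (ref - p)       # act on perceived error
    return cv

cv = control_log_perception()       # converges near exp(2) ~ 7.389
```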
However, you can probably further modify your definition of "model" so
it fits this situation, too. When you have the final definition, it will
fit every possible situation. Then you will be able to conclude that
THEOREM: if any control system actually controls, it contains a model of
its environment.
Actually, I think you're making progress:
In further discussion, I suggested that one could regard the
structure of a control system to be a kind of distributed model2 of
the range of environments within which it could control. The two
kinds are quite distinct, I think.
So now we have a structure that is a KIND of DISTRIBUTED MODEL2 of the
RANGE of environments within which it COULD control. I'm sure that if
you keep stretching the original meaning of "model" very gently, one
step at a time, you will finally arrive at the theorem you're trying to
prove true before the meaning snaps. It's just a matter of patiently
manipulating the premises until you get to the desired answer.
One can't help wondering, though, why you desire to get that particular
answer.
My point was to question whether the distribution or explicit
representation of the "model" made an intrinsic difference, or
whether the two ways of constructing control systems could not IN
PRINCIPLE be told apart by observing their effects in the
You're assuming that there is a model which can be represented either in
a distributed way or as explicit stored information. That
characterization alone is absolutely loaded with theoretical
assumptions, whichever alternative you choose. But it seems that you
want to forget about proving those assumptions true, and go on to ask
whether it is possible IN PRINCIPLE to tell the two kinds of model
apart, as if they really existed.
Well, I'm boggling at the assumptions, since in the first place I don't
have such an elastic view of what constitutes a model, and in the second
place I don't believe that anyone has yet demonstrated that ANY kind of
world-model can produce control of the kind we see in our model of the
tracking experiment, where the future course of the disturbance is not
known in advance.
All this, I fear, goes all the way back to the "information in the
perceptual signal about the disturbance" argument, in which your
position has never budged by a single millimeter since that discussion
began. If the effects of a disturbance on the controlled variable are
accurately and systematically resisted, you claim, then (a) there must
be information about the cause of the disturbance SOMEHOW reaching the
control system, and (b) this information must SOMEHOW be utilized in
constructing the output that will oppose the disturbance. In some
metaphysical way, the very structure of the control system must SOMEHOW
constitute a model of the environment, including a model of all
disturbances that the system might encounter. Clearly if none of this
happened to be true, you would experience a very large error concerning
some picture of the world that you don't talk about directly. I
apologize for playing "levels" without invitation, but it seems to me
that the very essence of our disagreements stems from something that has
gone almost entirely unspoken.
Now back to question 1: Both Rick and Bill seem to assert that NO
perceptual control system can retain ANY control when the
perceptual input momentarily goes away, and that this distinguishes
perceptual control systems from model-based control systems, which
can. Is this a true statement of your intent, Bill and Rick?
When you put it that way, you're offering a challenge to the designer,
not asking a general question. Of course I could design a control system
without a world-model that could go on producing the same pattern of
outputs for a short time after loss of inputs; if the disturbance
remained the same as before, it would SEEM to go on controlling for some
time. It would not actually be controlling, as could be shown just by
changing the disturbance pattern. The system would go on producing
outputs as if the old pattern of disturbance were still present.
But the proper way to ask this question is to ask whether there is ANY
perceptual control system (that can control in the presence of arbitrary
independent disturbances) that loses control immediately when the input
function sensitivity is turned to zero. The answer to that question is
clearly yes. This is true for most normal perceptual control systems not
specifically designed to continue producing the same output in the
absence of inputs.
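The point can be shown with a toy computation (hypothetical code, not
any posted model): the same sort of integrating loop, except that at a
set step the perceptual input is cut and the output simply holds its
last value. If the disturbance stays the same, the system SEEMS to go
on controlling; if the disturbance changes while the system is blind,
the error goes completely unopposed.

```python
def run(change_disturbance, blind_at=200, steps=400, gain=0.5):
    """Integrating controller; at step blind_at the perceptual input
    is cut, so the output simply holds its last value."""
    o, d, ref = 0.0, 5.0, 10.0          # constant disturbance at first
    for t in range(steps):
        if change_disturbance and t >= blind_at:
            d = 30.0                    # new disturbance, unseen
        cv = o + d                      # controlled variable
        if t < blind_at:
            o += gain * (ref - cv)      # normal closed-loop action
        # else: no perceptual signal, output frozen
    return abs(ref - cv)                # final error

steady = run(change_disturbance=False)   # error stays near zero
changed = run(change_disturbance=True)   # error jumps to about 25
```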
Then the question is whether there is any world-model based control
system that, using only models of the feedback function and disturbance
derived by itself from its input information, can control as well as the
aforementioned (intact) perceptual control system. And I claim that the
answer is no, not even using the best known methods of deriving and
predicting the form of the disturbance. If my claim is correct, then I
believe your basic question would be answered.
In fact, the world-model type of system can continue to produce outputs
that make the real world match the condition set by an arbitrary
reference signal -- but only in the absence of arbitrary independent
disturbances. This tells us something about the conditions under which
such systems might be useful.
The ordinary perceptual control system can produce variable outputs that
make the real world match the condition set by an arbitrary reference
signal, AND can do so in the presence of arbitrary independent
disturbances -- but only while the feedback path is intact. Such systems
are useful mainly in situations where the feedback path is not likely to
be interrupted.
Marc Abrams (950509.2000) --
I see that you're apologizing to everyone but Oded.
Marc, you are being told by several people, including me, that your
method of defending your views is not acceptable to them. They are
trying to control your behavior. There are two ways to deal with this:
say to hell with them, or decide that it may be OK to adjust your
actions in that respect. Either way, it's up to you.
For myself, I find it very unpleasant to receive sarcasm, wild
accusations, and put-downs as an answer to a statement that I differ
from your views. If you can do that to somebody else, you can do it to
me. Your anger is your problem; I'm not going to make it mine, too.
Avery Andrews (950906) --
... people suffering from impatience and/or authoritarian
tendencies (e.g., me) would be too eager to `make the little
I would suggest avoiding that particular terminology at a Parent-
Teachers' Association meeting.
Hans Blom (950906) --
The word "conflict" has not been given a scientific content yet, as
far as I know.
We do have a PCT definition of conflict, which is essentially the same
as yours and I hope is scientific. However, we don't require that the
control systems in question be in different organisms. In the HPCT
model, many control systems run at the same time, each largely
independent of the others. As a result, one control system can require
an action that induces an error in a different control system in the
same organism. For example: trying to catch the keys someone has just tossed to
you while both your arms are full of groceries. You want to catch the
keys, but doing so involves not supporting the groceries. For a moment,
both control systems try to act, with the possible result that you miss
catching the keys AND drop one or both sacks of groceries.
In your approach, where there is only a single complex control system
that does everything, internal conflict isn't (as far as I can see)
possible. You may want to consider at least allowing your approach to
include multiple independent control systems with superordinate control
systems coordinating them. That wouldn't change the basic concept of
model-based control; it would simply allow multiple systems of that kind
to coexist inside the same organism.
In the HPCT model, internal conflict is caused by some higher-level
control system setting incompatible reference signals for several lower-
level control systems (at least that's one way it can happen).
Resolution of the conflict then requires the higher-order system to be
reorganized to use different combinations of lower-level reference
signals.
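The incompatible-reference situation can be sketched numerically (an
illustrative toy with made-up gains, nothing more): two integrating
systems act on one shared variable, one with a reference of +10 and
the other with -10. The variable settles at neither reference, both
errors persist, and both outputs escalate without limit.

```python
def conflict(ref_a=10.0, ref_b=-10.0, gain=0.5, steps=500):
    """Two integrating control systems whose outputs add onto one
    shared variable; each acts only on its own error."""
    oa = ob = 0.0
    for _ in range(steps):
        v = oa + ob                  # the shared, contested variable
        oa += gain * (ref_a - v)     # system A pushes toward +10
        ob += gain * (ref_b - v)     # system B pushes toward -10
    return v, oa, ob

v, oa, ob = conflict()   # v stuck between the references; outputs huge
```

The variable freezes midway while the opposed outputs grow and grow,
which is exactly the escalation that makes internal conflict costly.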
This leads to a first trial for a definition: conflict is a state
where a control system cannot realize its goal, despite a powerful
attempt to do so. But this definition implies some kind of
expectation: that the control system is designed/created in such a
way that normally it WOULD be able to reach its goal. It seems that
there is something out there that doesn't let the system reach its
goal, something that shouldn't be there, an "opponent", an "enemy",
a control system rather than just a resistance, a compliance or a
That's the same definition I use, even to the point of not including
difficulties due merely to passive resistance from the environment. What
makes a conflict is TWO control systems. When you couple the outputs of
two control systems together, the result is very different from coupling
the output of a control system to a passive piece of the environment.
When you push on a control system, it actively pushes back. The "active"
pushing back makes the resistance to your efforts far stiffer than you
would expect from dealing with the inanimate environment alone. When you
push a swinging door open, for example, you feel a certain amount of
resistance from friction, return springs, and mass. But if another
control system is trying to push the same door open from the other side,
the curve relating effort to angle of the door suddenly goes almost, and
perhaps exactly, vertical. The door behaves as if someone had nailed it
shut.
This shows why conflict is a problem: it entails a reduction in the
ability to control, or even a total loss of control. The effort that
normally opens the door has no effect on it at all, for either person.
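A steady-state sketch of the door example (illustrative numbers only):
the door's angle is its compliance times the net torque, and the
opposing control system contributes a torque proportional to its own
error, with a reference angle of zero. Solving the loop equation shows
the effort-to-angle slope stiffening by the factor 1 + compliance
times gain.

```python
def door_angle(push, opposing_gain=0.0, compliance=1.0):
    """Steady-state door angle under a constant push. An opposing
    proportional controller (reference angle 0) adds torque
    -gain * angle; solving angle = compliance * (push - gain * angle)
    gives the closed form below."""
    return compliance * push / (1.0 + compliance * opposing_gain)

passive = door_angle(10.0)                        # swings freely
opposed = door_angle(10.0, opposing_gain=100.0)   # barely moves
```

The same push that moves the passive door ten units moves the opposed
door about a tenth of a unit: to the pusher, the door might as well be
nailed shut.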
We both agree that the resolution is for the behaving systems to get
smarter. In HPCT we say that higher-level systems reorganize to find
alternate means of controlling; in yours you say the world-model is
revised, which really amounts to the same thing. In the case of the
swinging door, most people have ready-made solutions for such conflicts,
acquired through previous experience. I would say that the higher system
switches from setting a reference level for pushing to setting a
reference level for stepping back. That has its comical aspects, of
course, if both people follow the same strategy. Of course some people
may have the strategy of hurling themselves as hard as they can against
the door; it takes all kinds.
... there is no good definition of conflict, so no good comparison,
no good delineation of what is a conflict and what not. But there
seem to be two elements: a) you cannot reach your goal, and b) the
situation is a lot more tiring or frustrating than you thought it would
be.
I think we've had a good definition for a long time in PCT. See Chapter
17 in B:CP, Conflict and Control (p. 250 ff). You'll find everything in
there that you've been talking about, and more. I hesitate to ask at
this late date, but have you ever read that book?
One key to discovering conflict is finding the process of control "a lot
more tiring or frustrating than you thought it would be." But that's
only a special instance of a much more general formal method. The method
is closely connected to the Test for the Controlled Variable. The first
step in the Test, when you want to know if some variable is under
control by another control system, is to apply a disturbance to the
candidate variable. You observe the effect of the disturbance on the
variable. But there's more to it than that: you must know how the
variable would change under the disturbance IF THERE WERE NO CONTROL
SYSTEM ACTING ON IT. The existence of a control system is strongly
suggested when the variable changes far less than it should change, or
in a way very different from the way it should change, given the amount
and direction of disturbance you're applying.
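The first step of the Test can be put as a toy computation (the
function and its parameters are hypothetical, for illustration only):
apply a known disturbance to the candidate variable, then compare the
change you observe with the change that would occur if no control
system were acting on it.

```python
def apply_test(variable_is_controlled, disturbance=20.0,
               gain=0.5, steps=100):
    """Disturb a candidate variable and return (expected change if
    nothing opposes the disturbance, observed final value)."""
    o, ref, cv = 0.0, 0.0, 0.0
    for _ in range(steps):
        cv = o + disturbance
        if variable_is_controlled:
            o += gain * (ref - cv)   # a hidden controller pushes back
    return disturbance, cv

exp_c, obs_c = apply_test(True)    # observed change far below expected
exp_u, obs_u = apply_test(False)   # observed change equals expected
```

When the observed change is a tiny fraction of the expected change, a
control system acting on the variable is strongly suggested; when the
two match, there is no evidence of one.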
In the case of the swinging door, you push on the door and it fails to
swing open. All your previous experience with that door tells you that
with the amount of push you felt yourself applying, the door should
easily have swung open. The simplest explanation, also based on
experience, is that someone else is trying to get through from the other
side: there's another control system with a different reference level
for the behavior of the door.
However, that's only the first part of the Test. The next steps are
meant to _verify_ that there's another control system pushing back. You
yell through the door, "Coming through" (or "come on through" if you're
feeling deferent). If you get the expected result, your hypothesis is
verified. But if there's no answer and no result, and the door still
won't open, you may check out other hypotheses. Maybe, however unlikely
the idea, someone has actually nailed the door shut, or locked it (even
though it never had a lock before). So you'd go off in other directions,
giving up the idea that some control system is pushing back on the other
side. The Test is passed only when the existence of the control system
is fully verified. Otherwise, the Test shows that no control system was
there. In murder mysteries, you'd find that there's a dead body wedged
against the door on the other side. Dead bodies are not control systems.
Both internal and external conflict can be found this way. It is common,
with internal conflicts, to be conscious of only one side of the
conflict. It's as though you consciously identify with one of the
conflicting control systems, and remain unaware that the other exists.
The only symptom of the conflict is that when you choose a goal which
you would expect yourself to start approaching immediately, nothing
happens. If the conflict is expressed at a very low level, you may find
yourself becoming tense, but without moving: opposing muscle groups are
acting against each other: you "freeze." At a higher level, you may find
that even the thought of acting feels fatiguing; you can't get up the
energy even to tense your muscles. At a still higher level, you feel
that you can't even try to reach the goal, even though you know you want
to. "I really must clean out the garage right now," you say, and reach
for another beer.
If we look on this situation as a version of the Test, the obvious
conclusion is that there might be another control system pushing back. I
want to clean out the garage, my muscles are working (as I can prove by
getting up and going to the refrigerator), and I am not cleaning out the
garage. A very likely hypothesis is that there is some other control
system in there, running on automatic, that has a goal incompatible with
cleaning out the garage. Perhaps you put up a big fight against someone
who criticized the way your garage looked, and if you clean out the
garage now you will be admitting that all your counterarguments were
foolish. However, that little control system that wants you to keep from
feeling foolish isn't operating in the conscious state; it's just
operating. Any move toward cleaning out the garage is also a move toward
feeling foolish, and is countermanded by the hidden control system. The
system where the conscious You currently resides issues the reference
signal for cleaning out the garage, and the other system silently
negates that reference signal, and there you sit.
The way to resolve this conflict by conscious effort is to try to find a
viewpoint in the hierarchy higher than the level where the conflict is
occurring. In this case, you need to become aware of the reasons for
which you want to clean the garage, and the reasons for which you want
not to clean the garage. When your awareness is at the right level,
apparently, reorganization automatically occurs and the conflict is
resolved. You decide with no effort at all, "I don't care how sloppy
people think that garage is, I'm not going to let them push me around."
Or, of course, you could decide the other way. Either way, the conflict
is resolved.
In order to see a conflict it seems necessary to recognize and
model the opponent, in particular to recognize the opponent as a
control system, that is to recognize the opponent's goal.
Precisely. But Hans, if you'd paid attention to the literature of PCT
you would know that this is what we already say. You're just describing
the Test for the Controlled Variable.
Best to all,