[From Bill Powers (2003.11.27.0659 MST)]
Bruce Gregory (2003.11.27.0812)–
A few simple questions. My foot is on the accelerator, maintaining 75
mph. The reference level for the position of my foot is established by a
higher-level perceptual control system for “getting to work.” This
reference in turn is established by a still higher-level control system,
“keep my job,” whose reference level is established in terms of
intrinsic survival goals. A car unexpectedly cuts in front of me. I slam
on the brakes.
According to HPCT, does a conflict exist?
If so, how does this conflict get resolved?
If not, why not?
There are several problems here, the first being the idea that intrinsic
reference levels are at the highest level of the hierarchy. They are not.
Intrinsic reference levels are not in the hierarchy at all. Intrinsic
error signals result in reorganization, which is a series of alterations
of parameters such as gain, or of the connections by which higher systems
in the hierarchy contribute to lower reference signals, or of other
aspects of the physical circuitry. There could even be some selection of
reference signals at higher levels where no higher system exists, but
these would be randomly selected, perhaps from memories at that level,
not directed toward “survival” (unlikely to be at the highest
level) or any other specific goal. The only goals of the reorganizing
system are the intrinsic reference signals, none of which is “survival.”
Understand that I am describing what the theory proposes, not claiming
that these are established facts. But you’re asking what the theory says.
The second problem is that conflicts can exist only between systems at
the same level. If there were a higher learned goal of survival, whatever
that means, it could not conflict with pushing on the brakes. Neither
could the goal of “getting to work on time.” Only another
system directly concerned with operating the brakes could issue a
contradictory reference setting for the position of the braking foot and
thus create a direct conflict. This follows from the formal definition of
conflict in HPCT. Conflict exists if and only if one variable must be in
two different states at the same time in order for two control systems to
correct their own errors. There are also degrees of conflict, from
harmless to devastating.
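That definition can be put in miniature code form. This is my own illustrative sketch in Python, with invented names; it is not anything from the HPCT literature, only a restatement of the definition just given:

```python
# Illustrative sketch (invented names, not HPCT source code): two
# control systems conflict iff correcting both errors would require
# one variable to be in two different states at the same time.

def in_conflict(var_a, ref_a, var_b, ref_b):
    """True when both systems set different references for one variable."""
    return var_a == var_b and ref_a != ref_b

def conflict_degree(ref_a, ref_b):
    """A crude 'degree of conflict': how far apart the references are."""
    return abs(ref_a - ref_b)

print(in_conflict("car_speed", 80, "car_speed", 40))   # True: one variable, two states
print(in_conflict("car_speed", 60, "foot_angle", 15))  # False: different variables
```

The `conflict_degree` function stands in for the “degrees of conflict”: references a fraction of a mph apart are harmless, references 40 mph apart are not.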
We might say that the system that wants to get to work on time tries to
increase the forward speed of the car (if its ETA predicts being late),
and the system that wants to avoid collisions tries to decrease the
forward speed of the car. Now we have two different reference levels for
the same variable, the forward speed of the car, and this constitutes a
conflict for the systems trying to adjust the speed of the car. The
car can’t go at two different speeds at once. Below the level of
controlling the forward speed of the car there is no conflict.
Unless some higher-level system intervenes by altering the reference
signals for one or both of the conflicted systems, those systems will
produce opposing adjustments of the reference signal for forward speed,
and the result will depend on the design details of the control systems
involved (see Kent McClelland’s simulations of conflicted systems). If
the two systems have identical static and dynamic characteristics, the
net reference speed, and the actual speed, will be the sum or perhaps the
average of the two reference settings (depending on exactly how the two
reference signals combine).
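As a numerical check on that last claim, here is a minimal sketch of my own in Python, with invented parameters; it is not Kent McClelland’s simulation code. Two equal-gain proportional systems set references for the same speed, their outputs sum, and the speed settles at the average of the two reference settings:

```python
# Minimal sketch of two conflicted control systems sharing one variable.
# Invented parameters; not McClelland's simulations.

def simulate(r1, r2, gain=1.0, dt=0.01, steps=5000):
    speed = 0.0
    for _ in range(steps):
        out1 = gain * (r1 - speed)   # "get to work on time": wants 80
        out2 = gain * (r2 - speed)   # collision avoidance: wants 40
        speed += dt * (out1 + out2)  # environment integrates the net output
    return speed

print(round(simulate(r1=80.0, r2=40.0), 1))  # 60.0: the average of the references
```

With unequal gains the equilibrium shifts toward the higher-gain system, to (g1·r1 + g2·r2)/(g1 + g2), which is one way the “design details of the control systems involved” decide the outcome.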
If a higher order system or set of systems exists that can perceive the
existence of the conflict and has learned how to resolve such conflicts
(say, by momentarily turning off the goal of getting to the meeting on
time), the conflict will immediately be resolved. Most conflicts are
resolved in this way and cause only momentary inconveniences. But if the
situation prevents any easy solution or there is no already-known
solution, the conflict will simply persist. If the result of its
persistence is to create significant large errors in other control
systems, reorganization will probably start, and there is then no way to
predict what the solution will be, if a solution is found.
Again, these are predictions from the theory that are roughly borne out
in my experience, but that have never been formally tested. One has to
know what the theory predicts, of course, before there can be any test of it.
In the Crowd program, there is a conflict between collision avoidance and
the two other possible goals, following a person or getting to a
destination. The conflict is never resolved. What happens is that the
collision avoidance system experiences very large changes in error for
small adjustments of direction of travel, while the other systems’ errors
change much less, because the other person or the destination is much
farther away than the nearby obstacle. In effect, the collision avoidance
system has much higher loop gain than the other two systems have, and it
overpowers the other two systems when a collision is imminent. When its
error returns close to zero, the other two systems simply go on operating
and progress continues.
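The geometry behind that gain difference can be shown with a small Python sketch (mine, not the Crowd program’s source): the same sideways step changes the perceived bearing to a nearby obstacle far more than the bearing to a distant destination.

```python
import math

# Illustrative geometry, not the actual Crowd code: how much does a
# small sideways step change the bearing to a point dead ahead?

def bearing_change_deg(distance, sidestep=0.5):
    return math.degrees(math.atan2(sidestep, distance))

print(round(bearing_change_deg(2.0), 1))   # nearby obstacle: ~14.0 degrees
print(round(bearing_change_deg(50.0), 1))  # distant destination: ~0.6 degrees
```

A roughly twenty-fold difference in error sensitivity for the same action is, in loop terms, a twenty-fold difference in effective gain, which is why the avoidance system overpowers the others only when a collision is imminent.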
These conflicts can persist when there is a large number of obstacles
present. However, the Crowd agents do have a very minor kind of
“reorganization” working, a small random signal added to the
perceptual signals. This signal assures that no action ever simply
repeats exactly; there is always a slight variation. So when an agent
gets stuck in a closed path, the path is randomly perturbed by a small
amount, and most of the time there is an eventual escape. It’s possible
that there would always be an escape, but I have never waited more
than five or ten minutes to see if that would happen.
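That mini-“reorganization” can be sketched as follows; this is an assumed form in Python, and the actual Crowd implementation may differ in its details. A small uniform random signal is added to each perceptual signal, so the same physical situation never yields exactly the same percept twice:

```python
import random

rng = random.Random(0)  # seeded here only so the sketch is repeatable

def perceive(true_value, noise_amplitude=0.05):
    # every perception carries a small random perturbation, so no
    # action based on it ever repeats exactly
    return true_value + rng.uniform(-noise_amplitude, noise_amplitude)

samples = [perceive(10.0) for _ in range(5)]
print(all(abs(s - 10.0) <= 0.05 for s in samples))  # True: perturbations stay small
print(len(set(samples)) == len(samples))            # True: no exact repeats
```

Because the perturbation is small, control is essentially undisturbed in the normal case; it matters only when an agent is stuck in a closed path, where the accumulated variation eventually nudges it free.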
I am curious, now. Was the above question a request for information, or
for something you could then sneer at or attack? You said
“The Powers-Marken tranquilizing philosophy–or religion?–is so
delicately contrived that, for the time being, it provides a gentle
pillow for the true believer from which he cannot easily be aroused. So
let him lie there.”
You seem to be in conflict about letting the true believer lie there, or