[From Bill Powers (2000.06.18.1146 MDT)]
Bruce Gregory (2000.0618.1758)--
Martin Taylor (2000.06.17 09:04) --
Bruce:
I think this identification is unfortunate. One doesn't normally think of
having a "purpose" or a "goal" to maintain one's body temperature at a
temperature of 98.6 F. Clearly it is reasonable to talk about a control
system in the hierarchy with this reference level, however.
One also doesn't think of standing upright as being a purpose or goal while
one is doing it -- but there is a control process going on while one does
this, so there must be a reference signal, perception, error signal, and so
on.
The terms "purpose" and "goal" are nontechnical, representing unorganized
attempts to describe how we somehow know or intend what is to happen before
it happens, or keep it going "on purpose", in a way that is not exactly
what we mean by "prediction." If you intend, or if your purpose is, to wash
your hands, you're not just predicting that you're going to wash your
hands; you somehow "know" you're going to do it, in a sense that seems
stronger than mere prediction. If you're washing your hands already and the
purpose is turned off, you will instantly cease washing your hands.
Something unusual or unexpected, and more than trivial, would have to
happen to prevent the hand-washing. Ordinary disturbances, such as a lack
of water when you turn the faucet handle, will not keep you from washing
something disgusting off your hands.
PCT solves these old problems first by identifying "what is to happen" as a
_perception_ rather than an objective event, even in cases where most
people would automatically assume an objective event. If you're aware that
something is happening, the fundamental assumption of PCT is that the
something is a perception. This isn't to say it's an hallucination: the
experience could, for all we know, have some sort of counterpart in
reality. We just have no way of proving to ourselves that it does, with the
same certainty that we can know that a perception, an experience, is
happening.
Once we accept all that we experience as consisting of perceptions, which
is to say afferent perceptual signals in our brains, and once we have
grasped how a control loop works, we can easily see that the only part of
the control loop that fits what we call intention, volition (willing), or
purpose is the reference signal. The error signal doesn't fit, Martin,
because it is at least as much caused by external events, disturbances, as
by changes in a reference signal.
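
To make that concrete, here is a minimal control-loop sketch (my own toy
illustration in Python; the gain, slowing factor, and step count are
arbitrary, and the output function is a simple leaky integrator). The
residual error comes out the same whether it was produced by raising the
reference signal or by applying an equivalent disturbance, so the error
signal by itself cannot tell you whether anything was intended:

  # Toy illustration only: all numbers are arbitrary.
  def run_loop(reference, disturbance, steps=500, gain=50.0, slowing=0.01):
      perception = 0.0
      output = 0.0
      for _ in range(steps):
          error = reference - perception               # comparator
          output += slowing * (gain * error - output)  # leaky-integrator output
          perception = output + disturbance            # environment adds disturbance
      return error

  # The same residual error arises from two different causes:
  print(run_loop(reference=10.0, disturbance=0.0))     # reference raised
  print(run_loop(reference=0.0, disturbance=-10.0))    # disturbance applied
  # Both print about 0.196 = 10/51: looking at the error signal alone, you
  # cannot tell an intended change from an external push.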
If we can identify the cause of an action of our own (as we perceive it), I
think we tend to think of the action as _not_ willed. We call actions like
jerking a hand back when we touch something very hot "involuntary," where
"voluntary" and "willed" mean the same thing. Of course we can fight the
involuntary action, as when we pick up something that is not only
uncomfortably hot, but valuable. We can "will" ourselves to put it down
carefully even though holding onto it hurts. This willing, in that case,
seems opposite to the tendency to act involuntarily: it comes from within
rather than from outside. The conflict is between an external cause and an
internal one. That is what brings the internal cause to attention. We can
see and feel what tends to cause the involuntary action -- but what causes
the opposition to it?
I'm speaking here more or less as if trying to figure out what is going on
without any theory. We do, however, have a theory that explains these
phenomena. The voluntary aspect of our actions arises because reference
signals exist. Of course so do the involuntary aspects, but the involuntary
actions usually seem to arise because of built-in reference values: we are
wired up to act as if we desire sensations like burning to have zero
magnitude. So the difference between voluntary and involuntary acts seems
to be in the source of the reference signals. If the reference signals are
built-in, not adjustable by higher systems, we call the actions that arise
from them (such as breathing) involuntary, even if we can temporarily
oppose them by an "act of will." If the reference signals come from higher
in the hierarchy, we call the actions that arise from changing the
reference signals voluntary, and those that arise from disturbances not so
much automatic or involuntary as optional. If a dealer quotes you a price
on a car that is higher than the price in the advertisement, you may react
angrily to this disturbance, but that reaction is optional: you could also
change your mind about wanting that car, in which case there would be no
effective disturbance.
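
The distinction can be sketched in the same toy form (again my own
illustration, with arbitrary numbers): in a "voluntary" act the lower
reference signal is supplied by a higher system's output, whereas in an
"involuntary" one it would simply be wired in as a constant.

  # Toy illustration only: gains and slowing chosen arbitrarily.
  def two_levels(higher_ref, disturbance, steps=2000,
                 k_hi=50.0, k_lo=50.0, slowing=0.01):
      o_hi = 0.0   # higher-level output
      o_lo = 0.0   # lower-level output
      for _ in range(steps):
          p_lo = o_lo + disturbance   # lower perception, subject to disturbance
          p_hi = p_lo                 # higher perception derived from the lower one
          r_lo = o_hi                 # higher OUTPUT serves as lower REFERENCE
          e_hi = higher_ref - p_hi
          e_lo = r_lo - p_lo
          o_hi += slowing * (k_hi * e_hi - o_hi)
          o_lo += slowing * (k_lo * e_lo - o_lo)
      return p_lo

  # The higher reference specifies what is to happen; disturbances are resisted:
  print(two_levels(higher_ref=10.0, disturbance=0.0))    # about 9.8
  print(two_levels(higher_ref=10.0, disturbance=-3.0))   # still about 9.8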
Martin:
To "will" something to occur implies a perception
that the thing willed is not now occurring.
I don't think this is true of all cases of willing. If it isn't, we can't
base any general analysis on this assumption. We have to try to distinguish
between what we know theoretically must be occurring and what, without
benefit of theory, seems subjectively to be occurring. It doesn't matter
what willing "implies," because implication is always within the framework
of some theory. The basic question we're trying to answer is _what
phenomena are denoted_ by terms like willing. The theory then is supposed
to offer an explanation of the phenomena.
I can breathe in a regular rhythm yet feel that the breathing is willed
rather than automatic. We can often tell when someone is pretending to be
asleep by noting that the rhythm of breathing, while slow and regular, is
not the same as the rhythm when the person is really asleep. I just had a
pulmonary test which required, at one stage, relaxing and "breathing
normally." Breathing normally on purpose is almost a contradiction in
terms. How can I know how I breathe when I am not conscious of breathing?
I'm fine, by the way, except for some permanent loss of function from all
those years of smoking.
So I can't accept that willing corresponds to error signals. Remember, to
know that a perception is not the same as a reference signal would require
that something sense both the reference signal and the perceptual signal
and compare them. We know of one such thing, the comparator, but there is also
the possibility that it could be a perceptual input function, receiving an
imagined version of a reference signal and a real version of a perceptual
signal, and computing the difference between them -- a relationship. I
would prefer to interpret all cases of apparent perception of an error
signal this way; it's worth the very slight complication to avoid a
greater complication of a very nice general proposition: that all perception
consists of afferent signals. One doesn't give up such an elegant
generalization easily.
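
In this interpretation, "perceiving an error" is an ordinary perception of a
relationship, computed from two afferent signals, one of which arrives
through the imagination connection. A sketch (the numbers are invented):

  # Toy illustration only.
  def comparator(reference, perception):
      # Inside the loop: the real error signal that drives output. On this
      # view it is never itself perceived.
      return reference - perception

  def relationship_perceiver(imagined_reference, perception):
      # A perceptual input function: both inputs are perceptual signals, one
      # supplied through the imagination connection, and the result is itself
      # an afferent signal -- a perception of a relationship.
      return imagined_reference - perception

  real_error = comparator(10.0, 7.0)              # 3.0
  felt_error = relationship_perceiver(9.5, 7.0)   # 2.5: the imagined copy of
                                                  # the reference is imperfect
  print(real_error, felt_error)                   # roughly, not exactly, equal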
There's also another reason for not accepting perception of error signals
easily. Have you ever tried to construct a working hierarchical model in
which an error signal from one control system becomes part of the inputs to
a higher-level control system? I have, and I couldn't make it work
properly. That doesn't mean nobody could, but I certainly failed to get a
stable system, and until I see how such a system could work, I'm not going
to believe in it. Part of the problem, as far as I've been able to
understand it, is the existence of parasitic feedback loops at the higher
level. The lower-order error signal is a function, in part, of the
reference signal, and the reference signal is a function of the
higher-level error signal, which is a function of the higher-level
perception. So the perceived error signal from below is a function of the
perceiving system's own error signal, which closes a loop that is local to
the higher system. This is probably why I couldn't get stability, or even
understand what was happening when I tried to simulate such a system.
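
For what it's worth, the troublesome wiring is easy to write down even if it
is hard to make work. In this sketch (my reconstruction, with arbitrary
numbers) the lower error signal is used as the higher perception, so the
higher perception depends on the higher system's own output without ever
passing through the environment. Whether such a system goes unstable depends
on the functions chosen, but one failure mode shows up even in this tame
linear version:

  # Toy reconstruction only: all numbers are arbitrary.
  def error_as_perception(r_hi=1.0, disturbance=0.0, steps=2000,
                          k_hi=50.0, k_lo=50.0, slowing=0.01):
      o_hi = 0.0
      o_lo = 0.0
      for _ in range(steps):
          p_lo = o_lo + disturbance
          r_lo = o_hi                # higher output sets the lower reference
          e_lo = r_lo - p_lo         # lower error signal...
          p_hi = e_lo                # ...fed upward as the higher PERCEPTION,
                                     # so p_hi varies with o_hi directly: a
                                     # loop local to the higher system
          e_hi = r_hi - p_hi
          o_hi += slowing * (k_hi * e_hi - o_hi)   # leaky-integrator outputs
          o_lo += slowing * (k_lo * e_lo - o_lo)
      return p_hi, o_hi, o_lo

  # The higher system asks for a lower error of 1; the lower system, by its
  # nature, drives that same error toward 0. The levels end in chronic
  # conflict, with large outputs and the perception stuck halfway:
  print(error_as_perception())   # about (0.495, 25.2, 24.8), not (1, ...)
  # With pure integrating outputs (drop the "- o_hi" and "- o_lo" leaks)
  # there is no equilibrium at all, and both outputs ramp without limit.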
What is perceived is a
difference between a reference and a perception (i.e. the existence
of an error).
Note that this same effect can be achieved completely in terms of perceptual
signals if the reference signal is perceived via the imagination
connection. Of course the "synthesized error signal" may or may not be the
same as the actual error signal in the lower system -- it's probably not
the same. But it could be roughly the same.
Bruce:
No, the problem here seems to be the ambiguous use of the term "perceive".
One does not perceive that something needs changing, one perceives a state
of affairs.
Precisely. Nothing "needs changing" objectively.
Another ambiguity that is interfering here is the difference between plain
perception ("existence of a signal in a perceptual pathway") and _conscious
perception_, which brings in awareness. Control systems need perceptions to
operate at all, but the perceptions do not have to reach awareness. Many
control systems are working at all times. Some operate at the same level
where awareness happens to be currently engaged, but outside its scope
(you're listening to a concerto and ignoring the sound of your breathing).
Some operate at levels higher than the momentary locus of awareness (you're
aware of controlling the car's path, but at the moment are not aware of the
reason for which you're driving north on I-5). And of course very many
control systems are operating at levels lower than the current level of
awareness. There must be perceptual signals in all operating control
systems, yet we are aware of only some of them. So the existence of a
perceptual signal does not, in general, mean that we are aware of it. The
most telling instances of this involve _becoming_ aware of a perceptual
signal that must have existed all along: supply your own examples.
Bruce G.:
If you have a reference state for that perception that differs
from the input, the hierarchy may act to eliminate error (assuming a lack
of conflict). You may have thoughts about those actions ("I really have to
do something about the mess in my office") but those thoughts are not an
essential part of the control process. That is, you can model the control
process without paying any attention to the thoughts.
Let's not have any orphan phenomena if we can help it. Is thinking simply
an epiphenomenon with no functional significance in the hierarchy? I think
quite the opposite -- in fact, I have devoted several proposed levels in
the hierarchy to processes involved in thinking (as I see it), and together
with the imagination connection these processes put thinking in charge of
most of the hierarchy. It may or may not be conscious thinking. I see
thinking as imposing logical and rule-driven structure on the operation of
the system. "I really have to clean up this mess some day" is a nice piece
of logic which simultaneously specifies an action and postpones it to the
indefinite future, so nothing is actually required to be done. But many
other thoughts ("If I take his knight, I'll have his queen and rook in a
fork") have determining effects on how lower systems operate.
Best,
Bill P.