[From Bill Powers (990630.0636 MDT)]
Rick Marken (990629.2000)--
Actually, I should have said "This is not true no matter what;
actions are _not_ generated by error". Saying that "error
generates action" is just a cause-effect (and, thus, incorrect)
way of thinking about control.
While I agree with you that it's better to focus on the perception as the
object of control, I think denying that error causes action makes the whole
model a bit too mysterious. A control loop is made up of a set of little
cause-effect processes. Hooking them up as a closed loop doesn't change the
character of these processes: they each still convert inputs into outputs.
The output function of a control system receives a continuous error signal
and produces a continuous output signal; the output signal is a continuous
function of the error signal. It's true that the error signal is also a
function of the output (and disturbances), but that doesn't alter the fact
that the error signal causes the output; both dependencies hold at the same time.
You can argue that this only encourages people to think of the control loop
as a sequence of events occurring one after another, and that's probably
true. But can't it be explained that all these functions operate at the
same time, like the parts of a car engine?
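
To make the simultaneity concrete, here is a little sketch in Python of a
single loop with an integrating output function. The gain, time step, and
disturbance values are just illustrative numbers, not anything from a real
model:

    dt = 0.01       # time step, seconds
    gain = 50.0     # output-function gain
    ref = 10.0      # reference signal
    out = 0.0       # output quantity
    for step in range(1000):
        dist = 5.0 if step > 500 else 0.0  # step disturbance halfway through
        percep = out + dist        # input function: the perceptual signal
        error = ref - percep       # comparator: the error signal
        out += gain * error * dt   # output function integrates the error
    print(percep)   # settles near the reference despite the disturbance

Within each small dt, every function simply converts its current input into
its output; no part waits for any other, and in the limit of small dt they
all run continuously and concurrently, like the parts of the engine.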
Perhaps we need to focus more on identifying how the parts of the control
model correspond to elements of subjective experience. The way the model is
conceived right now -- and this could change -- the ONLY part of the model
that we can experience directly is the perceptual signal. We can perceive
reference signals only in the imagination mode, when they are routed into
the perceptual channels (so we are still perceiving only perceptual
signals). And error signals we don't experience at all -- there's no
provision in the model for switching the error signal back into the
perceptual inputs before it has entered the output function. I'm not even
sure what would happen if we made that connection in a model.
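
If anyone wants to try it, here is one way to wire that connection into the
toy loop above; the mixing weight k is an arbitrary assumption of mine, not
anything in the model:

    k = 0.5                        # assumed weight of error fed into perception
    ref, out, err = 10.0, 0.0, 0.0
    for step in range(1000):
        percep = out + k * err     # error signal added to the perceptual input
        err = ref - percep         # comparator
        out += 50.0 * err * 0.01   # same integrating output function as before
    print(percep, err)

With these particular values the toy loop still settles with the perception
at the reference, but that says nothing general; it's only a way to poke at
the question of whether the missing connection matters.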
Is that an oversight that should be corrected, or is it a realistic
feature? I've thought about this quite a bit, and so far it still looks
like a realistic feature. This doesn't mean that we can't _infer_ the
existence of error signals; it means only that we can't perceive the error
itself, directly. Remember that higher-level error signals drive
lower-level control systems, changing their reference signals and therefore
changing the perceptual signals sent back up to higher-level systems. So if
we perceive ourselves acting as if to oppose some disturbance, chances are
that there's an error signal setting a reference level for that action or
some effect of it. That's how we can "infer" an error: by experiencing its
lower-level effects on perceptions.
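
The same point shows up in a two-level sketch, assuming (purely for
illustration) that the higher system's output signal serves directly as the
lower system's reference:

    dt, g_hi, g_lo = 0.01, 5.0, 50.0
    ref_hi = 10.0                       # higher-level reference
    out_hi = out_lo = 0.0
    for step in range(2000):
        dist = 3.0 if step > 1000 else 0.0
        p_lo = out_lo + dist            # lower-level perceptual signal
        err_hi = ref_hi - p_lo          # higher comparator
        out_hi += g_hi * err_hi * dt    # higher output sets the lower reference
        err_lo = out_hi - p_lo          # lower comparator
        out_lo += g_lo * err_lo * dt    # lower output acts on the world
    print(p_lo)   # near ref_hi; err_hi itself is reported nowhere

Nothing in this loop exposes err_hi anywhere; all one could "perceive" is
p_lo changing as the lower system acts, which is exactly the kind of
indirect evidence I mean.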
Normally, while we're acting in relation to the outside world, we don't
experience reference signals, either. We simply experience what we intend
to experience (we also passively witness many perceptions not under
control), with the "intending" part being pretty vague in consciousness, if
it exists at all during action.
I think I'm simply reporting on what I do and do not actually experience of
the workings of the control model. It's easy to _imagine_ error signals and
reference signals and everything else, just by imagining the control model
while you experience something. But if you stick strictly to what you are
actually experiencing right now, I think the only part of the control model
that fits the experience is the perceptual signal.
This makes sense from another point of view. If we experienced every aspect
of the control systems in us, they would appear as self-evident aspects of
reality, and we wouldn't need any model. We need a model precisely because
large parts of our own functioning remain hidden from our conscious
inspection. The model fills in connections which logically must be there,
but which are not located where we can see them. This, indeed, is why
things like intentions and purposes have been so mysterious; if we could
experience reference signals and error signals in action (rather than only
in an introspective state of imagination), we would understand exactly how
they work and there never would have been any mystery.
The clincher for me is one simple fact: we have trouble convincing some
people that they are organized as, and behave like, control systems. I have
never had any trouble convincing anyone that they have arms and hands and
can seize a joystick and move it, because they can experience such things
directly for themselves. If the whole structure of the control hierarchy
were equally part of experience, there wouldn't be any PCT because
everybody would already know all about it.
Best,
Bill P.