[From Bill Powers (970325.0915 MST)]
James Chriss (970324) --
I am still trying to understand the fundamentals of PCT, and I have a
few questions about working assumptions, whether ontological,
epistemological, or axiological. Please bear with me here.
1. Powers has commented that cybernetics, although a good starting
point, is flawed in fundamental ways, with regard to its ability to
inform PCT and explanations of the behavior of human beings. Would it
be fair to suggest that one of the weaknesses of cybernetics is that it
assumes a closed system, and that the causality within a cybernetic
system is essentially linear?
No. The original cybernetics assumed a closed causal loop just as PCT does,
and did not rely on lineal causality. Originally cybernetics was mainly
about control theory.
Bertalanffy, for example, commented that
the cybernetic approach retains the Cartesian machine model of the
organism, unidirectional causality, and closed systems, which appears to
be unsuitable for explaining human behavior.
Bertalanffy was an anti-mechanist and he knew nothing about control theory.
Don't confuse a closed-loop system with what he calls a "closed system."
Bertalanffy meant by an "open system" a system that was open to influences
from its environment (as all control systems are), and the opposite by a
"closed system." But the way he conceived of an open system was such as to
make it into a cause-effect or stimulus-response system! What Bertalanffy
seemed to be alluding to was the idea of a thermodynamically open or closed
system.
This seems to be
essentially Powers' position. Also worth noting in the cybernetic
system is that information can only decrease, in that the system
functions to decrease energy.
This is a garble and gets the cybernetic position exactly backward.
Information in an organism increases, at the expense of a decrease
elsewhere; the term you want is _entropy_, not energy. Decreasing entropy is
said to be the equivalent of increasing information. Others may correct my
usages here; I don't use these concepts.
Cybernetic systems can change, but they
cannot change intrinsically; change must come from an outside force.
This is similar to the stimulus-response assumptions of behaviorism
which PCT seems to reject. (And as a sociologist, so do I.) Is all
this compatible with PCT assumptions?
No. Nor is it a correct representation of cybernetic systems.
2. If indeed PCT assumes an open systems model, then the assumption of
equifinality, in which the same state may be reached from different
initial conditions in different ways, seems to apply. In such a system,
the components become more specialized or self-differentiating. This
leads to a decrease in randomness, or negentropy. This then allows for
nonmechanistic interaction among the system's components. Within such
an open system, information can increase, contrary to the assumptions of
the cybernetic model.
Again, completely wrong. "Equifinality" is an _alternative_ to the PCT view,
claiming that reaching the same state from different initial conditions
simply reflects the existence of alternative causal pathways. In PCT this
phenomenon is shown to be the result of a control system opposing
disturbances; it is _necessary_ to use different means to counteract
different disturbances, if the intended result is to occur. What you're
talking about is an attempt to explain behavior without the use of control
concepts -- basically a philosophical rather than an engineering approach.
When you refer to "non-mechanistic interactions" I don't know what you are
talking about. Magic?
The open system can attain a steady state through continuous exchange of
matter, energy, and information among components. But I don't
understand how this could really be occurring, empirically, within the
human organism with regard to sensory (or neural) information, brain
physiology, and next courses of action.
The answer is that it doesn't occur. The "open system" idea refers strictly
to the exchange of physical matter and energy with the environment. It is
related only peripherally to PCT. There is certainly no "exchange of sensory
information" with the environment.
The idea of the internal
comparator is a great mystery to me, and I don't see how one can defend
the existence of such a thing, even within the assumptions of an open
system. The comparator, if it exists, is awfully fleeting and episodic.
It can hardly be called a comparator.
Do you never compare the ongoing results of your actions with the result you
desire to occur, while you're acting?
The PCT attempt to explain the
control or management of perceptions or impressions is actually quite
similar to Erving Goffman's dramaturgical theory of action and
impression management, although no internal comparator is posited. At
first I thought PCT was close to Parsons' AGIL theory and cybernetic
hierarchy of control, but now that I'm getting further into Powers' book
I see that that initial assumption is in error. My quest is to get a
better working knowledge of PCT through analysis of its working domain
assumptions.
How about just understanding how the PCT model itself works, first? If you
spend all your time looking for similarities to other theories, how are you
going to understand PCT itself?
I would particularly appreciate greater guidance and
clarity with regard to the existence--whether this is ontological or
merely analytic--of the comparator. That would be a big help. In other
words, one could knock oneself out building models with comparators and
saying they parallel what's going on in human beings. But I need more
proof that such a thing really exists.
A comparator is a function that receives a perceptual signal and, with the
opposite sign, a reference signal. Its output is a measure of the difference
between these two signals. There are many places in the nervous system from
the spinal reflexes to the mid-brain systems where just this physical
arrangement exists. It probably exists at higher levels, too, but nobody in
neurology has been looking for it.
Any time you detect a difference between what you ARE perceiving and what
you WANT to be perceiving, something in you is doing a comparison. The
object of control is to make the actual perception match the reference
perception, and continual comparison of the two is necessary to make this
possible.
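The comparator described above reduces to a very small computation: it
receives a perceptual signal and, with opposite sign, a reference signal,
and its output is the difference. A minimal sketch, with illustrative
signal values:

```python
# A comparator as defined above: output = reference + (-perception).
# The sign of the error signal says which way the perception must move
# to match the reference; zero error means no correction is needed.

def comparator(reference, perception):
    return reference - perception  # error signal: r + (-p)

assert comparator(10.0, 10.0) == 0.0   # match: no error
assert comparator(10.0, 7.5) == 2.5    # perceiving less than wanted
assert comparator(10.0, 12.0) == -2.0  # perceiving more than wanted
```

In a full control loop this error signal continuously drives the output
function, which is what makes the actual perception track the reference.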
I don't know what else could be offered as "proof that such a thing really
exists." Neural circuit-tracing is about the only way to do that, and we're
a long way from such a capability for most of the brain. We can say that
every control system requires a function exactly equivalent to comparison,
and that models containing comparators do manage to reproduce some real
behaviors quite well. Beyond that, I don't know what you're looking for.
Best,
Bill P.