Systems assumptions

I am still trying to understand the fundamentals of PCT, and I have a
few questions about working assumptions, whether ontological,
epistemological, or axiological. Please bear with me here.

1. Powers has commented that cybernetics, although a good starting
point, is flawed in fundamental ways, with regard to its ability to
inform PCT and explanations of the behavior of human beings. Would it
be fair to suggest that one of the weaknesses of cybernetics is that it
assumes a closed system, and that the causality within a cybernetic
system is essentially linear? Bertalanffy, for example, commented that
the cybernetic approach retains the Cartesian machine model of the
organism, unidirectional causality, and closed systems, which appears to
be unsuitable for explaining human behavior. This seems to be
essentially Powers' position. Also worth noting in the cybernetic
system is that information can only decrease, in that the system
functions to decrease energy. Cybernetic systems can change, but they
cannot change intrinsically; change must come from an outside force.
This is similar to the stimulus-response assumptions of behaviorism
which PCT seems to reject. (And as a sociologist, so do I.) Is all
this compatible with PCT assumptions?

2. If indeed PCT assumes an open systems model, then the assumption of
equifinality, in which the same state may be reached from different
initial conditions in different ways, seems to apply. In such a system,
the components become more specialized or self-differentiating. This
leads to a decrease in randomness, or negentropy. This then allows for
nonmechanistic interaction among the system's components. Within such
an open system, information can increase, contrary to the assumptions of
the cybernetic model.
The open system can attain a steady state through continuous exchange of
matter, energy, and information among components. But I don't
understand how this could really be occurring, empirically, within the
human organism with regard to sensory (or neural) information, brain
physiology, and next courses of action. The idea of the internal
comparator is a great mystery to me, and I don't see how one can defend
the existence of such a thing, even within the assumptions of an open
system. The comparator, if it exists, is awfully fleeting and episodic.
It can hardly be called a comparator. The PCT attempt to explain the
control or management of perceptions or impressions is actually quite
similar to Erving Goffman's dramaturgical theory of action and
impression management, although no internal comparator is posited. At
first I thought PCT was close to Parsons' AGIL theory and cybernetic
hierarchy of control, but now that I'm getting further into Powers' book
I see that that initial assumption is in error. My quest is to get a
better working knowledge of PCT through analysis of its working domain
assumptions. I would particularly appreciate greater guidance and
clarity with regard to the existence--whether this is ontological or
merely analytic--of the comparator. That would be a big help. In other
words, one could knock oneself out building models with comparators and
saying they parallel what's going on in human beings. But I need more
proof that such a thing really exists.

Thanks.

Dr. James J. Chriss
Kansas Newman College
Sociology Department

[From Rick Marken (970325.0750)]

Welcome back Bill! Sorry about the travel nightmare. How did things
go at the meeting?

James Chriss (970324) --

I am still trying to understand the fundamentals of PCT, and I
have a few questions about working assumptions

Ok. Fire away.

Would it be fair to suggest that one of the weaknesses of
cybernetics is that it assumes a closed system

Doesn't cybernetics assume that people are closed loop systems,
just like PCT does? I would say the main weaknesses of cybernetics
are 1) it doesn't understand the nature of purposeful behavior (control)
and 2) it doesn't understand how purposeful (closed loop control)
systems work (control of perception). Other than that it's fine;-)

If indeed PCT assumes an open systems model

It doesn't.

then the assumption of equifinality, in which the same state may
be reached from different initial conditions in different ways,
seems to apply.

This is not an _assumption_. It is a _description_ of how a control
system works. Control systems produce the same results (equifinality)
consistently using whatever means are appropriate to the prevailing
circumstances (disturbances and the feedback connection to the
controlled variable). Since prevailing circumstances are always
changing, a control system will always produce the same result in
different ways.
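This behavior is easy to see in a simulation. Below is a minimal sketch (in Python, and not any official PCT software such as the SIMCON program mentioned later in this thread; all names and parameter values are illustrative) of a one-level control loop. The same perceptual result is reached under three different disturbances, because the system's output changes to oppose each one:

```python
def run_control_loop(disturbance, reference=10.0, gain=50.0,
                     slowing=0.01, steps=500):
    """Simulate a simple one-level control system.

    perception = output + disturbance   (the feedback connection)
    error      = reference - perception (the comparator)
    output moves gradually toward gain * error (a leaky integrator)
    """
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance
        error = reference - perception
        output += slowing * (gain * error - output)
    return perception, output

# Different disturbances, same final perception ("the same state
# reached in different ways"): the output differs, the result doesn't.
for d in (0.0, 5.0, -3.0):
    p, o = run_control_loop(d)
    print(f"disturbance={d:+.1f}  perception={p:.2f}  output={o:.2f}")
```

In each case the perception settles near the reference of 10, while the output settles at whatever value cancels that run's disturbance.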

The idea of the internal comparator is a great mystery to me, and
I don't see how one can defend the existence of such a thing

Well, try to model control without one, then;-)

The comparator, if it exists, is awfully fleeting and episodic.

On what data do you base this conclusion? All of our data suggest
that a model that continuously compares a perceptual variable to an
internal reference specification for that variable accounts for
human purposeful behavior quite accurately. I recommend that you try
some of my on-line demos of control
(http://www.leonardo.net/Marken/demos.html) to see what's up.

The PCT attempt to explain the control or management of perceptions
or impressions is actually quite similar to Erving Goffman's
dramaturgical theory of action and impression management, although
no internal comparator is posited.

That reminds me of a fellow who came up to a friend of mine (who had a
cool car) and said that his car was just like my friend's: except that
it was blue, it was a Ford, it was a four door, and it didn't have white
walls (it was a while ago) ;-)

My quest is to get a better working knowledge of PCT through
analysis of its working domain assumptions.

I humbly suggest that it would be best for you to first get a better
working knowledge of the phenomenon that PCT explains: purposeful
behavior or control (see my first Demo at
http://www.leonardo.net/Marken/demos.html). Then I would suggest
learning how the PCT model _works_ rather than worrying about its
_working_ domain assumptions.
A great way to learn how the PCT model works is through the use of
computer simulation; I recommend Wolfgang Zocher's SIMCON program, which
is available for PCs at:

ftp://lynx.ed.uiuc.edu/LRS2/CSG/Computer_Programs/MS-DOS/PCTdemos%20(Forssell)/

one could knock oneself out building models with comparators and
saying they parallel what's going on in human beings. But I need
more proof that such a thing really exists.

I don't know about "proof". But I think you'll find some pretty
convincing evidence that a simple perceptual control model (with
comparator -- there is no other kind) accounts for the basic
phenomena of purposeful behavior.

Best

Rick

[From Bill Powers (970325.0915 MST)]

James Chriss (970324) --

I am still trying to understand the fundamentals of PCT, and I have a
few questions about working assumptions, whether ontological,
epistemological, or axiological. Please bear with me here.

1. Powers has commented that cybernetics, although a good starting
point, is flawed in fundamental ways, with regard to its ability to
inform PCT and explanations of the behavior of human beings. Would it
be fair to suggest that one of the weaknesses of cybernetics is that it
assumes a closed system, and that the causality within a cybernetic
system is essentially linear?

No. The original cybernetics assumed a closed causal loop just as PCT does,
and did not rely on lineal causality. Originally cybernetics was mainly
about control theory.

Bertalanffy, for example, commented that
the cybernetic approach retains the Cartesian machine model of the
organism, unidirectional causality, and closed systems, which appears to
be unsuitable for explaining human behavior.

Bertalanffy was an anti-mechanist and he knew nothing about control theory.
Don't confuse a closed-loop system with what he calls a "closed system."
Bertalanffy meant by an "open system" a system that was open to influences
from its environment (as all control systems are), and the opposite by a
"closed system." But the way he conceived of an open system was such as to
make it into a cause-effect or stimulus-response system! What Bertalanffy
seemed to be alluding to was the idea of a thermodynamically open or closed
system.

This seems to be
essentially Powers' position. Also worth noting in the cybernetic
system is that information can only decrease, in that the system
functions to decrease energy.

This is a garble and gets the cybernetic position exactly backward.
Information in an organism increases, at the expense of a decrease
elsewhere; the term you want is _entropy_, not energy. Decreasing entropy is
said to be the equivalent of increasing information. Others may correct my
usages here; I don't use these concepts.

Cybernetic systems can change, but they
cannot change intrinsically; change must come from an outside force.
This is similar to the stimulus-response assumptions of behaviorism
which PCT seems to reject. (And as a sociologist, so do I.) Is all
this compatible with PCT assumptions?

No. Nor is it a correct representation of cybernetic systems.

2. If indeed PCT assumes an open systems model, then the assumption of
equifinality, in which the same state may be reached from different
initial conditions in different ways, seems to apply. In such a system,
the components become more specialized or self-differentiating. This
leads to a decrease in randomness, or negentropy. This then allows for
nonmechanistic interaction among the system's components. Within such
an open system, information can increase, contrary to the assumptions of
the cybernetic model.

Again, completely wrong. "Equifinality" is an _alternative_ to the PCT view,
claiming that reaching the same state from different initial conditions
simply reflects the existence of alternative causal pathways. In PCT this
phenomenon is shown to be the result of a control system opposing
disturbances; it is _necessary_ to use different means to counteract
different disturbances, if the intended result is to occur. What you're
talking about is an attempt to explain behavior without the use of control
concepts -- basically a philosophical rather than an engineering approach.

When you refer to "non-mechanistic interactions" I don't know what you are
talking about. Magic?

The open system can attain a steady state through continuous exchange of
matter, energy, and information among components. But I don't
understand how this could really be occurring, empirically, within the
human organism with regard to sensory (or neural) information, brain
physiology, and next courses of action.

The answer is that it doesn't occur. The "open system" idea refers strictly
to the exchange of physical matter and energy with the environment. It is
related only peripherally to PCT. There is certainly no "exchange of sensory
information" with the environment.

The idea of the internal
comparator is a great mystery to me, and I don't see how one can defend
the existence of such a thing, even within the assumptions of an open
system. The comparator, if it exists, is awfully fleeting and episodic.
It can hardly be called a comparator.

Do you never compare the ongoing results of your actions with the result you
desire to occur, while you're acting?

The PCT attempt to explain the
control or management of perceptions or impressions is actually quite
similar to Erving Goffman's dramaturgical theory of action and
impression management, although no internal comparator is posited. At
first I thought PCT was close to Parsons' AGIL theory and cybernetic
hierarchy of control, but now that I'm getting further into Powers' book
I see that that initial assumption is in error. My quest is to get a
better working knowledge of PCT through analysis of its working domain
assumptions.

How about just understanding how the PCT model itself works, first? If you
spend all your time looking for similarities to other theories, how are you
going to understand PCT itself?

I would particularly appreciate greater guidance and
clarity with regard to the existence--whether this is ontological or
merely analytic--of the comparator. That would be a big help. In other
words, one could knock oneself out building models with comparators and
saying they parallel what's going on in human beings. But I need more
proof that such a thing really exists.

A comparator is a function that receives a perceptual signal and, with the
opposite sign, a reference signal. Its output is a measure of the difference
between these two signals. There are many places in the nervous system from
the spinal reflexes to the mid-brain systems where just this physical
arrangement exists. It probably exists at higher levels, too, but nobody in
neurology has been looking for it.

Any time you detect a difference between what you ARE perceiving and what
you WANT to be perceiving, something in you is doing a comparison. The
object of control is to make the actual perception match the reference
perception, and continual comparison of the two is necessary to make this
possible.
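The function being described is simple enough to write down. Here is a minimal sketch in Python (the names are illustrative, not from any PCT software): the comparator receives a perceptual signal and, with opposite sign, a reference signal, and its output is the difference, which drives the system's output until the perception matches the reference:

```python
def comparator(reference, perception):
    """Error signal: how far the perception is from the reference."""
    return reference - perception

# Continual comparison during control: the error drives the output,
# and the output becomes the new perception (no disturbance here).
reference = 10.0
perception = 0.0
output = 0.0
for step in range(200):
    error = comparator(reference, perception)
    output += 0.05 * error   # output integrates the error
    perception = output      # feedback connection

print(round(perception, 2))  # -> 10.0
```

The comparison happens on every pass through the loop, which is why control is continuous rather than episodic: the moment the perception drifts from the reference, a nonzero error appears and the output corrects it.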

I don't know what else could be offered as "proof that such a thing really
exists." Neural circuit-tracing is about the only way to do that, and we're
a long way from such a capability for most of the brain. We can say that
every control system requires a function exactly equivalent to comparison,
and that models containing comparators do manage to reproduce some real
behaviors quite well. Beyond that, I don't know what you're looking for.

Best,

Bill P.