SR and PCT (Was A wish and a plea)

[From Bill Powers (2004.08.31.0651)]

Fred Nickols (2004.08.30.0917) –

(Hank Folson): > How can you possibly “integrate” theories that state that
organisms are responsive with perceptual control theory that
states that organisms are purposive? They are opposites.

(Fred Nickols) I think I understand what Hank is driving at in the remark
immediately above but I’m having difficulty with the way it’s
stated. First, I’m not sure that “purposive” and
“responsive” are “opposites.”

I have wondered, in gloomier moods, whether PCT isn’t doomed to lurk in
the background of science until “the right funerals” have
happened. What concerned me was a possibility that phenomena like
imprinting occur at the highest levels of organization (where there
are no higher levels), so that the first plausible ideas a brain
encounters when it first becomes educated so impress it with their logic
and appropriateness that they become permanent fixtures, incapable of
further modification. In other words, once you experience a big AHA
concerning system concepts, you can’t undo it again – it’s there for
life. Hence “the right funerals.” Perhaps I am doomed to
believe for the rest of my life that PCT makes beautiful sense simply
because once having grasped it at my highest levels, I have left no room
for any other idea, even a better one, to take root. I have used up that
part of my brain.
As I say, that’s a gloomy view, and I don’t really think it’s right. But
it is certainly true, I think, that changing a system concept or worse,
replacing it with another, is extraordinarily difficult. Even when one
accepts a new idea, the old one keeps popping up again, saying “But
then how do you explain …?”
I’m not going to argue here the relative merits of saying “Behavior
never occurs except for the purpose of controlling some perception”
and “Behavior is a conditioned response to environmental
stimuli.” I’m not going to compare “Changes of organization
continue until certain critical variables are brought under control”
with “Special reinforcing stimuli determine which behaviors will
become more probable.” But I can assert that these statements are
not equally valid “perspectives” that one can move between at
will; each is a denial of the other in the pair, just as “the earth
is round” is a denial of “the earth is flat”. One can
certainly vacillate between such pairs of ideas, but it is impossible
that they can both be true at the same time. My point here is that there
is no way that I can help anyone to choose between them, or even to see
that a choice must be made.
Since I imprinted on the system concepts of PCT and not some other idea I
might have come across first, but didn’t accept, I can’t understand how
the logic of behaviorism can still seem to make sense after one has
learned about PCT. I especially can’t see how one could put behavioristic
concepts beside PCT and think that both could make sense at the same
time. So I’m going to ask you, and Bruce Abbott, too, to explain to me
and each other what these different perspectives are and how they can
both be correct. Why aren’t they different perspectives,
meaning that one can only switch back and forth between them: round,
flat, round, flat?
Is my belief in PCT justified by nothing more than the years of effort I
have put into understanding it, using it, and teaching it? Is it simply
that I have invested so much time and effort in it, and have depended on
it for what standing I have in science and with friends, that makes me
persist in clinging to it? Is it only the desire to be right, and to have
been right, that prevents me from seeing the glaring flaws in PCT? A
behaviorist has the right to ask me such questions and expect me to give
them serious thought, as I have often asked them of myself.
But I have the same right to ask them of behaviorists, and all others who
think that another theory can be merged with PCT – and thus be rescued
from extinction. So what are your thoughts on this
matter?

Best,

Bill P.

[From Dick Robertson, 2004.08.31.1515CDT]

···

From: Bill Powers powers_w@FRONTIER.NET

Date: Tuesday, August 31, 2004 9:09 am

Subject: SR and PCT (Was A wish and a plea)

[From Bill Powers (2004.08.31.0651)]

Fred Nickols (2004.08.30.0917) –

I have wondered, in gloomier moods, whether PCT isn’t doomed to
lurk in the background of science until “the right funerals” have happened. What
concerned me was a possibility that phenomena like imprinting
occur at the highest levels of organization (where there are no higher
levels), so that the first plausible ideas a brain encounters when it first
becomes educated so impress it with their logic and appropriateness
that they become permanent fixtures, incapable of further modification.

Yes, but your description is much too “cognitive.” The fact that earliest experiences (read: perceptions that accompany life-support-system error states) tend to become “stamped in”, i.e., to accompany major reorganizations, was discovered a hundred years ago by the much maligned and misunderstood Freud. He observed that this seemed to be the case because his patients would remember early formative experiences, usually along with strong emotional accompaniment. He called these experiences traumas. He also felt that he observed that his patients often seemed to show improvement after such episodes.

I would readily regard such an episode as a subsequent reorganization that “cured” an earlier one, one that might have been functional for survival at an early age but was no longer facilitative for further development.

Freud didn’t have a very good explanation of _why_ earliest experiences should be so formative; he just seemed to observe and conclude that it was the case. (But see Plooij for a more comprehensive explanation, and a PCT one, BTW.)

In other words, once you experience a big AHA concerning system concepts, you can’t undo it again – it’s there for life. Hence “the right funerals.” Perhaps I am doomed to believe for the rest of my life that PCT makes beautiful sense simply because once having grasped it at my highest levels, I have left no room for any other idea, even a better one, to take root. I have used up that part of my brain.

That view seems to me to give insufficient credit to your own proposition about reorganization. According to your view, a given organization of the CNS tends to persist as long as the organism functions well enough to keep its life-support systems from going into error states. In my clinical experience I have come to suspect that the longer a given pattern of the CNS persists, the more subsidiary systems come under control that will resist any new organization. The reason they resist is that, again according to reorganization theory, new reorganization sends random signals into the CNS. This would certainly seem to imply interference with existing controls. The subjective aspect of this “instability” while reorganization is going on fits, for me, the descriptions that people, including me, label with the term “anxiety.”

As I say, that’s a gloomy view, and I don’t really think it’s
right. But it is certainly true, I think, that changing a system concept or worse,
replacing it with another, is extraordinarily difficult.

See the above.

Even when one accepts a new idea, the old one keeps popping up again, saying

“But then how do you explain …?”

I think this comes when one goes to reflect upon the internal sensations that accompany reorganization in order to “understand” what is happening.

I’m not going to argue here the relative merits of saying “Behavior never occurs except for the purpose of controlling some perception” and “Behavior is a conditioned response to environmental stimuli.” I’m not going to compare “Changes of organization continue until certain critical variables are brought under control” with “Special reinforcing stimuli determine which behaviors will become more probable.” But I can assert that these statements are not equally valid “perspectives” that one can move between at will; each is a denial of the other in the pair, just as “the earth is round” is a denial of “the earth is flat”.

You’re not alone, remember.

One can certainly vacillate between such pairs of ideas, but it is impossible that they can both be true at the same time. My point here is that there is no way that I can help anyone to choose between them, or even to see that a choice must be made.

Since I imprinted on the system concepts of PCT and not some other idea I might have come across first, but didn’t accept, I can’t understand how the logic of behaviorism can still seem to make sense after one has learned about PCT. I especially can’t see how one could put behavioristic concepts beside PCT and think that both could make sense at the same time. So I’m going to ask you, and Bruce Abbott, too, to explain to me and each other what these different perspectives are and how they can both be correct. Why aren’t they different perspectives, meaning that one can only switch back and forth between them: round, flat, round, flat?

I too, am interested in the answer to these questions; my philosophical side, that is, not my would-be scientific side.

Best,

Dick R.

[From Bill Powers (2004.08.31.1113 MDT)]
Dick Robertson, 2004.08.31.1515CDT –

Yes, but your description is much too “cognitive.” The fact that earliest experiences (read: perceptions that accompany life-support-system error states) tend to become “stamped in”, i.e., to accompany major reorganizations, was discovered a hundred years ago by the much maligned and misunderstood Freud.

I wasn’t talking about early experiences, but the experiences we have in
later life, where we are exposed to well-argued points of view delivered
in dramatic or engaging ways by mentors we admire and would like to
imitate, and whose reasoning seems unassailable to us. We perceive the
system concept they offer, think it is the cat’s pajamas, and are then
stuck with it for perhaps the rest of our lives.

In other words, once you
experience a big AHA concerning system concepts,

you can’t undo it again – it’s there for
life.

That view seems to me to give insufficient credit to your own proposition about reorganization. According to your view, a given organization of the CNS tends to persist as long as the organism functions well enough to keep its life-support systems from going into error states. In my clinical experience I have come to suspect that the longer a given pattern of the CNS persists, the more subsidiary systems come under control that will resist any new organization. The reason they resist is that, again according to reorganization theory, new reorganization sends random signals into the CNS. This would certainly seem to imply interference with existing controls. The subjective aspect of this “instability” while reorganization is going on fits, for me, the descriptions that people, including me, label with the term “anxiety.”

I think reorganization at the highest levels is very slow and becomes
slower. How often do people change religions, political parties, ways of
reasoning, personalities, and professions? It does happen, but it’s rare.
I think we spend most of our time trimming the details of the system
concepts we adopt once we have managed to adopt one. I agree that we seek
inner peace and harmony by doing this, but we don’t often make any
fundamental changes.

I too, am interested in the answer
to these questions; my philosophical side, that is, not my would-be
scientific side.

We’ll see where this goes.

Best,

Bill P.

[Fred Nickols (2004.09.01.1007)] --

[Bill Powers (2004.08.31.0651)]

I'm not going to argue here the relative merits of saying "Behavior never
occurs except for the purpose of controlling some perception" and
"Behavior
is a conditioned response to environmental stimuli." I'm not going to
compare "Changes of organization continue until certain critical variables
are brought under control" with "Special reinforcing stimuli determine
which behaviors will become more probable." But I can assert that these
statements are not equally valid "perspectives" that one can move between
at will; each is a denial of the other in the pair, just as "the earth is
round" is a denial of "the earth is flat". One can certainly vacillate
between such pairs of ideas, but it is impossible that they can both be
true at the same time. My point here is that there is no way that I can
help anyone to choose between them, or even to see that a choice
must be made.

If I view "stimuli" very broadly (and no doubt loosely), I can fit them
under the heading of perceptions. If all I know of my world I know in the
form of my perceptions, it seems to me that would be true of stimuli as
well, no matter who might or might not be trying to manipulate me using them
and whatever form they might take. So, substituting the one for the other
yields "Special reinforcing perceptions determine which behaviors will
become more probable." In other words, I perceive the effects of my actions
and my actions are indeed shaped by the effects my actions have on those
conditions I am attempting to control. "Special reinforcing perceptions"
are perceptions informing me my actions had the desired effect(s). The
observer/manipulator, of course, thinks it's all due to the stimuli as
he/she sees them and does not take into account that it is my perception of
the stimuli in relation to my reference signals that does or doesn't produce
behavior. (Is it reference signal or reference condition? - I can never
keep those two straight.)

Anyway, I'm a lightweight here so I'll respond to Hank's query and then butt
out.

Hank wrote on 31 Aug:

Okay, even a layman should be able to tell why he was stimulated
by Hank Folson's post to Marc Abrams to respond: Why did one part
of my post stimulate you instead of the other parts? Why did you
respond to this post rather than the hundreds of other posts
floating around CSGnet? Why did the stimulus cause you to only
respond with a short statement rather than a serious paper? Or at
all? How are my statements stimulating you now, if at all? Please
put on your Behaviorist cap and let us know how you operate as a
stimulus-response system.

I don't think of myself as a S-R system; I think of myself as a living
control system. As best I can recall, I responded to the following portion
of your post to Marc:

[Hank Folson (2004.08.29)]

How can you possibly "integrate" theories that state that
organisms are responsive with perceptual control theory that
states that organisms are purposive? They are opposites.

I responded because I'm favorably disposed toward integration and
reconciliation in general and because I resent and resist dogma and dogmatic
approaches in general. In PCT terms, I suppose I responded because a
reference condition was being disturbed. Which one? I'm neither insightful
nor introspective enough to tell. But, as I wrote, I don't see the two
as "opposites." It's clear that many others do. I don't. That's possibly
a failing or error on my part or on theirs. I don't know which.

I'll try not to let disturbances provoke me to action (posting) in the
future but, what little I know of PCT says that's not likely to happen.

Regards,

Fred Nickols, CPT
Distance Consulting
"Assistance at a Distance"
nickols@att.net
www.nickols.us


[From Bill Powers]

Fred Nickols (2004.09.01.1007) --

If all I know of my world I know in the
form of my perceptions, it seems to me that would be true of stimuli as
well, no matter who might or might not be trying to manipulate me using them
and whatever form they might take.

But perceptions are in your brain, and stimuli are in your environment.
Other people can manipulate stimuli because they can see the environment,
but how can they know what perceptions of yours they're manipulating?
Anyway, if they try to manipulate either a stimulus or a perception that
you're controlling, you will act to prevent them from affecting it, if you
can -- the conflict will be obvious. If they manipulate a stimulus that
you're not perceiving and controlling, at most it will be a disturbance of
something else you're controlling, which you will counteract, and if not
that, you'll just ignore it and do nothing. Have I left out any cases? In
none of these cases is your action simply a "response to a stimulus," is it?

If you loosen your definitions enough, you can make aardvarks into zebras
(they both have four legs, don't they?) or antimony into antipasto (they
both have to do with ants, don't they?). I think we should keep our
definitions tight, if we want to be talking about the same thing.

Best,

Bill P.


[From Bruce Abbott (2004.09.02.1300 EST)]

[Bill Powers (2004.08.31.0651)]

Since I imprinted on the system concepts of PCT and not some other idea I
might have come across first, but didn’t accept, I can’t understand how
the logic of behaviorism can still seem to make sense after one has
learned about PCT. I especially can’t see how one could put behavioristic
concepts beside PCT and think that both could make sense at the same
time. So I’m going to ask you, and Bruce Abbott, too, to explain to me
and each other what these different perspectives are and how they can
both be correct. Why aren’t they different perspectives,
meaning that one can only switch back and forth between them: round,
flat, round, flat?
. . . A behaviorist has the right to ask me such questions
and expect me to give them serious thought, as I have often asked them of
myself.
But I have the same right to ask them of behaviorists, and all others who
think that another theory can be merged with PCT – and thus be rescued
from extinction. So what are your thoughts on this
matter?

I think what Marc was referring to, when he brought up my name, came from
a conversation we had some months ago when he was developing plans to
open a new discussion forum. You and I have had at least a brief
conversation about it, although it’s been a while and I’m quite fuzzy
about the contents of it. Basically, it comes to this:

Reorganization under the current HPCT proposal is a continuous process
whose rate increases with the magnitude of persistent error in the
so-called “intrinsic” variables. The reorganization process
introduces small, random changes to the perceptual/control hierarchy.
Changes that lead to better control over the intrinsic variables reduce
the level of persistent error, slowing the rate of reorganization and
thus tending to preserve the changes.
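This summary can be sketched as a toy model. The `intrinsic_error` function, the ideal gain value, and all constants below are invented stand-ins for illustration, not part of any HPCT simulation: a single parameter is perturbed in a random direction, with step size proportional to the persistent error, so organizations that improve control are automatically preserved as the error (and hence the rate of further change) shrinks.

```python
import random

random.seed(0)  # reproducibility only

def intrinsic_error(gain, ideal=5.0):
    # Stand-in for persistent intrinsic error: assume control quality
    # degrades the further the system's gain is from some ideal value.
    return abs(gain - ideal)

def reorganize(gain=0.0, steps=2000, rate=0.05):
    """E. coli-style reorganization: random changes whose size is
    proportional to persistent error. Improvements persist because
    they shrink the error and thus slow further change."""
    direction = random.choice([-1.0, 1.0])
    prev = intrinsic_error(gain)
    for _ in range(steps):
        gain += direction * rate * prev   # step scaled by current error
        err = intrinsic_error(gain)
        if err >= prev:                   # no better: "tumble" randomly
            direction = random.choice([-1.0, 1.0])
        prev = err
    return gain

final_gain = reorganize()
```

Note that nothing in the loop knows where the ideal value is; the error-scaled step size alone makes good organizations sticky.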

The problem with this proposal, as you know, is that it does not
necessarily make those changes in just the right places where they would
do the most good. (You have suggested that consciousness may have
something to do with focusing those changes where they are needed, but a
rigorous proposal as to how this mechanism works is currently
lacking.)

What is needed in HPCT is a mechanism for reorganization that focuses its
changes where they do some good. An example where such a change does
occur is provided by the hungry rat that learns to press a lever in order
to obtain food. Here, the animal already has a (rather complex) control
system organized to locate food and, when food is found, consume it. In
part, error in this system may set a reference in a sub-system
controlling for the perception of food, whose actions may include what we
might describe as “searching” behavior. Eventually, the rat
does something in the course of this activity that depresses the lever,
and food immediately appears. The rat may learn from this experience to
press the lever, or it may require a few more repetitions of the lever
press before the rat seems to “get the idea.” But what emerges
is a new means of controlling for the perception of a food
pellet.

How is this reorganization to be explained? We have an established system
that at times sets a reference for perceiving that food is available to
eat. As the way to make the pellet appear is unknown to the rat, the
system defaults to search mode. There are already control systems present
that allow the animal to engage in various activities such as approaching
a specific location, sniffing in particular places, rearing, and so on.
When the animal happens to bring its paw down on the lever during the
course of such activities, it already has numerous finely-tuned control
systems that were actively positioning the animal in the chamber at the
time the lever moved downward. As this action happened, suddenly, a
pellet appeared. This immediately eliminated the on-going error in the
system controlling for perceiving available food. A few repetitions of
this coincidence and the rat no longer searches the chamber, apparently
at random, for food. Now, each time it wants another pellet, it
approaches and presses the lever. It would seem that the control system
engaged in obtaining food now includes a new conditional branch in the
program: if lever present, then approach and press.
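The new conditional branch can be rendered as a one-step toy decision function. All names and perceptions here are my own illustrative shorthand, not a model anyone on this list has proposed: given the current perceptions, return the action that should reduce the food-control error.

```python
def food_control_step(lever_present, food_perceived):
    """One step of the learned control program: a toy rendering of
    'if lever present, then approach and press'. Before learning,
    the lever branch would be absent and only search would remain."""
    if food_perceived:
        return "eat"                  # error gone: consume the pellet
    if lever_present:
        return "approach and press"   # the newly learned branch
    return "search"                   # default: search mode
```

The point of the sketch is only that the learned change is a new route for an existing error signal, not a new goal.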

Fred Nickols offered the following description of this process:

In other words, I perceive the effects of my actions and my actions are indeed shaped by the effects my actions have on those conditions I am attempting to control. “Special reinforcing perceptions” are perceptions informing me my actions had the desired effect(s).

I would amend this to state that what “reinforces” the change
in the organization of the system might better be described as an
immediate reduction in the level of error of the system initiating the
actions (in my example, the system having a reference for finding
food).

There is also one other point I wish to discuss. Some perceptions are
accompanied by an experience of pleasure or displeasure. If the
experience is pleasurable, we will often control for that perception; if
aversive, we will often control for the absence of that perception.
If I recall correctly, you have argued that these feelings (perceptions,
really) are merely side-effects. We have references for the initial
perceptions and the feelings are simply interpretations of our own
actions with respect to those stimuli (as James proposed with respect to
emotions).

An alternative, which I favor, is that what we are controlling for is the
experience (pleasure, absence of displeasure) that accompanies certain of
our perceptions. The link between perception and feeling-state may be
innate. For example, under proper conditions we humans usually find a
sweet taste to be pleasurable. Because sweetness is a taste quality
evoked by sugars, and sugars both supply energy and are usually
associated with needed vitamins and minerals (in fruit), natural
selection presumably would have favored an association of sweetness with
pleasure and, seeking the things that give us pleasure, we would thereby
be led to consume what the body needs. It could also be acquired through
experience.

If these feelings are merely a side-effect of control action, I fail to
see why they would exist as phenomena of consciousness – why the ability
to experience them would have been preserved during the evolutionary
process.

Best wishes,

Bruce A.

[From Bruce Nevin (2004.09.03 13:37 EDT)]

Bruce Abbott (2004.09.02.1300 EST)–
(Standard time already?)

What is needed in HPCT is a mechanism for reorganization
that focuses its changes where they do some good.

I have asked the corollary question, what is in it for the nerve
cells.
A control loop generates an error signal. The error signal itself
(according to theory) is not perceived by that or any other control loop.
It certainly is not perceived as an error signal by the cell producing
it, much less by the cells involved (according to hypothesis) in
reorganization such that error is reduced.
What is it about control loop error that affects controlled variables in
the nerve cells that branch, make and break connections, change their
production and assimilation of various substances, etc. resulting in
reorganization for the control loops of which they are small parts?
Assuming that these cells are controlling some input variables by means
of control loops closed through the cellular environment, I have
speculated that persistent error in a control loop results in changes in
the local cellular environment resulting in local reorganization. What
those changes are, indeed whether or not they occur, and indeed whether
or not reorganization occurs as predicted by the PCT hypothesis by that
name, are all of course matters for research.
Isaac Kurtzer’s observation that nerve cells in vitro
branch like crazy, making and breaking connections with any other cell,
including themselves, suggested another related idea. If this is taken to
be the normal behavior of a neuron when it is not part of a control loop
that is controlling successfully, the question is not what circumstances
initiate reorganization, but rather what it is about successful control
that suppresses reorganization.

This reverses the polarity of the E. coli model of reorganization.
Consider the person searching for a coin somewhere in the room. Instead
of changing direction whenever the response is “no”, they keep
changing direction as long as there is no response, until the response is
“yes”. In a more realistic gradient situation, they
“tumble” to maximize the signal, then stop.

This doesn’t change much for E. coli, but investigations of the cellular
basis of reorganization would have to allow for both possibilities.
Instead of looking for changes in the cellular environment that accompany
the error signal traversing one part of the control loop, look for
changes in the cellular environment when the loop is controlling its
input successfully. This makes more evolutionary sense, since successful
control at the loop level is a good predictor of survival of the
superordinate organism, and therefore a good predictor of stability in
the cellular environment.
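The two polarities can be caricatured as rate functions. The functional forms and constants below are my own illustration, not anything proposed on this list: in the standard view the reorganization rate is driven up from zero by error; in the reversed view, branching runs at its in-vitro maximum by default and successful control suppresses it.

```python
def rate_standard(error, gain=1.0):
    # Standard polarity: reorganization rate rises from zero
    # in proportion to persistent error.
    return gain * abs(error)

def rate_reversed(error, max_rate=1.0, k=0.1):
    # Reversed polarity: branching is on by default at the in-vitro
    # ceiling (max_rate) and is suppressed toward zero only as
    # control succeeds (error -> 0).
    return max_rate * abs(error) / (abs(error) + k)
```

Both forms predict quiescence under successful control; the observable difference a cellular experiment might look for is that the reversed form saturates at the in-vitro ceiling rather than growing without bound as error grows.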

/Bruce Nevin


[From Bill Powers (2004.09.03.1418 MDT)]

Bruce Abbott (2004.09.02.1300 EST)–

Reorganization under the current
HPCT proposal is a continuous process whose rate increases with the
magnitude of persistent error in the so-called “intrinsic”
variables. The reorganization process introduces small, random changes to
the perceptual/control hierarchy. Changes that lead to better control
over the intrinsic variables reduce the level of persistent error,
slowing the rate of reorganization and thus tending to preserve the
changes.

The problem with this proposal, as you know, is that it does not
necessarily make those changes in just the right places where they would
do the most good.

Yes, that is a problem, although there are possible solutions for it. The
same problem holds for reinforcement theory, doesn’t it? A stimulus is
considered to be reinforcing if the behavior it follows increases in
frequency. But how does the reinforcer know which behavioral systems to
reinforce? The assumption that the system producing the reinforcer is
made to act more frequently asserts this specificity from the start; it
doesn’t provide the mechanism. What steers the effect of reinforcers to
just the parts of the brain that produce that behavior instead of some
other behavior? Same problem.

Reinforcement increases in frequency if the behavior that produces it
increases in frequency; I don’t see any argument with that. But what
makes that particular behavior increase in frequency? Skinner proposed
that the behavior is controlled by its consequences, picking the
consequences as a place to start keeping track of cause and effect. But
that is arbitrary; you can also start with the behavior if you wish. I
think that neither is correct, because we’re talking about a closed
loop.

(You have suggested that
consciousness may have something to do with focusing those changes where
they are needed, but a rigorous proposal as to how this mechanism works
is currently lacking.)

What is needed in HPCT is a
mechanism for reorganization that focuses its changes where they do some
good.

There is actually such a mechanism already present; awareness isn’t the
only possibility. Study how the reorganization works in the
27-degree-of-freedom arm model. In this model, every control system
continuously reorganizes its own input function independently of all the
others even as the systems interact. The result is that the set of input
functions becomes the transpose of the set of output functions and
control becomes very effective. Because of that effectiveness, errors are
kept small and reorganization gets very slow.

In effect, reorganization occurs everywhere at once in this system. But
recall that the rate of change of parameters is very slow to start with,
and is proportional to the rms error, so as the error gets smaller, the
changes due to reorganization become much smaller and the process goes
much more slowly. It’s the very control system that becomes organized
that “selects” the right reorganizing action, turning the
reorganization down as the control system becomes more able to keep the
error small. No third party is needed.
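A minimal sketch of this "no third party" arrangement, with a two-weight setup and target values invented purely for illustration (they stand in for the idea that the input functions converge toward a transpose of the output functions): each loop perturbs only its own input weight, in a random direction scaled by its own current error, so reorganization slows exactly where control succeeds.

```python
import random

random.seed(2)  # reproducibility only

def local_reorganization(targets=(3.0, -2.0), steps=4000, rate=0.02):
    """Each control loop reorganizes its own input weight independently,
    stepping in a random direction scaled by its own current error.
    No central mechanism decides where changes should go."""
    weights = [0.0 for _ in targets]
    dirs = [random.choice([-1.0, 1.0]) for _ in targets]
    errs = [abs(w - t) for w, t in zip(weights, targets)]
    for _ in range(steps):
        for i, t in enumerate(targets):
            weights[i] += dirs[i] * rate * errs[i]  # error-scaled step
            e = abs(weights[i] - t)
            if e >= errs[i]:                        # worse: local "tumble"
                dirs[i] = random.choice([-1.0, 1.0])
            errs[i] = e
    return weights

tuned = local_reorganization()
```

Each loop quiets down on its own schedule as its own error shrinks, which is the sense in which reorganization happens "everywhere at once" yet ends up focused where it was needed.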

This would seem to fit with Bruce Nevin’s/Isaac Kurtzer’s observation
about neurons growing and destroying connections like mad in vitro. That
would look like the maximum rate of reorganization, and it would be going
on whenever control wasn’t working (as it doesn’t in vitro). In vivo, as
the neuron’s connections become more suitable for controlling some
variable, the error that is driving the reorganization declines, going to
some very low level when control becomes efficient. What we would like to
know is the kind of error that results in this mad random branching and
that can be reduced in vivo when this branching produces the right
control systems. That remains to be determined.

An example where such a
change does occur is provided by the hungry rat that learns to press a
lever in order to obtain food. Here, the animal already has a (rather
complex) control system organized to locate food and, when food is found,
consume it. In part, error in this system may set a reference in a
sub-system controlling for the perception of food, whose actions may
include what we might describe as “searching” behavior.
Eventually, the rat does something in the course of this activity that
depresses the lever, and food immediately appears. The rat may learn from
this experience to press the lever, or it may require a few more
repetitions of the lever press before the rat seems to “get the
idea.” But what emerges is a new means of controlling for the
perception of a food pellet.

How is this reorganization to be explained?

I’m not sure it should be called reorganization. It could be a systematic
learned control process (sequence). A search strategy is a control
process that should terminate when its object has been achieved. It’s
pretty systematic in rats that I have seen, courtesy of your videos. But
systematic or not, it would fit the same general paradigm as E. coli
reorganization, in that error would produce the search and the cessation
of the error would terminate the searching action. Here the “searching
action” consists of activating lower-order systems by adjusting
their reference signals, first this set, then another set, and so on
until something happens to reduce the error.
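That search loop can be sketched in a few lines (all names here are
hypothetical — this illustrates the paradigm, not an actual rat model):

```python
import random

def search(activities, resulting_error, threshold=0.1, seed=0):
    """Error-driven search: keep activating lower-order systems
    (here just names standing in for sets of reference signals)
    until one of them brings the error below threshold.

    `resulting_error(a)` stands in for the environment: it returns
    the higher-level error remaining after activity `a` is tried.
    """
    random.seed(seed)
    tried = []
    while True:
        a = random.choice(activities)   # activate one set of references
        tried.append(a)
        if resulting_error(a) < threshold:
            return a, tried             # error gone: search stops

# A toy chamber: only pressing the lever removes the food error.
activities = ["sniff", "rear", "approach_corner", "press_lever"]
err = lambda a: 0.0 if a == "press_lever" else 1.0
found, history = search(activities, err)
```

The loop terminates only when the error that drove the search is gone,
which is the sense in which the search "selects" the successful action
without any third party judging it.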

It seems to me that you’re saying the same thing:

We have an established system
that at times sets a reference for perceiving that food is available to
eat. As the way to make the pellet appear is unknown to the rat, the
system defaults to search mode. There are already control systems present
that allow the animal to engage in various activities such as approaching
a specific location, sniffing in particular places, rearing, and so on.
When the animal happens to bring its paw down on the lever during the
course of such activities, it already has numerous finely-tuned control
systems that were actively positioning the animal in the chamber at the
time the lever moved downward. As this action happened, suddenly, a
pellet appeared. This immediately eliminated the on-going error in the
system controlling for perceiving available food. A few repetitions of
this coincidence and the rat no longer searches the chamber, apparently
at random, for food. Now, each time it wants another pellet, it
approaches and presses the lever. It would seem that the control system
engaged in obtaining food now includes a new conditional branch in the
program: if lever present, then approach and press.

I don’t know if you ever knew of Wayne Hershberger’s doctoral thesis. He
set up chicks in an apparatus such that if the chicks approached the food
bowl, it receded from them. To eat, the chicks had to back away from the
food bowl. Most of them were able to master this unnatural control
task.

I agree with the description of events as you present it, but not with
Fred Nickols’ characterization of it:

“Special
reinforcing perceptions”

are perceptions informing me my actions had the desired
effect(s).

I would amend this to state that what “reinforces” the change
in the organization of the system might better be described as an
immediate reduction in the level of error of the system initiating the
actions (in my example, the system having a reference for finding
food).

These ways of describing it omit the most obvious fact: the “special
reinforcing perception” is precisely the one that the animal is
trying to control. It’s not some separate aspect of the food or water;
it’s whatever aspect of the food or water itself for which the animal has
a nonzero reference level. That IS the desired effect. Putting it Fred’s
way makes it sound as if the reinforcer is something separate from the
desired effect, like a Post-it stuck to the food pellet saying “This
is what you wanted.” But that is an unnecessary embellishment. The
animal has a reference level for ingesting certain objects and already
knows what it wants. It has the ability to search for such objects and to
learn how to make them available. Sometimes a learned systematic search
process suffices; sometimes reorganization is required, as when an
experimenter thinks up some odd requirement that must be met to get
food.

There is also one other point I
wish to discuss. Some perceptions are accompanied by an experience of
pleasure or displeasure. If the experience is pleasurable, we will
often control for that perception; if aversive, we will often control for
the absence of that perception.

I disagree with this way of describing it, if I understand you, which
implies that a pleasant perception is so because it triggers a separate
signal saying that this perception is pleasant or unpleasant. I
understand that this is the received wisdom – endorphins and all that.
But I think that is an unparsimonious proposition, and it ignores the
fact that the same perception can be either pleasant or unpleasant,
depending on your current reference level for it. Isn’t it
sufficient to say that the experience of bringing a perception, any
perception, to match a reference signal that we have set to a high level
(including all the auxiliary experiences that go with this) is itself
what we mean by “pleasure”? And the experience of a high amount
of a perception, any perception, for which we have set a low reference
level is itself what we call unpleasant or painful?

If I recall correctly, you have
argued that these feelings (perceptions, really) are merely
side-effects.

Not side-effects: the effects themselves ARE the pleasure or displeasure.
We have been looking elsewhere for something that is right under our
noses. I don’t think there is any other signal to find. That there was
some other signal was somebody’s idea long ago that apparently never got
questioned. It was proposed long before anyone heard of endorphins or
pleasure centers, and I think it heavily influenced the interpretation of
the data that led to the idea of pleasure and pain centers, endorphins,
and so on.

We have references for the initial
perceptions and the feelings are simply interpretations of our own
actions with respect to those stimuli (as James proposed with respect to
emotions).

That’s not a bad idea, but what I’m proposing is even
simpler.

An alternative, which I favor, is
that what we are controlling for is the experience (pleasure, absence of
displeasure) that accompanies certain of our
perceptions.

So the only reason you control for keeping your car on the right side of
the road is that you experience pleasure or absence of displeasure from
doing so? That just does not ring a bell for me. I think it is the
perceptions that we control for, period. We don’t need all these extra
signals floating around to explain that. I’ve proposed a connection
between learning and our physical well-being in the form of a
reorganizing system, which takes care of basic learning. We learn to
control our experiences; when we succeed we are content; when we fail, we
feel driven to do something about it, or withdraw.

The link between perception
and feeling-state may be innate. For example, under proper conditions we
humans usually find a sweet taste to be pleasurable. Because sweetness is
a taste quality evoked by sugars, and sugars both supply energy and are
usually associated with needed vitamins and minerals (in fruit), natural
selection presumably would have favored an association of sweetness with
pleasure and, seeking the things that give us pleasure, we would thereby
be led to consume what the body needs. It could also be acquired through
experience.

I don’t buy those “rational natural selection” arguments. Why
not just say that we have a high reference level for sugars and the
feelings we get from ingesting them? You can attribute that reference
level to evolutionary processes, but that is only to say that organisms
who set those reference levels too low didn’t survive. We, perhaps, can
see why that might be so, but that’s a just-so story, unprovable,
unfalsifiable.

Since any perception can be either pleasant or unpleasant, I can’t accept
the idea that there is an innate connection between perceptions and
feeling states.

If these feelings are merely a
side-effect of control action,

Not “merely a side-effect” – that is not what I say. They are
among the essential effects of control actions, and they are controlled
perceptions themselves. There is no other signal saying that a
perception is pleasurable or painful; the states of perceptions relative
to our reference levels (of the moment) define pleasure and
displeasure.

I think there has been a long search, for a century or more, for what
determines pleasure and pain. Lacking a workable model and being stuck
more or less with the simple S-R view, theorists postulated an elaborate
set of ad-hoc explanations, none of which, in the PCT model, are
necessary.

Best,

Bill P.

···

I fail to see why they would
exist as phenomena of consciousness – why the ability to experience them
would have been preserved during the evolutionary process.

Best wishes,

Bruce A.

[From Bruce Gregory (2004.0904.0730)]

Bill Powers (2004.09.03.1418 MDT)

Would it be a fair characterization of PCT to say that emotions are an
epiphenomenon to the extent that they play no working role in the
model?

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bill Powers (2004.09.04.0846 MDT)]

Bruce Gregory (2004.0904.0730) --

Would it be a fair characterization of PCT to say that emotions are an
epiphenomenon to the extent that they play no working role in the
model?

No. They ARE the model at work. They are what we experience of our bodies
preparing to act and our brains putting reference signals into effect. That
is what we experience as emotions, as I see it.

Best,

Bill P.

[From Rick Marken (2004.09.04.1045)]

Bruce Gregory (2004.0904.0730)--

Would it be a fair characterization of PCT to say that emotions are an
epiphenomenon to the extent that they play no working role in the
model?

I would say that emotions should play the same role in the PCT model as
they play in the systems being modeled.

RSM

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bruce Gregory (2004.0904.1537)]

Bill Powers (2004.09.04.0846 MDT)

No. They ARE the model at work. They are what we experience of our
bodies preparing to act and our brains putting reference signals into
effect. That is what we experience as emotions, as I see it.

That is true of any epiphenomenon, isn't it? Those who think that
consciousness is an epiphenomenon would say that what we experience is
the model at work. All the "heavy lifting" is done by the neurons and
their connections. Consciousness is what we experience as the neurons
go about their business. In the same way, in PCT emotions do no heavy
lifting. They are along for the ride, so to speak. The PCT model does
not need emotions any more than it needs awareness. Control is modeled
in exactly the same way whether or not we are conscious of it or
whether or not we experience emotions.

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bruce Gregory (2004.0904.1611)]

Rick Marken (2004.09.04.1045)

Bruce Gregory (2004.0904.0730)--

Would it be a fair characterization of PCT to say that emotions are an
epiphenomenon to the extent that they play no working role in the
model?

I would say that emotions should play the same role in the PCT model as
they play in the systems being modeled.

Apparently they play no role in the systems being modeled. They
certainly play no role in PCT. Or to be more precise, I have never seen
them included in any PCT model. Is that fair?

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bill Powers (2004.09.04.1540 MDT)]

Bruce Gregory (2004.0904.1537)--

[Emotions] ARE the model at work. They are what we experience of our
bodies preparing to act and our brains putting reference signals into
effect. That is what we experience as emotions, as I see it.

That is true of any epiphenomenon, isn't it? Those who think that
consciousness is an epiphenomenon would say that what we experience is
the model at work. All the "heavy lifting" is done by the neurons and
their connections.

In the PCT model, the biochemical systems are involved, too. But I don't
think that's the problem here.

The PCT model does not need emotions any more than it needs awareness.
Control is modeled in exactly the same way whether or not we are conscious
of it or whether or not we experience emotions.

It's not exactly the same situation. Awareness is a problem whether you're
talking about the operation of the hierarchy or anything else. How do you
explain awareness of emotions?

I have a feeling that you see something missing in the PCT account of
emotions. Can you be more explicit about what it is?

Best,

Bill P.


[From Bruce Gregory (2004.0904.1918)]

Bill Powers (2004.09.04.1540 MDT)

I have a feeling that you see something missing in the PCT account of
emotions. Can you be more explicit about what it is?

No, I don't see anything missing. If emotions are controlled
perceptions, they are modeled by PCT. If emotions are not controlled
perceptions, they are not modeled by PCT. As far as I know, there are
no examples of purposeful behavior that require that emotions be
modeled. Until there are, nothing is missing from the PCT account of
emotions, as far as I can tell.

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bill Powers (2004.09.04.2330 MDT)]

Bruce Gregory (2004.0904.1918)]

Bill Powers (2004.09.04.1540 MDT)

I have a feeling that you see something missing in the PCT account of
emotions. Can you be more explicit about what it is?

No, I don't see anything missing. If emotions are controlled
perceptions, they are modeled by PCT. If emotions are not controlled
perceptions, they are not modeled by PCT. As far as I know, there are
no examples of purposeful behavior that require that emotions be
modeled. Until there are, nothing is missing from the PCT account of
emotions, as far as I can tell.

Let me put that differently. Do you think that there is something about
emotions that needs to be understood or modeled that is not accounted for
in the present model (other than our awareness of the phenomena)?

Best,

Bill P.

···

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bruce Gregory (2004.0905.0746)]

Bill Powers (2004.09.04.2330 MDT)

Let me put that differently. Do you think that there is something about
emotions that needs to be understood or modeled that is not accounted
for in the present model (other than our awareness of the phenomena)?

PCT models do not include emotions. The question from a PCT
perspective, I should think, is whether the absence of an explicit
mechanism involving emotions limits our ability to model purposeful
action. As far as I can tell, the answer is no. You have suggested a
mechanism that describes emotions as a side effect of frustrated
control. This suggestion has yet to be subject to empirical test, as
far as I am aware. When it is, we will have a better idea of what needs
to be understood or modeled.

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bill Powers (2004.09.05.0743 MDT)]

Bruce Gregory (2004.0905.0746)–

You have suggested a mechanism that describes emotions as a side effect
of frustrated control. This suggestion has yet to be subject to
empirical test, as far as I am aware. When it is, we will have a better
idea of what needs to be understood or modeled.

My problem here is with the term “side effect”. To me, a
side-effect of an action is some other experience, not the action
itself. Your words seem to say that an emotion is some experience other
that the feeling of bodily change of state and mental striving to satisfy
a reference level, a separate experience that results from these things.
My proposal is that what we feel when we feel an emotion is precisely the
bodily changes and the mental states. We are simply experiencing the
hierarchy in action and giving this particular kind of activity a name:
emotion.

So you can see why I boggle when you say that emotion is not included in
the PCT model. Perhaps physical examples of the model don’t yet
extend into the biochemical systems at the lowest level (though we’ve
been discussing the endocrine system and its link to the hypothalamus),
but in principle it could extend there, and it would work just the same
way (with chemical signals instead of neural, and probably little if any
access by awareness).

So far all that isn’t included in the PCT model, as far as I know, is the
Observer, the Experiencer of what goes on in the hierarchy. That may or
may not turn out to be another brain function – I’m content to leave
that question unanswered, since I know I can’t answer it and so far
nobody else has done so either (not philosophers, theologians, or Zen
Masters).

As to experimental tests, we have one going on all the time that each of
us is monitoring, and we can compare notes and suggest various lines of
investigation, reporting back what we find as in any scientific
investigation. The fact that each of us is examining a different
instance of the system we’re investigating (as behaviorists in different
laboratories observe different instances of the laboratory rat or college
sophomore) means that we can’t say much about areas of disagreement
except to say that you and I are not organized identically. But we can
look for common features, which the PCT model is about, and report
whether the model’s properties seem sufficient to cover what we observe.
The day when we can compare neural signals in my brain with neural
signals in yours is a long way off, and I predict that when we can, we
will find lots of differences even when we agree verbally that we are
experiencing the same thing.

My proposition about emotion is that whatever emotion you’re feeling,
detailed examination of the experience will show that it consists of
experiences you can trace to changes in bodily state, in the context of
some reference perception you’re trying to achieve. Since in this model
reference signals are not themselves directly perceived, you are most
likely to be aware of the physiological preparations and the efforts you
are making to reduce some error without at the moment knowing what the
error is. But (empirically) we can expect that the operation known as
“going up a level” will bring more of the perceptual control
aspect of the emotion into view and make it more understandable – that
is, make what is going on fit better into our cognitive models of
ourselves.

I don’t know what else is required of a theory of emotion.

Best,

Bill P.

[From Bruce Gregory (2004.0905.1047)]

Bill Powers (2004.09.05.0743 MDT)

My proposition about emotion is that whatever emotion you're feeling,
detailed examination of the experience will show that it consists of
experiences you can trace to changes in bodily state, in the context
of some reference perception you're trying to achieve. Since in this
model reference signals are not themselves directly perceived, you are
most likely to be aware of the physiological preparations and the
efforts you are making to reduce some error without at the moment
knowing what the error is.

I don't find your description very convincing when it comes to positive
emotions. Why is sunset beautiful? Why do we like to look at beautiful
things? What evokes the positive emotions we associate with the
presence of some others? What is the source of the satisfaction we
achieve when we do certain things? Where is the error when we look at the
Grand Canyon and what are the physiological preparations we are making
to reduce it?

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

···

On Sep 5, 2004, at 10:17 AM, Bill Powers wrote:

[From Bill Powers (2004.09.05.0935 MDT)]

I don't find your description very convincing when it comes to positive
emotions. Why is sunset beautiful? Why do we like to look at beautiful
things? What evokes the positive emotions we associate with the
presence of some others? What is the source of the satisfaction we
achieve when we do certain things? Where is the error when we look at the
Grand Canyon and what are the physiological preparations we are making
to reduce it?

OK, but does it seem to fit negative emotions? If so, that's half the battle.

You ask questions, but offer no answers. How about assuming that there is
an answer and trying to see what it is? When questions like these are
raised, the implication is that you don't think there is any answer,
because you've already said you find the theory unconvincing regarding the
positive emotions. What made it seem unconvincing? Do you have some basis
for assuming that the same theory can't explain positive emotions? Could
positive emotions be included by adding something to the theory? I think
it's possible, but perhaps you have some reason why it isn't, or shouldn't
be, possible. What say you?

Best,

Bill P.