S-R research into learning

[From Bruce Nevin (06.12.2002 13:12 EDT)]
Bruce Nevin (06.11.2002 15:10 EDT) --

It seems clear to me that S-R research is validly about learning, but does
not license conclusions about how behavior reflecting what has been
learned is executed.
Following this up, in Bateson’s posthumous Angels Fear I came across the
following (p. 48):

        The triadic pattern involved in learning is held together
        by the nature of “reinforcement.” This is precisely the name
        of any message or experience that will attach value (“good”
        or “bad”; “right” or “wrong”; “success” or “failure”;
        “pleasure” or “pain”; etc.) to an association or linkage
        between any other two or more components in a sequence of
        interaction. It is not a “piece” of behavior that is
        reinforced; reinforcement is a comment on a relation between
        two or more events in a sequence. […] Some persons impose
        upon their world the premise that “right” and “wrong” are
        attributes of items rather than of relations between items.
        These people would define “reinforcement” in terms different
        from the above. Legal codes seem to do this in the belief
        that actions are easier to define than relations and
        associations between actions. I suspect that such shortcuts
        are commonly wrong and/or dangerous.

/Bruce

[From Bill Powers (2002.06.12.1241 MDT)]

Bruce Nevin (06.12.2002 13:12 EDT)--
  in Bateson's posthumous Angels Fear I came across the following (p. 48):

        The triadic pattern involved in learning is held together
        by the nature of "reinforcement." This is precisely the name
        of any message or experience that will attach value ("good"
        or "bad"; "right" or "wrong"; "success" or "failure";
        "pleasure" or "pain"; etc.) to an association or linkage
        between any other two or more components in a sequence of
        interaction.

This is still an attempt to give some aspects of environmental processes a
causal influence on what we learn. If the sequence A, B leads to some
environmental result that is reinforcing, the effect is to increase the
likelihood that A will lead to B again. But what can give any environmental
result the ability to influence a likelihood in this way? That question, the
central problem of S-R theory, remains unanswered.

The basic observation remains valid: if a sequence A,B, or a configuration
A, or a relationship f(A,B), or a sensation B, is followed by satisfaction
of some goal (or at least a better approach to the goal), then higher
organisms are likely to create the sequence, configuration, relationship,
sensation, or whatever experience it was, again. But it is not the sequence
and so on that is valued: it is the satisfaction of the goal. The other
experiences take on only a secondary sort of value, whatever their
significance is in achieving an experience that is, through a
misinterpretation just as complete as the one that resulted in phlogiston,
mistaken for an environmental cause when it is really an intended effect.

Best,

Bill P.

[From Bruce Nevin (06.12.2002 15:57 EDT)]

Bill Powers (2002.06.12.1241 MDT) --

This is still an attempt to give some aspects of environmental processes a
causal influence on what we learn. If the sequence A, B leads to some
environmental result that is reinforcing, the effect is to increase the
likelihood that A will lead to B again.

You might be right. Maybe Gregory Bateson was really a behaviorist. But note that he put "reinforcement" in scare quotes.

The emphasized point was the proposal that 'reinforcement' is the association of a valuation with a sequence of perceptions rather than with "a 'piece' of behavior".

Whatever Bateson thought or was trying to say, what I found interesting here was the proposal that learning is not about predictability of the organism but about predictability of the environment by the organism. If I perceive this and then I do that, then x happens. Ooh look, it happened again!

The observer's assignment of evaluative labels '("good" or "bad"; "right" or "wrong"; "success" or "failure"; "pleasure" or "pain"; etc.)' must be based upon the observer's assessment of what the organism does about it; the valuation itself is inaccessible. (This may even be subjectively true.)

         /Bruce

···


[From Bruce Abbott (2002.06.12.1815 EST)]

Bruce Nevin (06.12.2002 15:57 EDT)

Bill Powers (2002.06.12.1241 MDT)

This is still an attempt to give some aspects of environmental processes a
causal influence on what we learn. If the sequence A, B leads to some
environmental result that is reinforcing, the effect is to increase the
likelihood that A will lead to B again.

You might be right. Maybe Gregory Bateson was really a behaviorist. But
note that he put “reinforcement” in scare quotes.

The emphasized point was the proposal that ‘reinforcement’ is the
association of a valuation with a sequence of perceptions rather than with
“a ‘piece’ of behavior”.

Whatever Bateson thought or was trying to say, what I found interesting
here was the proposal that learning is not about predictability of the
organism but about predictability of the environment by the organism. If I
perceive this and then I do that, then x happens. Ooh look, it happened
again!

The observer’s assignment of evaluative labels ‘(“good” or “bad”; “right”
or “wrong”; “success” or “failure”; “pleasure” or “pain”; etc.)’ must be
based upon the observer’s assessment of what the organism does about it;
the valuation itself is inaccessible. (This may even be subjectively true.)

I had the impression that Bateson was talking about the organism’s
evaluation, not the observer’s assignment of labels to that evaluation. Of
course (as you state above), the observer’s assignment of evaluative
labels must be based on inference, because the observer has no special
access to the organism’s subjective state.

Unfortunately, Bateson himself promotes confusion by stating what he means
badly. He says of reinforcement, “This is precisely the name of any
message or experience that will attach value (“good” or “bad”; “right” or
“wrong”; “success” or “failure”; “pleasure” or “pain”; etc.) to an
association or linkage between any other two or more components in a
sequence of interaction.” If value is attached to an association or
linkage, it is a process in the organism’s brain that does this, not the
“message or experience.” What I take Bateson to mean by his statement is
that certain perceptions of “associations or linkages” activate
(respectively in the brain and subjectively) a “message or experience” of
value (another perception) that is attached to the perceived “association
or linkage.”

One of the outstanding problems for PCT, in my view, is to account for
the existence of the (subjective) perception of pleasure/displeasure,
like/dislike, etc., i.e., the evaluative dimension. What is it
there for?

Bruce A.

···


[From Bill Powers (2002.06.0841 MDT)]

Bruce Nevin (06.12.2002 15:57 EDT)--

Bill Powers (2002.06.12.1241 MDT) --

This is still an attempt to give some aspects of environmental processes a
causal influence on what we learn. If the sequence A, B leads to some
environmental result that is reinforcing, the effect is to increase the
likelihood that A will lead to B again.

You might be right. Maybe Gregory Bateson was really a behaviorist. But
note that he put "reinforcement" in scare quotes.

Did I say that Gregory Bateson was really a behaviorist? I think he
considered himself a cyberneticist and a free thinker, and would have
rejected strongly any suggestion that he was a behaviorist. The problem
back then was that nobody knew how to say anything about behavior that
wasn't behavioristic, even when they railed against Skinner and Watson and
proclaimed themselves proponents of the closed loop. You keep getting these
weird ideas such as an organism "learning to respond adaptively to its
environment" (that's the flavor, not a quote), so you get a little bit of
self-determinism mixed in with a proper respect for causality, with the net
result of a conceptual mess.

I've just been reading a little book by Russell Brain: "The nature of
experience," Oxford University Press, 1959. It consists of the text of
three lectures given at King's College in 1958. The middle one, "The nature
of perception," is interesting because of how close it comes to the PCT
view without actually getting there. The most interesting parts of the
lecture for me were the reviews of what other people, "the realist
philosophers," said when they rejected "sense-datum theory" -- their
arguments don't sound as if they make use of any sort of logical reasoning
at all. It's just sort of what makes sense to them, or what they would
prefer to believe, and phooey on the facts of neurology. It's hard for me
to understand why they thought anyone should listen to them.

Brain develops his argument in practically the same way I did, except that
he just can't let go of that final little bit of realism that keeps his
argument from being complete. His "sense-datum theory" of experience says
that it's all perception, except for the parts of experience that are
clearly correct representations of the real world. How he decides that some
perceptions are accurate and objective while others are clearly constructed
by the brain is never made clear. Having been down the same path, and
having felt the same reluctance to go all the way to the obvious
conclusion, I can see just where he came up against something he just
couldn't bring himself to include as a construction by the brain. Viz:

"The human brain is the climax of millions of years of evolutionary
selection of the capacity to react to the physical environment in an ever
more complex manner. On the receptive side this depends upon a progressive
increase in the powers of discrimination of certain sense-organs,
especially the eyes, ....

Well, I'll skip a bit, bringing us to "Thus, one of the most important
functions of the brain is to provide us with an accurate representation of
the spatial structure of the external world as well as our own bodies."

That's what I mean by not being willing to give up that last little bit.
From here he wanders completely off the track, because he wants to believe
in at least our ability to perceive accurately the "spatial structure" of
the external world and all the knowledge implied by that. The idea that
"space" and "structure" themselves are human constructs created by the
brain would have carried his thinking farther in the obvious direction than
he was ready to go. In this respect he seems much like Gibson, admitting
much of the role that the brain's organization plays in creating the world
of experience, but in the end being unable to give up the last line of
defense against admitting that it's ALL perception -- even the idea of
"perception."

It's probably the specter of solipsism that keeps that last mile from being
connected to the net. I used to worry about that, too. But clearly, it's
possible to say that experience is _a function of_ some real world without
having to claim that there is some way to know just what the form of that
function is, and also without having to claim that there must be _some_
little window through which we could peek to check up on the resemblance of
the experienced world to the real one. There isn't one, not even a teeny
tiny knothole.

I was reminded of Brain's book here because of that echo of behaviorism in
the idea of the brain's capacity to react to the environment in complex
ways, which preserves the environment as the cause of the behavior while
sneaking in just a hint of autonomy, choice, and self-regulation. Bateson
and the other cyberneticists said lots of things like that, as if they
couldn't make up their minds that "purpose" was actually a real phenomenon.

Best,

Bill P.

···


[From Bill Powers (2002.06.13.0930 MDT)]

  Bruce Abbott (2002.06.12.1815 EST)

>One of the outstanding problems for PCT, in my view, is to account for
>the existence of the (subjective) perception of pleasure/displeasure,
>like/dislike, etc., i.e., the evaluative dimension. What is it there for?

My proposal has been that we give such names to experiences that go with
error signals, or changes in error signals. These experiences are tied to
what I have called "intrinsic state" -- the set of all internal variables
for which we have built-in reference values. We gradually learn systematic
ways of controlling intrinsic state, but indirectly, through learning to
control perceptions of the physical and physiological world. As the learned
control systems become effective, the built-in reorganizing system has
fewer occasions for initiating trial-and-error changes in the learned systems.
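The two-loop arrangement described here (a learned control system, plus a reorganizing process that makes trial-and-error changes to it while intrinsic error persists) can be sketched as a toy simulation. The proportional controller, the gain parameter, and the random step size are all my own illustrative assumptions, not part of the PCT model itself:

```python
import random

# Toy sketch of Powers-style reorganization: a learned proportional
# controller tries to keep a perception near its reference; while its
# residual ("intrinsic") error persists, random trial-and-error changes
# are made to the learned system's gain and kept only if error falls.

def simulate(gain, steps=50):
    """Run a simple proportional control loop; return the final error."""
    perception, reference = 0.0, 10.0
    for _ in range(steps):
        error = reference - perception
        perception += 0.1 * gain * error   # output acts on the perception
    return abs(reference - perception)

random.seed(1)
gain = 0.1                                 # poorly organized to start
intrinsic_error = simulate(gain)
for _ in range(200):                       # reorganize while error persists
    trial = gain + random.uniform(-0.5, 0.5)
    trial_error = simulate(trial)
    if trial_error < intrinsic_error:      # keep changes that reduce error
        gain, intrinsic_error = trial, trial_error
# As the learned loop becomes effective, fewer trials are accepted:
# the reorganizing process has less and less occasion to act.
```

Note that the reorganizing loop never needs to know *how* the learned system works; it only senses whether intrinsic error got smaller, which is the sense in which learning here is blind trial and error rather than instruction by the environment.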

  Earlier, you said " the observer's assignment of evaluative labels must
be based on inference, because the observer has no special access to the
organism's subjective state."

But the observer has special access to _one_ organism's subjective state:
his own. In fact, _every_ observer has such access to one organism, and it
is possible for observers to perform agreed-upon experiments and compare
their observations, just as if they were replicating experiments with the
world they observe "outside" of themselves. How else would anyone know
about the Necker cube and other such subjective -- but replicable --
phenomena? There is room for mistaken agreement, of course, but not much
more room than there is in so-called "objective" experimentation which
observers learn about mainly by reading what other observers describe and
then trying out the ideas for themselves, as nearly as they can understand
what they are supposed to do and look for.

Part of the problem here is in recognizing just when one is
"introspecting." When you examine and describe the world of experience as
accurately as you can, you are introspecting in a useful way, particularly
if your description enables others to notice and manipulate what seem to be
the same phenomena. When you imagine vague things going on in your head,
and start overambitiously interpreting fleeting impressions and hazy
nuances, you're introspecting in the way that earned introspection a bad
name in psychology.

Best,

Bill P.

[From Bruce Nevin (06.13.2002 15:57 EDT)]

Bruce Abbott (2002.06.12.1815 EST) –

I had the impression that Bateson was talking
about the organism’s evaluation, not the observer’s assignment of
labels to that evaluation.

Agreed. And I agree that Bateson’s words wrongly impute agency to the
“message or experience” with which the organism associates a
value.

One of the outstanding problems for PCT, in my
view, is to account for the existence of the (subjective) perception of
pleasure/displeasure, like/dislike, etc., i.e., the evaluative
dimension. What is it there for?

I thought it came from the limbic system, where it has the same function
that it has in my chickens. We may laugh at their “the sky is
falling!” panic racket and running about when we lift an object so
that it is silhouetted against the sky, or when something falls nearby,
and so on, but it is probably one reason there are chickens today despite
the many hazards to the lives of their ancestors.

Or is the Papez-MacLean notion of three brains (reptilian,
paleomammalian, and neomammalian) no longer accepted?

    /Bruce
···


[From Bruce Abbott (2002.06.14.0820 EST)]

Bruce Nevin (06.13.2002 15:57 EDT)

Bruce Abbott (2002.06.12.1815 EST) –

I had the impression that Bateson was talking about the organism’s
evaluation, not the observer’s assignment of labels to that evaluation.

Agreed. And I agree that Bateson’s words wrongly impute agency to the
“message or experience” with which the organism associates a value.

One of the outstanding problems for PCT, in my view, is to account for the
existence of the (subjective) perception of pleasure/displeasure,
like/dislike, etc., i.e., the evaluative dimension. What is it there for?

I thought it came from the limbic system, where it has the same function
that it has in my chickens. We may laugh at their “the sky is falling!”
panic racket and running about when we lift an object so that it is
silhouetted against the sky, or when something falls nearby, and so on,
but it is probably one reason there are chickens today despite the many
hazards to the lives of their ancestors.

Or is the Papez-MacLean notion of three brains (reptilian, paleomammalian,
and neomammalian) no longer accepted?

Sorry if I was unclear, but when I said that the problem was to
“account for” the perception of pleasure/displeasure, I did not
mean identifying the neural machinery that produces it. Clearly
that is one kind of “accounting for” – identifying the
physical mechanism that produces it – but what I had in mind was another
kind: identifying its “reason for being,” its
function in the operation of the system. Why should natural
selection have favored brains that produce such feelings and associate
them with certain other perceptions? How does this promote the
well-being of the organism? See my reply to Bill P.
(2002.06.14.0815 EST) for my (highly speculative) proposal.

Bruce A.

[From Bruce Abbott (2002.06.14.0815 EST)]

Bill Powers (2002.06.13.0930 MDT)

Bruce Abbott (2002.06.12.1815 EST)

One of the outstanding problems for PCT, in my view, is to account for the
existence of the (subjective) perception of pleasure/displeasure,
like/dislike, etc., i.e., the evaluative dimension. What is it there for?

My proposal has been that we give such names to experiences that go with
error signals, or changes in error signals. These experiences are tied to
what I have called “intrinsic state” – the set of all internal variables
for which we have built-in reference values. We gradually learn systematic
ways of controlling intrinsic state, but indirectly, through learning to
control perceptions of the physical and physiological world. As the
learned control systems become effective, the built-in reorganizing system
has fewer occasions for initiating trial-and-error changes in the learned
systems.

I understand. However, it seems to me that this leaves unexplained
why we experience the feelings of pleasure/displeasure that “go with
error signals, or changes in error signals.”
Perhaps such experiences function as surrogates for the error or change
in error in consciously unperceived intrinsic variables. We may not
know what is wrong, but it feels bad, and doing this (some
behavioral act) seems to make it feel better. And after doing
that, wow, it feels great! So we learn to control the
intrinsic variable by learning what makes certain perceived feelings feel
better or worse.

And that would explain why we have those conscious feelings.
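Abbott's surrogate idea can be sketched as a toy learning loop. The variable names, the action set, and the numeric effects below are illustrative assumptions of mine, not anything stated in the posts:

```python
import random

# Toy sketch of "feeling as surrogate": the organism cannot perceive the
# intrinsic variable directly, only a scalar feeling derived from its
# error, and it learns which acts make the feeling better.

random.seed(0)
intrinsic, intrinsic_ref = 2.0, 10.0                  # hidden variable
actions = {"rest": -0.5, "eat": +1.0, "run": -1.0}    # hypothetical effects

def feeling():
    return -abs(intrinsic_ref - intrinsic)            # large error feels bad

for _ in range(60):
    before = feeling()
    name = random.choice(list(actions))
    intrinsic += actions[name]                        # try a behavioral act
    if feeling() < before:                            # felt worse: undo it
        intrinsic -= actions[name]
# Acting only to make the feeling better drives the intrinsic variable
# toward its built-in reference, without ever perceiving it directly.
```

The design point is that one scalar "feels better/worse" signal stands in for any number of unperceived intrinsic variables, which is exactly the economy Abbott is proposing.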

Earlier, you said "the observer’s assignment of evaluative labels must be
based on inference, because the observer has no special access to the
organism’s subjective state."

But the observer has special access to one organism’s subjective state:
his own. In fact, every observer has such access to one organism, and it
is possible for observers to perform agreed-upon experiments and compare
their observations, just as if they were replicating experiments with the
world they observe “outside” of themselves. How else would anyone know
about the Necker cube and other such subjective – but replicable –
phenomena? There is room for mistaken agreement, of course, but not much
more room than there is in so-called “objective” experimentation which
observers learn about mainly by reading what other observers describe and
then trying out the ideas for themselves, as nearly as they can understand
what they are supposed to do and look for.

Yes, I agree. Although I didn’t state it explicitly, that was part
of what I had in mind when I said that the observer’s assignment of
labels must be based on inference.

Bruce A.

[From Bill Powers (2002.06.14.0840 MDT)]

Bruce Abbott (2002.06.14.0815 EST)--

I understand. However, it seems to me that this leaves unexplained why we
experience the feelings of pleasure/displeasure that "go with error
signals, or changes in error signals."

I think the question assumes something unprovable -- that there is
something special about the sensations and feelings we experience that
makes them into "pleasant" or "painful." You might as well ask why it is
that we identify a certain sensation as "chocolate," and another as
"blue." Is there something about those signals that uniquely qualifies
them to represent a particular taste or color? I think not. What makes them
into what they are, I think, is simply the context in which they occur,
that context including other experiences as well as the structure of
reference signals and control systems.

Just how is a feeling of pleasure different from a feeling of pain, or for
that matter, a taste of chocolate? If you search the feeling itself for the
answer you will search in vain. Any experience that you isolate and focus
on becomes more like other experiences, not less like them. How is yellow
different from red? The harder you try to see what makes them different,
the smaller the difference seems.

I think you may be on the track of a similar idea when you say

Perhaps such experiences function as surrogates for the error or change in
error in consciously unperceived intrinsic variables. We may not know
what is wrong, but it feels bad, and doing this (some behavioral act)
seems to make it feel better. And after doing that, wow, it feels
great! So we learn to control the intrinsic variable by learning what
makes certain perceived feelings feel better or worse.

To say "surrogate" implies that there is the intrinsic variable on the one
hand, and the judgement of "better" or "worse" on the other hand, which
gives the variable value. But that only postpones the problem or pushes it
back one level. Now we have to ask why feeling one sort of reaction to an
intrinsic variable tells us it is in a "good" state, while feeling some
other sort of reaction tells us the same variable is in a "bad" state. So
we're down to asking about those special reactions. Why does pleasure feel
good and pain feel bad? And if pleasure is good or pain is bad, from whence
comes _that_ judgement?

It seems to me that we can save ourselves a lot of searching if we just
realize that no matter what criterion we use for pleasure or pain, good or
bad, we will never reach a final answer because after we get over the
triumph of having solved the problem we will find that the criterion itself
demands another criterion, and so on without end.

Suppose we simply say that error itself is the criterion by which all else
is judged: the difference between what _is_ experienced and what we _want_
to experience. If they are the same, that is what we mean by "good." If
they are different, that is what we mean by "bad". Then the action we take
to achieve the good or do away with the bad is no longer to be explained by
the goodness or the badness (which are only labels) but by the fact that
the signals we are talking about are error signals in a control system.
Control systems, by their very design, act to reduce error and make the
world more like what we want, and oppose external forces or circumstances
that tend to increase error, making the experienced world less like what we
want it to be.
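The point that a control loop needs only its error signal, and no judgement of good or bad, can be illustrated with a minimal sketch; the gain, time step, and disturbance values here are arbitrary illustrative choices of mine:

```python
# Minimal control loop: the system acts on its error signal alone.
# "Good" and "bad" are labels an observer may apply afterward; they are
# not inputs the loop computes with.

def control_step(perception, reference, disturbance, gain=5.0, dt=0.1):
    error = reference - perception        # the only "evaluation" used
    output = gain * error                 # action driven by error
    return perception + dt * (output + disturbance)

perception, reference = 0.0, 3.0
for t in range(100):
    disturbance = 2.0 if t > 50 else 0.0  # an external push partway through
    perception = control_step(perception, reference, disturbance)
# The loop keeps perception near the reference and opposes the
# disturbance, with no notion of pleasure or pain in the computation.
```

With a purely proportional controller the disturbance leaves a small residual offset (here the loop settles at 3.4 rather than 3.0), but the opposition to the disturbance is automatic, which is the structural point of the paragraph above.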

This says that the judgement of good or bad follows the reaction to error
rather than preceding it. If we find ourselves unable or unwilling to do
something, it is a control system rejecting an experience, and afterward we
say that the whole experience was unpleasant or bad. What we seek,
conversely, we call good; the very process of seeking and achieving gives
meaning to the term "good."

The only objection I can think of to this way of dealing with value is one
I wouldn't support anyway. It is simply that this does away with
_objective_ values. However, it doesn't do away with _built-in_ values in
the form of intrinsic reference signals. We avoid ingesting certain
substances, it seems, because they taste "awful", as well as being
objectively poisonous. But it is neither the awfulness nor the objective
poisonousness that leads us to reject them: it is simply the fact that we
have inherited reference levels for those substances set close to zero. The
"awfulness" is our description of how we act when errors occur, but it does
not explain the actions.

Organisms with low reference levels for the sensed effects of poisons have
survived, but this does not mean they set those reference levels to low
values _in order to_ survive, or because poisons are objectively bad for
them. We can explain all the reactions to poisons in terms of the low
setting of reference levels, without needing to invoke "pleasure" or "pain"
as anything but after-the-fact descriptions of the reactions.

This speaks to your second post as well, concerning functionalism. I
consider functionalism to be, at best, an awkward way of speaking about
control, and at worst, a way of imputing control where there is none.
Giraffes do not have long necks because having them allowed them to browse
on higher tree branches. It's simply that those with shorter necks were
crowded out by those with more elongated necks, who could reach higher
when the grass was gone. There may be a control process in
the background, but it has nothing to do with lengthening or shortening
necks in order to reach food. I approve of the functionalist mode of
description when it is meant literally and refers to a real control
process. But I don't like it when it is used metaphorically to explain
phenomena that are not control processes.

Best,

Bill P.

[From Bruce Abbott (2002.06.15.0910 EST)]

Bill Powers (2002.06.14.0840 MDT)

Bruce Abbott (2002.06.14.0815 EST)

I understand. However, it seems to me that this leaves unexplained why we
experience the feelings of pleasure/displeasure that “go with error
signals, or changes in error signals.”

I think the question assumes something unprovable – that there is
something special about the sensations and feelings we experience that
makes them into “pleasant” or “painful.” You might as well ask why it is
that we identify a certain sensation as “chocolate,” and another as
“blue.” Is there something about those signals that uniquely qualifies
them to represent a particular taste or color? I think not. What makes
them into what they are, I think, is simply the context in which they
occur, that context including other experiences as well as the structure
of reference signals and control systems.

I agree that our conscious experiences depend (so far as we can tell from
the evidence) not only on perceptual signals, but on the context of
relevant brain-states in which they occur. I do not think that my question
assumes “that there is something special about sensations and feelings we
experience that makes them into ‘pleasant’ or ‘painful.’” In fact, my
question only asks you to explain, given your own position (that feelings
of pleasure etc. are correlates of error, or changes in error, within
control systems), the functional role of such feelings in the operation of
the overall brain system. What are they doing there?

Just how is a feeling of pleasure different from a feeling of pain, or for
that matter, a taste of chocolate? If you search the feeling itself for
the answer you will search in vain. Any experience that you isolate and
focus on becomes more like other experiences, not less like them. How is
yellow different from red? The harder you try to see what makes them
different, the smaller the difference seems.

We (human beings) have no idea how it is that the experience of
“blue” or “chocolate” arises from patterns of neural
activity in specific regions of the brain, or what makes the experience
of “blue” different from the experience of “red,” or
why these seem somehow to be of the same sort of experience (which we
call “color”) whereas “chocolate” is of some other
sort. They just are. The best we can do (at least at present)
is to determine what activities in what brain mechanisms correlate with
these experiences.

My experience does not agree with yours with respect to comparing
different experiences. Yellow seems just as distinctly different
from red after I have been comparing those experiences for a while.
I’m not able to identify what makes them different, but they seem
different all the same.

But certain experiences (whatever the complex of brain activity required
to produce them) often are accompanied by certain other experiences that
the old psychologists of a bygone era called “feelings.”
Some experiences, under some conditions, are experienced not only as,
say, “chocolate,” but also as distinctly pleasant.
Others, such as the bitterness of quinine, may be experienced not only as
bitter, but distinctly unpleasant. Your proposal, if I understand
it correctly, is that such associated feelings are associated with the
states of particular control systems – the level of error in them and/or
the direction and rate of change in that error. I wouldn’t
necessarily disagree, but wonder why such feelings occur in consciousness
under these conditions if they have no functional significance.

Of course, this is just part of the wider question of why consciousness
exists at all. It would appear that the entire hierarchy of control
systems could work just as well, consciousness or no consciousness, yet
consciousness exists (in me and probably in a lot of other
creatures). I believe that when conscious experience emerged in the
course of evolution it was retained and elaborated because it provides a
quantum leap in the organism’s ability to organize and effectively deal
with certain aspects of the brain’s environment that can only be dealt
with via behavior (muscular and glandular outputs).

I think you may be on the track of a similar idea when you say

Perhaps such experiences function as surrogates for the error or change in
error in consciously unperceived intrinsic variables. We may not know what
is wrong, but it feels bad, and doing this (some behavioral act) seems to
make it feel better. And after doing that, wow, it feels great! So we
learn to control the intrinsic variable by learning what makes certain
perceived feelings feel better or worse.

To say “surrogate” implies that there is the intrinsic variable on the one
hand, and the judgement of “better” or “worse” on the other hand, which
gives the variable value. But that only postpones the problem or pushes it
back one level. Now we have to ask why feeling one sort of reaction to an
intrinsic variable tells us it is in a “good” state, while feeling some
other sort of reaction tells us the same variable is in a “bad” state. So
we’re down to asking about those special reactions. Why does pleasure feel
good and pain feel bad? And if pleasure is good or pain is bad, from
whence comes that judgement?

You’re missing the point. Having a common dimension onto which many
diverse experiences can be mapped provides a simple way to deal
effectively with all those experiences. We control for the
experience of pleasure/displeasure (the reference is set at the pleasure
end). We then learn what to do to achieve that control, which will
involve bringing the underlying variable (the one associated with the
current state of pleasure/displeasure) under control.

It seems to me that we can save
ourselves a lot of searching if we just

realize that no matter what criterion we use for pleasure or pain, good
or

bad, we will never reach a final answer because after we get over
the

triumph of having solved the problem we will find that the criterion
itself

demands another criterion, and so on without end.

Suppose we simply say that error itself is the criterion by which all else
is judged: the difference between what is experienced and what we want to
experience. If they are the same, that is what we mean by “good.” If they
are different, that is what we mean by “bad”. Then the action we take to
achieve the good or do away with the bad is no longer to be explained by
the goodness or the badness (which are only labels) but by the fact that
the signals we are talking about are error signals in a control system.

Control systems, by their very design, act to reduce error and make the
world more like what we want, and oppose external forces or circumstances
that tend to increase error, making the experienced world less like what we
want it to be.

This says that the judgement of good or bad follows the reaction to error
rather than preceding it. If we find ourselves unable or unwilling to do
something, it is a control system rejecting an experience, and afterward we
say that the whole experience was unpleasant or bad. What we seek,
conversely, we call good; the very process of seeking and achieving gives
meaning to the term “good.”
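The control-system behavior described in this passage can be sketched in a few lines of simulation. This is a minimal illustrative sketch, not any specific PCT model; the function name, gain, and constants are invented for illustration:

```python
# Illustrative sketch only (names and numbers invented): a simple
# control system reduces the error between a reference and a controlled
# perception, and opposes a constant external disturbance.

def simulate(reference=10.0, disturbance=4.0, gain=5.0, slowing=0.1, steps=200):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance            # environment: output plus disturbance
        error = reference - perception               # comparator: reference minus perception
        output += slowing * (gain * error - output)  # leaky-integrator output function
    return perception, error

perception, error = simulate()
# The perception settles near the reference despite the disturbance.
```

The point of the sketch is the one made in the text: nothing labels any state "good" or "bad"; the system simply acts so as to reduce error.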

You are arguing that experiencing the feeling that something is
“good” or “bad” is unnecessary. That brings us
back to the problem I have raised – why have those experiences at
all? My solution is to propose that such associated evaluative
perceptions provide a simple way to represent the states of a myriad of
other (sensory or non-evaluative) perceptions in consciousness, for which
we might not otherwise have any references or, for that matter, control
systems. Consider the child born with no ability to feel
pain. To that child, pushing a sharp knife into the arm and
watching it penetrate the skin and blood gush out might simply be an
interesting sensory experience. But to the normal child, any
sensory experience accompanied by pain is likely to produce actions to
reduce or eliminate that pain. By controlling the pain level to
zero, the child also tends to keep tissue damage to a minimum, whether
this tissue damage is caused by penetrating sharp objects, blunt force,
intense sound, intense light, chemical reactions – each of these
associated with a different perceptual signal.

The only objection I can think of to this way of dealing with value is one
I wouldn’t support anyway. It is simply that this does away with
objective values. However, it doesn’t do away with built-in values in
the form of intrinsic reference signals. We avoid ingesting certain
substances, it seems, because they taste “awful”, as well as being
objectively poisonous. But it is neither the awfulness nor the objective
poisonousness that leads us to reject them: it is simply the fact that we
have inherited reference levels for those substances set close to zero. The
“awfulness” is our description of how we act when errors occur, but it does
not explain the actions.

Alternatively, we have inherited references for “awfulness” and
control the associated sensory experiences in whatever way will bring the
awfulness toward its reference level. That explains not only how we
end up controlling the associated sensory experience but also the appearance
of the experience of “awfulness” in consciousness.

Organisms with low reference levels for the sensed effects of poisons have
survived, but this does not mean they set those reference levels to low
values in order to survive, or because poisons are objectively bad for
them. We can explain all the reactions to poisons in terms of the low
setting of reference levels, without needing to invoke “pleasure” or “pain”
as anything but after-the-fact descriptions of the reactions.

Certainly they don’t set reference levels to anything “in order
to” survive. Organisms that have survived have done so because
these organisms had their reference levels set to particular
values. Those that had the ability to perceive an evaluative
dimension and attach this to the raw sensory experience, and had certain
reference values for those evaluative experiences, were better able to
survive.

This speaks to your second post as well, concerning functionalism. I
consider functionalism to be, at best, an awkward way of speaking about
control, and at worst, a way of imputing control where there is none.
Giraffes do not have long necks because having them allowed them to browse
on higher tree branches. It’s simply that those with shorter necks, among
those with the more elongated necks, were crowded out by those who could
reach higher when the grass was gone. There may be a control process in
the background, but it has nothing to do with lengthening or shortening
necks in order to reach food. I approve of the functionalist mode of
description when it is meant literally and refers to a real control
process. But I don’t like it when it is used metaphorically to explain
phenomena that are not control processes.

I think that you tend to confuse function with purpose.
“Function,” the way I am attempting to use the term, refers to
the role played by some entity within a system. In this
sense, the function of a comparator in a control system is to output an
error signal that is some function of the difference between the input
reference and perceptual signals. A function of the heart in the
body is to pump blood around the circulatory system. This is not
the same as saying that this is the “purpose” of the heart,
where “purpose” refers to the intentions of some entity
(be it human being, a god, “evolution,” or a control
system).

Unfortunately, in English, “purpose” and “function”
are often used as synonyms, which can lead to confusion in the present
context. We say that a “purpose” of the heart is to pump
blood, when we do not mean to imply by this that some intelligent
designer intended that result.

I have no problem whatsoever with the term “function,” nor with
functional descriptions, used as I have described. I wonder what
the functions of conscious perceptions are – what role they play in the
operation of the system. I see that as a legitimate
question.

With respect to evolution through natural selection, the process
frequently results in entities that provide certain identifiable
functions within the system in which they operate – the heart being a
nice example. The fact that there was no designer intending this
result is irrelevant, because function – at least as I use the term –
is not the same as purpose.

Bruce A.

Bruce, I try to keep up with brain research. I believe it is still
accepted.
I love your chicken example.
David Wolsk

[From Bruce Nevin (06.13.2002 15:57 EDT)]

Or is the Papez-MacLean notion of three brains (reptilian, paleomammalian,
and neomammalian) no longer accepted?

        /Bruce

[From Bill Powers (2002.06.16.0734 MDT)]

I agree that our conscious experiences depend (so far as we can tell from
the evidence) not only on perceptual signals, but on the context of
relevant brain-states in which they occur. I do not think that my
question assumes "that there is something special about sensations and
feelings we experience that makes them into "pleasant" or "painful." In
fact, my question only asks you to explain, given your own position (that
feelings of pleasure etc. are correlates of error, or changes in error,
within control systems) the functional role of such feelings in the
operation of the overall brain system. What are they doing there?

I'm proposing that they are not there -- not as signals separate from the
experiences for which we have reference levels. I can convince myself that
there is, in the pleasant experience of chocolate ice-cream, nothing but
the chocolate ice cream (meaning all the sensations that go with it) and
the desire to eat it. Nothing more is actually required to make me want to
eat the ice cream and to do so when possible. The concept of a "pleasure
signal" is superfluous, because a pleasure signal can't accomplish anything
more than a positive nonzero reference signal can do, by way of giving
value to different kinds of experiences.

I think that the idea of pleasure and pain signals as separate entities
predated belief in the reinforcement idea, which requires that a special
effect on behavior exist which leads to an increase (or decrease) in
behavior that produces certain sensations. A reference signal plays the
same role, that of specifying that we will seek (high reference signal) or
avoid (low or zero reference signal) the same sensations, but in a way that
is under the control of the organism, not the environment. It's another
question of phlogiston versus oxygen, or God versus levels of brain
organization higher than the current level occupied by awareness. Once we
persuade ourselves to believe a given explanation of human behavior, the
explanation all too easily turns into a self-evident observation. You can
just SEE the reinforcer making more behavior -- it's not even a theory any
more. You can detect the pleasure or the pain just as if it were a separate
sensation.

The cure for any such observation is to stop believing in it. If that makes
it no longer seem true, then it probably wasn't true to begin with. And if
the fact refuses to go away -- if the evidence keeps forcing you back to
the same conclusion -- then the observation may have been real. Probably,
may have been -- certainty is unattainable.

My experience does not agree with yours with respect to comparing
different experiences. Yellow seems just as distinctly different from red
after I have been comparing those experiences for a while. I'm not able
to identify what makes them different, but they seem different all the same.

Oh, I agree that they seem different. I'm not confused between yellow and
red -- as long as I have an example of each one in the same field of view.
It's as though they occur in different "places" in perceptual space, so
that I can perceive some sort of distance between them.

But when the field of view consists of one color only, it can become
difficult to say just what that color is. The human brain is wonderful at
constructing a normally colored scene under conditions of illumination that
actually change the wavelengths reflected by all the items in the scene.
What Edwin Land established was that the brain computes a sort of "average
gray" in a scene, and then assigns colors according to deviations from that
gray. In fact he showed that an identically colored patch can be
perceived as red or as yellow depending on the colors of other patches
around it, which in turn depends on the color of the illumination.
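Land's "average gray" idea, as summarized here, can be illustrated with a toy computation. This is not Land's actual retinex algorithm; the function name, scalar patch values, and surrounds are invented for illustration:

```python
# Illustrative sketch (not Land's actual retinex computation): judge a
# patch by its deviation from the scene's average, not by its absolute
# value. The identical patch value then reads differently in different
# scenes, as the text describes.

def deviation_from_scene_average(patch, scene):
    average_gray = sum(scene) / len(scene)   # the scene's "average gray"
    return patch - average_gray              # perceived value = deviation from it

patch = 0.5                                  # the same patch in both scenes
darker_surround = [0.2, 0.3, 0.25, 0.5]
lighter_surround = [0.8, 0.7, 0.75, 0.5]
# The patch deviates upward in the darker scene and downward in the
# lighter one, so the "same" stimulus is perceived differently.
```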

  Some experiences, under some conditions, are experienced not only as,
say, "chocolate," but also as distinctly pleasant. Others, such as the
bitterness of quinine, may be experienced not only as bitter, but
distinctly unpleasant. Your proposal, if I understand it correctly, is
that such associated feelings are associated with the states of
particular control systems -- the level of error in them and/or the
direction and rate of change in that error. I wouldn't necessarily
disagree, but wonder why such feelings occur in consciousness under these
conditions if they have no functional significance.

I'm proposing that such feelings are simply the consequences of error. If
someone rams a jackknife blade into your hand, you not only get a sensation
with far too high a level of contrast with surrounding sensations, but you
feel strong efforts to withdraw your hand, and violent preparations for
action in your viscera. All that adds up to the experience we categorize as
"pain." All sensations can occur at levels we term painful if they are high
enough. This in no way implies that there are separate pain signals.
The simplest explanation is that pain is a term for sensations -- any
sensations -- that are far higher than their reference levels. Pleasure is
harder to characterize, but pleasure seems to go with transient reductions
in errors of any kind. Even finding that the camera I dropped still works
is pleasant, although it's hard to imagine a pleasure signal provided by
nature to go with cameras, or with cars starting on a subzero morning, or
with a checkbook balancing the first time, or with a plowed furrow turning
out perfectly straight, or with a program running correctly at last. There
are just too many things that are pleasant or painful to think they are
each somehow connected to a pleasure center or a pain center -- especially
things that haven't been around for more than a few thousand years so
evolution could come into play.

>You're missing the point. Having a common dimension onto which many
>diverse experiences can be mapped provides a simple way to deal effectively
>with all those experiences.

Yes, but is it a plausible way? And given the idea of control systems with
reference signals, is it even necessary? Also, I dispute that mapping
different experiences onto a common dimension makes them easier to deal
with effectively. That might be so if the same kind of action were needed
to deal with any painful experience, or to sustain any pleasant experience.
But that is definitely not the case: to control any experience, a unique
set of actions must be produced, appropriate to the current state of the
environment. The difficulty with this "common dimension" is the same as the
difficulty with the "final common pathway" of Sherrington. If that were
really how the nervous system worked, there wouldn't be nearly enough
different behaviors to satisfy all our different goals. The "common
dimension" only sounds useful if you refrain from asking _how_ it
translates into the appropriate action for each occasion.

> We control for the experience of pleasure/displeasure (the reference is
>set at the pleasure end). We then learn what to do to achieve that
>control, which will involve bringing the underlying variable (the one
>associated with the current state of pleasure/displeasure) under control.

This is exactly the opposite of my idea. We learn to bring perceptions to
reference levels. When we succeed in doing so (or while we are succeeding)
we call the whole experience pleasant; when we fail, we call it painful.
That is what pleasure and pain ARE. There is no separate experience.
Pleasure and pain are categories, not sensations.
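The claim that pleasure and pain are categories applied to control-system states, not separate sensations, can be sketched as a toy classifier. The threshold and category names here are invented assumptions, not anything proposed in the discussion itself:

```python
# Illustrative sketch of "pleasure and pain as categories": labels
# applied after the fact to the state of a control system. The
# threshold value and the category names are invented for illustration.

def categorize(error, previous_error, pain_threshold=5.0):
    if abs(error) > pain_threshold:
        return "painful"      # a sensation far from its reference level
    if abs(error) < abs(previous_error):
        return "pleasant"     # error is being reduced
    return "neutral"
```

Nothing in the sketch requires a separate pleasure or pain signal; the labels are computed entirely from error and its change.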

>You are arguing that experiencing the feeling that something is "good" or
>"bad" is unnecessary.

No, I'm not. I'm merely identifying what "good" and "bad" mean as the
experience of correcting errors or failing to correct them. That is how it
feels to succeed or fail at reducing error. No added signals are necessary
to explain the feelings.

>That brings us back to the problem I have raised -- why have those
>experiences at all?

Because those experiences ARE the very process of control that is at the
center of all we do or experience. What different kinds of experience have
in common is that they can have preferred states, and that we are
inherently organized to try to maintain them in those preferred states.
THAT is what is common to all experiences that we control. To categorize
some experiences as pleasant and others as painful is only to recognize,
dimly, what is common to all processes of control.

>Consider the child born with no ability to feel pain. To that child,
>pushing a sharp knife into the arm and watching it penetrate the skin and
>blood gush out might simply be an interesting sensory experience. But to
>the normal child, any sensory experience accompanied by pain is likely to
>produce actions to reduce or eliminate that pain. By controlling the pain
>level to zero, the child also tends to keep tissue damage to a minimum,
>whether this tissue damage is caused by penetrating sharp objects, blunt
>force, intense sound, intense light, chemical reactions -- each of these
>associated with a different perceptual signal.

Very logical -- but spurious. You're bringing in an imaginary intervening
variable, the "evaluative" experience of pain that is different from
feeling the damage caused by the knife and the extreme deviation of the
sensations from their preferred states. You don't need _anything but that
deviation_ to explain the reaction. The sensations experienced by the
normal child are not some special pain signals; they are the immediate and
specific result of skin being ripped apart and tissues being damaged,
with the sensations that result being driven far from their normal and
preferred levels. That alone is enough to qualify as pain -- an
experience to be rejected violently and automatically.

You have to remember that this whole concept of pleasure signals and pain
signals (and the associated "centers") was an attempt to explain why
organisms seem to reject pain and seek pleasure, as if they did so
intentionally, of their own volition, purposively -- all those terms that
scientists have considered incorrect, fraudulent, unscientific, and
disgusting. The only way to explain such behavior without invoking inner
direction was to imagine that certain experiences aroused some basic sense
of pleasure or its opposite, with that sense then causing the trophic or
phobic movements that we observe. That, in turn, is the basis of the idea
of reinforcement.

>Alternatively, we have inherited references for "awfulness" and control
>the associated sensory experiences in whatever way will bring the
>awfulness toward its reference level. That explains not only how we end
>up controlling the associated sensory experience but also the appearance
>of the experience of "awfulness" in consciousness.

I agree: it is an explanatory scheme that allows us to explain phobic and
trophic behavior without ever having to speak of internal desires,
preferences, intentions, and goals. One doesn't even need the idea of
reference signals. If experiences cause a sense of "awfulness", then that
sense can negatively reinforce the connections between certain stimuli and
responses, reducing their likelihood, and conversely for a sense of
"wonderfulness." There is no need for control theory -- in fact, the
explanatory scheme you propose is part of a long effort to do away with any
idea that organisms determine their own preferences and thus define for
themselves (individually as well as species-wise) what they will find to be
good or bad.

But if we decide that the control-system model is in fact appropriate, then
we don't need any of the older explanations. We don't need a generalized
pleasure or pain center; we don't need pain signals and pleasure signals.
All we need are reference signals and control systems; all the rest
follows, including our subjective experiences that we categorize as
pleasure and pain.

Best,

Bill P.

[From Rick Marken (2002.06.16.1030)]

Bill Powers (2002.06.16.0734 MDT)

I'm proposing that they are not there -- not as signals separate from the
experiences for which we have reference levels...

Yes! This is what CSG is for. Wonderful post!!

Happy day to the father of PCT from your too often prodigal son;-)

Love

Rick

···

--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bruce Nevin (06.16.2002 22:07 EDT)]

I’m slow to respond, behind in reading, and will be even slower and
behinder as I leave tomorrow on a business trip for the next week.

Bruce Abbott (2002.06.14.0820 EST)–

Sorry if I was unclear, but when I said that
the problem was to “account for” the perception of
pleasure/displeasure, […] what I had in mind was […] identifying its
“reason for being,” its function in the operation of the
system. Why should natural selection have favored brains that
produce such feelings and associate them with certain other
perceptions? How does this promote the well-being of the organism?

I see that it was I who was unclear. I understood that, and thought I was
responding to it.

To repeat, the evolutionary benefit to organisms which (like my chickens)
have little more than the ‘reptilian’ and ‘paleomammalian’ layers of the
brain seems to be rapid recognition of hazards so as to avoid them. These
fast ‘emotional’ evaluations enable fast ‘decisions’ as a result of which
more hazards are survived. The assessments aren’t terribly reliable, in
the sense that the hazards aren’t always actually present, but expending
calories to avoid nonexistent hazards is a better guarantor of having
offspring than failing to avoid hazards that are really there. Likewise
opportunities – probably as important to procreative success as evading
predators etc. Some of my chickens are a lot better than others are at
recognizing bugs and worms that are exposed by my moving things
around.

As the neomammalian ‘brain’ increases in size and complexity the same
fast ‘emotional’ evaluations seem to serve an alerting and
attention-directing function, and in humans seems to be the basis for gut
feelings, first impressions, snap judgements, and the like. Just as the
more primitive two ‘brains’ are still present, so I believe are the
original hookups to adrenalin etc. that appear to be in effect as my
chickens control evasion of whatever-it-[maybe]-was.

I remember posting to the net some years ago mention of some research
suggesting that emotional snap judgements are crude stereotypings that
are nonetheless useful for survival because they are faster than sitting
around cogitating and assessing.

Analogous, on a much simpler level, are the claims that can be made about
the house-fly’s initial leap backward into the air, which kick-starts the
oscillation of the wings (Bruce Abbott 971211.0945 EST). Direction is not
controlled. As a consequence, the fly may fail to escape a wily swatter
who knows to strike behind rather than directly over the fly. But a fly
in the air is a lot harder to catch than one on its feet, so that’s the
way the balance of risks tips under Darwin’s hammer.

    /Bruce
···

At 08:22 AM 6/14/2002 -0500, Abbott_Bruce wrote:

Bruce Nevin (06.13.2002 15:57 EDT)–

Bruce Abbott (2002.06.12.1815 EST) –

One of the outstanding problems for PCT, in my
view, is to account for the existence of the (subjective) perception of
pleasure/displeasure, like/dislike, etc., i.e., the evaluative
dimension. What is it there for?

I thought it came from the limbic system, where it has the same function
that it has in my chickens. We may laugh at their “the sky is
falling!” panic racket and running about when we lift an object so
that it is silhouetted against the sky, or when something falls nearby,
and so on, but it is probably one reason there are chickens today despite
the many hazards to the lives of their ancestors.

Or is the Papez-MacLean notion of three brains (reptilian,
paleomammalian, and neomammalian) no longer accepted?

Sorry if I was unclear, but when I said that the problem was to
“account for” the perception of pleasure/displeasure, I did not
mean identifying the neural machinery that produces it. Clearly
that is one kind of “accounting for” – identifying the
physical mechanism that produces it – but what I had in mind was another
kind: identifying its “reason for being,” its
function in the operation of the system. Why should natural
selection have favored brains that produce such feelings and associate
them with certain other perceptions? How does this promote the
well-being of the organism? See my reply to Bill P.
(2002.06.14.0815 EST) for my (highly speculative) proposal.

Bruce A.

[From Bruce Abbott (2002.06.17.1240 EST)]

Bill Powers (2002.06.16.0734 MDT)

I agree that our conscious experiences depend (so far as we can tell from
the evidence) not only on perceptual signals, but on the context of
relevant brain-states in which they occur. I do not think that my
question assumes "that there is something special about sensations and
feelings we experience that makes them into “pleasant” or
“painful.” In fact, my question only asks you to explain, given your
own position (that feelings of pleasure etc. are correlates of error, or
changes in error, within control systems) the functional role of such
feelings in the operation of the overall brain system. What are they
doing there?

I’m proposing that they are not there – not as signals separate from the
experiences for which we have reference levels. I can convince myself that
there is, in the pleasant experience of chocolate ice-cream, nothing but
the chocolate ice cream (meaning all the sensations that go with it) and
the desire to eat it. Nothing more is actually required to make me want to
eat the ice cream and to do so when possible. The concept of a "pleasure
signal" is superfluous, because a pleasure signal can’t accomplish anything
more than a positive nonzero reference signal can do, by way of giving
value to different kinds of experiences.

The important question to be answered is not whether they appear to be
needed according to your reasoning, but whether they are there in the
real system. You could argue with equal force that there is no
consciousness, because the HPCT hierarchy as presently conceived does not
require it. Yet most of us would admit that consciousness does
exist in at least one human being.

Perhaps I have the cart before the horse: somehow, that dimension of
experience along which we measure pleasure/displeasure is simply our way
of labeling the experience of moving closer to or farther from some
reference level for some perception. We experience the increased or
decreased effort we expend, the changes in physiological arousal, and
call those changes pleasant or unpleasant. James and Lange
expressed a similar view over a century ago. Note, however, that
even this view asserts that some higher level in the brain is monitoring
these changes and interpreting them. Why?

Your position has it that the experience of tasting chocolate ice-cream
is pleasant because you desire to eat it, and I am still holding to the
more traditional view that I desire to eat it because the experience of
tasting chocolate ice-cream is pleasant. I believe that your
position begs the question of why you desire to eat it in the first
place, if not for the pleasure you derive from the act. A tasteless
mush with all the same nutrients would keep the intrinsic variables
involved equally well within their physiological ranges.

I think that the idea of pleasure and pain signals as separate entities
predated belief in the reinforcement idea, which requires that a special
effect on behavior exist which leads to an increase (or decrease) in
behavior that produces certain sensations. A reference signal plays the
same role, that of specifying that we will seek (high reference signal) or
avoid (low or zero reference signal) the same sensations, but in a way that
is under the control of the organism, not the environment. It’s another
question of phlogiston versus oxygen, or God versus levels of brain
organization higher than the current level occupied by awareness. Once we
persuade ourselves to believe a given explanation of human behavior, the
explanation all too easily turns into a self-evident observation. You can
just SEE the reinforcer making more behavior – it’s not even a theory any
more. You can detect the pleasure or the pain just as if it were a separate
sensation.

I have not proposed that the experience of pleasure associated with some
perception magically takes control over the organism. Whether some
perception is experienced as pleasant or not would partly depend on the
properties of the stuff that is stimulating the sensory receptors (rancid
chocolate ice-cream would probably yield a different evaluation than the
fresh stuff). But it is the organism that does the evaluating,
using factors that include not only the sensory experience per se,
but also the current states of many internal physiological
variables. The evaluation by the organism (that is, the
relevant parts of the organism’s brain) yields a value along a dimension
for which the organism has a control system in place, complete with
reference value. The process is identical to that by which other
higher-order perceptions are created.

The latter part of your above paragraph strays off topic, from discussing
my proposal to launching your usual attack on the concept of
reinforcement.

The cure for any such observation is to stop believing in it. If that makes
it no longer seem true, then it probably wasn’t true to begin with. And if
the fact refuses to go away – if the evidence keeps forcing you back to
the same conclusion – then the observation may have been real. Probably,
may have been – certainty is unattainable.

If I stop believing an observation, that makes it “no longer seem
true” by definition. I think that what you meant to convey was
the idea that one should always maintain a degree of skepticism toward
any supposed fact. Pretend for the moment that it isn’t true and
then examine where that assumption leads you. Does it lead to
contradiction? Or does the available evidence push you back to your
original belief in the observation? Descartes said to begin by
doubting. That’s certainly good advice.

My experience does not agree with yours with respect to comparing
different experiences. Yellow seems just as distinctly different from red
after I have been comparing those experiences for a while. I’m not able
to identify what makes them different, but they seem different all the
same.

Oh, I agree that they seem different. I’m not confused between yellow and
red – as long as I have an example of each one in the same field of view.
It’s as though they occur in different “places” in perceptual space, so
that I can perceive some sort of distance between them.

But when the field of view consists of one color only, it can become
difficult to say just what that color is. The human brain is wonderful at
constructing a normally colored scene under conditions of illumination that
actually change the wavelengths reflected by all the items in the scene.
What Edwin Land established was that the brain computes a sort of "average
gray" in a scene, and then assigns colors according to deviations from that
gray. In fact he showed that an identically colored patch can be
perceived as red or as yellow depending on the colors of other patches
around it, which in turn depends on the color of the illumination.

Perception is far more complex than anyone at first imagined. It
does not depend merely on the local activities of specific sensory
receptors, but depends instead on all sorts of local and global
comparisons whose results can be described without stretching things too
much as inferences, based on the available evidence. Most
perceptual systems depend strongly on change or difference, so that
steady stimulation by one value quickly leads to a fading away of the
perception arising from that stimulation.
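The dependence of perception on change, with steady stimulation fading away, can be sketched as a simple adaptation loop. This is an illustrative toy with invented names and constants, not a model of any actual receptor:

```python
# Illustrative sketch of sensory adaptation as a response to change:
# a constant stimulus produces a perceptual response that fades as an
# internal baseline drifts toward the stimulus level. The adaptation
# rate is an invented constant.

def adapt(stimuli, rate=0.2):
    baseline, responses = 0.0, []
    for s in stimuli:
        responses.append(s - baseline)        # perceive deviation from baseline
        baseline += rate * (s - baseline)     # baseline drifts toward stimulus
    return responses

responses = adapt([1.0] * 30)
# The first response is at full strength; later responses fade toward zero.
```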

Some experiences, under some conditions, are experienced not only as,
say, “chocolate,” but also as distinctly pleasant. Others, such as the
bitterness of quinine, may be experienced not only as bitter, but
distinctly unpleasant. Your proposal, if I understand it correctly, is
that such associated feelings are associated with the states of
particular control systems – the level of error in them and/or the
direction and rate of change in that error. I wouldn’t necessarily
disagree, but wonder why such feelings occur in consciousness under these
conditions if they have no functional significance.

I’m proposing that such feelings are simply the consequences of error. If
someone rams a jackknife blade into your hand, you not only get a
sensation with far too high a level of contrast with surrounding
sensations, but you feel strong efforts to withdraw your hand, and violent
preparations for action in your viscera. All that adds up to the
experience we categorize as “pain.” All sensations can occur at levels we
term painful if they are high enough. This in no way implies that there
are separate pain signals.

The simplest explanation is that pain is a term for sensations – any
sensations – that are far higher than their reference levels. Pleasure is
harder to characterize, but pleasure seems to go with transient reductions
in errors of any kind. Even finding that the camera I dropped still works
is pleasant, although it’s hard to imagine a pleasure signal provided by
nature to go with cameras, or with cars starting on a subzero morning, or
with a checkbook balancing the first time, or with a plowed furrow turning
out perfectly straight, or with a program running correctly at last. There
are just too many things that are pleasant or painful to think they are
each somehow connected to a pleasure center or a pain center – especially
things that haven’t been around for more than a few thousand years so
evolution could come into play.
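The rule stated here – pain is any perceptual signal driven far past its
reference level, while pleasure tracks transient reductions in error – is
concrete enough to sketch. Everything below (the function name, thresholds,
and numbers) is an invented illustration, not part of any published PCT
model:

```python
def label(prev_error, error, pain_level=10.0):
    """Categorize an experience from the error trajectory alone --
    no separate pleasure or pain signal is consulted."""
    if abs(error) > pain_level:
        return "painful"          # error far beyond the tolerable range
    if abs(error) < abs(prev_error):
        return "pleasant"         # a transient reduction in error
    return "neutral"

# Dropped camera found still working: error collapses from large to small.
print(label(prev_error=8.0, error=0.5))   # pleasant
# Jackknife in the hand: enormous error, regardless of trend.
print(label(prev_error=0.0, error=50.0))  # painful
```

The point of the sketch is only that a single error signal, read for
magnitude and for direction of change, suffices to generate both labels.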

Pain is only poorly understood at present, but for me at least, the
experience of pain is distinctly different, qualitatively, from the
sensory experience that the pain accompanies. Very bright light is
“blinding” in the sense that all one perceives is brilliant
whiteness, but the stabbing pain I seem to feel in my very eyeballs has a
tactile quality as well as an extreme unpleasantness about it.
Parts of the brain (in the midbrain tegmentum, for example) are capable
of modulating the level of pain experienced without at the same time
changing the primary sensory experience. I take that as evidence
that pain is a separate sensory experience.

As for the camera example, if pleasure/displeasure emerge from a
relatively high-level evaluative process, I can see no reason why that
process should be confined to “signals provided by
nature.” We learn to take pleasure in all sorts of
things.

>You’re missing the point. Having a common dimension onto which many
diverse experiences can be mapped provides a simple way to deal
effectively with all those experiences.

Yes, but is it a plausible way?

Yes. (:->

And given the idea of control systems with reference signals, is it even
necessary?

Perhaps the apparent lack of necessity is more due to the limitations on
our knowledge of what is necessary than to a real lack of
necessity. To repeat, there is no apparent necessity of
consciousness either, from the PCT viewpoint. According to your
reasoning, it follows that consciousness does not really exist.

Also, I dispute that mapping different experiences onto a common dimension
makes them easier to deal with effectively. That might be so if the same
kind of action were needed to deal with any painful experience, or to
sustain any pleasant experience. But that is definitely not the case: to
control any experience, a unique set of actions must be produced,
appropriate to the current state of the environment. The difficulty with
this “common dimension” is the same as the difficulty with the “final
common pathway” of Sherrington. If that were really how the nervous system
worked, there wouldn’t be nearly enough different behaviors to satisfy all
our different goals. The “common dimension” only sounds useful if you
refrain from asking how it translates into the appropriate action for each
occasion.

Not so. My proposal is not that there is only the perception of
pleasure/displeasure and then the organism has to somehow figure out from
this information only what needs to be done to bring that perception to
its reference level (although on occasion this may prove to be the case,
as when a dietary deficiency leads to a feeling of malaise, and one has
to experiment to discover what will make that feeling go away).
Usually there are perceptions whose changes in value are associated with
changes in experienced pleasure/displeasure. The person learns to
control these perceptions (by whatever means are discovered to work) as
the means of controlling the experienced
pleasure/displeasure.

>We control for the experience of pleasure/displeasure (the reference is
set at the pleasure end). We then learn what to do to achieve that
control, which will involve bringing the underlying variable (the one
associated with the current state of pleasure/displeasure) under control.

This is exactly the opposite of my idea. We learn to bring perceptions to
reference levels. When we succeed in doing so (or while we are succeeding)
we call the whole experience pleasant; when we fail, we call it painful.
That is what pleasure and pain ARE. There is no separate experience.
Pleasure and pain are categories, not sensations.

You are right: these conceptions are exactly opposite. As I
experience them, pleasure and pain are perceptions, not
categories.

>You are arguing that experiencing the feeling that something is “good” or
“bad” is unnecessary.

No, I’m not. I’m merely identifying what “good” and “bad” mean as the
experience of correcting errors or failing to correct them. That is how it
feels to succeed or fail at reducing error. No added signals are necessary
to explain the feelings.

But why does it, as you yourself put it, feel (is not a feeling a
perception?) like anything at all? These “feelings” have no role to play
whatsoever in your current conception.

>That brings us back to the problem I have raised – why have those
experiences at all?

Because those experiences ARE the very process of control that is at the
center of all we do or experience. What different kinds of experience have
in common is that they can have preferred states, and that we are
inherently organized to try to maintain them in those preferred states.
THAT is what is common to all experiences that we control. To categorize
some experiences as pleasant and others as painful is only to recognize,
dimly, what is common to all processes of control.

Where we do agree is that these subjective experiences are closely bound
to the processes of control. But I see no reason why we should
experience changes in error in control systems as anything at all.
There should be nothing to attach the labels to. We should just
control. But it seems to me (from my own subjective
experience, at least), that such changes often do carry with them another
perceptual experience, which I take to be arising from higher, evaluative
mechanisms in the brain.

>Consider the child born with no ability to feel pain. To that child,
pushing a sharp knife into the arm and watching it penetrate the skin and
blood gush out might simply be an interesting sensory experience. But to
the normal child, any sensory experience accompanied by pain is likely to
produce actions to reduce or eliminate that pain. By controlling the pain
level to zero, the child also tends to keep tissue damage to a minimum,
whether this tissue damage is caused by penetrating sharp objects, blunt
force, intense sound, intense light, chemical reactions – each of these
associated with a different perceptual signal.

Very logical – but spurious. You’re bringing in an imaginary intervening
variable, the “evaluative” experience of pain that is different from
feeling the damage caused by the knife and the extreme deviation of the
sensations from their preferred states. You don’t need _anything but that
deviation_ to explain the reaction. The sensations experienced by the
normal child are not some special pain signals; they are the immediate and
specific result of skin being ripped apart and tissues being damaged, with
the sensations that result being driven far from their normal and
preferred levels. That alone is enough to qualify as pain – an experience
to be rejected violently and automatically.

Then how do you explain the fact that the child born congenitally unable
to experience pain has otherwise normal sensory experiences – hot and
cold, light and deep pressure, normal hearing and vision, and so
on?

You have to remember that this whole concept of pleasure signals and pain
signals (and the associated “centers”) was an attempt to explain why
organisms seem to reject pain and seek pleasure, as if they did so
intentionally, of their own volition, purposively – all those terms that
scientists have considered incorrect, fraudulent, unscientific, and
disgusting. The only way to explain such behavior without invoking inner
direction was to imagine that certain experiences aroused some basic sense
of pleasure or its opposite, with that sense then causing the trophic or
phobic movements that we observe. That, in turn, is the basis of the idea
of reinforcement.

Perhaps your reconstruction of history is not entirely accurate.
Perhaps there came first the primary experiences of pleasure and pain and
later an attempt to account for behavior in stimulus-response terms that
invoked a reinforcement principle rather than the concept of
control. One could reject the reinforcement principle in favor of
control without having to deny the existence of the primary
experiences. (I’m using “primary” here to mean that they
are directly experienced in consciousness – the only stuff we can be
sure is “real.”)

>Alternatively, we have inherited references for “awfulness” and control
the associated sensory experiences in whatever way will bring the
awfulness toward its reference level. That explains not only how we end up
controlling the associated sensory experience but also the appearance of
the experience of “awfulness” in consciousness.

I agree: it is an explanatory scheme that allows us to explain phobic and
trophic behavior without ever having to speak of internal desires,
preferences, intentions, and goals. One doesn’t even need the idea of
reference signals. If experiences cause a sense of “awfulness”, then that
sense can negatively reinforce the connections between certain stimuli and
responses, reducing their likelihood, and conversely for a sense of
“wonderfulness.” There is no need for control theory – in fact, the
explanatory scheme you propose is part of a long effort to do away with
any idea that organisms determine their own preferences and thus define
for themselves (individually as well as species-wise) what they will find
to be good or bad.

Good lord, Bill, you’re straying into territory having nothing to do with
my proposal. Control of experience (whether of sense perceptions or
of “feeling” perceptions) is control regardless. It is
not “reinforcement.”

But if we decide that the control-system model is in fact appropriate,
then we don’t need any of the older explanations. We don’t need a
generalized pleasure or pain center; we don’t need pain signals and
pleasure signals. All we need are reference signals and control systems;
all the rest follows, including our subjective experiences that we
categorize as pleasure and pain.

Bill — hey Bill! That’s a windmill you’re jousting with!
I’m over here . . . .

Interesting that you had no comment on my discussion of
function.

Bruce A.

[From Bill Powers (2002.06.18.0848 MDT)]

Bruce Abbott (2002.06.17.1240 EST)

>The important question to be answered is not whether [special signals]
appear to be needed according to your reasoning, but whether they are
there in the real system. You could argue with equal force that there is
no consciousness, because the HPCT hierarchy as presently conceived does
not require it. Yet most of us would admit that consciousness does exist
in at least one human being.

Yes, these are difficult questions because so much depends on private
interpretations of experience. All we can do is "try on" various possible
points of view to see how they stand up under scrutiny. I'm proposing that
we try on the explanations of pleasant and unpleasant experiences that grow
out of HPCT.

>Perhaps I have the cart before the horse: somehow, that dimension of
experience along which we measure pleasure/displeasure is simply our way
of labeling the experience of moving closer to or farther from some
reference level for some perception. We experience the increased or
decreased effort we expend, the changes in physiological arousal, and call
those changes pleasant or unpleasant.

That's a very fair statement of the position I'm taking. It seems to me
that when you include _all_ the concomitants of increasing and decreasing
error, including the "changes in physiological arousal" that you mention
(feelings of elation or terror and so forth), there isn't much left to be
assigned to a special pleasure or displeasure signal. Think about that: if
you put aside the sensations of physiological state-changes which we
presume are part of the operation of the hierarchy of control systems, just
what do pleasure and pain feel like? I'm sure we both know what we mean by
"feeling good" and "feeling bad," so the question really boils down to what
we are detecting when we have such feelings. I say we are detecting normal
aspects of control-system operation. You don't appear to agree with that.
Am I misinterpreting you?

>Note, however, that even this view asserts that some higher level in the
brain is monitoring these changes and interpreting them. Why?

I think the argument right now is between our higher centers, yours and
mine. We are both talking about the feeling-states that accompany
successful and unsuccessful control. I, like you, can recognize that
certain feeling states are preferable to others, at least by me, and that
the reason they are preferable lies in what they signify about our ability
to control what happens to us. Where we seem to disagree is on the question
of whether something is needed _in addition_ to these feeling states to
tell us which ones to seek and which to avoid.

It's hard for me even to make that last statement in a way that has meaning
in PCT. A control organization _by definition_ seeks perceptual states that
match its reference levels, whether those be high or low levels. And
reference levels are set high or low according to what is needed at the
moment to control higher-order or more abstract variables. We do not have
the choice of seeking to experience something for which we have a low
reference level, or avoiding an experience for which we have a high
reference level. That's just a contradiction. And we can't arbitrarily set
a given reference level high or low without regard to the requirements of
higher-order systems -- in fact "we" is the same as those higher order
systems; we _are_ them. There simply isn't any room in the hierarchy for
inherited signals that make a given experience always good, and therefore
always to be sought, or bad and therefore always to be avoided. Some
experiences, such as injuries, apparently have low reference levels set by
genetics, but even they can be changed if higher levels demand it. Removing
a splinter requires controlling things in a way that increases pain, but we
can do it.
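The relationship described above – reference levels set not arbitrarily
but by higher-order systems controlling more abstract variables – can be
sketched as a toy two-level loop in which the higher system's output IS
the lower system's reference signal. The gains, the one-line environment
model, and all names are assumptions made up for this sketch, not Powers's
published simulations:

```python
# Toy two-level control hierarchy (illustrative assumptions throughout).
# The higher system does not act on the world directly; its output is
# the reference signal handed down to the lower system.

def output(reference, perception, gain):
    """Proportional controller: act in proportion to the error."""
    return gain * (reference - perception)

higher_ref = 100.0   # abstract goal, e.g. "splinter removed"
higher_perc = 0.0    # perceived progress toward that goal
lower_perc = 0.0     # the concrete sensed quantity being controlled

for _ in range(50):
    # The higher system's output sets the lower system's reference --
    # even a normally low reference (like one for pain) can be raised
    # when the higher-order goal demands it.
    lower_ref = output(higher_ref, higher_perc, gain=0.1)
    # The lower system acts to bring its perception to that reference.
    lower_perc += output(lower_ref, lower_perc, gain=0.5)
    # A crude stand-in environment: lower-level control moves the
    # higher-level perception, which decays without continued action.
    higher_perc += 0.5 * lower_perc - 0.1 * higher_perc
```

With these toy gains the loop settles quickly: the lower perception tracks
whatever reference the higher system currently emits, and "we" (the higher
system) never set the lower reference directly.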

>Your position has it that the experience of tasting chocolate ice-cream
is pleasant because you desire to eat it,

I think that's rather an oversimplification of my position. What is
pleasant is the satisfaction of many reference conditions at once, in this
case concerning textures, temperatures, tastes, and even social variables.
Our higher systems set positive reference levels for eating ice cream when
doing so is likely to result in all those satisfactions -- the successful
correction of error can itself become a goal. So my position is not all
that different from yours, at least operationally:

>and I am still holding to the more traditional view that I desire to eat
it because the experience of tasting chocolate ice-cream is pleasant.

That is why I would select the behavior of eating ice cream, too. I want to
experience the pleasantness of doing so. That is not what we disagree
about. What we disagree about is whether there is some fundamental
experience of pleasantness separate from the experiences involved in eating
ice cream that is needed for us to experience the result as pleasant. The
traditional view lacks the concept of a reference signal, and so can't
account for why we would seek or avoid any experience, other than to
propose that the pleasantness or unpleasantness of the experience guides
our actions.

>I believe that your position begs the question of why you desire to eat
it in the first place, if not for the pleasure you derive from the act.

I eat it in the first place because there are higher-level systems in me
that are responsible for setting the reference conditions that are
satisfied by means of eating behavior. We don't lack an account for why we
set reference levels or why we act to satisfy them. That is what HPCT is
entirely about. If you offer some _other_ explanation for why we seek the
tastes and other experiences of eating, then you're offering a substitute
for PCT -- we don't need two explanations for the same phenomenon,
especially not when they can't both operate at once.

>A tasteless mush with all the same nutrients would keep the intrinsic
variables involved equally well within their physiological ranges.

I'm glad you brought this up, and I agree with your reasoning. If that were
actually true, then that is what we would eat. But it is not true. The
tastes we find pleasant are those arising from control processes that
actually help correct intrinsic error signals. That is, the learned
hierarchy reorganizes until the reference signals the learned
systems generally set are those that result (indirectly) in correction or
prevention of intrinsic error signals. Those are the experiences we come to
class as "pleasant." It's not quite as simple as that, because some of the
intrinsic reference levels are, apparently, not strictly physiological --
such things as appreciation of beauty might, in some form at least, amount
to built-in reference signals. We learn to value positively those
perceptions which, in the process of being controlled, are associated with
reduction in _total_ intrinsic error.

[Skipping over some stuff to keep the length of these posts from expanding
exponentially]

>Pain is only poorly understood at present, but for me at least, the
experience of pain is distinctly different, qualitatively, from the
sensory experience that the pain accompanies.

In other words, you're telling me that for you, pain is not a sensation of
burning, or of skin receptors being ripped apart, or of damage to a bone.
So if you had none of the sensations that are aroused by such things, would
you still feel pain?

The problem here is that pain normally occurs when something has happened
that stimulates a lot of nerve-endings anyway. Only through artificial
means can you produce a pain signal "by itself." And like all other
sensory signals, stimulating a nerve to produce such a signal does not
prove that the signal is independent of the physical state it normally
represents.
Drilling into a tooth stimulates the nerve in the tooth -- the only one. It
is the same nerve that allows us to sense the pressure of chewing and thus
helps regulate that pressure. Overstimulating it with a drill hurts like
hell, but it is still the same nerve responding -- it's just responding way
too much. There is no pain signal separate from the normal signal. What
hurts is the vast excess of perceptual signal over the normal range of
reference levels.

>Very bright light is "blinding" in the sense that all one perceives is
brilliant whiteness, but the stabbing pain I seem to feel in my very
eyeballs has a tactile quality as well as an extreme unpleasantness about
it.

It may very well be a muscular reaction that you feel, along with an
extreme overload of brightness signal.

>Parts of the brain (in the midbrain tegmentum, for example) are capable
of modulating the level of pain experienced without at the same time
changing the primary sensory experience. I take that as evidence that
pain is a separate sensory experience.

Changing the reference level will do the same thing if my interpretation
is correct -- that is, if pain is simply a large excess of a perceptual
signal over its reference level. If you're straining to hear a very faint
sound, and someone speaks into your ear in a normal voice, the result might
easily be felt as painful.

>... My proposal is not that there is only the perception of
pleasure/displeasure and then the organism has to somehow figure out from
this information only what needs to be done to bring that perception to
its reference level (although on occasion this may prove to be the case,
as when a dietary deficiency leads to a feeling of malaise, and one has to
experiment to discover what will make that feeling go away).

When you say that "one has to experiment to discover what will make that
feeling go away," does not "one" now refer specifically to what I have
called the reorganizing system? And is not the "feeling" the intrinsic
state, or the intrinsic error, itself?

>You are right: these conceptions are exactly opposite. As I experience
them, pleasure and pain are perceptions, not categories.

Categories are perceptions, too, in HPCT (proposed as level 7).

>But why does it, as you yourself put it, feel (is not a feeling a
perception?) like anything at all? These "feelings" have no role to play
whatsoever in your current conception.

Why does anything feel like anything at all?

This is simply a question of explaining the feelings that we both admit to
experiencing. You want to have a separate "pleasure/displeasure" kind of
sensory signal, while I propose that the signals already existing in the
whole HPCT model (including the reorganizing-system hypothesis) are a
sufficient explanation of what we experience.

>Then how do you explain the fact that the child born congenitally unable
to experience pain has otherwise normal sensory experiences -- hot and
cold, light and deep pressure, normal hearing and vision, and so on?

I explain it by saying that the child is _not_ having normal sensory
experiences, or else does not have normal settings for reference signals.

>Interesting that you had no comment on my discussion of function.

Not very. I'll argue that another time. Briefly, when you speak of
functions as you do, you're simply referring to a component of a model -- a
perceptual function, for example. There's nothing new or interesting about
that. All that would be interesting would be the specific model that you
construct out of the proposed functions, assuming it would behave as you claim.

Best,

Bill P.

[From Bruce Nevin (06.22.2002 21:10 PDT -- back to EDT tomorrow)]

Bill Powers (2002.06.12.1241 MDT) --
At 12:55 PM 6/12/2002 -0600, Bill Powers wrote (responding to my quote of Bateson):

This is still an attempt to give some aspects of environmental processes a
causal influence on what we learn. If the sequence A, B leads to some
environmental result that is reinforcing, the effect is to increase the
likelihood that A will lead to B again. But what can give any environmental
result the ability to influence a likelihood in this way? That question, the
central problem of S-R theory, remains unanswered.

The basic observation remains valid: if a sequence A,B, or a configuration
A, or a relationship f(A,B), or a sensation B, is followed by satisfaction
of some goal (or at least a better approach to the goal), then higher
organisms are likely to create the sequence, configuration, relationship,
sensation, or whatever experience it was, again. But it is not the sequence
and so on that is valued: it is the satisfaction of the goal. The other
experiences take on only a secondary sort of value, whatever their
significance is in achieving an experience that is, through a
misinterpretation just as complete as the one that resulted in phlogiston,
mistaken for an environmental cause when it is really an intended effect.

For purposes of understanding learning, it doesn't matter so much what the S-R researchers believe. What matters is what the organism comes to 'believe' about its environment. The environment is so structured that in order to get result B the rat must produce a certain effect, A. The rat learns that if it produces effect A then result B follows. The S-R researcher may believe that she is controlling the rat's behavior. All she is controlling within the rat is the rat's perception of the structure of its environment, or, rather, of how to control successfully within this oddly structured environment. Really, all she is controlling is the structure of the rat's environment, and leaving it up to the rat to figure it out.

The things on the one hand that I'm not talking about: The rat may be quite mistaken in respect to what we know is 'really' going on [a bit of a joke there, yes?], just as we now know phlogiston theorists were quite mistaken. But the rat gets fed, and the phlogiston theorists were able to make some predictions that were borne out. Phlogiston was a flawed generalization, but as a generalization from observation it did support some valid predictions. The person who perceives the thermostat as a valve, and turns it all the way up to get the house warm faster, does succeed in getting the house warm. What they learned may be in error, but, like Newtonian vs. quantum/relativistic physics, it's good enough for a great deal of successful control. And [the joke, of course] we don't know and probably can't ever know that what we know is what's really going on. It's always possible that there's some perceptual grounding or point of view from which our most refined results of science are as ludicrous as phlogiston and phrenology are to us.

The things on the other hand that I'm not talking about: It's clear that S-R research is not about behavior.

I'm proposing that, their beliefs notwithstanding, S-R researchers are studying learning. Learning is about that which is learned -- to lean (with Bateson) on Whitehead & Russell, it is of a higher logical type. Maybe we can find out from S-R research something about learning, since that is what they have been dealing with, even if the researchers' attention has been misdirected.

And after all, what methods have been proposed within PCT for studying learning?

         /Bruce

[From Bill Powers (2002.06.23.9734 MDT)]

Bruce Nevin (06.22.2002 21:10 PDT -- back to EDT tomorrow) --

>For purposes of understanding learning, it doesn't matter so much what
the S-R researchers believe. What matters is what the organism comes to
'believe' about its environment. The environment is so structured that in
order to get result B the rat must produce a certain effect, A. The rat
learns that if it produces effect A then result B follows.

I'm uncomfortable with this way of putting it, especially for a rat but
even with most things people learn. To say that a rat learns to produce
result B by producing effect A is not the same as saying the rat learns
that IF it produces effect A THEN effect B follows. The latter is a
conditional proposition which can be considered and thought about
independently of whether the actual effects and results are occurring.

Apply this to a thermostat: what the thermostat "knows" is that IF a
contact closes, THEN the furnace will turn on. Of course we wouldn't say
that, but why not? What, specifically, does the thermostat lack that keeps
us from accepting this description? It lacks the kind of perception that
would allow it to create such a general proposition. What you attribute to
the rat's learning is specifically a logical proposition, the making of
which requires the ability to perceive and control logical propositions. I
don't think rats can do that, at least not much of it.

The metaphorical use of "believe" and "if...then" doesn't invalidate your
main point, with which I agree. It is the nature of the _experienced_
reality, not the actual reality, that determines what we will try to
control and how we will try to do it. The actual reality determines whether
such attempts will be successful and whether the choice of method is
efficient or entails unnecessary complications. But of course, as you
imply, the actual reality doesn't kindly inform us of these facts.

>The S-R researcher may believe that she is controlling the rat's
behavior. All she is controlling within the rat is the rat's perception of
the structure of its environment, or, rather, of how to control
successfully within this oddly structured environment. Really, all she is
controlling is the structure of the rat's environment, and leaving it up
to the rat to figure it out.

That is very nicely put. In truth, psychology has never actually studied
learning as a process, but only as an outcome. The S-R researcher makes it
_necessary_ for the rat to learn something if the rat is to eat or avoid
pain. It is then possible to study a little about the external conditions
under which the required behavior is acquired slower or faster, or not at
all. But nothing is learned or even proposed about the process of learning
itself -- that is, what capacities in the rat make it possible.

>The things on the one hand that I'm not talking about: The rat may be
quite mistaken in respect to what we know is 'really' going on [a bit of a
joke there, yes?], just as we now know phlogiston theorists were quite
mistaken. But the rat gets fed, and the phlogiston theorists were able to
make some predictions that were borne out.

Yes, and this shows that even we can learn to control without symbolizing
our environments and then reasoning correctly about them. We can "learn"
by trial and error, although in doing so we learn only the moves that
result in control, not the reason for which those moves work. Our
explanations of what we have learned may have nothing to do with the
control processes we actually carry out (just consider someone explaining
how he or she "responds to the environment").

But I don't believe the rat could be "mistaken" about what makes its
behavior work, because it can't be "right", either. It doesn't explain
things to itself as we do, in my opinion. If it can make happen what it
wants to happen, that is the end of the matter.

>Phlogiston was a flawed generalization, but as a generalization from
observation it did support some valid predictions.

Of course it did -- if it hadn't done so it would not have persisted for
150 years. When we theorize by generalizing, all that's required is that
the causal relationships be repeatable most of the time, enough of the time
to be useful. It's not required that our understanding of them be "correct"
in the sense of holding true all of the time and under all relevant
conditions. However, when we theorize by proposing underlying mechanisms,
the game changes. Now it's not satisfactory if the phenomenon occurs in
conformity with the properties of the mechanism only most of the time. The
exceptions become exceedingly important because we don't expect the
mechanism to change suddenly and capriciously. We must explain the
exceptions as well as the conforming observations. That's the sort of
theory PCT is.

>The person who perceives the thermostat as a valve, and turns it all the
way up to get the house warm faster, does succeed in getting the house
warm. What they learned may be in error, but, like Newtonian vs.
quantum/relativistic physics, it's good enough for a great deal of
successful control. And [the joke, of course] we don't know and probably
can't ever know that what we know is what's really going on. It's always
possible that there's some perceptual grounding or point of view from
which our most refined results of science are as ludicrous as phlogiston
and phrenology are to us.

Yes, and the more I read about modern physics the more convinced I become
that this is precisely what is happening (or lacking) there. But don't
forget that there is a change in the air, at least in the air you and I are
breathing. When we begin demanding a proposed mechanism rather than a
plausible generalization or rule, flights of mathematical fancy are no
longer good enough to pass for an explanation. We demand to know _what kind
of underlying system could act like that_. And we are not satisfied to
hear a description of such a mechanism: we want to see it, or at least a
workmanlike simulation of it, operating by itself. I think this was the
main difference between Priestley and Lavoisier: Lavoisier's proposal
amounted to a different underlying mechanism.

>I'm proposing that, their beliefs notwithstanding, S-R researchers are
studying learning. Learning is about that which is learned -- to lean
(with Bateson) on Whitehead & Russell, it is of a higher logical type.
Maybe we can find out from S-R research something about learning, since
that is what they have been dealing with, even if the researchers'
attention has been misdirected.

As I remarked earlier, I don't think they are studying learning so much as
the conditions under which learning takes place.

>And after all, what methods have been proposed within PCT for studying
learning?

I'm glad you asked, though I've given my answer before. Using PCT, I would
first construct a workable model of the behaving system for the kind of
behavior under study. This model would contain, among other things,
functions and parameters from which the final behavior would be generated.
Once a good model of the final behavior existed, we could study the way the
functions and parameters changed as a naive person went through the process
of learning the behavior. By matching the model to the person at each stage
of the learning process, we could see how the changes take place.

For example, if the learning is akin to reorganization as I have proposed
this process, we might expect to see a biased random walk as the
parameters change this way and that, with nothing systematic about the
changes themselves except how long they progress in a given direction. Or,
if the learning is algorithm-based or otherwise systematic, we might be
able to deduce the algorithm or the system that is being employed.
Classical studies of behavior, which I agree are almost entirely studies
of learning, never did anything like this as far as I know.
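The biased random walk described above can be sketched concretely as an
E. coli-style search: parameters keep changing in the current random
direction while total error falls, and "tumble" to a new random direction
when it stops falling. The error function, target, step size, and
iteration count below are invented for illustration only:

```python
# Hedged sketch of learning-as-reorganization: a biased random walk
# over a model's parameters. The parameter changes themselves are
# random; the only bias is that a direction is KEPT while total error
# keeps falling and re-randomized ("tumble") when it stops falling.
import random

random.seed(1)

def total_error(params, target=(2.0, -1.0)):
    """Stand-in for total intrinsic error: distance of the parameters
    from the (unknown to the walker) values that give good control."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0]
direction = [random.uniform(-1, 1) for _ in params]
step = 0.05
prev_err = total_error(params)

for _ in range(2000):
    params = [p + step * d for p, d in zip(params, direction)]
    err = total_error(params)
    if err >= prev_err:
        # Error stopped improving: tumble to a new random direction.
        direction = [random.uniform(-1, 1) for _ in params]
    prev_err = err
```

Plotting the parameter trace of such a run shows exactly what the
paragraph predicts: nothing systematic about the individual changes except
how long they persist in a given direction, yet the walk drifts reliably
toward low error.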

I hope I haven't appeared to disagree with you here about anything
important. That was a useful and thought-provoking post.

Best,

Bill P.