Behaviorism versus PCT

[From Bruce Abbott (2009.02.19.1910 EST)]

Bill Powers (2009.02.19.1516 MST) --

Bruce Abbott (2009.02.19.1410 EST)

> > Take a more immediate example: I assume you would say you see a
> > computer screen in front of you. Can you tell me how you verify that
> > there is a computer screen actually there, independent of all your
> > perceptions?
>
>There's no way to do that. All we have is our perceptions; the reality
>we believe to be beyond that direct experience is only an inference.
>The best that we can do is note that our perceptions usually point to
>the same conclusion as to what must be "out there": I not only see the
>computer screen, but when I visually perceive my hand to be in contact
>with the screen, I also perceive a feeling in what I take to be my
>fingertips that seems to correspond to what I see -- something smooth,
>flat, rectangular, now sporting a smudge at the apparent point of
>contact. But that's still an inference.

That's the way I figure it, too. There's no way to do that.

The reason I brought it up was that it seemed to be relevant to the
idea that we can modify behavior by properly arranging the environment.

Relevant in what way?

But perhaps I'm completely misreading you. In retrospect, I can't tell
if you're simply reporting the way behaviorists think, or endorsing it,
or perhaps see that view as entirely consistent with the PCT model so
there is no conflict.

Part of what I've done is to report the way behaviorists think, in order to
contrast that approach with PCT. I've hypothesized that the relative success
of behaviorism versus PCT may have to do with the ease with which the findings of
behaviorism can be applied in practical settings -- behavioral therapies,
more effective animal training methods, and better teaching technologies, to
name three. The usual goal of practitioners is to bring about some specific
kind of behavioral change, and these techniques of the behaviorist promise
to do just that. It is less obvious how the principles of PCT can be
applied, especially given the frequent assertion that, because people are
living control systems, they will resist any external attempts to change
them.

But people (and other animals) do change. Behaviorists have identified
systematic changes in behavior that result from their experiences when
interacting with their environments (broadly conceived, including other
people or animals). In its current state of development, PCT is rather vague
about all this; it's an area that desperately needs to be researched.

I sense that you disagree with my thesis that the environment in which a
living organism finds itself strongly affects the organization of its
control-systems -- what variables the organism comes to control and what
actions it uses to accomplish that control. But isn't that precisely what
reorganization accomplishes? And if someone is having difficulty maintaining
control over certain variables (leading to considerable distress), wouldn't
a properly structured environment facilitate effective reorganization? If
I'm endorsing anything out of my behaviorist past, it's the demonstrated
fact that what the organism does is systematically related to its
experiences, past and present, in given environments. I don't think it does
us or PCT any good to deny that.

I'd like to hear your take on that. Do you assert that the environment has
no influence on how the organism's control systems come to be organized?
That it is not possible, in theory, to discover interventions (changes to
the individual's environment) that would help an individual to function more
effectively, or with less distress?

As for your last-mentioned option, no, I don't see the behaviorist view as
entirely consistent with the PCT model. Skinner thought that behavior such
as the rat approaching a lever from any of a variety of initial positions
could be explained as the result of the variation in discriminative stimuli
resulting from being in different starting positions, and the reinforcement
of movement in different directions depending on the discriminative stimuli
currently visible from those starting positions. That's nonsense. The rat
learns at a very early age to control its perceptions of movement and
position relative to other things. What "reinforcing" the lever-press does
is teach the rat that approaching the lever and pressing it will allow it to
control the delivery of food pellets and thus reduce error in another
system, error that was induced by the experimenter through food deprivation.
The rat's already existing movement and position control systems simply have
their references set appropriately and the rat moves toward and then
contacts the lever.

The Skinnerian view is also incorrect in the assertion that "reinforcers"
increase the rate or probability of a behavior. The contingencies set up by
the experimenter allow the rat to control a variable such as food intake,
but whether there will be an increase or decrease in the rate or probability
of the lever-press will depend on the nature of the environmental feedback
function that the experimenter has set up, among other factors.

Finally, I point to a fundamental difference between the behaviorist and PCT
approaches. The behaviorist approach is functional; it makes no attempt to
understand the mechanisms within the organism that generate observable
behavior -- equivalent to learning what the various controls of your car do
without caring how they carry out those functions. PCT is thoroughly
mechanistic, proposing a specific organization of parts within the organism.
These proposals are testable in that the proposed organization of structures
can be shown (mathematically or through simulation) to behave in predictable
ways given a knowledge of the starting conditions and inputs. One can test
these predictions against actual behavior.

Once you have a valid model of the system's organization, you are in a
position to predict and understand what will happen under various
conditions: when disturbances push the system beyond its design limits, when
feedback is lost, or when particular components break down. Going back to
the car example, given a purely functional approach, when you turn the
ignition key and nothing happens, you are completely mystified. But if you
know the design of the system, you are in a position to perform appropriate
tests that will identify the fault ("Ah, the battery is dead!").
Furthermore, once you know how the system is structured, you can understand
what functional relationships it will produce. Knowing the functional
relationships that a system produces places constraints on the nature of the
system that produces those relationships, but further work is needed to
accurately characterize that system. This alone should be reason enough to
prefer strongly the mechanistic (PCT) approach over the purely functional
(behavioral) approach.

Bruce A.

[From Bill Powers (2009.02.20.0553 MST)]

Bruce Abbott (2009.02.19.1910 EST)–

Part of what I’ve done is to report the way behaviorists think, in order to
contrast that approach with PCT. I’ve hypothesized that the relative success
of behaviorism versus PCT may have to do with the ease with which the findings
of behaviorism can be applied in practical settings – behavioral therapies,
more effective animal training methods, and better teaching technologies, to
name three.

OK, so the goals of applied behaviorism have to do with changing the
behavior that we can observe people carrying out, getting animals to
perform behaviors that someone wants them to perform, and – I don’t
really know how to characterize teaching in behavioristic terms, except
maybe to say we want the students to say, write, and do certain
observable things when presented with questions or tasks to be done. How
would you put this?

The usual goal of practitioners is to bring about some specific kind of
behavioral change, and these techniques of the behaviorist promise to do just
that. It is less obvious how the principles of PCT can be applied, especially
given the frequent assertion that, because people are living control systems,
they will resist any external attempts to change them.

To bring about a specific kind of behavioral change – I take it that
this means some behavior is already observed to be going on (or is
lacking), and someone wishes or intends it to be different in some way
(or to start occurring). How does behaviorism explain that wish or
intention? That is, a person sees something going on, and then engages in
activities aimed at changing that into some other specific thing going
on. How is the new thing that is to happen determined? What selects it?
I’m talking about the practitioner who is facilitating the change, not
the subject whose behavior is to change. How does behaviorism explain the
behavior of the practitioner? I’m sure it does, I just wonder what the
specific explanation is, and how it sounds to you when you say it out
loud.

But people (and other animals) do change. Behaviorists have identified
systematic changes in behavior that result from their experiences when
interacting with their environments (broadly conceived, including other
people or animals). In its current state of development, PCT is rather vague
about all this; it’s an area that desperately needs to be researched.

I sense that you disagree with my thesis that the environment in which a
living organism finds itself strongly affects the organization of its
control-systems – what variables the organism comes to control and what
actions it uses to accomplish that control.

When you say “strongly affects the organization of its control
systems,” what kind of effect do you have in mind? Do you mean that
there are certain nerve signals that have predetermined effects on the
way the brain is organized? Is this a reference to
reinforcement?

But isn’t that precisely
what reorganization accomplishes? And if someone is having difficulty
maintaining control over certain variables (leading to considerable
distress), wouldn’t a properly structured environment facilitate
effective reorganization?

It does sound as if you’re thinking of the reinforcing effects of certain
kinds of stimuli. I know, since you have completely rewritten the
programs for Chapters 7 and 8 of my new book, that you fully understand
how E. coli reorganization relies not on systematic changes of
organization but completely random changes which have just as much chance
of making matters worse as of making them better. What sort of
environmental structure would tend to make that sort of reorganization
more effective? Is reorganization effective because of the effects of
random changes, or because of the way the organism selects for the
results it wants by shortening or lengthening the delay to the next
reorganization? I think it is the latter; the changes are always random
under reorganization theory, and the organism, not the environment,
determines which changes will be retained.
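
To make the E. coli principle concrete, here is a minimal sketch in Python.
The error function, step size, and parameter names are all invented for
illustration; the point is only that every change of direction is random, and
selection enters solely through when the next random change is allowed to
happen.

```python
import random

def ecoli_reorganize(error_fn, param, steps=5000, step_size=0.05):
    """E. coli-style reorganization: changes of direction are always random;
    the system only controls how soon the next random change occurs."""
    direction = random.choice([-1.0, 1.0])
    prev_error = error_fn(param)
    for _ in range(steps):
        param += direction * step_size          # keep "swimming" the same way
        err = error_fn(param)
        if err >= prev_error:                   # no improvement: tumble now
            direction = random.choice([-1.0, 1.0])
        prev_error = err                        # improvement: keep going
    return param

# The parameter drifts toward the zero-error point (3.0 here) even though
# no individual change of direction is anything but random.
print(ecoli_reorganize(lambda p: (p - 3.0) ** 2, param=0.0))
```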

If I’m endorsing anything out of my behaviorist past, it’s the demonstrated
fact that what the organism does is systematically related to its experiences,
past and present, in given environments. I don’t think it does us or PCT any
good to deny that.

Couldn’t this also be stated a different way? “The experiences of an
organism, past and present, in given environments, are systematically
dependent on what an organism does to its environment.” How do you
determine whether it is the environment that causes behavior to be
structured as it is, or behavior that causes the environment to be
structured as it is? I know you can determine this physically, but I see
no way to do it functionally. Physically, it’s clearly behavior that
causes reinforcement; there is no direct evidence that reinforcement
changes behavior. No behavior, no reinforcement under any normal schedule
of reinforcement. But there can be behavior without reinforcement; in
fact, there must be, if the first reinforcement is ever to occur. In
cause-effect terms, behavior is clearly the cause and reinforcement the
effect.

I’d like to hear your take on that. Do you assert that the environment has no
influence on how the organism’s control systems come to be organized?

Yes. I would say that the environment determines the effects that any
particular behavior will have on the sensory inputs to an organism, as
well as on its physical state. In PCT, however, those sensory inputs do
not cause behavior to become organized. The physical effects can stop
reorganization, but they don’t determine which effects of reorganization
will occur.

The environment (excluding other organisms) has no preferences for which
of its possible effects on the organism it will generate: the local
environment does whatever the behavior of the organism and unrelated
events elsewhere in the environment can make it do, regardless of the
effect on the organism. I would say that the organism has the determining
influence on the effects the environment has on the organism. The
organism alters its effects on the environment until the result it
experiences is the one it intends to experience. You rewrote the program
for Chapter 3 that shows a live block diagram of how this works, so I
know you understand this principle, as well as the E. coli principle of
reorganization.

That it is not possible, in theory, to discover interventions (changes to the
individual’s environment) that would help an individual to function more
effectively, or with less distress?

No, I don’t think that is possible very much of the time, either. To know
which changes in the environment would be helpful you would have to know
how the individual is organized inside – that is, what the
individual is trying to accomplish, and what sort of failure is causing
the distress. As the person reorganizes, the person will adjust the
environment in many ways until the result is a lessening of the distress,
a lessening of the error signals as we model the situation. The person is
immediately aware of the effect of changing the environment on whatever
is the problem; the therapist finds out about that only indirectly and
imperfectly. But the therapy is not done for the sake of increasing the
therapist’s understanding, so that’s all right.

Backing off a little: In behaviorism what do goals or intentions such as
functioning more effectively or experiencing less distress have to do
with anything? Do organisms somehow wish for sensory inputs that are
different from the ones that already exist? Is “distress”
something experienced, or is it simply a behavior that can be observed by
someone else? Does it even exist? Is it possible to “discover”
an intervention, or is the intervention simply a result of the
intervenor’s being presented with certain environmentally determined
contingencies, such that intervening is reinforcing to the intervenor and
not doing so is punishing or at least negatively reinforcing?

One of my criteria in developing the work behind PCT was that the theory
must apply just as well to the theorist as to other organisms. It must
fit the way behavior looks both from the outside and from the inside of
the behaving system. How well does the system of behavioristic beliefs
satisfy this criterion? For example, as you wrote the post to which I’m
replying, did it seem to you that what you were writing was the result of
complex stimuli playing on your nerve-endings? Were the results of your
key-pressing responses a complete surprise to you, the way Skinner
reported his (apparent) astonishment at finding himself depositing a
letter he had written in a mailbox? Were you engaging in typing behavior
because you found that the consequences made you want to type even more?
If that were true, how could you ever get out of that positive feedback
loop?

As for your last-mentioned option, no, I don’t see the behaviorist view as
entirely consistent with the PCT model. Skinner thought that behavior such as
the rat approaching a lever from any of a variety of initial positions could
be explained as the result of the variation in discriminative stimuli
resulting from being in different starting positions, and the reinforcement
of movement in different directions depending on the discriminative stimuli
currently visible from those starting positions. That’s nonsense.

Yes. Mostly.

The rat learns at a very early age to control its perceptions of movement and
position relative to other things. What “reinforcing” the lever-press does is
teach the rat that approaching the lever and pressing it will allow it to
control the delivery of food pellets and thus reduce error in another system,
error that was induced by the experimenter through food deprivation. The
rat’s already existing movement and position control systems simply have
their references set appropriately and the rat moves toward and then contacts
the lever.

Except that in the video of this process that you sent me, I didn’t see
anything like that idealized scenario occurring. What you describe is an
orderly process that follows a logical sequence. What I saw was a rat
swarming all over a food dish and a lever, equally and at the same time
with a nose in the dish and a hind leg or its rump on the lever. It
looked a lot as if the rat knew food came from that dish but had no idea
why it appeared. Yet clearly the rat was strongly motivated to find the
food and was looking in the place where it had last appeared. It was the
most strongly motivated, it seemed to me, before it had obtained any food
at all. Whatever was causing this behavior was clearly not the food,
because as food began to appear, the frantic searching calmed down and
the rat became somewhat more systematic about its actions. What I was
seeing fit a lot better with the concept of the rat as a reorganizing
control system than the rat as a stimulus-controlled system being shaped
by reinforcement to respond appropriately.

But chacun à son goût.

The Skinnerian view is also incorrect in the assertion that “reinforcers”
increase the rate or probability of a behavior. The contingencies set up by
the experimenter allow the rat to control a variable such as food intake, but
whether there will be an increase or decrease in the rate or probability of
the lever-press will depend on the nature of the environmental feedback
function that the experimenter has set up, among other factors.

I thought you had determined that the rate of lever-pressing didn’t
actually change at all – the apparent rate changes were due to two
factors. First, “rates” are traditionally determined by
dividing total presses during an experimental session by total time of
the session, and second, when reinforcements are delivered, the rat has
to stop pressing for a while to obtain and eat the food. So periods of
non-pressing were being measured along with periods of continuous
pressing, creating the appearance of changes in the rate of pressing.
When you eliminated the times spent consuming the food, the differences
in pressing rate disappeared. Am I remembering that correctly?
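
For concreteness, here is a small sketch of that arithmetic with invented
numbers: a hypothetical rat that always presses at 40 presses per minute
while it is actually pressing, and spends a quarter of a minute collecting
and eating each pellet. The session-wide "rate" then varies with the ratio
schedule even though the pressing itself never changes.

```python
def rates(ratio, session_min=60.0, pressing_rate=40.0, eating_min_per_pellet=0.25):
    # Split the session into pressing time and eating time:
    #   presses = pressing_rate * t_press
    #   pellets = presses / ratio
    #   t_press + pellets * eating_min_per_pellet = session_min
    t_press = session_min / (1 + pressing_rate * eating_min_per_pellet / ratio)
    presses = pressing_rate * t_press
    raw = presses / session_min       # total presses over total session time
    adjusted = presses / t_press      # eating time removed from the denominator
    return raw, adjusted

for ratio in (2, 8, 64):
    raw, adj = rates(ratio)
    print(f"FR-{ratio:>2}: raw {raw:4.1f}/min, adjusted {adj:4.1f}/min")
# The raw rate climbs with the ratio; the adjusted rate stays at 40/min throughout.
```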

Under the degree of deprivation normally used, rats apparently just press
as fast as they can if they press at all. In the video you sent me, the
rat spent a considerable amount of its lever-pressing time (after having
learned to feed itself relatively reliably) sniffing in corners of the
cage or on a platform far from the lever, sleeping while the clock ticked
on. Of course someone observing only the clock and the record of
bar-presses would see a variable rate of pressing from one session to
another. I don’t think we set up the conditions necessary for observing
any possible variations in rate of pressing. I think we would see them
mainly under conditions of very moderate deprivation, not deprivation to
the point of pegging the error meter.

Finally, I point to a fundamental difference between the behaviorist and PCT
approaches. The behaviorist approach is functional; it makes no attempt to
understand the mechanisms within the organism that generate observable
behavior – equivalent to learning what the various controls of your car do
without caring how they carry out those functions.

Yes. That is why it misconstrues the nature of behavior, isn’t it? It
looks only at inputs and outputs (actually it conjectures wildly about
processes inside the organism, but for philosophical reasons relabels
them as “behaviors” as if they could be observed – for
example, “distress”). I imagine that you find Rick’s continuous
harping on the “behavioral illusion” annoying, but in fact
behaviorism sets itself up to be completely fooled by that illusion. The
schedule of reinforcement is a feedback function. If the rats control for
a certain level of food intake over some averaging period, the average
rate of pressing required to provide that intake is completely determined
by the schedule – and the schedule has absolutely no effect on the
desired level of intake. As obesity studies have shown, if food is
artificially added to the intake independently of the bar-pressing
behavior, the bar-pressing behavior decreases: the generalization that I
have seen in the literature from many similar experiments is
“Non-contingent reward reduces behavior.” In our experiments we
evidently didn’t use the right averaging period, and anyway the rats were
being fed outside the experiments to control their body weight at a
specific level. In the obesity experiments they got all their food from
lever-pressing.
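
A hedged bit of arithmetic, with invented numbers, may make the
feedback-function point concrete: if the animal holds its intake at some
reference level, the average pressing rate required of it is fixed entirely
by the schedule, so the apparent effect of the schedule on "behavior" is just
the inverse of the feedback function.

```python
# Invented numbers, purely to illustrate the algebra of a fixed-ratio schedule.
desired_pellets_per_min = 0.5          # assumed reference level of food intake

for ratio in (2, 8, 64):               # FR schedules: presses required per pellet
    required_press_rate = desired_pellets_per_min * ratio
    print(f"FR-{ratio:>2}: {required_press_rate:5.1f} presses/min "
          f"to hold intake at {desired_pellets_per_min} pellets/min")
```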

But now I’m arguing and that’s not my aim here, though of course you
deserve to have your questions answered as much as I do.

PCT is thoroughly mechanistic,
proposing a specific organization of parts within the organism. These
proposals are testable in that the proposed organization of structures
can be shown (mathematically or through simulation) to behave in
predictable ways given a knowledge of the starting conditions and
inputs. One can test these predictions against actual
behavior.

Yes. In that way, PCT adheres to the methodology of the hard sciences.
PCT is basically a model of an organization which, if it actually existed
inside an organism, would have to behave in a specific way by its own
rules. That way is very close to the way the organism is actually
observed to behave. Physics is organized around many models of
unobservable entities like electrons and magnetic fields which, if they
really existed, would lead strictly from the organization of the model to
simulations of the behavior of matter that are just like the observations
we actually obtain.

What we find in physical investigations is that the immediate appearances
of behavior in material objects are often misleading, in that naive
observation gives results that conflict with other observations and
generalizations. Here’s an example. Boil an egg for 20 minutes. Dump the
hot water out of the pan and run cold water in, allowing 15 seconds for
the egg to be cool enough to hold. Take the egg out of the cold water and
hold it in your hand. You will feel the cold egg gradually warming up,
first getting as warm as your hand and then getting hotter and hotter
until you finally have to put it down somewhere to keep from being
burned. It seems that you have just disproven the Second Law of
Thermodynamics, which requires that heat always pass from the warmer body
to the colder body, so the colder body can never become warmer than the
warmer body with which it is in contact. The egg experiment demonstrates
that simply observing the exterior of an object does not give you all the
relevant facts you need to reach a correct conclusion or to predict
correctly. Inside that smooth opaque eggshell there is a store of heat
energy, in the central part of the egg which is still nearly at the
boiling temperature of water. You have cooled the outer few millimeters
of the egg, but the hot interior is warming that outer region, and your
hand. So understanding what you deduce theoretically about what is
happening inside the egg helps you see that there is really no conflict
with the basic laws of thermodynamics.

The stories that behaviorists tell about behavior are carefully adjusted
to fit all the facts that can be determined from observing inputs to and
outputs from organisms. As you have noticed, PCT uses the same facts but
tells a different story, and sometimes the stories do not agree. This is
generally the case: more than one story can be concocted to fit any
particular set of observations. The basic question is how we can pick the
best story to believe.

The first step, of course, is simply to ask how well the stories actually
fit the facts. If one story allows us to predict correctly 50% of the
time while another permits us to predict correctly 99% of the time, there
is no choice to be made: it’s obvious which story we should believe. We
don’t even need to ask whether there are any aspects of the rejected
story that we ought to keep. Why bother, when there are so many stories
that predict with that same or greater accuracy? And just to get some
small fraction of one percent better predictions?

The second step:

Once you have a valid model of the system’s organization, you are in a
position to predict and understand what will happen under various conditions:
when disturbances push the system beyond its design limits, when feedback is
lost, or when particular components break down. Going back to the car
example, given a purely functional approach, when you turn the ignition key
and nothing happens, you are completely mystified. But if you know the design
of the system, you are in a position to perform appropriate tests that will
identify the fault (“Ah, the battery is dead!”).

Furthermore, once you know how the system is structured, you can understand
what functional relationships it will produce. Knowing the functional
relationships that a system produces places constraints on the nature of the
system that produces those relationships, but further work is needed to
accurately characterize that system. This alone should be reason enough to
prefer strongly the mechanistic (PCT) approach over the purely functional
(behavioral) approach.

You have supplied the second step for me. Once two models have been shown
to predict with somewhere near the same accuracy, we then have to start
examining the premises that make the models different. We have to try to
verify those premises. Suppose we say that reinforcement theory predicts
somewhere in the same ballpark of accuracy as reorganization theory. Now
we have to test the premises by making more detailed predictions. If
reinforcement theory is correct, then giving more reinforcement should
work better than giving less. More reinforcement should increase the rate
or probability of occurrence of the same behavior that produced it. Well,
does it? The answer is no, it doesn’t. If a little salt makes soup more
palatable, as shown by what happens when it’s set before a hungry person,
what happens when we add ten times that amount of salt to the soup? Does
the person show an increased eagerness to finish the bowl of soup, given
the same amount of soup deprivation? Obviously not. Yuk.
In fact we will find that no kind of reinforcement, given in huge
amounts, will be reinforcing; in fact you would have to force-feed an
animal which has just eaten a normal meal to get it to eat another one
immediately after. The animal (or any organism) becomes
“satiated.” If you have a fine ear for theory, you will realize
that postulating satiation is a contradiction of the basic theory of
reinforcement. Satiation says that increasing reinforcement can
decrease behavior. Sure, the conditions have changed but so what?
What kind of basic theory is it that has to state “This theory is
true except when it’s exactly wrong?”
Reinforcement theory is simply a misunderstanding of what is most
probably happening, which is internally-directed
reorganization.

OK, having finished all that, I realize that your thoughts about
behaviorism are still ambiguous for me. When you said “The usual
goal of practitioners is to bring about some specific kind of behavioral
change, and these techniques of the behaviorist promise to do just
that,” I couldn’t tell whether you meant these techniques were
actually promising approaches – likely to succeed – or only meant that
behaviorists promise just such results whether they achieve them or
don’t. When you started the bit about mechanistic and functional
theories, I thought you were going to argue for functionalism (as you
have seemed to do in the past). But you surprised me by concluding that
the mechanistic approach of PCT was preferable. I have no doubt
whatsoever about your grasp of PCT being as good as mine (by which
ambiguous statement I mean to say it is just as good as mine). But it
still seems to me that you think of yourself as a behaviorist, a label
that to me means someone who tries to understand organisms by observing
their behavior and not trying to model their insides. So you can see that
I am still in confusion about your stance in these matters.

And I wonder if you are, too. Every now and then you have to work on yet
another edition of your methods book. I haven’t read it, though I should.
Since the original edition was written before you got into PCT, I can’t
help wondering whether the sighs and groans and putting-offs, as well as
other symptoms that I witness when the time rolls around, could perhaps
signify some degree of conflict about this task. Do you still believe in
the usefulness of everything you have written in that book? Are one or
two of those little demons whispering alarming things into your inner
ear? Is your support of certain behavioristic notions, or the whole
picture, still as serene and untroubled as ever?

I hope not.

And all you other closet True Believers, does anything in this
interchange make you uneasy about anything?

I hope so.

Best,

Bill P.

[From Bruce Abbott (2009.02.20.1225 EST)]

Bill Powers (2009.02.20.0553 MST) –

Bill, thank you for your thoughtful and detailed reply. It deserves the same
from me. However, I’m heading out to Indianapolis in a few minutes as our
daughter will be arriving from Italy tomorrow to spend a bit more than a
month with us. I will reply, but I’m not sure how soon. I’ll try to keep the
delay short.

Oh, one more thing: On one issue you raised: I’m not talking about
reinforcement.

Bruce A.

[From Bruce Abbott (2009.02.22.0835 EST)]

Bill Powers (2009.02.20.0553 MST)

Bruce Abbott (2009.02.19.1910 EST)

Part of what I’ve done is to report
the way behaviorists think, in order to
contrast that approach with PCT. I’ve hypothesized that the relative success
of behaviorism versus PCT may have to do with the ease with which the findings of
behaviorism can be applied in practical settings – behavioral therapies,
more effective animal training methods, and better teaching technologies, to
name three.

OK, so the goals of applied behaviorism have to do with changing the behavior
that we can observe people carrying out, getting animals to perform behaviors
that someone wants them to perform, and – I don’t really know how to
characterize teaching in behavioristic terms, except maybe to say we want the
students to say, write, and do certain observable things when presented with
questions or tasks to be done. How would you put this?

Teaching – Well, the goal would be to make the learning
process more effective, so that students learn more quickly, with less effort,
gain a deeper understanding of the subject matter, and experience much greater
success. Implementation has included such features as breaking down what is to
be learned into small, easy steps (wherein the student is highly likely to succeed),
giving immediate feedback, and providing immediate remedial steps when an error
occurs.

The usual goal of
practitioners is to bring about some specific
kind of behavioral change, and these techniques of the behaviorist promise
to do just that. It is less obvious how the principles of PCT can be
applied, especially given the frequent assertion that, because people are
living control systems, they will resist any external attempts to change
them.

To bring about a specific kind of behavioral change – I take it that this
means some behavior is already observed to be going on (or is lacking), and
someone wishes or intends it to be different in some way (or to start occurring).
How does behaviorism explain that wish or intention? That is, a person sees
something going on, and then engages in activities aimed at changing that into
some other specific thing going on. How is the new thing that is to happen
determined? What selects it? I’m talking about the practitioner who is
facilitating the change, not the subject whose behavior is to change. How does
behaviorism explain the behavior of the practitioner? I’m sure it does, I just
wonder what the specific explanation is, and how it sounds to you when you say
it out loud.

Skinner explicitly recognized that it’s a two-way street,
that the behaviors of both individuals (teacher and learner, trainer and
trainee, therapist and client) are subject to the same laws; each individual
has effects on and is affected by the other’s behavior. What we label as
intentions are observations of our own response probabilities. According
to Skinner, when we say “I intend to go to the bank,” we mean that
under these conditions we are highly likely to go to the bank. This high
probability is the result of our previous history of reinforcement and current
inputs.

This explanation strikes me as inelegant and implausible,
especially when one has a different, mechanistic model that beautifully
captures the essence of intention or purpose. (I speak of course of PCT.)

What selects the “new thing that is to happen”?
Ultimately, from the Skinnerian position, it’s a process of variation and
a kind of natural selection. Certain consequences of a person’s (or
animal’s) behavior are reinforcing under certain conditions, owing to
phylogeny and/or a prior reinforcement history. To say that something
“does the selecting” reifies the process – there is no agent
doing the selecting, just as there is no “nature” “doing”
the selecting in natural selection. When certain consequences follow a behavior
under given circumstances, that behavior becomes more likely to occur when
those circumstances are present.

PCT offers a different explanation, of course, but the process
envisioned also involves variation and selection. Persistent errors in an
individual’s control systems increase the rate at which the reorganizing
system makes presumably random changes to system organization. The
reorganization system slows when changes occur that reduce these persistent errors,
freezing in the new, more effective organization. PCT’s reorganization
mechanism currently lacks a targeted way to alter only those control systems
where those persistent errors are present, a disadvantage relative to the
reinforcement interpretation, which offers an explanation for why only
particular behaviors change (the ones whose consequences are reinforcing).

But people (and other animals) do
change. Behaviorists have identified
systematic changes in behavior that result from their experiences when
interacting with their environments (broadly conceived, including other
people or animals). In its current state of development, PCT is rather vague
about all this; it’s an area that desperately needs to be researched.

I sense that you disagree with my thesis that the environment in which a
living organism finds itself strongly affects the organization of its
control-systems – what variables the organism comes to control and what
actions it uses to accomplish that control.

When you say “strongly affects the organization of its control
systems,” what kind of effect do you have in mind? Do you mean that there
are certain nerve signals that have predetermined effects on the way the brain
is organized? Is this a reference to reinforcement?

No. Let’s say that control over some controlled variable
is poor. The rate of reorganization increases, and we observe an increase in
the variability of the individual’s behavior. As it happens, the
environment is structured in such a way that a feedback function exists between
a particular one of those variable actions and the controlled variable, closing
the loop and permitting more effective control. If the reorganization process happens
to select this output function for the control system, reorganization slows and
the modified control system is retained – variation and selective
retention.

Another example: the environment is so structured that a certain
environmental feedback function exists only under specific conditions
identified by the presence of certain perceptions. For example, I learn that I
can hear “Car Talk” on the local public radio station only on certain
days and times. So when I want to listen to “Car Talk,” I wait
until I perceive that it is the right day and time before attempting to
tune in the show. The day and time (as indicated by my calendar and watch) have
influenced my efforts to control for hearing “Car Talk.” From a PCT
perspective, I don’t understand how this can happen if, as you seem to be
saying, the environment has no influence over what I’m likely to do when
I’d like to be listening to the program.

But isn’t that precisely what reorganization
accomplishes? And if someone is having difficulty maintaining control over
certain variables (leading to considerable distress), wouldn’t a properly
structured environment facilitate effective reorganization?

It does sound as if you’re thinking of the reinforcing effects of certain kinds
of stimuli.

No, but I am thinking about the
changes in behavior that occur under those conditions wherein “reinforcing
consequences” are said by behaviorists to select certain behaviors for “strengthening,”
and bring those behaviors under “discriminative control.” These are
real, replicable phenomena even if they are incorrectly interpreted. They need
to be understood within the PCT framework, not denied.

I know, since you have completely rewritten the programs for
Chapters 7 and 8 of my new book, that you fully understand how E. coli
reorganization relies not on systematic changes of organization but completely
random changes which have just as much chance of making matters worse as of
making them better. What sort of environmental structure would tend to make
that sort of reorganization more effective?

How about something like “Try
swinging the bat like this, Johnny! Pay attention to how it feels
when you swing it that way.” Any structure that would make it more likely
to develop a better control structure.

Is reorganization effective because of the effects of random
changes, or because of the way the organism selects for the results it wants by
shortening or lengthening the delay to the next reorganization? I think it is
the latter; the changes are always random under reorganization theory, and the
organism, not the environment, determines which changes will be retained.

I don’t think that the changes are always completely
random, although they may be random if there are no better ways available to
discover what works. So-called trial-and-error learning isn’t the only
possible method. The organism (more specifically, the organism’s
reorganization mechanism) “selects for the results it wants” (stops
reselecting when persistent error in the affected control system goes away),
but the environment in which that selection happens may make some solutions more
easily discoverable via this random walk than otherwise. (I’m thinking of
shaping by successive approximation as one example.)

If I’m endorsing anything out
of my behaviorist past, it’s the demonstrated
fact that what the organism does is systematically related to its
experiences, past and present, in given environments. I don’t think it does
us or PCT any good to deny that.

Couldn’t this also be stated a different way? “The experiences of an
organism, past and present, in given environments, are systematically dependent
on what an organism does to its environment.” How do you determine whether
it is the environment that causes behavior to be structured as it is, or behavior
that causes the environment to be structured as it is? I know you can determine
this physically, but I see no way to do it functionally. Physically, it’s
clearly behavior that causes reinforcement; there is no direct evidence that
reinforcement changes behavior. No behavior, no reinforcement under any normal
schedule of reinforcement. But there can be behavior without reinforcement; in
fact, there must be, if the first reinforcement is ever to occur. In
cause-effect terms, behavior is clearly the cause and reinforcement the effect.

Well, I’m not arguing for the reinforcement
interpretation. What I am arguing for is that, although behavior affects
the environment, the reverse is also true. What works in one environment may
fail entirely in another.

Environments can be structured according to someone’s
plan, but this is rarely the case. Most environments in which one finds
oneself are just there. The individual can act in such ways as to change
some elements of that environment, but not all aspects of the environment are
modifiable, or at least worth the effort to do so. The individual must adjust
to those constraints.

I’d like to hear your take on that.
Do you assert that the environment has
no influence on how the organism’s control systems come to be organized?

Yes. I would say that the environment determines the effects that any
particular behavior will have on the sensory inputs to an organism, as well as
on its physical state. In PCT, however, those sensory inputs do not cause
behavior to become organized. The physical effects can stop reorganization, but
they don’t determine which effects of reorganization will occur.

I do not argue that sensory inputs cause behavior to become
organized. I argue that organisms learn about their environments and take
advantage of whatever contingencies they discover will help them better to
control those variables they control. Those can include environmental feedback
functions (contingencies between potential actions and effects on sensed
variables) and predictive relationships (relationships between input
variables). Different environments include different contingencies, which
constrain what the organism can learn to control and the means by which it
can do so.

The environment (excluding other organisms) has no preferences for which of its
possible effects on the organism it will generate: the local environment does
whatever the behavior of the organism and unrelated events elsewhere in the
environment can make it do, regardless of the effect on the organism. I would
say that the organism has the determining influence on the effects the
environment has on the organism. The organism alters its effects on the
environment until the result it experiences is the one it intends to
experience. You rewrote the program for Chapter 3 that shows a live block
diagram of how this works, so I know you understand this principle, as well as
the E. coli principle of reorganization.

That it is not possible, in theory,
to discover interventions (changes to
the individual’s environment) that would help an individual to function more
effectively, or with less distress?

No, I don’t think that is possible very much of the time, either. To know which
changes in the environment would be helpful you would have to know how the
individual is organized inside – that is, what the individual is trying
to accomplish, and what sort of failure is causing the distress. As the person
reorganizes, the person will adjust the environment in many ways until the
result is a lessening of the distress, a lessening of the error signals as we
model the situation. The person is immediately aware of the effect of changing
the environment on whatever is the problem; the therapist finds out about that
only indirectly and imperfectly. But the therapy is not done for the sake of
increasing the therapist’s understanding, so that’s all right.

Well, there are ways that therapists can discover some of these
things – for example by asking the client. The therapist can use whatever
information is available to form hypotheses. For example, the client may be
distressed because she’s in an abusive relationship with her husband. She
probably can give you reasons why she doesn’t just leave. Helping her to
understand that other options are possible, to perceive the situation
differently, might help her to resolve her difficulty. (The method of
levels might prove effective here as well; but even there you, the therapist,
are setting up conditions that can help the client to develop new perceptions
of her situation and options.)

Backing off a little: In behaviorism what do goals or intentions such as
functioning more effectively or experiencing less distress have to do with
anything? Do organisms somehow wish for sensory inputs that are different from
the ones that already exist? Is “distress” something experienced, or
is it simply a behavior that can be observed by someone else? Does it even
exist? Is it possible to “discover” an intervention, or is the
intervention simply a result of the intervenor’s being presented with certain
environmentally determined contingencies, such that intervening is reinforcing
to the intervenor and not doing so is punishing or at least negatively
reinforcing?

The Skinnerian argument was never that these internal factors do
not exist, but rather that they are not accessible to the observer and, at any
rate, are effects that can be traced to events happening to the individual that
are located in the environment and thus potentially observable. Thus, a
behaviorist would not deny that people sometimes feel distressed (generalizing
from his or her own experiences and correlated behavioral manifestations). But
distress in a client would be defined in terms of observable behaviors, and
its causes, to the extent that they can be determined from available
information, would be found in the individual’s environment and in the
changes in the individual’s behaviors that exposure to its contingencies
produces.

Your last question above seems a bit garbled to me. Of course an
intervention can be discovered and its effectiveness assessed. As to why a
therapist would be motivated to help a client, that would depend on the
therapist’s own history of experience, but for a therapist who is
genuinely interested in helping, the behaviorist would have to presume that the
therapist’s own behaviors would be shaped by the contingencies, e.g.,
whether the client’s behavior is perceived by the therapist to have
changed in ways that are perceived as beneficial to the client, such a change
acting to reinforce the therapist’s own behavior with respect to the use
of such an intervention.

One of my criteria in developing the work behind PCT was that the theory must
apply just as well to the theorist as to other organisms. It must fit the way
behavior looks both from the outside and from the inside of the behaving
system. How well does the system of behavioristic beliefs satisfy this
criterion? For example, as you wrote the post to which I’m replying, did it
seem to you that what you were writing was the result of complex stimuli
playing on your nerve-endings? Were the results of your key-pressing responses a
complete surprise to you, the way Skinner reported his (apparent) astonishment
at finding himself depositing a letter he had written in a mailbox? Were you
engaging in typing behavior because you found that the consequences made you
want to type even more? If that were true, how could you ever get out of that
positive feedback loop?

No. As you were typing your reply, did it seem to you that
what you were writing was the result of a complex interplay of sensory input,
changes in vast numbers of neural firing rates, and time-varying reference
signals? I doubt it, except after the fact as you retrospectively apply PCT to
explain what was going on. We need not be aware of all the factors that
determine each specific movement, but that doesn’t mean that they are not
at work. But I perceive that you are pursuing another point, which has
been leveled against Skinner’s position by others: as we do things, we
generally know what we are trying to accomplish and how well we are doing so
– we have a goal and a means for reaching that goal. We are not
simply reeling off a series of responses whose ultimate result (e.g., the text
of a coherent argument) comes as a surprise to us. I couldn’t agree with
you more.

As for your last-mentioned option,
no, I don’t see the behaviorist view as
entirely consistent with the PCT model. Skinner thought that behavior such
as the rat approaching a lever from any of a variety of initial positions
could be explained as the result of the variation in discriminative stimuli
resulting from being in different starting positions, and the reinforcement
of movement in different directions depending on the discriminative stimuli
currently visible from those starting positions. That’s nonsense.

Yes. Mostly.

Mostly?

The rat learns at a very early age
to control its perceptions of movement and
position relative to other things. What “reinforcing” the lever-press
does
is teach the rat that approaching the lever and pressing it will allow it to
control the delivery of food pellets and thus reduce error in another
system, error that was induced by the experimenter through food deprivation.
The rat’s already existing movement and position control systems simply have
their references set appropriately and the rat moves toward and then
contacts the lever.

Except that in the video of this process that you sent me, I didn’t see
anything like that idealized scenario occurring. What you describe is an
orderly process that follows a logical sequence. What I saw was a rat swarming
all over a food dish and a lever, equally and at the same time with a nose in
the dish and a hind leg or its rump on the lever. It looked a lot as if the rat
knew food came from that dish but had no idea why it appeared. Yet clearly the
rat was strongly motivated to find the food and was looking in the place where
it had last appeared. It was the most strongly motivated, it seemed to me,
before it had obtained any food at all. Whatever was causing this behavior was
clearly not the food, because as food began to appear, the frantic searching
calmed down and the rat became somewhat more systematic about its actions. What
I was seeing fit a lot better with the concept of the rat as a reorganizing
control system than the rat as a stimulus-controlled system being shaped by
reinforcement to respond appropriately.

Up to those studies, my experience with operant conditioning had
been almost entirely restricted to a situation in which only a single
lever-press was required in order to produce the programmed consequence, and
that behavior occurred at most only at one-minute intervals. My view of what we
saw in those tapes was that the apparatus was not very well designed for
producing optimal behavior on schedules in which relatively high rates of
lever-pressing were being generated. It was clear that the
“discriminative control” that might have been expected to develop from the
sound of the feeder operating and/or proprioceptive feedback from the
movement or clicking of the lever had not developed. That is, the rat was not
waiting for these sources of feedback to occur before moving to the food cup,
as might be expected given that food was not delivered until the lever was
completely depressed and the feeder had been activated. Placing the food cup
further from the lever might have worked better.

Here would be the behaviorist interpretation: The rat was
food-deprived, an “establishing operation” needed to make the food
pellets serve as effective reinforcers. Any bit of behavior that was effective
in producing food could be expected to be “strengthened” (increased
in probability) when it was closely followed by food, which would include any
pellet dust already present in the food cup. I can’t recall whether I had
given these animals any explicit magazine training (baiting the food cup with
pellets and allowing them to find the pellets during their exploration of the
chamber), but even without that they would quickly have discovered any food
once it had been delivered consequent on even accidental contact with the
lever. This would be expected to reinforce activity in and around the food
cup. Given that the lever was nearby, such activity would increase the
probability that the rat would contact and occasionally depress the lever. The
apparatus enforced an environmental feedback function such that lever presses
often activated the feeder, resulting in the delivery of a food pellet into the
nearby food cup and reinforcing the act of depressing the lever.

PCT and the behaviorist view both recognize that the rat engages
in variable activity prior to encountering the contingency between
lever-pressing and food availability. Both envision a process of variation and
selective retention (behaviorists refer to the “shaping” of
behavior: the more closely associated with the reinforcer, the stronger a given
behavior becomes relative to other behaviors, until the behaviors most
effective in producing the reinforcer are the ones that dominate). For
behaviorists, certain consequences have the observable effect of increasing the
occurrence of the behaviors that produce them, but to function as such certain
“establishing operations” might be required. These are empirically
established relationships. Consequences that have this observable effect are
called “reinforcers.” PCT can explain these empirical
relationships by noting that food deprivation acts as a disturbance to the
rat’s nutrient-control systems and that consuming nutrients is the
behavioral act that counters this disturbance. The experimenter-established
relationship between lever-pressing and pellet delivery provides an
environmental feedback function that completes the feedback loop. The
disturbance of food deprivation unleashes the reorganization system, generating
variable actions until one (lever-pressing) succeeds in reducing the level of
deprivation by providing food pellets for the rat to consume. Reduction of this
error slows reorganization, so that lever-pressing is left as the means by
which the rat counters the disturbance.
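
A toy simulation may help in following this account. Everything in it is an
assumption made for illustration (a single lumped "nutrient" variable, one
output weight standing in for the whole lever-pressing organization,
arbitrary constants); it is a sketch of the variation-and-selective-retention
idea, not a model fitted to any of the studies discussed here.

```python
import random

random.seed(1)

reference = 10.0          # desired nutrient level
nutrient = 0.0            # food deprivation: the disturbance is already in place
press_weight = 0.0        # how strongly error is converted into lever-pressing
pellets_per_press = 0.5   # the experimenter's environmental feedback function

for t in range(300):
    error = reference - nutrient
    presses = max(0.0, press_weight * error)    # output of the current organization
    nutrient += presses * pellets_per_press     # effect fed back through the environment
    nutrient *= 0.9                             # metabolism keeps pulling the level down

    # Reorganization: the larger the error, the more likely a random change to
    # the organization; small error leaves the current organization alone.
    if random.random() < min(1.0, abs(error) / reference):
        press_weight += random.uniform(-0.2, 0.2)

print(f"press weight {press_weight:.2f}, remaining error {reference - nutrient:.2f}")
```

The random changes themselves are blind; lever-pressing persists only
because, once it begins reducing the error, the reorganizing step stops
firing.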

But chacun à son goût.

Beg your pardon?

The Skinnerian view is also incorrect
in the assertion that “reinforcers”
increase the rate or probability of a behavior. The contingencies set up by
the experimenter allow the rat to control a variable such as food intake,
but whether there will be an increase or decrease in the rate or probability
of the lever-press will depend on the nature of the environmental feedback
function that the experimenter has set up, among other factors.

I thought you had determined that the rate of lever-pressing didn’t actually
change at all – the apparent rate changes were due to two factors. First,
“rates” are traditionally determined by dividing total presses during
an experimental session by total time of the session, and second, when
reinforcements are delivered, the rat has to stop pressing for a while to
obtain and eat the food. So periods of non-pressing were being measured along
with periods of continuous pressing, creating the appearance of changes in the
rate of pressing. When you eliminated the times spent consuming the food, the
differences in pressing rate disappeared. Am I remembering that correctly?

In the behaviorist view, reinforcers are going to increase
response rates up to a limit. This may be the physical limit of the
rat’s ability to press the lever quickly, or it may be a limit produced
by the increasingly punishing effect of maintaining higher and higher rates, so
that an equilibrium sets in, or it may be that there is competition for
behavioral resources by other sources of reinforcement.

What I discovered was that varying the ratio of lever-presses to pellet
deliveries between two and sixty-four had no discernible influence on rate of
responding, once the time required to collect the food
pellet was removed from the calculation. But of course these rats were
responding on the lever at higher rates than would be observed if
lever-pressing did not produce food pellets. To me this indicated that the rats
were already responding at their limit (given the very inefficient way they
were pressing and scrabbling around the food cup), so that changes in ratio
were without effect.

Under the degree of deprivation normally used, rats apparently just press as
fast as they can if they press at all. In the video you sent me, the rat spent
a considerable amount of its lever-pressing time (after having learned to feed
itself relatively reliably) sniffing in corners of the cage or on a platform
far from the lever, sleeping while the clock ticked on. Of course someone
observing only the clock and the record of bar-presses would see a variable
rate of pressing from one session to another. I don’t think we set up the
conditions necessary for observing any possible variations in rate of pressing.
I think we would see them mainly under conditions of very moderate deprivation,
not deprivation to the point of pegging the error meter.

Good experimenters observe their subjects. The only time we
observed the rats sleeping, etc., was during our weight-control study, when the
rats earned one food pellet per lever-press and were allowed to continue until
sated. From the behaviorist perspective, as food consumption removes food
deprivation, the loss of the “establishing operation” would be
expected to reduce and finally eliminate the reinforcing effect of food
delivery.

The empirical relationships discovered by varying schedule
values were first determined by examining the cumulative record, which
continuously displays the instantaneous rate of responding across the
experimental session as a function of time. They were not determined by
averaging rates across entire sessions.

Finally, I point to a fundamental difference between the behaviorist and PCT
approaches. The behaviorist approach is functional; it makes no attempt to
understand the mechanisms within the organism that generate observable
behavior – equivalent to learning what the various controls of your car do
without caring how they carry out those functions.

Yes. That is why it misconstrues the nature of behavior, isn’t it? It looks
only at inputs and outputs (actually it conjectures wildly about processes
inside the organism, but for philosophical reasons relabels them as
“behaviors” as if they could be observed – for example,
“distress”). I imagine that you find Rick’s continuous harping on the
“behavioral illusion” annoying, but in fact behaviorism sets itself
up to be completely fooled by that illusion. The schedule of reinforcement is a
feedback function. If the rats control for a certain level of food intake over
some averaging period, the average rate of pressing required to provide that
intake is completely determined by the schedule – and the schedule has
absolutely no effect on the desired level of intake. As obesity studies have
shown, if food is artificially added to the intake independently of the
bar-pressing behavior, the bar-pressing behavior decreases: the generalization
that I have seen in the literature from many similar experiments is
“Non-contingent reward reduces behavior.” In our experiments we
evidently didn’t use the right averaging period, and anyway the rats were being
fed outside the experiments to control their body weight at a specific level.
In the obesity experiments they got all their food from lever-pressing.
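
To spell out the arithmetic behind “completely determined by the schedule”: on
a fixed-ratio schedule the average pressing rate must equal the ratio times the
pellet rate, so once the rat’s controlled intake is fixed, the schedule (the
feedback function) alone dictates how much pressing is required. A trivial
Python illustration, with an invented intake figure:

# The rat is assumed to control intake at 1 pellet per minute, whatever the schedule.
controlled_intake = 1.0                   # pellets per minute (the controlled variable)
for ratio in (2, 8, 32, 64):              # fixed-ratio schedule value
    required = ratio * controlled_intake  # presses per minute the schedule demands
    print(f"FR-{ratio:2d}: {required:5.1f} presses/min to hold intake at "
          f"{controlled_intake} pellet/min")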

Actually, I don’t find Rick’s “continuous harping on the behavioral illusion”
at all annoying, but I don’t
agree that “behaviorism sets itself up to be completely fooled by that
illusion.” You’d have to know a lot more about behavioral research
to understand why I believe that most of it does not fall prey to the
behavioral illusion.

But now I’m arguing and that’s not my aim here, though of course you deserve to
have your questions answered as much as I do.

PCT is thoroughly mechanistic,
proposing a specific organization of parts within the organism. These proposals
are testable in that the proposed organization of structures can be shown
(mathematically or through simulation) to behave in predictable ways given
knowledge of the starting conditions and inputs. One can test these
predictions against actual behavior.
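
As a minimal illustration of what “testable through simulation” means, here is
a bare-bones control loop in Python, in which the perception is simply the
output plus a disturbance. The gain, slowing factor, and disturbance pattern
are arbitrary choices, not parameters of any real organism; the point is only
that, given the starting conditions and inputs, the model’s behavior at every
instant follows strictly from its organization and can be compared with what a
real subject does.

dt, gain, slowing = 0.01, 50.0, 0.1   # arbitrary loop parameters
reference, output = 1.0, 0.0          # fixed reference signal; output starts at zero

for step in range(301):
    t = step * dt
    disturbance = 0.5 if t >= 1.0 else 0.0                 # step disturbance at t = 1 s
    perception  = output + disturbance                     # environmental feedback path
    error       = reference - perception
    output     += (gain * error - output) * slowing * dt   # leaky-integrator output
    if step % 100 == 0:
        print(f"t={t:4.2f}s  perception={perception:5.3f}  output={output:6.3f}")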

Yes. In that way, PCT adheres to the methodology of the hard sciences. PCT is
basically a model of an organization which, if it actually existed inside an
organism, would have to behave in a specific way by its own rules. That way is
very close to the way the organism is actually observed to behave. Physics is
organized around many models of unobservable entities like electrons and
magnetic fields which, if they really existed, would lead strictly from the
organization of the model to simulations of the behavior of matter that are
just like the observations we actually obtain.

What we find in physical investigations is that the immediate appearances of
behavior in material objects are often misleading, in that naive observation
gives results that conflict with other observations and generalizations. Here’s
an example. Boil an egg for 20 minutes. Dump the hot water out of the pan and
run cold water in, allowing 15 seconds for the egg to be cool enough to hold.
Take the egg out of the cold water and hold it in your hand. You will feel the
cold egg gradually warming up, first getting as warm as your hand and then
getting hotter and hotter until you finally have to put it down somewhere to keep
from being burned. It seems that you have just disproven the Second Law of
Thermodynamics, which requires that heat always pass from the warmer body to
the colder body, so the colder body can never become warmer than the warmer
body with which it is in contact. The egg experiment demonstrates that simply
observing the exterior of an object does not give you all the relevant facts
you need to reach a correct conclusion or to predict correctly. Inside that
smooth opaque eggshell there is a store of heat energy, in the central part of
the egg which is still nearly at the boiling temperature of water. You have
cooled the outer few millimeters of the egg, but the hot interior is warming
that outer region, and your hand. So what you deduce theoretically about what
is happening inside the egg helps you see that there is really no conflict with
the basic laws of thermodynamics.
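
For what it is worth, the egg story can be captured in a two-compartment toy
model: a hot core feeding heat into a quenched shell, which in turn exchanges
heat with the hand. The Python sketch below uses made-up temperatures and
coupling constants; it only illustrates the qualitative point that the shell,
warmed from inside faster than it loses heat to the hand, climbs well above
hand temperature with every heat flow still running from warmer to colder.

# All constants are invented; this is a qualitative illustration only.
core, shell, hand = 95.0, 20.0, 34.0   # degrees C just after the cold-water rinse
k_in, k_out, dt = 0.08, 0.03, 1.0      # couplings (core-shell, shell-hand), per second

for sec in range(181):
    if sec % 60 == 0:
        print(f"t = {sec:3d} s   shell = {shell:5.1f} C   core = {core:5.1f} C")
    flow_in  = k_in * (core - shell)    # heat flow from the hot core into the shell
    flow_out = k_out * (shell - hand)   # shell-to-hand flow (negative while shell is cooler)
    core  -= 0.2 * flow_in * dt         # the core cools slowly (larger heat capacity)
    shell += (flow_in - flow_out) * dt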

The stories that behaviorists tell about behavior are carefully adjusted to fit
all the facts that can be determined from observing inputs to and outputs from
organisms. As you have noticed, PCT uses the same facts but tells a different
story, and sometimes the stories do not agree. This is generally the case: more
than one story can be concocted to fit any particular set of observations. The
basic question is how we can pick the best story to believe.

The first step, of course, is simply to ask how well the stories actually fit
the facts. If one story allows us to predict correctly 50% of the time while
another permits us to predict correctly 99% of the time, there is no choice to
be made: it’s obvious which story we should believe. We don’t even need to ask
whether there are any aspects of the rejected story that we ought to keep. Why
bother, when so many other stories predict with the same or greater accuracy?
And all just to gain some small fraction of one percent in predictive accuracy?

The second step:

Once you have a valid model of the
system’s organization, you are in a
position to predict and understand what will happen under various
conditions: when disturbances push the system beyond its design limits, when
feedback is lost, or when particular components break down. Going back to
the car example, given a purely functional approach, when you turn the
ignition key and nothing happens, you are completely mystified. But if you
know the design of the system, you are in a position to perform appropriate
tests that will identify the fault (“Ah, the battery is dead!”).
Furthermore, once you know how the system is structured, you can understand
what functional relationships it will produce. Knowing the functional
relationships that a system produces places constraints on the nature of the
system that produces those relationships, but further work is needed to
accurately characterize that system. This alone should be reason enough to
prefer strongly the mechanistic (PCT) approach over the purely functional
(behavioral) approach.

You have supplied the second step for me. Once two models have been shown to
predict with somewhere near the same accuracy, we then have to start examining
the premises that make the models different. We have to try to verify those
premises. Suppose we say that reinforcement theory predicts somewhere in the
same ballpark of accuracy as reorganization theory. Now we have to test the
premises by making more detailed predictions. If reinforcement theory is
correct, then giving more reinforcement should work better than giving less.
More reinforcement should increase the rate or probability of occurrence of the
same behavior that produced it. Well, does it? The answer is no, it doesn’t. If
a little salt makes soup more palatable, as shown by what happens when it’s set
before a hungry person, what happens when we add ten times that amount of salt
to the soup? Does the person show an increased eagerness to finish the bowl of
soup, given the same amount of soup deprivation? Obviously not. Yuk.
In fact we will find that no kind of reinforcement, given in huge amounts, will
be reinforcing; you would have to force-feed an animal that has just eaten a
normal meal to get it to eat another one immediately after. The animal
(or any organism) becomes “satiated.” If you have a fine ear for
theory, you will realize that postulating satiation is a contradiction of the
basic theory of reinforcement. Satiation says that increasing reinforcement can
decrease behavior. Sure, the conditions have changed but so what? What
kind of basic theory is it that has to state “This theory is true except
when it’s exactly wrong?”
Reinforcement theory is simply a misunderstanding of what is most probably
happening, which is internally-directed reorganization.

OK, having finished all that, I realize that your thoughts about behaviorism
are still ambiguous for me. When you said “The usual goal of practitioners
is to bring about some specific kind of behavioral change, and these techniques
of the behaviorist promise to do just that,” I couldn’t tell whether you
meant these techniques were actually promising approaches – likely to succeed
– or only meant that behaviorists promise just such results whether they
achieve them or don’t. When you started the bit about mechanistic and
functional theories, I thought you were going to argue for functionalism (as
you have seemed to do in the past). But you surprised me by concluding that the
mechanistic approach of PCT was preferable. I have no doubt whatsoever about
your grasp of PCT being as good as mine (by which ambiguous statement I mean to
say it is just as good as mine). But it still seems to me that you think of
yourself as a behaviorist, a label that to me means someone who tries to
understand organisms by observing their behavior and not trying to model their
insides. So you can see that I am still in confusion about your stance in these
matters.

Because the Skinnerian approach
is inherently single-subject, its techniques apply directly to the treatment of
individual clients by behavior analysts. It’s not a “one size fits
all” approach; in fact part of behavior analysis involves determining the
sources of reinforcement, punishment, discriminative control, etc. that may be
relevant to the behavioral problem in question. This is not unlike applying the
test for the controlled variable in a PCT analysis. A program for behavior
change is then developed based on this information, and data are collected
during implementation of the program to assess its effectiveness. For some
kinds of problems, the techniques employed often prove highly effective. So, to
the extent that they have been demonstrated to work, they are “promising.”

And I wonder if you are, too. Every now and then you have to work on yet
another edition of your methods book. I haven’t read it, though I should. Since
the original edition was written before you got into PCT, I can’t help
wondering whether the sighs and groans and putting-offs, as well as other
symptoms that I witness when the time rolls around, could perhaps signify some
degree of conflict about this task. Do you still believe in the usefulness of
everything you have written in that book? Are one or two of those little demons
whispering alarming things into your inner ear? Is your support of certain
behavioristic notions, or the whole picture, still as serene and untroubled as
ever?

If I am subject to “little
demons,” they are the nagging type that comes from being aware of reproducible
phenomena from behavioral research that do not seem (to me at least) to be
handled well, if at all, by PCT. It’s not that I see PCT as incorrect but
rather as incomplete. Until I can find a satisfactory explanation for these
phenomena within the PCT framework, the little demons will continue to whisper
in my ear. I hope you will construe what I have stated above and in the
messages preceding this one as a statement of the problem to be addressed by
PCT (as I see it) and not as an endorsement of the behaviorist interpretation
of the phenomena in question, but I worry that all I have succeeded in doing
is raising questions in your mind about my commitment to PCT.

Bruce A.