[From Bruce Abbott (970910.1525 EST)]
Bill Powers (970910.0138 MDT) --
Bruce Abbott (970909.1315 EST)
In those cases, so was the appearance that behavior rate was related to
reinforcement rate an artifact.
The same experiment showed that the rate at which the rats responded on the
lever was low during initial training (before they had learned the
relationship between lever pressing and pellet delivery) and high
thereafter. Although we didn't do it, we easily could have shown that the
observed increase depended materially on the fact that pressing the lever
causes food pellet delivery. An increase in response rate that results from
making an event contingent on the response is called reinforcement; we
certainly observed that result.
Allow me to offer a different description.
What you offer is not a "different" description in the sense of
contradicting the one I offered, but a more detailed description, consistent
with mine but covering a longer span of time. Comments on specific parts
follow.
Before the rats were able efficiently to generate the action that would
produce the food, they produced the relevant action only in passing.
"Only in passing" is an inference, not a description. What is observed is
that the _act_ that would produce the food (pressing the lever down to the
point of switch-closure) occurs very infrequently and by inconsistent means.
Then, as their
behavior patterns changed, their actions came to produce the food more and
more often, until the actions were producing food at a limiting rate.
Yes.
From
then on, the rate of food delivery was maintained by the maximum rate of
action that the rats could sustain.
Yes. I suggested that rate was limited in exactly this way a bit farther on
in my post. But note that this is theory again; you and I are _inferring_
that the rate of lever-pressing is limited by the rat's capabilities. We
did not observe that this is so.
Your description matches mine: behavior (lever-pressing) that was observed
infrequently at the start of the session is now occurring frequently. We
can demonstrate that the contingency between lever-pressing and food
delivery is crucial to this result, everything else equal. If we break the
contingency, the frequency with which the rat presses the lever soon
declines. If we then reestablish the contingency, lever-pressing returns to
its previously observed rate.
When much greater amounts of food are produced by a given behavior rate,
the rats do not change their behavior rate appreciably.
What do you mean, "greater amounts of food"? Don't you mean, "a greater
rate of food delivery"? A "greater amount" could mean a larger pellet.
They behave in a
stereotyped way, producing and eating food as quickly as possible until a
certain amount of food has been ingested, producing shorter and shorter
bursts of behavior as the normal meal size is approached and then doing
other things for an hour or more. Then another session of eating and
ingesting takes place (if possible) until a similar meal size is generated,
and so on during a normal day, maintaining the amount of daily intake that
the rats require, or an individual rat requires.
In our weight-control study, we discovered conditions under which the rat
could easily have consumed enough to maintain its body weight, but failed to
do so, until body weight fell to such an extent that I feared the rat would
starve to death, at which point I changed the conditions to those that would
allow the rat's weight to recover. Under the normal conditions the rat
encounters, however, it does behave as you describe, over the long run.
That is, the rat takes in sufficient food to maintain a steady or increasing
body weight.
What we observe is the behavior maintaining the food delivery rate to the
extent physically possible, in the dictionary sense of the transitive verb,
"to maintain." The subject is the rat, the object is the rate of food
delivery, and the means is the production of behavior in such a way as to
produce the food.
Yes. (And I have said nothing previously that would contradict this
description.)
The role of the experimenter is limited to setting up a means by which the
rat's behavior, should the particular act occur, can generate food which
the rat can eat. Since the experimenter is the more powerful of the two
participants in the situation, the experimenter can establish any relation
between any action and the appearance of food, and prevent any other action
from producing food.
This is true in the experiment (after all, an experiment is an arrangement
designed to exert strict control over the relevant variables for the purpose
of determining causation). It is not generally true in natural
circumstances. The fish changed their behavior, not because I am more
powerful than they, but because they learned that they could gain immediate
access to a new source of food by so doing.
The rat's requirements for food determine that the rat
will search for and find the key behavior or something sufficiently like it
to generate food, provided that the requirement is not beyond the rat's
ability to discover it, and that the action needed is not beyond the rat's
ability to sustain it.
To paraphrase: The rat's requirements for food determine . . . that it will
acquire behavior whose consequence is the generation of food. These are not
observations; they are either logical truisms (the requirements must fall
within the capabilities of the animal) or matters of theory (requirements
determine...).
If the experimenter changes the contingency, the expected effect with no
change in the rat's pattern of behavior would be an increase or a decrease
in the rate of food delivery.
Yes, depending on how the experimenter changed the contingency. For
example, increasing the ratio on a fixed ratio schedule would decrease the
rate of food delivery if response rate remained the same.
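To make the arithmetic of that example concrete, here is a minimal sketch
(the numbers are invented for illustration, not taken from our data): on a
fixed-ratio (FR) schedule, one pellet is delivered per `ratio` responses, so
at a constant response rate the food delivery rate falls as the ratio
requirement rises.

```python
# Hypothetical illustration of FR-schedule arithmetic; the response rate
# and ratio values below are made up, not observed values.

def food_rate(response_rate: float, ratio: int) -> float:
    """Pellets per minute, given responses per minute on an FR schedule."""
    return response_rate / ratio

response_rate = 300.0  # responses per minute, held constant
for ratio in (2, 4, 8, 16):
    print(f"FR {ratio:2d}: {food_rate(response_rate, ratio):6.2f} pellets/min")
```

At a fixed 300 responses per minute, moving from FR 2 to FR 16 cuts the
delivery rate from 150 to 18.75 pellets per minute.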
If physically possible, the rat's pattern of
behavior always changes so as to restore the rate of food delivery to its
original level, or as close to it as possible.
Now we again have entered the realm of theory, control theory to be
specific. You expect that increasing the ratio requirement will lead to an
increased rate of responding, so as to compensate (at least partly) for the
decline in rate of food delivery that would occur if response rate did not
change. This of course is not what we observed.
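The control-theoretic expectation I just described can be sketched as a toy
simulation (an integrating controller; the reference rate, gain, and
response-rate cap below are illustrative assumptions, not fitted values):
the controller raises response rate until the food delivery rate matches a
reference, or until the rat's maximum rate is reached.

```python
# Toy sketch of the PCT prediction: an integrating controller adjusts
# response rate to hold food delivery rate at a reference level.
# All parameter values are invented for illustration.

def settle(ratio: int, reference: float = 10.0, max_rate: float = 300.0,
           gain: float = 5.0, steps: int = 2000) -> float:
    """Return the settled response rate (responses/min) on FR `ratio`."""
    response_rate = 0.0
    for _ in range(steps):
        food_rate = response_rate / ratio    # environment: FR feedback function
        error = reference - food_rate        # reference minus perceived input
        response_rate += gain * error * 0.01 # slow integration of error
        response_rate = min(max(response_rate, 0.0), max_rate)
    return response_rate
```

With these numbers, on FR 4 the controller settles near 40 responses per
minute (restoring the reference delivery rate), while on FR 40 the required
rate exceeds the cap and the response rate pins at its maximum -- which is
the "exception" case Bill describes below.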
The only exception is when
the rat is already producing as much behavior of the right kind as it can,
and the change of contingency reduces the rate of food delivery. Then we
observe no counteracting change in the rat's behavior, because no more
behavior can be generated.
And here is the theoretical explanation. If reinforcement theory had
required the invocation of a limit on response rate to explain the
observations, and PCT did not, I am quite sure you would now be talking
about this rate-limit idea being just what is needed to save the theory.
What we have here is an animal presented with a problem which it proceeds
to solve to the best of its ability, unaided except for any hints provided
by the experimenter by way of shaping. The experimenter sets the problem by
creating a contingency. The rat solves it by varying its patterns of action
until it is producing as near to the amount of food it needs as it can
under the given circumstances -- neither more nor less.
I agree.
This, I submit, is a fairly complete and accurate account of what is
observed, at least as complete and accurate as any behaviorist account of
similar length and detail. All references to agency and direction of
causation are expressed correctly in terms of conventional interpretations
and direct observations; the behavior is produced by the rat, the
requirements for food are the rat's, and the rate of food delivery is
completely dependent on the rat's actions. The experimenter's action
consists entirely of setting up the passive environmental link between some
action and food delivery rate, and this link runs in one direction only,
from action to food delivery.
I submit that it is part observation and part theory. If your goal was to
provide a theory-free description of the facts, you failed.
I suggest that the reason why the rate of pellet delivery turned out not to
vary across the different values of the ratio schedule we tested is that the
rats were not controlling for rate of pellet delivery, but rather for access
to food pellets in the cup as soon as the rat desired another pellet for
consumption. Because completing the ratio imposed a delay (owing to the
fact that lever presses require time to execute), the best that the rat
could do was respond as rapidly as possible at all ratio values.
This implies that under other conditions, the rats would press at lower
than maximum rates. I say this is false: the rats are incapable of
producing a systematically variable rhythm of lever pressing. All they can
do is produce a rapid repetitive action on the lever, or cease pressing
altogether. They are either producing this stereotyped action, or they are
doing something else.
You are being misled by the fact that all our observations were conducted
using ratio schedules. On these schedules, behavior tends to alternate
between responding at a high, relatively constant rate (while completing the
ratio requirement) and pausing (after delivery of the food pellet). On
other schedules the rat (or pigeon) is definitely able to vary its rate of
responding in a fairly smooth way. I have observed rats on a Sidman
avoidance schedule, for example, sitting at the lever and pressing it rather
steadily at a rate of about one press per second -- about 1/5th the rate we
observed on ratio schedules of pellet delivery.
The only reason that "rate of pressing" caught on as
a measure of behavior was that the measure was obtained by dividing total
presses by session duration. As a result, the rat's changing the _kind_ or
_location_ of behavior was indistinguishable from its changing the _rate_
of behavior of a single kind at a single location. In fact, the latter,
which is assumed to occur, does not occur. So it does not need an explanation.
Both sorts of change have been observed, reported and, incidentally,
explained (descriptively) in terms of reinforcement theory. Both require an
explanation.
Reinforcement theory would account for the same constancy in rate by noting
that, on ratio schedules, higher rates of responding yield shorter times to
reinforcement, which in turn would be expected to generate yet higher rates
of responding. This is a positive feedback loop that pushes response rate
up to the maximum, or at least to an equilibrium value established by the
conflicting effects of reinforcement and response effort. (Effort increases
with rate; a response rate is reached where the decrease in delay to
reinforcement is balanced by the increase in effort.)
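That equilibrium argument can be sketched numerically (a toy model; the
effort coefficient, step size, and rate cap are invented assumptions, not
measured quantities): an upward "push" that strengthens as delay to
reinforcement shortens, opposed by an effort cost that grows with rate.

```python
# Toy sketch of the reinforcement-theory account: positive feedback from
# shortened delay to reinforcement, opposed by rising effort cost.
# All coefficients are invented for illustration.

def equilibrium(ratio: int, max_rate: float = 300.0,
                effort_coef: float = 0.004, steps: int = 5000) -> float:
    """Return the equilibrium response rate (responses/min) on FR `ratio`."""
    r = 10.0
    for _ in range(steps):
        delay = ratio / r            # time to reinforcement at rate r
        push = 1.0 / (1.0 + delay)   # reinforcing effect: stronger at short delays
        pull = effort_coef * r       # effort cost grows with rate
        r += 5.0 * (push - pull)
        r = min(max(r, 1.0), max_rate)
    return r
```

With these numbers the equilibrium sits near 246 responses per minute on
FR 4 and near 234 on FR 16 -- nearly constant across modest ratios, which is
the pattern of near-constant response rates the account is meant to cover.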
There is no need for this explanation, because rates of responding do not
actually change. If they did change, the explanation might be appropriate,
but as they do not, the explanation is empty of meaning. It explains
something that does not happen.
You lost me there. I explain why response rates are constant across ratios,
and you tell me that there is no need for this explanation, because response
rates are constant across ratios.
I should also point out that whatever we
observe about behavior, it is not "responses." To observe a response one
must also observe the stimulus; otherwise all that one is observing is an
action.
The term "responses" is a holdover from the old S-R days in psychology. As
used in operant studies, it does not imply a prior stimulus. What we
observe are actions, but the ones of interest in the analysis are those that
have a common consequence in the environment; they are what Skinner called
"operants."
What I am saying here, Bruce, is that the behaviorist account, which you
present as a simple factual description of observations, is nothing of the
sort. It is a biased account slanted toward encouraging the listener to
conclude that the environment is controlling behavior -- that behavior is
controlled by its consequences, which is exactly the opposite of the truth.
If that is what you are saying, you have failed to make your case. In fact,
your own description failed as a simple factual account of the observations.
Also, I have said nothing about behavior being controlled by its
consequences. I have simply noted that when certain events are made
contingent on certain behaviors, those behaviors increase in frequency.
When the contingency is broken, those behaviors decrease in frequency,
returning more or less to their original levels. When asked "why did the
behavior increase in frequency?", I might answer, "because it was followed
by the contingent event." That is a purely descriptive explanation, which
makes no appeal to mechanism of any sort. Nowhere does this explanation
assert that the environment is controlling behavior -- not in the sense in
which the term "control" is used in PCT. What it asserts is that behavior
changes as a function of establishing (or breaking) certain contingencies
between that behavior and its consequences, and that much is a simple
description of the observations.
All the strange, contorted, backward reasoning in behaviorist accounts, all
the special terminology, all the special definitions of auxiliary terms
that ignore the primary usages of the words, all the insistence on a
particular set of terms in which to express observations -- all this points
in only one direction. It points to a concerted attempt to put the
environment into the causal role and remove purposiveness from organisms.
You haven't demonstrated much of this, Bill. You haven't demonstrated that
behaviorist accounts use strange, contorted, or backward reasoning. You
haven't demonstrated that the terms used ignore the primary usages of the
words. You haven't demonstrated that anything points to a concerted attempt
to do anything.
I don't see how you can deny this: this has been the avowed purpose of
behaviorists like Skinner and his followers from the very beginning.
Skinner said you must always choose the way of speaking that attributes
the initiation of behavior to the environment; that science itself demands
this overt bias; that nobody who speaks otherwise can really be a
scientist. And his words have been echoed again and again; they are part of
the behaviorist credo. The term "radical behaviorism" is well chosen; this
movement is an extremist one based not on science but on ideology.
We seem to have departed entirely from the issues under consideration,
lapsing instead into an attack on B. F. Skinner in particular and
behaviorism in general. And by the way, all proponents of a particular
outlook or theory are ideologues, you included. Your dichotomy between
science and ideology is a false one: it is quite possible to have both
simultaneously.
One last point. Even though I am a control theorist and think that PCT is
basically correct, and wish to persuade others to my point of view, I can
still describe the facts on which PCT is based without using a single
special term. I can do it without mentioning control, controlled variables,
reference levels or signals, errors or error signals, disturbances (in any
special sense), input functions, comparator functions, or output functions.
I can describe these phenomena in such a way that nobody who doesn't know
me (or control theory) would ever guess what explanation I might offer.
Perhaps so, but in this post you failed to demonstrate that you can stick
purely to observations. Theory and inference kept creeping in.
Can you do the same for EAB? I very much doubt it. The language of EAB uses
special terms and auxiliary words each one of which is carefully defined
(mostly in unusual ways) to support the theoretical position that is
asserted with every breath. Theory and observation are so intertwined that
there is no way to separate them; take away the special terms, and the
observations can't even be described. If you're not allowed to use the word
"reinforcement," what do you say instead? Any other terms you might use
would lay out for all to see what the theoretical bias is. Try it and see.
I have shown that the terms employed in EAB are not used in unusual ways,
but you have chosen to try to make that case anyway, because, I suspect, it
raises a red herring that you can employ to divert the discussion from the
real issues. The problem is not language. By claiming that my use of EAB
language makes communication impossible (or whatever it is you are trying to
have us believe), you simply stop the discussion cold without having to come
to grips with the substantive issues I have raised.
Can I provide a theory-neutral description of the observations? Yes I can.
It was never my intention to do so, because I was attempting to convey the
relationships that exist between certain EAB terms and control system
operation. I'll take up your challenge in my next post. (I have to get
prepared for a lecture at the moment.) Perhaps _then_ we can return to such
substantive questions as what, in control theory, constitutes the event
referred to as the "reinforcer" in EAB. That question, which is the
supposed subject of this series of posts, seems to have been entirely displaced.
Regards,
Bruce