# Operant Model

[From Bruce Abbott (950704.1235 EST)]

I'm anxious to reply to several of the recent posts, especially those from
Bill P., Bill L., and Dag F., but unfortunately my time is quite limited at
the moment (I've got a summer school test to write, book galley proofs to
go over carefully, and a test item file to update that I promised my
publisher would be sent on Wednesday [probably won't quite make it, either],
and in my SPARE time I'm trying to reacquaint myself with recent theories
intended to deal in one way or another with ratio data and keep up with
events on CSG-L). So this must necessarily be brief (sorry!).

Bill Powers (950704.0100 MDT) --

Re: Operant Conditioning Model

> Does this bear any resemblance to the model you're working on?

Yes and no--but I haven't gotten very far yet in that development. I do
think you've made a good start, but I'm concerned about some aspects.

> These equations apply to the right-hand side of the Motheral curve. As
> we move left on that curve, the observed behavior rate begins to fall
> below the straight line predicted by the above equations, with the curve
> turning downward. One possible explanation is that as the average
> motivation ma increases beyond some critical value, the animal begins to
> spend more time on other behaviors, thus reducing the value of kt.

I don't see why average motivation would be expected to _increase_ with
ratio size; in fact I would expect the opposite, due to the increasing
response cost. Furthermore, one would expect a shift to other behaviors as
the motivation for lever-pressing _decreased_, not increased. The animal
would be expected to allocate more of its time to the more highly reinforced
activity. Rather than my presenting another verbal description, perhaps it
would be best to wait until I can develop the mathematical model that will
express the sort of relationships I have in mind.
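(Bill's equations are not reproduced in this post. For readers following along, here is a minimal stand-in sketch of the right-limb relationship he refers to, assuming a simple proportional control loop b = G·(R* − r) with fixed-ratio feedback r = b/m; the gain G, reference R*, and the numbers are illustrative assumptions, not values from either of our models.)

```python
# Sketch of a control-system account of fixed-ratio responding.
# Assumptions (illustrative, not from this post): behavior rate b is
# driven by the error between a reference reinforcement rate R_STAR and
# the obtained rate r; on an FR-m schedule the feedback function is r = b/m.

G = 50.0       # loop gain (illustrative)
R_STAR = 2.0   # reference reinforcement rate, e.g. rewards/min (illustrative)

def steady_state(m):
    """Solve b = G*(R_STAR - b/m) for b; return (b, r) with r = b/m."""
    b = G * R_STAR * m / (m + G)
    return b, b / m

for m in (2, 8, 32, 128):
    b, r = steady_state(m)
    print(f"FR-{m}: behavior {b:.1f}/min, reinforcement {r:.2f}/min")

# Across ratios the (r, b) pairs fall on the straight line b = G*(R_STAR - r):
# larger ratios yield less reinforcement and *more* behavior -- the
# negatively sloped right limb of the Motheral curve.
```

The point of the sketch is only that a single control equation generates the straight-line right limb across all ratios; the left limb is exactly what such a loop does not produce by itself, which is what the discussion below is about.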

> So, for example, by decreasing the "value" of the reinforcer (as by
> decreasing its size), the behavior can be made to rise monotonically
> with increases in reinforcement over the whole range of schedules. This,
> I propose, is how the general rule of "more reinforcement, more
> behavior" was initially established, and how, in fact, the concept of
> "reinforcement" gained credence. As long as the product kt*kv*km is kept
> low enough, this rule will apply. It is always possible, therefore, to
> set up an experiment to prove that an increment of reinforcement will
> cause an increment of behavior. All that is required is to keep the
> product kt*kv*km small enough, which can be done by manipulating kv as
> by reducing the size of the reinforcer.

I've said this before, but it bears repeating. I do not agree with this
analysis. Most operant studies using ratio schedules do not employ
conditions represented by the left side of Motheral's curve. Typically
those studies have not varied the ratio requirement (they have investigated
other parameters), so there would be no negatively sloped curve to look at
and wonder about. In fact, probably the most commonly used "ratio" schedule
is CRF (FR-1).

Overly large ratios produce "ratio strain," a breakdown of "schedule
control." Responding on the ratio becomes unstable and may cease
altogether. _This_, I believe, is the region represented by the left limb
of the curve.

I would predict that the position of the peak would shift to the left if
reward size is reduced. This prediction rests on my assumption that ratio
strain marks the point at which the reward per unit of effort becomes too
small to maintain behavior on the schedule. Reward per unit of effort
declines with increasing ratio size. Reducing the size of the reward would
reduce reward per unit of effort at any given ratio and thus bring on ratio
strain at a lower ratio.
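The arithmetic behind this prediction can be sketched with made-up numbers (the reward sizes and the strain threshold below are hypothetical assumptions chosen only to illustrate the direction of the effect):

```python
# Sketch of the reward-per-effort argument, with hypothetical numbers.
# On an FR schedule, one reward is earned per `ratio` responses, so
# reward per unit of effort is roughly reward_size / ratio.

def reward_per_effort(reward_size, ratio):
    """Reward earned per response on an FR schedule."""
    return reward_size / ratio

# Hypothetical threshold below which responding breaks down ("ratio strain").
STRAIN_THRESHOLD = 0.05

def strain_ratio(reward_size):
    """Largest FR value sustained before strain, under the assumptions above."""
    # reward_size / ratio >= STRAIN_THRESHOLD  =>  ratio <= reward_size / threshold
    return reward_size / STRAIN_THRESHOLD

large, small = 1.0, 0.5   # arbitrary reward sizes: original vs. reduced
print(strain_ratio(large))   # larger reward sustains the higher ratio
print(strain_ratio(small))   # halving the reward halves the sustainable ratio
```

Whatever the real threshold turns out to be, halving the reward halves the ratio at which reward-per-effort crosses it, which is the leftward shift of the strain region (and, on my account, of the peak) predicted above.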

Of course, you can stick to your model (and predictions) if you prefer it to
whatever I come up with. You're predicting that increasing the size of food
reward will decrease response rate on a given ratio schedule and I'm
predicting the opposite: that at least provides a clear test of alternatives.

> All of this, as you know, is pure PCT. Several phenomena that are new
> may be explained by this model, particularly the shift of the peak of
> the Motheral curve with changes in certain parameters. The apparent
> effect of reinforcement on behavior is explained in terms of the model,
> with the "standard" effect appearing only over a certain range of
> parameters. The reinforcement variable does not play any special role in
> behavior other than the apparent one; it does no "maintaining" of
> behavior, although the observed relationships can be interpreted in that
> way. Causation is completely circular, with the only independent
> variables being rs and rn: the satiation level of reinforcement and the
> noncontingent reinforcers.

This is certainly what we're after, and I agree completely with your
approach if not with all the specifics of the model as you have currently
developed it. The model is mathematically precise and not only offers
explanations for known effects, but generates testable predictions. I'll
try to get my proposal put into model form over the weekend, if not sooner,
so you have a better idea of my thinking.

Meanwhile, I've been re-reading some theoretical articles that appeared in
JEAB about two years ago (still pretty current) in a "special issue on the
nature of reinforcement." If you can get your hands on a copy, I think
you'll find it illuminating as to the current thinking about these issues
we've been discussing of late regarding reinforcement theory. The full
reference is _Journal of the Experimental Analysis of Behavior_, Volume 60,
July, 1993. When I get a bit more time, I plan to discuss some of these
views here on CSG-L. Here's a teaser:

> Textbook accounts of reinforcement suggest that many psychologists have
> yet to absorb the still-valid lesson of an article published nearly 20
> years ago. This lesson is that reinforcement of an instrumental
> response, contrary to the law of effect and its many relatives, may not
> result from a special kind of response consequence called a _reinforcer_
> but from a special kind of schedule called a _response-deprivation_
> schedule (Timberlake & Allison, 1974).

Allison, James (1993). Response deprivation, reinforcement, and
economics. _JEAB_, _60_, 129-140.

Jim, you've got it half right. Just substitute PCT for this silly notion of
"response deprivation." (;->

I'll also try to offer some suggestions soon, as Bill Leach already has, on
your proposed PCT-EAB dictionary.

Things are starting to cook. Let's see, better check the directions. Get
large pot. Check! Add 100 gallons water. Check! Bring water to a boil.
Add reinforcement theorists, sprinkle in some data, and stir....

Regards,

Bruce