ratio model; nothing but, but better

[From Bill Powers (950703.1315 MDT)]

Bruce Abbott (950703.1025 EST) --

     ... the prototype of the experimental method: vary one thing
     (independent variable), hold everything else constant, observe any
     effect (dependent variable).

     The limitation of this approach is well known (even in psychology):
     the relationship between X and Y may change depending on the value
     of variable Z. For example, Z may turn out to be a parameter whose
     value acts as a multiplier (e.g. gain) between the values of X and
     Y. Such effects are called interactions.

One limitation that is not well known: in holding some variable Z
constant, you may be interrupting a feedback path by which Y normally
affects X. Even more important, if you _arbitrarily vary_ X in order to
determine its effect on Y, you are necessarily interrupting any feedback
effects from Y to X. You are making an independent variable of what is
normally a dependent variable.
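
To make this concrete, here is a toy simulation (my own illustration; the environment function, gains, and numbers are all arbitrary assumptions, not anything from the literature). Open-loop, setting X and holding everything else constant yields a clean X-to-Y law; with the organism's loop intact, X becomes a dependent variable that mirrors the disturbance, and Y hardly moves:

```python
import math

def environment(x, d):
    # Assumed environment link: Y depends on X plus a disturbance D.
    return 2.0 * x + d

# Open loop ("vary X, hold everything else constant"):
# Y follows X exactly -- a clean-looking X -> Y law.
xs_open = [0.1 * i for i in range(100)]
ys_open = [environment(x, 0.0) for x in xs_open]

# Closed loop: an integrating controller adjusts X to keep Y near a
# reference while the disturbance D drifts slowly.
def closed_loop(disturbances, ref=5.0, gain=5.0, dt=0.05):
    x, xs, ys = 0.0, [], []
    for d in disturbances:
        y = environment(x, d)
        x += gain * (ref - y) * dt    # X is now a *dependent* variable
        xs.append(x)
        ys.append(y)
    return xs, ys

ds = [1.5 + 1.5 * math.sin(0.01 * n) for n in range(2000)]
xs_c, ys_c = closed_loop(ds)

# After a short transient, Y barely varies even though X varies widely:
y_spread = max(ys_c[100:]) - min(ys_c[100:])
x_spread = max(xs_c[100:]) - min(xs_c[100:])
print(round(y_spread, 3), round(x_spread, 3))
```

An experimenter who manipulated X directly would report a strong effect on Y; an observer of the intact loop would find X varying to cancel the disturbance while Y stays essentially constant.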

I can see a hint of something like this in the concept of the "value" of
a reinforcer. The basic idea as you explained it is that if the value of
a reinforcer is increased (for example, using size or taste), then the
same reinforcement rate will maintain a higher level of behavior. So if
you control the situation to maintain about the same level of
reinforcement rate when the value of the reinforcer is increased, you
would expect to see, and probably would see, an increase in the level of
behavior.

Suppose, however, that we are in a control situation where the behavior
is producing enough reinforcer to maintain some internal variable close
to a reference level. The value of the reinforcer is then the amount of
effect each reinforcer has on the state of the internal variable. When
you increase the value of the reinforcer _without_ doing something to
maintain a constant rate of reinforcement, this will not cause the
internal variable to become proportionally greater; the behavior will
simply decrease, reducing the reinforcement rate and maintaining the
internal variable at about the same level. The system with the feedback
loop closed will behave very differently from the way it will behave
when the reinforcement rate is externally kept at an arbitrary level.

I don't believe that this problem is recognized in psychology. I don't
believe it is recognized that you can work your way systematically along
a cause-effect chain, varying one thing at a time and measuring the
result, and get a totally wrong impression of how the system works when
left to itself.

In the situations represented by the Motherall data, I would predict
that doubling the "value" of the obtained reinforcers would result in a
reduction of behavior and a lowering of the reinforcement rate -- as
long as you were on the right-hand side of the curve.
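
This prediction can be sketched with a minimal control model (an illustrative sketch under my own assumptions; the parameter names and numbers are arbitrary, not fitted to any data): behavior rate is driven by the error in an internal variable, each reinforcer adds "value" to that variable, the variable decays, and the ratio schedule converts behavior rate into reinforcement rate.

```python
# Illustrative control model on a ratio schedule: R = B / m, and each
# reinforcer adds `value` to a decaying internal variable.
def steady_state(value, ratio, reference=100.0, gain=50.0, decay=0.1,
                 dt=0.01, steps=20000):
    """Run to steady state; return (behavior rate, reinf rate, internal level)."""
    internal, behavior, reinf = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = reference - internal
        behavior = max(gain * error, 0.0)   # behavior rate driven by error
        reinf = behavior / ratio            # ratio schedule: R = B / m
        internal += (value * reinf - decay * internal) * dt
    return behavior, reinf, internal

b1, r1, n1 = steady_state(value=1.0, ratio=10)
b2, r2, n2 = steady_state(value=2.0, ratio=10)   # double the "value"
print(round(b1, 1), round(r1, 2), round(n1, 1))
print(round(b2, 1), round(r2, 2), round(n2, 1))
```

In this sketch, doubling the reinforcer's value roughly halves both the behavior rate and the reinforcement rate, while the internal variable stays close to its reference level -- the opposite of "more valuable reinforcer, more behavior."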

     You seem to be saying that in your view, psychology experiments
     always manipulate only one variable at a time. I would agree that
     parametric studies are not as common as they should be, but there
     are certainly plenty of examples of them.

I am aware of multivariate analyses (though I wouldn't know how to do
one), and of interactions they can reveal. But I know of none that deal
with closed loops. All that I have ever heard of start with a set of
manipulated variables, and produce equations that predict a set of
output variables, with some interaction terms.

The problem with multiparameter studies is exactly what you describe.
You get less and less data about more and more variables. What you gain
in scope you lose in resolution.

A study with rats that takes a year or two occupies a good fraction of
their normal lifetimes, doesn't it? You start out measuring young rats
and end up with old experienced rats. So this means even more runs to
randomize the order of everything.


     When you said that "behavior reaches an upper limit" I was
     picturing a curve in which behavior rate increases with the size of
     the food pellet. The upper limit of this curve would be its
     asymptote (the curve would be increasing but negatively
     accelerated). This upper limit has nothing to do with satiation.
     What you are now describing is a lowering of the upper limit
     (reduced asymptote) due to a lowering of deprivation.

What I'm suggesting is that the leveling out of behavior rate may have
been mistaken for an asymptote, whereas it actually represented the peak
of the Motherall curves being approached from the left. If it is
_assumed_ that this is an asymptote, there would be no reason to explore
still higher "values" of the reinforcer (or lower ratio requirements),
and thus it would not be discovered that the curve actually turns
downward and eventually approaches zero behavior. While the apparent
asymptote does not represent actual satiation, the approach to asymptote
might be taken as a sign that the conditions are coming too close to the
point where satiation effects begin to be seen. Not wishing to
contaminate the data with such effects, the experimenter might well back
off on the amount or size of reinforcement, or make the schedule enough
more demanding to eliminate the droop of the curve. This would assure
that all data would be taken to the left of the peak of the Motherall
curve. If the reward is small enough, even an FR-1 schedule could put
you to the left of the peak.

I think we would find that in many behavioral studies, the entire range
of conditions explored lies to the left of the peak on this curve. I
can't think of any other reason for concluding that increased
reinforcement (times motivation and value) leads to increased behavior.
What you call "ratio strain" would appear to the left of Motherall's
leftmost data point. At his (her?) leftmost point, rats were still
pressing about 1300 times per session and receiving (I'm guessing from
the curves) something like 5 to 10 dippers per session. Obviously the
relationships seen on the right are observed seldom enough for the
authors who independently observed them to remark on them in the paper
you cited.

Rick Marken (950703.0830)--

     With all due respect, I think this is the wrong approach to
     comparing reinforcement and control theories; I think it plays into
     the hands of the merchants of obfuscation.

Are you sure you want to call our old pal Bruce Abbott a "merchant of
obfuscation?" Sounds unnecessarily nasty to me.

I think Bruce is taking a step-by-step approach to laying out the
structure of reinforcement theory. This will eventually lead to a set of
proposed system equations, which we can then solve or simulate to see
what they actually predict. Once the slipperiness of words has been
eliminated, we will have a clear picture of what we are talking about,
and will be able to make a direct comparison with the PCT model. I
believe, with Bruce, that this will lead to a revolution in EAB -- a
conversion, in fact, to PCT.

     I think the only way to help Bruce out of his puzzlement -- and
     possibly convince him that PCT is not only better than
     reinforcement theory but also radically different -- is to have
     Bruce provide what I asked for in an earlier post:

     > a prediction from reinforcement theory that differs from a
     prediction of PCT.

But what if Bruce wants to do it his way instead of yours? You want to
say "Behavior is the control of perception. Period. End of discussion."
But Bruce has in mind the job of convincing a small army of EABers that
this is true, and of doing this starting from where they are, not from
where you are. I think his approach is much more likely to succeed than
yours.

     These verbal descriptions imply two VERY different models of the
     nature of living organisms. If, however, these two explanations of
     behavior are fundamentally the same (as Bruce seems to be saying)--
     so much so that they cannot be distinguished by empirical test --
     then I would find it strange that reinforcement theorists have not
     made any systematic attempts to identify controlled perceptual
     variables.

I don't recall Bruce ever saying anything like that. You are casting him
in the role of the bad guy (for some reason I don't comprehend, other
than that he's available), and then attributing to him thoughts and
words that would be appropriate for such a bad guy to think or say, if
he had actually thought or said them. Whomever you're saying these
things about is a figment of your imagination.

Of course you are perfectly right in defining the end-point. But by
demanding one-step perfection, you are very effectively making it
impossible to get there.

Bruce Abbott (950703.1145 EST) --

     Despite your impatience, I hope you're getting a good sense of the
     variables a reinforcement theorist would consider when developing a
     model for a specific circumstance, and that, from a reinforcement
     theory point of view, the ratio situation is complex.

I'm not impatient. You are dilatory. But I forgive you.

     Sam Saunders (950702:0105 EDT) hit the nail on the head when he said:
     >. . . ratio schedules appear to provide a relatively simple
     situation for PCT. For EAB, however, this is not the case. . . .
     Ratio schedules are constrained, since there is a direct
     relationship between response rate and reinforcement rate, and thus
     response rate would not be expected to be "free to vary" with
     "response strength".

I think he hit the nail on the thumb >:-(. Interval schedules have the
great advantage that they permit you to overlook feedback effects and
treat the system (however mistakenly) as if it were running open-loop
(it is not). Ratio schedules are at least closer to normal situations in
which actions have a proportional effect on controlled variables.

The main effect of an interval schedule is to put an upper limit on the
amount of reinforcement that can be obtained by a given rate of
behavior. Below that limit, the reinforcement rate is the same as the
behavior rate: it's an FR-1. If this level of reinforcement is
significantly less than what the organism would provide for itself if it
had free access to the reinforcer, the organism will simply crank up the
behavior rate to the point where cost equals benefit. Of course on a
_variable_ interval schedule, some of the time the same behavior rate
will produce a great deal of reinforcement, so it is to the rat's
advantage to maintain a high rate of behavior. However, there is nothing
basically different required of the rat under this schedule than under
any other.
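
The feedback function just described can be written down directly (a minimal sketch; the function and parameter names are my own, chosen for illustration): below the schedule's ceiling every response is reinforced, effectively FR-1, and above it the reinforcement rate is capped at one reinforcer per interval.

```python
# Feedback function for a fixed-interval schedule, as described above:
# reinforcement rate tracks behavior rate up to a ceiling of 1/interval.
def interval_feedback(behavior_rate, interval):
    """Reinforcement rate obtainable at a given behavior rate."""
    ceiling = 1.0 / interval          # at most one reinforcer per interval
    return min(behavior_rate, ceiling)

for b in (0.2, 0.5, 1.0, 5.0):
    print(b, interval_feedback(b, interval=2.0))
```

With a 2-second interval the ceiling is 0.5 reinforcers per second: below that rate the schedule behaves like FR-1, and above it extra responding buys nothing more.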

     Meanwhile, I think it's time to stop calling researchers in this
     field pseudo-scientists and the like. Within the established
     framework of assumptions regarding the nature of behavior, what
     they do makes good scientific sense: Kuhn's "pick and shovel" work
     with an established paradigm.

Me, too. It's much easier to call a faceless enemy nasty names and strut
around pounding your chest and claiming victory. When you're on the way
to the conference table, however, and the other guy shows you pictures
of his dog and confides that he's worried about how to pay the mortgage
if he doesn't get tenure, it's time to start thinking in terms of human
beings.

Best to all,

Bill P.