[From Bill Powers (981018.0922 MDT)]
Bruce Abbott (981017.2155 EST)--
Please pay careful attention to Bruce's words.
Yes, please do.
Your words seem to be susceptible to various interpretations.
What a strange thing to say, after I have proposed that availability of food
in the chamber is controlled, rather than its rate of occurrence.
Granted, although I have a hard time imagining how you measure
"availability." How did you prove that "availability", rather than, say,
amount of food in the cup, was controlled?
From this
you get that I am implying that control theory does not apply? It is, I
think, rather easy to see that this is not what I intend. (You weren't
taking your own advice to pay careful attention to my words, were you?) It
is also obvious that ruling out one CV does not rule out the possibility
that the rat's actions may be controlling some other CV. It's not the sort
of logical error I'm likely to commit.
OK, I hoped not and now you say you don't make this sort of error. I can
only accept your claim until you disprove it (it doesn't take long: see
below).
Neither of these observations has anything to do with whether PCT is the
correct theory for describing and predicting what the rats do.
True, but they have plenty to do with whether your prediction, based on a
particular control-system model of operant behavior of your devising, is
supported by the evidence. In that model, decreasing the reinforcement rate
(by lengthening the average interreinforcement interval of the VI schedule)
was supposed to produce an increased rate of lever-pressing as the rat
attempted to offset the reduced reinforcement rate. In fact the opposite
happened. The model is incorrect.
No, the opposite did NOT happen, because the (apparent and illusory)
decrease in the rate of pressing was not caused by a decrease in the rate
of reinforcement, but by time away from the lever that affected both the
apparent rate of pressing and the apparent rate of reinforcement. The
independent variable is time away from the lever; the two apparent rates
both depend on time away from the lever and are not causally related to
each other (based on _that_ measurement).
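The arithmetic behind this point can be sketched in a few lines. In this sketch (all numbers hypothetical, chosen only for illustration), the rat's local pressing rate while at the lever and the FR schedule are both held fixed; only the fraction of the session spent away from the lever varies. Both session-average "rates" then fall together, though neither causes the other:

```python
# Hypothetical numbers: fixed local pressing rate, fixed FR-20 schedule.
# Only time away from the lever varies across the four cases.
SESSION_MIN = 60.0   # session duration, minutes
LOCAL_RATE = 100.0   # presses per minute while actually at the lever
RATIO = 20           # FR-20: one food delivery per 20 presses

for away_fraction in (0.0, 0.25, 0.5, 0.75):
    at_lever_min = SESSION_MIN * (1.0 - away_fraction)
    presses = LOCAL_RATE * at_lever_min
    reinforcements = presses / RATIO
    # "Apparent" rates: session totals divided by session duration.
    apparent_press_rate = presses / SESSION_MIN
    apparent_reinf_rate = reinforcements / SESSION_MIN
    print(away_fraction, apparent_press_rate, apparent_reinf_rate)
```

As time away rises from 0 to 0.75, the apparent pressing rate falls from 100 to 25 per minute and the apparent reinforcement rate from 5 to 1.25 per minute, in perfect lockstep, while the local pressing rate never changes. The correlation between the two apparent rates is manufactured entirely by the shared time-away term.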
The problem I'm having is contained in this excerpt from your 981015:
Your recall of the data is incorrect. What the data show is that response
rate declines as the average interval length increases. That is, lower
rates of reinforcement are accompanied by lower rates of responding,
contrary to your earlier prediction, based on PCT, that the reverse would be
true.
The prediction from PCT is that a disturbance of a controlled variable and
the action that does the controlling will vary equally and oppositely. But
any application of that principle presupposes you have successfully
identified a controlled variable. PCT does not predict what variables will
prove to be controlled, so PCT is not at fault for the mistaken guess that
rate of reinforcement was a controlled variable. My assumption was at fault: that if
the rate of reinforcement varied, it was because the rate of pressing
varied as well as because of the change in ratio (this was all on FR
schedules). You then proved with data that the rate of pressing was
effectively a constant, and thus was actually independent of both rate of
reinforcement and schedule.
I decided, and said a year and a half ago, that the particular model I came
up with couldn't be right because it assumed that the rate of pressing
changed with the (hypothetical) error. I acknowledged at the same time that
my fit of a control-system model to the Motheral data was spurious. Why are
you still harping on this? Do you want me to repeat once a week, or day,
that my first attempt at a specific model wasn't correct?
Or could it be -- this just occurred to me -- that while rate of food
delivery is not a controlled variable, you're assuming it might still be a
reinforcer? That, of course, is ruled out because the behavior rate is
constant and does not vary with rate of food delivery. The same data that
rule out rate of food delivery as a controlled variable rule it out as a
reinforcer. The behavior rate does not change with "reinforcement" rate.
Your VI example in which both behavior rate and reinforcement rate decline
means nothing because the actual cause of the decline is time away from the
lever; it is not the decline in reinforcement rate that causes the decline
in behavior rate.
Not so long ago Rick was holding this prediction up as a crucial test of PCT
vs reinforcement theory -- PCT predicted that response rates would go up
when the interval size was increased, reinforcement theory clearly predicted
the opposite. If rates _had_ gone up I'm absolutely certain that it would
have been hailed as a clear victory for PCT. My, what a difference a datum
makes.
When you find the correct aspect of food delivery that is the controlled
variable, I assume you will find that any independent disturbance that
makes it increase will cause behavior to decrease and vice versa, as
control theory predicts. The problem with the prediction that Rick made is
that he assumed that reinforcement rate had been established as the
controlled variable. If it had, his prediction would have been correct.
Since that proposal for a controlled variable has been ruled out (you say),
you will have to look further for a controlled variable. If you find one,
Rick's prediction will automatically be borne out, since it is part of the
Test.
Reinforcement theory also has a problem, in explaining why actual behavior
rate remains the same when the schedule is changed, thus changing the
reinforcement rate.
This
description is particularly deceptive
There I go again, trying to pull the wool over everyone's eyes. How
fiendishly clever of me.
I'm beginning to think it's your own eyes you're pulling the wool over.
in that the mentioned decline of
response rates with reinforcement rates is actually a decline in both
response rates and reinforcement rates with increases in the interval used
in the schedule.
_Of course_ reinforcement rates decline with increases in the interval used
in the schedule -- changing the interval sizes is how we experimentally vary
the reinforcement rate.
That is not what I am talking about. If time away from the lever increases
with the interval or ratio, that alone will result in a decline in behavior
rate and reinforcement rate, when both are measured as total number during
a session divided by duration of the session. You do not see any
significant change in pressing rate (corrected for collection time) until
the animal starts spending significant time away from the lever in addition
to collection time. This occurs at the higher intervals or ratios.
On ratios under which the rat works the lever continuously but for
collections, changing the schedule and thus the reinforcement rate has no
effect on behavior rate (corrected for collection time). So the food
delivery rate changes but the pressing rate does not change. This removes
food delivery rate as a potential controlled variable, and also as a
potential reinforcer. In fact, because pressing rate does not change, food
delivery itself, in any form, is ruled out as a reinforcer.
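The logic here is simple enough to put in numbers (hypothetical ones, for illustration only): if the corrected pressing rate stays constant while the ratio is changed, the food-delivery rate swings widely with the schedule while the supposed "effect" on pressing is nil:

```python
# Hypothetical sketch: a constant pressing rate (corrected for
# collection time) across FR schedules. The food-delivery rate then
# varies eight-fold with the ratio while the pressing rate does not
# move at all -- so pressing rate is independent of food rate.
PRESS_RATE = 120.0  # presses per working minute, constant by observation

for ratio in (5, 10, 20, 40):
    food_rate = PRESS_RATE / ratio  # deliveries per working minute
    print(ratio, PRESS_RATE, food_rate)
```

An eight-fold change in food-delivery rate with zero change in pressing rate is exactly what rules out food-delivery rate both as a controlled variable and as a reinforcer of pressing.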
These decreases on long-interval schedules come about
because the animals spend increasing time away from the lever. That
naturally decreases both rates, response and reinforcement, AVERAGED OVER
THE WHOLE SESSION.
Actually, because of the nature of variable-interval schedules, the observed
decline in response rates had little or no effect on reinforcement rates
until the response rates had fallen to extremely low levels.
Perhaps so, but that is irrelevant. Time away from the lever increases with
increased intervals, doesn't it? That would automatically decrease both
apparent rates equally: pressing and reinforcement. If your mind-set tells
you that pressing rate depends on reinforcement rate, you will interpret
that simultaneous decline as causal, when it is not.
Furthermore,
the data in question are not whole-session averages, but the slopes of the
cumulative records, which indicate the rates of responding. On VI
schedules, this slope tends to remain fairly constant throughout a
session.
If you mean the slopes between the pauses, then you're saying that actual
response rate does not change. But if you mean the slopes averaged by eye
across the pauses, you're averaging periods of fast responding with periods
of no responding. Apparently, from the following, the latter is the case:
Me:
That average includes periods of continuous responding
and appropriately repeated reinforcement-events as well as periods of no
responding and therefore no reinforcement.
You:
True.
As I said.
But on high VI schedules, most programmed interreinforcement
intervals are long enough that (a) reinforcement-events would not have
occurred during most pauses and (b) most pauses will end relatively soon
after a programmed interval has elapsed and the next reinforcement
opportunity set up. Reinforcement rate consequently is little affected.
That is mathematical gibberish, Bruce. What is little affected is the
_whole-session_ reinforcement (and behavior) rate. The short-term behavior
rate is drastically affected. You are rationalizing.
And don't forget that the data consist of the times at which each and every
response occurred, not session averages. However, it is convenient to
report the result in terms of session averages, because those averages
reflect the typical rates observed at any time within a session.
More gibberish. It is convenient because you get the answer you want, not
because there is either logical or mathematical justification for this
number-juggling. The data initially consist of each and every response.
When you use session averages, you throw out most of the relevant information.
There is no implication
regarding any causal relationship between reinforcement rate and response
rate.
But we can state with certainty that increasing X, the programmed interval
size (which is independent of the rat's behavior), reliably results in a
lowered response rate, Y.
That is because it reliably decreases the fraction of the total time the
rat spends pressing the bar. You are varying both numerators (total
presses, total reinforcements) while the denominator stays the same
(session duration). There is no causal relation between X and Y. If you
deliberately close your eyes to the obvious explanation of the decline in X
and Y, you can claim an overall effect, but in doing so you only reveal
that you want the conclusion to be true more than you want to arrive at it
by legitimate means.
That is, the value of X largely determines the
value of Y under the conditions of the study.
That is wrong.
X is certainly having a
strong influence on Y, if not a "causal" one as you define the term.
That is wrong. X is having a strong influence on time away from the bar,
and that has a strong influence on _both_ session-average rate of pressing
and rate of reinforcement, making them both decline.
At the
same time, Y (response rate) is having very little influence on X', the
observed rate of reinforcement, at all but the highest VI values, when
response rate finally falls low enough to strongly affect the obtained
reinforcement rate.
That is wrong, too. What you mean is that delta-Y has little influence on
delta-X', because the mean slope of the apparatus function has become
almost horizontal. However, Y has a very strong influence on X': it is only
because Y is nonzero that X' is nonzero, and X' is completely determined by
Y, given the apparatus function.
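The shape of that apparatus function can be sketched under a common approximation (an assumption, not a claim about Bruce's apparatus): with roughly random responding at rate r against a VI schedule of mean programmed interval T, the obtained reinforcement rate is about r / (1 + rT). The exact formula depends on the interval distribution; the shape is the point:

```python
# Approximate VI "apparatus function": obtained reinforcement rate as a
# function of response rate r (per minute) and mean programmed interval
# T (minutes), assuming roughly random (Poisson-like) responding.
def obtained_reinf_rate(r, T):
    """Approximate reinforcements per minute: r / (1 + r*T)."""
    return r / (1.0 + r * T)

T = 2.0  # a hypothetical VI 2-min schedule
for r in (0.0, 1.0, 10.0, 50.0, 60.0):
    print(r, obtained_reinf_rate(r, T))
```

X' is zero only when Y is zero, and for any given Y the function fixes X' completely: Y determines X'. Yet between r = 50 and r = 60 the obtained rate barely moves (both about 0.5 per minute), which is the nearly horizontal slope: delta-Y has almost no effect on delta-X' even though Y wholly determines X'.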
--------------------------------------------------------------------
Bruce, you need to take a long and hard look at the way you're interpreting
your data, and at the way you're reasoning. Your drive toward arriving at
particular conclusions is obvious to anyone who doesn't share your
objectives. I can't argue with you: you seem completely impervious to
mathematical or logical reasoning that doesn't support the conclusions you
want to be true.
I don't suppose that you will change your approach just because I object to
it. All I can say is that I object to it strongly enough that I will not go
on tilting at this windmill much longer.
Best,
Bill P.