Schedules

[From Bruce Abbott (951030.1830 EST)]

Bill Powers (951029.1430 MST) --

   Bruce Abbott (951028.1605 EST)

    I've decided to simply videotape the experimental sessions rather
    than attempting to automate detection of the rat's position, etc.

OK -- also a good source of material for others who think up new things
to look for after it's all over. Accurate time recording on the
videotape is a must, either as a clock in the field of view or internally
generated. For synchronizing with the computer data, we need single-
frame resolution. A frame counter?

Agreed. A colleague has a frame-grabber that will come in handy for this.
I also plan to illuminate some LEDs during lever-presses and
food-deliveries. These will be placed within the camera's field of view; it
should be possible to correlate these events with the times recorded by the
computer and thus synchronize the tape with the data.
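
Roughly, the synchronization step I have in mind looks like this
(illustrative numbers only, and an assumed 30 frames/s): the LED-onset
frames and the computer-logged event times should differ by a nearly
constant offset, which can then be used to map any logged time to a frame.

# Sketch of the tape/computer synchronization (invented numbers, 30 fps assumed).
FPS = 30.0
event_times_s = [12.40, 47.15, 83.90]     # lever presses logged by the computer
led_onset_frames = [411, 1453, 2556]      # frames where the LED lights on the tape

offsets = [f / FPS - t for f, t in zip(led_onset_frames, event_times_s)]
offset = sum(offsets) / len(offsets)      # average tape-minus-computer offset

def frame_for(t_seconds):
    """Video frame corresponding to a computer-logged time."""
    return round((t_seconds + offset) * FPS)

print(round(offset, 3), frame_for(60.0))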

    Of course, the ratio analysis in terms of _obtained_ reinforcement
    doesn't make much sense

I still don't think you get my point about ratio analysis. On a ratio
schedule there can never be any difference between "scheduled" and
"obtained" reinforcement rates. In fact, no rate of reinforcement can be
"scheduled" in advance; the rate of reinforcement depends strictly on
the behavior rate, which is unpredictable (except from previous
experience). All that the schedule determines is the _ratio_ of
behaviors to reinforcements.

No, I follow you. That is why the matching law really doesn't apply to
ratio schedules: there are no degrees of freedom that would allow
matching to occur.
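
To put the arithmetic in concrete terms (illustrative numbers only):

# On a fixed-ratio schedule the schedule fixes only the ratio; the obtained
# reinforcement rate is determined entirely by the response rate.
def obtained_reinforcement_rate(response_rate_per_min, ratio):
    """Reinforcements per minute on an FR-<ratio> schedule."""
    return response_rate_per_min / ratio

# Example: FR-20 at 60 responses/min yields 3 reinforcements/min;
# halve the response rate and the reinforcement rate halves with it.
for rate in (60.0, 30.0):
    print(rate, obtained_reinforcement_rate(rate, 20))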

    ... unless one views the stable end-point of performance as
    resulting from a dynamic process in which response rates affect
    reinforcement rates, which in turn affect response rates. (I'm
    describing this sequentially but it's a simultaneous, closed-loop
    process.) One can then view the outcome (exclusive preference) as
    the product of an unstable (positive feedback) process that drives
    relative response and reinforcement rates toward exclusive
    responding on the richer schedule.

The exclusive preference is much easier to describe as a control process
that is driving the reinforcement rate toward a reference level that is
higher than the maximum attainable reinforcement rate.

That's a possibility, but I was trying to suggest how Herrnstein may have
thought about it. In the traditional analysis, response rate would be
driven by reinforcement rate. The lower-ratio schedule would "pay off" more
often, and thus generate a higher rate of responding on the key associated
with that schedule. This in turn would create an even greater disparity in
reinforcement rates associated with the two keys. The increased allocation
of responses on one key would tend to come at the expense of time spent
responding on the other, so there would be a rapid shift toward exclusive
responding on the key associated with the lower ratio. That's the only
stable condition under this assumption.
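
Here is a toy version of that runaway process, just to make the dynamic
concrete (my own invented parameters, not anything from Herrnstein):

# Sketch of the positive-feedback account: responses are reallocated toward
# whichever key currently yields the higher obtained reinforcement rate, but
# on concurrent ratio schedules those rates are themselves set by the
# allocation, so any disparity feeds on itself until responding is exclusive.
ratio_a, ratio_b = 10, 30      # FR values for the two keys (key A is richer)
total_rate = 60.0              # responses per minute, assumed fixed
p = 0.5                        # proportion of responses allocated to key A
for step in range(20):
    rf_a = p * total_rate / ratio_a          # obtained reinforcement rate, key A
    rf_b = (1 - p) * total_rate / ratio_b    # obtained reinforcement rate, key B
    p = rf_a / (rf_a + rf_b)                 # matching-style reallocation
    print(step, round(p, 4))                 # p climbs rapidly toward 1.0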

If, in a two-key
ratio experiment, behavior ends up on one key exclusively, that is
because exclusive responding yields the highest possible reinforcement
rate (as I showed a few posts ago). If the reference level for
reinforcement rate were lower than that maximum, you would not see
exclusive preference. You could achieve this by making the reward size
large enough that it is not necessary to get the maximum possible
reinforcement rate to match the reference level.

Certainly a testable proposition. We'll have to give it a try sometime.
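
Just to make sure I have the picture right, here is a toy version of the
control model you are describing, with invented parameters of my own:

# Sketch: an integrating control system adjusts the fraction p of behavior
# allocated to the richer key so as to bring the total obtained reinforcement
# rate toward a reference level. When the reference exceeds the maximum
# attainable rate, p is driven to the 100% limit; otherwise p stops short of
# the limit as soon as the error reaches zero.
def settle(reference, ratio_a=10, ratio_b=30, total_rate=60.0,
           gain=0.02, steps=500):
    p = 0.2                                        # initial allocation to key A
    for _ in range(steps):
        obtained = p * total_rate / ratio_a + (1 - p) * total_rate / ratio_b
        error = reference - obtained               # reference minus perception
        p = min(1.0, max(0.0, p + gain * error))   # integrate error, clip to [0, 1]
    return p, p * total_rate / ratio_a + (1 - p) * total_rate / ratio_b

# Maximum attainable rate here is 60/10 = 6 reinforcements per minute.
print(settle(reference=8.0))   # reference above max -> exclusive responding (p = 1.0)
print(settle(reference=4.0))   # reference below max -> p settles near 0.5, no exclusivity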

    Remember, "matching" is an attempted summary description of the
    observations, not an explanation. One can say that pigeons match
    (trivially) on concurrent ratio schedules _because_ exclusive
    responding on the richer schedule maximizes the average rate of
    reinforcement.

The fact that a variable reaches a limit does not mean that it is being
maximized (in the sense of hill-climbing). The only reason the
reinforcement rate doesn't go higher is that the allocation of behavior
to one key can't go higher than 100%. If the maximum achievable rate of
reinforcement were above the reference level, the allocation of behavior
would not go all the way to the limit, because zero error would be
achieved before it did.

That's a good point, but all I was trying to illustrate is that matching (if
it does indeed occur on a given schedule) is an observation in need of
explanation, not an explanation. One hypothesis offered to explain matching
on concurrent VI-VI schedules was that the pigeons were attempting to
maximize the local (instantaneous) perceived reinforcement rate. Yours
would be another hypothesis, among several others that have been offered.

. . . I'm getting frazzled from trying to keep up with the net and
also struggling to find a way to write the book Fred Good wants. I'm
going to have to cut back on something, and the McSweeney data is it for
now. I've been working on e-mail on and off since dawn, and it's now
dark again, and I haven't put in a lick on the book, not to mention
putting screens away for the winter. Mary's beginning to wonder who that
man in the basement is. I've got to acquire some self-control here.

I'm going to have to cut back, too; I'm just not getting anything done
outside of teaching class, doing some reading, and posting to CSG-L, and
I've got my own full plate of ghastly culinary offerings to chew on. And I
really need some extra time to get the operant study going.

Regards,

Bruce