Analysis of concurrent fixed-interval schedules

[From Bill Powers (951029.0830 MDT)]

EXPRESSING THE TOTAL RATE OF REINFORCEMENT AS A FUNCTION OF BEHAVIOR
ALLOCATION BETWEEN TWO FIXED-INTERVAL FEEDBACK FUNCTIONS:


--------------------------------------------------------------------
1. A fixed-interval feedback function is equivalent to a fixed-ratio-1
feedback function with a ceiling on the reinforcement rate: r = b for
behavior rates up to b = 1/I, and r = 1/I for all higher behavior rates.

[FIG 1: reinforcement rate r (vertical axis) versus behavior rate b
(horizontal axis). The FR-1 line r = b rises diagonally from the origin.
The FI-I1 curve follows the FR-1 line up to b1' = 1/I1 and then levels
off at r = 1/I1; the FI-I2 curve follows it up to b2' = 1/I2 and then
levels off at r = 1/I2 (I2 < I1, so b1' < b2').]

Here b1' and b2' represent the behavior rates above which the
reinforcement rate becomes constant, for the two interval schedules I1
and I2 respectively.
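
This piecewise form can be stated compactly as r = min(b, 1/I). Here is
a minimal sketch in Python (the language and the name r_fi are my
choices for illustration, not part of the analysis itself):

    def r_fi(b, I):
        # Reinforcement rate on a fixed-interval schedule: proportional
        # to behavior rate b up to b = 1/I, constant at 1/I above that.
        return min(b, 1.0 / I)

    # Below the ceiling the function is just FR-1: r = b.
    assert r_fi(0.5, 1.0) == 0.5
    # Above the ceiling, r stays at 1/I no matter how fast b gets.
    assert r_fi(3.0, 1.0) == 1.0
    assert r_fi(10.0, 1.0) == 1.0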

2. We can construct a three-dimensional plot with r on the z-axis, b1
on the y-axis, and b2 on the x-axis. Looking down the z-axis into the
x-y plane, we have

[FIG 2: the b1-b2 plane seen from above, with b1 on the vertical axis
and b2 on the horizontal axis. A horizontal dashed line at b1' = 1/I1
and a vertical line at b2' = 1/I2 divide the plane into four quadrants.
Three parallel diagonal dashed lines, b1 = T1 - b2, b1 = T2 - b2, and
b1 = T3 - b2, represent three total behavior rates; the T1 line meets
the b2 axis at b2 = T1.]

From Fig. 1 we can see that the reinforcement rate at b2' is higher than
at b1'. The schedule interval I1 is thus greater than I2.

In Fig. 2 the three parallel dashed lines represent three different total
behavior rates, defined as T = b1 + b2. In the steady state, there will be
some value of T, and the allocation of behaviors b1 and b2 will define a
point along the diagonal line corresponding to the total behavior rate.

           REINFORCEMENT RATE AS A FUNCTION OF B1 AND B2

In the lower left quadrant, bounded by the origin, b1', b2', and the
point (b1', b2'), the reinforcement rate increases at the same rate with
increases in either b1 or b2: r = b1 + b2.

In the upper left quadrant, reinforcement is constant with respect to
changes in b1: r = 1/I1 + b2.

In the upper right quadrant, both behavior rates are above the reinforcement
limits and reinforcement rate is constant: r = 1/I1 + 1/I2.

In the lower right quadrant, b2 is above the schedule-2 limit, so the
schedule-2 contribution is constant and the total decreases as b1
decreases: r = 1/I2 + b1.
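
All four cases are just the two single-schedule ceilings superimposed:
r = min(b1, 1/I1) + min(b2, 1/I2). A sketch checking each quadrant
(Python again, with arbitrary example intervals):

    def r_total(b1, b2, I1, I2):
        # Total reinforcement rate on two concurrent FI schedules.
        return min(b1, 1.0 / I1) + min(b2, 1.0 / I2)

    I1, I2 = 4.0, 2.0            # I1 > I2, so 1/I1 = 0.25 < 1/I2 = 0.5
    # Lower left quadrant: r = b1 + b2
    assert r_total(0.1, 0.2, I1, I2) == 0.1 + 0.2
    # Upper left quadrant: r = 1/I1 + b2
    assert r_total(0.9, 0.2, I1, I2) == 0.25 + 0.2
    # Upper right quadrant: r = 1/I1 + 1/I2
    assert r_total(0.9, 0.9, I1, I2) == 0.25 + 0.5
    # Lower right quadrant: r = 1/I2 + b1
    assert r_total(0.1, 0.9, I1, I2) == 0.5 + 0.1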

  REINFORCEMENT RATE AS A FUNCTION OF ALLOCATION AND TOTAL BEHAVIOR

For total behavior rate T1, the diagonal line b1 = T1 - b2 lies
completely within the lower left quadrant, and reinforcement is constant
at r = T1.

For total behavior rate T2, with b1 starting equal to T2 and decreasing,
the total reinforcement increases according to r = 1/I1 + b2; then, when
the point enters the lower left quadrant, r becomes constant at T2.

For total behavior rate T3, as b1 decreases from T3, the reinforcement
rate first rises according to r = 1/I1 + b2, then remains constant at
r = 1/I1 + 1/I2 over a middle range of allocations, then begins
decreasing as r = 1/I2 + b1.
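
Sweeping the allocation along one of these diagonals shows the three
regimes directly. A sketch (Python, reusing r_total, I1, and I2 from the
previous sketch; the value of T3 is arbitrary but chosen greater than
1/I1 + 1/I2 so that all three regimes appear):

    T3 = 1.2
    n = 12
    for k in range(n + 1):
        b1 = T3 * (n - k) / n        # b1 runs from T3 down to 0
        b2 = T3 - b1                 # the allocation constraint
        r = r_total(b1, b2, I1, I2)
        print(f"b1={b1:5.2f}  b2={b2:5.2f}  r={r:5.2f}")
    # r rises while b2 < 1/I2, plateaus at 1/I1 + 1/I2 = 0.75 while
    # both behavior rates exceed their ceilings, then falls once b1
    # drops below 1/I1.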

                   MODELS OF ALLOCATION BEHAVIOR

Under the approach outlined above, there are two undetermined variables:
the total behavior rate T and the behavior rate b1 (with b2 determined
as T - b1). Defining the situation this way makes the response of
reinforcement to allocation depend on the total behavior rate, with the
form of the response changing at different levels of total behavior.
This would make the model needlessly complex.

Alternatively, we can define the two variables as b1 and b2, and let the
total behavior rate be a dependent variable. Under this approach, the
behavior of r as a function of either b1 or b2 is simple: proportional
up to some value of the behavior rate, and constant for all higher
values. This should lead to a more tractable model.
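
The tractability comes from separability: with b1 and b2 independent, r
decomposes as min(b1, 1/I1) + min(b2, 1/I2), so each behavior rate's
contribution is unaffected by the other. A last sketch (Python, again
reusing r_total, I1, and I2 from above):

    # The increment in r from raising b2 is the same at every b1:
    for b1 in (0.05, 0.15, 0.60):
        delta = r_total(b1, 0.3, I1, I2) - r_total(b1, 0.0, I1, I2)
        print(f"b1={b1:4.2f}  increment from b2=0.3: {delta:4.2f}")
    # Every line prints 0.30: r is f1(b1) + f2(b2), so the two behavior
    # rates can be modeled with independent one-variable feedback
    # functions.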

I won't get further into the modeling now.
-----------------------------------------------------------------------
Best,

Bill P.