[FROM: Dennis Delprato (950618)]
Rick Marken (950616.1400)
Bruce Abbott (950616)
(Various comments on 'reinforcement model')
I've already tried one simple version of the reinforcement model and it
seems to work alright. The reinforcement model that worked for E. coli
doesn't work for R. coli, but it looks like some kind of control by
consequences model will work.
I'll look at it more closely but it's beginning to look like you're right
again. Maybe this "random consequences" approach to dealing with
reinforcement theory isn't the way to go.
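For readers who haven't seen the E. coli demonstrations being discussed, here is a minimal sketch of a "control of random consequences" model along those lines. All names and parameters are my own illustration of the idea (random tumbles, non-random decision about *when* to tumble), not anyone's actual simulation code:

```python
import math
import random

def e_coli_run(steps=2000, seed=1):
    """E. coli-style control of random consequences: each change of
    direction ('tumble') is completely random, but the decision of WHEN
    to tumble depends on the perceived consequence -- whether the
    distance to the 'nutrient' target is getting better or worse."""
    random.seed(seed)
    target_x, target_y = 0.0, 0.0
    x, y = 50.0, 50.0                      # start ~70 units from target
    heading = random.uniform(0.0, 2.0 * math.pi)
    prev_dist = math.hypot(x - target_x, y - target_y)
    for _ in range(steps):
        x += math.cos(heading)             # one unit step on current heading
        y += math.sin(heading)
        dist = math.hypot(x - target_x, y - target_y)
        if dist >= prev_dist:              # consequence worsening:
            heading = random.uniform(0.0, 2.0 * math.pi)  # tumble randomly
        prev_dist = dist
    return math.hypot(x - target_x, y - target_y)

final_distance = e_coli_run()
```

Even though every new heading is drawn at random, the agent reliably ends up near the target, because it controls a perceived consequence (closing distance) rather than emitting responses that are "selected" by the environment.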
For some time I've had the 'feeling' that the selection by consequences
description of how reinforcement 'works' will not be easy to displace, and
not because this theory is of some sort of ultimate value. Rather,
'selection by consequences' is a type of radically descriptive account
of _procedures_ and outcomes. As such it is of great value in that
previous attempts to provide general descriptions of 'reinforcement'
asked us to accept this or that hypothetical construct, e.g.,
strengthened S-R bonds, response strengthening, confirmed expectations.
Rick's attempts to use PCT models to test the generality of the
selection by consequences account may be doomed to support
selection by consequences, basically because thus far he has
operated on selection by consequences' own terms. That is,
his models seem to have been based on _their_ fundamental units--
R - C and S - R - C (R = operant response, C = consequence, S =
discriminative stimulus). The units are built into the reinforcement
procedures and thus are inherent in the models, because the models
were based on the procedures.
Another feature of selection by consequences theory, in addition to
its radically descriptive nature, makes it difficult to overthrow.
This is the fact that reinforcers are functionally defined. If particular
events follow occurrences of operant responses and the rate of the
operant does not change, one has not established the operation of
reinforcement. One implication of this for a PCT modeler seems to be
that if one sets out beforehand to model reinforcement procedures,
the model will always be compatible with a reinforcement description.
If PCT offers a new fundamental unit of psychological
behavior, then PCT is going "underneath" the operations of
reinforcement. It is getting at what is more generally going
on than what we observe on the surface. The question, then, is
how to go beyond the surface of selection by consequences
reinforcement. One place to look might very well be the work
on feedback functions. It seems to me that the molar behaviorists
(where one finds feedback functions) have departed from Skinnerian
"molecular" selection by consequences. They say, I think, "When
responses are modified by contiguous relationships between
responses and consequences, it is not what one sees directly
(contiguous relationships) but what one does not see directly
that is more generally important for describing what is going
on."
Dennis Delprato
psy_delprato@emuvax.emich.edu