[From Bruce Abbott (951128.1140 EST)]
Rick Marken (951127.2000) --
Bruce Abbott (951127.1650 EST)
The whole point of an experiment is _control_. If you allow the independent
variable to vary on its own, you have no idea what other variables are
varying with it.
I disagree because you know just as much (or as
little) about what other variables might be varying when you manipulate
a variable as when you don't.
Lately there's been this little controversy over whether certain differences
in the ways men and women tend to approach certain problems can be
attributed to subtle genetic or hormonally-induced differences in brain
structure or whether these differences are a product of different
socialization experiences. The problem is difficult to resolve because we
can't randomly assign people to sex for the purpose of the experiment and
for both ethical and practical reasons we can't raise boys and girls in
environments in which the usual socialization experiences are reversed. The
two variables are hopelessly confounded. But if we could randomly assign
people as described, we would have our answer. (My guess is that both
factors mutually interact in a complex pattern, so it would take more than a
single, simple experiment to determine the normal course of events, but this
doesn't affect my basic point.)
When you manipulate a variable there are always
confounding variables. Indeed, that's the game that's played in conventional
psychological research. Since psychologists aren't studying models, they get
their research published by doing experiments that eliminate confounding
variables present in other people's experiments; there are always confounding
variables, so there are always new issues of JEAB, JEP, etc.
You are right that there are no confound-free experiments. The best anyone
can do is to eliminate as many sources of confounding as possible by
exerting control, and to vary which variables are confounded with which
across experiments in order to determine the independent influence of each.
This problem is not unique to "conventional" research. When you use the
Test for the controlled variable you face this same problem, and resolve it
in the same way.
You are wrong that psychologists aren't studying (and testing) models. What
they aren't studying are control models.
How about the classic (and defunct) Hullian model
Ok. I suppose Hull's model does include organismic variables. But, as I
recall, it also attributes characteristics to reinforcers (like incentive
value) that are independent of the organism.
Even incentive value is supposed to represent the value of the reinforcer
_to the organism_ and not some independent quality of the reinforcer itself.
That value does depend on certain physical characteristics of the
reinforcer, as perceived by the organism, such as the taste of a food
pellet. How a given bit of food tastes to a rat we cannot know, but we can
determine how certain physical characteristics of the food influence the
rat's choice between alternative formulations. The only way we can know the
incentive value of the food is, in effect, to ask the rat. So in these
models incentive value is not independent of the organism either.
You continue to cling to the belief that reinforcement theorists have a
problem because they have the wrong THEORY of control (the theory that
control emerges from differential reinforcement). In fact, reinforcement
theorists have a problem because they don't know WHAT control is; since they
don't know what control is, they don't have a theory of control (or purpose
or whatever they want to call it). It's not a definitional problem, it's a
FACTUAL problem (remember).
No, you continue to misinterpret my words. If you bend any farther backward
in your attempt to do so, you're going to fall on your head! I didn't say
that reinforcement theorists have a wrong theory OF control, I said that
they have a theory according to which control is an illusion. Thus when you
say "reinforcement theorists have a problem because they don't know WHAT
control is," you are simply restating my position. And when you follow this
with "it's not a definitional problem, it's a FACTUAL problem," you are
taking my previous argument about the definition of the word "control"
completely out of context, and you know it.
Anyway, you didn't answer my question: why are there no studies done by
reinforcement theorists that can even be construed as being about
determining what organisms control?
Ah, but I did answer your question. If you don't accept the answer I gave,
that's another thing entirely.
Do you want to be the one to tell the reinforcement theorists about this
or should I?
I believe you already tried. Let me do it this time.
I would like to know how you would go about telling them that they are
not studying behavior properly.
I have a strategy in mind, if someone else doesn't beat me to it. As I
recall, you have expended quite a bit of effort trying to talk me out of it.
I would also like to know how you
would tell them that it's not their theories (such as they are) that are
wrong but, rather, their concept of the nature of behavior.
Ah, but if the concept is wrong, the theories based on the concept are wrong
too.
Bill Powers (951127.0750 MST)
This, I think, accounts for why control theory was not discovered by
psychologists.
Me too.
Or anyone else not working with a simple amplifier and trying to understand
the effects of negative feedback in that elementary system.
There would be explanations of this effect in terms of the
"intimidating" properties of horizontal lines; some might say that
horizontal lines afford vertical opposition.
Nah, I don't think so. Note that you have given the "conventional"
researcher nothing but the values of these two variables to work with. All
that could be said is that there is an inverse, probably linear,
relationship between the two variables, in which varying the horizontal line
length causes the vertical line length to vary. But the "control"
researcher evidently has been given more, or else there would be no reason
for him or her to suspect that this relationship had anything to do with
keeping the area of a rectangle constant. To discover this, the hypothesis
would have to be conceived and then tested. Other hypotheses consistent with
the data are possible, such as that the participant is trying to keep the
total length of the two lines constant, or a weighted sum of their lengths
constant. And there is nothing to prevent the "conventional"
researcher from discovering the true relationship. Researchers often look
for the constancies in an experiment. For example, in concurrent VI
schedules, as the two VI schedules are varied, a pigeon's keypecks are
directed more toward one key than toward the other, so that the proportion
of pecks on a key changes systematically with the proportion of
reinforcements delivered on the schedule associated with that key. However,
if the overall rate of reinforcement on the two schedules combined remains
constant, the total pecking on the two keys combined remains constant. I
know this because the researchers involved in these studies checked for this
constancy.
None of which obviates the main points of your nice example, which are that
the correct model explains the data better than an incorrect one, and that
researchers following conventional practices are unlikely to discover the
correct model.
In the range of IV values selected for this experiment, the relationship
between IV and DV appears linear. It is actually non-linear (the relationship
is DV = 1/IV).
Neither the conventional nor the control researcher would know this, given
only the data presented, until the proper constancy was discovered.
Remember, neither of them knows that the participant is controlling the area
of a rectangle.
But the important point is that the nature of the relationship
between IV and DV depends on the geometry of the situation, not on the nature
of the subject. This is the Behavioral Illusion; the observed relationship
between IV and DV in this experiment depends on the nature of the environment
(actually, the nature of the environmental relationship between IV and DV),
NOT on the nature of the subject (which these experiments are designed to
explore).
Perhaps the clearest way to visualize this is to imagine the (almost)
constant value of the controlled variable as the pivot of a see-saw. Vary
the IV (wiggle one end of the see-saw) and the other end (the DV) changes in
mirror-image fashion. Of course, with more complex environmental functions,
the relationships will be other than the simple one represented by the see-saw.
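The see-saw picture can be checked with a toy simulation. In this sketch (all
parameter values, including the reference of 12 and the gain, are illustrative
assumptions), a simple integrating control system adjusts DV to keep the
perceived product IV * DV near its reference. When the "experimenter" varies
IV and records DV, the recovered relationship is DV = 12/IV, the inverse of
the environmental link, regardless of the controller's internal details:

```python
def run_trial(iv, reference=12.0, gain=0.5, steps=200):
    """Simple integrating controller: the subject adjusts its output DV
    so that the perceived quantity IV * DV tracks the reference value."""
    dv = 1.0  # arbitrary starting output
    for _ in range(steps):
        perceived = iv * dv
        error = reference - perceived
        dv += gain * error / iv  # integrate the error (converges for 0 < gain < 2)
    return dv

# "Experiment": manipulate the IV, record the DV at each setting.
for iv in [2.0, 3.0, 4.0, 6.0]:
    dv = run_trial(iv)
    print(f"IV = {iv}: observed DV = {dv:.3f}, environment predicts 12/IV = {12.0/iv:.3f}")
```

Changing the gain or the starting DV changes how quickly the system settles,
but not the settled IV-DV relationship, which is fixed entirely by the
environmental function, as the Behavioral Illusion argument says.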
Now, let's try a different experiment. A researcher trains a pigeon to peck
at a pair of keys for grain reinforcement while projecting different images
onto a small screen in front of the pigeon. The slide may or may not
display an example of what the experimenter designates as the correct
concept. If it does, the pigeon will be rewarded for pecking at the left
("Yes") key and not if it pecks at the right ("No") key. If it does not,
the opposite will be true. No slide is ever shown twice. The concept in
this case is Richard Herrnstein's girl friend. In some slides, she is
plainly visible, in others she is partly obscured by cars, trees, and so on,
or part of a crowd of people, or not in the picture at all. The clothes she
is wearing vary, as does her hair style. The pigeon learns to pick her image
out with an accuracy comparable to that of a human (people were also tested).
Some questions:
1. What have we learned about the pigeon?
2. What is the method used (control or IV-DV)?
3. Based on this result, did Herrnstein have any reason to consider
jumping out of the window of his Harvard office?
Regards,
Bruce