[From Rick Marken (990408.1000)]
Marc Abrams (990406.2250) --
Sorry Rick. I don't believe that Bruce would willfully ask PCT to
play in a rigged game.
Nor do I. I think this is because Bruce really doesn't believe that
the game [of having PCT account for conventional psychological research
data] is rigged. So let me take this opportunity to try to explain why
I think the game is rigged. I will do it in the context of the good
ol' Coin Game (which came up in the discussion of the goose's egg
retrieving behavior).
In the Coin Game (see B:CP, p. 235) you lay out four coins on a table
and ask the subject to control some perceptual aspect of the coins.
There are a zillion possibilities; the subject could control their
_pattern_ (keep them in a square, say), some relationship between the
coins (e.g., if coin 1 is heads, coin 2 is tails and vice versa), etc.
The game is designed to teach you how to discover the perception
(aspect of the coins) a person is controlling (it teaches you how
to do the Test). But I think it can also be used to show why the
game of conventional research is rigged against PCT.
I will just focus on conventional research using a single subject
because I think we all already agree that using group data to study
individuals is inappropriate. Single subject research is rare
in conventional psychology but it is used (I used it in my doctoral
research). I will use the Coin Game to show that conventional
research methods, even when applied to a _single subject_, do not
provide the kind of data that is needed for a proper PCT analysis;
it's still a rigged game.
Imagine that you have a subject who is controlling _some_ aspect
of the coins. What you will observe is that when you move the
coins in certain ways the subject responds by saying either "OK"
or "Not OK". So you can do an experiment to see how various
movements of the coins (the independent variable) affect the
subject's responses (the dependent variable). For example,
the coins might be laid out as follows
DH NT
NH DT
Where D is "dime", N is "nickel", T is "tails" and H is "heads".
The independent variable is the position of DH. One level of this
variable is "leave DH where it is" (the "control" condition); the
other level is "move DH to the left" (the "experimental" condition).
When you do this experiment you find that the subject always says
"OK" in the "control" condition and always says "Not OK" in the
"experimental" condition. So you see a strong (and error free)
effect of the IV (position of DH) on the DV (saying "OK" or "Not
OK").
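To make this concrete, here is a hypothetical sketch of the experiment in Python. The coordinates, the coin layout as a dictionary, and the hidden predicate standing in for the subject's controlled perception are all my own illustrative choices; the whole point is that the IV-DV data alone cannot tell you what that hidden predicate really is.

```python
# Illustrative sketch of the IV-DV experiment. The simulated subject
# controls *some* perception of the coins; a hidden stand-in predicate
# plays that role here (its identity is exactly what the IV-DV data
# cannot reveal).

REFERENCE = {"DH": (0, 1), "NT": (1, 1), "NH": (0, 0), "DT": (1, 0)}

def _hidden_controlled_perception(layout):
    # Hypothetical: any predicate sensitive to DH's position would
    # reproduce the observed results equally well.
    return layout["DH"] == REFERENCE["DH"]

def subject_response(layout):
    # The subject says "OK" when the controlled perception is at its
    # reference state, "Not OK" otherwise.
    return "OK" if _hidden_controlled_perception(layout) else "Not OK"

control_condition = dict(REFERENCE)                   # leave DH where it is
experimental_condition = dict(REFERENCE, DH=(-1, 1))  # move DH to the left

print(subject_response(control_condition))       # OK
print(subject_response(experimental_condition))  # Not OK
```

Running this reproduces the strong, error-free IV-DV effect described above, no matter which DH-sensitive predicate you hide inside the subject.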
Now you, as a PCT theorist, are asked to explain this result using
PCT. Of course, you _can_ build a PCT model to explain this result
but it will almost certainly be wrong; it will be wrong because,
in order to build the model, you have to make an _assumption_ about
what variable is being _controlled_ in this experiment. There are
so many possibilities -- none of which have been ruled out by
research -- that you are virtually certain to be wrong. Moreover,
a cause-effect model can be fit to these data just as well as a
control model. Since there has been no test to determine that there
_is_ a controlled variable, the conventional psychologist can
correctly point out that the control model is more complex than
the cause-effect model because, in the control model, we have
to make assumptions (about controlled variables) about facts not in
evidence. This is where the "rigging" comes in. If all you have
are the results of conventional research then the conventional
researcher can say "gotcha" when you try to build a control
model that assumes (as it must) that some particular variable is
under control.
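The equal-fit point can be shown directly. In this sketch (all names and the perceptual signal are my own hypothetical choices), a bare cause-effect rule and a control model both reproduce the two IV-DV observations perfectly, so the data cannot decide between them:

```python
# Hypothetical sketch: with only the IV-DV data, a cause-effect model
# and a control model are empirically indistinguishable.

observations = [("leave", "OK"), ("move_left", "Not OK")]

def cause_effect_model(iv):
    # The response is a direct function of the stimulus (IV); no
    # controlled variable is assumed.
    return "OK" if iv == "leave" else "Not OK"

def control_model(iv, reference=0):
    # Assumes (about facts not in evidence) that moving DH disturbs
    # some controlled perception; error relative to the reference
    # drives the response. The perceptual signal here is a pure guess.
    perceived = 0 if iv == "leave" else 1
    error = perceived - reference
    return "OK" if error == 0 else "Not OK"

for iv, dv in observations:
    assert cause_effect_model(iv) == dv
    assert control_model(iv) == dv
print("both models fit the IV-DV data perfectly")
```

The cause-effect version needs no assumption about a controlled variable at all, which is exactly the "gotcha" available to the conventional researcher.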
A conventional researcher who has a vague familiarity with the
PCT concept of a controlled variable might also be able to point
to other conventional results that contradict any assumption you
make about the variable under control. For example, you might guess
that the variable controlled is "shape" and that the subject wants
to perceive the coins in a square; that's why moving DH to the
left results in a "Not OK". But the conventional researcher might
be able to find a study where the "controlling for square"
assumption is violated; an experiment, for example, where
reversing DH and NT, while preserving the square shape of the
coins, results in "Not OK".
The problem is that the conventional approach is not aimed at
determining what variable(s) a subject might be controlling;
in fact, the IV-DV approach is based on complete ignorance of
the possible existence of controlled variables. The proper way
to study behavior like that in the Coin Game -- the way that
provides data relevant to the application of a control model --
is by _systematically_ testing for controlled variables. Since
conventional research does _not_ involve a _systematic_ test for
controlled variables, any attempt to apply a control model to
the results of such research is (in my opinion) anteing up to
play in a rigged game.
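The contrast with systematic testing can also be sketched. Everything here is my own hypothetical setup: the coordinates, the two candidate perceptions, the disturbance set, and the subject's actual controlled variable ("DH in the top-left corner"). The logic, though, is the Test: a candidate survives only if the subject objects exactly when that perception is disturbed.

```python
# Sketch of systematically Testing for controlled variables. The
# simulated subject (unknown to the tester) controls "DH in the
# top-left corner"; candidates and disturbances are illustrative only.

REFERENCE = {"DH": (0, 1), "NT": (1, 1), "NH": (0, 0), "DT": (1, 0)}

def is_square(layout):
    # True when the four coins sit on the corners of a square.
    xs = sorted({p[0] for p in layout.values()})
    ys = sorted({p[1] for p in layout.values()})
    return len(xs) == 2 and len(ys) == 2 and xs[1] - xs[0] == ys[1] - ys[0]

def dh_top_left(layout):
    return layout["DH"] == (0, 1)

def subject_ok(layout):
    # The subject's actual controlled perception (hidden from the tester).
    return dh_top_left(layout)

candidates = {"square pattern": is_square, "DH in top-left": dh_top_left}

disturbances = [
    dict(REFERENCE, DH=(-1, 1)),  # move DH left: disturbs both candidates
    # swap DH and NT: square pattern preserved, DH position disturbed
    {"DH": (1, 1), "NT": (0, 1), "NH": (0, 0), "DT": (1, 0)},
]

# A candidate survives the Test only if the subject objects exactly
# when that perception is pushed away from its reference state.
results = {}
for name, perceive in candidates.items():
    results[name] = all(subject_ok(d) == (perceive(d) == perceive(REFERENCE))
                        for d in disturbances)
    print(name, "survives" if results[name] else "ruled out")
```

The second disturbance is the one that does the work: it leaves the square intact but still draws a "Not OK", ruling out the "square" hypothesis just as the counterexample study above would. No IV-DV design applies disturbances chosen to discriminate between candidate controlled variables in this way.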
Best
Rick
--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/