The Sting

[From Rick Marken (990408.1000)]

Marc Abrams (990406.2250) --

Sorry Rick. I don't believe that Bruce would willfully ask PCT to
play in a rigged game.

Nor do I. I think this is because Bruce really doesn't believe that
the game [of having PCT account for conventional psychological research
data] is rigged. So let me take this opportunity to try to explain why
I think the game is rigged. I will do it in the context of the good
ol' Coin Game (which came up in the discussion of the goose's egg
retrieving behavior).

In the Coin Game (see B:CP, p. 235) you lay out four coins on a table
and ask the subject to control some perceptual aspect of the coins.
There are a zillion possibilities; the subject could control their
_pattern_ (keep them in a square, say), some relationship between the
coins (e.g., if coin 1 is heads, coin 2 is tails, and vice versa), etc.
The game is designed to teach you how to discover the perception
(aspect of the coins) a person is controlling (it teaches you how
to do the Test). But I think it can also be used to show why the
game of conventional research is rigged against PCT.

I will just focus on conventional research using a single subject
because I think we all already agree that using group data to study
individuals is inappropriate. Single subject research is rare
in conventional psychology but it is used (I used it in my doctoral
research). I will use the Coin Game to show that conventional
research methods, even when applied to a _single subject_, do not
provide the kind of data that is needed for a proper PCT analysis;
it's still a rigged game.

Imagine that you have a subject who is controlling _some_ aspect
of the coins. What you will observe is that when you move the
coins in certain ways the subject responds by saying either "OK"
or "Not OK". So you can do an experiment to see how various
movements of the coins (the independent variable) affect the
subject's responses (the dependent variable). For example,
the coins might be laid out as follows

        DH NT

        NH DT

where D is "dime", N is "nickel", T is "tails", and H is "heads".
The independent variable is the position of DH. One level of this
variable is "leave DH where it is" (the "control" condition); the
other level is "move DH to the left" (the "experimental" condition).
When you do this experiment you find that the subject always says
"OK" in the "control" condition and always says "Not OK" in the
"experimental" condition. So you see a strong (and error free)
effect of the IV (position of DH) on the DV (saying "OK" or "Not
OK").
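The bind can be made concrete with a little code. Here is a hypothetical Python sketch (the coin layout, the "keep them in a square" hypothesis, and the cause-effect rule are all invented for illustration): a control model and a cause-effect model both reproduce the IV-DV data perfectly, so the IV-DV data alone cannot tell them apart.

```python
# Hypothetical sketch of the coin experiment described above.
# The layout coordinates and both "subject" rules are invented
# for illustration, not claims about any real subject.

def subject_controls_square(positions):
    """A control-model subject: says 'OK' iff the four coins
    occupy the square arrangement it is controlling for."""
    reference = {(0, 0), (1, 0), (0, 1), (1, 1)}
    return "OK" if set(positions.values()) == reference else "Not OK"

def subject_cause_effect(positions):
    """A cause-effect subject: responds to the stimulus (DH's
    position) directly, with no controlled variable at all."""
    return "OK" if positions["DH"] == (0, 1) else "Not OK"

layout = {"DH": (0, 1), "NT": (1, 1), "NH": (0, 0), "DT": (1, 0)}
moved = dict(layout, DH=(-1, 1))   # the IV: "move DH to the left"

for model in (subject_controls_square, subject_cause_effect):
    print(model.__name__, model(layout), model(moved))
```

Both models print "OK" for the control condition and "Not OK" for the experimental condition; nothing in the IV-DV result distinguishes control from cause-effect.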

Now you, as a PCT theorist, are asked to explain this result using
PCT. Of course, you _can_ build a PCT model to explain this result
but it will almost certainly be wrong; it will be wrong because,
in order to build the model, you have to make an _assumption_ about
what variable is being _controlled_ in this experiment. There are
so many possibilities -- none of which have been ruled out by
research -- that you are virtually certain to be wrong. Moreover,
a cause-effect model can be fit to these data just as well as a
control model. Since there has been no test to determine that there
_is_ a controlled variable, the conventional psychologist can
correctly point out that the control model is more complex than
the cause-effect model because, in the control model, we have
to make assumptions (about controlled variables) about facts not in
evidence. This is where the "rigging" comes in. If all you have
are the results of conventional research then the conventional
researcher can say "gotcha" when you try to build a control
model that assumes (as it must) that some particular variable is
under control.

A conventional researcher who has a vague familiarity with the
PCT concept of a controlled variable might also be able to point
to other conventional results that contradict any assumption you
make about the variable under control. For example, you might guess
that the variable controlled is "shape" and that the subject wants
to perceive the coins in a square; that's why moving DH to the
left results in a "Not OK". But the conventional researcher might
be able to find a study where the "controlling for square"
assumption is violated; an experiment, for example, where
reversing DH and NT, while preserving the square shape of the
coins, results in "Not OK".

The problem is that the conventional approach is not aimed at
determining what variable(s) a subject might be controlling;
in fact, the IV-DV approach is based on complete ignorance of
the possible existence of controlled variables. The proper way
to study behavior like that in the Coin Game -- the way that
provides data relevant to the application of a control model --
is by _systematically_ testing for controlled variables. Since
conventional research does _not_ involve a _systematic_ test for
controlled variables, any attempt to apply a control model to
the results of such research is (in my opinion) anteing up to
play in a rigged game.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Tracy Harms (990409.1930 Pacific)]

[From Rick Marken (990408.1000)]

... So let me take this opportunity to try to explain why
I think the game is rigged. I will do it in the context
of the good ol' Coin Game ...

Rick,

This was an unusually thought-provoking post. Thanks.

Tracy

[From Bruce Gregory (990410.0641 EST)]

Tracy Harms (990409.1930 Pacific)

Rick,

This was an unusually thought-provoking post. Thanks.

Hear, hear.

Bruce Gregory

[From Fred Nickols (990410.1045 EDT)]

Rick Marken (990408.1000)

<minor snip>

...let me take this opportunity to try to explain why
I think the game is rigged.

<massive snip>

Since there has been no test to determine that there
_is_ a controlled variable, the conventional psychologist can
correctly point out that the control model is more complex than
the cause-effect model because, in the control model, we have
to make assumptions (about controlled variables) about facts not in
evidence. This is where the "rigging" comes in. If all you have
are the results of conventional research then the conventional
researcher can say "gotcha" when you try to build a control
model that assumes (as it must) that some particular variable is
under control.

A question here. Up to this point, what you are saying sounds a lot like
this: "Trying to interpret or reinterpret existing conventional research
in light of PCT is an exercise doomed to failure because PCT must make some
assumptions about facts not in evidence, namely, the existence of
controlled variables." Do I have that right?

<more snipping>

Since
conventional research does _not_ involve a _systematic_ test for
controlled variables, any attempt to apply a control model to
the results of such research is (in my opinion) anteing up to
play in a rigged game.

Another question. What kind of research would have to be (if it hasn't
already been) conducted to establish the existence of controlled variables?
Would it have to be side-by-side PCT and conventional experiments aimed at
exploring and explaining the same phenomena? Or would it have to be
something very, very different? This is not a trick question.

I ask for two reasons. First, if the game is indeed rigged, as you argue
in your post, any attempt to reinterpret past conventional research would
be doomed to failure and criticisms of it from a PCT perspective would be
pointless. However, if it could be unequivocally (i.e., "scientifically")
established that controlled variables do indeed exist, then it would be
easy to assert that conventional research is and has been flawed all along.
Reinterpreting past research would still be pointless because the right
kinds of assumptions and data weren't involved. The call from CSGers and
PCTers, then, would be for new and better research (without so much time
spent carping on historical inadequacies).

I guess what I'm saying is that if your assertion about "rigged" is
correct, then why play at that table? Set up a new one.

Regards,

Fred Nickols
Distance Consulting
http://home.att.net/~nickols/distance.htm
nickols@worldnet.att.net
(609) 490-0095

[From Rick Marken (990410.1030 PDT)]

Tracy Harms (990409.1930 Pacific)--

Re: Rick Marken (990408.1000)

This was an unusually thought-provoking post. Thanks.

Bruce Gregory (990410.0641 EST)--

Hear, hear.

Thanks, guys! I was beginning to feel like one of those videos
that never gets rented:-)

Fred Nickols (990410.1045 EDT) --

A question here. Up to this point, what you are saying sounds
a lot like this: "Trying to interpret or reinterpret existing
conventional research in light of PCT is an exercise doomed to
failure because PCT must make some assumptions about facts not
in evidence, namely, the existence of controlled variables."
Do I have that right?

Yes.

Another way to look at it is like this: Every "response" to
a "stimulus" can be _interpreted_ as though it were an action
aimed at protecting some variable from disturbance. For example,
the response (downward movement) to dropping (stimulus) a ball
can be seen as the ball's effort to protect a variable (height
above the ground) from disturbance. To avoid spurious conclusions
(like this) about whether a particular variable is being controlled
by some system (like a ball or a goose), it is important to test to
determine whether or not that particular variable is actually
under control.

Another question. What kind of research would have to be (if it
hasn't already been) conducted to establish the existence of
controlled variables?

Research aimed at testing for controlled variables. Very little
such research has already been done. The way to do this kind
of research is described in B:CP, Ch. 16 (see especially the
section on the Coin Game; I highly recommend playing that Game
as a way to learn what PCT research involves and how it differs
from conventional research).

Would it have to be side-by-side PCT and conventional experiments
aimed at exploring and explaining the same phenomena?

No. I think behavioral scientists can just stop doing conventional
experiments now (such experiments are only appropriate to the study
of cause-effect systems anyway) and start doing tests for controlled
variables.

Or would it have to be something very, very different?

I think PCT research (testing for controlled variables) and
conventional research are _very_ different, although there are
superficial similarities. In both cases, for example, variables
are manipulated and other variables are monitored to determine
whether the manipulations had an effect. But in PCT research we:

1) start with a hypothesis about a variable being under control;

2) look for a _lack of effect_, rather than an effect, of the
manipulations on this variable;

3) try many different manipulations (different disturbances) to
prove to ourselves that the hypothetical controlled variable is,
indeed, under control (PCT experiments are not one-shot affairs);
and

4) revise our hypotheses about controlled variables if the results
of our experiments require it (testing for controlled variables is
like playing Jotto; you keep testing until you discover what "word"
(controlled variable) the subject has in mind (is controlling)).
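One way to picture those four steps is a minimal, hypothetical Python sketch. The "subject" here is simulated (it controls the SUM of two numbers, restoring it after every disturbance), and the candidate variables are invented for illustration; a real Test probes a live subject, not a function.

```python
# Hypothetical sketch of the Test. The simulated subject acts on
# "b" to keep a + b at 10; we test two hypotheses about what it
# is controlling and keep only the one showing a LACK of effect.

state = {"a": 3.0, "b": 7.0}

def subject_acts():
    """Simulated subject: adjusts 'b' to keep a + b at 10."""
    state["b"] += 10.0 - (state["a"] + state["b"])

candidates = {
    "sum": lambda: state["a"] + state["b"],   # hypothesis 1
    "a":   lambda: state["a"],                # hypothesis 2
}

def the_test(extract, disturbances, tol=1e-9):
    """Apply many disturbances; the variable passes only if every
    disturbance fails to move it away from its baseline value."""
    baseline = extract()
    for size in disturbances:
        state["a"] += size      # disturb the environment
        subject_acts()          # let the subject respond
        if abs(extract() - baseline) > tol:
            return False        # an effect: revise the hypothesis
    return True

results = {name: the_test(f, [1.0, -2.5, 4.0])
           for name, f in candidates.items()}
print(results)   # only "sum" survives the Test
```

Note that the IV-DV question ("did the manipulation have an effect?") is turned on its head: the controlled variable is the one the disturbances _fail_ to move.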

Again, I highly recommend that you play the Coin Game (or Jotto)
to get a feel for PCT research and to see how different it is from
conventional research. Jotto is the game where you try to guess
another person's "target" word by selecting "guess" words and
asking how many letters of the target word are contained in each
guess word. I think Jotto has the same kind of goal as the Test:
the goal is to determine the target word, not to find some
particular relationship between particular guess words and the
number of target letters they contain.
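For readers who haven't played Jotto, a hypothetical helper showing its scoring rule (counting shared letters with multiplicity, which is one common convention) may make the analogy concrete:

```python
# Hypothetical Jotto scorer: how many letters of the target word
# are contained in the guess word, counting repeats once per
# matching pair. Conventions vary between versions of the game.

from collections import Counter

def jotto_score(guess, target):
    """Number of letters shared between guess and target,
    respecting multiplicity."""
    shared = Counter(guess) & Counter(target)
    return sum(shared.values())

print(jotto_score("paint", "train"))   # a, i, n, t -> 4
```

As with the Test, each score constrains the hypotheses, and you keep probing until only one "target" survives.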

I ask for two reasons. First, if the game is indeed rigged, as
you argue in your post, any attempt to reinterpret past
conventional research would be doomed to failure and criticisms
of it from a PCT perspective would be pointless.

I think this is true. The results of conventional research should
be treated as a _starting point_ for PCT research rather than
as "facts" to be accounted for by PCT. For example, Tinbergen's
egg-retrieval data suggests some interesting hypotheses about
what geese control when they retrieve eggs. Tinbergen's data are
a good _start_ for some PCT research; same with the operant
conditioning data; it suggests some hypotheses about the variables
that rats and pigeons control; now it's time to start doing the
PCT research to determine what they _do_ control.

However, if it could be unequivocally (i.e., "scientifically")
established that controlled variables do indeed exist, then it
would be easy to assert that conventional research is and has
been flawed all along.

That's the point of my "Dancer..." paper. The idea that conventional
research has been flawed all along is an idea, up with which
conventional psychologists simply will not put. I have no idea why
they published that paper in a conventional research journal. But it
hasn't created any problems for conventional psychologists because
they have had no difficulty dealing with the issues brought up in
that paper: they simply ignore them;-)

I guess what I'm saying is that if your assertion about "rigged" is
correct, then why play at that table? Set up a new one.

Because it's no fun to play alone. Science (like most other
human endeavors, save masturbation) is a social enterprise (for
me anyway). Part of the fun of science (for me) is sharing the beauty
of the discoveries with others. Right now, there are precious few
others who are doing PCT research. I publish papers about PCT
in conventional journals in the hopes that others, who are
presumably also interested in understanding the behavior of
living systems, will see the beauty and join the party. Unfortunately
(but expectedly) my overtures to conventional psychologists to
join the PCT science "party" are treated more like a _threat_ than
an invitation.

Best

Lonesome Rick


[From Bill Powers (990410.1237 MDT)]

Rick Marken (990410.1030 PDT)--

I think Rick Marken is right about the rigging, but I also think the reason
for it is failure of psychologists to understand how science is done in
other more successful fields. What's missing from psychology (and allied
areas) is the discipline of mathematics, or even the desire for it.

What mathematics gives to us is a way to state our assumptions and our
methods of reasoning so precisely that our conclusions no longer depend on
what we want to be true. This is particularly clear in the field of
modeling and simulation. Once you've set up a simulation and started it
running, your private beliefs and wishes have no further effect on the
outcome. You've created an autonomous entity that runs by itself, behaving
because of its own organization and that of its environment. Even if you
have grossly misinterpreted the meaning of your own model, the simulation
behaves exactly as such a model must behave, and not as you believe it must
behave.

The simulation is far stricter than any human critic could be. A human
critic can share your mistaken beliefs and tell you, in error, that you are
right, or mistakenly disbelieve you and wrongly say you are wrong, or give
you the benefit of the doubt and tell you you're on the right track, keep
up the good work, when you are far down a blind alley. The simulation can
do none of these things. It behaves exactly as the organization you gave it
must behave, and it's strictly up to you to learn from it why it behaved
that way. It doesn't care one way or the other whether you understand what
it did. It neither reveals nor conceals what makes it work as it does. It
just works.
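A toy illustration of Bill's point, as a hypothetical Python sketch (the gain, reference, and disturbance schedule are arbitrary choices, not a model of any published experiment): once the loop below is specified, it counters the disturbance regardless of what the programmer believes it "should" do.

```python
# A minimal negative-feedback control loop. The simulation's
# behavior follows from its organization alone: apply a step
# disturbance and the output brings the perception back toward
# the reference, whether or not we expected it to.

def simulate(disturbance, steps=2000, reference=0.0, gain=50.0, dt=0.01):
    output = 0.0
    history = []
    for t in range(steps):
        perception = output + disturbance(t)   # input quantity plus disturbance
        error = reference - perception
        output += gain * error * dt            # integrate the error into the output
        history.append(perception)
    return history

# Step disturbance of 5.0 starting at t = 500: the perception is
# pushed away from the reference and then brought back near it.
trace = simulate(lambda t: 5.0 if t >= 500 else 0.0)
print(round(trace[499], 3), round(trace[500], 3), round(trace[-1], 3))
# -> 0.0 5.0 0.0
```

Nothing about our wishes enters the loop: the organization we typed in, and that organization alone, determines the trace.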

In psychology, where mathematics and simulation play vanishingly small
roles, reasoning depends heavily on informal methods and intuition. If you
get a feeling of understanding something, of seeing a pattern or grasping a
causal relationship, that is assumed to be an "insight," and to be correct
just because you got such a feeling of rightness from it. The problem is
that this feeling of rightness has no understandable foundation. If someone
else doesn't get it, you can't transmit it to the other person. It is
purely subjective, and it occurs, or fails to occur, for reasons nobody can
spell out. The people who get the same feeling of rightness about some idea
band together and form a school of thought, but at bottom they can't
justify themselves any more than a religious cult can. People who get a
feeling of rightness about a different idea simply start a different cult.

People who use simulations to test their ideas eventually take the attitude
that they don't really care whether the ideas are right or wrong. They
learn that anticipating rightness or wrongness is utterly pointless. If the
simulation behaves as expected that is gratifying, but if it doesn't that
is edifying and indeed promises more entertainment than if one had
anticipated correctly. What wouldn't a physicist give to find that his
prediction that an object would fall to the ground was disproven? Alpha
Centauri, here we come!

But to take this attitude toward right and wrong predictions, one must have
a way of finding out if predictions were right or wrong, a way that doesn't
depend on persuasion, emotional pressure, looking at things from just the
right "perspective," insight, or being strongly convinced. Mathematics and
simulation, coupled to experiment, are that way.

Best,

Bill P.

[From Bruce Gregory (990410.1730)]

Rick Marken (990410.1030 PDT)

I think you may be looking in the wrong place for people conducting the
Test. Even as we speak, NATO is convincingly demonstrating that Milosevic is
_not_ controlling his perception of "being bombed." I suspect he is
controlling the perception "being in power". I hope NATO is correct that
Yeltsin is not controlling a perception of "protector of the Slavs". They
are Testing that conjecture too.

Bruce Gregory

[From Bruce Gregory (990410.2110 EDT)]

Rick Marken (990410.1730 PDT)

I know you're just trying to be witty and I hate to (once again)
reveal my deep lack of a sense of humor. But I don't want readers
who are as humor-impaired as I am to go away thinking that applying
a stimulus (bombing) and looking for a response (quitting) under
the assumption that the stimulus is a controlled variable is an
example of the Test. It's one _step_ in the Test (it does reveal
that Milosevic is _not_ controlling his perception of his country
"being bombed", at least not with very high gain) but it's a long
way from revealing much about what variables Milosevic _is_
controlling, which is what you would want to find out from doing
the Test.

I agree, it's only the first step. The next step is to see if he is
controlling the perception "not waging ground war".

Bruce Gregory

[From Rick Marken (990410.1730 PDT)]

Bruce Gregory (990410.1730) --

I think you may be looking in the wrong place for people
conducting the Test. Even as we speak, NATO is convincingly
demonstrating that Milosevic is _not_ controlling his perception
of "being bombed."

This is most emphatically _not_ an example of the Test. It is
no more an example of the Test than is any psychological
experiment in which an IV (like bombing vs not bombing) is
manipulated and a DV (like saying or not saying "uncle") is
measured.

I know you're just trying to be witty and I hate to (once again)
reveal my deep lack of a sense of humor. But I don't want readers
who are as humor-impaired as I am to go away thinking that applying
a stimulus (bombing) and looking for a response (quitting) under
the assumption that the stimulus is a controlled variable is an
example of the Test. It's one _step_ in the Test (it does reveal
that Milosevic is _not_ controlling his perception of his country
"being bombed", at least not with very high gain) but it's a long
way from revealing much about what variables Milosevic _is_
controlling, which is what you would want to find out from doing
the Test.

Best

Rick
