Back in the USSA

[From Rick Marken (920712.1200)]

Well, I'm back. Had a nice time in London; hardly thought about
control at all -- just did it, as best we could given the
peculiar reference levels over there.

I found out why Psych Review didn't even send my "Blind men"
paper out for review. According to the editor it was because:

"It would need to speak more directly to current psychological
issues and theorizing. One would need to see more clearly a connection
between what you are talking about and the issues that dominate
psychological theorizing today."

I guess the nature of the phenomenon they are theorizing about is
not a current issue for psychologists. I think that the most direct
connection between "current psychological issues and theorizing"
and control theory is that from the latter perspective the
former are complex rationalizations of non-existent phenomena.
How do you tell psychologists that their theories explaining the
effects of factors a, b, or c on variables x, y, or z are a waste
of time because there are no such effects -- just statistical noise?

People sometimes criticize control theory for not being based on a
large enough data base. Most PCT data comes from simple tracking
experiments; we have looked at the control of many different
types of variables -- but it seems like the amount of PCT data
is small compared to the amount of data piled up in the psychology journals.
I think that this is a misperception, however, because 90+% of
the data in the journals is basically noise. The number of real,
worthwhile facts in the journals is probably far smaller than what
is already part of the PCT literature. In fact, what real data
exists in the psych literature has already been plucked up by
PCT types. This includes some of the operant conditioning data
and existing tracking data. There is a lot of suggestive data
in the psych literature -- but it will continue to be little
more than suggestive until someone does the studies correctly --
so that the relationships between variables are consistently
near-perfect (less than 3% error variance, always).
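
To put a number like "less than 3% error variance" in perspective
(a back-of-the-envelope sketch of my own in Python, purely
illustrative, not taken from any particular study): error variance
here just means the share of variance a prediction leaves
unexplained, 1 - r^2, so the r = .3 to .5 "effects" that fill the
journals leave 75-91% of the variance unexplained, while the PCT
criterion demands correlations of about .985 or better.

# "Error variance" as the share of variance left unexplained, 1 - r^2.
def error_variance(r):
    """Proportion of variance a correlation of r leaves unexplained."""
    return 1.0 - r ** 2

for r in (0.30, 0.50, 0.80, 0.985):
    print(f"r = {r:5.3f} -> error variance = {error_variance(r):5.1%}")

# r = 0.300 -> error variance = 91.0%
# r = 0.500 -> error variance = 75.0%
# r = 0.800 -> error variance = 36.0%
# r = 0.985 -> error variance =  3.0%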

I think it is interesting that the only data in the psych literature
that meets PCT standards of quality (in terms of error variance)
is data obtained in situations where the subject is clearly
controlling a variable -- as in the tracking and operant tasks
(and some psychophysical tasks, especially those where the subject
is controlling a relationship between variables). In fact, that
may be one way of pressing the case for the value of the PCT
perspective (assuming that you accept the idea that a science should
be based on high quality data. I have found that this is NOT a
universally accepted idea, especially in the social sciences; in
fact, I have actually run into people who found the results of some
of my studies of coordination to be suspect (or uninteresting)
precisely because they were NOT statistical; the fact that the
control model accounted for 99% of the variance in behavior also
made the results seem "trivial" to these people. PCT has to
deal with the fact that many social and behavioral scientists
think that data are not interesting unless there is a sizable
amount of error variance.) Only by viewing behavior as control
and setting up situations where a person can control a variable
can one get the kind of quality data one finds in "real" sciences
like physics and chemistry. The fact that the IV-DV approach to
research gives such crummy results suggests that it is based on the
wrong model. Maybe it's time for that discussion of statistics now?
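
For anyone who wants to see how a fit of that quality can arise,
here is a toy sketch in Python (my own made-up gains, noise level,
and disturbance -- not the actual model or data from the coordination
studies): a simulated "subject" keeps a disturbed cursor near zero by
integrating the error, and a noiseless copy of the same control
organization, run against the same disturbance, accounts for
essentially all of the variance in the simulated handle movements.

import random

random.seed(1)

def smoothed_disturbance(n, smooth=0.98):
    """Slowly varying random disturbance, a stand-in for a tracking study."""
    d, x = [], 0.0
    for _ in range(n):
        x = smooth * x + random.gauss(0, 1)
        d.append(x)
    return d

def run_control(disturbance, gain=8.0, dt=0.01, noise=0.0):
    """Integrating control of cursor = handle + disturbance, reference = 0."""
    handle, out = [], 0.0
    for d in disturbance:
        error = 0.0 - (out + d)            # reference minus perceived cursor
        out += gain * error * dt           # output integrates the error
        if noise:
            out += random.gauss(0, noise)  # "subject" motor noise
        handle.append(out)
    return handle

n = 6000
dist = smoothed_disturbance(n)
subject = run_control(dist, noise=0.1)  # pretend subject: same loop plus noise
model = run_control(dist)               # deterministic model, same disturbance

mean_s = sum(subject) / n
var_s = sum((s - mean_s) ** 2 for s in subject) / n
var_e = sum((s - m) ** 2 for s, m in zip(subject, model)) / n
print(f"variance accounted for: {1 - var_e / var_s:.1%}")  # well over 99% here

The point of the sketch is only that when behavior really is the
control of a perceived variable, a model with the right organization
and the same disturbance reproduces it almost exactly; there is
nothing left over for "error variance" to hide in.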

Best regards

Rick (Obviously not mellowed by Europe) Marken


**************************************************************

Richard S. Marken
The Aerospace Corporation
USMail: 10459 Holman Ave, Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)