[From Bruce Abbott (950102.1615 EST)]
My posts seem to be getting "stuck" in our SMTP server. The last one
languished there for a day or two. Ah, well...
Bill Powers (941231.1235 MST)
Bruce Abbott (941230.1700 EST)
>The problem with the standard statistical analysis has nothing to do
>with the analysis per se. It's in the assumed organization of the system
>that is analyzed. What your analysis would reveal would be a high
>correlation between the controlled variable of a control system and the
>setting of its reference signal. But a person knowing nothing of how the
>system in the aircraft works might well think that the joystick controls
>a stimulus which affects something in the aircraft that causes it to
>produce a response in the form of a control surface angle.
Yes, I agree.
>>Isn't it interesting how much I was able to learn about the system
>>using that outdated methodology described in my text?
>You could learn about observable relationships, but the standard
>statistical methods can't tell you that your underlying model is right.
>I don't believe that the statistical approach would have suggested that
>the control surfaces would push back against your attempts to deflect
>them.
Perhaps not, but that fact is easy to demonstrate empirically using ordinary
"IV-DV" methods. I could push or pull the control surfaces with various
amounts of force (IV) and measure the counterforces generated by the servo
(DV). In fact, this is all that "the Test" amounts to, isn't it:
applying a disturbance to some variable and observing the system's response?
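The push-pull experiment just described can be sketched numerically. The
sketch below is purely illustrative--the servo gain, stiffness, and noise
values are arbitrary assumptions, not a model of any particular aircraft
system. A proportional servo resists an applied force; correlating the
applied force (IV) with the measured counterforce (DV) then yields the kind
of high correlation at issue here.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def measured_counterforce(applied, gain=200.0, stiffness=1.0, noise_sd=0.5):
    """Steady-state counterforce of a proportional servo (hypothetical
    gain/stiffness), plus Gaussian measurement noise."""
    # Force balance: stiffness * x = applied - counterforce,
    # where counterforce = gain * x (the servo resists deflection x).
    x = applied / (stiffness + gain)
    return gain * x + random.gauss(0.0, noise_sd)

random.seed(1)
forces = [float(f) for f in range(1, 21)]              # IV: applied force
counters = [measured_counterforce(f) for f in forces]  # DV: counterforce
print(round(pearson_r(forces, counters), 3))           # in the high nineties
```

Because the servo nearly cancels the disturbance at steady state, the
IV-DV correlation is essentially perfect apart from measurement noise.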
>Anyway, the correlations you would get in this experiment would be in
>the high nineties, the kind we're interested in in PCT. I don't know
>your opinion on the subject of correlations in the eighties or lower, or
>on the subject of using population measures to predict the behavior of
>individuals.
In my methods text I recount how, when inferential statistical tests were
first making their way into experimental psychology, journals sometimes
published research articles that contained elaborate ANOVA tables with all the
sources of variance, sums of squares, degrees of freedom, and so on, but which
failed to contain any mention of the actual data--not even the means were
presented. The story is a warning against believing that the results of the
inferential analysis are the important result of the research. I also warn
students that relationships shown by group averages may bear little
resemblance to those shown by individual subjects (Sidman, 1960), and devote a
whole chapter to single-subject methods.
My view of research methods and of statistics is that they are tools. As with
all tools, when used correctly for the purpose for which they were designed,
they can be helpful. When used incorrectly, for the wrong job, they can be
worse than useless. I would not argue that you remove the screwdriver from
your tool-kit just because someone might try to use it as a chisel.
Pearson correlation is one of those tools that too many researchers use
incorrectly, but which can be useful if one is aware of its limitations. It
is fine as a summary index of relationship when the relationship being
summarized is linear, when there are enough independent points (any two
distinct points will give |r| = 1.0), when there are no serious outliers in
the data, and when the ranges of the variables involved are not overly
restricted (r basically compares the scatter above and below the best-fitting
straight line to the scatter ALONG the line; the more elongated the oval, the
higher the r). I would rather work with scatterplots, residual plots, and
measures of variance accounted for (of which r-squared is one) than with
Pearson r, but as a compact summary of the relationships obtained in PCT
experiments it seems perfectly suited.
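A couple of these caveats are easy to demonstrate with a toy computation;
the numbers below are invented purely for illustration.

```python
def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# 1. Any two distinct points lie exactly on a straight line,
#    so |r| = 1.0 no matter what the values are.
print(pearson_r([1.0, 2.0], [5.0, 3.0]))       # -1.0

# 2. One extreme outlier can manufacture a strong correlation
#    from points that are otherwise nearly unrelated.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.0, 5.0, 1.0, 4.0, 2.0]
r_before = pearson_r(x, y)                     # weak
r_after = pearson_r(x + [100.0], y + [90.0])   # strong
print(round(r_before, 2), round(r_after, 2))

# 3. r-squared as a measure of "variance accounted for."
print(round(r_after ** 2, 2))
```

Plotting the second data set makes the problem obvious at a glance, which
is exactly why scatterplots are preferable to a bare r.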
>Anyway, the correlations you would get in this experiment would be in
>the high nineties, the kind we're interested in in PCT. I don't know
>your opinion on the subject of correlations in the eighties or lower...
It depends on the context. As a student of the experimental analysis of
behavior, I was taught that sloppy relationships indicate a lack of sufficient
experimental control; that such results should precipitate an investigation
aimed at identifying the uncontrolled sources of variation and bringing them
under experimental control, and that such a search often identifies important
variables which themselves may become the subject of future research. The
high correlations achieved between disturbance and response in the PCT model
are impressive precisely because they demonstrate this kind of experimental
"control" (in the EAB sense). In this context, one would want the
correlations to be very high--as close to 1.0 as possible. In other contexts,
moderate correlations may be useful.
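The disturbance-response correlation mentioned above can be illustrated
with a minimal simulation of a PCT-style control loop. The loop gain, time
step, and random-walk disturbance are arbitrary choices for illustration,
not parameters from any published model: the output integrates the error
between a reference and the controlled variable, so with good control the
output mirrors the disturbance almost perfectly.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / (sum((x - mx) ** 2 for x in xs)
                  * sum((y - my) ** 2 for y in ys)) ** 0.5

random.seed(0)
gain, dt, ref = 50.0, 0.01, 0.0
out, d = 0.0, 0.0
d_hist, o_hist = [], []

for _ in range(5000):
    d += random.gauss(0.0, 0.2)    # slowly drifting disturbance (random walk)
    cv = out + d                   # controlled variable: output plus disturbance
    out += gain * (ref - cv) * dt  # integrating output driven by the error
    d_hist.append(d)
    o_hist.append(out)

print(round(pearson_r(d_hist, o_hist), 3))  # near -1: output opposes disturbance
```

The near-perfect negative correlation reflects the tightness of control,
not a stimulus-response law--which is the point of the paragraph above.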
But let's return to the context of my original post. What got me started
along this line was Rick's assertion that the methods outlined in my text are
useless for studying "living control systems." I agree with you that a
correct model is necessary if certain data are to be interpreted correctly,
and that different methods of analysis (from those that have been applied in
the past) are required to properly study some aspects of the system. Yet
there are many, many questions about how humans and other animals function for
which these stock-in-trade methods work well, and I strongly disagree with
Rick's assertion that what traditional research methods texts teach must be
seen as total nonsense after one adopts PCT. It is one thing to argue that
the wrong tool has been chosen for the job at hand, but quite another to argue
that the same tool must be used on every job. Even "living control systems"
offer plenty of "jobs" for which those other tools are well suited.
Regards,
Bruce