# From arrogance to pretension, yet

[From Bill Powers (930329.1230 MST)]

Chuck Tucker (930329) --

You can translate all the words into other symbols (sy=symbols)
and put ='s, +'s, or any other sign you care to between them
and they still turn out for me to be words; I still have to
read them to have some idea of what you might be stating;
actually I find your use of such symbols a pretension on your
part to appear to be systematic and scientific when your
research does not have such features for me.

Hmm. If they're a pretension on Rick's part, they must be a
pretension on my part too. Judging from referees' comments on
various papers, you probably aren't alone in dismissing the
simple little equations as just a pretentious way of saying the
same thing you could say in words. There's certainly enough of
that sort of pretension in the literature: (m + o)^2 meaning
something like motive and opportunity considered twice, and so
on.

But the equations are real equations in this case. When the
position of the cursor c is said to be equal to the disturbance d
plus the output o, and this is expressed as c = d + o, the
meaning is just what the algebra says. It is not that the
presence of a disturbance sort of affects the cursor, and so does
the output. It is that the cursor position in pixels measured
from the center of the screen is numerically equal to the measure
of the handle position plus the numerical value of the
disturbance. If the disturbance is 100 units and the handle
output is -99 units, the cursor will be exactly at the position
1, not 2 and not 0. The accuracies of the apparatus and the
computer are many times greater than needed to make this true
within one pixel on the screen. Saying that c = o + d pins down
the meaning exactly; there are no ordinary-language words that
can do this without adding many, many sentences -- and sentences
explaining the sentences. As above.
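The arithmetic can be shown in a few lines of code. This is only an illustrative sketch, not the experiment's actual program; the variable names simply follow the equation in the text.

```python
# Cursor position in a compensatory tracking task: c = d + o.
# Units are pixels measured from the center of the screen,
# as in the numerical example above.
d = 100    # disturbance value on this iteration
o = -99    # handle (output) value on this iteration
c = d + o  # the cursor position is exactly their sum
print(c)   # 1 -- not 2 and not 0
```

The equation leaves no slack: given the two numbers, the cursor position is determined to the pixel.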

If you just say in words that the disturbance and the output
affect the cursor, you're leaving all kinds of slop in the
meaning. Words are blunt instruments for conveying quantitative
meanings. There is no pretension in using equations that actually
mean something. The symbols are just words from a language with
which you're not familiar, indicating meanings with a precision
that you're not accustomed to.

It may also help you to know that ordinarily, the disturbances I
use in my experiments are generated by a random-number generator
and smoothed by a fixed algorithm. For convenience I usually
record 10 or 20 tables of disturbances for later use, but often I
just generate and record them as the experiment proceeds. In any
case, I never know beforehand what is in those tables unless I
write a plotting program to check on the range and variability of
the disturbances, which I do only when adjusting the smoothing to
affect the general difficulty of the task. And then I usually
regenerate the whole set. The disturbances are a complete
surprise to me and to the simulation, and when they're being
generated in real time they have a different pattern on every
experimental run. So it makes no difference at all who the
subject is -- myself or a neophyte. There's no way to cheat.
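A minimal sketch of that procedure, assuming uniform random numbers passed through three successive moving-average filters. The window size, amplitude range, and choice of moving-average smoothing are illustrative assumptions, not the parameters of the original programs:

```python
import random

def make_disturbance(n=1800, stages=3, window=30, seed=None):
    """Generate one smoothed random disturbance table.

    A raw stream of uniform random numbers is passed through
    `stages` successive moving-average filters. Widening the
    window (or adding stages) lowers the bandwidth of the
    disturbance, making the tracking task easier.
    """
    rng = random.Random(seed)
    values = [rng.uniform(-100.0, 100.0) for _ in range(n)]
    for _ in range(stages):
        smoothed = []
        acc = 0.0
        for i, v in enumerate(values):
            acc += v
            if i >= window:
                acc -= values[i - window]  # drop the oldest sample
            smoothed.append(acc / min(i + 1, window))
        values = smoothed
    return values

# With seed=None the pattern is different on every run, so even
# the person who wrote the program cannot know it in advance.
table = make_disturbance()
```

Adjusting `window` (or the number of stages) is the knob for the general difficulty of the task; nothing in the procedure lets the experimenter foresee the values.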

As to instructions: about the minimum instructions for a tracking
experiment are "Keep that (point to the cursor on the screen)
between those marks (point to the target marks) using this (point
to the handle)." The subject will immediately begin tracking when
the experiment begins. You can convey this information in a
hundred ways, and after 10 seconds of tracking you can't tell
which instructions were used. Of course you can use
incomprehensible instructions, but even then the subject is
likely to say, shortly after the run begins, "Oh, I see -- is
this all you meant?"

I'm not recommending poor instructions. But I think that you
sometimes go overboard in instructing people what to do, reading
exact wordings from cards, as if this somehow guaranteed that
each person would get the same meanings from them -- or as if any
slight deviation would totally screw up the experiment. Perhaps
this is a holdover from the kinds of experiments you're used to
in sociology, where the results are so ambiguous to begin with
that any variation in the instructions could spoil the results
altogether. Maybe -- just guessing -- this is why having the
experimenter be a subject seems to be such a no-no for you.
It's probably true in normal-science experiments that if the
experimenter knows what's going on, he or she can fudge the
results. But that's not true in control-system experiments. You
can't control any better than you can control, and the
disturbance is simply not knowable, even to the person who wrote
the program. Unless you're some sort of eerie prodigy who can
predict the next 1800 outputs from a random number generator and
mentally run them through a 3-stage smoothing filter.

I do have to stand up for Rick when you say

YES, this study (and all others that I have seen you do) does
bother me. It does so because it is so poorly done that I
would not consider it worthy of inclusion in a list of
scientific work. The claims that you make for such work are
usually not supported for me by your study.

Rick's claims are always exactly upheld by his data in the
published experiments. The only reason you can't see this is that
you can't read the algebra or the program steps, so you have to
rely on the verbal approximations to make your judgments. That
leaves too much room for you to bend his meanings to fit your
preconceptions, and where he makes statements that conflict with
what you believe, to see plenty of room for him to be wrong. This
is a general problem with trying to convey the results of
modeling to people who are used to working with words alone. It's
exacerbated by the fact that the models we use, and the programs,
are so simple-looking. It's hard to believe that the author isn't
giving them a lot of interpretive help in order to make them fit
the data so well. Most people who are used to normal experimental
results in the behavioral sciences just can't believe that such
simple expressions could achieve such accuracy of prediction. And
they haven't the experience with quantitative modeling to see
that there are no tricks or hidden assumptions.

I think you have to blame your reactions to Rick's experiments on
your own lack of experience with this sort of modeling process.
You're putting form before substance.

I do agree with you that Rick gets a little wild in his claims on
the net. But so would you, for a while, if you understood what he
understands. It will pass.

------------------------------------------------------------
Best,

Bill P.