Tests & payoffs; completeness; reorganization

[From Bill Powers (930106.1515)]

Rick Marken (930106.0830) --

Well put, about psychology being a study of group statistics
while believing itself to be about individuals.

--------------------------------------------------------------
Martin Taylor (930106.1310) --

RE: taking tests

In a less extreme vein, I think what I'm getting at is that one
should always pay attention to the payoff matrix when there is a
choice about taking tests.

One obvious application is taking drugs that are thought to have
beneficial effects in treatment of some condition. I've seen,
more than once, statistics about the "benefits" of drugs that
proudly cite very high-reliability findings that 10 to 20 percent
of the patients show a positive response to the drug. This will
surely have mass-statistical benefits over a population, but in
any specific case it is highly unlikely that such a drug will be
of any help. Furthermore, all drugs have unpleasant and possibly
dangerous side-effects, and most of them are very expensive.
Spending too much money on drugs out of a limited budget can also
have unpleasant and even dangerous side-effects -- for example,
if you skimp on food, or fail to keep up your medical insurance
payments (this is the US, not Canada). On top of this, you have
to add the uncertainty of diagnosis, which can easily be wrong
about which population of potential patients you belong to, and
hence wrong about which drug is indicated.
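
To make the payoff arithmetic concrete, here is a little sketch
(in Python; every number in it is invented for illustration, not
taken from any study):

# Hypothetical payoff calculation for the drug example above. All
# numbers are invented for illustration, not taken from any study.

def expected_net_payoff(p_right_diagnosis, p_response, benefit,
                        side_effect_cost, dollar_cost):
    # The drug helps only if the diagnosis is right AND you happen
    # to be one of the responders; the side effects and the bill
    # arrive in every case.
    p_helped = p_right_diagnosis * p_response
    return p_helped * benefit - side_effect_cost - dollar_cost

# 15 percent responders, 80 percent diagnostic accuracy, modest costs:
ev = expected_net_payoff(0.80, 0.15, benefit=100,
                         side_effect_cost=10, dollar_cost=8)
print("expected net payoff: %+.1f" % ev)   # 0.80*0.15*100 - 18 = -6.0

The mass-statistical benefit is real, but the expected value for
the one individual sitting in the examining room can easily come
out negative.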

When you take a test to determine your placement in a job or your
standing in school, the chances are very great that you will be
classified significantly too high or too low. You then end up
bored with something that is too simple to be a challenge, or
forced to fake your way through something that is beyond your
level of competence. Either way, your chances of failure are
increased.

I could go on and on. Any time that something important in your
life depends on the outcome of a test based on statistical data,
you should weigh the risks inherent in being misjudged by the
test, and consider that the likelihood of being misjudged is one
of those facts that is not easily obtainable -- because it is so
high in any individual case. To be specific, yours.
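
For the placement-test case, a quick simulation shows how large
the risk of being misjudged can be. The true standing, the band
limits, and the size of the test error below are all assumptions,
chosen only to illustrate:

# How often does a noisy placement test put one person in the wrong
# band? True standing, band limits, and test error are assumptions.
import random

def p_misplaced(true_score, test_sd, lo_cut, hi_cut, trials=100000):
    wrong = 0
    for _ in range(trials):
        observed = random.gauss(true_score, test_sd)  # one administration
        if not (lo_cut <= observed < hi_cut):
            wrong += 1
    return wrong / float(trials)

# True standing 100, correct band 95-105, standard error of 7 points:
print("P(misplaced) = %.2f" % p_misplaced(100, 7, 95, 105))  # about 0.48

Nearly a coin flip, for a perfectly ordinary amount of test noise.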
-------------------------------------------------------------
David Goldstein (930106) --

You announced that your mail address should be changed to include
"Rowan" instead of "Glassboro", but in fact your return address
in mail to me reads

<From: VAXF::IN%"goldstein@saturn.glassboro.edu" 6-JAN-1993 11:43:58.33

Mail addresses are registered with a national network manager
somewhere. It seems very unlikely to me that the address of your
computer system would be changed just because the name of the
college was changed. This would entail getting that information
to everyone in the world who sends mail to your mainframe
location. My attempt to send a message to you containing "rowan"
was rejected before even being sent: No Such Host. But my mail
commenting on this problem, sent to "glassboro", obviously got
through!
--------------------------------------------------------------
Bruce Nevin (930106.1254) --

Nice round-up on theories and solutions. I think there's another
dimension to this, though. Theories are supposed to be
explanations of what we observe. While they help to determine
what we observe, there is also the matter of what constitutes an
explanation; this is not necessarily theory-dependent.

I think that people are easily lulled by the sound of words into
accepting as general explanations statements that really don't
explain anything, under anyone's theory. Suppose someone is very
rude to you, but a friend tells you "Don't pay any attention --
he's just lost his job." That certainly sounds like an
explanation. But when you ponder it a while, you realize that it
doesn't work both ways -- that you still don't expect everyone
who has just lost a job to be rude to you. You might expect them
to be distracted, sad, angry, subdued, worried, inattentive, or
perhaps rude -- but you don't expect them all to be rude. The
so-called explanation really has some huge holes in it, a lot of
unspoken assumptions.

I've been debating points like this with Dr. Diabolo, via Greg
Williams as the channel. It's easy to offer an explanation of
tracking behavior by saying that the person is stimulated by the
movements of the cursor in such a way as to move the cursor
toward the target. This certainly has the ring of an explanatory
statement. But an explanatory statement also contains, by strong
implication, a prediction: if a cursor moves in such-and-such a
way, a person will be stimulated to move it toward a target,
whereas if it moves in some other way the person will not be so
stimulated. As soon as you put it that way, you see the hole in
the explanation. Explanations like that are really just
descriptions of some outcome that has already happened. They
don't do you any good in predicting outcomes that have yet to
happen.

This isn't so much a matter of the correctness of the explanation
as of its completeness. When you hear a complete explanation, you
don't see any holes in it: it allows for only one prediction to
be made from the antecedent conditions and the explanation. Given
similar conditions, and remembering the explanation given for
them, you can visualize what outcome is to be expected. And you
can visualize it in enough detail to know when the actual outcome
DOESN'T fit the prediction.

Learning to hear completeness in an explanation is like learning
to hear it when someone is giving you directions. If someone says
"Keep going down this road and take a right at the schoolhouse,"
you may not realize that these directions are incomplete until
you get to the schoolhouse and find that there is a road both
immediately before and immediately after the schoolhouse. If the
person said "Turn at the schoolhouse and go three miles...",
however, the experienced direction-taker (and explanation-giver)
will immediately call a halt and say "turn which way?" It's a
matter of following the directions, or the explanation, in
imagination, constructing a mental picture according to the story
that's being told. When something's missing or incongruent, the
mental model refuses to work -- one doesn't know what to imagine.
Which way should I imagine that I'm turning? To do this reliably,
of course, one must be alert to the temptation of putting things
into the mental picture that the directions didn't actually
specify.

One reason that the PCT models for tracking behavior are so
convincing is that they don't leave out any steps. There may be
some blurring of intermediate steps, as when we say that the
perception of a relationship occurs without being able to say how
that perception is constructed from visual information, but the
sense is there that no vital step has been omitted. The
correctness of the model isn't what does the trick; after all,
the predictions are off by 5% on the average, which means that
now and then there is a movement of the real cursor that is
totally unpredicted by the model (as when the participant
sneezes). What matters the most is the sense of completeness.

A complete model always predicts some specific outcome which
can't be confused with a different outcome, should the prediction
be wrong. The model for tracking behavior doesn't just predict
that there will be handle "movements." It says that at time t,
the handle will be _here_, at time t+0.02 sec, _here_, and so on
point by point through the whole run. There is absolutely no
equivocation in the prediction. We can, in fact, measure exactly
how far off the model is for every single data point. This is
true whether the model predicts behavior very closely or is
wildly wrong. The model presents an explanation of the behavior
that we can recognize as COMPLETE. When the explanation or model
is complete, we can tell when it is wrong, and not be concerned
whether a different interpretation might make it right. If it's
wrong it's wrong, and we can fix it. If the wrongness depends on
interpretation, there's no way to tell what needs fixing.
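
For anyone who hasn't run one of these models, here is a
bare-bones sketch of the kind of point-by-point prediction I
mean. The gain, sampling interval, and disturbance below are
stand-ins, and the "recorded" trace is simulated; a real analysis
would substitute the participant's actual handle positions:

# Bare-bones PCT tracking model: predict the handle position at every
# 0.02-second step. Gain and disturbance are stand-ins; "recorded" is
# simulated here, where a real analysis would use the participant's
# data.
import math, random

def run_model(disturbance, reference=0.0, gain=20.0, dt=0.02):
    handle, predicted = 0.0, []
    for d in disturbance:
        error = reference - (handle + d)  # reference minus perceived cursor
        handle += gain * error * dt       # output integrates the error
        predicted.append(handle)
    return predicted

def rms(model, data):
    # the per-point miss, summarized over the whole run
    return math.sqrt(sum((m - x) ** 2
                         for m, x in zip(model, data)) / len(data))

random.seed(1)
disturbance, v = [], 0.0
for _ in range(3000):                      # 60 seconds at 0.02 s per sample
    v = 0.99 * v + random.gauss(0.0, 0.05) # smoothed random disturbance
    disturbance.append(v)

predicted = run_model(disturbance)
recorded = [p + random.gauss(0.0, 0.02) for p in predicted]  # stand-in data
print("RMS miss over %d points: %.3f"
      % (len(predicted), rms(predicted, recorded)))

With the recorded trace in hand, the miss can be computed for
every single sample, which is what lets you see exactly when and
where the model fails.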

My constant objection to theories or explanations in conventional
behavioral sciences is that they skip over vital places where I
don't know what to imagine as the story unfolds. The skips aren't
marked, as by saying "there's a function here but all we know is
its output." They're just left blank. Consider reinforcements.
Reinforcements, which occur in the sensory world of an organism,
are said to influence the probability of a behavior. Between the
reinforcement and the behavior, however, there is a huge nothing.
The "probability" is a fuzzy cloud with numbers in it floating somewhere
unspecified, unconnected either to the pellet of food
or the paw pressing the bar. This doesn't even begin to qualify
as an explanation in my mind.

So. I think that one can detect incompleteness without being for
or against any theoretical system.
---------------------------------------------------------------
Martin Taylor (930106.1530) --

Reading your fine post on reorganization, I had a little AHA as
some concepts fell into place. What we need to propose, I think,
is (as you said) that reorganization has to be based on
information that is actually available and pertinent. That's the
only way it can work. The nature of this information depends on
what is being reorganized and why reorganization is necessary. If
there's a local problem in a control system, and information is
available about the nature of the problem, reorganization can be
based on that local information and can act locally. So if the
error signal is chronically large, it's obvious that something
about the control system is likely to need trimming up, and the
machinery for doing that should be locally available. This isn't
100% infallible, but reorganization only has to work most of the
time. All this is in support of your proposal about localized
reorganization.
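
In code, the local case might look like this sketch: a loop whose
own time-averaged error signal, when chronically large, trims the
loop's own gain. The trimming rule and all the constants are
illustrative assumptions, not a claim about the real machinery:

# Sketch of the local case: a control loop whose own time-averaged
# error signal, when chronically large, trims its own gain. All the
# constants and the trimming rule are illustrative assumptions.
import random

random.seed(3)
dt, gain, handle, d = 0.02, 2.0, 0.0, 0.0   # gain starts far too low
avg_sq_err = 0.0

for t in range(30000):
    d = 0.99 * d + random.gauss(0.0, 0.05)  # slowly varying disturbance
    error = 0.0 - (handle + d)              # the loop's own error signal
    handle += gain * error * dt             # ordinary control action
    avg_sq_err = 0.999 * avg_sq_err + 0.001 * error * error
    if avg_sq_err > 0.005:                  # chronic error: trim locally
        gain = min(gain * 1.0005, 80.0)     # nudge gain up, capped so the
                                            # discrete loop stays stable
    # (a fuller version would also trim the gain back down if the
    # loop started to oscillate)

print("gain settled near %.1f, mean squared error %.4f"
      % (gain, avg_sq_err))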

On the other hand, if the problem is, say, indigestion
resulting from eating berries of a certain crimson color, the
problem in the stomach can't be corrected by reorganizing the
stomach; it has to be corrected by reorganizing the food-
identifying and ingesting systems, provided it isn't too late. So
there will necessarily be many instances of the kind of
reorganization I have been visualizing.

I still want to hang on to the principle that the intrinsic
reference levels for reorganizing systems must be inherited, and
the controlled variables involved must be definable without
reference to the meanings of signals in the hierarchy. We can
have error signals in general as intrinsic variables which should
remain less than some amount, but we can't have specific
intrinsic errors like the car not being where it is supposed to
be in its lane. The reorganizing systems have to function from
the very beginning, because they are responsible for the
construction of the hierarchy. They must always work in ignorance
of what kind of error is being corrected in terms of the external
world.
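
A sketch of the blind version, using the E. coli method of random
reorganization: keep changing parameters in the same direction
while the intrinsic error is falling, and tumble to a new random
direction when it rises. The quadratic error function is a
hypothetical stand-in for whatever the intrinsic variables really
are:

# Blind reorganization, E. coli style: the reorganizer sees only the
# SIZE of an intrinsic error, never its meaning. The quadratic error
# function is a hypothetical stand-in for the real intrinsic variables.
import random

random.seed(2)

def intrinsic_error(params):
    good = (3.0, -1.5)      # "healthy" settings, unknown to the reorganizer
    return sum((p - g) ** 2 for p, g in zip(params, good))

params = [0.0, 0.0]
step = [random.uniform(-0.1, 0.1) for _ in params]
prev = intrinsic_error(params)

for _ in range(2000):
    params = [p + s for p, s in zip(params, step)]  # keep drifting
    err = intrinsic_error(params)
    if err >= prev:                                 # got worse: tumble
        step = [random.uniform(-0.1, 0.1) for _ in params]
    prev = err

print("params ~ (%.2f, %.2f), residual error %.3f"
      % (params[0], params[1], prev))

The reorganizer never learns what the parameters do in the
external world; it only senses whether its one intrinsic error is
shrinking or growing.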

RE: the Zeigarnik Effect. We really, really have to do some
experimental work on attention and control. Attention has a
strong effect on some control processes, but apparently not on
others. What's the difference? Why does the waitress, balancing a
tray while she talks to a passing customer, let the tray
gradually sag until the coffee-cups slide off of it, yet not at
any time seem about to lose her balance and fall over? In a
hierarchy of well-developed control systems, why should it make
any difference whether you're paying conscious attention to any
process? The first thing we have to establish is how much
difference it does make. We can measure control parameters over
fairly brief intervals, ten seconds or so, well enough to see any
major changes while they're happening. This sort of thing looks
completely doable, by someone with access to subjects and support
for such a project. I think some major discoveries about the
hierarchy of control are waiting for this kind of experiment to
be done. Isn't there someone out there in a position to take it
on?
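
As a starting point, the measurement could be as simple as the
sketch below: fit the loop gain by least squares over successive
ten-second windows of the error and handle records, and watch for
windows where the estimate changes. The fitted model (handle
velocity proportional to error) and every name in it are
assumptions, not an established protocol:

# Sketch of the measurement called for above: fit the loop gain by
# least squares over successive ten-second windows of a tracking
# record. The model (handle velocity proportional to error) and all
# names here are assumptions, not an established protocol.

def windowed_gains(errors, handle, dt=0.02, window_s=10.0):
    n = int(window_s / dt)                  # samples per window
    gains = []
    for start in range(0, len(errors) - n, n):
        e = errors[start:start + n]
        v = [(handle[i + 1] - handle[i]) / dt     # handle velocity
             for i in range(start, start + n)]
        num = sum(ei * vi for ei, vi in zip(e, v))
        den = sum(ei * ei for ei in e)
        gains.append(num / den if den else float("nan"))
    return gains

# With errors[] and handle[] recorded while the subject's attention
# is drawn away mid-run, a sustained drop in the fitted gain for the
# tray (but not for balance) would be exactly the difference to look
# for.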
------------------------------------------------------------
Best to all,

Bill P.