Arrogance vs. visions

[From Bill Powers (930325.0850)]

Ken Hacker (930324) --

> Arrogance in what is called scientific activity is closing
> doors to inquiry and citing mathematics or "working models" or
> experiments as the only acceptable form of reasoning about
> knowledge claims.

Is it arrogance to set standards for accepting knowledge claims
so high that even one's own attempts to make claims are more
likely to fail than succeed?

I agree with you that there are other approaches to knowledge,
but in my view that is all they are: approaches. We have to spend
a lot of time mulling over imperfectly stated propositions and
guessing how we might investigate them, but I don't think that
this kind of preliminary activity belongs in the category of
"knowledge claims." It's the sort of thing you do while you're
looking for an idea that has enough solidity to be put forth as a
candidate for a knowledge claim. When you do finally clear your
throat and rap on the table and announce that you have an idea
that might actually check out, you should expect to have the idea
subjected to the most severe tests that can be devised -- which,
preferably, you have already thought up and done yourself.

The criterion that the physical sciences adopted was that a
theoretical prediction should fit the data ALL of the time and
under ALL circumstances, with an error no greater than the error
of measurement. That is a pretty stiff requirement even when some
practical latitude is allowed. It has probably caused 99% of the
plausible ideas about how physical reality works to be rejected.
But look at what it produced. The one percent (or even less) of
the ideas that survived have remained indisputable for centuries.
Even Newtonian mechanics is still the undisputed theory of choice
in very nearly all macroscopic situations -- and has been for
over three centuries. This is what you get when you cross the
boundary between verbal reasoning and precise analysis -- when
you're able to do it.

I don't know why you put "working models" in quotes. A working
model is simply a proposition put to the ultimate test: it must
generate detailed predictions of real behavior through time, with
every deviation from the actual behavior under a critical
microscope. This is not merely a case of fitting a theoretical
curve to a scatter diagram. It is a commitment to predict where
EACH POINT on the diagram will fall and to treat each deviation
of the model's prediction of a point's position from the actual
position as an indication of a deficiency in the model. With that
kind of goal in mind, one does not just accept the deviations as
natural variability, but keeps going back to experiment, and back
again, trying to find out why the model didn't predict correctly.
One is never satisfied that a model is as good as it can be. And
one certainly doesn't publish the first plausible explanation
that fits 2/3 of the observations. Not, at least, while claiming
that it constitutes knowledge.
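
To make the distinction concrete, here is a minimal sketch (Python
with numpy; the "observed" data and the model function are invented
stand-ins, not anything from a real experiment) of what it means to
hold a model to every point rather than to a summary curve:

    # Point-by-point test of a model, as opposed to summary curve-fitting.
    # The data and the model here are hypothetical stand-ins.
    import numpy as np

    t = np.arange(0.0, 10.0, 0.1)        # time points of the run
    observed = np.sin(t)                 # stand-in for recorded behavior

    def model(t):
        # A deliberately imperfect prediction of behavior at each instant.
        return np.sin(t) + 0.03 * np.cos(3.0 * t)

    predicted = model(t)
    residuals = observed - predicted     # one deviation per data point

    # Any residual larger than the measurement error marks a deficiency
    # in the model -- not "natural variability" to be averaged away.
    measurement_error = 0.02
    flagged = np.abs(residuals) > measurement_error
    print(f"RMS deviation: {np.sqrt(np.mean(residuals ** 2)):.4f}")
    print(f"points exceeding measurement error: {flagged.sum()} of {t.size}")

A model judged this way either accounts for every point or sends you
back to the experiment.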

> Various perspectives offer knowledge about different aspects of
> behavior. Thus, anthropology is no less valid or more valid
> than PCT, just different in what it chooses to study.

This sort of eclecticism is generous, but I think misguided. The
generalizations that are found in anthropology and other such
sciences do not generate predictions of any usefulness. They
generate descriptions that more or less fit some of the data.
There would be far more progress in these fields (and far fewer
publications to wade through) if it were required that each
description be recast as a prediction of what will happen under
specific circumstances, and then that the prediction be tested
against new observations. And, of course, that it predict
correctly within narrow limits of error.

I think that over decades and centuries of failure to find
rigorous principles of behavior, the behavioral sciences have
progressively lowered their ambitions until very few behavioral
scientists actually expect to explain behavior in clear and
convincing terms. It seems to be generally accepted that behavior
is too messy, too variable, too complex to be explained in any
but the most approximate way. The statistical approach to
describing behavior is in itself an abandonment of rigor.
Statistical facts are unexplainable; they simply exist. There is
a great temptation to give up on the attempt to find clear and
correct explanations, and to assume that nature itself contains
random factors that will forever obscure the view. After
continual failure to penetrate the fog, it must be a great relief
to conclude that the fault is not yours, but that of the
universe.

It is probably hard for many people to understand why PCTers wax
so enthusiastic over dull and uninteresting tracking experiments.
The reason is not so much that tracking behavior itself is
interesting. It's that tracking behavior LOOKS as if it contains
large random components, but under the aegis of the PCT model,
those random components turn out to be highly orderly. People
accustomed to the normal appearance of behavioral data and the
normal kinds of fits of models to the data experience the
modeling of tracking behavior using PCT as a revelation, an
experience that reveals possibilities in the study of behavior
not even dreamed of before.
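
For readers who have not seen these experiments, a minimal sketch of
the kind of model involved may help (Python with numpy; the gain,
time step, and disturbance are illustrative values, not parameters
fitted to any subject):

    # Minimal compensatory tracking model: the simulated "subject" moves
    # a handle to keep the cursor on a stationary target while an unseen,
    # smoothly varying disturbance pushes the cursor away.
    import numpy as np

    dt = 0.01                    # time step, seconds
    steps = 6000                 # one minute of simulated tracking
    k = 50.0                     # output gain (illustrative value)
    t = np.arange(steps) * dt
    disturbance = np.sin(0.3 * t) + 0.5 * np.sin(1.1 * t)

    output = 0.0                 # handle position
    cursor = np.zeros(steps)
    for i in range(steps):
        cursor[i] = output + disturbance[i]  # cursor = handle + disturbance
        error = 0.0 - cursor[i]              # reference: cursor on target (0)
        output += k * error * dt             # integrating output function

    # The cursor stays near the target even though the handle must mirror
    # the invisible disturbance -- the orderliness hidden in what looks
    # like random behavior.
    print(f"peak disturbance:  {np.abs(disturbance).max():.2f}")
    print(f"peak cursor error: {np.abs(cursor).max():.3f}")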

This becomes even more apparent when a person ventures to extend
this sort of modeling to new situations where a new model must be
constructed. Look at what Tom Bourbon did. He got his courage
together and asked, "Would this work if I set up models of TWO
people and had them do a cooperative tracking task, where the
outcome depended on BOTH of them?" He abandoned caution and
simply set up two control-system models exactly like the ones
used for a single-person tracking task, plus the interactions,
and tried it. It worked with the same predictive accuracy: the
two models interacted in the experiment almost precisely as the
two real people did. The first time.
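
In the same spirit, here is a sketch of the two-model arrangement
(again Python with numpy; the particular coupling between the two
cursors is invented for illustration and is not Tom Bourbon's actual
task):

    # Two interacting control-system models, one per "person". Each keeps
    # its own cursor on target, but each handle also moves the other's
    # cursor, so the outcome depends on both of them.
    import numpy as np

    dt, steps, k, a = 0.01, 6000, 50.0, 0.5   # a = coupling strength
    t = np.arange(steps) * dt
    d1 = np.sin(0.2 * t)                      # disturbance on cursor 1
    d2 = 0.7 * np.sin(0.4 * t)                # disturbance on cursor 2

    o1 = o2 = 0.0
    c1, c2 = np.zeros(steps), np.zeros(steps)
    for i in range(steps):
        c1[i] = o1 + a * o2 + d1[i]           # each cursor feels BOTH handles
        c2[i] = o2 + a * o1 + d2[i]
        o1 += k * (0.0 - c1[i]) * dt          # model 1 corrects its own error
        o2 += k * (0.0 - c2[i]) * dt          # model 2 corrects its own error

    print(f"peak cursor-1 error: {np.abs(c1).max():.3f}")
    print(f"peak cursor-2 error: {np.abs(c2).max():.3f}")

Neither model knows about the other; each simply keeps correcting its
own error, and the cooperative outcome falls out of the interaction.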

This kind of result is enough to open a person's eyes and to make
all other experimental approaches to behavior look very, very
inadequate. It doesn't lead to understanding of the more complex
aspects of human behavior, but it reveals a path which goes in
that direction. It's clear that by going down this path step by
step, learning how to extend the scope of the model while
maintaining its essentially perfect fit to the data, we could
gradually progress to a more and more complete understanding of
behavior, in terms that will stand up for as long as Newton's
Laws have withstood the test of time. So what if others are
already exploring those regions of complexity? They're not
getting this kind of result or anything even in the same universe
of discourse with it. PCT will eventually get where they are, and
when it does (it, or whatever it has become by then), it will
blow away the fog and show what is actually happening, just as it
now can do with simple tracking experiments.

When we finally get to those highest levels of organization, PCT
may not be recognizable in current terms. But what will be
recognizable is the expectation and the confidence that a proper
approach conducted under high enough standards will produce
knowledge of a kind unfamiliar to practitioners of behavioral
science as it is known today. Then what will have been the point
of all the groping around in the fog that is going on today? Most
of what is accepted as knowledge today will probably simply
disappear, having led nowhere. That is what has happened to most
of the so-called knowledge that has ever been published in the
behavioral sciences, even 20 years ago, even 10 years ago. You
could probably pick any study done 20 years ago and replicate it
(more or less) today -- but who would care? Who will care, 20
years from now, about most of the findings of today?

Is this arrogance, Ken? Or could it be something more interesting
than that?

-------------------------------------------------------------
Best,

Bill P.