At Mary's Behest

[From Bill Powers (951106.0835 MST)]

I wrote the following as part of a direct post to Bruce Abbott. Mary
insists that I put it on the net, so here it is, somewhat abridged.
Bruce doesn't agree with everything in it. But don't blame me; I have to
do whatever Mary tells me to do. That's the rule.

···

=====================================================================
I give [Thorndike] credit for doing what he did. But that's quite aside
from the scientific value of what he did. I can admire the one without
feeling the need to admire the other.

I realize that the kinds of explanations Thorndike offered were exactly
the kind that were acceptable to his colleagues and that he had no
reason to think they would not be. What I argue against is not
Thorndike, but the whole discipline that found his arguments acceptable.
And I don't even say, in sociological and psychological terms, that
something else should have happened. I'm not readying the case of the
prosecution against scientific criminals. That's just where psychology
was in the 1920s.

All I'm saying is that in the light of what we know now, that whole way
of approaching behavior, the lack of thoroughness and the easy offering
of untestable and vague propositions, the jumping into the middle of a
large problem and pretending it's simple, is no longer adequate. I think
that psychology tried to catch up with physics in one burst of effort,
and as a consequence tried to get away with taking a lot of short-cuts
to knowledge. That effort has failed, although few psychologists will
admit it to outsiders.

If we want a successful science of life, we have to go back and do what
the early psychologists failed to do: build up a database of truly
reliable facts, just as physics and chemistry had to do to get where
they are now. No more 0.8 correlations. No more ignoring of
counterexamples. No more facile verbal arguments. What we must have are
correlations of 0.99 and better, obtained by methods that anyone can
apply and get the same results. As you and I saw in the analysis of the
meal-size data, if we stick to the effort to get good results and don't
accept the first interpretation that looks reasonable, we can get to
that level of predictivity. By doing experiments right and paying
attention to everything of even peripheral importance, we can do even
better. We HAVE to do better if we're ever going to have a science.

You have to get very high correlations if you're ever going to be able
to string half a dozen facts together in a deductive argument and come
up with a conclusion that has better than a 50-50 chance of being true.
That's always been the weakness of psychology. You end up with a bunch
of disconnected facts, each with a probability of 60 or 80 percent of
being true in any one case, with no possibility of building a reliable
conceptual structure out of them.
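
The arithmetic behind that weakness is easy to check. If each fact in a
deductive chain holds with probability p, and the facts are established
independently (an idealization, but a charitable one), the whole chain holds
with probability roughly p^n. A minimal sketch:

```python
# Probability that a chain of n independently-established "facts" all hold,
# for different per-fact reliabilities. The independence assumption is an
# idealization; correlated errors would only make matters worse or better
# in ways this sketch ignores.
def chain_reliability(p, n):
    """Joint probability that n independent facts, each true with
    probability p, are all true."""
    return p ** n

for p in (0.6, 0.8, 0.99):
    print(f"p={p}: six-step chain holds with probability "
          f"{chain_reliability(p, 6):.3f}")
# p=0.6 gives about 0.047, p=0.8 about 0.262, p=0.99 about 0.941
```

At 0.8 per fact, a six-step argument is already less likely true than a coin
flip; at 0.99 it still holds better than nine times in ten.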

I truly think that psychologists gave up 60 or 70 years ago, or more, on
the idea that they could ever discover anything really true about
behavior, true all of the time and under all conditions. When Watson's
grand dream began to crumble away, psychologists began to settle for
less and less, lowering their expectations until all they really
demanded was p < 0.05 and a publishable paper. We can see this
phenomenon going on even in EAB, which started with Skinner's simple and
almost infallible experiments in the 1930s. Now we see things like the
matching law being accepted even though matching fails to occur in many
cases, and one has to take a very loose view to conclude that matching is
seen in _any_ experiment. What Herrnstein
should have done was to conclude that the matching law just wasn't
working out, and that some other line of research would advance the
science more. But once people get the idea that they're seeing a hint of
an effect, they get stubborn and refuse to see that the hint isn't
getting any stronger. Just one more experiment, one more variation, and
we'll have it!
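
For reference, the strict matching law says that relative response rates
match relative reinforcement rates: B1/(B1+B2) = r1/(r1+r2). A minimal
sketch of how a predictive-error criterion would be applied to it; the
numbers below are hypothetical illustrations, not data from any experiment:

```python
# Strict matching law: B1/(B1+B2) = r1/(r1+r2), where B = response rates
# and r = obtained reinforcement rates on two concurrent schedules.
# All numeric values here are made up for illustration only.
def predicted_relative_rate(r1, r2):
    """Relative response rate on alternative 1 predicted by strict matching."""
    return r1 / (r1 + r2)

r1, r2 = 40.0, 20.0          # hypothetical reinforcers/hour on each schedule
observed_relative = 0.60     # hypothetical observed B1/(B1+B2)

predicted = predicted_relative_rate(r1, r2)   # 2/3 under strict matching
error = abs(observed_relative - predicted) / predicted
print(f"predicted {predicted:.3f}, observed {observed_relative:.3f}, "
      f"error {error:.1%}")   # a 10% error, well above a 3% criterion
```

Under the criterion Bill describes next, a discrepancy like this would be
grounds for abandoning the model rather than loosening the test.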

Suppose that Herrnstein had said, "Hey, I'm looking for predictive
errors of less than 3 percent and I'm not getting them. Time to move
on." That whole episode would be finished, would have been finished 20
years ago, and the ensuing labors turned to more promising ideas. If he
had been a physicist, this is exactly what the journal editors would
have demanded; he wouldn't have got to the point of having publishable
results. But in psychology, he not only could publish his results, he
could attract dozens, maybe hundreds, of people to his cause and waste
their time, too. It's Gresham's Law of science: low standards drive out
high standards, just as bad money drives out good.

Lots of people have objected to my hard-nosed attitude about high
correlations and reliable facts. They say, "But if we take that
attitude, there won't be anything to publish!" What they don't see is
the payback for doing so, the ultimate advantages of publishing only
findings that are as reliable as the findings of elementary physics.
It's the same principle as natural selection; you get the best results
by weeding out anything less than the best.

With PCT we have seen some examples of facts that have a degree of
reliability that approaches that of elementary physics. This is what
turned Tom and Rick on, and why they have become so contemptuous of
standard psychology. Once you have seen what CAN be done, that it's not
just a pipe dream and a wish, how can you go back to accepting
correlations of 0.8, or 0.4, or 0.2? Once you've personally constructed
a model that predicts random-looking actions of individual human beings
within 3% for one whole minute, and that can do so 10 years in advance
(as Tom has done), how can you go back to methods that can't predict the
behavior of one individual within 300% for a single second?
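
The kind of model behind those tracking predictions is a simple negative-
feedback loop: output is driven by the error between a reference and the
perceived cursor position, while a disturbance pushes the cursor around.
A minimal sketch in that spirit; the gain, time step, and disturbance are
illustrative choices, not fitted values from any published study:

```python
import math

# Minimal PCT-style compensatory tracking model. The controlled variable
# is the cursor position (output + disturbance); the output function
# integrates gain * error. Parameter values are illustrative only.
def simulate(disturbance, gain=8.0, dt=1/60):
    output, reference = 0.0, 0.0
    cursor_trace = []
    for d in disturbance:
        cursor = output + d          # cursor = handle output + disturbance
        error = reference - cursor   # error in the controlled perception
        output += gain * error * dt  # leaky-free integrating output
        cursor_trace.append(cursor)
    return cursor_trace

# A slowly varying disturbance (10 s period at 60 samples/s); the loop
# keeps the cursor near the reference despite it.
dist = [math.sin(2 * math.pi * t / 600) for t in range(3600)]
trace = simulate(dist)
mean_abs_error = sum(abs(c) for c in trace[1800:]) / 1800
print(f"mean |cursor error| over last half: {mean_abs_error:.4f}")
```

Even this toy loop cancels most of a unit-amplitude disturbance; fitting
the gain to a person's tracking record is what yields the few-percent
prediction errors described above.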

I don't know how far PCT can take us. But it can take us a lot farther
than we are now. If we will just be hard-nosed about finding the _right_
model, the one that always gives us those very high correlations, we can
build up a stockpile of facts that are going to survive for a long time,
and on which we can build a structure, step by step, that will hold up
against very serious criticism. This is going to mean that we have to
ignore many "big" problems that we can't tackle yet. Physicists and
engineers dreamed of sending ships to the moon long before they could do
it, and when they finally did it, a lot of the earlier ideas about how
to do it quietly vanished; we don't use giant guns built along the
slopes of Pike's Peak, for example. What we see now as the nature of
human behavior, as the problems that human beings have, is going to
change drastically. Our very ways of formulating these problems are
going to change.

I'm sure everyone has heard THAT lecture before.

Best,

Bill P.

[From Tom Bourbon (951106.2341 CST)]

[From Bill Powers (951106.0835 MST)]

I wrote the following as part of a direct post to Bruce Abbott. Mary
insists that I put it on the net, so here it is, somewhat abridged.
Bruce doesn't agree with everything in it. But don't blame me; I have to
do whatever Mary tells me to do. That's the rule.

(There followed yet _another_ iteration of Bill's lucid argument for a
legitimate science of behavior, to replace the earlier, failed attempts.)

I'm sure everyone has heard THAT lecture before.

Sure we have, but you can say it again, any time you like -- or any time
Mary says you must. This is what it is all about.

Tom

[From Bruce Abbott (951107.1035 EST)]

Bill Powers (951106.0835 MST) --

I wrote the following as part of a direct post to Bruce Abbott. Mary
insists that I put it on the net, so here it is, somewhat abridged.
Bruce doesn't agree with everything in it. But don't blame me; I have to
do whatever Mary tells me to do. That's the rule.

This is just Bill's way of saying that he wants to move this discussion back
to CSG-L. After all, if I understand PCT correctly, Mary can't make Bill do
anything.

I wasn't going to respond publicly, but then Steph got into the act. She
says I have to post my reply to Bill's post. Sorry, Bill, but I have to do
it--you know what happens to a guy if he doesn't follow the rules.

···

===============================================================================

I realize that the kinds of explanations Thorndike offered were exactly
the kind that were acceptable to his colleagues and that he had no
reason to think they would not be. What I argue against is not
Thorndike, but the whole discipline that found his arguments acceptable.

All I'm saying is that in the light of what we know now, that whole way
of approaching behavior, the lack of thoroughness and the easy offering
of untestable and vague propositions, the jumping into the middle of a
large problem and pretending it's simple, is no longer adequate.

I disagree with this blanket condemnation, because I believe that there have
been a number of research programs in psychology that would not fit this
description (which is not to say that there haven't been too many of the
kind to which you allude). The best researchers and theorists in psychology
have not evidenced a "lack of thoroughness" or been prone to the "easy
offering of untestable and vague propositions." It's not so much that they
have lacked rigor, but that they have been using the wrong approach.

It doesn't take long to realize that the system one is studying is of
staggering complexity. The best strategy for understanding this system
would be to develop a "circuit diagram" of it and work out how the system
functions from that. Unfortunately, this is a little like trying to
understand the circuitry of your television set without any understanding of
resistors, capacitors, inductors, Ohm's law, amplifiers, or basic circuit
design. I think researchers in psychology have assumed that an
understanding of the basic mechanisms would have to await advances in
neuroscience, and that the best they could do in the meantime was to try to
understand the system from a functional standpoint. So they turn on the TV,
and they attempt to learn something about it by fiddling with the knobs and
testing various hypotheses about its internal organization. Pretty soon
they have some impressive graphs depicting the "laws of TV-set behavior."
These relate such variables as the brightness of the image to the settings
of the brightness and contrast controls, the sound intensity to the setting
of the volume control, and so on. It is discovered that the picture develops
"snow" when the set is moved beyond a certain distance from the transmitter,
and that the same thing happens when the antenna is rotated to a certain
position; this leads to the formulation of a theory which assumes that the
clarity of the picture is a function of something called "signal strength,"
which falls off with distance and is also influenced by the angle of the
antenna with respect to the transmitter.

You get the idea. I maintain that there is nothing fundamentally
unscientific or nonrigorous about this approach; it simply represents the
best one can do when given a system of enormous complexity, until one has
the proper theoretical basis upon which to construct a structural model of
the system that can be expected on theoretical grounds to behave like the
system. Until then, there is nothing to do but to model the system's
behavior and to explore how well such a model continues to perform (relative
to the actually observed behavior) when the system's operating conditions
are varied.

This kind of research can lead to progress of a certain type. To the extent
that the empirically-established relationships hold, one can make practical
use of them. Is the picture too dark? Try turning this little knob
clockwise. Is the picture snowy? Try turning the antenna; if that doesn't
work, try mounting the antenna on the roof. Better yet, empirical testing
has shown that a different antenna, shaped like SO, mounted on the roof and
aimed in this direction, will probably pull in a clear picture and get you
several additional stations as well.

Meanwhile, all this manipulating of system variables can lead to some
worthwhile hypotheses about what the system producing the observed
relationships must be doing. That is, the observations provide clues as to
the system's inner organization. That volume control is somehow AMPLIFYING
the sound--there must be something we might call an AMPLIFIER in that TV
somewhere, which takes a weak "sound" ("encoded" in the TV's "nervous
system") and somehow "strengthens" it. Now that we know that there is an
amplifier in there, perhaps we can begin to explore the circuits connected
to the volume control and speaker, to see if we can discover anything that
might perform this function. Hey, we seem to be making _progress_.

So I view research of this type as having value. It is real science, not
mush, but it is the kind of science one does when one does not yet have the
proper foundation on which to begin building the underlying mechanism. It
is top-down rather than bottom-up research.

Unfortunately, this approach has been successful enough in psychology (and
in other disciplines such as economics and sociology) that it has come to be
regarded too highly. Watson and Skinner asserted that it was all that is
needed in a science of behavior, thus literally _precluding_ any approach
based on assumptions of internal organization. Even those (e.g.,
cognitivists) who are willing to talk about internal processes continue to
rely on the top-down method and fail to distinguish the functional approach
(modeling the behavior of the system) from the mechanistic (modeling the
structure of the system). THAT, Bill, is scientific psychology's
fundamental mistake.

There are others as well, such as relying too strongly on verbal rather than
mathematical deduction, but these I think are derivative. If you can't find
a quantitative model having the required predictive precision, you end up
having to be satisfied with merely ordinal ones (i.e., this will be greater
than that). If your behavioral model describes only the surface
relationships, you can handle the unexpected deviations from theory only by
conducting additional empirical searches for variables to add to the
equation. The realization that many variables may disturb the theoretical
functions leads one to expect a certain degree of "noise" in the data, given
that all these variables cannot be held constant over the course of the
observations. Thus, you come to expect, and tolerate, relatively weak
correlations in the data. And you begin to look for ways to discover the
underlying relationships that may be obscured by these other uncontrolled
variables. One solution is statistical analysis.
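
That last step can be simulated directly: a relationship that is perfect in
the underlying mechanism is attenuated in the observed correlation as
uncontrolled variables add variance. A minimal sketch with synthetic data
(the noise levels are arbitrary choices for illustration):

```python
import math
import random

# Attenuation sketch: y depends perfectly on x, but an uncontrolled
# variable adds Gaussian noise. The observed Pearson correlation falls
# as the noise grows; in expectation r = 1 / sqrt(1 + noise_sd**2).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(5000)]
for noise_sd in (0.0, 0.5, 1.0, 2.0):
    ys = [x + random.gauss(0, noise_sd) for x in xs]
    print(f"noise sd {noise_sd}: r = {pearson_r(xs, ys):.2f}")
```

With no noise the correlation is 1.0; with noise twice the size of the
signal it drops below 0.5, even though the underlying law is exact. This
is exactly the situation that makes weak correlations look tolerable.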

SUMMARY and CONCLUSIONS

Psychology has proceeded from the assumption that it cannot yet follow a
bottom-up research strategy to understand its subject matter, owing to the
complexity of the brain and the lack of understanding of its components and
their interconnections. Indeed, some (Watson, Skinner) have asserted that
such a strategy is not even necessary, that one can build a science of
behavior using a purely functional approach. Consequently, researchers have
adopted a top-down strategy based on modeling input/output functional
relationships. This approach has succeeded in identifying relationships of
fair generality, and in some cases practical value, while providing hints
about the functional properties of some of the underlying mechanisms through
which behavior emerges. Unfortunately, this approach has become so well
established within psychological science that hardly anyone even understands
the difference between it and one that builds from the ground up, or if they
do, they do not believe that the bottom-up approach is currently within reach.

You and I, of course, believe that a bottom-up approach is not only viable,
it is the only approach that can yield a truly fundamental account of
behavior, one in which behavior emerges from the organization of the system
rather than from empirical relationships among observable external
variables. Not only that, we think we have identified the fundamental unit
on which such a system is constructed. From this point of view the
continued exclusive reliance on the top-down approach is misguided and a
serious impediment to progress. In my view, the problem has not been bad
science, but a false belief about what kind of science is possible.

Whether you agree with my analysis or not, at least you should now see why
I take the position I do about the scientific merits of psychological
research. It should also be clear that I do understand the difference
between the current top-down approach and the bottom-up approach you
champion, and recognize the clear advantages of the latter.

Thus far (two days) I have received no reply from Bill to this post, other
than to make his half of the exchange public. I had hoped we could discuss
this issue privately for a while, to see if we could come to some kind of
understanding without the distractions of a larger audience. Guess not.

Perhaps it's just as well; there are others who no doubt will want to reply
to my argument. I particularly want to hear from Dag.

Regards,

Bruce

[Bill Leach 951107.19:24 U.S. Eastern Time Zone]

[Bruce Abbott (951107.1035 EST)]

Bruce, I think that maybe the server (or one of its primary links) is
getting a bit flaky. The listserver mail seems to be coming in blocks,
sometimes with nothing for a day or more showing up.

In addition, I am now losing as many as six messages a day because the
host is somehow "eating" them. I know that a couple of yours were lost
that way (I do receive the notification that a message arrived, and who
it was from, just no message body).

I would like to point out that your example of studying a television by
twiddling the knobs is not quite the same thing as studying a control
system by twiddling its perceptual inputs. Basically, the TV couldn't
care less where its knobs are set and will take no action to change the
settings.

It is true that there are valid reasons for studying and knowing things
about "most people," but such efforts are not aimed at understanding
behaviour, no matter what their proponents say, since the loop is not
understood to be closed.

-bill