on science

[Hans Blom, 970408b]

(Bill Powers (970331.0030 MST))

... the mere fact that a lot of people accept the textbook
explanations of MCT is no indication at all that these explanations
are error-free. How many people who espouse this approach are simply
passing on the arguments and interpretations they all read in these
same textbooks? ... So who does that leave to keep the theoreticians
honest, but outsiders like me?

As an answer -- and contrast -- to this rather romantic view of
science, some quotes, plucked from a long post by David Longley,
dated 21 Oct 95 10:12:15 GMT, in one of the AI or philosophy
discussion groups. It will be no surprise, I guess, that I side with
the views expressed below. The basic question is: who/what do we
trust more, our own private impressions or the collective results of
science? The quotes do not apply to clinicians only, of course.

'Ultimately, then, clinicians must choose between their own
observations or impressions and the scientific evidence ... Failure
to accept a large and consistent body of scientific evidence over
unvalidated personal observation may be described as a normal human
failing or, in the case of professionals who identify themselves as
scientific, plainly irrational.'

Dawes, Faust & Meehl (1989), Science 243, 1668-1674. Clinical Versus
Actuarial Judgment.

Another quote about personal wisdom versus scientific consensus, here
called "the formula":

'When Shall We Use Our Heads?'

'The question "When shall we use our heads instead of the formula?"
presupposes that we are about to make a clinical decision at a given
point in time, and must base it upon what is known to us at that
moment. In that context, the question makes perfectly good sense. It
is silly to answer it by saying amicably, "we use both methods, they
go hand in hand". If the formula and your head invariably yield the
same predictions about individuals, you should quit using the more
costly one because it is not adding anything. If they don't always
yield the same prediction - and they clearly don't, as a matter of
empirical fact - then you obviously can't "use both", because you
cannot predict in opposite ways for the same case. If one says then,
"Well, by 'using both,' I mean that we follow the formula except on
special occasions," the problem becomes how to identify the proper
subset of occasions. And this of course amounts to the very question
I am putting. For example, does the formula tell us "Here, use your
head," or do we rely on our heads to tell us this, thus
countermanding the formula?'

P.E. Meehl (1971). When Shall We Use Our Heads Instead of the
Formula? Ch. 4 in PSYCHODIAGNOSIS: SELECTED PAPERS.

A quote about how not a theory but practical results convince:

'An early version of the Green revolution was made possible in the
early 1930s by advances in agricultural technique. The government
duly proceeded to inform the nation's farmers of these techniques by
means of county agricultural agents spouting statistics and
government pamphlets and sat back to await the glowing reports of
increased crop production. No such reports followed and it soon
became clear that farmers were not converting to the new techniques.
Some clever government official then set up a program whereby
government agricultural agents moved in on selected farms and
cultivated the crops along with the farmers, using the new
techniques. Neighbouring farmers watched the crop results and
immediately converted to the techniques.'

Nisbett, R.E., Borgida, E., Crandall, R., & Reed, H. (1982). Popular
Induction: Information is not necessarily informative.

And a final quote about science as a collective effort:

'Humans did not "make it to the moon" (or unravel the mysteries of
the double helix or deduce the existence of quarks) by trusting the
availability and representativeness heuristics or by relying on the
vagaries of informal data collection and interpretation. On the
contrary, these triumphs were achieved by the use of formal research
methodology and normative principles of scientific inference.
Furthermore, as Dawes (1976) pointed out, no single person could have
solved all the problems involved in such necessarily collective
efforts as space exploration. Getting to the moon was a joint
project, if not of 'idiots savants', at least of savants whose
individual areas of expertise were extremely limited - one savant who
knew a great deal about the propellant properties of solid fuels but
little about the guidance capabilities of small computers, another
savant who knew a great deal about the guidance capabilities of small
computers but virtually nothing about gravitational effects on moving
objects, and so forth. Finally, those savants included people who
believed that redheads are hot-tempered, who bought their last car on
the cocktail-party advice of an acquaintance's brother-in-law, and
whose mastery of the formal rules of scientific inference did not
notably spare them from the social conflicts and personal
disappointments experienced by their fellow humans. The very
impressive results of organised intellectual endeavour, in short,
provide no basis for contradicting our generalizations about human
inferential shortcomings. Those accomplishments are collective, at
least in the sense that we all stand on the shoulders of those who
have gone before; and most of them have been achieved by using
normative principles of inference often conspicuously absent from
everyday life. Most importantly, there is no logical contradiction
between the assertion that people can be very impressively
intelligent on some occasions or in some domains and the assertion
that they can make howling inferential errors on other occasions or
in other domains.'

R. Nisbett and L. Ross (1980). Human Inference: Strategies and
Shortcomings of Social Judgment.

Greetings,

Hans

[From Rick Marken (970408.1220 PDT)]

Hans Blom (970408b) --

It will be no surprise, I guess, that I side with the views
expressed below.

Failure to accept a large and consistent body of scientific
evidence over unvalidated personal observation may be described
as a normal human failing or, in the case of professionals who
identify themselves as scientific, plainly irrational.'

Yes. But failure to accept validated personal observation over a
large, inconsistent and invalid body of scientific evidence can also be
described as a normal human failing or, in the case of professionals who
identify themselves as scientific, blathering idiocy:-)

A quote about how not a theory but practical results convince:

...Neighbouring farmers watched the crop results and immediately
converted to the techniques.'

The practical results we want are behavioral models that fit behavioral
data. We have shown you how well the PCT model fits
behavioral data and how poorly the MCT model fits the same data.
Sounds to me like the only one around here who is not convinced
by practical results is ol' Mr. Practical himself -- you.

Best

Rick

[From Bill Powers (970312.1016 MST)]

Hans Blom, 970408b--

... the mere fact that a lot of people accept the textbook
explanations of MCT is no indication at all that these explanations
are error-free. How many people who espouse this approach are simply
passing on the arguments and interpretations they all read in these
same textbooks? ... So who does that leave to keep the theoreticians
honest, but outsiders like me?

As an answer -- and contrast -- to this rather romantic view of
science, some quotes, plucked from a long post by David Longley,
dated 21 Oct 95 10:12:15 GMT, in one of the AI or philosophy
discussion groups. It will be no surprise, I guess, that I side with
the views expressed below. The basic question is: who/what do we
trust more, our own private impressions or the collective results of
science? The quotes do not apply to clinicians only, of course.

In the final analysis you have nothing to go by but your private
impressions. When someone writes a book, do you believe everything that's in
it just because it's written down on a printed page and someone tells you
that you should accept it? Don't you still have to form a private opinion
about what you find there? If what you read makes sense to you, if you can
follow the reasoning and if you find no mistakes in it, you may well choose
to believe it -- you may feel _compelled_ to believe it. But what would
become of science if we simply looked at the name of the author or publisher
and decided that anything this person said, or this publisher printed, must
be right even if we can't see how?

Those who want to be among the accepted authorities always argue that the
great machine of science is to be trusted more than any individual's
personal ideas. But this has been shown over and over to be the very
antithesis of science, an almost certain guarantee that bad ideas will be
preserved and good ones prevented from developing. The whole point of
science is to test and verify, to re-examine, to make sure that no line of
reasoning is allowed to go unchallenged regardless of how many people have
accepted it. The moment an individual scientist becomes reluctant to
challenge accepted wisdom, that person ceases to be a scientist.

For me, Hans, MCT has not passed the initial test of making sense in its
basic approach to control. When you have to use adaptation to make even the
simplest kind of control system work, something is fundamentally wrong. I
can't become interested in the theoretical superstructure that has been
built on the basic assumptions, because to me the assumptions are themselves
untenable.

What basis would I have for blindly accepting the whole MCT structure, and
going through all the labor of learning the mathematical conventions and
manipulations, if I could see no reason to adopt the basic design in the
first place? Even if I were mathematically adept, I would rather put my
efforts into developing the mathematical structure of PCT, because I can see
that the negative feedback approach, in continuous systems, continues to
make sense no matter how long or critically I look at it. Questions of
perception, of behavior, of multi-ordinality of experience and control, of
memory, of learning, and even of physiology and neurology, all seem
consistent with the basic PCT unit of organization; MCT has nothing useful
to say about most of these subjects (except for a rather ponderous,
incomplete, and -- it seems to me -- impractical approach to adaptation).

I have said several times that I am in no position to judge the usefulness
of MCT as an engineering tool. It may represent the next step upward in our
ability to develop artificial control systems, or it may prove to be a fad
that will have its day only to be replaced by the next great advance, or
simplification. That does not concern me. What concerns me is that MCT
offers little that will help us to understand the phenomena of organisms
controlling, while PCT seems directly applicable behaviorally, structurally,
neurologically, and experientially. PCT, too, will be replaced some day, but
not until it has been explored to the same extent as the "adaptive
open-loop" concept has been explored -- and more.

The adaptive open-loop system is inherently sensitive to errors of all
kinds, in computation or implementation. That is why it MUST be adaptive,
even to show performance comparable to that of a negative feedback
error-driven system. But adaptation requires modeling the environment and
the sensors and actuators of the controller itself, and acquiring knowledge
about the world that must be handled by a very complex system, as complex as
the whole human brain. The adaptive machinery must be specifically designed
to work with the basic open-loop structure to overcome its disadvantages.

The fact that adaptive schemes have been invented which work to some degree
and promise to work better speaks well for the ingenuity of mathematicians
and engineers, but there is nothing _necessary_ about such schemes. If you
begin with a different basic organization, one that is inherently
_insensitive_ to variations in output properties and properties of the
environment, you can still explore schemes for adaptation that will work,
and promise to work better. But given much less inherent sensitivity to
parameter variations and disturbances, such schemes will require far less in
the way of modeling the world, far less ability to predict accurately and
deduce disturbances. To put that the other way around, a system with very
advanced adaptive capabilities at least at some levels of organization will
be able to accomplish far more if its basic organization naturally minimizes
the effects of disturbances, parameter shifts, calibration errors, and
computational errors. That is what the basic PCT organization provides: a
basic unit of organization that is inherently robust, and which can only be
improved by adding adaptive capabilities. The basic MCT architecture, minus
its adaptive machinery, is inherently susceptible to errors and
disturbances; it is the least robust basic organization possible. Only
adaptation saves it from being an obvious failure.
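The contrast can be made concrete with a small numerical sketch (mine, not from this thread; all gains and values are illustrative assumptions). An open-loop controller computes its output from an inverse model of the plant, so any mismatch between the modeled and actual plant gain shows up directly as error -- which is why such a system must adapt. A negative-feedback controller needs no plant model at all; it simply keeps adjusting its output until the error vanishes, regardless of the same gain mismatch.

```python
def run_open_loop(r, k_true, k_model):
    """Open loop: compute the action from an inverse model of the plant.
    Accuracy depends entirely on how well k_model matches k_true."""
    u = r / k_model        # model-based output computation
    y = k_true * u         # actual plant response
    return y

def run_feedback(r, k_true, gain=0.2, steps=200):
    """Negative feedback: adjust output in proportion to the error r - y.
    No model of the plant gain is needed."""
    y, u = 0.0, 0.0
    for _ in range(steps):
        u += gain * (r - y)    # error-driven adjustment
        y = k_true * u         # actual plant response
    return y

r = 10.0                        # reference value to be achieved
# Suppose the real plant gain has drifted 30% away from what the
# open-loop model assumes (k_true = 1.3 vs. k_model = 1.0).
y_open = run_open_loop(r, k_true=1.3, k_model=1.0)   # 13.0: 30% error
y_fb   = run_feedback(r, k_true=1.3)                 # ~10.0: error nulled
print(f"open loop: {y_open:.2f}  feedback: {y_fb:.2f}")
```

The open-loop result is wrong by exactly the modeling error, and stays wrong until an adaptive layer re-estimates the gain; the feedback result converges on the reference despite never knowing the plant gain at all.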

Best,

Bill P.