cognitive science; building complex models

[From Bill Powers (930908.1900 MDT)]

Michael Fehling (930908.1059 PDT) --

I'll get Newell's book and check up on the SOAR concept again. I
saw it somewhere before and evidently didn't get het up about it,
but I'll look more closely.

In this book, Newell argues that it is time that psychologists
formulate and empirically validate computational "agent
architectures" that model the whole range of capabilities of
the human "cognitive system (sic)."

The basic question here is what IS the whole range of
capabilities of the human cognitive system? Back when Newell,
Simon, and Shaw were into their Logic Theorist and General
Problem Solver projects, I thought their approach was pretty
good, actually studying what people did instead of dreaming up
abstract theorems. But even then I felt that their concepts of
cognition were very narrow, circumscribed by the kinds of
computational tools available and by the kinds of cognitive
activities that college professors tend to be fascinated with. I
once approached a professor at Northwestern University who was
interested in this field, and pointed out to him that a lot of
the goal-seeking and subgoal-defining procedures being used could
be viewed as a control process, and was slapped down immediately.
Control theory had NOTHING TO DO with problem solving, and that
was that. That way of looking at it went outside the boundaries
of the accepted game.

I think that before any large-scale project applying ANY theory
of cognition can get anywhere, we have to spend a lot of time
studying what people actually perceive and control, at many
levels. For one thing, this is the only way to find out what the
levels really are, or if levels really exist. I've always felt
that the weakest part of cognitive science was a lack of simply
observing human behavior and learning what people spend their
time doing, and how they do it. The literature I've seen is
top-heavy with mathematics, which is applied to extremely
simplistic, stereotyped, and hastily conceived notions of what
behavior is and how it works. As you say, S-R concepts are at the
bottom of a lot of it. But even when the organism is seen as more
autonomous, the idea of cognition still seems limited pretty much
to symbol-manipulation, the sorts of things that can be modeled
easily on digital computers and that appeal to scientists who are
good at symbol-manipulation.

I repeat, what IS the whole range of human cognition? How would
you go about finding out? Not by thinking up tests that
presuppose that cognition consists only of certain reasoning
processes, puzzle solving, resolution of paradoxes, and all that
stuff dear to the followers of Hofstadter. What about art or
music, or politics, or science itself, or religion, or the
conduct of war, or getting a girl to go out with you, or
describing to a stranger how to get to a gas station? What about
perceiving and then applying principles: principles of
mathematics, principles of morality, principles of agriculture,
principles of music theory, principles of language? What about
conviction, belief, explanation, systematization, the sense of
self, the sense of patriotism, the sense of devotion? In the
whole panoply of things that people do with their brains to
govern the higher levels of their own experiences, what is
commonly called cognition seems a shriveled little voodoo doll,
hardly representative of the scope of the real thing.

Mary Midgley, one of my favorite sharp thinkers, dug up a
quotation from Tinbergen, the ethologist:

"We can apply to ethology what F. A. Beach once said of American
psychology, that 'in its haste to step into the twentieth
century' it had tried to rush through the preparatory descriptive
phase -- a thing no natural science can afford to do." [Midgley,
Mary; _Evolution as a religion_ (London and New York: Methuen,
1985), pp. 130-131].

This is what has been missing since the beginning of the modern
(post-WW II) investigations of the mind. All the facts about
behavior with which these approaches have attempted to deal have
been handed down from old-fashioned psychology, as if the new
cognitivists and modelers thought that a brand new conceptual
approach was needed, but that the facts left over from the
rejected old scientific approaches were perfectly good. There was
an attempt to leapfrog directly into a new conception of the mind
without going through any naturalistic phase where the scientist
simply pondered behavior -- and experience -- in the wild,
looking at it afresh and trying to decide just what there was
about it that actually needed to be explained by a new theory.

In a way, it was like writing science-fiction. A science-fiction
writer has it much easier than a real novelist, because the only
research required is the science part (if there is one); the
backgrounds, the culture, the things people do for a living, the
whole rich underlay that makes a novel a real report on the human
condition, can all just be made up. In cognitive science and
computer science and so forth, all the fun is in the complexity
of the theorems, the clever efficient algorithms, the
mathematical truths, the eternal, if irrelevant, verities of
abstract thought. The actual examples of behavior to which all
these advanced thoughts apply can be, basically, made up:
anything plausible, really, will do. What people actually do
isn't the point. That's drudgery, just sitting around watching
behavior.

I think that control theory has given us a radically new way to
understand behavior. I think that the first thing we should do is
observe how it changes our understanding of what we see people
doing. That's been the real thrust of all my work: relating
control theory to ordinary behavior of all kinds, trying to get
some sort of taxonomy going, trying to see what kinds of
experiments we need to do to sharpen up the picture, and most of
all trying to see in a new way what has always been taken for
granted. When we see behavior as controlling perceptions, as
control systems controlling perceptions using other control
systems controlling other perceptions, we also redefine what it
is that a scientist wants to know about behavior. We want to know
what people perceive and control, not what their actions are. As
science has always focused on the outward actions required to
bring about unsuspected and unobserved goal conditions, what we
really need to know about behavior has never been observed
scientifically. That is what we need to do first. Only then, when
we know what a model has to explain, does it make any sense to
start building elaborate constructions and trying to simulate
complex behavior -- if behavior, indeed, is even the point.
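The picture sketched above, of control systems controlling perceptions by setting the references of other control systems, can be illustrated with a minimal numerical sketch. This is not Powers' own model or code; the two levels, the scalar "environment," and every gain and time constant here are illustrative assumptions, chosen only to show a higher-level system acting solely by adjusting a lower-level system's reference signal while a disturbance acts on the controlled variable.

```python
# Minimal sketch of a two-level perceptual control hierarchy (illustrative
# only; all gains, the dt, and the one-dimensional environment are assumed).
# The higher level controls a perception of position by setting the
# reference for a lower level that controls a perception of velocity.

def simulate(steps=2000, dt=0.01, disturbance=0.5):
    """Return the final position after running the two-level hierarchy."""
    position = 0.0       # environmental variable affected by the output
    output = 0.0         # lower-level output quantity
    position_ref = 10.0  # higher-level reference: perceive position = 10

    for _ in range(steps):
        # Higher level: compares perceived position with its reference and
        # acts by setting the lower level's reference signal.
        position_error = position_ref - position
        velocity_ref = 2.0 * position_error

        # Lower level: perceives velocity (output plus an external
        # disturbance) and integrates its error into output.
        velocity = output + disturbance
        velocity_error = velocity_ref - velocity
        output += 5.0 * velocity_error * dt

        # Environment: position changes at the actual velocity.
        position += velocity * dt

    return position
```

Despite the constant disturbance, the perceived position settles at the reference value: the lower level cancels the disturbance, and the higher level never needs to know the disturbance exists. That is the point of the paragraph above: what is stable, and what the model must explain, is the controlled perception, not the actions.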

I have no objection to building complex models. Only what is it
that you propose to model? I really think we should pay attention
to that question first.


----------------------------------------------------------------
Best,

Bill P.

[Michael Fehling 930909 7:31 AM PDT]

In re Bill Powers (930908.1900 MDT) --

Bill (and others),

Thanks for your openness to my suggestion. However, I want to clarify one
element of what I've proposed:

        I am suggesting that we accept Newell's challenge to build a UTC
        (unified theory of cognition), i.e., show how the PCT community _is_
        capable of constructing (at least through several prototypes) a
        model of a "PCT agent." This PCT agent model should embody most, if
        not all, of the control levels, and the same model should be able to
        perform a wide range of experimental and real-life tasks (i.e., from
        target tracking, through memory performance, to problem solving and
        decision making).

        I am _not_ suggesting that the substantive ideas of Newell's
        problem-space theory architecture play any role at all in the
        PCT-agent model.

        To put it bluntly, I challenge PCT proponents to beat psychological
        modelers like Newell, Anderson, Hayes-Roth, etc. at their own game.
        And I'm willing to help if I can.

- michael -

[From Oded Maler 930909 17:00 - ET]

* [Re: Michael Fehling 930909 7:31 AM PDT and others]

What I was trying to say is that my impression of AI as cognitive
science is much worse than my impression of AI as software
engineering and computer science (which is not too high either, but
that is explainable by my being a convert from AI to theoretical CS;
and some of my best friends are doing very good AI research).
Whenever I have heard cognitive scientists (and I had the occasion
to hear some in Aix last year) I have had the feeling that they
speak about hypothetical persons, and that their models are
arbitrary. This was also my impression after reading some
of Newell's stuff, but I admit my reading was not deep.
Again, my attitude might be an over-reaction of a naive
logicist-abstractionist (as I once was) to the "discovery" of
the concrete perception-action world. Maybe after
establishing the lower levels, one will need the architectures
proposed by Newell et al., but my feeling is that a physical-symbol
system, where all lower levels are encapsulated in black boxes,
is not the correct model for human cognition. (I think an interesting
criticism of Newell is Hofstadter's "Waking Up from the Boolean Dream.")
I want to repeat that I speak out of intuition and ignorance;
the cognitive-science books I have browsed through seemed boring
and uninspiring.

Btw, it came as a surprise to me that cybernetics/system theory is
taught at Stanford. My exposure to system theory was through some
abandoned books I found in a cellar at the Weizmann Institute - they
belonged to a guy who did not get tenure there...

Best regards

--Oded


--

Oded Maler, VERIMAG, Miniparc ZIRST, 38330 Montbonnot, France
Phone: 76909635 Fax: 76413620 e-mail: Oded.Maler@imag.fr