IJCAI-93 and Powers(73)

[From Oded Maler (930908) 1230 - ET]

Last week I was at IJCAI - the biggest AI conference. Some of my
impressions are orthogonal to the CSG-list interests. The main impression
is, however, that the AI community has already abandoned any aspiration
for modeling the mind, and is dedicated to various sorts of software
engineering and its corresponding theories (logic, complexity, etc.).
Whenever some issue related to human (and not machine) psychology
was raised, I felt the effect of my two-year partial exposure to PCT. Thanks.

There was a panel on active vision, where two extreme approaches
were presented: the "traditional" reconstructionist approach
(a la Marr), where vision's role is to provide a general-purpose black
box that translates pixels into predicates, and the "purposive vision"
approach, emphasizing the role of vision within perception-action
loops, without invoking explicit "representation" (proponents
of the latter cite Gibson, and Brooks's approach is also consistent
with it). Clearly this topic could benefit from PCT.
There was an anecdote, communicated by Y. Aloimonos (one of the
major proponents of active vision) and attributed to Arbib:
A woman had severe damage to her visual cortex. When asked by the
doctor "which of my two hands is closer to you?" she couldn't respond.
However, when she was told to reach for the nearest hand, she could
do it. This should be a "clue" that a lot of object recognition
and vision-guided action evolved long before linguistic capabilities.
(Of course there's a bug there - how could she interpret the verbal
input?) Anyway, I'm just reporting.

Finally, I have a confession to make: I have not yet read BCP; the
main reason is that it is not as popular in Europe as it is in
the US :-) If one of the American participants has access to
a copy he can buy, I will be happy to reimburse the expense.


--

Oded Maler, VERIMAG, Miniparc ZIRST, 38330 Montbonnot, France
Phone: 76909635 Fax: 76413620 e-mail: Oded.Maler@imag.fr

[ From Ray Allis 930908.1030 PDT ]
(Oded.Maler@imag.fr Wed Sep 8 03:57:57 1993)

The main impression
is, however, that the AI community has already abandoned any aspiration
for modeling the mind, and is dedicated to various sorts of software
engineering and its corresponding theories (logic, complexity, etc.)

Abandoned years ago; John McCarthy expressed on usenet comp.ai (circa
1986?) his view that AI is a sub-discipline of computer science. There
seemed to me to be general agreement with that among usenet comp.ai
readers. My view is decidedly orthogonal.


--
Ray Allis - Boeing Computer Services
ray@atc.boeing.com

[Michael Fehling 930908 10:58 AM PDT]

In re Ray Allis 930908.1030 PDT
      in his reply to Oded.Maler@imag.fr Wed Sep 8 03:57:57 1993 --

I want to counter an impression given by the comments of Oded and Ray on
current emphases in artificial intelligence (AI) theory and research. More
importantly, I want to suggest a significant opportunity created by recent
developments in AI that run counter to the observations of Oded and Ray.

  Regardless of the de facto focus of IJCAI (and AAAI) this year, there has
always been, and _continues_to_be_, a schism in the AI community between
those who take a "computer-science" stance and those who take a "cognitive
science" stance. First, a very large part of the work coming out of
Carnegie-Mellon Univ. (CMU) is decidedly focused on AI as modeling human
cognition. Students from this program are continuing this work at many
other institutions. The intellectual leaders of this movement are the
late Allen Newell and Herbert Simon. Besides the very large group
continuing Newell and Simon's program, there are many others quite
actively doing their own AI as cognitive science, such as John Anderson,
John Pollack, Barbara Hayes-Roth, etc.

  Incidentally, just before he died Newell published a book entitled
_Unified_Theories_of_Cognition_ that may be worth a look by proponents of PCT.
In this book, Newell argues that it is time that psychologists formulate and
empirically validate computational "agent architectures" that model the whole
range of capabilities of the human "cognitive system (sic)." Newell goes on
to describe his own group's efforts to build such an agent architecture. This
group includes work by former students and other colleagues at many other
institutions. Newell's followers are building SOAR, their particular proposal
for an intelligent agent architecture. Crucially, they are doing experiments
and field studies to test SOAR's ability to account for a very broad range of
human behavior patterns.

  Newell's book may interest PCT proponents in at least two ways. First, it
represents arguably the best that AI as cognitive science has to offer so far.
(John Anderson's work might be proposed by some as an alternative with
comparable coverage.) Newell's book clearly presents the dominant view within
cognitive science of how to computationally model cognition. It is worth reading
for that reason alone. It gives a very good account of what PCT is up against
today.

  Second, and perhaps more importantly, I think Newell's book creates a
significant opportunity for PCT. For one thing, Newell and others have, at
least, softened biases among psychologists and other cognitive scientists
against comprehensive theories as illustrated by Newell's account of unified
theories of cognition. (N.B., John Anderson's work has also contributed to
this reawakened interest in comprehensive theory.) Moreover, Newell's
stipulations for building and testing a "unified theory of cognition" seem to
be more than fully met by Bill Powers' PCT program.

  So, a real opportunity has been created to present PCT as an alternative to
the kinds of comprehensive theories to which members of the AI and cognitive
science communities have been exposed so far. Some very hard work would need
to be done to exploit this opportunity. For example, to compete with a theory
like Newell's, one would need to begin building a comprehensive agent
architecture based entirely on PCT principles. This would require a
coordinated effort by many to design and then implement a succession of model
prototypes. In parallel, a set of specific experiments would need to be
devised that could be carried out with real subjects and simulated with the
model.
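
  To make this concrete, here is one possible seed of such an effort -- a toy
two-level control hierarchy in Python. It is only an illustrative sketch: the
scenario, names, and gains are all invented for this example and come from no
existing PCT (or SOAR) program. The point is simply that the higher system
acts by setting the reference signal of the lower one, and both keep their
perceptions near their references when an unsensed disturbance changes.

# Minimal two-level PCT sketch (illustrative only; every name and number
# here is hypothetical, not taken from SOAR or any published PCT model).

def simulate(steps=6000, dt=0.01):
    temp_ref = 22.0     # what the higher system wants to perceive
    ambient = 5.0       # unsensed environmental disturbance
    valve_ref = 0.0     # reference the higher system sends to the lower one
    motor_out = 0.0     # output of the lower system
    valve = 0.0         # the lower system's controlled variable
    temp = ambient      # the higher system's controlled variable

    for step in range(steps):
        if step == steps // 2:
            ambient = -5.0                      # sudden change in the world

        # Higher system: perceives temperature, slowly integrates its error
        # into the reference it hands down to the lower system.
        valve_ref += dt * 0.2 * (temp_ref - temp)

        # Lower system: perceives valve position, drives the motor so that
        # this perception matches the reference from above.
        motor_out += dt * 10.0 * (valve_ref - valve)

        # Environment: the motor positions the valve; valve heat plus the
        # ambient disturbance jointly determine the temperature.
        valve = motor_out
        temp += dt * 0.5 * (2.0 * valve + ambient - temp)

    return temp, valve

if __name__ == "__main__":
    final_temp, final_valve = simulate()
    print("temperature: %.2f (reference 22.0)" % final_temp)
    print("valve ended at %.2f, wherever it had to be" % final_valve)

A real architecture would need many such systems, plausible perceptual
functions, and a program of experiments, but even a fragment like this can
be run side by side with data from human subjects performing a matched task.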

  I think that this opportunity can and should be exploited. I've spent the
last fifteen years building the kinds of computational architectures called
for in this case. In reading about PCT, it seems to me that it is a theory
whose implications can be truly understood only when a comprehensive "agent"
model has been built to explicate the complex interactions within and between
control levels and the complex interactions with the environment that result.
And, as daunting as this task seems, my experience with this kind of modeling
leads me to believe that such a model can be constructed.

  I would urge those doing (or hoping to do) hands-on development of PCT to
have a look at Newell's book in the light of my suggestions. After looking it
over, sit back and imagine how a similar project based on PCT, rather than
Newell's theory, would compare. If there is interest in this by people who
are willing to acquaint themselves with the work I've mentioned, then I would
be pleased to explore these ideas further.

- michael -

[From Chris Malcolm]

Ray Allis writes:

(Oded.Maler@imag.fr Wed Sep 8 03:57:57 1993)

The main impression
is, however, that the AI community has already abandoned any aspiration
for modeling the mind, and is dedicated to various sorts of software
engineering and its corresponding theories (logic, complexity, etc.)

Abandoned years ago; John McCarthy expressed on usenet comp.ai (circa
1986?) his view that AI is a sub-discipline of computer science. There
seemed to me to be general agreement with that among usenet comp.ai
readers. My view is decidedly orthogonal.

As is the view of us here in the Dept of Artificial Intelligence at
Edinburgh. For more than thirty years we have successfully resisted the
view of the Dept of Computer Science that we are a sub-discipline of
theirs. Seems to make no more sense than saying that astronomy is a
sub-discipline of telescope science.

Chris Malcolm

[ Ray Allis 930908.1545 PDT ]
(Michael Fehling 930908 10:58 AM PDT)

This is tangential to the present discussion, but I want to avoid the
impression that my comments were based only on my observations of this
year's IJCAI meeting. I wasn't present at the 1956 founders' meeting,
but I've been involved with "AI" since "Computers and Thought" was new
(and I, myself, was somewhat newer). Newell and Simon are the people
who proposed the "Physical Symbol System Hypothesis", which asserts
that digital computers are sufficient to replicate human thought. I
take issue with this hypothesis. I listen in on this conversation
because I believe PCT (Bill Powers) has stepped outside that
logico-deductive framework and offered a very significant insight into
intelligent behavior.


---
For 35+ years work on machine intelligence has concentrated on only
part of the problem. Improving the technology of logic is useful, but
is computer science, and does not accomplish thinking as humans do it.
We should not confuse and mislead people by claiming that "AI" is trying to
produce human "thinking", if in fact the goal is something less.

The goal was, and for some people still is, the development of
intelligent machines. Several very useful applications of computers
have been developed, but there has been no success in achieving that
original goal. There are as yet no intelligent machines; no machines
which demonstrate intelligent behavior, and no clear promise of ever
having such machines. The devices which have so far been built are
lifeless, mechanical automatic deduction machines, brittle mechanisms,
and wholly untrustworthy as agents of our welfare.

We wish for a "calculus" which allows perfect reasoning, a goal sought
by Boole, for example, and earlier people, essentially back to the
Greeks who defined formal logic. Digital computers are a physical
implementation of formal, deductive reasoning. Newell and Simon in
1976 (in their Turing Award Lecture) named digital computers "Automatic
Physical Symbol Systems", and to me that seems a proper description.

There have historically been two approaches to building intelligent
machines. One approach is to try to understand how human brains work,
and build machines which duplicate that operation. The reasoning is
that humans are the best, maybe the only, example of intelligence. It
may be that in order to be intelligent, a machine will have to do it
the same way a human does.

The other approach is to try to produce intelligence without
restricting the methods we use. This line of reasoning leads to a
notion of intelligence as something separable from humans. It requires
however, that we understand intelligence well enough to construct
intelligent machines without simply copying a human. The second group
of people say "We fly much better than birds, and we don't flap our
wings!" Humans do indeed travel very rapidly in the atmosphere, and
that is the objective, but it can be argued that that is "flying" in
the sense that a "flying squirrel" flies, not flying in the same sense
that a bat or a dragonfly or an eagle flies. Incidentally, we should
note that flying is a much more clearly defined task than "behaving
intelligently".

There are two reasons the first approach has not been pursued as
seriously as the second. First, it is very difficult and it may be
that researchers chose seemingly achievable tasks first. It does seem
psychologically more comfortable to work on tasks where you have some
idea what to try. We are accumulating knowledge about the brain's
operation steadily, but slowly. We do not yet know enough to build an
artificial brain. Second, the people who populate the field of AI
research for the most part share a particular philosophical view that
human thinking equals logical reasoning. In their opinion digital
computers, as the best-yet logical engines, are suitable to duplicate
human thinking.

The fundamental limitation of (machine) reasoning systems which use only
formal logic is this: the statements which are being manipulated have
no meaning. Statements are manipulated as symbols, as with an algebra,
deliberately independent of any meaning. Meaning and relevance are
attached by humans to the statements before they are submitted to the
mechanical reasoning process and after they are returned. Meaning has
no place during the process. Any statement, such as "All men are
mortal", or "Dry wines have less sugar than white wines", is an extreme
abstraction from observation, from sensation, from perceptual
experience. These statements are David Hume's "matters of fact".

Deductive logic can thoroughly explore the implications in a set of
premises by applying a set of rules. However, the initial premises are
no more or less than observations of reality as clearly as they can be
perceived by humans, and at some point it is necessary to compare the
results against observed reality.
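
To see how little the machinery cares about content, here is a throwaway
sketch (mine, in Python, not anything from the AI literature): a few lines
that apply modus ponens to opaque tokens. Replace "man" and "mortal" with
nonsense strings and the derivation is unchanged, because the procedure
never touches what the symbols mean.

def forward_chain(facts, rules):
    """Apply modus ponens to opaque tokens until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# "All men are mortal; Socrates is a man" -- encoded as strings the
# program never interprets.
rules = [("man(socrates)", "mortal(socrates)")]
facts = ["man(socrates)"]
print(forward_chain(facts, rules))   # both tokens are now "derived"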

The missing "cognitive abilities" are of course induction and analogy.

Intelligent behavior requires that induction be performed using as
nearly as is possible the full connotation of experience. This
requires recourse to the raw observation, or at least to the very
lowest level of abstraction as the sensory experience is stored. This
means that the machine must sense its environment, for itself, with no
intermediate abstraction by us, and that experience must be represented
in a way that makes induction and analogy possible (i.e. analog(ue)).

Given that a computer is -used- as a computer, i.e. to manipulate symbols:

(1) Symbol manipulation is deduction.

(2) Digital computers are symbol manipulators.

(3) Induction and analogy are not deduction; not logic.

(4) Induction and analogy are necessary to (human) intelligent behavior.

(5) Digital computers cannot exhibit induction and analogy. (1,2,3)

(6) Digital computers are not sufficient for (human) intelligent behavior. (4,5)

( Outburst terminated. We now return control of your set, which will
  continue with regularly scheduled programming ... )

--
Ray Allis - Boeing Computer Services
ray@atc.boeing.com

[Michael Fehling 930908]

In re Ray Allis 930908.1545 PDT --

Ray,

I'm very sorry to have inspired a flame. I merely meant to address Oded's
specific characterization of AI as largely focused on "software engineering"
(or knowledge engineering, etc.), or your assertion that AI is generally seen
as a subdiscipline of computer science. Yes, those views are a part of AI
research. But, they are only a part. Remember that IJCAI is only one among
many conferences in which AI research is presented. Others include Cognitive
Science Society, various meetings on computational linguistics, philosophy
conferences, etc. Regardless of your view of AI's history, a significant
portion of these conferences report research on a computational view of
_human_ intelligence, including continuing discussions of the hypothesis that
machines can think, which you seem to find so flawed. So, you may find
fault with the results reported or with the premises of the theories
investigated, but that's not the point I made.

  More importantly, I fear that your flame could distract attention from the
proposal that I tried very hard to make clear in my previous message. In
fact, if you read Newell's book carefully, you'll find that your flame even
misses the mark with his original version of this proposal. Newell is _not_
addressing the issue of whether machines can think. Instead, Newell is
arguing that unified theories of cognition can now be formulated and explored
with the aid of computer _models_ of these theories. This is a proposition
about modeling, not about whether machines can think.

  I proposed to take up Newell's challenge because
        (a) PCT may offer a much better foundation for a unified theory of
            cognition (and conation and affect) than the theory of
            "problem-spaces" underlying Newell's SOAR, and
        (b) developing a comprehensive model will allow PCT researchers to
            explore implications of the theory, operationalize testable
            hypotheses, etc. in far more depth than the diagrammatic
            presentations I have seen so far.

  Suppose we do this and find that, indeed, our model of a PCT-agent
explicates the nature of intelligent behavior much better than its
competitors. Will this tell us whether machines can think? Well, to
paraphrase Rhett Butler: Frankly, Ray, I don't give a damn. :-)

- michael -

Michael Fehling 930909 8:46 AM PDT

In re Tom Bourbon (930909.0840) --

Tom,

In the Jan. '93 issue of the journal _Artificial_Intelligence_ I published a
commentary on Newell's UTC book. One of my main arguments was to compare
Newell's idea of a cognitive (sic) system to hackneyed behaviorist ideas of
Clark Hull and Tolman. As you say, "for all of its comprehensiveness and
impressive complexity, Newell apparently conceived of his SOAR architecture as
the middle part of one gigantic reflex arc, operating in bang-bang, S-R, C-E
fashion..." In fact, SOAR is worth reading precisely because it makes this
S-R commitment so clear.

  I urge you and others on CSG-L to look up my commentary (and that of others
on the list.) Incidentally, my article contrasts Newell's view of how to
explain behavior with Noam Chomsky's idea of a "competence" account.
Since he is a linguist, Avery might find this worth reading.

- michael -

From Tom Bourbon (930909.0840)

I stumble into the office, look at the mail from overnight, and suddenly I
am a day behind!

[Michael Fehling 930908 10:58 AM PDT]

In re Ray Allis 930908.1030 PDT
     in his reply to Oded.Maler@imag.fr Wed Sep 8 03:57:57 1993 --

There have already been several replies to Michael's post on Newell, but here
is my first. It must be brief; my "real" duties are stacked up.

It looks as though I am the only member of the Three Stooges (as
Rick so aptly labeled three of the PCT modelers) who has read, or who
remembers much about, Newell's book from 1990, _Unified_Theories_of_
_Cognition_. The book contains the William James Lectures he delivered at
Harvard, which I thought was almost ironic in that Newell characterized his
SOAR architecture in terms I could not distinguish from a gigantic S-R
reflexological system -- that, in a lecture series named in honor of the
person who a century ago described volitional control of perception.

In "Models and Their Worlds," Bill Powers and I cited and quoted Newell. We
used his ideas from the book as an example of what we called a "hybrid S-R
theory." Here is the section from "Worlds."


++++++++++++++++++++++++++

     A more complex hybrid SR-cognitive model was endorsed by the
cognitive theorist Allen Newell (1990) in the 1987 William James
Lectures. Newell spoke of how "It is possible to step back and
treat the mind as one big monster response function from the total
environment over the total past of the organism to future actions
. . . " (p. 44). On a more immediate scale, he said, "The world is
divided up into microepics which are sufficiently distinct and
independent so that the control system (that is, the mind) produces
different response functions, one after the other" (p. 44). For
strategic purposes, Newell places his theory in the category of
cognitive theories that he says do not effectively explain how
perception and motor behavior are linked to central cognitive
processes. Then he says such theories, ". . . will never cover the
complete arc from stimulus to response, which is to say, never to
tell the full story about any particular behavior" (p. 160). In
his allusion to the reflex arc, Newell remarkably implies the
equivalence of the causal models in his cognitive theory and in
reflexological SR theory.
     In either their simple or complex forms, hybrid SR-cognitive
models produce results identical to those of SR models, and we do
not discuss them further.

++++++++++++++++++

The reference is to Newell's book, _Unified_Theories_of_Cognition_.
"Models and Their Worlds" appeared in CLosed Loop, Winter, 1993, and is
available from Greg Williams, who is on the net.

All I have time for right now is a quick, "there you have it, folks." For
all of its comprehensiveness and impressive complexity, Newell apparently
conceived of his SOAR architecture as the middle part of one gigantic reflex
arc, operating in bang-bang, S-R, C-E fashion -- linear and sequential at
its core. Many other parts of the book make it clear that he conceived of
his system in those terms, but I do not have time to locate them and post
them right now.

If Newell thought of behavior as something that comes out of the far end of
a big reflex, he obviously had not looked closely at behavior and he
certainly had not noticed the phenomenon of control -- a phenomenon that
probably more than any other defines life.

To date, the research and modeling on PCT is much more modest than the
lofty project to develop the SOAR architecture, but the PCT model explains a
universal phenomenon of behavior, specifically, the behavioral control of
perception. The work on PCT so far is at a level of complexity very much
like that of rolling balls down inclines. I think we must nail down our
facts and methods at that level before we try to reach the moon. And when
the time comes to go to the moon, the laws that apply to balls on the
incline will still apply.
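
For anyone who has not seen one of these models, here is roughly what that
level of modeling looks like -- a sketch only, with a gain and a disturbance
invented for illustration rather than the fitted values reported in "Models
and Their Worlds": one elementary control system keeping a cursor on a
target while an unseen disturbance pushes the cursor around.

import math

def track(seconds=30.0, dt=0.01, gain=8.0):
    target = 0.0     # reference: perceive the cursor on the target
    handle = 0.0     # the model's output (handle position)
    total_error = 0.0
    steps = int(seconds / dt)
    for i in range(steps):
        t = i * dt
        # smooth, slowly varying disturbance acting on the cursor
        disturbance = math.sin(0.3 * t) + 0.5 * math.sin(0.7 * t + 1.0)
        cursor = handle + disturbance     # environment
        error = target - cursor           # reference minus perception
        handle += dt * gain * error       # output integrates the error
        total_error += abs(error)
    return total_error / steps

if __name__ == "__main__":
    print("mean absolute tracking error: %.3f" % track())

Hold the handle still and the disturbance shows up in the cursor at full
strength; let the loop run and it nearly disappears. That difference -- not
any particular output -- is the phenomenon of control.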

Perhaps more on this later. People are waiting at my office door.

Until later,

Tom Bourbon
Tom Bourbon
Department of Neurosurgery
University of Texas Medical School-Houston Phone: 713-792-5760
6431 Fannin, Suite 7.138 Fax: 713-794-5084
Houston, TX 77030 USA tbourbon@heart.med.uth.tmc.edu

From Tom Bourbon [930909.1706]

Michael Fehling 930909 8:46 AM PDT

In re Tom Bourbon (930909.0840) --

Tom,

In the Jan. '93 issue of the journal _Artificial_Intelligence_ I published a
commentary on Newell's UTC book. One of my main arguments was to compare
Newell's idea of a cognitive (sic) system to hackneyed behaviorist ideas of
Clark Hull and Tolman.

Michael, after reading only this far, I decided this is an article I want to
read. I cannot pretend to be on top of even a small part of the literature
in cognitive science and artificial intelligence, but all the while I was
reading Newell's book, I thought, "If this is the best they have to offer
..." And I thought repeatedly of how sad it was that these lectures were
given in the name of William James, when Bill Powers should have been the
one speaking.

As you say, "for all of its comprehensiveness and
impressive complexity, Newell apparently conceived of his SOAR architecture as
the middle part of one gigantic reflex arc, operating in bang-bang, S-R, C-E
fashion..." In fact, SOAR is worth reading precisely because it makes this
S-R commitment so clear.

Having read the book, I believe you selected an excellent place for PCT
people to quickly size up one of the major Grand Theories of our time. In
fact, it will take a PCT reader far less time to come to an opinion about
the theory than to complete the book; the passage I quoted, concerning the
reflex arc from stimulus to response and across the entire history of
the organism, was on page 44.

I urge you and others on CSG-L to look up my commentary (and that of others
on the list.)

That I shall do.

I rather liked this original conclusion to your post. Who do you think keeps
"the list" that came through in your Freudian slip?

Until later,

  Tom
Tom Bourbon
Department of Neurosurgery
University of Texas Medical School-Houston Phone: 713-792-5760
6431 Fannin, Suite 7.138 Fax: 713-794-5084
Houston, TX 77030 USA tbourbon@heart.med.uth.tmc.edu