Cariani on AI

[From Bill Powers (930921.2030 MDT)]

Ed Ford (direct post) --

Sure, go ahead and send me the materials. I'll be happy to look
them over.


----------------------------------------------------------------
Peter Cariani (930921) --

I just wanted to second (with a rant of my own) Ray Allis'
remarks (9/9) on the symbolic logic straitjacket that has
enveloped most "AI" since the 1956 Dartmouth conference. It was
at this conference that it was decided (erroneously, in my
opinion) by the proto-AI community that all processes are
amenable to digital, rule-governed description, and that
therefore the route to artificial intelligence would necessarily
and solely involve logic-based problem-solving.

BBS, Sept. 1993, has two segments that are germane to your point.
The first is a target article by Shastri and Ajjanagadde, all
about a "connectionist representation of rules, variables, and
dynamic bindings using temporal synchrony," and the second
consists of some continuing commentary and a reply on Roger
Penrose's _The Emperor's New Mind_.

The first is an example of an approach that is probably just as
mistaken as early AI was, although here the mistake is hidden in
pseudo-circuitry instead of logic. It's the idea that
"reflexive reasoning" rests on a vast knowledge base, expressed
by the authors as propositions that relate to other propositions
according to rules. They diagram the propositions as
connectionist circuits, but basically assume that every statement
to which the knowledge base may lead is stored somewhere in the
knowledge base, in some form.
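The assumption just described can be made concrete with a toy sketch (all propositions, rules, and names below are invented for illustration, not taken from the target article): a fixed base of symbol-string propositions, plus rules that forward-chain to new propositions, so that anything the system can "know" is a stored or rule-derivable string.

```python
# Toy propositional knowledge base with forward chaining.
# Everything the system can ever report must already be here,
# either stored directly or reachable by symbol-shuffling rules.

facts = {("John", "owns", "book")}

# Rule: owns(x, y) -> can_sell(x, y), encoded as
# (relations that trigger the rule, relation to conclude).
rules = [(("owns",), "can_sell")]

def closure(facts, rules):
    """Forward-chain until no new propositions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for preds, concl in rules:
            for (s, r, o) in list(derived):
                if r in preds and (s, concl, o) not in derived:
                    derived.add((s, concl, o))
                    changed = True
    return derived

print(("John", "can_sell", "book") in closure(facts, rules))  # True
```

Note what the sketch cannot do: ask it anything whose answer was neither stored nor anticipated by a rule, and it simply has no answer — which is exactly the limitation Munsat's commentary (quoted below) puts its finger on.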

The best commentary on this article is by Stanley Munsat (p.
466). He sets up a thought problem in which a detective asks you
questions about two masked men who have just held up the
restaurant where you were eating.

  The detective asks, "How tall was the one with the green ski
  mask?" I hesitate. He says, "Was he taller than the cashier?"
  I immediately reply "yes." Was that fact already encoded in the
  LTKB? [LTKB = Long-Term Knowledge Base]

Munsat offers the alternative that what we know is not stored as
(propositional) facts, but as experiences holistically stored,
from which we can generate statements of fact "when the need
arises." This is what I've been trying to say in referring to an
underlying continuum of analog experience even at the highest
levels, with statements or propositions simply representing our
attempt to describe these underlying experiences in a symbolic
language. We answer the detective's question not by searching
through a linked knowledge base of statements, but by looking at
the memory of the experience and, paying attention to the
relationship suggested by the detective, creating a description
(or seeing the validity of one) on the spot. The levels of
organization that give us the ability to generate propositions
are not themselves run by propositions; propositions are their
_product_, not their mechanism. A description is a product of
something that can perceive and then employ methods of symbol-
generation to represent the perception in a conventional way.
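The contrast between the two ways of answering the detective can be put in code. In this sketch (the scenario and all numbers are invented for illustration), the propositional route can only report what was stored in advance, while the "experience" route keeps raw, continuous magnitudes and generates the proposition at the moment the question is asked.

```python
# (1) Propositional route: the answer must already exist as a stored fact.
ltkb = {
    ("robber_green_mask", "taller_than", "cashier"),  # pre-stored, or nothing
}

def kb_answer(subj, rel, obj):
    """Look the proposition up; an unanticipated question simply fails."""
    return (subj, rel, obj) in ltkb

# (2) Experience route: only analog magnitudes of the scene are remembered;
# the comparison is computed on demand, when the detective asks.
experience = {
    "robber_green_mask": {"height_cm": 185.0},
    "cashier": {"height_cm": 170.0},
}

def generate_answer(subj, obj):
    """Create the proposition on the spot by attending to the remembered scene."""
    return experience[subj]["height_cm"] > experience[obj]["height_cm"]

print(kb_answer("robber_green_mask", "taller_than", "cashier"))  # True only because pre-stored
print(generate_answer("robber_green_mask", "cashier"))           # True, derived on the spot
```

The second route can answer any comparison the stored scene supports, including ones never anticipated; the first can answer only questions someone thought to encode. That is the sense in which propositions are the _product_ of the underlying perceptual levels, not their mechanism.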

To my surprise, Penrose seems to be on the same track. Some of
the delayed comments on his book focused on his treatment of
Gödel's Theorem, which Penrose had characterized as not being
provable algorithmically. He explained further what he meant by
that. For example, he says (in one long sentence)

  But the insight I intended to express, which comes from Gödel's
  theorem, appears to get somewhat lost if one simply phrases it
  as the formal theorem that Davis refers to, because the
  "insight" involves not just a piece of _formal_ mathematics --
  that is, a succession of strings of symbols correctly obeying
  the rules of some formal system -- but also the _understanding_
  of the underlying _meanings_ that the symbolisms of the
  formalism encode (such as the fact that the actual meaning of
  the symbol ∀ is to be "for all natural numbers" etc.)
  (pp. 616-617)

If he had carried this one or two steps further he would have
been saying just what I am saying. The meaning that is referred
to by the phrase "for all natural numbers" does not consist of
the words "for", "all", "natural", and "numbers," but of the
_meanings_ of those terms and their arrangement, which are not
words but the perceptions to which the words refer (at some high
level). The ability to perceive things like sequence, program,
principle, and system concept does not deliver up strings of
words, but the perceptions themselves, directly. The words and
other symbols point to those perceptions, but are empty unless
the person seeing or hearing the words and symbols already has
experienced the right kinds of wordless symbol-less perceptions
(or, through reorganizing in the attempt to make sense of the
strings of symbols, comes to find perceptions that seem to fit
the symbolic description). A proposition is not itself a fact,
but a _description_ of a (putative) fact.

If you read Penrose's full remarks, I think you might agree that
he is looking for something along these same lines.

What you say is, for my money, right on the mark and worth
repeating:

  To any practicing scientist, it is inconceivable that we could
  do away with making measurements, and to any practicing
  engineer (besides computer scientists) it would be inconceivable
  that we could do away with effectors that act on the world, but
  this is exactly what the AI community did by excluding real
  perception and action.

---------------------------------------------------------------
Best to all,

Bill P.