···
---
For 35+ years, work on machine intelligence has concentrated on only
part of the problem. Improving the technology of logic is useful, but
it is computer science, and it does not accomplish thinking as humans
do it. We should not confuse and mislead people into believing that
"AI" is trying to produce human "thinking" if, in fact, the goal is
something less.
The goal was, and for some people still is, the development of
intelligent machines. Several very useful applications of computers
have been developed, but there has been no success in achieving that
original goal. There are as yet no intelligent machines, no machines
which demonstrate intelligent behavior, and no clear promise of ever
having such machines. The devices which have so far been built are
lifeless, mechanical automatic deduction machines, brittle mechanisms,
and wholly untrustworthy as agents of our welfare.
We wish for a "calculus" which allows perfect reasoning, a goal sought
by Boole, for example, and by earlier thinkers reaching back to the
Greeks who defined formal logic. Digital computers are a physical
implementation of formal, deductive reasoning. Newell and Simon in
1976 (in their Turing Award Lecture) named digital computers "Automatic
Physical Symbol Systems", and to me that seems a proper description.
There have historically been two approaches to building intelligent
machines. One approach is to try to understand how human brains work,
and build machines which duplicate that operation. The reasoning is
that humans are the best, maybe the only, example of intelligence. It
may be that in order to be intelligent, a machine will have to do it
the same way a human does.
The other approach is to try to produce intelligence without
restricting the methods we use. This line of reasoning leads to a
notion of intelligence as something separable from humans. It requires,
however, that we understand intelligence well enough to construct
intelligent machines without simply copying a human. The second group
of people say "We fly much better than birds, and we don't flap our
wings!" Humans do indeed travel very rapidly in the atmosphere, and
that is the objective, but it can be argued that this is "flying" only
in the sense that a "flying squirrel" flies, not in the sense that a
bat or a dragonfly or an eagle flies. Incidentally, we should
note that flying is a much more clearly defined task than "behaving
intelligently".
There are two reasons the first approach has not been pursued as
seriously as the second. First, it is very difficult, and it may be
that researchers chose seemingly achievable tasks first. It does seem
psychologically more comfortable to work on tasks where you have some
idea what to try. We are accumulating knowledge about the brain's
operation steadily, but slowly. We do not yet know enough to build an
artificial brain. Second, the people who populate the field of AI
research for the most part share a particular philosophical view that
human thinking equals logical reasoning. In their opinion, digital
computers, as the best-yet logical engines, are suitable to duplicate
human thinking.
The fundamental limitation of (machine) reasoning systems which use only
formal logic is this: the statements which are being manipulated have
no meaning. Statements are manipulated as symbols, as with an algebra,
deliberately independent of any meaning. Meaning and relevance are
attached by humans to the statements before they are submitted to the
mechanical reasoning process and after they are returned. Meaning has
no place during the process. Any statement, such as "All men are
mortal", or "Dry wines have less sugar than white wines", is an extreme
abstraction from observation, from sensation, from perceptual
experience. These statements are David Hume's "matters of fact".
Deductive logic can thoroughly explore the implications of a set of
premises by applying a set of rules. However, the initial premises are
no more and no less than observations of reality, as clearly as it can
be perceived by humans, and at some point it is necessary to compare the
results against observed reality.
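To make this concrete, here is a minimal sketch (my own illustration in
Python; the function name and the tokens are invented for the example)
of a mechanical deducer applying modus ponens to bare tokens. Rename
every token and the derivation is unchanged; whatever meaning there is
lives entirely in the humans reading it.

    # A toy forward-chaining deducer: repeatedly apply modus ponens.
    # The tokens are opaque strings; the program attaches no meaning
    # to them, which is exactly the point above.
    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in rules:
                if antecedent in facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True
        return facts

    # "Socrates is a man, and all men are mortal" -- but only to us.
    # To the machine these are arbitrary, interchangeable symbols.
    rules = [("socrates_is_a_man", "socrates_is_mortal")]
    facts = ["socrates_is_a_man"]
    print(forward_chain(facts, rules))
    # -> {'socrates_is_a_man', 'socrates_is_mortal'}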
The missing "cognitive abilities" are of course induction and analogy.
Intelligent behavior requires that induction be performed using, as
nearly as possible, the full connotation of experience. This
requires recourse to the raw observation, or at least to the very
lowest level of abstraction at which the sensory experience is stored. This
means that the machine must sense its environment, for itself, with no
intermediate abstraction by us, and that experience must be represented
in a way that makes induction and analogy possible (i.e. analog(ue)).
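As a crude illustration of the representation point (again my own
sketch, with invented numbers, not a proposal for how to build such a
machine), compare a graded similarity over raw sensor values with the
all-or-nothing identity test that is all a symbol manipulator has:

    # Hypothetical raw sensor readings, kept as numbers rather than
    # being abstracted into symbols.
    reading_a = [0.90, 0.12, 0.33]
    reading_b = [0.88, 0.15, 0.30]

    def analog_similarity(a, b):
        # Graded similarity in (0, 1]: nearby observations score near 1,
        # which gives induction and analogy something to work with.
        distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + distance)

    def symbolic_similarity(a, b):
        # A symbol is either identical to another symbol or it is not.
        return 1.0 if a == b else 0.0

    print(analog_similarity(reading_a, reading_b))   # about 0.96
    print(symbolic_similarity("warm", "warmish"))    # 0.0, no partial match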
Given that a computer is -used- as a computer, i.e. to manipulate symbols:
(1) Symbol manipulation is deduction.
(2) Digital computers are symbol manipulators.
(3) Induction and analogy are not deduction; not logic.
(4) Induction and analogy are necessary to (human) intelligent behavior.
(5) Digital computers cannot exhibit induction and analogy. (1,2,3)
(6) Digital computers are not sufficient for (human) intelligent behavior. (4,5)
( Outburst terminated. We now return control of your set, which will
continue with regularly scheduled programming ... )
--
Ray Allis - Boeing Computer Services
ray@atc.boeing.com