[From Bill Powers (951224.1100)]
Shannon Williams (951222) --
You've already got the wet-blanket treatment from Rick and Avery; here
comes another. But the aim isn't to discourage you; it's to point out
that the "Turing Test" you propose addresses only one level of brain
organization, the level that assigns symbols to categories of
experience, manipulates those symbols according to various kinds of
rules, and converts its symbolic output into signals telling lower-level
systems what is to be perceived. This level requires no knowledge of what
the symbols mean, just as a computer does not need to know what the bits
it is manipulating mean, or even what the program does. This (I think) is a
real level of organization, but it can't work by itself. What you're
proposing to do is interesting from the standpoint of studying this
level of human organization, but to study it properly you have to
understand what it can't do as well as what it can do.
I call this the "program level," but it really encompasses about three
levels. The lowest is the categorizing level, which chops the continuous
flow of lower-level experience into either-or classes of experience.
This is probably the level where symbols get assigned to the classes of
experience, like "dog." Then there is a sequence level, which is like a
pure list-processor that controls for the order in which classes of
relationships and events are brought about: FIRST open the door, THEN
walk through it. The third is the computing level, the rule-driven
level, the level of true programs which are characterized mainly by
choice-points: if (the car is in the garage), then (execute the garage-
door-opening sequence); else (go directly to the car). Two sequences and
a relationship become part of a program structured by the if-then-else
organization.
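The three levels can be caricatured in code. Here is a minimal sketch (all the names are my own, purely illustrative): category tests that chop percepts into either-or classes, a fixed sequence, and a program whose if-then-else choice point selects among them.

```python
def car_is_in_garage(percepts):
    # Category level: reduce continuous experience to an either-or class.
    return "car" in percepts and "garage" in percepts

def garage_door_opening_sequence():
    # Sequence level: control the ORDER in which events are brought about.
    return ["open the door", "walk through it"]

def get_to_car(percepts):
    # Program level: a rule-driven choice point over categories and sequences.
    if car_is_in_garage(percepts):
        return garage_door_opening_sequence() + ["get in the car"]
    else:
        return ["go directly to the car", "get in the car"]

print(get_to_car({"car", "garage"}))
```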
These are the levels on which AI traditionally focused, as I understand
it. The problem is that these levels lack all experiential meaning --
the above program could just as well be written this way, as far as the
computer is concerned: if(input element 1) then (output element 2) else
(output element 3). The same program structure would apply to an
infinite number of specific situations. The words used are irrelevant to
the computer, although they have meaningful associations with other
experiences for the programmer. The programmer can arrange to tell the
computer that "the car is in the garage" is false, but to the computer
this means only "not element 1." It does not mean that the car is not in
the garage, because the computer can't imagine an empty garage; it can't
see. The computer will not have a sudden feeling of horror and wonder
"What the hell happened to my car?"
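The point about meaninglessness can be made concrete. The sketch below (hypothetical, of course) is the whole program as the computer sees it; nothing in it refers to cars, garages, or anything else that can be seen.

```python
# The program from the text, as the computer actually "sees" it:
def program(element_1):
    if element_1:
        return "element 2"   # stood in for: the garage-door-opening sequence
    else:
        return "element 3"   # stood in for: go directly to the car

# The same structure fits an infinite number of specific situations;
# "not element 1" carries no image of an empty garage.
print(program(False))
```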
I think that I could always tell whether a machine or a person was on
the other end of the line. I would ask it experiential questions,
questions relating to lower orders of perception, that required
imagining experiences rather than simply looking up descriptions and
rules in symbolic form. For example, since we would be communicating
through written words, I might ask "is a lower-case t taller than a
lower-case o?" A human being could tell me right away by looking at the
screen, but a computer would be unable to answer unless someone had
specifically entered the height of a t and the height of an o into the
data base. The world of perceptual attributes is so enormous that it
should be easy to think up questions that the program probably couldn't
answer, but that any human being could.
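To sketch the point: a purely symbolic question-answerer (illustrative names, not any real system) can only report what someone entered into it. A perceptual fact nobody thought to enter, like the relative heights of letters, simply isn't there.

```python
# A toy symbolic database. The "t taller than o" fact was never entered,
# so no amount of lookup will produce it.
knowledge = {
    ("dog", "is-a"): "animal",
    ("rose", "color"): "red",
}

def answer(query):
    # Pure lookup: no perception, no imagination, just stored symbols.
    return knowledge.get(query, "I don't know")

print(answer(("rose", "color")))
print(answer(("t", "taller-than-o")))
```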
Another kind of question that the computer program couldn't answer like
a human being is "How do you know that?" Suppose I asked, "Is it raining
outside?" The computer might answer "yes" or "no," but when I try to get
it to tell me how it knows, at best it will only be able to describe the
reasoning processes it used. It will never say "Because I just looked
and it is (isn't) raining." Or suppose I asked "Is your brother in the
room with you?" It might say "no," but when asked how it knows it would
tell me something like "Because my brother is in Cleveland and I am in
Chicago; my brother cannot be in two different places at the same time."
It would tell me why, logically, the brother is not in the room. But a
real human being would say, "Because I don't see him here, that's how I
know." Or I could ask "Are apples sweeter than oranges?" Whatever the
answer, the reason given to justify it would not be a human reason.
Another distinction that can be made is that computers can run programs,
but they have no way of handling principles or system concepts. If I ask
"Is it better to be safe than sorry?", the computer might be programmed
to have a preference for one of the words, safe or sorry, but it will
not be able to apply this principle to any specific arbitrarily-chosen
situation. Suppose I said, "You can bet (just once) $1 with a 90% chance
of getting $2 back, or $100 with a 50% chance of getting $1000 back. If
you would rather be safe than sorry, which bet would you make?" The
human answer is that you'd rather risk losing only $1, and only 10% of
the time. Unless the programmer has specifically treated this situation
in advance, the program will choose the bet with the larger expected
payoff.
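The arithmetic of the two bets can be worked out explicitly (dollar figures taken from the text above):

```python
# Expected value and worst case for each bet.
bets = {
    "small": {"stake": 1,   "p_win": 0.9, "payout": 2},
    "large": {"stake": 100, "p_win": 0.5, "payout": 1000},
}

evs = {}
for name, b in bets.items():
    # Expected value: win probability times payout, minus the stake.
    evs[name] = b["p_win"] * b["payout"] - b["stake"]
    print(name, "expected value:", round(evs[name], 2),
          "| worst case: lose", b["stake"],
          "with probability", round(1 - b["p_win"], 2))

# An expected-payoff maximizer picks the large bet (EV 400 vs. 0.8);
# "safe than sorry" prefers risking only $1, ten percent of the time.
```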
System concepts are also beyond computer programs. "Would you rather be
a computer program or a human being?"
You're proposing to have the program build its own data base. However,
if the inputs consist only of sentences, symbol strings, the computer
will have no referents for these strings other than more symbol strings.
This means that if you tell it "roses are red," it will know only that
the symbol "rose" carries the attribute of "red". It will not know what
either "rose" or "red" means in the same way a human being knows. A
human being experiences things _before_ attaching any symbol to them,
and experiences their attributes before any attributes are named. The
meanings of the assigned symbols are the non-symbolic experiences, which
are far richer in detail and possible relationships than any sentence
describing them.
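A sketch of what such a sentence-built database amounts to (all names illustrative): every "meaning" bottoms out in more symbol strings, never in a non-symbolic experience.

```python
# A database built only from sentences: symbols linked to symbols.
facts = {}

def tell(subject, attribute):
    # Store "roses are red" as: the symbol "rose" carries the symbol "red".
    facts.setdefault(subject, set()).add(attribute)

tell("rose", "red")
tell("red", "a color")

def meaning_of(symbol):
    # Asking what a symbol means only returns more symbol strings.
    return facts.get(symbol, set())

print(meaning_of("rose"))
print(meaning_of("red"))
```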
As a project for investigating some specific levels of human
organization, your proposal makes sense. But it is not likely to pass a
Turing test, if the questioner understands the human hierarchy of
control.
-----------------------------------------------------------------------
Best,
Bill P.