Turing test

[From Bill Powers (951224.1100)]

Shannon Williams (951222) --

You've already got the wet-blanket treatment from Rick and Avery; here
comes another. But the aim isn't to discourage you; it's to point out
that the "Turing Test" you propose addresses only one level of brain
organization, the level that assigns symbols to categories of
experience, manipulates those symbols according to various kinds of
rules, and converts its symbolic output into signals telling lower-level
systems what is to be perceived. This level requires no knowledge of
what the symbols mean, just as a computer does not need to know what
the bits it is manipulating mean, or even what the program does. This
(I think) is a
real level of organization, but it can't work by itself. What you're
proposing to do is interesting from the standpoint of studying this
level of human organization, but to study it properly you have to
understand what it can't do as well as what it can do.

I call this the "program level," but it really encompasses about three
levels. The lowest is the categorizing level, which chops the continuous
flow of lower-level experience into either-or classes of experience.
This is probably the level where symbols get assigned to the classes of
experience, like "dog." Then there is a sequence level, which is like a
pure list-processor that controls for the order in which classes of
relationships and events are brought about: FIRST open the door, THEN
walk through it. The third is the computing level, the rule-driven
level, the level of true programs which are characterized mainly by
choice-points: if (the car is in the garage), then (execute the garage-
door-opening sequence); else (go directly to the car). Two sequences and
a relationship become part of a program structured by the if-then-else
organization.
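
To make that structure concrete, here is a minimal sketch in Python;
the names (car_is_in_garage and so on) are made up for illustration
and are not part of any real model. It shows a category test, a
sequence, and the if-then-else choice point that makes it a program:

# A category: an either-or classification of experience.
def car_is_in_garage():
    return True          # stands in for a perception the program doesn't have

# A sequence: FIRST open the door, THEN walk through it.
def garage_door_opening_sequence():
    for step in ["open the garage door", "walk through it"]:
        print(step)

def go_to_car():
    print("go directly to the car")

# The program proper: a rule with a choice point.
if car_is_in_garage():
    garage_door_opening_sequence()
else:
    go_to_car()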

These are the levels on which AI traditionally focused, as I understand
it. The problem is that these levels lack all experiential meaning --
the above program could just as well be written this way, as far as the
computer is concerned: if(input element 1) then (output element 2) else
(output element 3). The same program structure would apply to an
infinite number of specific situations. The words used are irrelevant to
the computer, although they have meaningful associations with other
experiences for the programmer. The programmer can arrange to tell the
computer that "the car is in the garage" is false, but to the computer
this means only "not element 1." It does not mean that the car is not in
the garage, because the computer can't imagine an empty garage; it can't
see. The computer will not have a sudden feeling of horror and wonder
"What the hell happened to my car?"

I think that I could always tell whether a machine or a person was on
the other end of the line. I would ask it experiential questions,
questions relating to lower orders of perception, that required
imagining experiences rather than simply looking up descriptions and
rules in symbolic form. For example, since we would be communicating
through written words, I might ask "is a lower-case t taller than a
lower-case o?" A human being could tell me right away by looking at the
screen, but a computer would be unable to answer unless someone had
specifically entered the height of a t and the height of an o into the
data base. The world of perceptual attributes is so enormous that it
should be easy to think up questions that the program probably couldn't
answer, but that any human being could.
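
As a toy illustration of the point (not a real program, and the
entries are invented), a purely symbolic question-answerer can return
only what someone has explicitly put into its data base:

knowledge = {
    "is a rose red": "yes",
    "is snow cold": "yes",
}

def answer(question):
    key = question.lower().rstrip("?")
    return knowledge.get(key, "I have no entry for that.")

print(answer("Is snow cold?"))
# -> "yes"
print(answer("Is a lower-case t taller than a lower-case o?"))
# -> "I have no entry for that."  A person simply looks and sees that it is.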

Another kind of question that the computer program couldn't answer like
a human being is "How do you know that?" Suppose I asked, "Is it raining
outside?" The computer might answer "yes" or "no," but when I try to get
it to tell me how it knows, at best it will only be able to describe the
reasoning processes it used. It will never say "Because I just looked
and it is (isn't) raining." Or suppose I asked, "Is your brother in the
room with you?" It might say "no," but when asked how it knows, it would
tell me something like "Because my brother is in Cleveland and I am in
Chicago; my brother cannot be in two different places at the same time."
It would tell me why, logically, the brother is not in the room. But a
real human being would say, "Because I don't see him here, that's how I
know." Or I could ask "Are apples sweeter than oranges?" Whatever the
answer, the reason given to justify it would not be a human reason.

Another distinction that can be made is that computers can run programs,
but they have no way of handling principles or system concepts. If I ask
"Is it better to be safe than sorry?", the computer might be programmed
to have a preference for one of the words, safe or sorry, but it will
not be able to apply this principle to any specific arbitrarily-chosen
situation. Suppose I said, "You can bet (just once): either $1, with a
90% chance of getting $2 back, or $100, with a 50% chance of getting
$1000 back. If you would rather be safe than sorry, which bet would you
make?" The human answer is that you'd rather risk losing only $1, and
only 10% of the time, even though the other bet pays far more on
average. Unless the programmer has specifically treated this situation
in advance, the program will simply choose the bet with the larger
expected payoff.
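
For concreteness, here is the arithmetic, under my own assumption (for
the sketch only) that the program's default rule is to maximize
expected payoff:

bets = {
    "A": {"stake": 1,   "p_win": 0.90, "payout": 2},
    "B": {"stake": 100, "p_win": 0.50, "payout": 1000},
}

for name, b in bets.items():
    expected_gain = b["p_win"] * b["payout"] - b["stake"]
    chance_of_loss = 1 - b["p_win"]
    print(f"Bet {name}: expected gain ${expected_gain:.2f}, "
          f"{chance_of_loss:.0%} chance of losing ${b['stake']}")

# Bet A: expected gain $0.80,   10% chance of losing $1    (the safe choice)
# Bet B: expected gain $400.00, 50% chance of losing $100  (the default pick)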

System concepts are also beyond computer programs. "Would you rather be
a computer program or a human being?"

You're proposing to have the program build its own data base. However,
if the inputs consist only of sentences, symbol strings, the computer
will have no referents for these strings other than more symbol strings.
This means that if you tell it "roses are red," it will know only that
the symbol "rose" carries the attribute of "red". It will not know what
either "rose" or "red" means in the same way a human being knows. A
human being experiences things _before_ attaching any symbol to them,
and experiences their attributes before any attributes are named. The
meanings of the assigned symbols are the non-symbolic experiences, which
are far richer in detail and possible relationships than any sentence
describing them.
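
Here is a toy sketch of what such a data base amounts to (the facts
are invented): every query about a symbol can only come back as more
symbols, and the regress never bottoms out in an experience:

facts = {}

def tell(symbol, attribute):
    facts.setdefault(symbol, set()).add(attribute)

def what_is(symbol):
    # The only possible answer is another set of symbol strings.
    return facts.get(symbol, set())

tell("rose", "red")
tell("red", "a color")

print(what_is("rose"))    # {'red'}
print(what_is("red"))     # {'a color'}
print(what_is("color"))   # set() -- and nothing here is a perception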

As a project for investigating some specific levels of human
organization, your proposal makes sense. But it is not likely to pass a
Turing test, if the questioner understands the human hierarchy of
control.

···

-----------------------------------------------------------------------
Best,

Bill P.

[from Mary Powers 951224]

To Shannon:

On Turing machines:

It seems to me that they suffer from the flaw of being way too
ambitious. I suppose that the idea of needing to communicate
back and forth with a computer in order to find out if it can
pass the Turing test is what drives the notion that a convincing
conversation is the best test. To me it seems like expecting the
Wright brothers to design an F-15. Bill talks about AI people
jumping into the middle of the hierarchy, omitting the "lower"
experiential levels. I add to this that language is far too
complicated a phenomenon to get to in one jump.

Wanting to get to the interesting stuff is typical in the
behavioral sciences. Look at all the books purporting to explain
consciousness. In BCP Bill suggests that it might be a good idea
to first understand how it's possible to stand up - balanced on
the equivalent of sticks stacked vertically on one another and
held together with rubber bands.

So instead of Turing test number 1, maybe we should back down to
number 0. Imagine a person tracking a target on a screen, with
both the target and the joystick subject to a random disturbance.
The person's control characteristics, such as the integration
factor, are then built into a computer model, which is then let
loose on a tracking task with a different, recorded disturbance
pattern. Five years later, the same human does the tracking task
against the disturbances that were recorded when the computer model
did it. How well do human and computer model match? A correlation
of around .97.
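
For the curious, here is a bare-bones sketch (my own simplification,
with invented parameter values) of the kind of model involved: the
handle output integrates the error between a reference position and
the perceived cursor position, and the fitted integration factor is
what carries over to predict the later runs:

import random

dt = 1.0 / 60.0      # one frame at an assumed 60 Hz
k = 8.0              # "integration factor" fitted to the person's behavior
reference = 0.0      # where the person intends to keep the cursor

output = 0.0
disturbance = 0.0
errors = []
for frame in range(3600):                  # one minute of simulated tracking
    disturbance += random.gauss(0, 0.02)   # slowly drifting random disturbance
    cursor = output + disturbance          # what the model "perceives"
    error = reference - cursor
    output += k * error * dt               # output integrates the error
    errors.append(abs(error))

print(f"mean absolute tracking error: {sum(errors) / len(errors):.3f}")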

This appeals to me because it is so simple and the results are so
excellent. This and other tracking tasks devised by Tom Bourbon
have had trouble making their way into the literature because
"correlations that good must mean a tautology" and similar feeble
remarks by reviewers who don't know what else to make of
correlations that high. It probably doesn't quite pass the
Turing test because the computer model is a little too perfect
(no tremor or other flaws, which probably could be programmed
in - by someone with the time and the funding) - and it certainly
doesn't pass the test as currently construed - i.e., verbal. But
this is exactly my point - why on earth are people chasing after
a verbal test before they know anything at all about modelling
the living systems that support the ability to be verbal?

That question also arose for me while reading the fascinating novel
Galatea 2.2, by Richard Powers.

Gotta check the turkey - we anticipate an onslaught of
grandchildren for a Christmas Eve present bash - then on to
Boulder tomorrow for more of the same.

Have a great Christmas
Happy Hanukkah
Merry Mithras
Let's hear it for the Winter Solstice

Mary P.