Turing Tests - Reply

[Hans Blom, 960131]

(Rick Marken (960130.1845))

So a machine [in a Turing Test] behaves like a human if its visible
outputs "look like" what would be generated by a human. This is a
wonderfully superficial view of behavior -- but it's precisely the
view of behavior that pervades all of conventional psychology. There
is no inkling that some behavior might have purposes that are not
readily visible -- and some behavior may not.

An X behaves like a human if its visible outputs "look like" what
would be generated by a human. So is X a human? This has always been
my problem with Asimov's Three Laws of Robotics, whose essential
element is how a robot can recognize whether an X is a human. Will a
robot, which is forced through its inbuilt laws to assist humans,
still help a man dressed in an ape suit emitting ape noises? A
gorilla? An anencephalic? A premature baby? Where are the limits?
Are there any? Or are they so fuzzy that large safety margins will be
required in practice, with some residual error still remaining? I
still have no solution to this recognition problem except the naive
heuristic that if it quacks like a human, etc. ...

Anyway, all I see of others is appearances, behavior. I cannot look
inside others' heads to read their goals. And frequently enough I
have encountered situations where someone told me of his/her goals,
yet I discovered that their behavior made me strongly doubt their
words. I may be tempted to assume that _I_ know their "real" goals,
but that usually turns out to be hubris.

All I know about others is through their "wonderfully superficial"
external appearances. Not much different from a Turing Test, I would
say.

Greetings,

Hans