Turing Tests

[From Rick Marken (960130.1845)]

Remi COTE (290196.1637) --

Turing tests are good, but they can be misleading when dealing with
complex systems.

Actually, the Turing Test is a _bad_ way to deal with control
systems;-) But the Turing Test is not bad because it's _mis_leading;
it's bad because it's _non_leading; with control (retrofactive)
systems, the Turing Test leads nowhere. This is because the Turing
Test is based on a concept of behavior that is precisely the opposite
of the concept of behavior that we get from PCT.

The Turing Test treats behavior (specifically verbal behavior) as
_output_; systems (organisms or computers) generate behavior (for
example, sentences like "I never really liked my mother" or "Green
ideas sleep furiously") in response to inputs (for example, "How
do you feel about your parents?" or "Did you sleep well last night?").
Some of the output generated in response to such inputs is "human-like"
and some isn't.

The Turing Test was designed as a way of testing a machine's
ability to generate human-like verbal output. Turing suggested
that a human and a machine be placed in two different rooms and
that another human try to determine, based on verbal responses
to questions, which room contains the human. If the human judge
can't tell which room contains the human, then the machine is
said to be able to generate "human-like" (or intelligent) behavior.
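Turing's setup, as described above, can be sketched as a procedure: a judge exchanges questions and answers with two hidden parties and guesses which is the human. The following is a hypothetical illustration only; the toy responders and the naive judge are invented for this sketch and are not part of Turing's proposal.

```python
import random

def imitation_game(respond_a, respond_b, questions, judge):
    """Turing's imitation game: the judge sees only verbal transcripts
    from two hidden rooms, never the parties themselves."""
    transcript_a = [(q, respond_a(q)) for q in questions]
    transcript_b = [(q, respond_b(q)) for q in questions]
    return judge(transcript_a, transcript_b)  # judge's guess: "a" or "b"

# Toy stand-ins (hypothetical): the "human" varies its wording,
# the "machine" answers with a canned template.
human = lambda q: random.choice(["Hmm, let me think...", "Good question!"])
machine = lambda q: "PROCESSING QUERY: " + q

def naive_judge(ta, tb):
    """Guess that the less repetitive transcript came from the human."""
    repeats = lambda t: len(t) - len({answer for _, answer in t})
    return "a" if repeats(ta) <= repeats(tb) else "b"

questions = ["How do you feel about your parents?",
             "Did you sleep well last night?"]
print(imitation_game(human, machine, questions, naive_judge))
```

The point of the sketch is only the shape of the protocol: the judge's verdict depends entirely on the transcripts, which is exactly the "output-only" property the discussion below turns on.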

So a machine behaves like a human if its visible outputs "look
like" what would be generated by a human. This is a wonderfully
superficial view of behavior -- but it's precisely the view of
behavior that pervades all of conventional psychology. There is
no inkling that some behavior might have purposes that are not
readily visible -- and some behavior may not.

It's too bad that Turing didn't know about the most important
characteristic of the behavior of living systems: retrofaction.

Best

Rick

[Martin Taylor 960131 11:30]

Rick Marken (960130.1845) to Remi Cote

Turing tests are good, but they can be misleading when dealing with
complex systems.

Actually, the Turing Test is a _bad_ way to deal with control
systems;-) But the Turing Test is not bad because it's _mis_leading;
it's bad because it's _non_leading; with control (retrofactive)
systems, the Turing Test leads nowhere. This is because the Turing
Test is based on a concept of behavior that is precisely the opposite
of the concept of behavior that we get from PCT.

I think you quite misunderstand the nature of the Turing test. It is not
based on any concept of behaviour at all. It is based on the notion that
all you can determine of the outer world is what you perceive. If you are
confronted with a 180 cm biped with two arms and what you would call a
face, wearing fashionable clothes, that talks with you in a sensible and
colloquial manner about matters of mutual interest, you would be likely
to think this object to be human. It might be a cleverly disguised alien,
or a clever robot. But you couldn't know.

Turing was interested in the intelligence inside the skin, not with the
softness of the skin or its colour or the fashion of its clothes. So he
devised a test that would not allow you to use those perceptions in
determining whether you are talking with a human. Likewise, he had no
access to voice synthesis and recognition systems, and he might have
thought that voice quality would also be irrelevant to intelligence. So
he restricted the communication to strings of words. If the "thing" at
the other end could not be distinguished from a human, no matter what you
did at your end to disturb it, he argued that you would have no grounds
for believing its (verbal) intelligence to be non-human.

So a machine behaves like a human if its visible outputs "look
like" what would be generated by a human.

That's the only way you can determine that your colleagues are human.
What you get from CSGnet is very like what you would get from the Turing
test as he envisioned it. Am I human? Is Remi? What more do you want?

This is a wonderfully
superficial view of behavior -- but it's precisely the view of
behavior that pervades all of conventional psychology. There is
no inkling that some behavior might have purposes that are not
readily visible -- and some behavior may not.

And how do you propose to perceive them if, by YOUR definition, you
can't perceive them?

It's too bad that Turing didn't know about the most important
characteristic of the behavior of living systems: retrofaction.

It wouldn't have made any difference if he had. He based his test on the
foundational notion of PCT, that all you can tell of the world is what
you can perceive of it. If humans are indeed retrofactive systems, and
if there are ways to distinguish between a retrofactive system and an
input-output system, the Turing test ensures that an input-output system
will not be confused with a human.
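The distinction at issue here, between a retrofactive (closed-loop control) system and an input-output system, can be sketched numerically. This is a minimal, hypothetical simulation invented for illustration (the gain and disturbance values are arbitrary), not anything from the original exchange: under a constant disturbance, an input-output system's behavior mirrors the disturbance, while a negative-feedback controller acts back on its own perception to cancel it.

```python
def open_loop(inputs, gain=1.0):
    """Input-output system: output is a fixed function of the input,
    so a disturbance on the input passes straight through."""
    return [gain * x for x in inputs]

def control_loop(disturbance, reference=0.0, gain=0.5):
    """Retrofactive (negative-feedback) system: it perceives the
    combined effect of the disturbance and its own output, and
    adjusts that output to keep the perception near the reference."""
    output = 0.0
    perceptions = []
    for d in disturbance:
        perception = d + output            # what the system actually senses
        error = reference - perception     # discrepancy from what it wants
        output += gain * error             # output grows to oppose the disturbance
        perceptions.append(perception)
    return perceptions

# Under a constant disturbance of +5, the open-loop system's "behavior"
# mirrors the disturbance, while the control loop drives its perception
# back toward the reference (0.0).
dist = [5.0] * 50
print(open_loop(dist)[-1])     # stays at 5.0
print(control_loop(dist)[-1])  # approaches 0.0
```

Probing such a system with disturbances, rather than just reading its outputs, is what would distinguish the two architectures; a transcript-only test sees the same surface either way.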

Martin