Turing Tests, perception, and empirical questions

[Peter Cariani, Fri Feb 2 96, 11AM]
Hi, everyone. I had some problems posting these 2 messages, so I
have concatenated them and am trying again. -- Peter Cariani

In the discussion about Turing Tests, Martin wrote:

The Turing Test restricts your perceptions and actions in only
one way: you can affect the test object only through language, and you
can see only its language actions. Otherwise, do what you will.

Turing's Test is actually even worse than this. It systematically excludes
both perception and action from the evaluative process. This is an
omission even more basic than leaving out action-percept-action loops
(which, I gather during my absence from CSG, have come to be called
"retrofaction"). I don't think that was at all
accidental -- Turing was after those aspects of mind that could be
encoded symbolically, not those aspects of mind that are dependent on
other kinds of operations/interactions with the world
(i.e. he was a Cartesian at heart).

Turing's Test therefore categorically disallows questions
of an empirical nature: (ok, computer, person or whatever you are,
is it snowing outside right now?)

and evaluations of action: (ok, computer, person or whatever you are,
give me your best rendition of "Wild Thing").

All a computer (i.e. one without sensors or effectors) can do is evade
these questions, whereas a person (or robot or animal) with sense organs
and motor systems can provide some sorts of answers/behaviors.
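
To make the asymmetry concrete, here is a quick Python sketch; the
function names and the one-line "sensor" are hypothetical illustrations,
not a real system:

# A pure symbol-processor has no channel to the world, so an empirical
# question can only be evaded; an agent with even one sensor can answer it.
# (Illustrative code only -- the names and the sensor are made up.)

def sensorless_reply(question: str) -> str:
    """A closed symbol system: output depends only on the input string."""
    if "snowing" in question:
        return "Why do you ask about the weather?"  # evasion is all it has
    return "Tell me more."

def embodied_reply(question: str, snow_sensor) -> str:
    """An agent with one sensor can ground the same question empirically."""
    if "snowing" in question:
        return "Yes, it is snowing." if snow_sensor() else "No, it is not."
    return "Tell me more."

print(sensorless_reply("Is it snowing outside right now?"))
print(embodied_reply("Is it snowing outside right now?", lambda: True))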

Since all that is left when you reduce the inputs and outputs to symbol
strings is logic (computation), Turing succeeded in truncating the
problem so as to fit exactly those functions that a digital computer
is capable of performing. A computer cannot, by itself, answer empirical
questions, nor can it, by itself and by virtue of its computations alone,
act on the external world. I know "language" has come to be defined by
the Chomskians in an analogous, very artificially restricted way,
in terms of logical operations alone rather than also encompassing
semantic (perception, action) and pragmatic (goals, purposes, evaluation,
adaptation, control/retrofaction) relations. So, I'm not even sure that
the computer manifests what could be called "language-actions" when it
spits out a string of symbols in response to an input string. If language
isn't pure syntax, but requires something more, then the Turing Test is
not even dealing with "language" per se (Searle's argument).

Turing's Test was a clever rhetorical move, but, arguably,
it helped set back AI, linguistics, cognitive science, and philosophy
by several decades.

------------------------------------------------------
Second message
------------------------------------------------------
Martin Taylor wrote:

I think you quite misunderstand the nature of the Turing test. It is not
based on any concept of behaviour at all. It is based on the notion that
all you can determine of the outer world is what you perceive.

and this seems to have been interpreted elsewhere to imply that Turing came
up with the idea that "all you can determine of the outer world is what you
perceive" (!!??). Needless to say, the notion has a long and venerable lineage
that long predates Turing. I imagine Turing's Test was largely inspired by
behaviorist psychology (as I said before) truncated to fit the capacities of
the computer. But before that, there were people saying similar things
about perception and empirical knowledge about the world:
Bohr, Mach, Kant, various British empiricists, Leibniz,
probably going all the way back to the Romans and Greeks and maybe even further.

Turing made important contributions, but let's not turn him into a
minor deity (I agree that his death (murder?) was quite tragic).
There are inherent problems in the theory of computation
that were created by his incorporation of potentially-infinite tapes
into his "paper automata". Had he stuck with automata with finite tapes,
we would have a theory of computability that is coextensive with what
devices can physically be built (i.e. what computations can actually be carried
out by physical machines), instead of the present confusions
about whether Turing-computability per se matters at all
for real world computation or for the workings of the brain.
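
The finite-tape point can be made concrete with a toy simulator (my own
sketch, with an invented machine, purely for illustration): once the tape
is bounded, the whole configuration space is finite, so the machine is
just a large finite-state machine, and even halting becomes decidable
by exhaustion.

# With S control states, an alphabet of A symbols, and N tape cells there
# are at most S * A**N * N configurations, so any run either halts or
# revisits a configuration (and therefore loops) within that many steps.
# (Toy sketch only; the machine below is invented for illustration.)

def run_bounded_tm(delta, state, tape, head, accept):
    """delta maps (state, symbol) -> (state, symbol, move), move in {-1, +1}."""
    seen = set()
    while state != accept:
        config = (state, tuple(tape), head)
        if config in seen:
            return "loops forever"  # pigeonhole: finitely many configurations
        seen.add(config)
        if (state, tape[head]) not in delta:
            return "halts (no applicable rule)"
        state, tape[head], move = delta[(state, tape[head])]
        head = max(0, min(len(tape) - 1, head + move))  # tape ends are walls
    return "accepts"

# Toy machine: walk right, flipping 0s to 1s, accept on reaching a 1.
delta = {("walk", 0): ("walk", 1, +1), ("walk", 1): ("done", 1, +1)}
print(run_bounded_tm(delta, "walk", [0, 0, 1, 0], 0, "done"))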

-Peter Cariani

Peter Cariani, Ph.D.
Eaton Peabody Laboratory
Mass Eye & Ear Infirmary
243 Charles St.
Boston, MA 02114 USA
tel (617) 573-4243
peter@epl.meei.harvard.edu

[Martin Taylor 960202 15:00]

Peter Cariani, Fri Feb 2 96, 11AM

In the discussion about Turing Tests, Martin wrote:

The Turing Test restricts your perceptions and actions in only
one way: you can affect the test object only through language, and you
can see only its language actions. Otherwise, do what you will.

Turing's Test is actually even worse than this. It systematically excludes
both perception and action from the evaluative process.

As the terms are used in this discussion group, if the entity could not
use perception, it would have no access to what the tester typed, and
if it could not use action, the tester would have no access to what it
typed. You mean that the Turing test excludes non-language perception
and action, exactly as I said. The human is restricted to whatever
inputs and outputs are available to the tested entity. If that were
not so, the test would be inherently unfair.

Since all that is left when you reduce the inputs and outputs to symbol
strings is logic (computation),

Of course it is. If the entity is a computer, the same is true when the
inputs and outputs are seeing and making pizza. If a human is using
language, and cannot use voice modulation, the data at the input and output
are logically describable, but that doesn't necessarily mean the human
uses logic to act on them.

So, I'm not even sure that
the computer manifests what could be called "language-actions" when it
spits out a string of symbols in response to an input string. If language
isn't pure syntax, but requires something more, then the Turing Test is
not even dealing with "language" per se (Searle's argument).

You are asserting that the Turing test can never be passed. I'd say that
this is a question that cannot now be answered in the negative.

Your idea of language is vastly different from mine. In my view, language
is a way of affecting one's perception of the state of a communicative
partner. By using language you can change the partner's state, and
by the partner's using language you can perceive something of the
partner's state. In control/retrofaction/perfaction theory terms,
language serves both as output medium and as sensory input in a feedback
loop. In a cooperative dialogue, one partner has a perception that can
be brought to a reference state by the actions of the other (which may
but need not be completely verbal). The other partner has a reference
for the originator's coming to be satisfied with the recipient's
state--which implies that the recipient must adequately interpret the
originator's message. There are two control/perfaction systems interacting.
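
For concreteness, here is a toy numerical version of those two coupled
loops; the gains, the decay factor, and the variable names are
illustrative assumptions of mine, not a model anyone here has proposed:

# Two interacting control loops: the originator controls a perception of
# being understood, which only the recipient's answers can move; the
# recipient controls a perception of the originator's expressed need.
# (Illustrative parameters only.)

def simulate_dialogue(steps=20, gain_a=0.5, gain_b=0.1):
    understanding = 0.0    # shared variable: how well the message has landed
    r_originator = 1.0     # originator's reference: "be understood"
    expressed_need = 0.0   # originator's output: questions, restatements
    for _ in range(steps):
        error_a = r_originator - understanding
        expressed_need += gain_a * error_a        # "could you clarify that?"
        understanding += gain_b * expressed_need  # answers move the variable
        expressed_need *= 0.8                     # needs fade once addressed
    return understanding

print(f"understanding after the exchange: {simulate_dialogue():.2f}")

Set gain_b to zero (cut the recipient's loop) and understanding never
moves, which is the sense in which language here is interactional.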

Now if the Turing-tested entity is not behaving as if it were controlling
some perception of the tester, the tester will very soon come to believe
either that it is non-cooperative or that it is non-human. The very
essence of language is the effect it has. "Pure syntax" has very little
to do with language in use. An outside analyst who cannot influence
what either communicative partner says cannot, in general, determine
what messages are being transmitted, regardless of the precision of the
syntax. To tell what someone else is saying usually requires interaction.
When you don't have the possibility of interaction (as when you write
a book or a journal article) you have to use language to affect an
imaginary model of your target audience, and you must use a default
set of assumptions about the effect your language will have. Precise
syntax helps, if your default audience can use it to determine your
intentions.

The Turing Test is _explicitly_ interactional. It does not consist of
submitting a written list of questions to which answers are requested
and there's an end of it. During the Test, a human entity might be
expected to ask its own questions of the tester, for example, and
to make appropriate responses to possibly unexpected answers.

Had he stuck with automata with finite tapes,
we would have a theory of computability that is coextensive with what
devices can physically be built (i.e. what computations can be actually carried
out by physical machines), instead of the present confusions
about whether Turing-computability per se matters at all
for real world computation or for the workings of the brain.

It's worse than that. Turing computability relates to algorithms, processes
whereby an input data set is transformed into an output data set. A given
algorithm will give the same output every time it is given the same input.
Algorithms don't work in a disturbed environment. People do. People
interact with their environment, whether you believe in retrofaction
theory or not, and the data on which they are working changes constantly.
It is immaterial whether there are algorithms to compute this or that
function, provided that what a person does keeps her alive and well.
To relate computability to intelligence is an enormous mistake, far more
damaging to cognitive science than the damage you say Turing's Test
did to AI.
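
The difference shows up in a few lines of code (a deliberately crude
sketch; the drifting disturbance and the gain are arbitrary assumptions
of mine):

# Open loop: compute the answer once from the initial data, then act blind.
# Closed loop: keep acting on the currently perceived error. Only the
# second copes with a disturbance that keeps changing the data.
import random

random.seed(0)
target = 10.0
open_loop = 0.0 + (target - 0.0)  # the algorithm's one-shot answer, applied
closed_loop = 0.0

for _ in range(50):
    d = 0.4 + random.uniform(-1.0, 1.0)  # the environment won't sit still
    open_loop += d                       # same output, now-wrong data
    closed_loop += d
    closed_loop += 0.5 * (target - closed_loop)  # act on the current error

print(f"open-loop error:   {abs(target - open_loop):.2f}")
print(f"closed-loop error: {abs(target - closed_loop):.2f}")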

What matters is not whether a particular computation might require an
infinite tape, but whether the person can jump out of the way before
the leaping tiger arrives. It's time that's the big problem in real life.
In the case of the Turing Test, a computer that took 24 hours to answer
any question, however complex, would not be readily confused with a human.
Time comes into play even in a test using typed communication.
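
A tester's harness could fold latency in directly; this toy judge is my
own invention, not anything Turing specified:

import time

def judge(reply_fn, question, max_plausible_seconds=60.0):
    """Score an answer by its words *and* by how long it took to arrive."""
    start = time.monotonic()
    answer = reply_fn(question)
    latency = time.monotonic() - start
    verdict = ("plausibly human" if latency <= max_plausible_seconds
               else "too slow to be human")
    return answer, latency, verdict

print(judge(lambda q: "No idea, I can't see out from here.",
            "Is it snowing where you are?"))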

Martin