[From Bill Powers (920918.1600)]
Rick Marken (920918) --
I think there's one aspect of the Turing test you may be overlooking.
It's not just an examination of output spontaneously created by a hole
in the wall behind which either a machine or a person might be
lurking. It's an interactive test; that is, you send messages through
the hole and get messages back. Your messages can be anything you
like. So if you like you can send messages that amount to
disturbances, under the hypothesis that the other entity is
controlling for something by using its verbal outputs. You can try to
discover controlled variables by clever pushing and pulling on ideas,
using words. Like we do on the net all the time.
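The idea of probing for control by applying disturbances can be sketched in a few lines of simulation. This is a minimal illustration (not anything from the original post): a simple integrating control system keeps its perceived variable near a reference, so a constant disturbance barely shows up in the controlled variable; instead it shows up, inverted, in the system's output. The function name, gain, and slowing factor are all illustrative assumptions.

```python
def run_control_loop(disturbance, gain=50.0, slowing=0.01, steps=200):
    """Simulate a simple integrating control loop (illustrative sketch).

    The controlled variable is perception = output + disturbance.
    The system adjusts its output to keep perception near the
    reference value of 0. Parameter values are assumptions chosen
    only so the loop converges.
    """
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance   # the controlled variable
        error = 0.0 - perception            # reference minus perception
        output += slowing * gain * error    # slowed integration of error
    return perception, output

# Push on the hypothesized controlled variable with a steady
# disturbance and see whether it is resisted.
cv, out = run_control_loop(disturbance=10.0)
print(round(cv, 3), round(out, 3))  # perception stays near 0; output opposes the disturbance
```

If the entity behind the hole is controlling, the disturbance is opposed and the variable stays near its reference; if it isn't, the variable simply follows the disturbance. That difference, not cleverness of output, is what this kind of probe detects.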
Of course this isn't what the AI types would do. I don't think they
really know what they would be looking for. Didn't Penni say that the
best program fooled about half the people, at the last big contest?
That's about chance, isn't it? In other words, the best the judges
could do with a machine that didn't give obvious clues about its
machine origins was to guess whether it was "intelligent," yes or no.
And they guessed at random, which means they thought a human being was
a machine half the time, too. They couldn't even identify intelligence
for sure when they were interacting with a human being. How did they
expect to tell when it was a machine behind door number 2? Maybe that
wasn't how it came out, though. Penni, how many people were misjudged
as machines?
Intelligence is a SILLY idea. Suppose you read a paragraph from a book
to a machine, and asked it to repeat it back. A person with a good
memory might do it, and a person with a bad memory wouldn't be able
to. The machine could easily repeat it back perfectly, so to fool you
it would have to deliberately forget some lines some of the time, or
pretend to. Or suppose you asked it to prove a theorem. If it started
transmitting the proof back the instant your transmission ended, you
would have to conclude that it was a machine on the other end, because
it was too good at theorem-proving. Unless the proof were full of
mistakes.
And so on. No matter how you define intelligence, it comes down to
doing something well or fast or both. The particular abilities on
which you judge intelligence will naturally be those that you admire.
But being able to do anything too well or too fast will reveal that
HUMAN intelligence isn't responsible -- or rather, that a human being
has had too much time to prepare a machine to handle all possible
contingencies in a way impossible for a human being, thus at best
reflecting a non-human intelligence.
But the Turing test, just being a free interaction between two
entities, can be used for any purpose you like. Under PCT we'd look
for evidence of something other than "intelligence." We'd look for
control. And if we found it, we might not be able to tell whether the
other entity was really human just because it was able to control, but
we could be sure that if it was a machine, it wasn't programmed by an
AI aficionado.
------------------------------------------------------------------------
As to Stella, you might be able to accomplish the same thing with
Pascal, BASIC, or a spreadsheet, and so might about six others in the
CSG, but what about the rest? Maybe Stella isn't the answer, but I
think a block-diagram based system would be a lot easier for
non-programmers to get the hang of than the other ways.
------------------------------------------------------------------------
Best,
Bill P.