the "Turing Test"

[From Shannon Williams (951226)]

Bill Powers (951224.1100) --

the "Turing Test" you propose addresses only one level of brain
organization,

That's fine. I do not mind addressing one level at a time.

to study it properly you have to understand what it can't do as well
as what it can do.

How am I supposed to understand what a thing can or can't do before I
study it?

I call this the "program level," but it really encompasses about three
levels. The lowest is the categorizing level, that chops the continuous
flow of lower-level experience into either-or classes of experience.
This is probably the level where symbols get assigned to the classes of
experience, like "dog." Then there is a sequence level, which is like a
pure list-processor that controls for the order in which classes of
relationships and events are brought about: FIRST open the door, THEN
walk through it. The third is the computing level, the rule-driven
level, the level of true programs which are characterized mainly by
choice-points: if (the car is in the garage), then (execute the garage-
door-opening sequence); else (go directly to the car). Two sequences and
a relationship become part of a program structured by the if-then-else
organization.

These are the levels on which AI traditionally focused, as I understand
it.
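To make sure I am reading you right, here is a minimal Python sketch of
how I picture those three levels nesting. All of the names and the
garage-door content are just my rendering of your illustration, not a
claim about your model:

    # Category level: chop continuous perception into either-or classes.
    def car_is_in_garage(perception):
        # Hypothetical categorizer; a real one would sit on top of
        # continuous lower-level perceptual signals.
        return perception.get("car_location") == "garage"

    # Sequence level: pure list-processing; what is controlled is the ORDER.
    OPEN_GARAGE_DOOR = ["walk to door", "open door", "confirm door is open"]
    GO_TO_CAR = ["walk to car", "confirm at car"]

    def run_sequence(steps):
        for step in steps:      # FIRST ... THEN ...
            print(step)         # stand-in for handing each step to lower levels

    # Program level: a rule-driven choice-point composing the sequences.
    def get_to_car(perception):
        if car_is_in_garage(perception):    # if (the car is in the garage)
            run_sequence(OPEN_GARAGE_DOOR)  # then (the door-opening sequence)
            run_sequence(GO_TO_CAR)
        else:                               # else (go directly to the car)
            run_sequence(GO_TO_CAR)

    get_to_car({"car_location": "garage"})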

Many AI people have realized that thinking cannot be captured by a
computational structure alone. But they cannot think of any other way to
think about thinking.

This means that if you tell it "roses are red," it will know only that
the symbol "rose" carries the attribute of "red". It will not know what
either "rose" or "red" means in the same way a human being knows. A
human being experiences things _before_ attaching any symbol to them; it
experiences their attributes before any attributes are named. The
meanings of the assigned symbols are the non-symbolic experiences, which
are far richer in detail and possible relationships than any sentence
describing them.
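In code, the kind of "knowledge" you describe here is nothing more than
a link between two symbols. A toy Python fragment (entirely my
illustration):

    # The system "knows" only that the symbol "rose" carries the
    # attribute "red" -- a bare association between two tokens.
    knowledge = {"rose": {"red"}}

    def knows(symbol, attribute):
        return attribute in knowledge.get(symbol, set())

    print(knows("rose", "red"))   # True, yet nothing here is grounded
                                  # in any experience of roses or redness.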

I am not interested in convincing you that computers can be just as
conscious and sentient and empathic as we are. To me the point is not
worth discussing. It ranks up there with arguments about the existence
of gods or about why women are superior to men.

However, I do think that AI could benefit immensely from your insights.
As I read _The Philosophy of Artificial Intelligence_ I often feel like
screaming. I want to scream for them to turn the puzzle upside down and
jiggle it a little. But they don't know how.

They can solve their implementation issues. They have theories about how
'knowledge' can be represented in the brain, and about how knowledge can
be associative and distributed. Even though these theories are not yet
implemented, they seem doable. Implementation is not considered the
primary issue.
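For example, a distributed, associative store can be as simple as a
linear associative memory. The sketch below is my own toy illustration
of the general idea, not any particular published theory:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64  # dimensionality of the distributed code

    # Each concept is a random +1/-1 vector: a distributed representation.
    rose = rng.choice([-1.0, 1.0], d)
    red  = rng.choice([-1.0, 1.0], d)
    sky  = rng.choice([-1.0, 1.0], d)
    blue = rng.choice([-1.0, 1.0], d)

    # Both associations are stored superimposed in ONE weight matrix.
    W = np.outer(red, rose) + np.outer(blue, sky)

    # Recall is associative: probing with "rose" pulls out "red".
    recalled = np.sign(W @ rose)
    print(np.dot(recalled, red) / d)   # ~1.0: near-perfect recall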

The primary issue comes from what Hayes calls the 'frame problem'.
Dennett describes the 'frame problem' this way: "We must teach [the
robot] the difference between relevant implications and irrelevant
implications." In other words, they do not visualize where goals come
from and what type of physical model a 'goal-oriented' hierarchy
requires.
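The difficulty is easy to state and hard to solve. In Dennett's story, a
robot must fetch its battery from a room where a bomb sits on the same
wagon. Here is a caricature in Python (all names mine) of where the
problem lives:

    ACTION = "pull the wagon out of the room"

    # After any action there are indefinitely many candidate implications.
    candidate_implications = [
        "the wagon's position changes",          # relevant
        "the battery on the wagon moves too",    # relevant
        "the bomb on the wagon moves too",       # fatally relevant
        "the color of the walls is unchanged",   # irrelevant
        "the number of wheels is still four",    # irrelevant
        # ... and so on, without limit
    ]

    def is_relevant(implication, goal):
        # The frame problem lives here: no tractable, fixed rule both
        # catches every relevant implication and avoids drowning in the
        # irrelevant ones -- unless something like a goal hierarchy
        # says what matters and why.
        raise NotImplementedError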

My earlier program sketch was only meant to outline how one of these
hierarchies might begin. I was looking for feedback on how I should
begin improving it.

-Shannon

P.S. Hope everyone is enjoying the holidays!