AI & AL problems

[From Bill Powers (920812.0800)]

penni sibum (920811) --

I don't fault Chapman for bypassing problems with vision, form recognition,
noise tolerance, and so on. Even if we knew how a game player converts an
array of pixels into a signal standing for "monster," we might prefer to
leave out those computations just to fit the simulation into a computer.

...it does get occasionally confusing, but that in fact is the nature of
video games: the player identifies w/ the character in the game. (that's
why *we* get confused: i don't know if sonja confuses itself with the
amazon or pengi confuses itself with the penguin.)

What I'm trying to get at is that the *model* is not confused; it does what
it does. When you boil down all those hyphenated phrases, they have to
become variables and operations on them. All the verbal interpretations
are going on outside the model, in the modeler. I'll have more to say after
I see Chapman's book.


------------------------------------------------------------------------
Oded Maler (920812) --

The point is that there is some level where we (seem, at least, to)
work in discrete terms, e.g., "if my PC doesn't boot then I replace
the diskette," etc.

I don't think that the discrete terms are always as discrete as they seem.
Suppose you do stick a disk in drive A and turn on the computer. You hear
the fan start, the red light on the disk drive blinks, the disk drive
whirrs and buzzes ....

At this point, has the PC booted? What is the state of the
categorical/sequential/logical perception, "booting up the PC"?

It goes on. The B drive light blinks and the drive buzzes, the hard disk
light flickers, the printer goes kachunk, and finally all the action quiets
down. Only one thing -- during all of this, there's been nothing on the
screen.

Now what's the status of "the disk drive is booting up"? All the time that
the familiar activity was going on, you were getting perceptual signs that
the computer was on its way up (a familiar event). But with nothing on the
screen, part of the process is missing. The computer seems to be booting
up, but it's not doing it quite right. When the action stops, you've got a
big question mark hanging: did it really boot up or not? What's wrong? You
perceive MOST of "the computer is booting up" but not ALL of it. Eventually
you realize that there's no pilot light on the monitor, and you turn the
switch to "on" -- and the screen fills up with the usual gobbledygook. NOW
the computer has booted up.

I'll bet that Martin Taylor would say that the perception of the discrete
category or state is probabilistic. When something is missing, the
probability-value of the perception is less than 1. I would just say that
the perception changes on a continuous scale from none to the reference
amount. There are probably other ways to interpret this experience,
implying still other propositions about how the perception of discrete
categories works. But the one way of putting it that we wouldn't use, I
think, is "either the computer is booting up or it's not." That strictly
mathematical interpretation doesn't fit the way perceptions actually
behave. When everything happens right except for the screen display, we
don't experience this as "the computer is not booting up." We experience it
as "something's wrong with the way the computer is booting up."

To me, this means that set theory, symbolic logic, and computer programming
don't quite fit the way the brain works at these levels. No doubt there are
aspects of what the brain is doing that resemble the way mathematical
representations behave, but I think it would be a mistake to assume that
the mathematical processes are anything more than an approximation to the
real system's mode of action. It's possible that something else entirely is
going on.

Think of the difference between "absolutely" in the following sentences:

"What you say is absolutely true"

"I'm pretty sure that what you say is absolutely true."

The meaning of "absolutely" is weakened by saying "I'm pretty sure." It's
not negated or affirmed; it's left somewhat less than affirmed. While the
words still seem categorical, their meanings are shifted in a continuum, as
are the meanings of anything further we say on the subject:

"I'm pretty sure that what you say is absolutely true, but let's act
cautiously."

Computers can't act cautiously; all they can do is act or not act.

... try running Little Man in a non-uniform environment with obstacles,
non-reversible consequences, dead-ends [and this is by no means an
attempt to dismiss its achievements - just to note that there are many
hard problems in the higher levels]).

Oh, yes, there certainly are. I think, though, that as we build up a
working model level by level, the nature of those hard problems will begin
to look different. Maybe the hard problems will prove to have simpler
solutions. Look how hard reaching out and pointing seems if you try to
accomplish it without feedback, the way the motor program people are doing.
In the motor program scheme, the higher levels have the burden of analyzing
the environment and the dynamics of the arm, and computing signals that
will result in a particular trajectory and end-point. It would take at
least a program-level system to compute the inverse dynamics of the system.
With the control-system approach, the higher levels don't have to be
concerned with that at all: all they have to do is decide on the desired
relationship between the fingertip and the target. So all that terribly
complicated computation disappears from the model. You're still left with
problems, such as what to do when there's an immovable obstacle in the way
of pointing, but I'd rather try to solve problems like that than the
problem of computing inverse dynamical equations.
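
A bare-bones sketch of what I mean (a toy loop, not the Little Man model
itself; the gain and disturbance values are made up): the higher level only
sets the reference -- distance from fingertip to target equals zero -- and a
proportional loop keeps reducing the error, with no inverse dynamics computed
anywhere.

    # Toy control loop: no trajectory planning, no dynamics model.
    def point_at(target, fingertip=0.0, gain=0.3, disturbance=0.05, steps=50):
        for _ in range(steps):
            error = target - fingertip               # perception compared with reference
            fingertip += gain * error + disturbance  # output opposes error despite a steady push
        return fingertip

    print(point_at(10.0))   # settles within a couple of percent of the target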

Now, it's not clear what 1% accuracy means in this abstract world,
because the disk-drive cannot be 99% pregnant - and if I buy from you
a low-level black box, I trust that it will work.

The problem of 1% accuracy shows up when a logical process commands a
result to occur, and it occurs with a 1% error. This is a reasonable error
for a good lower-level control system that's dealing with moderate
disturbances. What is the next process in line to do? If it just proceeds,
it will begin with a 1% error, and after it's done, the error might be 2%.
There's nothing to prevent a random walk as these errors accumulate, until
what's actually accomplished in response to a command or a decision bears
no resemblance to what was intended at the higher level. Somehow these
little quantitative errors have to be taken into account at the higher,
supposedly discrete, levels. Of course in what you term an abstract world,
such discrepancies can't exist; there's no provision for them. This may
mean that the abstract world is not a good representation of the higher
levels of perception and control. Of course human beings can behave as if
the world were discrete -- but the world they deal with then is imaginary,
and when the abstract processes are required to work in the real-time
world, they will probably fail.
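
A toy simulation of the accumulation (the 1% figure and the step count are
just illustrative numbers): if each step's command is issued blind, the
errors random-walk; if each command is issued against the perceived result
so far, the final error stays around one step's worth.

    import random

    def run(steps=100, err=0.01, corrected=False, seed=1):
        rng = random.Random(seed)
        intended = actual = 0.0
        for i in range(1, steps + 1):
            intended = float(i)
            # corrected: command relative to the perceived state;
            # uncorrected: command assumes the previous step came out exactly right
            move = (intended - actual) if corrected else 1.0
            actual += move * (1 + rng.uniform(-err, err))
        return abs(actual - intended)

    print(run(corrected=False))  # drift grows roughly as the square root of the step count
    print(run(corrected=True))   # residual error stays on the order of 1% of one step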

I'm probably trying to make my case too strong. A lot of behaviors governed
by discrete decisions aren't critical; if I decide to drive into Durango,
it doesn't much matter whether I hit the center of town within 1%. If I
miss the Durango Diner by one block I can just park and walk the remainder
of the distance (if I can find a spot that close). The lower level systems
can make up for a lot of imprecision and idealization on the part of the
higher ones -- they fill in the critical details that the "abstract"
systems leave out.

One last observation. Control systems don't deal with pregnancy -- only
with perception of pregnancy. If a woman you know well shows up looking
considerably stouter than usual, you could easily perceive her as 60%
pregnant and 40% overweight. You won't congratulate her, and you won't form
an opinion of her eating habits, because there are two conflicting
perceptions of her state. To assert that a woman can't "be 99% pregnant" is
to make an epistemological assumption, which is that the world is identical
to our perceptions of it. We assume that the world can't be in two
mutually-exclusive states at once; in fact, that's the essence of
categorical thinking. But the world we experience can be in states between
different categorical boundaries and between the logically true and false.

What is wrong in our verbal account of our introspection concerning the
way we operate in this level? Do the higher-level percepts and references
carry with them some concrete features of the lower-level ones from which
they are constructed/grounded in such a way that the solution "flows"
instead of being searched-for/calculated?

Higher systems are both continuous and discrete, I think. Anything a person
does is an example of human systems at work. People can obviously get into
a "calculation" mode where they apply rules literally and arrive at yes-no
results. So a model of a brain has to contain the ability to do this. On
the other hand, these processes can also operate in a sloppier way where
the processes look more like a flow than a switch. So the model has to be
able to do that, too. I think that at what I call the program level, there
is a lot more going on than digital calculation. I think this is a
generalized computer that can follow ANY kinds of rules -- the rules of
Boolean logic are just one kind.

I don't expect immediate answers...

Me neither.
-----------------------------------------------------------------------

Bruce Nevin (920811) --

Do you think we should send that guy at Discovery some copies of Closed
Loop and invite him to join this net? Why don't you ask him?

-----------------------------------------------------------------------
Best to all,

Bill P.