[From Bill Powers (930909.1100 MDT)]

Oded Maler (930909.1730) --

Some fine words lately, Oded.

There is a real farmer with a real bird, a real fox, etc. When he
has to solve his *real* problem, he has to perceive the bird, the
river, the boat, etc., counteract disturbances, digest and breathe
meanwhile, and a lot of other things.

The cognitive science/symbolic AI approach says: a prerequisite for
an agent to solve the *real* version of the problem is to solve the
abstracted symbolic problem. If we cannot design an architecture to
solve puzzles (where we provide the abstractions), we have no hope
of building systems that in addition have to do the abstraction
themselves. That's why we should investigate symbolic architectures.

Right, that seems to be the rationale. But there is an added
assumption, which is that the problem is going to be solved in a
particular computer-like way. The real farmer may think, "Gee, I can
see the puzzle: I can't leave the chicken with the grain or the fox
with the chicken -- looks impossible, I give up." Then, having given
up solving that problem, he may simplify the problem: "Basically, do
I really care if the chicken eats a little of the grain? I guess
not." Or he bops the fox over the head with an oar so he can leave
it with the chicken while he goes back for the grain. Or he wrings
the chicken's neck so he can leave it with the grain while he goes
back for the fox -- after all, we're having the chicken for dinner
tonight anyway.

When I think of the chicken-fox-grain problem and play by the rules,
I don't solve it by logic: I do it in imagination. I try taking the
grain across, and realize that oops, the fox is eating the chicken,
I can't do that. I try taking the chicken across, but then realize
on the next trip that I'll be leaving the grain for the chicken to
eat or the fox to eat the chicken. In other words, I just zip
through the possibilities in imagination and experience what's wrong
with them. Then, for a moment, I'm stumped. The solution comes out
of the blue: Ah, of course, I can bring the chicken back with me
when I take the fox over. Then I can see why I didn't get the
solution right away: the way the problem is stated, it refers only
to transporting items in one direction. So I failed to include the
premise that I could transport things either way. If I'd thought of
that first, the solution would have been trivial: just do things in
the right sequence and nobody eats anything forbidden. The solution
isn't in finding the right logic, but in finding the right sequence,
which I can do without logic.
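For what it's worth, the "zip through the possibilities and reject
the ones where something gets eaten" procedure can be sketched as a
plain breadth-first search over imagined crossings. The encoding and
all the names here are mine, not anything from the discussion; note
that the move generator explicitly allows crossings in either
direction -- the premise whose omission made the puzzle seem stuck.

```python
from collections import deque

# A state is (items on the starting bank, whether the farmer is there).
ITEMS = frozenset({"fox", "chicken", "grain"})

def unsafe(bank):
    """A bank without the farmer is unsafe if it pairs fox with chicken
    or chicken with grain -- the two 'oops' moments imagined above."""
    return {"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank

def moves(left, farmer_left):
    """Every crossing the farmer can try: alone, or with one item from
    his own bank -- in either direction."""
    here = left if farmer_left else ITEMS - left
    for cargo in [None] + sorted(here):
        new_left = set(left)
        if cargo is not None:
            (new_left.discard if farmer_left else new_left.add)(cargo)
        new_left = frozenset(new_left)
        vacated = new_left if farmer_left else ITEMS - new_left
        if not unsafe(vacated):  # reject moves where something gets eaten
            yield cargo, (new_left, not farmer_left)

def solve():
    """Breadth-first trial of imagined crossings; returns the list of
    cargos carried on each trip (None = farmer crosses alone)."""
    start, goal = (ITEMS, True), (frozenset(), False)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for cargo, nxt in moves(*state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo]))
```

Running `solve()` yields a seven-trip plan that starts and ends by
carrying the chicken, with the chicken brought back mid-way -- the
"out of the blue" move.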

Computers can't perceive sequence without a lot of elaborate
programming. I can. Give me A, B and then B, A, and I can tell you
immediately that the sequences are different. The computer has to be
programmed to go through an elaborate series of logical tests
involving whether A is in memory while B is being sensed and vice
versa, and then comparing the results for the two cases, which gets
the job done but slowly. I have hardware for detecting sequence
immediately.
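To make the contrast concrete, here is a minimal sketch (my own, not
anything claimed in the text) of the laborious programmed version:
the machine must hold the earlier event in memory while sensing the
later one, then compare position by position.

```python
def same_sequence(observed, reference):
    """The step-by-step, programmed version of sequence recognition:
    compare a remembered sequence against a sensed one, one position
    at a time."""
    if len(observed) != len(reference):
        return False
    for remembered, sensed in zip(reference, observed):
        if remembered != sensed:
            return False  # order differs at this position
    return True

same_sequence(["A", "B"], ["A", "B"])  # matches
same_sequence(["B", "A"], ["A", "B"])  # order reversed: no match
```

The point of the sketch is that every equality test is an explicit
programmed step, whereas the claim in the text is that a human has
dedicated hardware that delivers the sequence percept all at once.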

If you give me a real logical problem, which involves the
simultaneous truth-values of several propositions, I have to dredge
up my Boolean algebra and work it out the same way the computer has
to work it out, if it's programmed to do Boolean algebra. I would do
it a lot slower, because I have to remember the rules and apply
them, and probably write them down so I don't forget where I am. The
Boolean algebra I'm using doesn't explain what I'm doing, of course,
which is following rules, a non-Boolean process. What really needs
explaining is not the logic, but how I can follow rules, as I said a
little earlier today. That would be a basic cognitive function.
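The "work it out the same way the computer has to" procedure is just
mechanical rule application over every assignment of truth values. A
small sketch, with a hypothetical example proposition of my own
choosing:

```python
from itertools import product

def truth_table(expr, variables):
    """Apply the Boolean rules mechanically to every assignment of
    truth values -- the 'dredge up Boolean algebra' procedure, done
    by program instead of by pencil. `expr` is a function of the
    named variables."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, expr(**env)))
    return rows

# Hypothetical proposition: (p AND q) OR NOT r
rows = truth_table(lambda p, q, r: (p and q) or not r, ["p", "q", "r"])
satisfying = [env for env, value in rows if value]
```

Note that the program, like the person, is *following rules* -- the
enumeration loop is not itself Boolean algebra, which is exactly the
distinction the paragraph above is drawing.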


-----------------------------------------------------------

Best,

Bill P.