[From Bill and Mary Powers (930908.2315 MDT)]
Damn you guys, I have to get to bed.
Ray Allis, that was a brilliant post and anticipated mine, which I
sent before I read yours.
Avery Andrews (930909.1415) --
In the case of our farmer, he has to come up with a plan, a
sequence of imagined states of affairs, such that the first one
is how things are, the last one is how things are wanted to be
(everybody on the opposite bank), and each state
can be attained from the previous one by dumb control systems.
It of course requires a lot of experience to know what kinds of
state-transitions can be achieved by your personal library of
systems, so it comes as no surprise when we learn that children
are pretty bad at (realistic) planning. On this view, AI-type
planning happens, but is very heavily dependent on experience,
and on the existence of control systems to actually make the
steps in the plans happen.
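To make that picture concrete, here is a minimal sketch in Python. It is
only an illustration under my own assumptions (the names execute_plan and
controllers are invented; nothing in the post specifies code like this): a
plan is just an ordered list of desired states, and each transition is
handed off to whatever lower-level control routine is known, from
experience, to be able to produce it.

    # Minimal sketch of "plan = sequence of desired states, executed by
    # dumb control systems." Names and structure are invented for
    # illustration only.

    def execute_plan(plan, controllers, current_state):
        """Walk a plan, delegating each transition to a lower-level routine.

        plan        -- ordered list of desired states; plan[0] is the start
        controllers -- dict mapping (state, next_state) pairs to a routine
                       that actually brings about that transition
        """
        for desired in plan[1:]:
            controller = controllers.get((current_state, desired))
            if controller is None:
                # This is where experience matters: the planner has to know
                # which transitions its library of control systems can achieve.
                raise ValueError(f"no control system for {current_state} -> {desired}")
            current_state = controller(current_state, desired)
        return current_state

    # Toy usage: a two-step plan whose transitions are trivially achievable.
    plan = ["at home", "at the bank", "across the river"]
    controllers = {
        ("at home", "at the bank"): lambda cur, goal: goal,
        ("at the bank", "across the river"): lambda cur, goal: goal,
    }
    print(execute_plan(plan, controllers, "at home"))  # -> "across the river"

The planning part proposes the sequence; the doing part is entirely in the
routines that close each gap.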
There's one more aspect to this: logic, and the symbol manipulation
that goes with it, is not the highest level of
organization in the human brain. Mary, in her usual up-a-level
way, points out that the AI program for solving the fox-grain-
chicken problem will never come up with the solution that lets
the farmer get across the pond in either of two ways: build a
cage for the chicken. You take the chicken and the grain (or fox)
across, and go back and get the fox (or grain). Where did the
cage come from? From outside the premises given to the AI
program.
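To see concretely why the cage can't come from inside the program, here is
a hedged sketch of the usual logic-based solver for the puzzle (my own
illustration, not any particular AI program): a breadth-first search over
states, where the only operators are the ones the premises supply -- the
farmer crosses alone or with one item.

    from collections import deque

    # Premise-bound solver for the fox-grain-chicken puzzle. Each state says
    # which bank each of (farmer, fox, chicken, grain) is on:
    # 0 = starting bank, 1 = far bank.
    ITEMS = ("farmer", "fox", "chicken", "grain")
    UNSAFE_PAIRS = (("fox", "chicken"), ("chicken", "grain"))

    def legal(state):
        # A pair is unsafe only when both are on the bank the farmer has left.
        farmer = state[0]
        return not any(
            state[ITEMS.index(a)] == state[ITEMS.index(b)] != farmer
            for a, b in UNSAFE_PAIRS
        )

    def moves(state):
        # The ONLY operators the program knows: the farmer crosses alone, or
        # crosses with one item that is on his bank. "Build a cage" is simply
        # not in this set, so no amount of searching can ever produce it.
        farmer = state[0]
        alone = (1 - farmer,) + state[1:]
        if legal(alone):
            yield "cross alone", alone
        for i, item in enumerate(ITEMS[1:], start=1):
            if state[i] == farmer:
                new = list(state)
                new[0] = new[i] = 1 - farmer
                if legal(tuple(new)):
                    yield f"take {item}", tuple(new)

    def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path
            for step, nxt in moves(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [step]))

    print(solve())  # the familiar seven-crossing shuttle, nothing cleverer

Nothing a higher level might do -- add a cage object, relax the unsafe-pair
constraint, rewrite the operator set -- is expressible inside the search
itself; the premises are fixed before the program ever starts.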
When you set the conditions of the problem, the logic-based
approach is helpless: it can't get outside them. A higher-level
system can look at the premises and see that a better set of
premises would lead to a solution that achieves the goal more
elegantly or with fewer constraints (these criteria are
principles, not programs). So the logical approach fails both by
not having low enough levels of control and by not having high
enough levels.
Good night,
Bill (and Mary) P.