Although the video-game playing programs don't seem to have to cope with
much in the way of disturbances, surely some could be added without too
much drama. E.g., the Amazon dungeon might have its floor populated with
conveyor belts going in various directions and moving at variable speeds,
and also, say, rotating platforms (to make things more challenging, these
might also be invisible to Sonja). The issue is then how much revision
Sonja would require to cope with this. My suspicion is not much. E.g.,
if she's trying to go toward an amulet, the effect of a conveyor belt
will be the same as if the amulet were moving, and the mechanisms already
there ought to be able to cope with it (always move in the direction
`toward the amulet', whatever that happens to be at the moment).
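Just to make that concrete, here's a toy sketch (in Python, with names I've
made up -- none of this corresponds to the actual Pengi/Sonja machinery) of
why a `head toward the amulet' rule absorbs a belt disturbance for free:

    import math

    def step_toward(agent_xy, amulet_xy, speed=1.0):
        # Recompute the heading from the *current* relative position, so
        # it makes no difference whether the offset changed because the
        # amulet moved or because a belt dragged the agent sideways.
        dx = amulet_xy[0] - agent_xy[0]
        dy = amulet_xy[1] - agent_xy[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            return agent_xy
        return (agent_xy[0] + speed * dx / dist,
                agent_xy[1] + speed * dy / dist)

    def tick(agent_xy, amulet_xy, belt_velocity):
        # The belt enters only as an unmodelled disturbance applied after
        # the move; the rule above never needs to know it exists.
        x, y = step_toward(agent_xy, amulet_xy)
        return (x + belt_velocity[0], y + belt_velocity[1])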
And then there's the issue of to what extent the continuously-varying-
disturbance regime that exists at lower levels (wherein muscles are
continuously varying the response they emit to a given neural current,
etc.) persists through to higher ones. E.g., could you get a reasonable
robot by hitching continuous control systems for the lower levels to
an Agre-Chapman-style one for the `tactics' of dealing with situations?
The very fact that the existence of control is easy to miss is perhaps
evidence that this is really true.
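Here's the sort of hitching-together I have in mind, again as a made-up
Python sketch rather than anything drawn from the actual systems: the
tactics layer just sets references, and continuous loops below turn the
resulting error into output.

    def low_level(reference, perceived, gain=0.8):
        # Continuous loop: output varies continuously with the error
        # between the reference set from above and the current perception.
        return gain * (reference - perceived)

    def tactics(situation):
        # Agre/Chapman-style layer: discrete, situation-driven rules that
        # do no planning, just pick references for the loops below.
        if situation.get("bee_incoming"):
            return {"distance_to_bee": 5.0}     # keep the bee at a distance
        return {"distance_to_amulet": 0.0}      # close on the amulet

    def robot_tick(situation, perceptions):
        refs = tactics(situation)
        return {var: low_level(ref, perceptions[var])
                for var, ref in refs.items()}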
Something else I'd like to find out is whether the `arbitration network'
architecture used in Pengi's and Sonja's central systems could be
profitably redone along PCT lines, controlling for the projected occurrence
of desirable things and the non-occurrence of undesirable ones (a rough
sketch of what I mean follows (b) below). Two outcomes that would justify
the effort, if they occurred, would be:
a) more insight into the organization (knowing that such-and-such a
   component is actually a control system controlling for some value of
   some perceptual variable ought to help in relating these systems'
   structure to their behavior)
b) workable methods for the systems to improve their performance
(there currently don't seem to be any).
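The sketch I promised above: each candidate action is proposed by a little
loop controlling for a projected outcome, and arbitration just means the
loop with the largest weighted error gets its way. All the names here are
invented for illustration; this is a guess at an organization, not a
description of the existing code.

    class Loop:
        def __init__(self, name, reference, project, act, weight=1.0):
            self.name = name
            self.reference = reference  # desired value of projected outcome
            self.project = project      # world-state -> predicted value
            self.act = act              # action proposed if this loop wins
            self.weight = weight

        def error(self, world):
            return self.weight * abs(self.reference - self.project(world))

    def arbitrate(loops, world):
        # PCT-flavoured arbitration: the proposal backed by the largest
        # weighted error wins, rather than votes in a boolean network.
        return max(loops, key=lambda lp: lp.error(world)).act

    loops = [
        Loop("get-amulet", 1.0, lambda w: w["will_reach_amulet"],
             "run-to-amulet"),
        Loop("avoid-bee", 0.0, lambda w: w["will_be_stung"],
             "dodge", weight=2.0),
    ]
    # With these projections the bee loop has the bigger error, so `dodge'
    # wins the arbitration.
    print(arbitrate(loops, {"will_reach_amulet": 0.2, "will_be_stung": 0.7}))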
An objection of Bill's that I don't buy is that the smarts in these
systems come from the programmer: the main idea that the interactive AI
people had to shove off the deck was that you couldn't accomplish things
in the world without elaborate advance planning. It is therefore quite
sensible to first produce systems that can function with zero or minimal
planning, and then worry about how these architectures might arise in
nature. This is basically Chomsky's point in generative grammar, that it is
futile to talk about the mechanisms of language acquisition if you don't
have a reasonably viable notion of what the outcome of acquisition
is like.
Avery.Andrews@anu.edu.au