disturbances in video game players

Although the video-game playing programs don't seem to have to cope with
much in the way of disturbances, surely some could be added without too
much drama. E.g., the Amazon dungeon might have its floor populated with
conveyor belts going in various directions & moving at variable speeds,
and also, say, rotating platforms (to make things more challenging, these
also might be invisible to Sonja). The issue is then how much revision
Sonja would require to cope with this. My suspicion is not much. E.g.,
if she's trying to go toward an amulet, the effect of a conveyor belt
will be the same as if the amulet were moving, and the mechanisms already
there ought to be able to cope with it (always move in the direction
`toward the amulet', whatever that happens to be at the moment).
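
Just to make that concrete, here's a toy sketch (Python; everything in it
is invented, it's certainly not Sonja's actual machinery) of the kind of
loop I have in mind. The conveyor belt shows up only as an unmodelled
disturbance added to the agent's position each tick, and the `head toward
the amulet' rule still gets there:

  import random

  def sign(x):
      return (x > 0) - (x < 0)

  def run(agent=0.0, amulet=10.0, speed=1.0, ticks=60):
      for _ in range(ticks):
          error = amulet - agent              # perceived offset to the amulet
          agent += speed * sign(error)        # act on the current perception only
          agent += random.uniform(-0.8, 0.8)  # conveyor belt: unmodelled disturbance
          if abs(amulet - agent) < 1.0:
              return True                     # close enough to pick it up
      return False

  print(run())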

And then there's the issue of to what extent the continuously-varying-
disturbance regime that exists at lower levels (wherein muscles are
continuously varying the response they emit to a given neural current,
etc.) persists through to higher ones. E.g., could you get a reasonable
robot by hitching continuous control systems for the lower levels to
an Agre-Chapman style one for the `tactics' of dealing with situations?
The sheer fact that the existence of control is easy to miss is maybe
evidence that this is really true.
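
A crude cartoon of the hitching-together I have in mind (again, all the
names and numbers are made up): a fast continuous loop that servos
position, and a slower discrete `tactics' layer that does nothing except
adjust that loop's reference signal:

  def lower_loop(position, reference, gain=0.3):
      """One tick of a continuous proportional controller."""
      return position + gain * (reference - position)

  def tactics(perceived_world):
      """Discrete layer: head for whatever currently looks closest."""
      return min(perceived_world, key=lambda item: item["distance"])["position"]

  position = 0.0
  world = [{"position": 12.0, "distance": 12.0},
           {"position": 5.0, "distance": 5.0}]

  for tick in range(50):
      if tick % 10 == 0:                      # tactics runs at a slower rate
          reference = tactics(world)
      position = lower_loop(position, reference)
      # refresh the perceived distances for the tactics layer
      world = [{"position": w["position"], "distance": abs(w["position"] - position)}
               for w in world]

  print(round(position, 2))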

Something else I'd like to find out is whether the `arbitration network'
architecture used in Pengi's and Sonja's central systems could be
profitably redone along PCT lines, controlling for projected occurrence
of desirable things and non-occurrence of undesirable ones. Two outcomes
that would justify the effort, if they occurred, would be:

  a) more insight into the organization (knowing that such-and-such a
     structure is actually a control system controlling for some value of
     some variable ought to help in relating these systems' structure to
     their behavior)

  b) workable methods for the systems to improve their performance
     (there currently don't seem to be any).
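
I can only guess at what the PCT rework would look like, but maybe
something along these lines, where each unit controls one perception
against a reference and the action proposed by the unit with the largest
weighted error wins the arbitration (names and numbers invented):

  units = [
      {"name": "avoid-bee",  "perception": 0.9, "reference": 0.0, "gain": 2.0,
       "action": "retreat"},
      {"name": "get-amulet", "perception": 6.0, "reference": 0.0, "gain": 0.5,
       "action": "advance"},
  ]

  def arbitrate(units):
      # weighted error for each unit; the unit that is worst off wins
      scored = [(u["gain"] * abs(u["reference"] - u["perception"]), u) for u in units]
      _, winner = max(scored, key=lambda pair: pair[0])
      return winner["action"]

  print(arbitrate(units))   # -> advance (the amulet error dominates here)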

An objection of Bill's that I don't buy is that the smarts in these
systems come from the programmer: the main idea that the interactive AI
people had to shove off the deck was that you couldn't accomplish things
in the world without elaborate advanced planning. It is therefore quite
sensible to first produce systems that can function with zero or minimal
planning, and then worry about how these architectures might arise in
nature. This is basically Chomsky's point in generative grammar that it is
futile to talk about the mechanisms of language-acquisition if you don't
have a reasonably viable notion of what the outcome of acquisition
is like.

Avery.Andrews@anu.edu.au

[From Bill Powers (920809.0900)]

Avery Andrews (920808) --

>The issue is then how much revision Sonja would require to cope with
>this. My suspicion is not much. E.g., if she's trying to go toward an
>amulet, the effect of a conveyor belt will be the same as if the amulet
>were moving, and the mechanisms already there ought to be able to cope
>with it (always move in the direction `toward the amulet', whatever that
>happens to be at the moment).

I'm not familiar with Sonja -- could you (or penni) give a precis of what
the program does? And if possible, how it does it? How, for example, would
Sonja know what direction is "toward the amulet"? Would this be computed
relative to Sonja's present heading? How would Sonja change that heading
in the appropriate direction?

>E.g., could you get a reasonable robot by hitching continuous control
>systems for the lower levels to an Agre-Chapman style one for the
>`tactics' of dealing with situations? The sheer fact that the existence
>of control is easy to miss is maybe evidence that this is really true.

The lower control systems can control only for lower-level variables. If
you posit a lower-level system that can perceive a target object and orient
the body toward it, and another that can perceive the distance of the
object and make that distance approach a reference distance, the higher-
level systems don't have to accomplish those things. You can assume a
reference signal that picks the target and another that picks the intended
distance from the target, leaving it up to the lower systems to orient
toward it and move the body toward it. But doing so will alter the
relations of the body to everything in the environment (for example, doors,
corridors, other objects). Something has to be monitoring these relations
if they matter to the logical situation. If the motions in space are part
of the simulation, the logic has to wait until the commanded result is
perceived as having been accomplished; there's no point in going beyond
"move to the amulet" until perception of that situation has been
accomplished. Otherwise the logic won't pick up the consequences of
physical interactions that affect achievement of the logical goals.
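
In toy form (purely illustrative; this is not anyone's actual program),
the constraint I mean looks like this: the logical level issues a
reference and then does nothing further until its own perception says
the result has actually occurred:

  def step_toward(position, target, speed=1.0):
      # one tick of a lower-level loop: move a bounded amount toward the target
      return position + max(-speed, min(speed, target - position))

  def run_plan(plan, position):
      for target in plan:                       # the logical sequence of goals
          while abs(target - position) > 0.1:   # wait on perception, not on the command
              position = step_toward(position, target)
      return position

  print(run_plan([4.0, 9.0, 2.0], position=0.0))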

If Agre and Chapman are actually building logical control systems, that's
great.

>Something else I'd like to find out is whether the `arbitration network'
>architecture used in Pengi's and Sonja's central systems could be
>profitably redone along PCT lines, controlling for projected occurrence
>of desirable things and non-occurrence of undesirable ones.

Controlling for projected occurrences sounds like model-based control. That
is, the behaving system contains a model of the properties of the
environment. In the imagination mode, it alters its actions (on the model)
until the perceived result is the one wanted. Then the switch is thrown and
the same actions are sent as reference signals to lower systems that
actually carry them out. The error signals in the lower systems indicate
the degree to which (and the way in which) the lower systems failed to
produce the expected perceptions, and so show how the model must be
modified or the goal must be changed.
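
A toy version of what I mean (the model, the world, and all the numbers
are invented): rehearse the action against the internal model until the
imagined result matches the reference, then throw the switch, send the
same action out for real, and let the residual error say how far off the
model was:

  def internal_model(action):
      return 2.0 * action                  # the system's belief about the world

  def real_world(action):
      return 1.6 * action                  # what the environment actually does

  def imagine(reference, step=0.05, tolerance=0.01):
      # imagination mode: adjust the action against the model, not the world
      action = 0.0
      while abs(reference - internal_model(action)) > tolerance:
          action += step * (reference - internal_model(action))
      return action

  reference = 10.0
  action = imagine(reference)              # rehearse on the model
  actual = real_world(action)              # switch thrown: same action, real world
  error = reference - actual               # residual error: how far off the model was
  print(round(action, 2), round(actual, 2), round(error, 2))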

>An objection of Bill's that I don't buy is that the smarts in these
>systems come from the programmer: the main idea that the interactive AI
>people had to shove off the deck was that you couldn't accomplish things
>in the world without elaborate advanced planning. It is therefore quite
>sensible to first produce systems that can function with zero or minimal
>planning, and then worry about how these architectures might arise in
>nature.

I agree that these AIers are moving toward the CT point of view. They
don't, however, seem much interested in learning about the CT point of view
(I tried). But until I see exactly how these guys would accomplish
something like "move toward the amulet" I would still suspect that THEY
know how to move Sonja toward the amulet, but SONJA doesn't. Maybe this
doesn't matter. But maybe it does.

To me, the critical question is "what can Sonja know and when can Sonja
know it?" If that's not clear in the model it has to be made clear. Not all
logical decisions can physically be carried out. None of them is carried
out instantly, and none can be carried out without altering the situation
on the way to accomplishing them. If the logic simply runs blithely along
assuming that every commanded result is immediately and successfully
brought about, or if the model itself isn't detecting the current state of
the environment in the relevant regards, this isn't a CT model. But tell me
more about it.


--------------------------------------------------------------------
Best

Bill P.