[From Bill Powers (920601.1000)]
to David Chapman, copy to CSGnet --

OK, Penni Sibun has now opened my mind to your "Pengi" article, and it's
obvious that our interests are very close together indeed.

One of my beefs with the AI approach and others (like the "motor program"
approach) has been the conception of control as a planning-execution
process. Like you, I don't doubt that such things happen, but I agree with
you that they can't happen in real time. In fact behavior organized that
way doesn't really work very well. The real environment is too dynamic and
too full of unpredictable disturbances. The best-laid plans of man gang
usually agley. Plans are made under one set of conditions, which are only a
snapshot of a changing world, and are executed under another.

One of the problems I've seen in conventional behavior modeling is that the
modeler fails to take the point of view of the behaving system. The modeler
knows the properties of the environment and of the system being designed,
and in creating the desired behavior travels freely back and forth across
the boundary between Inside and Outside. So if the model doesn't "do" quite
the right thing to the external world, the modeler steps inside the
behaving system and tweaks it (increase the gain of this circuit a little,
add some compensation to that one), watching the results, until the outcome
is right. This, of course, lets the modeler do things that the system by
itself could never do. The modeler is actually BEING a whole lot of
functions that belong in the model. So most modelers are really trying to
model something far more complex than they realize, and are giving their
models far more external help than they know.

It's OK for the modeler to watch the results and tweak the model. But the
tweaking has to be in terms of the capabilities of the model itself -- that
is, the model has to be able to accomplish the result without help. "If I
were this model, knowing only what it can know and being able to produce
only those outputs it can produce, how would the problem look to me and
what would I have to be able to do to solve it?"

I think your Pengi model is coming much closer to my (idealized) view of
modeling than to the conventional get-the-job-done one. Pengi works on the
basis of perceptions of the current environment ("indexical-functional
aspects" for crissake). The information it uses is a representation of the
current state of affairs (I call it a perception; you call it an
indexical-functional aspect).

I think that to some extent you're still using too much of what YOU know
about the situation ("it is both vulnerable (if the penguin kicks the
block) and dangerous (because it can kick the block at the penguin)"). The
penguin can't behave "because" the bee is vulnerable or dangerous, unless
you've given it the ability to appreciate such abstract conditions as
vulnerability or danger. Those two judgements are outside the universe of
bee and penguin and iceblocks. But I think that you're working yourself (as
of 1987, of course) away from the third-party approach and toward what I
think of as real modeling, because for the most part the basis of behavior
in Pengi isn't such abstractions, but real-time interactions with the
environment according to a few general and simple rules.

One problem in talking about control theory (my version) with people
familiar with currently-popular approaches is that most people who use the
words "control system" aren't really talking about control systems, but
about S-R or planning systems. Real control systems don't respond to
stimuli and they don't plan. They don't precalculate outputs that will have
desired results -- that concept has grown up mostly without benefit of any
experience with real control systems. Just thought I'd warn you, in case
you were identifying "control system" with some other current concept (like
an S-R or planning system).

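
To make the distinction concrete, here is a minimal sketch (my own
illustration, not anything from the Pengi paper; the gain, reference, and
disturbance values are arbitrary) of what I mean by a real control system.
Notice that it never precalculates an output and has no model of the
disturbance; it only acts, continuously, on the difference between what it
perceives and what it wants to perceive:

```python
# A negative-feedback control loop: output is adjusted from moment to
# moment in proportion to the error between a reference and a perception.

def run(steps=2000, dt=0.01, gain=50.0, reference=10.0):
    output = 0.0
    history = []
    for t in range(steps):
        # An unpredicted disturbance arrives halfway through the run.
        disturbance = 5.0 if t > steps // 2 else 0.0
        # The perception reflects the environment: the system's own output
        # plus whatever the world adds to it.
        perception = output + disturbance
        # The system compares perception to reference...
        error = reference - perception
        # ...and integrates the error into its output. No plan, no
        # precalculated trajectory, no knowledge of the disturbance.
        output += gain * error * dt
        history.append(perception)
    return history

h = run()
# The perception settles near the reference both before and after the
# disturbance arrives, even though the controller never represents it.
print(round(h[900], 2), round(h[-1], 2))
```

The point of the sketch is that the same few lines keep the perception at
the reference under conditions the system was never "programmed" for: change
the disturbance to anything slow enough and the loop still cancels it.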
I sent the first communication to your old MIT address, which was on the
paper. Then I sent it again when I found the Stanford address. And now
this. I'll drop it here and wait for your response, if any.