Subsumption vs. HPCT

[From Bill Powers (920630.0800)]

Oded Maler (920630) --

I think you misinterpret Brooks a little bit. As a roboticist he is
surely aware that the robot's world is just the perceptual space of
its sensors. I think all his work in recent years has driven AI-based
robotics much closer to the lines of HPCT.

I have a copy of Rodney Brooks' "Intelligence Without Reason" -- in fact
several, thanks to my friends. The one I'm looking at is marked "To appear:
IJCAI-91," so I've been aware of Brooks's work for some time.

I agree with many things Brooks says about AI, and with parts of his
alternative to it. I have agreed with some aspects of his approach since I
began this work nearly 40 years ago -- for example, behavior based on real-
time interaction with the world, the emergence of complex behavior from
simple interactions, the concept of parallelism, the idea of layering
(although different from his idea), the building up of complex behaviors
out of simple behaving units. I even agree with his assessment of
cybernetics, especially its "lack of mechanism for abstractly describing
behavior at a level below the complete behavior, so that an implementation
could reflect those simpler components."

But Brooks knows nothing about my work, or at least dismisses it (he never
cites it). My model has employed situatedness, embodiment, the dynamics of
interaction with the world, and emergence of intelligent-looking behavior
from interaction of components of the system, right from the beginning.
Brooks called these "a set of key ideas that have led to a new style of
Artificial Intelligence research which we are calling behavior-based
robots." If I were the one with a large budget and an academic position, I
would be tempted to dismiss Brooks as having reinvented a (somewhat square)
wheel. The main reason I can't do this (other than my natural sunny
temperament) is that I have never had any support for building robots for
fun; while I've long said that the environment is its own best simulation,
actually using it that way, by building behaving robots, has been beyond my
means. Of course I HAVE built robots, but nobody would recognize them as
such.

You have to realize how work like Brooks' looks to me. I understand that
from other points of view, it might seem that I would be encouraged to see
that my ideas are similar to those of a leader in today's explorations of
simulated behavior. From my point of view, however, Brooks is just starting
to catch on to an approach with which I have had decades of experience. He
still carries a lot of the baggage of old-style AI with him, despite his
revolt against it. He pays little attention to real human and animal
behavior, and so has missed most of the hints which it contains. He has not
even discovered the simplest principle of control behavior, which is that
the perceptual signal, not the act, is under control.
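
To make that principle concrete, here is a minimal sketch in Python (the
names and numbers are mine, purely illustrative): a single loop holds its
perceptual signal at the reference while its action does whatever is
needed to cancel an outside disturbance.

# Minimal illustrative sketch: the perceptual signal, not the act, is
# what stays at the reference. All names and numbers are hypothetical.

def run(reference=10.0, gain=50.0, slowing=0.03, steps=400):
    output = 0.0
    for t in range(steps):
        disturbance = 5.0 if t >= steps // 2 else 0.0  # world changes midway
        perception = output + disturbance   # sensed controlled quantity
        error = reference - perception
        output += slowing * (gain * error - output)    # leaky-integrator output
    return perception, output

perception, action = run()
print(f"perception = {perception:.2f} (reference 10.0), action = {action:.2f}")
# The perception ends near 10 both before and after the disturbance, but
# the action ends near 5, not 10: the act changed, the percept did not.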

A year or two ago, after seeing some of the PR material on subsumption
architecture in Science News and Science, I sent Brooks a long letter
inviting him to cooperate with the CSG and challenging him to a discussion
of issues. I sent him my Demo1 and Demo2 programs, and version 1 of the
Little Man. He did not object to anything I said or to anything in these
programs -- at least not to my knowledge, as he has never replied. Unless
he simply dropped everything in the wastebasket without looking at it, he
evidently found nothing of any interest in the letter or the programs. I
have given up pounding on closed doors.

Brooks lists under "Vistas" some "key research areas that need to be
addressed ..." The first is "Understanding the dynamics of how an
individual behavior couples with the environment via the robot's sensors
and actuators." In other words he's about to start looking into the
phenomena that will lead to re-inventing control theory, real soon now. He
could save himself a lot of time and trouble if he were interested in
seeing how others have solved his problem (starting some 50 years ago). But
I think he's not much interested in going about it that way. He's convinced
that he's doing something new and is way out ahead of the world. Well,
that's far from true.


-------------------------------------------------------------------

In particular, in section 6.2, when he describes the robot that picks up
the soda can, he says:

".. The hand has a grasp reflex that operated whenever something broke
an infra-red beam between the fingers. When the arm located a soda can
with its local sensors, it simply drove the hand so that the two >fingers

lined up on on either side of the can. The hand then >independently grasped
the can. ... As soon as it was grasped, the arm >retracted - it didn't
matter whether it was a soda can that was >intensionally grasped or one
that magically appeared."

In fact, it probably didn't matter whether it was a soda can or someone's
leg or a bug alighting on the infrared sensor. This is what I mean about
the designer putting too much of his intelligence into the robot -- or at
least into interpretations of what the robot is doing. This robot is billed
as "reliably picking up soda cans." The casual reader gets the idea that
this robot knows what a soda can is. It doesn't. Brooks implies that the
array of scanners was sufficient to identify a "soda-can-like object." If
that's so, he has achieved an enormous breakthrough in object recognition.
I rather suspect that all these soda cans were upright, located so they
couldn't be confused with a small book standing on a table, placed within
easy reach, contrasting strongly with the background, and set up so they
couldn't be knocked over too easily. I also doubt that when the hand
grasped the can, it could tell what object it was holding, if any. I'm just
imagining all that, of course.

"By carefully designing the rules," says Brooks, "Herbert was guaranteed,
with reasonably reliability, to retrace its path." But Herbert did not know
how to get back where it started; the rules said nothing about that. They
said things like "When passing through a door southbound, turn left."
There's the modeler's intelligence USING the robot to achieve an end that
the robot itself is helpless to achieve. This is what I mean by hanging
onto old-style AI concepts.
----------------------------------------------------------------------
My basic objection to the subsumption architecture is that it is enormously
wasteful of resources. Every new module starts from scratch, with raw
sensory data. When one module finishes its task and another sees an
opportunity to act, the new module actually has to turn off the old one and
take over completely. An "approach" module is turned off, and a "retract"
module takes over, because if they both worked at the same time they would
come into conflict. Why not a single "position" module, given a reference
signal adjustable over the whole range from approach through stasis to
withdrawal, using the same sensor and effector connections? Brooks has
broken behavior down into TASKS, with modules designed to perform each task
as an achievement in the objective world. Once any task has been
accomplished, the module assigned to it becomes useless -- to do a
different task, one has to design a new module from the ground up.
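
Here is a hypothetical sketch of what I mean (Python, toy numbers; nothing
here is taken from Brooks's code). One position controller serves for
approach, stasis, and withdrawal simply by being handed different
reference signals:

# Hypothetical sketch: one "position" module replaces separate approach
# and retract modules; only the reference signal changes.

def position_controller(perception, reference, gain=0.5):
    # output depends only on the error, not on any externally defined task
    return gain * (reference - perception)

distance = 100.0                      # sensed hand-to-can distance (toy units)
for reference in (10.0, 10.0, 80.0):  # approach ... hold ... withdraw
    for _ in range(40):
        distance += position_controller(distance, reference)
    print(f"reference = {reference:5.1f}   distance = {distance:6.2f}")
# Nothing is switched off and no second module is built: approach and
# withdrawal are the same loop run at different reference levels.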

In the HPCT model, the levels of control are used by higher levels; the
higher levels use not only the control capacities of the lower systems, but
in many cases the perceptual interpretations developed up to that level.
While each control system is specialized to control just one specific
variable, it is a general-purpose control system: it can be used in any
situation where that variable needs to be controlled, and control can be
set to occur around any reference level. Control is not organized around
objective "tasks," but around control of specific sensory variables,
independently of what external task is being performed. While the present
"pandemonium" concept does waste resources in a different way, this waste
may be unavoidable (in any model). But there is never any need, in the HPCT
model, to create a new perceptual function or a new output function that is
an exact duplicate of an existing one.

Once the first level of control in the HPCT model is reasonably complete,
it can be used by all subsequent levels in the performance of any behavior
of any kind for any purpose. In fact, it constitutes the ONLY MEANS by
which higher-level systems can act. Higher systems have no direct access to
sensors or actuators. This is the general case; once any level of control
system exists, it is the ONLY MEANS by which higher levels can control
their own perceptions.
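
A two-level toy sketch (again hypothetical, not a real implementation)
shows what ONLY MEANS means in practice: the higher system never touches
the actuator; its entire output is the reference it hands to the loop
below.

# Hypothetical two-level sketch: the higher system has no access to the
# actuator; its only output is the reference given to the lower loop.

def control_step(perception, reference, gain):
    return gain * (reference - perception)

hand, lower_reference = 0.0, 0.0
target = 50.0                        # where the can happens to be
for _ in range(300):
    separation = hand - target       # higher-order perception; want it zero
    # the higher level acts only by adjusting the lower reference (slower gain)
    lower_reference += control_step(separation, 0.0, gain=0.2)
    # the lower level is the only system that moves anything
    hand += control_step(hand, lower_reference, gain=0.5)
print(f"separation = {hand - target:.2f}, lower reference = {lower_reference:.2f}")
# The lower loop is reused unchanged no matter what the higher system wants.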

The subsumption architecture, therefore, contrary to Oded's comment, is
fundamentally different from the HPCT architecture -- far more
complex and entailing far more duplication of function. What hierarchical
relationships do exist consist mainly of turning one module off and another
one on. The system doing this switching would use, in the HPCT model, very
high-level functions in which it is all too easy to insert the
experimenter's knowledge of the world without giving that knowledge to the
model itself.

The HPCT model is oriented toward the control of variables in hierarchical
relationships, with objective effects of doing so being side-effects of no
concern to the model. The subsumption architecture is oriented toward
producing certain objective effects and relationships as seen from the
standpoint of the observer outside the robot; the robot is given whatever
instructions and task-achieving modules are needed to make an effect occur
in the perceptions of the observer. The HPCT model relies exclusively on
feedback control of perception. The subsumption model uses feedback in only
the crudest on-off way, and in many places employs the "SMPA" (sense-model-
plan-act) principle that Brooks calls a "bias" that has impeded AI
research. Brooks himself has characterized his robots as "stimulus-
response" devices. And indeed, for the most part, that is what they are. In
some places they accidentally incorporate control-system principles. But
there is no principled application of control theory.
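
The difference shows even in a toy comparison (hypothetical code, drawn
from neither architecture): an on-off module can only switch a fixed
action on or reverse it, while a continuous loop grades its output by the
error.

# Hypothetical contrast between on-off feedback and graded feedback.

def on_off_output(perception, reference):
    # crudest feedback: a fixed action, switched wholly on or reversed
    return 1.0 if perception < reference else -1.0

def graded_output(perception, reference, gain=0.3):
    # continuous feedback: output proportional to the error
    return gain * (reference - perception)

p_on_off = p_graded = 0.0
for _ in range(100):
    p_on_off += on_off_output(p_on_off, 10.0)
    p_graded += graded_output(p_graded, 10.0)
print(f"on-off: {p_on_off:.2f} (after chattering between 9 and 10); "
      f"graded: {p_graded:.2f}")
# The on-off loop can only overshoot and reverse; the graded loop settles.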

Brooks has a long way to go to catch up with HPCT, although others have a
longer way to go.
---------------------------------------------------------------
Best,

Bill P.