Accept no substitutes

I have only recently joined this discussion group, so I offer the following
remarks with the caution that I am only loosely acquainted with the agreed-upon
tenets of PCT. Nevertheless...

Rick Marken (930820.1430 PDT) argues that Albus' hierarchical control
architecture is not comparable to PCT. In particular, Rick claims that
Albus' system does not "control...PERCEPTION..." Let me offer a simple
portrayal of Albus' control approach that seems to challenge Rick's
assertions.

In general, as I understand the notion of perception, _any_ controller that
calculates its outputs to the controlled system as a result of comparing
_interpreted_ or _evaluated_ inputs to some internal reference standard can be
reasonably characterized as controlling perception. In Albus' system, for
example, the control loop at each hierarchical level is a model-reference
controller in which interpreted inputs are compared to reference values in the
model and outputs are produced to reduce the difference between the
interpreted inputs and the reference values. If one thinks of the interpreted
inputs as perceived conditions and the reference values as desired conditions,
then the control process surely seems to be accomplishing control of
perception in the sense with which I understand PCT to be concerned. In fact,
why wouldn't _any_ model-reference controller be an instance of perceptual
control? After all, such model-reference systems, including Albus', generate
their control outputs in response to "signals that represent potentially very
abstract [i.e., interpreted] aspects of the environment."
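
To make my claim concrete, here is a minimal sketch of the kind of loop I have
in mind. It is my own illustration, not drawn from Albus' designs, and the
interpretive function and gains are arbitrary assumptions: an interpreted
input is compared to an internal reference, and the output is computed so as
to reduce the difference.

    # Minimal sketch of a single control loop that compares an interpreted
    # (evaluated) input to an internal reference and acts to reduce the
    # difference.  The interpretive function and gains are illustrative.

    def interpret(raw_input):
        # stand-in for whatever processing turns raw input into an
        # interpreted or perceived condition
        return 2.0 * raw_input

    REFERENCE = 10.0    # desired (interpreted) condition
    output = 0.0
    plant = 0.0         # state of the controlled system

    for _ in range(50):
        plant += 0.5 * output                     # effect of output on the system
        perceived = interpret(plant)              # interpreted input
        output = 0.2 * (REFERENCE - perceived)    # act to reduce the difference

    print(round(interpret(plant), 2))             # settles at the reference, 10.0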

I suspect this comment will suffice to draw fire.

Perhaps I should introduce myself. My Ph.D. is in Psych (Stanford '73).
Since then my research has contributed to artificial intelligence, cognitive
science, and the systems and decision sciences. I have been particularly
involved with developing architectures for real-time problem solving and
intelligent control. Of late my students and I have begun to focus on models
of organizational problem solving. I joined the CSG list to see whether the
discussions of PCT can contribute to my work on topics such as these.

-- Prof. Michael Fehling, Director --
   Laboratory for Intelligent Systems
   Dept. of Engineering-Economic Systems
   321 Terman Center
   Stanford University
   Stanford, CA 94305-4025
     phone: (415) 723-0344
     fax: (415) 723-1614
     email: fehling@lis.stanford.edu

[From Rick Marken (930820.1430 PDT)]

Cliff Joslyn (930820) --

a description of the Sensory and Control Hierarchy
implemented by a guy named Albus
bears an uncanny similarity to PCT.

The similarity is quite illusory. Albus just missed one or two little
points about control, like the fact that a hierarchy of negative feedback
systems controls PERCEPTION (not output) and that higher level
systems control by specifying reference levels for lower level
perceptions (again, not outputs).
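
To see concretely what those two points mean, here is a toy two-level sketch.
It is my own illustration, not anything taken from Albus, and the variables
and gains are arbitrary: the lower loop controls a low-level perception, and
the higher loop controls a more abstract perception solely by adjusting the
lower loop's reference. Neither level computes its output from a model of the
environment.

    # Toy two-level hierarchy in the PCT style.  The higher loop controls
    # an abstract perception (here simply x + disturbance) by adjusting the
    # REFERENCE of the lower loop; the lower loop controls its perception
    # of x by acting on the environment.  All numbers are illustrative.

    HIGH_REF = 20.0        # what the higher system wants to perceive
    disturbance = 4.0      # uncontrolled influence on the higher perception

    x = 0.0                # environmental variable the lower loop acts on
    low_ref = 0.0          # reference signal sent down by the higher loop

    for _ in range(200):
        # higher level: compare its perception to HIGH_REF, adjust low_ref
        high_perception = x + disturbance
        low_ref += 0.1 * (HIGH_REF - high_perception)

        # lower level: compare its perception of x to low_ref, act on x
        low_error = low_ref - x
        x += 0.2 * (2.0 * low_error)      # output = gain * error

    print(round(x + disturbance, 2))      # higher perception reaches 20.0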

Each level takes input from a lower level sensory
signal (ultimately the environment), and compares it to the output of a
"world model" at that level, and finally passes it on to an action
subsystem at that level, which feeds to a lower level action system (also
ultimately the environment).

Welcome to the world of computed output. The "world model" determines
the effect of hypothetical actions on the environment. The lower level
sensory signal represents the actual effect of prior outputs based on this
model. New outputs are tried in the world model to see what might
result according to the model, which has been adjusted based on
knowledge of the effects of prior outputs -- the "sensory signals".
A PCT model doesn't need a "world model" because it controls a
perceptual "model" of the results it intends to produce, not the
outputs that will be used to produce that result. At best, the Albus
model adds an extra, useless step (the "world model" output computations)
to the process of control. At worst, the Albus model could never work in a
real environment.
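
To make the contrast concrete, here is a deliberately crude sketch. It is my
own caricature, not Albus' actual architecture, and the numbers are arbitrary.
The "computed output" controller inverts its world model once to calculate
the output that should produce the desired result, so it misses when the
model's gain is wrong; the closed-loop controller just keeps acting on the
difference between reference and perception and lands on the reference anyway.

    # Crude contrast between computed output (inverting a "world model")
    # and closed-loop control of a perception.  The environment has a gain
    # the world model gets wrong, plus a disturbance it knows nothing
    # about.  All numbers are illustrative.

    TRUE_GAIN, DISTURBANCE, REFERENCE = 0.6, 2.0, 10.0

    def environment(output):
        return TRUE_GAIN * output + DISTURBANCE

    # 1. Computed output: invert the (inaccurate) world model once and act.
    assumed_gain = 1.0
    computed_output = REFERENCE / assumed_gain
    print("computed output:", environment(computed_output))    # 8.0, misses 10.0

    # 2. Closed loop: keep reducing the error between reference and perception.
    output = 0.0
    for _ in range(100):
        perception = environment(output)
        output += 0.5 * (REFERENCE - perception)
    print("closed loop:", round(environment(output), 2))       # 10.0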

Your quote below confirmed my suspicions about how the Albus
model works:

"... Based upon current sensory information, the world models are
updated. The updated world models can then serve to modify the desired task
control actions until, at the lowest level, the necessary drive signals are
sent to the robot to initiate actions in the environment."

This is an output generation model, pure and simple. Albus has no notion
that a closed loop control system can be designed to control perceptual
signals that represent potentially very abstract aspects of the
environment, like "part-position orientation". The problem of control,
for Albus, is a problem of designing an accurate and adaptable "world
model" (another word for "world model" is "inverse kinematics"). The
problem of control, for PCT, is designing perceptual systems that can
produce signals that represent variations in the results to be controlled.
Solving the PCT control problem has proven to be far more successful
and computationally feasible than solving the "inverse kinematics"
problem.
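
A second toy sketch, again my own with an arbitrary perceptual function and
arbitrary gains, shows what I mean: the controlled perception is a nonlinear
function of the environmental variable, and the loop brings it to its
reference without ever inverting that function, that is, without solving
anything like an inverse kinematics problem.

    # Toy demonstration: control an "abstract" perception (a nonlinear
    # function of the environmental variable) purely by acting on the
    # error between reference and perception.  No inverse of the
    # perceptual function is ever computed.  Details are illustrative.

    import math

    def perceive(x):
        # abstract perceptual function of the environmental variable
        return math.sqrt(abs(x)) + 0.1 * x

    REFERENCE = 4.0
    x = 0.0

    for _ in range(200):
        error = REFERENCE - perceive(x)
        output = 3.0 * error      # output proportional to error
        x += 0.1 * output         # environment integrates the output

    print(round(perceive(x), 2))  # reaches 4.0; perceive() was never inverted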

Do any of you know anything about this?

I do know that Albus and Powers coincidentally and independently
published a series of articles on robotics in the same issues of Byte
magazine (1979). I was very new to PCT when I read the articles
(in 1979); in fact, it was my (successful) attempts to replicate
(on my Apple II) the experiments described by Powers in those Byte
articles that started my fall from conventional psychology to PCT.
But even at that time, just from reading the two sets of articles, it
was obvious which was the work of a genius. Bill's articles were clearly
written, addressed every detail of the model and described stuff you
could observe for yourself -- surprising stuff that really worked.

There is a good rule of thumb for determining whether anyone is
actually developing a model that is like PCT: if they are developing
such a model, then they are in the Control Systems Group. Albus is
not a member of CSG, nor are Carver and Scheier, Randall Beer, Rodney
Brooks, or others who claim (or have been claimed) to have a PCT-like
model of behavior. The reason for this (I suspect) is that the notion of
"control of perception" is a BIG disturbance to the way these modellers
think about behavior. If a modeller doesn't understand what is meant
by the statement "behavior is the control of perception" then
it is HIGHLY unlikely that he or she has a PCT-like model.

Remember: When modelling behavior, accept no substitutes. Always
specify the real thing: PCT.

Best

Rick

[From Chris Malcolm]

Rick Marken writes:

Cliff Joslyn (930820) --

a description of the Sensory and Control Hierarchy
implemented by a guy named Albus
bears an uncanny similarity to PCT.

The similarity is quite illusory. Albus just missed one or two little
points about control, like the fact that a hierarchy of negative feedback
systems controls PERCEPTION (not output) and that higher level
systems control by specifying reference levels for lower level
perceptions (again, not outputs).

Some of what Albus writes in "Brains, Behaviour, etc." is temptingly
suggestive of PCT, and some of it is temptingly suggestive of
Behaviour-based robotics (the paradigm exemplified by Brooks), but it
becomes clear when you study the details, esp of his actual designs of
systems, such as the NSF automated production line, or his
NASA-commissioned Mars Rover architecture, that he is really very
classical indeed, and his hierarchies are simply stacked classical robot
control systems. I suspect the confusion happens because he believes
that greatly extended hierarchical control is the magic ingredient
that will buy all the wonderful capabilities of biological systems, and
the more abstract parts of his writing suggest as much.

"... Based upon current sensory information, the world models are
updated. The updated world models can then serve to modify the desired task
control actions until, at the lowest level, the necessary drive signals are
sent to the robot to initiate actions in the environment."

This is an output generation model, pure and simple. Albus has no notion
that a closed loop control system can be designed to control perceptual
signals that represent potentially very abstract aspects of the
environment, like "part-position orientation". The problem of control,
for Albus, is a problem of designing an accurate and adaptable "world
model" (another word for "world model" is "inverse kinematics").

Solving the PCT control problem has proven to be far more successful
and computationally feasible than solving the "inverse kinematics"
problem.

Arm inverse kinematics is only one level, and a low one, of the
many-levelled system Albus envisages as controlling the assembly robot
of the future. His higher level world models are a good deal more
comprehensive and abstract than the arm kinematics.

There is a good rule of thumb for determining whether anyone is
actually developing a model that is like PCT: if they are developing
such a model, then they are in the Control Systems Group. Albus is
not a member of CSG, nor are Carver and Scheier, Randall Beer, Rodney
Brooks, or others who claim (or have been claimed) to have a PCT-like
model of behavior. The reason for this (I suspect) is that the notion of
"control of perception" is a BIG disturbance to the way these modellers
think about behavior. If a modeller doesn't understand what is meant
by the statement "behavior is the control of perception" then
it is HIGHLY unlikely that he or she has a PCT-like model.

Beer and Brooks themselves don't claim to have PCT-like models. It has
been claimed by others that their models are significantly closer to PCT
than classical robot architectures, which is, I think, an arguable
position.

I know that both Beer and Brooks are aware of PCT because I've discussed
it with them both, and they both have no problem with the general notion
of behaviour controlling perception, and like it. Not being sure whether
I properly understand PCT myself, I certainly can't judge whether they
do. I do know that they both consider it inappropriate for their own
particular research programmes.

Chris Malcolm
Dept of AI, Edinburgh University