Albus; good disturbances

Bill,

Sincere thanks for the extremely thoughtful and detailed discussion of your
views on Jim Albus' hierarchical control model and its (non)relation to PCT.
You've provided just what I hoped for with my reaction to Rick Marken's earlier
message. I want to spend some time digesting your message before replying to
specific points. For now, let me convey the following thoughts.
  First, please be advised that I was trying to stimulate a response
clarifying the stance of PCT vis-a-vis conventional control theory (including
AI and intelligent control engineering). I am not trying to sell anyone on
the virtues of Albus' model. Having studied it carefully, I, too, find it
suffers from some serious defects, though not entirely the ones to which you
and Rick Marken refer. On the other hand, Albus' model does typify the way in
which many AI'ers (e.g., Rod Brooks, Tom Dean), systems scientists (e.g., T.
Fu, Alex Meystel), and, most importantly, cognitive scientists (e.g., Herb
Simon, John Anderson) attempt to employ control concepts in their models of
cognition and behavior.
  Second, your experience with Jim Albus may signal less than you think. I've
had the pleasure of interacting with Jim Albus a few times in recent years.
My experience with Jim suggests that his demeanor toward you may have reflected
merely his reserved nature rather than any parochial attitude toward your
work. Also, he may have simply reacted with the conservatism that most
professional researchers tend to display when confronted with a new,
comprehensive theory with which they are unfamiliar.
  Third, let me assure you that I am acutely aware of your focus on "_living_
control systems" as opposed to "robotics or artificial control systems in
general." As it happens, I share that focal concern. It is what enticed me
to join this list. My life-long scientific interest is in (a) understanding
the phenomenon of intelligence as it arises in living systems, and (b)
employing our best understanding of intelligence to develop techniques that
support or enhance its manifestation in human action. However, it appears
that I may be more reluctant than you to declare a schism between those
interested in biological control and those concerned with the control of
(other) mechanisms. Herb Simon's notion of a "science of the artificial"
compels me to approach "design[ing] a control system that will accomplish
some [complex and ambiguously specified] task" as an effort to synthesize a
useful approximation
to the capabilities of a "living" intelligent system. (I wonder if you and I
are really all that far apart on the value of building artifacts that model
interesting notions of control.)
  Fourth, perhaps you can answer a question regarding the relevance of PCT to
my lab's current research focus on social/organizational systems. My group is
now developing a theory of organizational problem solving as a control system
that manages the "sensory effects of outputs" (to borrow your phrase).
Currently, this work builds upon ideas such as Butler's institutional theory
of organizational decision-making, and Jay Forrester's system-theoretic model
of organizational behavior. Neither of these approaches focuses sufficiently on
the closed-loop relationship between an organization and its physical and
social environment. On the other hand, PCT looks very appealing, judging from
my very limited exposure. So, can you tell me whether you or anyone else in
the PCT community has applied this theory to group/organizational/social
systems?
  Finally, let me again thank you for the instructive response to my comments.
Rest assured that I will study it carefully. If you have any specific
suggestions as to readings or any reprints that you could send me, I would
appreciate them. (I am already tracking down a copy of your book in the
Stanford library.)

- michael -

p.s. I enclose my full address again for convenience.

-- Prof. Michael Fehling, Director --
   Laboratory for Intelligent Systems
   Dept. of Engineering-Economic Systems
   321 Terman Center
   Stanford University
   Stanford, CA 94305-4025
     phone: (415) 723-0344
     fax: (415) 723-1614
     email: fehling@lis.stanford.edu

[From Bill Powers (930821.0815 MDT)]

Cliff Joslyn (930820), Michael Fehling (930820) --

Welcome to CSGnet, Michael! You say:

In general, as I understand the notion of perception, _any_
controller that calculates its outputs to the controlled system
as a result of comparing _interpreted_ or _evaluated_ inputs to
some internal reference standard can be reasonably
characterized as controlling perception. In Albus' system, for
example, the control loop at each hierarchical level is a
model-reference controller in which interpreted inputs are
compared to reference values in the model and outputs are
produced to reduce the difference between the interpreted
inputs and the reference values.

We have considered model-based control, but so far have not found
it necessary for the kind of modeling we do, and haven't really
tried out formal models of that kind. I will get a copy of the
article by Albus that Cliff mentioned. I met Albus in 1979 at a
conference on robotics sponsored by Carl Helmers, then editor of
Byte, and we had some discussions, but he didn't seem to think
much of my version of control theory. Polite but distant, you
know the story. At that time his model was in some ways like PCT,
but in others very different. I don't know if he's changed the
basic organization since then.

The basic problem with "computing output" is that it relies on
the environment being free of ongoing disturbances and
maintaining its response to actions accurately. There's a gray
area where you could say that even PCT systems compute output,
but the kind of computation is quite different from what Albus
was talking about in '79, and what other roboticists talk about
today. In the PCT model, the computation is simply whatever
output characteristic is involved in converting an error signal
into output signals, in real time. The output function is active
at the same time that the input function is working, and at the
same time that comparisons are taking place. Also, the output
computation is simple for a given control system because all that
has to be computed is a new reference signal for the next level
down, not a new state of the external environment.
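
To make that concrete, here is a bare-bones numerical sketch of such a
two-level loop (the gains, the integrating output function, and the leaky
environment are assumed values chosen only for illustration, not part of
any particular model):

    # Two-level sketch: each error is converted to output in real time,
    # and the higher system's output is simply the lower system's
    # reference signal. All numbers below are illustrative assumptions.
    dt = 0.01
    p = 0.0            # lower-level perceptual signal (controlled quantity)
    o_high = 0.0       # higher-level output = reference for the lower level
    r_high = 10.0      # reference signal given to the higher system
    k_high, k_low = 1.0, 5.0

    for step in range(1000):
        d = 3.0 if step >= 500 else 0.0    # arbitrary step disturbance
        e_high = r_high - p                # higher comparison (its perception
                                           # is taken to be p itself here)
        o_high += dt * k_high * e_high     # integrating output function
        r_low = o_high                     # output is just the lower reference
        e_low = r_low - p                  # lower-level comparison
        o_low = k_low * e_low              # proportional output to environment
        p += dt * (o_low + d - p)          # environment with a one-second lag

    print(round(p, 2))   # settles near 10.0 and holds it through the disturbance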

In a model-based control system as I know it, the reference
signal is not the model's output; it is given by higher systems
as usual, specifying the desired state of the perception. The
model is a representation of the response of all lower systems
and the environment to output signals from the control system.
The control system's output, which activates the model as well as
serving as a reference signal for lower systems, creates a
behavior of the "imagined" perceptual signal based on the model
of average properties of the combined lower systems and the
environment. If this representation is accurate, controlling the
perception of the model's behavior is equivalent to controlling
the perception obtained from lower systems and ultimately from
the environment. The real perceptual signal behaves just as the
model's output does, because the output signal entering the model
makes the model behave just as the same output signal entering
lower systems makes the perceived environment behave.

This kind of control system can do something that the PCT models
we use can't yet do: continue to operate when the feedback from
the environment is momentarily interrupted. Of course independent
external disturbances that occur while feedback is interrupted
can't be resisted; they aren't represented in the model. The
organism is then behaving open-loop with respect to the real
environment, although closed-loop with respect to the internal
model. This is like closing your eyes and walking across the room
to touch a wall. While your eyes are closed, you're imagining a
room and controlling a model of you in the room, with the output
signals activating both the model and the lower-level kinesthetic
systems that produce real walking. You can get pretty close this
way, although errors build up rapidly with time if you don't
encounter any landmarks you can feel along the way.
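
A rough numerical sketch of that arrangement (the gains, the first-order
model and environment, and the timing of events are assumptions chosen only
to illustrate the idea):

    # The output signal drives both an internal model and the real
    # environment; control runs on the real perception while it lasts
    # and on the model's "imagined" perception when feedback is cut off.
    # A disturbance arriving during the blackout goes unresisted.
    dt, k = 0.01, 5.0
    p_real = 0.0      # perception from the real environment
    p_model = 0.0     # imagined perception from the internal model
    o = 0.0           # output signal (integrating output function)
    r = 10.0          # reference from higher systems

    for step in range(2000):
        feedback_available = step < 1000     # the "lights go out" halfway through
        d = 2.0 if step >= 1200 else 0.0     # disturbance while the lights are out
        p = p_real if feedback_available else p_model
        o += dt * k * (r - p)                # act on whichever perception is in use
        p_model += dt * (o - p_model)        # model: average properties, no disturbance
        p_real  += dt * (o + d - p_real)     # real environment includes the disturbance

    print(round(p_model, 1), round(p_real, 1))   # model holds near 10.0; the real
                                                 # world ends up near 12.0, off by
                                                 # the unresisted disturbance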

The comparison of the real perception with the model's output
isn't very helpful when the real perception is present; you don't
really need the model then. But that comparison serves as a
continuous update of the model, by some process like
deconvolution that allows altering the parameters of the model in
a way that makes the model's output agree better with the actual
perception. It's possible, in fact, for the model to remain
turned on all of the time, if somehow the output of the model
consists of the model's behavior plus the difference between the
real perception and the model's behavior. Then you get a
perceptual signal that does reflect ongoing disturbances, yet
loss of the real perception would not prevent at least a limited-
time imitation of control behavior. I'm not sure what other
advantage this design would have.

The problem is that for this to work in a model of organisms, ALL
levels of control would have to work this way. The perceptions in
a higher-level control system are derived in large part from
perceptions already under control at lower levels. This means
that when the lights go out, visual signals are lost at ALL
levels, not just the highest one. So all the levels would have to
contain models of the visual world. Maybe they do. But maybe
there's a simpler explanation.

It's simpler to suppose that when the lights go out, the visual
control systems simply lose control, and you switch to a
different mode of control based on those systems that are still
working, the kinesthetic ones. If you lose kinesthetic control,
you're finished: models won't do any good. Anyhow, that looks
simpler (and more realistic) to me now. Fortunately we don't have
to decide this in advance. As we continue to develop models and
match them to real behavior (and whatever is learned about
neurology), we can try out all the possible models and see which
fits the best. If we run up against a situation in which only
model-based control will fit what we observe, then there'll be a
motive for exploring it further. Maybe then Albus' work will save
us a lot of effort. But that's a long way ahead.

It's important for newcomers to this net to realize that the CSG
(control systems group) is concerned with _living_ control
systems, organisms, not basically robotics or artificial control
systems in general. We aren't engineers trying to design a
control system that will accomplish some assigned task that can
be seen in the environment. We're trying to reverse-engineer a
complex control system of largely unknown construction that
already exists, and which behaves for its own benefit. This means
that we aren't trying to find just any old way that works for
controlling some variable in the environment, but the way that
organisms do it. We try to take the organism's point of view, not
the customer's.

Albus, in the only work I have at hand from 1979, takes the
external observer's viewpoint in defining what the control system
is doing. The emphasis in his approach is not on what the system
perceives, but on what its outputs actually do to the environment.
This does not have to have anything directly to do with what the
system perceives. In a hierarchical model of an organism's
behavior, he shows basically a command-driven structure. The
basic organization is that of a series of branching commands:

                           survival
                         /    |    \
                  feeding reproduction migration
                         /   /    \     \
                   fight  build nest  mate  care for young
                  /     |     \
               chase   bite   threaten

... and so on. This is an external observer's view, not the view
from inside the organism. It's largely a linguistic tree, a way
of categorizing and subcategorizing observations in greater and
greater detail. This is certainly not the organism's own view of
what it is doing or trying to get to happen. It's what an
external observer can see as consequences of the actions of an
organism. Albus talks a lot about output trajectories. The
feedback is used primarily as a way of correcting these objective
output trajectories.

In another diagram, Albus shows this arrangement:

  command from
  higher levels \
                 \                  table
                  a specific input -- of -- sum ---> output -->
                 /                  weights             |
  feedback      /                      ^                |
  from sensors                         |                v
                                    adjust <-------- [compare]
                                                         ^
                                                         |
                                                      desired
                             NOTE!! -------------->   output

Here it is clear that the reference signal specifies output, not
input: the assumption is that the system wants to create a
particular output. This is again the external point of view, that
of an observer who can see only the external consequences of
actions. For designing artificial systems, this arrangement is
OK, because there is an engineer standing by who knows what
effect of the output is required and who can twiddle the internal
characteristics of the control system until that effect is
produced. But an organism knows nothing of its own outputs; it
can sense only their effects, as inputs. The organism can't be
concerned about outputs; they're not part of the perceived world.
Only the _sensory effects_ of outputs exist for the living
control system. And there is no engineer standing by to make sure
the appropriate outputs are generated.

In the third part of the 1979 Byte series (August), Albus draws
his hierarchical system. It has a lot of resemblances to the HPCT
model, although he doesn't mention my previously published work.
However, his interpretations of the connections between
components and levels are quite different, and from my viewpoint
somewhat confused. He combines comparison and output in one box
(H) and puts comparison and perception in another (G):

                to higher systems                    word
                        |                          generated
                        |                             |
                        |                             v
                        |                            ---
                  word detected --> pitch error --> | H |
                        |                            ---
                        |                      pitch generator
                       ---                            |
                      | G | <-- predicted phoneme <---+
                       ---                            |
                        |                             v
                   from pitch                     to pitch
                   detectors                       systems

It's hard to see how a "pitch error" is generated from a signal
identified as showing that a word has been detected; there's
clearly a reference signal and comparator missing here. In fact,
the block diagram of which the above is a small part looks like a
working diagram, but on closer inspection it actually doesn't
specify enough to result in any particular behavior. How can one
line (pitch error) that simply branches off from a dot in the
middle of another line (word detected) be given a completely
different label? Clearly, a lot has been left out of this
diagram. I would like to give Albus the benefit of the doubt and
assume that he has in reserve a much more detailed version of
this diagram, and has programmed and run models to show that this
arrangement will actually do something. If so, I wonder why he
didn't mention the results, to show the skeptic that all these
pretty diagrams aren't just pipe-dreams.

[By the way, leafing through this 1979 Byte I came across a
Cromemco ad, saying "Low-cost hard disk computers are here. 11
megabytes of hard disk and 64K of fast RAM in a Z80 computer for
under $10K." Wow.]

In Part 4 (September), Albus shows his basic concept of control.
He talks about three basic modes of action:

1. Acting - the Task Execution Mode

2. Observing - the Sensory Analysis Mode

3. Imagination - the Free-Running Mode

So it's pretty clear that he doesn't (or didn't then) see
behavior as the control of perception! Behavior is executing a
task. Perception is analyzing sense-data in preparation for
acting. Albus says

"We must show our robots what each task is and how to do it. We
must lead them through in explicit detail, and teach them the
correct response for almost every situation."

This is a very different view of behavior from that of PCT --
although in the world of robots it may still be a practical view,
owing to the problem of giving robots sufficiently advanced
perceptual systems.

My impression of Albus in 1979 was that he talked a great game,
but was short on working models. Maybe he's gone past that arm-
waving stage in the ensuing 14 years. I wasn't impressed then
with the workability of the diagrams he drew. Maybe I would be
now. I'll check it out.


-------------------------------------------------------------

Dag Forssell (930820 1640) to Rick Marken (930820.1000 PDT)

You confuse me. Your "good disturbance" was to move a
satisfied, fat dumb and happy social scientist towards an
interest in PCT. You wanted your good disturbance to CREATE an
error signal - in a way YOU would call good from YOUR systems
concepts and values.

You guys are diddling with your own semantics. "Good" from the
standpoint of a controlling system is whatever reduces error,
isn't it?

Disturbances don't always make error greater. They can equally
well tend to move a perception toward its reference level,
helping the control system. In either case, they will be
resisted. If they make the error greater, the control system's
efforts will increase. If they make the error smaller, the
control system's efforts will decrease, which is equivalent to
adding an effort opposed to the "helping" disturbance.
Disturbances are resisted, whether they be "good" or "bad."
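
A tiny sketch shows the symmetry (the loop gain and the static environment
are assumed values only):

    # Steady state of a proportional loop: p = o + d with o = k * (r - p).
    # A "helping" disturbance lowers the system's own output by almost
    # exactly the amount the disturbance contributes; an opposing one
    # raises it. The perception barely moves either way.
    k = 100.0      # loop gain (assumed)
    r = 10.0       # reference level

    def settle(d):
        p = (k * r + d) / (1.0 + k)   # algebraic solution of the closed loop
        o = k * (r - p)               # output the system must produce
        return p, o

    for d in (-3.0, 0.0, 3.0):        # opposing, none, helping
        p, o = settle(d)
        print(f"d={d:+.0f}  p={p:.2f}  o={o:.2f}")
    # d=-3 -> o=12.87, d=+0 -> o=9.90, d=+3 -> o=6.93: the effort shifts by
    # about -d in every case, while p stays within 0.03 of 9.90.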
--------------------------------------------------------------
Best to all,

Bill P.