Abstract, concrete, HPCT

[From Bill Powers (920531.0700)]
Copy to CSGnet.

Greetings from CSGnet. My name is Bill Powers. I have just received a copy
of your delightful paper (with Agre), "Abstract reasoning as emergent from
concrete activity," from my nephew Avery Andrews, who is a linguist
residing in Australia. How is "Agre" pronounced? Is it "ah-gruh" or
"aeger?" Or something I haven't guessed? I think I can handle "Chapman."

There are some points of contact between your ideas and the basic model
that's behind CSGnet (a Bitnet-internet list). The CSG stands for "control
systems group", which is a collection of people (including many off the
net) from many disciplines who have taken up some ideas I developed (with
Clark and McFarland) in the 1950s, and have been working on since then.

There are three aspects of this theoretical framework.

One, called CT, or control theory, is just the basic body of theory
developed by control-system engineers in the 1930s and 40s to describe and
predict the behavior of closed-loop negative feedback systems --
servomechanisms, regulators, and such.

The second is PCT, or perceptual control theory, which is the adaptation of
CT to the universe of organismic behavior (starting with Wiener,
Rosenblueth, and Bigelow, but branching off quite early from cybernetics).
The basic idea behind PCT is that living control systems act to bring
perceptual representations of external variables into a match with
internally-specified reference signals, maintaining a near match
despite changes in the reference signals and the occurrence of external
disturbances tending to alter the perceptions. "Perception" is used in a
generic sense to mean all experiences from raw sensory input to abstract
representations. We talk about PCT when we mean to indicate only that some
perception is under control by behavior, the kind of perception being
secondary.

The third aspect is HPCT, meaning hierarchical perceptual control theory.
This is not really control theory per se, but an attempt to introduce facts
of experience and some neurological facts into the general model, to make
it specific to human experience and human architecture. I'm going to bore
you with a rather detailed description of this hierarchy, because unless
you understand it you won't see how it relates to your work.

The concept behind HPCT is a hierarchy that runs in two directions: a
perceptual hierarchy building upward, and a control hierarchy building
downward. A given level (containing many control systems) receives inputs
that are copies of perceptual signals of lower order, some under direct
control and some uncontrolled. A perceptual function in a specific control
system generates a new signal that represents a variable of a new type,
derived from lower-level perceptions (or sensors, of course, at the lowest
level). A comparator compares the state of this signal with a reference
signal received from systems of a higher level. The error signal resulting
from the comparison goes to an output function that ends up distributing
reference signals to control systems of the next lower level -- the same
level where the perceptual signals originated. Only the lowest level of
outputs generates muscle action.

So each level of system acts to match its own perceptual signals to
reference signals received from higher levels, and acts by means of varying
reference signals for systems at the next lower level. The result is a
hierarchy of goal-seeking and goal-maintaining control systems with many
systems at each level and many levels. The highest level of reference
signals has to be handled in a special way, of course, which I won't get
into here.
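
In case code is clearer than prose, here is a toy sketch of such a loop
in Python. The gains, and the use of integrating output functions, are my
illustrative choices, not claims of the theory; the point is only the
wiring: each level's output becomes the next lower level's reference
signal.

class ControlSystem:
    """One elementary control system: perceive, compare, act."""
    def __init__(self, gain):
        self.gain = gain
        self.output = 0.0

    def step(self, reference, perception):
        error = reference - perception        # comparator
        self.output += self.gain * error      # integrating output function
        return self.output

# Level 2 sets the reference signal for level 1; only level 1's output
# acts on the environment (standing in for muscle action).
level1 = ControlSystem(gain=0.5)
level2 = ControlSystem(gain=0.1)              # the higher level runs slower

env = 0.0
for t in range(200):
    disturbance = 3.0 if t >= 100 else 0.0
    p1 = env + disturbance                    # level-1 perceptual signal
    ref1 = level2.step(5.0, p1)               # level 2 controls a copy of p1
    env = level1.step(ref1, p1)               # level 1 acts on the world

print(round(p1, 2), round(env, 2))            # 5.0 2.0

Note what happens: after the disturbance appears, the perception returns
to the top-level reference (5.0) while the action settles at a DIFFERENT
value (2.0). More on that point below.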

The first level of perception is called the "intensity" level. The
perceptual signals at this level are generated by sensory nerve endings (a
perceptual signal is measured in impulses per second -- individual spikes
have no significance in this theory). Each first-order perceptual signal
therefore represents the intensity of stimulation in one sensory ending. As
neural signals vary only in magnitude (carried as a frequency), they are
one-dimensional: they represent how much stimulation is present, but not
what kind. So the first level of perception is a collection of millions of
signals representing pure magnitudes: essentially, positive numbers. This
first level of perception contains all possible information about an
external world, as far as the brain is concerned (meaning, of course, as
far as we are concerned). Some first-level perceptions are under direct
control: primarily, those representing muscle stretch and tension. We
experience these as "efforts."

Second-level perceptions are functions of sets of first-level perceptions.
The functions are probably weighted sums. The signals that result are
called "sensations"; sets of them form vectors in little subspaces made
of a few independently variable intensity signals. Taste, for example,
seems to be a
function of only four kinds of intensity signals. Color seems to be a
function of three kinds. Second-level sensations are controllable by
varying reference signals for those first-level perceptions that are under
control: muscle tensions. Most second-level sensations are not under
control. There are probably uncontrolled perceptions at every level,
although fewer at the higher levels.

Sensation-signals, just like intensity signals, can vary only in magnitude:
one signal can represent only how much of the particular sensation is
present, not what kind it is. The "kind" is determined by the weightings
applied to the intensity inputs in the perceptual function. So this is a
Pandemonium model: one control system controls only one kind of
perception, and controls it strictly with respect to its magnitude. This
holds true at all levels.
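
A one-line sketch of what I mean by a weighted sum (the weights are
invented for illustration; in Python again):

# A hypothetical "sweetness" sensation computed from four taste
# intensities. The weights determine the KIND of sensation; the
# resulting signal carries only a magnitude.
intensities = [0.8, 0.1, 0.3, 0.0]      # first-level signals (magnitudes)
weights     = [1.0, -0.2, -0.5, -0.7]   # fixed weights define the kind
sweetness   = sum(w * i for w, i in zip(weights, intensities))
print(sweetness)                        # 0.63: how much of it is present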

Third-level perceptions, functions of sets of sensation-signals, are called
"configuration" signals. I don't know the nature of these functions, or of
any perceptual functions from here on up. At this level, the world of
objects and static patterns comes into being. But there are also sound
configurations (phonemes, chords), tactile configurations (a squeeze), and
somatic configurations (internal feelings like nausea) -- all sensory
modalities are involved. A given configuration signal has a magnitude that
indicates the degree to which a given kind of configuration is present. One
signal can represent only one kind of configuration.

This is the perceptual world that we think of as consisting of "concrete
objects." You see where I'm going -- this is one of the lowest levels of
the same world you refer to as "concrete."

The next level is concerned with something like "transitions," which could
mean rates of change (like rate of spin) or partial derivatives and
integrals -- paths from one configuration to another. The shapes of paths
can be altered smoothly, as can the speed and direction with which paths
are traversed. You can traverse a path partway, stop, and reverse to the
starting configuration. So the control of transition-perceptions involves
at least the dimensions of shape, direction, and speed. The "shape"
dimension may simply be an underlying configuration perception.
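
In signal terms (my gloss, not a settled part of the model), the simplest
transition perceiver would be a rate-of-change detector:

# A crude transition signal: the rate of change of a lower-level
# (say, configuration) signal, approximated by a first difference.
def transition(p_now, p_prev, dt=0.01):
    return (p_now - p_prev) / dt

print(transition(1.05, 1.00))           # 5.0: a one-dimensional "speed"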

Next comes "events." An event is a unitary set of transitions,
configurations, sensations, and intensities perceived as a space-time
package. An example is "jumping." Below the level of events, the underlying
perceptions flow smoothly from one state to the next. At the event level we
make arbitrary divisions of this flow into sections that we perceive and
control as a single thing happening.

Above events are "relationships," which are derived from perceptions at the
event level on downward. Relationships are things like on, in, beside,
before, after, inside, outside, between, and so forth -- not as named, but
as perceived. Control of relationships is involved in most behaviors. The
means of control is to vary reference signals for events, transitions,
configurations, etc.
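
A toy illustration, with invented geometry: an "on" relationship could be
an analog signal, strong when one configuration's bottom meets another's
top:

# Degree of "on-ness" between two perceived configurations -- a
# magnitude, not a name (positions and scale are invented).
def on_ness(a_bottom, b_top, scale=0.1):
    return max(0.0, 1.0 - abs(a_bottom - b_top) / scale)

print(on_ness(1.00, 1.00))              # 1.0: fully "on"
print(on_ness(1.50, 1.00))              # 0.0: not "on" at all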

Above relationships are "categories." This is the first "digital" level:
all the levels below are basically analog. At the category level we
perceive different things as examples of the same thing: we perceive dogs
instead of individual instances of dogs. And at this level, I believe, we
begin to symbolize: substitute one representative perception for a class of
perceptions. The "representative" perception can be arbitrarily chosen: a
representative perception standing for many different configuration signals
that look different but are classified as the same might be the
configuration of marks that looks like this: "dog." A word is simply a
perception used as a symbol for -- used to indicate a category of -- other
perceptions, the symbol in this case being a visual configuration
perception. Any perception can be used as a symbol for any other perception
or class of perceptions. I am not, by the way, very satisfied with the
definition of this level, particularly the process of naming. There could be
a missing level.
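
A sketch of the digital character of this level (the membership list is
invented): a category signal fires when ANY of several distinct
lower-level perceptions is present.

# Three configuration signals that look nothing alike are classed as
# the same thing; the category signal is all-or-none.
DOG_CONFIGS = {"config_17", "config_42", "config_99"}

def dog_category(active_configs):
    return 1.0 if DOG_CONFIGS & active_configs else 0.0

print(dog_category({"config_42", "config_08"}))   # 1.0
print(dog_category({"config_08"}))                # 0.0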

The category level, once defined, leads to a re-evaluation of the lower
levels: we realize that lower level perceptions, in themselves, are neither
names nor categories. One of the difficulties in parsing experience into
levels of organization is that we often apply an inappropriately high level
of perception in trying to grasp the nature of a lower level. A
configuration perception, for example, would not be "a dog." It would be
that configuration, directly experienced, that we put into a category with
other configurations and refer to with a name, "dog." I think you allude to
this problem in your paper.

Above categories we have "sequence," or "ordering." I think this is also
what Common Lisp users mean by a "list." It is not the elements of the
list; it is the sense of "listness" or ordering itself. It is a perception
standing for a set of lower-order perceptions with regard to their sequence
of occurrence. It is NOT a "program," because it contains no choice-points.
A sequence is like a recipe: break two eggs into a bowl, stir well, add
milk, pour in frying pan, add bread, etc., with the final element being
called French toast. The elements of this sequence are categories of
relationships among events consisting of transitions from one configuration
to another, all built out of sensations having variable -- and controlled
-- intensities. There is control at each level, but the highest level of
control involves assuring that the perceived sequence is of a particular
recognizable kind.
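
As a sketch (the steps are invented, and grossly simplified): a sequence
perceiver reports whether its lower-level inputs arrived in the reference
order, nothing more.

# The perception is of the ORDERING itself, not of the elements.
RECIPE = ["break eggs", "stir", "add milk", "pour in pan", "add bread"]

def sequence_perceived(observed):
    return 1.0 if observed == RECIPE else 0.0

print(sequence_perceived(RECIPE))                  # 1.0
print(sequence_perceived(list(reversed(RECIPE))))  # 0.0: same elements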

Category-names in sequences become the elements of "programs." A program is
a network of choice-points. To perceive a program is to perceive a
particular recognizable network: not any one path through it, but the
entire module with all its branches at once. Each element in the network
can be anything from a sequence, a list, on down. This is the main level, I
think, where "abstract reasoning" takes place (although of course the
elements with which reasoning deals are sequences of symbols for categories
of ...).
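
A toy network of choice-points (the situation is borrowed from later in
this letter; everything here is invented for illustration):

# To PERCEIVE the program is to recognize this whole branching
# structure at once, not any single path through it.
def lock_the_door(bike_leaning, holding_groceries):
    if not bike_leaning:
        return "lean bike against wall"
    if holding_groceries:
        return "set groceries down"
    return "turn key"

print(lock_the_door(False, True))   # one path: "lean bike against wall"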

Above this level (!), I believe, is a level at which we perceive
"principles." Other words might be "generalizations" or "heuristics." These
are things that human beings have no trouble recognizing and controlling
for, but which we have not yet succeeded in getting hardware to do. We
can generate programs that are EXAMPLES of principles (successive
approximation, for example, which you mention), but those programs are not
the principles. Similarly, our names for principles are really names for
lower-level situations that constitute instances of principles, as a
particular set of sensations is an instance of a configuration, with other
sets of sensations being instances of the SAME configuration.

And finally, at the top (as far as I know now) we find "system concepts."
These are things like "physics" and "government" and "AI" and "self." They
are entities perceived as functions of sets of principles etc. The system
concepts for which we control determine what happens at all lower levels --
in general, although not, of course, in detail.

These levels were defined on the basis of subjective experience, but also
meet some communicable criteria for a hierarchical control relationship. A
perception at any level, if analyzed into elements other than smaller
perceptions of the same type, proves to consist of sets of perceptions of
the next lower level and of a different type. This is a subjective call, of
course, and my analysis might not exactly match someone else's. But so far
there seems to be pretty good agreement with others who have looked
critically at the same aspects of experience. I expect all the definitions
to change, eventually, as we explore them experimentally.

The other criterion is that in order to control a perception of any given
level (act to bring it to a specific state), it is necessary to VARY the
target-states of lower-level perceptions. To alter the visual configuration
we call (at the category level) "squareness" to make it a little less
square, we must alter the sensations that constitute its sides and corners.
CONTROLLING any given level of perception requires VARYING lower level
perceptions.

I think that my definitions of levels meet these criteria. The only way to
check this out, of course, is to look for yourself.

You have probably noticed that in this hierarchy of perceptions, the entire
world of experience, everything from the most concrete stimulus intensity
to the most abstract system concept, appears as a perception in the brain.
The "outside world" doesn't come into it at all. When you lean your bicycle
against the wall, you're controlling one configuration perception to bring
it into a specific perceived -- but not necessarily named -- relationship
with another configuration perception. When you worry about how to lock the
door without letting the bike fall and spill groceries everywhere, you're
sorting through sequence perceptions, trying to find one that will work (in
imagination, a subject we'll skip but that is in the model). And the
sorting is done in terms of the NAMES of CATEGORIES of lower-level
perceptions, these names becoming symbols that are handled by some sort of
logic, under control of principles such as "don't blow it."

What's going on in the outside world while you're controlling all these
levels of perceptions is a good question. I think it can be answered only
in terms of models of possible realities. What we experience consists of
neural signals.

Well, in a very small nutshell, that's HPCT. I haven't talked about the
logic of control, or the kinds of experiments one does to establish what in
fact is being controlled with respect to what reference state, but perhaps
this is enough to tell you that we may have some common interests. I've
probably given the impression that the theory is much better developed than
it really is, particularly at the higher levels. But in Big Picture terms,
perhaps you get the point. In a phrase whose use I'm trying to
discourage because it's turning into a slogan: it's all perception (and
control of perception).

Control theory says that control systems VARY their actions in order to
CONTROL their inputs. Not their outputs. What others see as controlled
output -- as behavior -- is really just an indirect effect of controlling
perceptions. Another way to say this is that control systems control
OUTCOMES rather than MEANS. This is why some of your buddies at MIT are on
the wrong track: they're trying to build models of motor behavior that
specify outputs, where the real system works by specifying inputs. They're
forgetting that between muscle tensions and their final effects are many
other unpredictable influences that also contribute to the outcome. Regular
outcomes can be produced only if they are sensed, and if control is
centered on matching what is sensed to some reference state. To produce the
same outcome twice, in the real environment, you must NOT produce the same
outPUT twice.
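
A toy contrast in Python (all numbers invented): one system replays the
output that worked last time; the other keeps varying its output to
control its input.

def run(disturbance, closed_loop):
    env, output = 0.0, 0.0
    for _ in range(100):
        perception = env + disturbance           # what the system senses
        if closed_loop:
            output += 0.5 * (5.0 - perception)   # vary output, control input
        else:
            output = 5.0                         # replay the "right" output
        env = output                             # the output's effect
    return perception

for d in (0.0, 3.0):
    print(d, round(run(d, False), 2), round(run(d, True), 2))

# Undisturbed, both end up sensing 5.0. Disturbed by 3.0, the output-
# specifying system senses 8.0; the input-controlling system still
# senses 5.0, because it quietly changed its output to 2.0.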

As you can guess, HPCT has a lot to say about AI. And a lot to learn from
it.

If you want to look in on our list, the listserver is at

listserv@vmd.cso.uiuc.edu (U. of Illinois)

Send the message (to the above address, not to me, as you no doubt know)

SUBSCRIBE CSG-L lastname, firstname, location

It's an open forum, and pretty active (a megabyte per month, sometimes).
You might find any subject at all being discussed, but all in terms of
control theory. Don't hesitate to start a new thread -- or to just listen
if that's your preference.

I think you may find HPCT a great tool for saying all those good things you
have to say.

Best,

Bill Powers

Hi, thank you for your messages. HPCT sounds interesting.

   How is "Agre" pronounced?

``Eigri,'' in continental orthography.

   Control theory says that control systems VARY their actions in order to
   CONTROL their inputs. Not their outputs. What others see as controlled
   output -- as behavior -- is really just an indirect effect of controlling
   perceptions. Another way to say this is that control systems control
   OUTCOMES rather than MEANS. This is why some of your buddies at MIT are on
   the wrong track: they're trying to build models of motor behavior that
   specify outputs, where the real system works by specifying inputs. They're
   forgetting that between muscle tensions and their final effects are many
   other unpredictable influences that also contribute to the outcome.

You might actually find it useful to look at the work of Chris
Atkeson, a roboticist at (I'm afraid to say) MIT, and his students,
particularly Eric Aboaf. They make this same point, and have a model
of skill learning that sounds similar to what you are suggesting here.
(Unfortunately I don't have any cites other than MIT tech reports; you
could write to Atkeson at cga@ai.mit.edu.)

   OK, Penni Sibun has now opened my mind to your "Pengi" article,

A longer and perhaps clearer exposition of this stuff is in an MIT
Press book (``Vision, Instruction, and Action,'' 1991).

   knows the properties of the environment and of the system being designed,
   and in creating the desired behavior travels freely back and forth across
   the boundary between Inside and Outside. So if the model doesn't "do" quite
   the right thing to the external world, the modeler steps inside the
   behaving system and tweaks it

The closest work in AI to addressing this issue is the literature on
``temporal difference'' learning. I've done a little work in that area
with an eye to improving Pengi-like models in the direction you
suggest, without any spectacular results so far. You might want to
look at the current issue of the journal Machine Learning, which I
believe is a special issue devoted to TD techniques. TD, btw, is pretty
closely related to both dynamic programming and classical control
theory.
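
In case you haven't run into it, the core TD update is tiny. A sketch in
Python, on a toy random walk (the task and constants are just for
illustration):

import random

# TD(0) on a five-state chain: terminal at both ends, reward 1.0 for
# exiting right. V[s] comes to estimate expected discounted return.
V = [0.0] * 5
alpha, gamma = 0.1, 0.9
for _ in range(5000):
    s = 2
    while 0 < s < 4:
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 4 else 0.0
        v_next = 0.0 if s2 in (0, 4) else V[s2]
        V[s] += alpha * (r + gamma * v_next - V[s])   # the TD(0) update
        s = s2

print([round(v, 2) for v in V])   # values rise toward the rewarded end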

David