[From Bill Powers (920902.0800)]
RE: Vision, Instruction, and Action; David Chapman.
Chapter 2: The concrete-situated approach.
2.1 Routineness.
"The routineness idealization holds that we can and should study
routine activity first, making only occasional reference to the novel
elements that are introduced from time to time into the course of
routine activity."
[By routine activity is meant such things as] "making breakfast,
driving to work, reading the paper, typing forms, giving the kids a
bath, and grocery shopping."
"Novel activity is possible only against a background of routine
competence. In fact it typically proceeds by assembling routine pieces
in novel patterns."
"The activity of animals up to the level of lower vertebrates is
wholly routine. ... if there is distinct neural machinery responsible
for novel activity, it must depend on the stable, previously evolved
machinery for routine activity."
---------------------------------------------
My remarks:
What is it that is repeated about a routine activity? Consider making
breakfast. One certainly does not have the same thing for breakfast
every morning, although there are certain little rituals like making
the coffee, even after switching to decaf has made drinking coffee
rather pointless. And unless you live alone, you never find things in
exactly the same place. You run out of things. You find the milk has
turned. There are only enough eggs for French Toast.
Nevertheless, at some level of perception, the "same thing" is
repeated every day: we (some of us) have breakfast. I suggest that the
level is _categories_. It isn't that in carrying out a routine one
repeats the same actions in the same relationships to the world each
time. It's that whatever actual relationships with the world are
created each morning, we categorize them in the same way, as "making
breakfast" or at least "having breakfast."
What is routine is that we select the same categories of activities
each morning, categories known by their names more than by the details
of what goes on. Category control requires only that whatever is going
on, it be perceived as belonging to the selected reference-category.
If someone offers you a chocolate sundae for breakfast, you turn it
down not because you don't like such things, but because "that's not
breakfast food." There is a category error, which you correct by
refusing the food. In England you might well be offered fish for
breakfast. Americans wouldn't normally consider a dish of fish to be
breakfast; they'd pick something else and try not to act revolted.
So routine activities are not fixed sequences or specific experiences;
they are activities perceived and named from the viewpoint of the
category level. Striking variations in the world can lead to category
errors just as disturbances can produce errors at any level of
organization. Control at the category level entails resetting
reference signals for lower levels of control: control of
relationship, event, transition, configuration, sensation, and
intensity (make the coffee strong). Those reference signals are
different every time we make breakfast.
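The cascade from category choice down to lower-level reference settings can be sketched numerically. Here is a minimal, illustrative two-level loop (the function name, gains, and numbers are all my own invention, nothing from Chapman): a higher-level system holds a fixed reference and acts only by resetting the reference signal of a lower-level loop, which varies its output to counter a disturbance.

```python
# Two-level control sketch (illustrative; gains and values invented).
# The higher level never acts on the world directly; it only resets
# the reference signal of the lower-level loop.

def run(steps=200, dt=0.1):
    higher_ref = 10.0   # fixed higher-level reference ("make it strong")
    lower_ref = 0.0     # reference signal handed down to the lower loop
    perceived = 0.0     # lower-level perceptual signal
    output = 0.0        # lower-level output acting on the world
    for t in range(steps):
        disturbance = 3.0 if t > steps // 2 else 0.0
        # higher level: adjust the lower reference to reduce its own error
        lower_ref += 0.5 * (higher_ref - perceived) * dt
        # lower level: integrate its error into output
        output += 5.0 * (lower_ref - perceived) * dt
        # environment: perception reflects output plus disturbance
        perceived = output + disturbance
    return perceived
```

When the disturbance arrives halfway through, the lower loop absorbs it and the higher level simply re-specifies what the lower loop should perceive; the higher-level result is restored even though every lower-level signal ends up different.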
I do not believe that the behavior of lower animals is strictly
routine. I think that even invertebrates are capable of varying their
detailed actions so as to maintain the same category of relationship
to the environment in the face of disturbances.
Back to Chapman.
---------------------------------------------------------------------
2.2 Situatedness
" ... an agent's most important resource in computing what to do is
its concrete situation. In driving, you are responsive to other cars,
to road signs, and to the geometry of the road. Pouring milk over your
breakfast cereal, your hand constantly adjusts the flow to prevent
milk from splashing off the irregular flakes and spilling out of the
bowl. ... concrete action without perception is virtually impossible."
"Situations change continually. Algorithms which formally solve your
problem are of no use if they terminate after the problem has changed
or solved itself or turned into a disaster."
"Real situations are immensely messy, always more complicated than any
representation of them can be."
"Real situations, in particular, make it hard to know exactly what the
outcome of your actions will be."
"Perception is your only access to the situation around you. You can
only see and hear things that happen nearby. Some things that matter
to you will always be unknown and unknowable. Algorithms which require
complete information are no use,"
"the concrete-situated approach addresses these four difficulties by
suggesting that agents
* Intimately interact with the changing situation.
* Represent only relevant aspects of the situation.
* Continually improvise and try out alternative ways of doing things.
* Ground representation and action in perception."
--------------------------------------------------------------------
My remarks:
This is straight control theory, with the exception of a few
throwbacks to the stimulus-response view (as in responding to other
cars and road signs). Chapman, however, does not know control theory.
He has noticed that it is hard to know what the outcome of an action
will be, but he has not noticed that we vary our actions to maintain
the outcome the same anyway. He speaks of interaction with the
environment, but doesn't see that this interaction is specifically one
of control; it is not haphazard, but aimed.
He recognizes the messiness of the real world, but not that we achieve
regular ends, in great detail, despite the messiness. He sees the
importance of perception, but he appears to see perception as
regulating behavior, rather than the other way around (grounding
representation and action in perception). He uses "representation" in
some specialized way that I don't understand, making it seem that
perception is something other than representation. Perhaps this is his
first glimmering of levels of perception. He does clearly understand
that perception is the only access to the external world, which is
fundamental to the understanding that behavior is the control of
perception, not of the external world.
The "concrete-situated" approach, of course, has been taken for
granted in PCT since its beginnings 39 years ago. This has made it
hard to understand what Chapman, Agre, et al. are going on about. I
have been looking for some deep significance in these words, when all
they are saying is that you have to consider how the environment and
organism affect each other, and that the organism must act in the
world as it is at any moment. To offer this as an alternative to the
traditional AI approach is really a devastating criticism of AI.
Back to Chapman.
-----------------------------------------------------------------------
2.3 Interactivity
" ... the organization of activity is emergent from interactions
between an agent and its environment, rather than being a property of
the agent (as in _mentalism_) or of the environment (as in
_behaviorism_). Causality rapidly loops in and out of the agent,
rather than looping around inside the agent's head and occasionally
emerging to affect the world ..."
"An agent that accepts the world as an equal partner in organizing its
activity does not continually try to force interactions to conform to
a preconceived idea of how things should go. That's futile; other
processes and agents in the world would constantly force you off
track."
"Locating the determination of activity in interaction implies viewing
life as a series of opportunities and contingencies; openings to
participate in particular sorts of activities and events that arise as
you do so. Life is not a series of problems, each handed to you as a
unit and requiring a unitary solution. Life is constantly ongoing. The
concrete-situated approach takes dancing and hanging out, rather than
solving the eight [-queens?] puzzle or designing a circuit, as
prototypical activities. These activities involve other people, who
are likely to make it hard to impose your solutions. On the other
hand, in these situations the other people will share the work of
making interaction simple."
----------------------------------------------------------------------
My remarks:
Noticing that interaction between a person and an environment is of
central importance is certainly in line with the PCT view. Even
noticing that "activities" depend for their details on vagaries of
external forces and other people is consonant with PCT. The
environment is full of disturbances.
But Chapman reveals his cultural background here. He is promoting a
particular world-view, rather like the Tao and quite like Maturana's
view, which says "go with the flow; the river flows by itself so don't
try to push it; we're just ships drifting under the influence of wind
and tide and where we end up is not up to us." There's a sense of
groupiness when he talks about other people that sounds very young to
me (but then what doesn't?). There's a sense of not wanting to face
the world alone -- or to realize that this is what everyone has to do,
by virtue of the relationship of the brain to what lies outside it.
Penni echoed this sentiment when she asked rhetorically, "Why figure
something out for yourself when you can ask someone else who's been
there?"
As is obvious from my participation on the net, I have nothing against
groupiness. It's nice. I seek it out. But one mustn't overlook the
fact that seeking out the group and the niceness of being with and in
it is a GOAL; if you don't decide that this is what you intend to do,
and if you don't overcome disturbances and obstacles that stand in the
way of doing it, you will not end up with the group. One of the
obstacles is the group itself. Groups are not known for reaching out
and trying to get people into them (evangelists aside). It's more the
other way around; they tend more often to reject outsiders, which is
one of the things that feels so nice when you get inside. When people
learn from those around them, which is the nice way of putting it,
they also regress toward the mean and stop thinking for themselves,
which is the nasty way of putting it.
The problem here is that not knowing PCT, Chapman doesn't realize that
even the most trivial of behaviors is an OUTCOME that would not occur
with any reliability unless ACTIONS varied, and unless they varied
SYSTEMATICALLY to oppose disturbances and in relation to a GOAL.
Organisms never just "hang out," if by that one means putting oneself
into a situation, social or otherwise, and just being carried along by
it. This simply doesn't happen. Even inside the coziest group, the
people have goals for their relations to others and for how others
treat them. Even just asking someone else a question implies that you
want an answer, or at least a response. If someone tells you a fact or
technique, that fact or technique will go in one ear and out the other
unless you have prepared a place for it, and want to adopt it as
yours, and do so for a reason, to suit a purpose of yours.
Chapman does realize that it's hard to impose your solutions on other
people. In PCT we say that you can't control other people, at least in
the long run (with reference to recent discussions of manipulation)
without resorting to overwhelming physical force.
There is, in what he says, a kernel of truth. We do not accomplish our
goals in the real world the hard way. Doing things the hard way not
only reduces the quality of control, but violates other goals like
avoiding physical exhaustion, avoiding offending people and arousing
their opposition, and avoiding inner conflicts and physical
contradictions. The optimum control organization is the one that
maintains the perceived world as one wants to perceive it with the
minimum necessary effort. And even an ordinary control process does
not produce extra effort when doing so would actually prevent control.
When a crosswind is blowing your car into a curve by just the right
amount, you don't help it by turning the steering wheel the same way.
This doesn't even require thought or principles: if you did act, the
car would deviate from the path you want. The steering control system
produces WHATEVER effort is required to keep the car in the turn,
including none. Of course when you come out of the turn, you hope that
the steering wheel is still connected.
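The zero-effort case can be demonstrated with a toy integrating control loop (the function name, gains, and numbers are invented for the sketch):

```python
# Steering sketch (illustrative; names and numbers invented): the loop
# produces WHATEVER effort keeps the car on the reference path --
# including none, when the crosswind supplies exactly the needed push.

def steer(reference, disturbance, steps=500, dt=0.01):
    position = 0.0   # perceived path position
    effort = 0.0     # steering output
    for _ in range(steps):
        error = reference - position
        effort += 50.0 * error * dt          # integrating output function
        position = effort + disturbance      # environment adds the wind
    return effort
```

With no wind, steer(2.0, 0.0) settles at an effort of 2.0; when the wind alone supplies the needed 2.0, steer(2.0, 2.0) settles at essentially zero effort, and the car follows the same path in both cases.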
Back to Chapman.
---------------------------------------------------------------------
2.4 Dynamics
"A _dynamic_ is a pattern in the interaction between an agent and its
world... The concrete situated approach prescribes interleaving study
of machinery and dynamics."
"Here is an example of dynamics... When you use a bowl, you put it
back at the top of the stack. Over time, the bowls that aren't used
often tend to sink to the bottom of the stack."
"* Dynamics are not causal agents...
"* A dynamic operates only _ceteris paribus_, when an unbounded set of
conditions holds. If you have a big dinner party, you may use all your
bowls and they'll end up back in the cupboard scrambled...
"* Dynamics are not processes in the agent's head. They are patterns
of interaction which may be noticed by a theorist. Dynamics are
typically not represented in [perceived by?] the agent...
"* A dynamic typically operates in many different domains. [Your
records may end up sorted as the bowls do].
"* Beneficial dynamics often arise without their having been intended
by the agents involved. This is not just an accident, but depends on
subtle facts about the structure of activity ..."
"The concrete-situated approach shifts explanatory focus from things
in the head to dynamics. We postulate new pieces of machinery only as
a last resort... Machinery parsimony is simply good engineering."
" We have found that the deeper your dynamic understanding, the less
machinery you need. Bits of machinery, when postulated, do not
subserve particular capabilities; the entire agent is applied to every
task..."
-----------------------------------------------------------------------
My remarks:
Chapman means by dynamics not differential equations as I would mean,
but properties of the external world and the people in it, as one
interacts with them. By "machinery" I take it that he means the design
of the behaving system, the model. I strongly support his views that
(a) no more machinery than necessary should be proposed, and (b) that
the machinery should be general-purpose, not designed _ad hoc_ to fit
every special circumstance. One can't always avoid this; we say that
the perceptual function of a configuration-control system produces a
perceptual signal that stands for a configuration. But with the
philosophy Chapman recommends, such ad-hoc chunks in the model are
marked for replacement as soon as possible. As we do.
I don't know the whole story on Chapman and Agre's ideas about
"beneficial dynamics," but the statement rings a bell. My concept of
reorganization provides a way in which unperceived benefits of
dynamics can influence behavior so that behavior comes to take
advantage of them. The key to the idea is the word "benefits." A
benefit must be an improvement in the state of the organism, somehow.
It need have neither a logical nor a known relationship to anything
the organism is controlling for in the CNS hierarchy. Through this
kind of reorganization, as opposed to algorithmic reorganization, it
is possible for behavior to reorganize at random until some quite
remote deficiency in the organism is remedied. There never does have
to be any knowledge in the CNS about WHY the change in behavior was
beneficial. So it is possible for very subtle aspects of "dynamics" to
result in behavioral organizations that take advantage of them, where
doing so is to the benefit of the organism as a whole.
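A minimal sketch of this kind of reorganization, in the style of E. coli chemotaxis (all names and numbers are invented for illustration): a behavioral parameter takes a new random step direction whenever "intrinsic error" grows, and keeps its current direction while error shrinks. Nothing in the loop represents WHY a direction is beneficial; only the trend of the error is sensed.

```python
import random

# E. coli-style reorganization sketch (illustrative; names and numbers
# invented). The parameter "tumbles" to a new random direction whenever
# intrinsic error increases, and coasts while error decreases. No
# gradient is computed and no knowledge of the error's source is used.

def reorganize(target=7.0, steps=3000, seed=1):
    rng = random.Random(seed)
    param = 0.0
    step = rng.uniform(-0.1, 0.1)
    prev_err = abs(target - param)
    for _ in range(steps):
        param += step
        err = abs(target - param)
        if err >= prev_err:                  # error grew: tumble
            step = rng.uniform(-0.1, 0.1)    # pick a new random direction
        prev_err = err
    return param
```

The parameter ends up hovering near the target even though the system never learns which changes helped or why; reducing the error is the only selective pressure.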
In comparing the PCT view with Chapman's, it's important to note that
when Chapman speaks of what the agent "knows" he doesn't make any
distinction between conscious and unconscious processes. The
perceptions of which he speaks are almost always those of which we
would normally be conscious. In PCT, of course, the term "perception"
simply means the presence of a neural signal in a perceptual channel;
nothing is implied about that perception being within the field of
awareness. So whether something is "perceived" or not tends to be
judged by Chapman in terms of whether one would be aware of it or not
-- which we see as only part of the story.
I'm not talking about "conscious" and "unconscious" in the Freudian
sense. Awareness tends to dwell on certain levels of perception. But
those perceptions, in the hierarchical model, could not exist if all
the perceptual functions at all the lower levels, right down to spinal
reflexes, were not generating their own perceptual signals as usual.
This is because each level of perception does not start from scratch
(as in Brooks' subsumption architecture) but derives its own type of
perception by applying input functions to perceptions that already
exist at lower levels. This is why the control hierarchy also acts
not directly on the world, but by varying the reference signals for
systems of lower levels, the same levels where the raw material for
the higher level of perception comes from.
The result is that a lot of what Chapman talks about as the dynamics
of "the world" would correspond, in PCT, to the dynamics of _lower
levels of perception and control_. The world that is perceived at
higher levels, where instruction and more complex interpretations take
place, is still inside the organism and part of the activity of the
machinery. To us, of course, it seems consciously to be "outside." But
knowing what is "outside" is not as easy as that.
One last sample from Chapman:
------------------------------------------------------------------------
2.5 Routines
"For us, the big question in designing an agent is, what sort of
machinery will engage in the sorts of routines we want? The
architecture we propose is not the only one that will engage in
routines; in fact almost any system interacting with an environment
will, because routines typically arise from physical determinism. If
you put the same agent into the same situation twice, it will do the
same thing. Even though real situations and real agents are never
exactly the same twice, people and places change only slowly. Your
desires, your daily routine, the arrangement of your office and
kitchen, your route to work, and your relationship with your office
mate, are all relatively constant. An agent's activity can be mainly
routine because the world is mainly routine."
----------------------------------------------------------------------
My remarks:
I already commented on routines at the beginning. Routines, when
repeated, repeat only as categories; the details don't repeat. Not
even in ordinary, simple, apparently fixed patterns. We simply
perceive what happens on repeated occasions as "the same thing"
because we are perceiving in terms of categories, not details.
When Chapman speaks of "doing the same thing" he isn't distinguishing,
as we would, between an action and what the action accomplishes. This
is strictly because of not knowing about PCT. It may be true that when
you put the same agent into the same situation, the agent will
ACCOMPLISH the same thing as last time. This would hold true even if
you put the agent into a different (but not WILDLY different)
situation. But it is never true that when the agent is in the same
situation as before, the agent's ACTIONS can be the same as before.
That is simply impossible. The agent can't repeat its initial
conditions, and neither can the situation. To a control system this
makes no difference. To a system that assumes that commanding the same
action as before will have the same result as before, it makes ALL the
difference. Even a minute, scarcely noticeable change in initial
conditions will throw even fairly proximal results off to the point
where nothing like the previous result is obtained.
This is the great glaring error of almost all the sciences of behavior
-- the assumption that repeated outcomes simply follow from repeated
actions. In a computer model, of course, in which we can predict
exactly what will follow from prior conditions and known operations,
and in which actions, too, are digitized and exactly repeatable, this
failure of causality at the output is never encountered. So you can
say that if Sonja aims at the monster, she will hit the monster. You
can say that if Sonja aims, her missiles will go where she aims. You
can say that if Sonja walks through an entrance and around a corner to
where an amulet is, she will not run into the wall or grasp at a place
12 inches to the right of the amulet. All you have to do in the
digital world is command an outcome, and it happens.
At one level of analysis this is OK. If control systems were used to
accomplish all these subgoals, the reference-position would indeed be
reached despite all the inherent variations and disturbances in the
real world. The hand might start moving in the wrong direction because
of a change in body position or a previous velocity, but by the time
it was extended it would be in exactly the position required for
picking up the amulet.
But at another level of analysis it is not OK. The system has to be
designed so it can in fact produce consistent outcomes at EVERY level,
despite realistic disturbances and uncertainties that always exist.
The PRINCIPLE of control has to be incorporated at every level, or a
real working model that has to function in reality instead of in a
computer will simply fail.
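The contrast can be put in a few lines (a toy sketch with invented numbers): replaying a recorded action sequence reproduces the outcome only if the initial conditions repeat exactly, while a control loop reproduces the OUTCOME from any nearby starting point by emitting different actions.

```python
# Toy contrast (invented numbers): replaying recorded actions is
# open-loop; a control loop recomputes each action from current error.

def open_loop(start, recorded_actions):
    pos = start
    for a in recorded_actions:
        pos += a                  # blindly replay each recorded action
    return pos

def closed_loop(start, goal, steps=100):
    pos = start
    actions = []
    for _ in range(steps):
        a = 0.2 * (goal - pos)    # action depends on the current error
        actions.append(a)
        pos += a
    return pos, actions

# Record the actions that reach 5.0 from a start of 0.0, then replay
# them after a small change in initial conditions:
_, recorded = closed_loop(0.0, 5.0)
missed = open_loop(0.3, recorded)        # ends at 5.3, off the mark
reached, _ = closed_loop(0.3, 5.0)       # ends at 5.0 with new actions
```

The replayed sequence carries the whole initial-condition error straight through to the result; the control loop cancels it by doing something different.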
I think that if Chapman were to learn the principles of control, he
could incorporate them into his models in a way that would make them
far more robust than they now can possibly be outside the world of
bits and precise computations. In many ways he is already using them,
but not in a systematic way, with understanding. What he is doing is
fundamentally useful; he is exploring the higher levels of control
organization where we have made little progress. He and Agre have
abandoned the sterile approach of conventional AI for all the right
reasons, and in doing so have gone a long way toward discovery of the
principles of control. But to take Penni's side for a change, why go
through that long painful process of reinventing the wheel, when you
can ask someone who's already been there?
-----------------------------------------------------------------------
Best,
Bill P.