Relevant to planning vs. practice, etc.
From pagre@weber.ucsd.edu Fri Jan 15 12:47:31 1993
Social Intention
by Phil Agre
Department of Communication D-003
University of California, San Diego
La Jolla, California 92093-0503
pagre@ucsd.edu
(619) 534-6328
Position paper for the IRCS Workshop on Intention in Action and Animation,
University of Pennsylvania, March 1993.
I'd like to complicate the notion of "intention". What do we "intend" to do
when we make breakfast, hail a cab, clean the house, or set the dinner table?
Computational theories of action have always offered a particularly simple
answer: we intend to take certain actions in service of certain goals, and
both the actions and the goals are written out -- for the designer if not for
the agent -- in some kind of representational language. I came to question
these ideas as I began to study anthropology, where the conceptual boundary
between "individual" and "social" levels of analysis is complicated and
contentious, and where as a result the notion of "action" becomes terribly
difficult to pin down. I think everyone will agree that our customs and
habits often have meanings and consequences beyond what we consciously
formulate for ourselves. Some anthropologists will argue that our conscious
intentions are basically beside the point, that culturally organized habits
operate through a kind of "calculation" wholly implicit in the social order.
Others will argue that people know perfectly well what they are doing,
and that the social order is an epiphenomenon of a multitude of rational
choices. Can computationalists help settle such questions? Should the real
complexities that lie behind such questions change the way computationalists
work? I honestly don't know. But let me tell you about a small attempt to
work out an answer.
In our paper "Cultural Support for Improvisation" at AAAI-92, Ian Horswill and
I described a simple model of routine activity involving cultural artifacts
such as kitchen implements, shop tools, and the like. On one level our result
was technical: we showed that a re-formalization of certain kinds of classical
planning in object-centered terms allows an agent to improvise correctly using
a trivially simple constant-time action-selection mechanism. The basic idea
is that some worlds can be represented using a separate little state-graph for
each object, so that a given fork might travel between the states of "clean
and dry", "messy", and "clean and wet" according to the agent's actions and
the effects of other energy sources (such as evaporation). The whole world
can then be represented as a cross-product of all the smaller state graphs.
(We didn't know at the time that this trick had already been invented by David
Harel, 1988.) Given this basic idea, we wrote down the state graphs for
every object in my kitchen, given all the actions one customarily performs on
those objects. These graphs, it turns out, are easily classified into about
three categories, each characterized by a different and rather simple graph
structure: "materials" have acyclic graphs which look like branching trees
(often just straight sequences); "tools" have cyclic graphs which
can always be brought back round to a "normal" state using simple "focused
action"; and "containers" are a little more complicated. The general problem
of achieving goals in such a world is wholly intractable, but the more
specific problem of achieving goals in a world that consists only of these
categories of objects is computationally very straightforward, for the simple
reason that subgoals hardly interact at all. More precisely, such a world
affords a simple planning abstraction hierarchy, with materials on the bottom,
tools on the top, and containers in-between.
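The per-object state graphs and the constant-time action selection described above can be sketched as follows. This is a minimal illustration, not Agre and Horswill's actual formalization: the fork's states and action names are assumptions drawn from the example in the text, and the search routine simply finds the first action on a shortest path within one object's small, fixed graph.

```python
from collections import deque

# Hypothetical state graph for a fork: each key is a state, each entry
# maps an action (or a background process such as evaporation) to the
# resulting state.  The world as a whole is the cross-product of all
# such per-object graphs, but the agent never has to search that
# cross-product because subgoals over distinct objects hardly interact.
FORK_GRAPH = {
    "clean and dry": {"use": "messy"},
    "messy":         {"wash": "clean and wet"},
    "clean and wet": {"dry": "clean and dry",
                      "wait": "clean and dry"},   # evaporation
}

def select_action(graph, state, goal):
    """Return the first action on a shortest path from `state` to `goal`
    in one object's graph (breadth-first search).  Because each graph is
    small and fixed, the cost is bounded by a constant independent of
    the size of the whole world.  Returns None if no action is needed
    or the goal is unreachable in this graph."""
    frontier = deque([(state, None)])   # (current state, first action taken)
    seen = {state}
    while frontier:
        s, first = frontier.popleft()
        if s == goal:
            return first
        for action, nxt in graph.get(s, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, first if first is not None else action))
    return None

print(select_action(FORK_GRAPH, "messy", "clean and dry"))  # -> "wash"
```

Treating each object independently like this is what makes the action-selection mechanism trivially cheap; the intractability only appears if one insists on planning over the cross-product of all the graphs at once.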
As technical results go, this one is not particularly complicated.
Indeed, some might argue that it's trivial: after all, anyone can solve a
planning problem once the subgoal interactions have been defined away. But
the point isn't that our mechanism performs thirty percent better on some
recognized problem. Quite the contrary, our point is that we've discovered
some structure in the world that should make us worry that those recognized
problems don't happen much in everyday life. They might well happen all the
time in industrial applications, of course, given that businesses must push
the boundaries of complexity and efficiency to a much greater extent than
ordinary people making breakfast. But if our goal is to explain what people
are like, then my own feeling is that we should start with an understanding
of the dynamics of their lives, and particularly of the properties of those
dynamics that might allow us to postulate simpler kinds of mechanisms -- and
indeed to explain why creatures such as ourselves are possible at all, given
the gross computational intractability of the standard formalizations of so
many everyday activities.
And one kind of regularity in our interactions with the world derives from the
artifacts we work with. These artifacts are almost always handed down through
the culture, and they regularly "encode" important kinds of wisdom, both about
conducting practical activities and about the larger social relations into
which those activities fit. In particular, the set of artifacts in a normal
kitchen seems "designed" to avoid interactions. The force of this point may
be obscured by the familiarity of our kitchen tools. This happens a lot in
anthropology: it's hard to understand the special qualities of familiar things
until you learn about the entirely different things that other cultures use
in different ways. It would certainly be an interesting exercise to look
at the kitchen utensils of the world; perhaps the computational properties
we've noted are cultural universals. Or maybe the story is more complicated
than that. How, for example, are we going to formalize bread? People keep
telling me that every culture in the world has some kind of starchy substance
like bread or tortillas that it uses as an edible food-wrap, so the boundary
between "tools" and "materials" gets crossed.
Many other elements are clearly missing from our story. We don't explain
where the tools come from -- sometimes the truth is lost to history, and other
times the truth concerns the complicated institutional arrangements that make
inventions like "post-its" seem easy in retrospect, but yet other times the
answer is that someone (for better or worse) engaged in a complex rational
analysis of an activity and attempted to optimize some artifacts accordingly.
We also don't explain the "handing down" of the ability to use these tools.
The "wisdom" I ascribed to the tools doesn't reside in the tools as such, of
course, but in the tools together with the practices for using these tools,
and in the social organization of learning -- apprenticeship, for example --
through which both the tools and the practices are handed down. This would
seem like a rich subject for computational research that might go beyond the
rather individualistic model of learning that began with the early concept
induction work.
And, finally, we don't explain the exceptions to the routine. These are
the most interesting for many people. They're interesting for us, too, but
probably in a different way from many people. Computational modeling research
in AI has usually been informed by the values of generality, so that the very
hardest cases are given highest priority -- after all, if people can do them
then our machines have to do them as well. My own view, though, is that it's
important to consider these "exceptions" in the context of the whole elaborate
background of routine against which they take place. We might well ask, which
exceptions ever occur, given the existing patterns of activity? What social
supports, such as the swapping of war stories, do people have for dealing
with them? Which exceptions do people ever really master?
But the most important question for our subject here is, how do these
exceptions force people to rethink their activities? We take on a multitude
of habits and customs and conventions and artifacts because they're just "what
one does". Rarely if ever do we "unpack" them to ask about the assumptions
they embody. Hitting an exception is one such occasion. This is a
matter for intuition: what proportion of our culture's hidden assumptions does
the average person ever have the occasion and the resources to unpack in a
lifetime? My own intuition is that the proportion is very small. Be this as
it may, this theory of incremental unpacking of the assumptions in cultural
forms leads us to a new, dynamic conception of intentions: what we intend
by our actions, on this view, does not depend on what we do, but rather on
our depth of understanding of what we do. "Intention", in other words, is a
moving target, shaped both by our culture and by our explicit articulations
of our culture. I'm not saying I know how to build computational models
that work this way, but it certainly sounds like a way in which computational
modeling might join into a larger conversation that currently goes on largely
without us.
Philip E. Agre and Ian Horswill, Cultural support for improvisation, AAAI-92.
David Harel, On visual formalisms, Communications of the ACM 31(5), 1988,
514-530.