I see I forgot to put the salutation on the letter I copied to CSGnet.
It was to David Chapman (and Philip Agre) at the Artificial Intelligence
Laboratory at MIT. Avery Andrews sent me their paper, "Abstract reasoning
as emergent from concrete activity," Proc. 1986 Workshop on Reasoning
about Actions & Plans, Timberline, OR. Penni Sibun's name was
on the copy -- maybe she can say how to get copies or where to find the
publication.
Bill Powers
i got the paper from one of the authors. the proceedings are
published, but i imagine they're hard to find. there is an
easier-to-find paper of the period, viz.,
Agre, P. and D. Chapman (1987), "Pengi: An Implementation of a Theory
of Activity." Proceedings of the Sixth National Conference on
Artificial Intelligence, Seattle, pp. 268-272.
the two papers should really be read together; i don't think the pengi
paper makes any sense at all w/o some sort of background (it's
extremely short and dense). since the abstract/emergent paper is hard
to find, i'd be happy to send the pair to anyone that's interested.
fyi, chapman is at teleos research in palo alto
(zvona@sail.stanford.edu) and agre is at ucsd (pagre@weber.ucsd.edu).
they did this stuff as gradstuds at mit; they've moved on somewhat to
other things, but pengi remains their most important (imho) and
certainly most famous work. w/ pengi, they managed to change the
course of a subfield of artificial intelligence (planning). v.
briefly, ai planning is the part of ai concerned w/ the issue
of how a system decides what to do and does it. the paradigm used to
be that
an agent selects a goal, builds a plan out of actions to achieve the
goal, and then executes the actions as specified by the plan. it was
generally assumed both that all action happened this way and that the
possibility that the world might change in important ways before the
plan was done was irrelevant. a&c managed to question these notions
forcefully enough that ai planning weenies have to at least pay lip
service to concepts of a fast-changing world and actions taken w/o
deliberation. the deeper points of the work--such as that an agent and
its world are mutually constructed and that it follows from this
mutual construction that the world constrains the choices an agent
has at any point, so that most of the time deciding what to do just
isn't a big deal (agre calls this ``leaning on the world'')--have been
largely missed.
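to make this concrete, here's a toy sketch (mine, not from either
paper; the names and numbers are made up) of the contrast in python.
the classical agent plans once from a snapshot and executes blind; the
reactive agent re-reads the world every step, so a drifting target
never leaves it running a stale plan:

    import random

    def plan_then_execute(pos, target, steps):
        # old paradigm: build the whole plan up front, then run it
        # without looking at the world again.
        plan = [1 if target > pos else -1] * abs(target - pos)
        for action in plan[:steps]:
            pos += action
            target += random.choice([-1, 0, 1])   # the world moves anyway
        return pos, target

    def react_each_step(pos, target, steps):
        # "leaning on the world": no stored plan to go stale; the
        # current situation dictates each next action.
        for _ in range(steps):
            pos += 1 if target > pos else (-1 if target < pos else 0)
            target += random.choice([-1, 0, 1])
        return pos, target

    random.seed(0)
    print(plan_then_execute(0, 10, 20))   # ends where the stale plan pointed
    print(react_each_step(0, 10, 20))     # stays near the drifting target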
--penni
[From Rick Marken (920601 09:00)]
penni sibun says:
the paradigm used to be that
an agent selects a goal, builds a plan out of actions to achieve the
goal, and then executes the actions as specified by the plan.
Yeah. But that's old hat. Now it's dynamical systems theory -- point
attractors and all that jazz. AI people come up with new ways to be
wrong faster than we can say "control of perception". Thus, our criticisms
of AI and "action theory" tend to be two years out of date all the time.
the deeper points of the work--such as that an agent and
its world are mutually constructed and that it follows from this
mutual construction that the world constrains the choices an agent
has at any point, so that most of the time deciding what to do just
isn't a big deal (agre calls this ``leaning on the world'')--have been
largely missed.
I can see why. What does it mean that an agent and its world are "mutually
constructed"??? How does one put this into a model that actually behaves
in the real world?? I have a feeling that the c&a paper is about
dynamical systems again. I bet those constraints imposed by the world on
an agent are like point attractors. I see the "mass spring" model of
purposeful behavior lurking in language like "leaning on the world".
Maybe you could just give a quick summary of how c&a would model some
simple, purposeful behavior -- like pointing at a moving target or taking
a sip of tea?
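For comparison, here is the sort of thing I mean by "control of
perception" -- a bare-bones control loop (a toy sketch of mine; the
gain and the target's motion are invented) that points at a moving
target without ever planning the movement:

    import math

    def track(steps=50, gain=0.2):
        hand = 0.0
        target = 0.0
        for t in range(steps):
            target = 10.0 * math.sin(0.1 * t)   # the target keeps moving
            perception = target - hand          # perceived gap: the controlled variable
            reference = 0.0                     # intended gap: none
            error = reference - perception
            hand -= gain * error                # output acts to cancel the error
        return hand, target

    print(track())   # the hand follows the moving target with a small lag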
Best regards
Rick
**************************************************************
Richard S. Marken USMail: 10459 Holman Ave
The Aerospace Corporation Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)
[From Oded Maler 920601]
[From Rick Marken (920601 09:00)]
I can see why. What does it mean that an agent and its world are "mutually
constructed"??? How does one put this into a model that actually behaves
in the real world?? I have a feeling that the c&a paper is about
dynamical systems again. I bet those constraints imposed by the world on
an agent are like point attractors. I see the "mass spring" model of
purposeful behavior lurking in language like "leaning on the world".
Rick, you are so predictable... I'm sure that at some level in your
hierarchy A&C and Beer evoke the same percepts. Anyway, your feeling
is incorrect: their work does not use the buzzwords of dynamical
systems.
Maybe you could just give a quick summary of how c&a would model some
simple, purposeful behavior -- like pointing at a moving target or taking
a sip of tea?
Since at the sequence/program level your favorite theory still does not
have much to offer beyond qualitative hand-waving, maybe don't be so
quick to dismiss others' attempts to approach such problems.
Best regards
--Oded