Planning and Agre's Cooking Problem

[from Avery Andrews (920902.0914)]

> I think Avery almost has it.

Spurred on by this, I'll dump the rest of this file. Before Penni
introduced me to Interactive AI, but after I started getting CSGNet,
it struck me that there was something very weird about the discussions
of the `frame problem' in my AI books, because they seemed to be
concerned with the problems faced by programs that attempted to imagine
everything that might happen in some domain in which they had no
practical experience.

By contrast, real planning, as carried out with some modicum of success
by real people (we have 4 people, 1 car, 7 hard commitments & 3 wishes --
how are we going to get thru Saturday?) is carried out in domains where
people have a lot of practical experience. Consider a typical operator
(= step in a plan). In addition to its desired effect, it will have
various side-effects, which can be grouped into three types (a short
sketch of this as a data structure follows the list):

  a) irrelevant (when I execute DRIVE(OWEN,BELCONNEN), the place on the
      driveway where the car is normally parked will get wet if it
      rains).

  b) relevant, but routinely controllable (as a result of the driving,
      the fuel level of the car will decrease). One of the effects of
      culture is to increase the routine controllability of side
      effects.

  c) relevant, but uncontrollable (while I am doing the driving, there
      won't be any car at home, so no other person can do any driving).
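
(A minimal sketch of the typology as a data structure, in Python; the
type names and the `is_clean' test are my own invention, just to make
the classification concrete:)

    from dataclasses import dataclass, field
    from enum import Enum

    class SideEffect(Enum):
        IRRELEVANT = "a"               # ignorable (wet driveway spot)
        ROUTINELY_CONTROLLABLE = "b"   # handled by routine (refuelling)
        UNCONTROLLABLE = "c"           # must be planned around

    @dataclass
    class Operator:
        name: str
        desired_effect: str
        side_effects: dict = field(default_factory=dict)

        def is_clean(self) -> bool:
            # `Clean' operators have no type (c) side-effects.
            return SideEffect.UNCONTROLLABLE not in self.side_effects.values()

    drive = Operator(
        name="DRIVE(OWEN,BELCONNEN)",
        desired_effect="driver and car are in Belconnen",
        side_effects={
            "parking spot on driveway gets wet if it rains": SideEffect.IRRELEVANT,
            "fuel level decreases": SideEffect.ROUTINELY_CONTROLLABLE,
            "no car at home meanwhile": SideEffect.UNCONTROLLABLE,
        },
    )
    assert not drive.is_clean()   # the type (c) effect makes it `dirty'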

So the real-world planner can find out from hearsay and experience what
the type (c) consequences of his/her actions are, and construct `clean'
sets of operators that have few such consequences. The general
intractability of planning means that real plans have to be quite short,
so that experienced and competent people would have large libraries of
operators, which might be individually quite complicated (like the
tricks that people make up to do the last layer of the Rubik's cube).
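
(To make the intractability point concrete, a back-of-the-envelope
illustration; the brute-force search model and the numbers are mine,
purely for scale:)

    # Exhaustive planning examines roughly b**d candidate sequences for
    # branching factor b (operators applicable at each step) and plan
    # length d, so shortening plans buys far more than shrinking the
    # operator library does.
    def search_cost(branching: int, depth: int) -> int:
        return branching ** depth

    print(search_cost(10, 8))   # 8 primitive steps: 100000000 sequences
    print(search_cost(40, 2))   # 2 macro-ops from a big library: 1600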

In terms of everyday life, it strikes me that errors are very often made
when a side-effect of an operator switches from type a) to type c).
E.g., when a family with two drivers is reduced from having two cars to
only one, they are constantly making plans which presuppose that another
car will be available when one has been driven off somewhere. It took
my wife and me a tremendously long time to stop making these kinds of
errors after we had had the use of a friend's car for six months (tho we
did usually manage to spot our mistake before anyone actually drove off
somewhere).

And, as Penni has been saying to me recently, a lot of the info about
useful operations doesn't have to be figured out at all - you can pick it
up by watching and listening, because it's just floating around in the
culture.

Avery.Andrews@anu.edu.au

[from Thomas Baines (92090909)]

Avery- I like the typology (irrelevant; relevant but routinely
          controllable; relevant but uncontrollable). Try this on. I am
          working on information systems which require different
          "granularities" at different levels of use. The person at the
          top doesn't want to be deluged, so he/she wants aggregation,
          but not loss of content. Those below want less aggregation,
          but not loss of the leadership's "vision" from the top. I'm
          looking at a model which puts everyone's information universe
          in a space that, in two dimensions, is a pyramid. The base of
          the shape is a time dimension (going right) and the height of
          the shape is a measure of granularity. People at the top
          require less granularity, so the height of their pyramid is
          greater than that of those below (showing that their
          perception is coarser; their minimum scale is greater).

          The shape is trisected, so that the bottom part represents
          those "inputs" that represent things which can be considered
          stable during the relevant time span. (The relevant time span
          is, by the way, a function of the time in which the perceiver
          can respond to the input.) The next portion up in the pyramid
          represents data that is predictably variable, by which I mean
          that it varies according to some pattern or routine KNOWN TO
          THE PERCEIVER. The top part of the pyramid represents
          information that the perceiver must treat as either random or
          varying in an unknown fashion, and which is therefore the part
          of the data spectrum faced by the perceiver that must receive
          the greatest "attention." Breakdowns in understanding of
          information among the layers of the organization are related
          to the distribution of data among the sections of the pyramids
          at each layer (lower layers having shorter time lines and
          lower "ceilings", because their perceptions are finer).

          I'm still working on getting this in PCT language.
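
          (A speculative sketch of the pyramid in Python; the numeric
          rule for "stable" and all the parameter names are my guesses
          at the intended model, not settled PCT:)

              from dataclasses import dataclass

              @dataclass
              class Layer:
                  name: str
                  response_time: float  # relevant time span, in days
                  min_scale: float      # granularity floor; larger at the top

                  def classify(self, changes_per_day: float,
                               pattern_known: bool) -> str:
                      # Fewer than one change within the layer's response
                      # window: the input can be treated as stable.
                      if changes_per_day * self.response_time < 1.0:
                          return "stable"
                      if pattern_known:
                          return "predictably variable"
                      return "random/unknown; gets the most attention"

              executive = Layer("executive", response_time=90.0, min_scale=10.0)
              line_crew = Layer("line crew", response_time=1.0, min_scale=0.1)

              # The same input lands in different bands at different layers:
              x = dict(changes_per_day=0.05, pattern_known=False)
              print(executive.classify(**x))  # random/unknown; gets the most attention
              print(line_crew.classify(**x))  # stable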