lawnmower men, imagination, reorganization

[Avery Andrews 930926.2004]
  (Bill Powers (930925.0800 MDT), Rick Marken (930925.2000))

Of course the lawnmower-repair story got motivated by Bill's admonition
to think about problems of a kind that people actually manage to solve,
plus a recollection that the guys who founded symbolic AI paid
quite a lot of attention to what went on in actual problem solving
(unfortunately of the school-desk rather than real-world type).
Getting a lead on what to notice is essential in this kind of
exercise, & I certainly felt that PCT concepts were pulling their
weight here.

Some more points on imagination.

1) People are much worse at imagining things from scratch than at
imagining variants of what's in front of them. E.g. trying to
imagine whether a piece of furniture in the living room will be able
to pass through a bend in the hall on the way to the bedroom is likely
to be a waste of time (if the fit is at all tight), but the judgment
is much more likely to be accurate when the furniture and the hall-bend
are both at hand in the same scene. To me this suggests that there is a
system of `operators' that can be applied to actual scenes (to calculate
what *this situation* would look like if *that chair* were moved
over there). Then in full-fledged imagination the operators
apply to imagined scenes rather than actual ones in front of you,
with a concomitant loss of accuracy and detail.
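The `operators on scenes' idea above can be sketched as a toy model. Everything here - the scene representation, the `move' operator, the object names - is my own illustrative assumption, not anything from the discussion itself:

```python
# Toy sketch: a scene is a mapping from objects to positions, and an
# operator transforms one scene into another without disturbing the
# original. Applied to a perceived scene, the operator is grounded in
# real detail; chained on already-imagined scenes, each further step
# works from a coarser representation (the loss-of-accuracy point).

def move(scene, obj, new_pos):
    """Operator: return a copy of `scene` with `obj` relocated."""
    updated = dict(scene)
    updated[obj] = new_pos
    return updated

# Operator applied to an actual scene in front of you:
perceived = {"chair": (0, 0), "table": (3, 1)}
imagined = move(perceived, "chair", (5, 2))

# Operator applied to an imagined scene (full-fledged imagination):
twice_imagined = move(imagined, "table", (0, 4))
```

The point of keeping the operator non-destructive is that the perceived scene stays available as the anchor while imagined variants are explored.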

A monkey can push a box that's in front of it to under a banana in
order to get the banana - I wonder if it could drag one in from an
adjoining cage (chimps apparently carry tools around, so I suspect that
they could). Transforming scenes by adding objects that aren't actually
in them but could be fetched from somewhere would be intermediate
between imagining rearrangements of what's there, and imaginings where
nothing in the imagined scene is actually visible.

2) David Marr has some ideas about higher-level representations
that might be useful - for example hierarchical representations
of animal shapes that might serve for modelling how one imagines
changes in posture, etc. I'm not aware that anyone has
developed this approach to the point where it would be plausible
for things in general (it's based on cylinders, and animals do
for the most part seem to be made out of cylinders). But it might
be a starter proposal for hybrid digital-analog modelling (there's
an article about this stuff by Jackendoff and Landau in a recent
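A rough sketch of what such a hierarchical cylinder representation might look like - the class, field names, and numbers are my own illustrative assumptions, not Marr's actual formalism:

```python
# Toy rendering of a Marr-style hierarchical shape model: a shape is a
# cylinder with named sub-shapes attached at given angles. "Imagining a
# change in posture" then reduces to editing an attachment angle rather
# than recomputing a whole image.

from dataclasses import dataclass, field

@dataclass
class Cylinder:
    name: str
    length: float
    radius: float
    parts: dict = field(default_factory=dict)  # name -> (Cylinder, angle)

arm = Cylinder("arm", 0.6, 0.05,
               parts={"forearm": (Cylinder("forearm", 0.5, 0.04), 170.0)})
body = Cylinder("torso", 0.8, 0.2, parts={"arm": (arm, 90.0)})

# Imagine raising the arm: change one attachment angle in the hierarchy.
sub, _ = body.parts["arm"]
body.parts["arm"] = (sub, 45.0)
```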

3) As I spent some time on last year, I think, people are especially
hopeless at imagining kinds of situations that they have little
or no experience of. A plausible reason for this is the existence of
what is called the `frame problem' in AI, which is the fact that when
you change some aspects of the representation of a situation, certain
other aspects must typically be changed concurrently, and others may
be left alone, and it is incredibly difficult either to figure out which
ones can be left alone, or to update everything in accord with a
reasonable system of rules.
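The frame problem can be illustrated in miniature. The facts and the action here are my own invented examples (echoing the lawnmower story), not a general solution:

```python
# Toy illustration of the frame problem: a situation is a set of facts,
# and an action updates only the facts it is known to affect. Everything
# not listed is silently carried over unchanged - and deciding which
# facts may safely be carried over is exactly the hard part.

situation = {
    "cord_through_hole": False,
    "mower_upside_down": True,
    "blade_attached": True,
}

def apply_action(situation, effects):
    """Return the situation after an action, given its known effects.
    The burden of the frame problem is hidden in choosing `effects`."""
    updated = dict(situation)
    updated.update(effects)
    return updated

after = apply_action(situation, {"cord_through_hole": True})
# "blade_attached" was assumed untouched; nothing in the update rule
# itself justifies that assumption.
```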

My conjecture is that this is just a fact about planning/imagination:
the frame problem is impossibly hard, so people don't actually solve it;
rather, through experience with various kinds of situations they learn
what can be left alone in the updates of the imagined situation that
follow from performing various actions, and what has to be recomputed
(and if much has to be recomputed, imagination doesn't work: what you
want is results, but what you get is consequences). By the time someone
actually has to fix a lawnmower, they are likely to have had at least
a decade of experience taking things apart and putting them together
again--a lot of time to learn what tends to follow from performing
what operation in this domain. I'm sure that I'd be far worse at
such tasks than I actually am if I hadn't had a summer of watching my
former stepfather solving this kind of problem on a farm.

A final point on reorganization - perhaps a fig-leaf for Fowler and
Turvey. In BCP, reorganization basically comes in as a
last resort - random restructuring when basic needs are unsatisfied.
This is clearly unsuitable as an explanation for how people manage
to solve the kind of problem they presented. In more recent PCT work,
reorganization seems to be a much more routine and directed sort
of affair. No intrinsic reference levels were affected by my initial
failure to shove the cord thru the hole. So the conception is rather
different here, and the reorganizations have to somehow be guided so
that they happen in the right area. I don't think there's necessarily
anything deeply mysterious here, but it does seem to be a shift from the
BCP model.
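The BCP-style "last resort" picture above can be sketched as random restructuring kept only when it reduces error. The error function, step size, and step count are my own illustrative assumptions, not a model anyone has proposed on the list:

```python
import random

# Minimal sketch of reorganization as random trial-and-keep: when error
# persists, randomly perturb the system's parameters and keep only
# perturbations that reduce the error. No gradient, no direction - just
# random restructuring filtered by its consequences.

def reorganize(params, error_of, steps=100, rng=None):
    rng = rng or random.Random(0)           # seeded for reproducibility
    err = error_of(params)
    for _ in range(steps):
        trial = [p + rng.gauss(0, 0.1) for p in params]
        trial_err = error_of(trial)
        if trial_err < err:                 # keep only improvements
            params, err = trial, trial_err
    return params, err

# Hypothetical "intrinsic error": squared distance from a set point.
target = [1.0, -2.0]
initial = [0.0, 0.0]
final, err = reorganize(
    initial, lambda p: sum((a - b) ** 2 for a, b in zip(p, target)))
```

The directed, routine sort of reorganization would differ precisely in how the perturbations get confined to the right area, which this sketch leaves open.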

Well, there's enough free association from me.