word-based control, reorganizing lawnmowers

[From Bill Powers (930926.0730 MDT)]

Avery Andrews (930926) --

   I had the bright idea of first tying a knot in the cord to
   keep it from retreating into the pulley mechanism again, and
   then another one to hold the little metal bar that sits
   transversely in the handle. It wasn't until I actually saw
   this arrangement, and saw that it sucked, that I realized it
   was hopeless. Imagination failed here.

Imagination worked perfectly well here. What failed was the
system USING imagination, and perhaps the store of experience on
which imagination draws. If you had skipped the imagination step
and had simply tied the knots, the arrangement wouldn't have
worked any better. Imagination simply lets you try out dumb ideas
where nobody else can see how dumb they are. I have had more dumb
ideas than anyone but me will ever know about.

Reorganization gives you NEW ideas; it doesn't give you GOOD NEW
ideas. When you struggle with a problem for a while and then
suddenly experience that wonderful "AH, I've got it" feeling,
you've gone only half the distance, or maybe 10%. The rest of the
job consists of finding out what's wrong with this wonderful new
idea. Most of the time this isn't hard to do, if you remember to
try. When you discover what's wrong, you reorganize some more,
and keep reorganizing, until finally you can't find anything
wrong with the result any more. The harder you try to find
something wrong with your own "insight," the harder other people
will have to try to find something wrong with it. What we're
looking for in science are ideas that NOBODY can find anything
wrong with. Those are the ideas that hang around for a while,
until finally someone _does_ find something wrong.

If you're the thick-skinned type, you can just try out every new
idea as it comes up, by actually doing whatever it calls for and
ignoring the laughter. But this is not only embarrassing, it can
cost a lot in effort and resources. Imagination can save a lot of
effort and resources. If you build up a pretty good model of how
things actually work, you can test many new ideas just by
imagining putting them into practice and noticing how much
imagined effort it takes to make them seem to work. The more
effort you need, the less likely it is that what you imagine will
actually work when you really try it.

The ultimate test, of course, is in really trying the idea.
That's when you find the holes and errors in your mental model.
Presumably, every such discovery modifies your mental model a
little, so the incorrect imaginings become fewer, and more
"great" new ideas get discarded for cause before you actually
have to try them out.

This can lead, of course, to "false negatives" -- you can discard
perfectly good ideas because your model's wrong. If you trust any
mental model too far without checking it closely against actual
experience with the world, you can end up with a fantasy instead
of a model. That's what's wrong with living too long with
abstract concepts.

   we have reasonably clear ideas about how imagination in
   symbolic mode might be made to work - it's called
   `exploring problem spaces'. The methods are basically
   those of logic, and thus go back way beyond the birth of
   digital computers--to Peirce, Boole and Aristotle.

This approach can achieve the best that can be done with
imagination: self-consistency of the mental model. It can't,
however, weed out self-consistent ideas that are contradictory to
the way actual experience works. You can start out exploring a
problem space by saying "For all lead balloons floating at an
altitude of 40,000 feet ...." and go on happily to derive the
theorems, the implications, and the conclusions. When you're done
you have many solutions to problems that exist only in an
imaginary universe. If you tried to test this imaginary model
against experience, it would crash at the first statement.

To make this more realistic, consider all the work that has been
done in a problem space defined as the set of all motor
activities that are functions of sensory stimulation.

   humans are the only creatures that talk ...

If you applied to human beings the same techniques used to
determine whether other animals talk, you would conclude that
human beings don't talk, either.

   and appear to be by far the best at solving problems in
   imagination, tho monkeys and apes seem to have significant
   abilities in this direction (as when they get the hanging
   banana by shifting a crate to under it, and standing on the
   crate).

Where monkey interests coincide with human interests, human
beings recognize monkey problem-solving when they see it. When
they see a monkey looking at limbs overhead, and then see the
monkey zoom up one tree, scamper along a limb, leap to a limb of
another tree, and pick a banana growing there, the human
observers see instinctive behavior that just happens to lead the
monkey to -- surprise, surprise -- a banana.

I think people waste an incredible amount of effort trying to
prove that they're unique, wonderful, blessed, incredibly
intelligent, and in general far above all other creatures in
every way. Including other human beings. Makes me wonder what
they're worried about.

--------------------------------------------
   To me this suggests that there is a system of `operators'
   that can be applied to actual scenes (to calculate what *this
   situation* would look like if *that chair* were moved over
   there). Then in full-fledged imagination the operators apply
   to imagined scenes rather than actual ones in front of you,
   with a concomitant loss of accuracy and detail.

I like this a lot, and it should be amenable to experimentation.

   As I spent some time on last year, I think, people are
   especially hopeless at imagining kinds of situations that
   they have little or no experience of. A plausible reason for
   this is the existence of what is called the `frame problem'
   in AI, which is the fact that when you change some aspects
   of the representation of a situation, certain other aspects
   must typically be changed concurrently, and others may be
   left alone, and it is incredibly difficult either to figure
   out which ones can be left alone, or to update everything
   in accord with a reasonable system of rules.

I've offered a simpler explanation: imagined perceptions are
replays of past experiences, manipulated by control systems in
the imagination mode (memory data being routed via a short-cut
into the perceptual channels instead of serving as reference
signals for lower systems).
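
For concreteness, here is a minimal sketch of that short-cut,
with all names and numbers invented for illustration: a simple
one-level proportional controller whose perceptual channel can
be fed either by real input or by a replayed memory trace.

def perceive(world_input, memory_signal, imagining):
    """The imagination switch: route a replayed memory signal
    into the perceptual channel in place of the real input."""
    return memory_signal if imagining else world_input

def control_step(perception, reference, gain=5.0):
    """One iteration of a simple proportional control loop."""
    error = reference - perception
    return gain * error    # output that would act on the world

# Replaying a remembered sequence of perceptions ("imagination
# mode"); no output ever reaches the world:
for remembered in [0.0, 0.3, 0.7, 1.0]:
    p = perceive(None, remembered, imagining=True)
    print(control_step(p, reference=1.0))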

This, too, suggests experiments. One would be to devise a medium-
complex control task in which only certain segments of a
continuous relationship are ever experienced during learning --
then see what happens when values of a lower-level reference
signal never experienced before are required. Would the person
automatically carry through the nonexperienced segments? Or would
behavior in those regions have to be learned from scratch?
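
Here is a hypothetical sketch of that protocol, with segment
boundaries invented for illustration: practice trials draw the
reference value only from two disjoint segments, while test
trials probe the never-experienced gap between them.

import random

# Practice trials draw reference values only from these segments:
PRACTICED_SEGMENTS = [(0.0, 0.3), (0.7, 1.0)]
# Test trials probe the gap never experienced during learning:
UNSEEN_SEGMENT = (0.3, 0.7)

def practice_reference():
    lo, hi = random.choice(PRACTICED_SEGMENTS)
    return random.uniform(lo, hi)

def test_reference():
    return random.uniform(*UNSEEN_SEGMENT)

# Comparing tracking error on practice vs. test trials would show
# whether control carries through the unseen segment or has to be
# learned from scratch.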

   My conjecture is that this is just a fact about
   planning/imagination: the frame problem is impossibly hard,
   so people don't actually solve it, but in their experience
   with various kinds of situations just learn what you can
   leave alone in the updates of the imagined situation that
   follow from performing various actions, and what has to be
   recomputed (and if much has to be, imagination doesn't work;
   therefore, what you want is results, and what you get is
   consequences).

I agree. And I love "What you want is results, what you get is
consequences."

   A final point on reorganization - perhaps a fig-leaf for
   Fowler and Turvey. In BCP, reorganization basically
   comes in as a last resort - random restructuring when basic
   needs are unsatisfied. This is clearly unsuitable as an
   explanation for how people manage to solve the kind of
   problem they presented.

I agree with you that reorganization is much closer to the
surface than I have previously implied. Continued error signals
in the hierarchy, whether produced by disturbances or by setting
new goals, seem to bring up reorganization immediately if control
is not immediately successful. This is a point for Martin Taylor,
who has insisted all along that reorganization has to be a
capacity of EVERY control system. As to the fig-leaf for Fowler
and Turvey, do they have any concept like a reorganizing system?
Or, perhaps, might we say that this is ALL they have?

   In more recent PCT work, reorganization seems to be a much
   more routine and directed sort of affair. No intrinsic
   reference levels were affected by my initial failure to shove
   the cord thru the hole.

Yes, but primarily because of giving much more importance to
error signals in the hierarchy (absolute value or mean squared
error) as members of the set of intrinsic signals.

Hey, how about this? Creativity is the process of setting the
reference signal for the absolute amount of intrinsic error
(maybe of just a few kinds) to a nonzero level. That would force
reorganization to start. You still need an underpinning of fixed
intrinsic reference levels to tie learning in with evolution, but
what if higher systems could learn deliberately to set nonzero
reference levels for some aspects of intrinsic error? That would
enable you to recognize that you need a new idea, and set the
machinery in motion to start producing new ideas at random, from
which you can select the good (i.e., workable) ones by trying
them out in action or in imagination. Would this work at all
levels, or only at the higher levels?

As to reorganization being "directed", its effects are still
RANDOM. All that directs it is selection -- that is, the
reduction of intrinsic error to zero, which results in retaining
the outcome of the last reorganization.
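
In code, the point might read like this (a sketch under my own
assumptions, not the E. coli model itself): the parameter changes
are random; the only "direction" is that changing stops when the
felt intrinsic error matches its reference level, so the outcome
of the last reorganization is what gets retained. Raising
error_reference above zero while control is good starts the loop
up again -- the creativity conjecture above.

import random

def reorganize(intrinsic_error, params, error_reference=0.0,
               tolerance=0.01, step=0.1, max_iter=100000):
    """While felt intrinsic error differs from its reference
    level, make random parameter changes; stop changing (and so
    retain the last change) when the discrepancy vanishes."""
    for _ in range(max_iter):
        if abs(intrinsic_error(params) - error_reference) <= tolerance:
            break                # selection: keep the last outcome
        params = [p + random.gauss(0.0, step) for p in params]
    return params

# Toy intrinsic-error function: squared distance of one parameter
# from the value that would make control work.
print(reorganize(lambda p: (p[0] - 1.0) ** 2, [0.0]))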
---------------------------------------------------------------
Rick Marken (930925.2000) --

   Anyway, one nice thing about the word-based approach to
   simulation is that it obviates the need to be able to
   do what PCT models will almost certainly not be able to
   do for some time -- model the perceptual functions
   of higher level variables.

With that, I think you've taken us a step closer to "qualitative
modeling," which has come up at infrequent intervals. David
Goldstein was the first to request this: how can you model these
complex behaviors of interest to psychotherapists, in a semi-
rigorous way, when you can't instrument the equivalent of control
sticks and cursors?

How about trying to devise scales of higher-level perceptions on
which points can be represented by a sequence of words? Suppose
you tried to define an underlying scale of, say, distance of
cursor from target using a preset vocabulary like

very far away
in the vicinity
near
close
very close
almost touching
against

Maybe you could get the words by asking a person to describe a
series of pictures in which the underlying continuous scale is
displayed in various evenly-distributed states. Then you turn
around and let the person point to those words (using a control
handle, what else?) as a way of indicating the position on the
scale.
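
As a concrete sketch -- vocabulary from above, breakpoints
invented -- the scale could be a simple quantizer, with an
inverse for reading a pointed-at word back into a number:

DISTANCE_WORDS = [        # (upper edge of band, word)
    (0.00, "against"),
    (0.05, "almost touching"),
    (0.15, "very close"),
    (0.30, "close"),
    (0.50, "near"),
    (0.75, "in the vicinity"),
    (1.00, "very far away"),
]

def distance_to_word(d):
    """Quantize a normalized distance (0..1) onto the scale."""
    for upper, word in DISTANCE_WORDS:
        if d <= upper:
            return word
    return DISTANCE_WORDS[-1][1]

def word_to_distance(word):
    """Read a pointed-at word back as the midpoint of its band."""
    lower = 0.0
    for upper, w in DISTANCE_WORDS:
        if w == word:
            return (lower + upper) / 2.0
        lower = upper
    raise ValueError(word)

print(distance_to_word(0.4))      # -> "near"
print(word_to_distance("close"))  # -> 0.225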

Ah, you could do the same for the action: let the person point to
words designating the degree of an action:

a slight touch
a light press
a firm press
a strong press
a very hard press
a maximum-effort press

There must have been work done on this -- ranking terms according
to the relative amount of the indicated experience.

If we could get the right experiment set up, it ought to work
just like the tracking experiments, except that the number scale
would be coarser.
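
A sketch of what such an experiment might look like, with all
parameters invented: an ordinary compensatory tracking loop,
except that the controller only ever sees the distance after it
has been collapsed onto a seven-step scale.

import random

def quantize(x, levels=7, span=1.0):
    """Collapse a continuous magnitude onto a coarse ordinal
    scale; return the representative value of its step."""
    frac = min(max(x / span, 0.0), 1.0)
    step = round(frac * (levels - 1))
    return step / (levels - 1) * span

def run_trial(gain=2.0, dt=0.1, steps=500):
    cursor, target, total = 0.0, 0.5, 0.0
    for _ in range(steps):
        target += random.gauss(0.0, 0.02)   # drifting disturbance
        distance = abs(target - cursor)
        perceived = quantize(distance)      # coarse, word-like scale
        direction = 1.0 if target > cursor else -1.0
        cursor += gain * perceived * direction * dt  # want distance 0
        total += distance
    return total / steps                    # mean true distance

print(run_trial())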
-------------------------------------------------------------
Best to all,

Bill P.