Info; frustration; sorry; hi-level control

[From Bill Powers (930410.2000 MDT)]

Avery Andrews 930410.1930

A lot of the heat in the info dispute has been generated in a
sort of slip-zone between the claims that information is
somewhere (p(t), for example), and claims that it does or
doesn't get used by something (like an organism or ECS).

I think this is still begging the question of which usage of
"information" is meant: the technical information-theoretic
usage, or the "everybody knows what information is -- it's
_information_" usage.

Nonetheless, the presence of the information is reflected in
the evolutionary potential of the situation: for such an
organism, some patch of skin might evolve into an eye, which
can extract at least some of the available information.

In the technical sense, information is a function of the range of
a variable and the number of discriminable states it might be in.
A more general definition brings in the probability that each
possible state might exist, which gets you into expressions that
look like the expressions describing entropy in physics. Bayesian
statistics introduces conditional probabilities, and so forth.
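
To make the technical sense concrete, here is a minimal sketch of
those two measures (the numbers are arbitrary examples, and the logs
are taken base 2 so the answers come out in bits):

    import math

    def info_uniform(num_states):
        # Information in bits when all discriminable states are equally likely
        return math.log2(num_states)

    def entropy(probs):
        # Shannon entropy, -SUM(p * log2(p)), in bits
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A variable with range R, resolved into discriminable steps of size r,
    # has R/r states, so its information content is log2(R/r):
    R, r = 10.0, 0.1
    print(info_uniform(R / r))           # log2(100), about 6.6 bits
    print(entropy([0.5, 0.25, 0.25]))    # 1.5 bits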

Now, are you saying that there is an evolutionary potential for
the organism to become sensitive to log(R/r), or to
SUM(p*log(p)), or to H(D|P)? Or do you mean that there is a
potential for signals to arise inside the organism which are
analogs of physical variables outside it? The latter, as Cliff
Joslyn pointed out with his usual lucidity, is the _semantic_
meaning of "information," with which information theory does not
deal (but takes as given). This semantic meaning is exactly what
PCT deals with, directly, without using the measures or theorems
of formal information theory.

When you use "information" in an unqualified sense, you could
mean either thing.


--------------------------------------------------------------
Greg Williams (930410) --

I'd characterize the second part of the experiment that you describe
as a conflict experiment.

In the first part, the rats successfully solve the choice problem
when the consequences are randomized. Being unable to control for
the previous variable (landing safely), they cease to try to
control for it, and simply control for avoiding the penalty of
hitting a closed door. The "period of increased variability"
before this solution is found is interesting, because that is one
of the signs I would expect when reorganization starts. See
Robertson's paper on the complex learning experiment.

But then the experimenter disturbs another controlled variable:
the avoidance of pain or discomfort. To avoid the puff of air,
the shock, or the prod, the rats must leap off the pedestal,
which carries its own penalty. So they now have two goals,
neither of which can be satisfied. This is a situation that I
have speculated is very likely to result in reorganization.
However, as the rats are repeatedly returned to this situation
and there is no new organization that will resolve the conflict,
the conflict persists, and reorganization can only become more
and more intense. Some of the rats may become catatonic, turning
off all inputs. Some may simply go into a frenzy of
reorganization and lose their motor coordination altogether,
never arriving at any solution and beginning to reorganize so
rapidly that even a successful solution doesn't remain in effect
long enough to reduce the intrinsic error.

One effect of higher-level reorganization might well be to run
through the repertoire of existing lower-level control processes,
searching at random for a solution. One possible solution would
be an "overwhelming" attack on the source of the prod, the shock,
or the puff of air. Another would be to jump into areas of the
enclosure that are free of disturbances, but don't entail
smashing into a locked door. When all these selections fail, as I
am sure the experimenter will ensure, the final recourse must be
not to behave at all, no matter what happens. Any behavior simply
costs energy and has no effect in reducing intrinsic error. So I
think I would predict catatonia as the final stage -- either
that, or an uncoordinated frenzy of activity totally at random,
until exhaustion, convulsions, or death ensued.

I have heard of rats in cages with electrified floors learning to
hop in great bounds instead of walking, minimizing the time of
contact with the wires on the floor. I think if you just put
yourself in the rat's place and figure out what will lead to the
least intrinsic error, you can predict most of the things that
will be tried (by at least some of the rats). Reorganization is
not rational, so it can result in trying any crazy thing that
happens to work.

We bombed out on getting Maier's book from Ft. Lewis library: it
was listed as not checked out, but wasn't to be found anywhere.
Probably been missing for 20 years. We'll look elsewhere.

One thing I'm pretty sure of: what happens in these experiments
isn't caused by "frustration."
--------------------------------------------------------------
Sorry about horning in on your private comments to Gary -- when
you said "To all those who appear SINGLE-minded in their view of
science (i.e., population description isn't "real")" I thought
you included me in that, and not having noticed that this post
was intended to be for Gary alone, I answered it.

I tend to agree with Gary, anyway: but what you're doing now with
Maier's study is taking us in the right direction.

I hope that when you get to the studies themselves, you will try
to pick out the kind of data I'm most interested in: how many
animals showed the predicted result, how many didn't, and how
many were equivocal.
---------------------------------------------------------------
Dick Robertson (930409) --

I saw at once that my question was nonsensical in regard to
first order systems, since their output simply increases or
decreases to combat error in the loop. But what about higher
order systems, where it would seem that output "selects"
different variables in the system(s) below?

This is worth thinking about. In higher-level systems, especially
those that seem to switch between qualitatively different
behaviors, how does an error signal get turned into a choice of
lower-level behavior that will tend to reduce the error signal?

If we're going to have a higher-level control system at all, it
has to be organized to behave in some systematic way, unless
we're going to say that all higher-level control is trial-and-
error. Systematic control means that error very reliably produces
a change in lower-order behavior that tends strongly to reduce
the error. This can mean only that there is a systematic
relationship between the amount and direction of error and the
choice of lower-level system to use.

One way to think about this would be to propose a mapping scheme,
in which each value of the error addresses the map, and locates a
connection that previously made THAT error smaller. The map
itself would be built up through reorganization. Once a
successful map has been constructed, then for each possible
error, the output would be routed to the control system that
specifically can reduce that amount of that error.
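
A minimal sketch of such a map (the thresholds and system names here
are purely hypothetical; building the map itself is the job of
reorganization, which isn't modeled):

    def route_error(error, error_map):
        # error_map: (threshold, lower_system) pairs in increasing order of
        # threshold -- the connections that reorganization has left behind.
        # The current error magnitude "addresses" the map and selects the
        # lower-level system that previously reduced an error of that size.
        for threshold, system in error_map:
            if abs(error) <= threshold:
                return system
        return error_map[-1][1]          # errors beyond the last threshold

    error_map = [(1.0, "system_A"), (5.0, "system_B"), (20.0, "system_C")]
    print(route_error(0.3, error_map))   # -> system_A
    print(route_error(7.5, error_map))   # -> system_C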

I've touched on this idea in talking about the correction of
linguistic errors. When there's a grammatical error, we either
correct it immediately or never correct it. To correct it
immediately implies that we have had experience with correcting
that kind of error, and that the error leads directly to lower-
level actions that result in its correction. This does imply some
sort of mapping of THIS kind of error into THAT kind of
corrective action.

In the simplest case, of course, the "map" is just a single set
of permanent connections, which route the error to serve as
reference signals in the lower systems that contribute to the
higher-level perception being controlled. Once the right
connections have been found, the higher-level perception is
permanently under control and no further switching of connections
is needed (that is, the intrinsic error that produced the
switching through reorganization is now maintained at a low
level). This is the sort of multilevel control that Rick and I
have explored -- without the stage of reorganization, which takes
place mostly in Rick and me as we fiddle with the model until it
starts to work right. To introduce reorganization, we'd have to
program in the fiddling part. This would be useful -- it would
suggest what has to be there permanently to make the fiddling
likely to work.
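
For concreteness, here is a minimal two-level sketch of that "single
set of permanent connections" case -- the higher error simply feeds
the lower reference signal. The gains and the trivial environment
are illustrative only, not the model Rick and I have been running:

    dt = 0.01
    higher_ref = 10.0          # reference for the higher-level perception
    higher_out = 0.0
    lower_out = 0.0
    env = 0.0                  # environmental variable moved by the lower output

    for step in range(2000):
        # Higher level: its (integrated) error becomes the lower reference
        higher_err = higher_ref - env
        higher_out += 2.0 * higher_err * dt
        lower_ref = higher_out

        # Lower level: ordinary integrating control of the same variable
        lower_err = lower_ref - env
        lower_out += 5.0 * lower_err * dt
        env = lower_out        # no disturbance in this sketch

    print(round(env, 2))       # settles at the higher-level reference, 10.0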

Perhaps one avenue to explore would be to look for systematic
relationships between higher-level errors and the type of action
chosen to correct them. It might well be that the actions would
sort out into some continuum, so that larger errors imply
choosing actions farther toward one end of the continuum, and
smaller ones actions toward the other end. These actions might
differ qualitatively, yet be ordered in terms of their
effectiveness in correcting errors of different sizes in the same
higher-level variable.

At the lower levels, changes in the sizes of errors simply show
up as changes in the magnitudes of lower-level reference signals.
But at higher levels, changes in the size of an error might, for
example, result in switching in an orderly way through a set of
categories of control. When you try to loosen a bolt, at first
the error created by the bolt's not turning as specified by the
reference signal simply results in applying more muscle force.
When that fails, you might put a pipe extension on the handle of
the wrench; then you might pound repetitively on the pipe with a
hammer; then you might try pouring some Coke on the frozen
threads and waiting for an hour. Then you might get out the drill.

These various actions are qualitatively different, yet they
represent a continuum that is related to the size (and duration)
of the error. They are ordered in the dimension of increasing
ability to overcome a stubborn stuck bolt, and increasing
duration of the error. Each one entails costs -- exerting a
painful amount of effort, having to switch to a new method,
risking damage to the components, having to wait, having to
partly destroy the components. So other goals are involved, and
one would rather use the least extreme method. I suppose that the
tendency is to use the method that induces the least error into
other control processes.

I think such progressions exist, and that we actually have most
of them set up in advance. It's partly a matter of the logic
level at work: if you can't find the keys to your front door, you
immediately run through a list of "logical" possibilities as to
where they might be. We have search strategies for such
situations, and they involve trying lower-level control processes
in an orderly sequence (actually or in imagination), not random
trial and error. But logic needn't be involved. All that's really
needed is a map, such that as the error increases or lasts for a
longer time (integral control), the map switches the output
connection in an orderly way through a list of choices that
systematically connects the error to the lower-level reference
signal appropriate for that degree of error.
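
A sketch of that duration-dependent switching, with made-up
thresholds on the time integral of the error (nothing here is meant
as more than an illustration of the ordering; the method list is the
stuck-bolt progression above):

    methods = ["more force", "pipe extension", "hammer blows",
               "penetrating fluid", "drill it out"]

    def select_method(error_integral, thresholds=(2.0, 6.0, 15.0, 40.0)):
        # As the integrated (persisting) error grows, the output connection
        # is switched further along the ordered list of methods.
        for method, threshold in zip(methods, thresholds):
            if error_integral < threshold:
                return method
        return methods[-1]

    integral = 0.0
    for second in range(50):
        integral += 1.0                  # a unit error that just won't go away
        if second in (0, 3, 10, 20, 45): # sample a few moments
            print(second, select_method(integral))
    # prints: more force, pipe extension, hammer blows,
    #         penetrating fluid, drill it out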

It would seem that this idea could be explored experimentally. Set
up tasks of varying difficulty, and means of solving them of
varying effectiveness, with an increased cost attached to the
increased effectiveness. Something like "You can rent a
screwdriver for 1 cent per minute, a wrench for 5 cents per
minute, a hammer for 25 cents per minute, and a power drill for
$1 per minute." Then have the person go down a line of slotted-
head bolts, getting 15 cents for each one that is loosened, under
the rule that they all have to be removed in order to get
anything. The bolts, of course, vary greatly in the torque with
which they were tightened.

I'll bet that the person would always start by seeing if the bolt
could be removed with fingers alone, then progress through the
tools in order of effectiveness.

That's the sort of thing you're asking about, isn't it?
-------------------------------------------------------------
Best to all,

Bill P.

[From Dick Robertson (930413)]
Yes, that's the sort of thing I was looking for, though your answer led to
some questions I hadn't even thought of. The map you described would be a
form of memory for systems of the level in question, wouldn't it? Does that
resemble at all the memory map in the little man?
Best, Dick R