comments via chapeau

[From Bill Powers (960105.1600 MST)]

Martin Taylor 921005 11:30 --

     This is in response to the discussion between Bill and Greg.

October, 1992! Forgive me if it isn't fresh in my mind.

Your 12 possible modes of reorganization would seem to cover the subject
pretty thoroughly. One thing this list suggests to me is that these
modes, in the best of all possible theories, would simply be different
manifestations of a single method of reorganization applied to various
parts of the system. Perhaps the most useful aspect of your writeup is
to show that the subject of reorganization is not simple, and that to
approach it experimentally is going to take some pretty ingenious
strategies. Just working out examples of each type would be a
considerable project.

     Hebbian learning can operate within different kinds of input
     function, provided that the function is such that small changes in
     parameter values cause small changes in the function's behaviour.

This is an important principle; it says that the brain must have some
degree of pre-organization to make reorganization effective. One of the
problems with the old "program-writing-program" studies was the fact
that in a computer, a one-bit change in a program can have any amount of
effect from none to complete destructiveness. The principle of small
effects from small changes is not built into digital computers.
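
As a rough sketch of that contrast (my own illustration in Python, with
an arbitrary toy function; the bit is flipped in a stored number rather
than in program code, but the point is the same):

  import struct

  def output(x, gain):
      # A toy output function whose behavior varies smoothly with its
      # parameter.
      return gain * x

  # A small change in the parameter makes a correspondingly small change:
  print(output(1.0, 0.50))   # 0.5
  print(output(1.0, 0.51))   # 0.51

  # Flip one bit in the stored parameter and the effect can be enormous:
  bits = struct.unpack("<Q", struct.pack("<d", 0.50))[0]
  corrupted, = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << 62)))
  print(output(1.0, corrupted))   # about 9e+307 -- nothing like 0.5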

Skinner, with his method of shaping, was implicitly following this
principle. The organism was never required to make a sudden large change
in organization; instead, the steps were made as small as possible. Of
course, to make this work, the organism must also follow the same
principle: make small changes that alter behavior only slightly in any
dimension.

This post ought to be archived; it will provide some organizing
principles when someone decides to sink some extended effort into
studies of reorganization.

-----------------------------------------------------------------------
Shannon Williams (960105) --

     look at it this way:

       input = 13, output = move right.
       input = 12, output = move right.
       input = 11, output = move right.
       input = 10, output = 0.
       input = 9, output = move left.
       input = 8, output = move left.
       input = 7, output = move left.

This is not a control system, but an input-output system with a bias. As
Rick Marken pointed out, to have a control system you must also express
the input as a function of the output, creating a closed loop. This is
implicit in your next statement,

     The accumulative effect of these input/output is to center you at
     10

Why would that "center you at 10"? That would occur only if moving right
or left affected the input -- if the input were a representation of where
you are. So you have to include the effect of output on input if you're
going to have a control loop.
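
To make that concrete, here is a minimal Python sketch (my own
illustration; purely for the example I assume that "move right" lowers
the sensed input by one and "move left" raises it by one, so that the
output feeds back on the input):

  def output(inp):
      # Shannon's table: above 10 move right, below 10 move left,
      # at 10 do nothing.
      if inp > 10:
          return +1        # "move right"
      if inp < 10:
          return -1        # "move left"
      return 0

  # Close the loop: let each move change the sensed input.
  inp = 13
  for _ in range(6):
      inp -= output(inp)   # assumed feedback: moving right lowers the input
      print(inp)           # 12, 11, 10, 10, 10, 10 -- it "centers at 10"

Delete the feedback line and the table is just a stimulus-response rule;
nothing centers anywhere.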

     When you observe an organism generating output you say that he is
     in error. You watch his environment change, and then you see him
     stop generating output. At this point you say he is not in error.
     Whenever you change his environment, he puts it back. So you call
     this his reference. This is ALL you know.

You're trivializing what is really a carefully worked out procedure for
defining and measuring variables involved in a control process. For
example, we don't just see the organism "producing output." We see
output that is strongly and explicitly connected to the input, affecting
the input by completely visible means. We can apply disturbing
influences to the input, and observe that the output immediately changes
to oppose the effects of the disturbance and keep the input from
changing. By applying different disturbances we can find the state of
the input variable toward which the output always tends to push it. The
organism acts as if this variable were sensed relative to some
particular value of it, so the action depends on departures of the input
from this particular value. We call those departures the "error" because
the organism acts as if it is trying to correct the departures, to
eliminate the difference. And we call the particular value the
"reference level" because reactions to the input are not to the actual
value of the input, but to that value taken with reference to the
particular value that the organism is maintaining by its actions.
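
As a toy sketch of that procedure (my own numbers throughout, with the
organism simulated as a simple integrating control loop whose internal
reference value the "experimenter" never reads directly):

  def settled_input(disturbance, reference=10.0, slowing=0.5, steps=200):
      # A toy organism: its output accumulates so as to oppose departures
      # of the input from a value hidden inside it.
      out = 0.0
      for _ in range(steps):
          inp = out + disturbance        # environment: input = output + d
          out += slowing * (reference - inp)
      return inp

  # Apply different disturbances and note where the input always ends up:
  for d in (-5.0, 0.0, 3.0, 8.0):
      print(d, round(settled_input(d), 3))
  # The input is pushed to about 10 every time; that value is what the
  # text calls the "reference level," and (10 - input) is the "error."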

     Just because you have named one thing 'error' and another thing
     'reference' does not mean that you understand what they are. You
     are in no position to demand that 'error' be defined as you do
     above.

What you're doing is rejecting definitions. There is no question of
understanding what error and reference really "are." They are what we
define them to be. You could call the input variable v and the special
value maintained by the action v0. You could say o for output, and write

o = f(v0 - v)

to describe how output depends on input via the organism. You could look
at the environment and describe how v depends on the output and on
independent influences, disturbances represented by d: v = g(o) + h(d).
All this is just a matter of mathematical notation, describing observed
relationships among variables. If we choose to call v0 - v the "error",
and v0 the "reference level," this has no effect on the basic
observations or the mathematical representation. Those are just words.
There is a 60-year tradition of control theory behind them, which
probably explains why I use them in discourse, but their meaning is in
the mathematics and the observations. If you don't want me to say
"error" I can just say "vo - v." It's the same thing. But why the hell
shouldn't I call that the "error" if I want to?
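
For the simplest linear case the two relations can be solved outright.
A short sympy check (my own worked example, taking f as a plain gain k
and g and h as identities -- assumptions not in the original text):

  from sympy import Eq, solve, symbols

  v, o, v0, d, k = symbols("v o v0 d k")

  # o = f(v0 - v) with f a simple gain k; v = g(o) + h(d), g and h identities.
  sol = solve([Eq(o, k * (v0 - v)), Eq(v, o + d)], [v, o])
  print(sol[v])   # (d + k*v0)/(k + 1): for large k the input stays near v0
  print(sol[o])   # k*(v0 - d)/(k + 1): the output nearly cancels the disturbance

Call the symbols whatever you like; the algebra is unchanged, and the
disturbance reaches the input only through the factor 1/(k + 1).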

     Offer explanations from the world, not from your PCT manual.

That is exactly what we always do, except when proposing a MODEL of the
insides of the organism. The basic principles of control don't require
proposing that model; they are built strictly on observable variables
and relationships. Of course we have the great advantage here that
thousands of actual control systems have been built, so we know how they
are typically organized inside and can propose some simple models that
have a good chance of being maps of real internal organizations. In some
cases the entire loop, including the neural systems and motor outputs,
has been traced out, so we know of complete examples of real control
organizations inside organisms. But we can do without that evidence,
because there are rigorous ways of deducing the overall organism
function for a particular control task, deriving it from observable
relationships among external variables. All that the evidence from
inside does is tell us HOW the organism function is implemented, among
the many ways the same function could be implemented.

I hate to say it to my old pal Shannon, but you're talking through your
hat.
-----------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 960108 11:00]

Bill Powers (960105.1600 MST)

     Your 12 possible modes of reorganization would seem to cover the
     subject pretty thoroughly.

Thank you.

I think it omits, or at least hides within "changes in the output
function," an element that has occasionally been discussed since I first
wrote it--the possible linkages between the output of one ECU and the
"gain control" of another. Those possibilities should be added if your
suggestion is to be followed:

     This post ought to be archived; it will provide some organizing
     principles when someone decides to sink some extended effort into
     studies of reorganization.

I find that I repost it every year or so, but to have it accessible from the
CSG Web page would be better.

-------------------------

     One thing this list suggests to me is that these modes, in the best
     of all possible theories, would simply be different manifestations
     of a single method of reorganization applied to various parts of
     the system.

That's a very intriguing possibility. I'm not at all clear how it could
work, using topologically continuous (small-step, small change) learning
under some conditions and discrete (infinitesimal-step, large change)
learning under other conditions. But non-linear systems often show
bifurcations in their behaviour, in which a small change in a parameter
produces a qualitative change in the system's behaviour at some
specific parameter value. As an example, think of a bowl; a ball in the
bowl rolls to the middle (which is the bottom). But now gradually reduce
the slope of the sides, until the "bowl" becomes a flat sheet, and then
a dome, as in the picture:

     \       /                                    *
      \  *  /        -------*-------            _____
       \___/                                   /     \

Until the "bowl" becomes flat, any ball, placed anywhere on the surface,
goes to the same place. The infinitesimal change between "shallow bowl",
"flat", and "shallow dome" marks a big change; the ball now runs away
to infinity.

This is the behaviour of a loop whose gain goes from negative (control)
through zero (no output) to positive (runaway). (The elements of the loop
may be linear, but the behaviour isn't: the effect of a disturbance has a
factor 1/(1+G), which is far from linear.) The parameter may change
very slightly, but the result changes dramatically. The same kind of thing
may be true of the reorganizing system, and there may exist some unifying
principle such as you suggest.
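
A crude numerical caricature of that sign change (mine, not part of the
original post; the feedback factor here stands in for the loop gain,
with opposing feedback playing the role of G in 1/(1+G)):

  def settle(feedback, disturbance=1.0, steps=60):
      # Each step feeds the variable back on itself and adds the
      # disturbance.
      v = 0.0
      for _ in range(steps):
          v = feedback * v + disturbance
      return v

  print(settle(-0.9))   # opposing feedback: settles near 1/(1 + 0.9), control
  print(settle(0.0))    # no feedback: v is simply the disturbance, 1.0
  print(settle(1.1))    # aiding feedback past 1: v keeps growing, runaway

An infinitesimal step of the feedback factor across the critical value
changes the outcome qualitatively, just as the bowl does when it
flattens into a dome.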

Martin