[From Bill Powers (920630.2000)]
Martin Taylor (920630.1500) --
Well, good. It was beginning to look as if we would soon run out of things
to disagree about.
Let me acknowledge first several valid objections you have raised to my
concept of reorganization. Yes, there is a problem with getting
reorganization to occur in the right place, and not disturb systems that
are working correctly (or at least usefully). Yes, there is a problem with
the highest level of reference signals if they are not supplied by a
reorganizing system. And (to a much lesser degree) reorganization of
systems that handle discrete variables is a problem -- as everyone who has
tried to build a self-writing program has discovered (no law of small
effects there, as you point out, at least in a digital computer).
There is one way out of the first problem. That is to define the
reorganizing system as a distributed system, a mode of operation of every
ECS, but one that is NOT concerned with the normal business of control.
This would solve the problem of specificity of the locus of reorganization,
in that this distributed system would sense error and act to correct it at
the place where it occurred. I have long held this concept in reserve -- I
think I even mentioned it in BCP -- but as I don't have any idea what this
special mode of operation might be (the Hebbian solution is not yet, to my
mind, worked out well enough to model) I have elected to go with a lumped
model that will work in essentially the same way. There are other possible
solutions to this problem. And there are also reasons for NOT wanting too
great a degree of specificity, and for NOT confining the motives for
reorganization to purely local conditions, as will be seen as we go along.
The second problem, that of the highest level of reference signals, is not
so difficult once the first problem is accepted as being solved (one way or
another). The simplest solution is to say "I don't know" and wait for a
better idea. Next would be to recognize that reference signals are, in
themselves, meaningless; it's the perceptual function that gives a
particular reference signal the significance of "so much of THIS
perception." So a reference signal can simply be a bias in the perceptual
function, or be set to zero (in which case the new perception will be
avoided -- not such an unrealistic deduction!). Another possibility, under
the hypothesis that reference signals are derived from memory, is that the
most predominant experience of a particular perception at the highest level
at any time during development becomes the default reference signal: this
is simply the way the world works, and we defend it even without knowing why
we do. That, too, does not seem so very unrealistic. And finally, we might
suppose that the selection of the highest reference signals is always
random, experimental. In that case we would have to allow the reorganizing
system to select reference signals -- but of course not systematically, and
as I will propose, not directly.
The third problem is the most difficult, and I think I'll pass on it for
now because there are other aspects of reorganization that we need to
discuss. I hope I'm not leaving out a problem that you consider more
important than any mentioned so far.
On p. 188 of BCP there's a diagram of the relationship of the reorganizing
system to the learned hierarchy, which I now wish I had drawn a little
differently. The way it's drawn, one could easily see the reorganizing
system as the highest level in the hierarchy, but for one rather subtle
fact -- too subtle for many people to notice. If you look at the outputs of
this highest-level system, you will see that they affect ALL levels in the
hierarchy, not just the "next highest" level. That's not allowed in a
hierarchical system, because intermediate levels of system will sense the
result as a disturbance and alter their actions to counteract it. Levels
can be skipped going upward, but not downward.
What I had in mind would have been better illustrated by drawing the
sensors, comparator, and output function of the reorganizing system BESIDE
the hierarchy rather than above it, with the output arrows travelling
sideways to reach all parts of it. The main point of the diagram, however,
is clear: the reorganizing system monitors variables that are neither
sensed nor controlled by the neuromotor hierarchy. I'll try to justify
that.
The need for some sort of learning system was always evident, even when the
model was in its birth throes. Like everyone else, I assumed that a system
responsible for growth and development would cause RELEVANT learning to
take place. That is, if the system were hungry, it would learn how to get
something to eat. If it were cold, it would find a way to get warm again.
But the longer I tried to think of a way to make a system like this work,
the more circular the problem seemed. You had to know that food would cure
hunger before you could learn how to look for food. You'd have to know what
food looks like and how it smells and tastes before you had ever seen,
smelled, or tasted it.
Then I realized the cause of the difficulties: I was assuming that there
was some natural obvious imperative relationship between feeling hungry and
eating, between feeling cold and doing something like exercising to keep
warm. I was assuming as a reason for learning the very thing that had to be
learned.
I had read Ashby's _Design for a Brain_ by this time, and (whether he
thought of it this way or not), his concept of superstability, his
"homeostat," showed the answer. The basis for reorganization can't be any
PRODUCT of reorganization. It has to be something completely aside from
WHAT is learned. Ashby called these somethings "critical variables." They
are variables in the system that, if maintained within certain limits
(Ashby's way of seeing it) or close to certain reference levels (my way),
would assure or at least promote a viable organism.
It was then only a short step to realizing that if a hierarchy of control
were ever to come into being, this process of reorganization had to be
operational from birth, and most likely from early in gestation. This meant
that it could not use any perceptions of intensities, sensations,
configurations, transitions .... system concepts, as the nervous system
would come to perceive such things, before the ability to perceive (and, I
would now add, control) such things had been developed. This immediately
ruled out any principle of reorganization that uses any familiar
perceptions or means of control; particularly, any programs, principles, or
system concepts. Those things would EMERGE from reorganization; they could
not cause it. There could be no reorganizing algorithm.
What, then, could possibly be the basis for reorganization? What would
guide it? Part of the answer lies in preorganization of the nervous system
-- organization at least to the degree of making it possible to construct
perceptual functions, comparators, and output functions, or their
equivalents, with the necessary interconnections. But that only provides
the possibility of an adult organization; the details must be left up to
interactions with the environment, or no learning could happen,
particularly not on the massive scale of learning that marks human
development.
Well, I thought, why not pleasure and pain? When we feel pleasure, we feel
no urge to change; when we feel pain, something is wrong and we must learn
to behave differently, or at all. But pleasure and pain are abstractions,
whereas a real organism must operate in terms of variables and processes
relating them. Pleasure and pain are just labels for certain ill-understood
experiences. What underlies them?
That led to the concept of intrinsic variables. By "intrinsic," I mean that
these variables have nothing to do with anything going on outside the
organism. They are concerned with the state of the organism itself. They
might BE variables like blood pressure and CO2 tension, or they might be
signals, biochemical or even neural, standing for such variables. In
general we can speak of them as signals, since the null case is that in
which a variable IS the signal. The important feature of these signals is
that they must be inheritable: they must exist in the organism prior to any
reorganization. They are Ashby's critical variables.
For each intrinsic variable or signal, there is some state that is at the
center of a range that is acceptable for life to continue. That is a
_definition_ of an intrinsic variable and its reference level in this
context, that separates such a variable from all the other variables in a
living system. Ashby's term, critical variable, is probably better, and
maybe after everyone understands what I'm talking about we might think
about adopting it "officially." At any rate, for every intrinsic variable
there is some reference level, so that when all intrinsic variables are at
or near their respective reference levels, the organism is in a viable
state.
Now the outlines of a reorganizing system begin to appear. When intrinsic
variables depart from their normal reference levels, something is seriously
wrong; survival is in question. This is "pain,"
generalized. If the organism is to survive, it must do something.
But in the beginning, it doesn't know how to do ANYTHING. It has no
conception even of an external world. It has no idea of how that external
world bears on its well-being. It has no knowledge of how to affect that
external world even if it has the capacity to do so. Therefore, we have to
conclude, whatever is done about the intrinsic error, it must be done at
random -- without any systematic relationship to the outside world.
We can debate, of course, just how much of a head start evolution actually
provides for this process. In lower organisms, it's quite a large amount.
But I wanted to solve the worst case because that would establish an
important principle; any organization capable of working in the worst case
would naturally work even better with a head start. So we can ignore that
consideration.
In my model of the reorganizing system, I posit intrinsic reference signals
and a comparator for each one. This is a metaphor; in fact, all we need
assume is that there is some reference state established by inheritance,
and that deviations of the intrinsic variables from their respective
reference levels lead to reorganization. We don't need to guess at the
mechanisms or even the locations of these processes -- that kind of
question can be answered only by a kind of data that nobody knows how to
obtain yet. We can only speak of functions, not about how they are carried
out. A control-system diagram illustrates the functions and how they must
be related.
The diagram of the reorganizing system on p. 188 in BCP shows just one
intrinsic variable and perceptual signal, one reference level, one
comparator, and one branching output that mediates the random changes in
the target location, the present or future hierarchy of control systems.
This is a schematic representation of a system that may involve hundreds of
intrinsic variables with specific reference levels, and hundreds or
thousands of pathways that connect error signals (if signals they be) to
the target locations for reorganization. The signals may be purely
biochemical, or some of them may be neural, although not part of the main
hierarchy (the autonomic system and reticular formation may be involved).
In all likelihood, this system that is shown as a single control system
really consists of a multitude of control systems distributed throughout
the body and nervous system, or throughout the biochemical systems which
pervade everything. The geometry is immaterial, as is the nature of the
signals and computers.
Now the crucial part: closing the loop for these intrinsic control systems
that create the organization of behavior.
The outputs of the reorganizing system change neural connections, both as
to existence and as to weights. They cause no neural signals in themselves;
they merely change the properties of the neural nets. In doing so, they can
connect sensory inputs to motor outputs, and thus, in the presence of
stimulation, create a motor response to a sensory stimulus. They don't
create any particular response; they only establish a functional connection
so that motor output bears a relationship to sensory input according to the
amount of input, should any such inputs occur.
The motor outputs affect the world, which affects the sensory inputs. The
only stable configuration is that involving negative feedback. When there
is negative feedback, some part of the world tends to be stabilized, or
even to be brought into a specific state that is resistant to disturbance.
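The stability claim in that paragraph is easy to demonstrate. Below is a
minimal sketch (my illustration, not Powers' simulation code): a single loop
with an identity perceptual function, a comparator, and a proportional output
function acting on one environment variable. The gain, slowing factor, and
disturbance values are arbitrary assumptions.

```python
# Minimal sketch of one control loop acting on environment variable v.
# All numbers are illustrative choices, not from the text.

def run_loop(gain, steps=200):
    reference = 10.0   # the state the loop "wants" to perceive
    v = 0.0            # controlled environment variable
    disturbance = -3.0 # constant push from the environment
    slowing = 0.1      # output integrates slowly, keeping the loop stable
    for _ in range(steps):
        perception = v                  # perceptual function: identity
        error = reference - perception  # comparator
        output = gain * error           # output function
        v += slowing * (output + disturbance)
    return v

stable = run_loop(gain=5.0)    # negative feedback: output opposes the error
runaway = run_loop(gain=-5.0)  # positive feedback: the same loop runs away
```

With positive loop gain the feedback is negative and v settles near the
reference in spite of the constant disturbance; with the sign flipped, the
identical structure diverges. Only the negative-feedback configuration is
stable.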
Of this negative-feedback (or other) relationship between action and
perception, the reorganizing system knows nothing. The entire world of the
reorganizing system consists of intrinsic variables, which relate to the
state of the organism itself, not to the state of the outside world. But
when actions occur, they affect the world, and the world affects the state
of the organism in ways other than sensory. The state of the world affects
intrinsic variables.
Therefore if reorganization results in stabilizing certain aspects of the
external world against disturbance, and brings those aspects to specific
states, the result may be -- MAY be -- to bring some intrinsic variables
closer to their reference states. This is purely a side-effect of what the
new control system is doing. What the control system is sensing and
controlling may have nothing directly to do with the side-effect that is
changing an intrinsic variable. But if the result of sensing and
controlling in that way is to lessen intrinsic error, reorganization will
slow or even cease. And that control system will continue in existence.
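That logic can be put in runnable form. The sketch below uses a
tumble-and-persist ("E. coli") scheme: keep changing the organization in the
current random direction while intrinsic error is falling, pick a new random
direction when it is not, and scale the rate of change by the error itself so
that reorganization slows and effectively ceases as intrinsic error
disappears. The parameter vector, the quadratic error function, and all the
numbers are illustrative assumptions, not anything from the text.

```python
import random

def reorganize(n_params=5, steps=4000, seed=1):
    rng = random.Random(seed)
    # An unknown "viable" organization.  The system never sees this;
    # it only feels the scalar intrinsic error that results.
    target = [rng.uniform(-1.0, 1.0) for _ in range(n_params)]
    w = [0.0] * n_params  # the current organization

    def intrinsic_error(w):
        # Stand-in for the side-effect of the organization on the
        # organism's internal state.
        return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

    direction = [rng.gauss(0.0, 1.0) for _ in range(n_params)]
    last = intrinsic_error(w)
    for _ in range(steps):
        e = intrinsic_error(w)
        if e >= last:
            # Error not improving: tumble to a new random direction.
            direction = [rng.gauss(0.0, 1.0) for _ in range(n_params)]
        last = e
        rate = 0.01 * e  # reorganization slows as intrinsic error shrinks
        w = [wi + rate * di for wi, di in zip(w, direction)]
    return intrinsic_error(w)
```

No step uses any knowledge of what the "right" organization is, yet the final
intrinsic error ends up a small fraction of its starting value, and the rate
of change falls toward zero as it does.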
The question of specificity of reorganization arises. I hope you can see
that the reorganizing system is loosely enough defined to allow for many
possible solutions to this problem. I won't go into them just yet.
Now we can see the basic logic of reorganization. The reorganizing system
is not concerned at all with what control systems exist or what variables
in the outside world are brought under control. All it is concerned with is
keeping intrinsic variables near their reference levels. If there is
intrinsic error, reorganization commences, with the result that perceived
variables are redefined, sensitivities and calibrations change, means of
control change, and the external world is stabilized in a new state. The
only significance of that new state to the reorganizing system is that
intrinsic error may be corrected, putting an end, for a while, to
reorganization.
Given a reorganizing system that works this way, an organism can learn to
survive in environments having almost completely arbitrary properties. A
pigeon with such a reorganizing system can come to maintain its internal
nutritional state near an inherited reference level by pecking on keys
instead of grain, or even by walking in figure-eight patterns -- it can
learn to control variables that have absolutely no natural, logical, or
rational connection to nutrition, and by doing so, can protect its own
nutritional state. It does not need to reason out why performing such acts
is so vital, or even what the connection is with getting food. It doesn't
even have to know that ingesting little bits of brown stuff has the effect
of keeping it from starving.
Reorganization is not an intelligent process; it produces intelligence as a
byproduct of its real function, which is keeping the organism alive and
functioning. It is the most powerful and general control system there can
be, because it assumes nothing about the properties of the outside world.
NOTHING. It does not even know there is an outside world.
If you recall my posts on the origins of life a year or so ago, you'll
realize that this process of reorganization exemplifies exactly the same
process that I proposed as the way the first living molecules came into
being. The main difference is that the variability that creates new
organizations to be retained or evolved away comes not from external forces
but from an active process of random change driven by internal error
signals. The reorganizing system is evolution internalized and made
purposeful.
So it is important that reorganization not be TOO specific. In an arbitrary
environment, there's no telling what control processes must be learned or
modified in order to keep intrinsic error near zero. Reorganization depends
on the ACTUAL effects of controlling certain objective variables, not on
our symbolic understanding of experience, on our theories, or on our
perceptual interpretations.
If there's one primary concept that must be understood to understand my
theory of reorganization, it's that the variables controlled by this
process are completely apart from the variables represented as perceptions
in the learned hierarchy. The learned hierarchy is concerned primarily with
sensory data about an outside world, and with those aspects of physiology
that are represented in the sensory world. The reorganizing system is
concerned about variables in the world beyond the senses -- with the actual
state of the physical organism at levels inaccessible to the central
nervous system.
Some of these variables might actually be in the brain. I have entertained
the idea that because comparison is such a simple process, involving just a
subtraction, comparators might be part of the kit with which we start
building a functioning nervous system. In that case, error signals could
also be intrinsic variables. It is not necessary to learn from experience
that error signals represent some degree of failure to control; large error
signals indicate serious problems, no matter what perception they relate
to. The reorganizing system could monitor error signals in general, en
masse, without any need to know what they mean, and their mere presence at
large magnitudes could be sufficient to cause reorganization to start. This
would satisfy the requirement that intrinsic variables be inheritable. And
one result would be that loss of control could lead to highly localized
reorganization, precisely in the system that has lost control.
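A toy version of this idea (my construction, not a worked-out proposal): each
unit's running average of squared error serves as an intrinsic variable, and
a monitor redraws, at random, the output gain of any unit whose average stays
large. The class name, constants, and redraw rule are all invented for the
illustration.

```python
import random

rng = random.Random(0)

class Unit:
    """One control unit; its own error signal is monitored en masse."""
    def __init__(self, reference):
        self.reference = reference
        self.gain = rng.uniform(-2.0, 2.0)  # may start with the wrong sign
        self.v = 0.0                        # the variable this unit acts on
        self.avg_sq_error = 0.0             # slowly averaged squared error

    def step(self):
        error = self.reference - self.v
        self.v += 0.1 * self.gain * error
        self.avg_sq_error += 0.05 * (error ** 2 - self.avg_sq_error)

units = [Unit(reference=5.0) for _ in range(8)]
for _ in range(20000):
    for u in units:
        u.step()
        # Persistent large error in THIS unit occasionally reorganizes
        # THIS unit only; the monitor never asks what the error means.
        if u.avg_sq_error > 1.0 and rng.random() < 0.1:
            u.gain = rng.uniform(-2.0, 2.0)

settled = sum(abs(u.v - u.reference) < 0.5 for u in units)
```

Units that happen onto a workable gain control well, their averaged error
falls, and they are left alone; reorganization keeps landing only on the
units that have lost control.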
But not all intrinsic variables are in the brain.
Another possibility is that certain kinds of intrinsic variables are
associated with certain levels of control -- in other words, as Mark Olson
suggested, that the levels of the hierarchy might be associated with
different classes of intrinsic variables and reorganizing processes. This
would at least localize reorganization at the right level, if not in the
right system. But I have no idea what these classes of intrinsic variables
might be -- what INTERNAL states would have special significance relative
to the different levels of control and their effects in the OUTSIDE world.
One might suppose that sexual intrinsic variables might require rather
higher levels of control to be acquired and modified, as relationships with
other control systems are involved. Achieving sexual satisfaction can
certainly require walking in figure eights and worse. But examples are hard
to think of.
I have also proposed that reorganization might be directed by awareness,
and entail what we feel as volition. That's pie in the sky right now.
-------------------------------------------------------------------
As to "restructuring," I don't feel that this necessarily relates to what I
think of as reorganization. But I won't fight over this point, or over your
proposals about the sequence in which various aspects of organization might
come into being. The one point on which we apparently have a significant
difference is on the relationship of intrinsic variables to what is
learned. I hope I have explained more clearly what I mean by intrinsic
variables and just why I think that there must be NO relationship to the
learned control systems, save for the one imposed by the natural
environment that relates the state of the world to the basic intrinsic
state of the organism itself. It would seem that you have not understood my
terms here; perhaps now you do.
--------------------------------------------------------------------
Best,
Bill P.