Reorganization

[From Bill Powers (920630.2000)]

Martin Taylor (920630.1500) --

Well, good. It was beginning to look as if we would soon run out of things
to disagree about.

Let me acknowledge first several valid objections you have raised to my
concept of reorganization. Yes, there is a problem with getting
reorganization to occur in the right place, and not disturb systems that
are working correctly (or at least usefully). Yes, there is a problem with
the highest level of reference signals if they are not supplied by a
reorganizing system. And (to a much lesser degree) reorganization of
systems that handle discrete variables is a problem -- as everyone who has
tried to build a self-writing program has discovered (no law of small
effects there, as you point out, at least in a digital computer).

There is one way out of the first problem. That is to define the
reorganizing system as a distributed system, a mode of operation of every
ECS, but one that is NOT concerned with the normal business of control.
This would solve the problem of specificity of the locus of reorganization,
in that this distributed system would sense error and act to correct it at
the place where it occurred. I have long held this concept in reserve -- I
think I even mentioned it in BCP -- but as I don't have any idea what this
special mode of operation might be (the Hebbian solution is not yet, to my
mind, worked out well enough to model) I have elected to go with a lumped
model that will work in essentially the same way. There are other possible
solutions to this problem. And there are also reasons for NOT wanting too
great a degree of specificity, and for NOT confining the motives for
reorganization to purely local conditions, as will be seen as we go along.

The second problem, that of the highest level of reference signals, is not
so difficult once the first problem is accepted as being solved (one way or
another). The simplest solution is to say "I don't know" and wait for a
better idea. Next would be to recognize that reference signals are, in
themselves, meaningless; it's the perceptual function that gives a
particular reference signal the significance of "so much of THIS
perception." So a reference signal can simply be a bias in the perceptual
function, or be set to zero (in which case the new perception will be
avoided -- not such an unrealistic deduction!). Another possibility, under
the hypothesis that reference signals are derived from memory, is that the
most predominant experience of a particular perception at the highest level
at any time during development becomes the default reference signal: this
is simply the way the world works, and we defend it even without knowing why we
do. That, too, does not seem so very unrealistic. And finally, we might
suppose that the selection of the highest reference signals is always
random, experimental. In that case we would have to allow the reorganizing
system to select reference signals -- but of course not systematically, and
as I will propose, not directly.

The third problem is the most difficult, and I think I'll pass on it for
now because there are other aspects of reorganization that we need to
discuss. I hope I'm not leaving out a problem that you consider more
important than any mentioned so far.

On p. 188 of BCP there's a diagram of the relationship of the reorganizing
system to the learned hierarchy, which I now wish I had drawn a little
differently. The way it's drawn, one could easily see the reorganizing
system as the highest level in the hierarchy, but for one rather subtle
fact -- too subtle for many people to notice. If you look at the outputs of
this highest-level system, you will see that they affect ALL levels in the
hierarchy, not just the "next highest" level. That's not allowed in a
hierarchical system, because intermediate levels of system will sense the
result as a disturbance and alter their actions to counteract it. Levels
can be skipped going upward, but not downward.

What I had in mind would have been better illustrated by drawing the
sensors, comparator, and output function of the reorganizing system BESIDE
the hierarchy rather than above it, with the output arrows travelling
sideways to reach all parts of it. The main point of the diagram, however,
is clear: the reorganizing system monitors variables that are neither
sensed nor controlled by the neuromotor hierarchy. I'll try to justify
that.

The need for some sort of learning system was always evident, even when the
model was in its birth throes. Like everyone else, I assumed that a system
responsible for growth and development would cause RELEVANT learning to
take place. That is, if the system were hungry, it would learn how to get
something to eat. If it were cold, it would find a way to get warm again.
But the longer I tried to think of a way to make a system like this work,
the more circular the problem seemed. You had to know that food would cure
hunger before you could learn how to look for food. You'd have to know what
food looks like and how it smells and tastes before you had ever seen,
smelled, or tasted it.

Then I realized the cause of the difficulties: I was assuming that there
was some natural obvious imperative relationship between feeling hungry and
eating, between feeling cold and doing something like exercising to keep
warm. I was assuming as a reason for learning the very thing that had to be
learned.

I had read Ashby's _Design for a Brain_ by this time, and (whether he
thought of it this way or not), his concept of superstability, his
"homeostat," showed the answer. The basis for reorganization can't be any
PRODUCT of reorganization. It has to be something completely aside from
WHAT is learned. Ashby called these somethings "critical variables." They
are variables in the system that, if maintained within certain limits
(Ashby's way of seeing it) or close to certain reference levels (my way),
would assure or at least promote a viable organism.

It was then only a short step to realizing that if a hierarchy of control
were ever to come into being, this process of reorganization had to be
operational from birth, and most likely from early in gestation. This meant
that it could not use any perceptions of intensities, sensations,
configurations, transitions .... system concepts, as the nervous system
would come to perceive such things, before the ability to perceive (and, I
would now add, control) such things had been developed. This immediately
ruled out any principle of reorganization that uses any familiar
perceptions or means of control; particularly, any programs, principles, or
system concepts. Those things would EMERGE from reorganization; they could
not cause it. There could be no reorganizing algorithm.

What, then, could possibly be the basis for reorganization? What would
guide it? Part of the answer lies in preorganization of the nervous system
-- organization at least to the degree of making it possible to construct
perceptual functions, comparators, and output functions, or their
equivalents, with the necessary interconnections. But that only provides
the possibility of an adult organization; the details must be left up to
interactions with the environment, or no learning could happen,
particularly not on the massive scale of learning that marks human
development.

Well, I thought, why not pleasure and pain? When we feel pleasure, we feel
no urge to change; when we feel pain, something is wrong and we must learn
to behave differently, or at all. But pleasure and pain are abstractions,
whereas a real organism must operate in terms of variables and processes
relating them. Pleasure and pain are just labels for certain ill-understood
experiences. What underlies them?

That led to the concept of intrinsic variables. By "intrinsic," I mean that
these variables have nothing to do with anything going on outside the
organism. They are concerned with the state of the organism itself. They
might BE variables like blood pressure and CO2 tension, or they might be
signals, biochemical or even neural, standing for such variables. In
general we can speak of them as signals, since the null case is that in
which a variable IS the signal. The important feature of these signals is
that they must be inheritable: they must exist in the organism prior to any
reorganization. They are Ashby's critical variables.

For each intrinsic variable or signal, there is some state that is at the
center of a range that is acceptable for life to continue. That is a
_definition_ of an intrinsic variable and its reference level in this
context, that separates such a variable from all the other variables in a
living system. Ashby's term, critical variable, is probably better, and
maybe after everyone understands what I'm talking about we might think
about adopting it "officially." At any rate, for every intrinsic variable
there is some reference level, so that when all intrinsic variables are at
or near their respective reference levels, the organism is in a viable
state.

Now the outlines of a reorganizing system begin to appear. When intrinsic
variables depart from their normal reference levels, something is seriously
wrong; survival is in question. This is "pain,"
generalized. If the organism is to survive, it must do something.

But in the beginning, it doesn't know how to do ANYTHING. It has no
conception even of an external world. It has no idea of how that external
world bears on its well-being. It has no knowledge of how to affect that
external world even if it has the capacity to do so. Therefore, we have to
conclude, whatever is done about the intrinsic error, it must be done at
random -- without any systematic relationship to the outside world.

We can debate, of course, just how much of a head-start evolution actually
provides for this process. In lower organisms, it's quite a large amount.
But I wanted to solve the worst case because that would establish an
important principle; any organization capable of working in the worst case
would naturally work even better with a head start. So we can ignore that
consideration.

In my model of the reorganizing system, I posit intrinsic reference signals
and a comparator for each one. This is a metaphor; in fact, all we need
assume is that there is some reference state established by inheritance,
and that deviations of the intrinsic variables from their respective
reference levels lead to reorganization. We don't need to guess at the
mechanisms or even the locations of these processes -- that kind of
question can be answered only by a kind of data that nobody knows how to
obtain yet. We can only speak of functions, not about how they are carried
out. A control-system diagram illustrates the functions and how they must
be related.

The diagram of the reorganizing system on p. 188 in BCP shows just one
intrinsic variable and perceptual signal, one reference level, one
comparator, and one branching output that mediates the random changes in
the target location, the present or future hierarchy of control systems.
This is a schematic representation of a system that may involve hundreds of
intrinsic variables with specific reference levels, and hundreds or
thousands of pathways that connect error signals (if signals they be) to
the target locations for reorganization. The signals may be purely
biochemical, or some of them may be neural, although not part of the main
hierarchy (the autonomic system and reticular formation may be involved).
In all likelihood, this system that is shown as a single control system
really consists of a multitude of control systems distributed throughout
the body and nervous system, or throughout the biochemical systems which
pervade everything. The geometry is immaterial, as is the nature of the
signals and computers.

Now the crucial part: closing the loop for these intrinsic control systems
that create the organization of behavior.

The outputs of the reorganizing system change neural connections, both as
to existence and as to weights. They cause no neural signals in themselves;
they merely change the properties of the neural nets. In doing so, they can
connect sensory inputs to motor outputs, and thus, in the presence of
stimulation, create a motor response to a sensory stimulus. They don't
create any particular response; they only establish a functional connection
so that motor output bears a relationship to sensory input according to the
amount of input, should any such inputs occur.

The motor outputs affect the world, which affects the sensory inputs. The
only stable configuration is that involving negative feedback. When there
is negative feedback, some part of the world tends to be stabilized, or
even to be brought into a specific state that is resistant to disturbance.

Of this negative-feedback (or other) relationship between action and
perception, the reorganizing system knows nothing. The entire world of the
reorganizing system consists of intrinsic variables, which relate to the
state of the organism itself, not to the state of the outside world. But
when actions occur, they affect the world, and the world affects the state
of the organism in ways other than sensory. The state of the world affects
intrinsic variables.

Therefore if reorganization results in stabilizing certain aspects of the
external world against disturbance, and brings those aspects to specific
states, the result may be -- MAY be -- to bring some intrinsic variables
closer to their reference states. This is purely a side-effect of what the
new control system is doing. What the control system is sensing and
controlling may have nothing directly to do with the side-effect that is
changing an intrinsic variable. But if the result of sensing and
controlling in that way is to lessen intrinsic error, reorganization will
slow or even cease. And that control system will continue in existence.

The question of specificity of reorganization arises. I hope you can see
that the reorganizing system is loosely enough defined to allow for many
possible solutions to this problem. I won't go into them just yet.

Now we can see the basic logic of reorganization. The reorganizing system
is not concerned at all with what control systems exist or what variables
in the outside world are brought under control. All it is concerned with is
keeping intrinsic variables near their reference levels. If there is
intrinsic error, reorganization commences, with the result that perceived
variables are redefined, sensitivities and calibrations change, means of
control change, and the external world is stabilized in a new state. The
only significance of that new state to the reorganizing system is that
intrinsic error may be corrected, putting an end, for a while, to
reorganization.
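In outline, and only as a toy illustration (the single scalar error, the names,
and the numbers are all invented for the sketch, not part of the model), one
reorganizing step looks like this:

    import random

    def reorganize_step(weights, intrinsic_error, threshold=0.1, step=0.1):
        # The reorganizing system never inspects what is being perceived or
        # controlled.  It compares intrinsic variables with inherited
        # reference levels (summarized here as one scalar error) and, while
        # that error exceeds the threshold, nudges a randomly chosen
        # connection weight somewhere in the hierarchy.
        if intrinsic_error > threshold:
            i = random.randrange(len(weights))
            weights[i] += random.uniform(-step, step)
        # Below threshold nothing changes: whatever organization produced the
        # low intrinsic error simply persists.
        return weights

The point of the sketch is only that the changes are random and that their
rate is governed by intrinsic error, nothing else.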

Given a reorganizing system that works this way, an organism can learn to
survive in environments having almost completely arbitrary properties. A
pigeon with such a reorganizing system can come to maintain its internal
nutritional state near an inherited reference level by pecking on keys
instead of grain, or even by walking in figure-eight patterns -- it can
learn to control variables that have absolutely no natural, logical, or
rational connection to nutrition, and by doing so, can protect its own
nutritional state. It does not need to reason out why performing such acts
is so vital, or even what the connection is with getting food. It doesn't
even have to know that ingesting little bits of brown stuff has the effect
of keeping it from starving.

Reorganization is not an intelligent process; it produces intelligence as a
byproduct of its real function, which is keeping the organism alive and
functioning. It is the most powerful and general control system there can
be, because it assumes nothing about the properties of the outside world.
NOTHING. It does not even know there is an outside world.

If you recall my posts on the origins of life a year or so ago, you'll
realize that this process of reorganization exemplifies exactly the same
process that I proposed as the way the first living molecules came into
being. The main difference is that the variability that creates new
organizations to be retained or evolved away comes not from external forces
but from an active process of random change driven by internal error
signals. The reorganizing system is evolution internalized and made
purposeful.

So it is important that reorganization not be TOO specific. In an arbitrary
environment, there's no telling what control processes must be learned or
modified in order to keep intrinsic error near zero. Reorganization depends
on the ACTUAL effects of controlling certain objective variables, not on
our symbolic understanding of experience, on our theories, or on our
perceptual interpretations.

If there's one primary concept that must be understood to understand my
theory of reorganization, it's that the variables controlled by this
process are completely apart from the variables represented as perceptions
in the learned hierarchy. The learned hierarchy is concerned primarily with
sensory data about an outside world, and about those aspects of physiology
that are represented in the sensory world. The reorganizing system is
concerned about variables in the world beyond the senses -- with the actual
state of the physical organism at levels inaccessible to the central
nervous system.

Some of these variables might actually be in the brain. I have entertained
the idea that because comparison is such a simple process, involving just a
subtraction, comparators might be part of the kit with which we start
building a functioning nervous system. In that case, error signals could
also be intrinsic variables. It is not necessary to learn from experience
that error signals represent some degree of failure to control; large error
signals indicate serious problems, no matter what perception they relate
to. The reorganizing system could monitor error signals in general, en
masse, without any need to know what they mean, and their mere presence at
large magnitudes could be sufficient to cause reorganization to start. This
would satisfy the requirement that intrinsic variables be inheritable. And
one result would be that loss of control could lead to highly localized
reorganization, precisely in the system that has lost control.

But not all intrinsic variables are in the brain.

Another possibility is that certain kinds of intrinsic variables are
associated with certain levels of control -- in other words, as Mark Olson
suggested, that the levels of the hierarchy might be associated with
different classes of intrinsic variables and reorganizing processes. This
would at least localize reorganization at the right level, if not in the
right system. But I have no idea what these classes of intrinsic variables
might be -- what INTERNAL states would have special significance relative
to the different levels of control and their effects in the OUTSIDE world.
One might suppose that sexual intrinsic variables might require rather
higher levels of control to be acquired and modified, as relationships with
other control systems are involved. Achieving sexual satisfaction can
certainly require walking in figure eights and worse. But examples are hard
to think of.

I have also proposed that reorganization might be directed by awareness,
and entail what we feel as volition. That's pie in the sky right now.


-------------------------------------------------------------------
As to "restructuring," I don't feel that this necessarily relates to what I
think of as reorganization. But I won't fight over this point, or over your
proposals about the sequence in which various aspects of organization might
come into being. The one point on which we apparently have a significant
difference is on the relationship of intrinsic variables to what is
learned. I hope I have explained more clearly what I mean by intrinsic
variables and just why I think that there must be NO relationship to the
learned control systems, save for the one imposed by the natural
environment that relates the state of the world to the basic intrinsic
state of the organism itself. It would seem that you have not understood my
terms here; perhaps now you do.
--------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 930616 18:40]

Going back deep into history to reopen a discussion never completed...

After yesterday's power failure, which inhibited (and lost) my information
theory posting, I have started going back through postings I saved for
later consideration. Here's the earliest.

(Bill Powers 920704.0800) to me

RE: Reorganization

I still like the idea that one is inserting levels,

I don't, particularly. There are reasons for which I like it, but more for
which I don't. How do you open up the connections from a higher to a lower
system to insert a complete control system with all its connections to and
from both the higher and the lower systems? This idea seems to me to entail
enormous difficulties, whereas building from the bottom up eliminates those
particular problems completely.
...
This is, in fact, how I think higher levels of control come about.
Situations arise in which existing control systems can't correct intrinsic
error well enough any more, even though they're all keeping their own
errors small most of the time. They're not controlling the right ASPECT of
the environment. So a new level of control begins to form, setting the
formerly fixed or randomly varying reference signals in a systematic way to
control new perceptual variables that are functions of the old ones. This
adding of levels continues as long as there is a need and as long as there
are available neural components of the right kind still unorganized. You
arrive at the top level when you run out of new layers of neurons that
permit new types of control.

After nearly a year, I still like the insertion of new ECSs into the
hierarchy in arbitrary places. In particular, I like it between a
fixed "top level" consisting of the control of intrinsic (largely chemical)
variables and a growing hierarchy of controlled perceptual variables.
What I realize is that since Bill wrote that posting, I have more or
less absorbed into my understanding of PCT the view that the problem Bill
raised does not actually arise. We are, in fact, intending to incorporate this
method of learning into our syntax control system (though not in its
initial incarnation). I think (and hope) that the scheme fulfils what
Bill says in the second quoted paragraph above.

If I understand correctly, Bill (I address this to you specifically),
level-jumping connections of perceptual signals pose no problems.

Perceptual signals can go from anywhere to anywhere, provided that
they are useful (possibly with the caveat that they must go upward
in the hierarchy). On the other hand, output signals cannot usefully
jump levels, even though they may not be prohibited from doing so.
This restriction occurs because a level-jumping output would be affecting
the same variables (probably the reference signals of low-level ECSs)
that would be affected by intervening ECSs, and control by means of those
intervening ECSs would be more effective than affecting the low-level
references directly.

I think I have represented Bill P's position as best I remember it. In
any case, I will take the foregoing as a reasonable position.

Now let us consider inserting a new ECS between two cleanly separated
levels, M and N (N = M+1). The new ECS does not have a level designation.
I imagine this new ECS as receiving its reference signal from the outputs
of some of the level N ECSs, and its sensory signals from some of the
level M perceptual signals as well as (possibly) level M-1 perceptual
signals.

The new ECS has to be considered as being at level Q, which is M+1 and N-1.
In other words, it disturbs the labelling of levels M and N, which are no
longer consecutive. Now, N = M+2, at least in those parts of the hierarchy
to which the new ECS is connected. At this point, we have level jumping
signals in both directions between levels M and N, and for some of those
down-going signals, reference levels at M are set both from N and from Q.

What happens? Reorganization, in the sense of both smooth modification
of perceptual (and reference) weights, and in the sense of reconnection
within the hierarchy. At least that is what will happen if the new ECS
sets up conflict somewhere, such that a sustained error is introduced
into part of the hierarchy. If it doesn't, then it is controlling something
orthogonal to whatever was being controlled before, and the whole
hierarchy has become more capable.

Before considering the probable kind of reorganization, reconsider the
reason for having levels in the first place. It is the non-linearity of
the overall control system, which includes the part that goes through the
world. If the world's responses to outputs were linear, a one-level
system of orthogonal ECSs would be perfectly adequate. But it is not,
and so we need higher-level ECSs to shift reference levels according to
the current state of the environment. And I speculate that we probably
need more of them at any level than we have effectors at the interface
to the environment.

Now return to our new ECS at level Q, which is now assumed to induce
some conflict in the hierarchy (the contrary having been considered above).
The existence of this new ECS could be valuable to the hierarchy only if
it allowed better control of the level N perceptual signals. It can do
this if it represents a perceptual signal that coordinates the actions
of level M ECSs in a way that corresponds to some nonlinearity of the
world--a new CEV that is usefully controlled.

One of the accepted ways an ECS "learns" is by smooth modification of
its Perceptual Input Function in response to its own failure to control.
Bill has said that he has successfully simulated this as one aspect of
reorganization. If, with its (by assumption) randomly selected input
and output connections, it does not attain control, it will be likely
to change its PIF until it finds something that it can control. Given
that its reference signals are set from level N, that something will
be useful to level N ECSs. (Presumably it will also be changing its
output connection signs and/or weights at the same time, to achieve
control, affecting how it interacts with level M).
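A minimal sketch of that kind of PIF adjustment, under assumptions of my own
(weights as a plain list, error as a running average of this ECS's own
|reference - perception|); it is only an illustration of error-driven random
change, not the simulation Bill mentions:

    import random

    def adjust_pif(weights, mean_abs_error, tolerance=0.05, scale=0.02):
        # Smooth, error-driven change of one ECS's Perceptual Input Function:
        # the worse this ECS's own control, the larger the random nudges to
        # its input weights.  When control is good, the weights are left alone.
        excess = max(0.0, mean_abs_error - tolerance)
        return [w + random.gauss(0.0, scale * excess) for w in weights]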

If the new ECS cannot achieve control, the question remains of whether
ECSs in the hierarchy die. Neurons, or at least neuronal connections,
seem to, especially early in life, as I understand it. I see no reason
why useless ECSs (or at least useless connections) should not also die.
If the new ECS at level Q attains good control, Bill has argued that
the level-jumping output connections from N down to M would become
less useful, and might die. If many new ECSs came into being at level Q,
then most of the level jumping connections would be reduced in usefulness
and would probably die. Eventually, level Q might become a proper level
in the hierarchy, rather than a single anomalous ECS dropped in an
awkward place.

Now more practically, how would a new ECS at level Q be constructed? What
we propose in our model is to use a kind of genetic algorithm, in which
the connections of the new level Q ECS to level N are copied from subsets
of the connections of several level M ECSs, and its connections to level M
are made to those same ECSs from which it borrowed the level N connections.
We may change this proposal, but that's the present idea.
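As a sketch of that construction, with a data layout I am assuming purely for
illustration (it is not the model's actual representation):

    import random

    def spawn_level_q_ecs(level_m_units, n_parents=3):
        # Each level M unit is assumed to be a dict with a list of its upward
        # connections ("to_level_n") and a handle on its perceptual signal.
        parents = random.sample(level_m_units, min(n_parents, len(level_m_units)))
        reference_links, inputs = [], []
        for p in parents:
            if p["to_level_n"]:
                k = max(1, len(p["to_level_n"]) // 2)
                reference_links.extend(random.sample(p["to_level_n"], k))
            inputs.append(p["perceptual_signal"])
        # The new ECS gets its reference from subsets of the parents' level N
        # connections and reads its perceptions from those same parents.
        return {"to_level_n": reference_links,
                "inputs": inputs,
                "input_weights": [0.0] * len(inputs)}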

(Bill)

There's some developmental evidence that, if interpreted in a certain way,
indicates that you may be right. For now, however, I don't see how you CAN
be right.

Does this help?

Martin

[Martin Taylor 920701 22:40]
(Bill Powers 920630.2000)

Sorry to post another nearly 400-line posting on the same day. But here it
is, anyway.

Well, good. It was beginning to look as if we would soon run out of things
to disagree about.

Oh, I wouldn't worry too much about that. We can always talk about
statistics. But I'd like to get as big a foundation as possible of things we
agree solidly about, to move on to more subtle issues. What I would like to
sort out about PCT is what must be true, what cannot be true, and what may be
true if we allow some other assumptions. It's what you called "truthsaying,"
I think. No disagreement in the first two classes should be allowed to stand,
at least not in the foundational structure.

This is a long comment on your long posting about my discussion of learning.
For the most part I agree with what you say. You write so clearly that it is
possible also to find where and why I disagree with some bits. At the end of
your posting, you say:

The one point on which we apparently have a significant
difference is on the relationship of intrinsic variables to what is
learned. I hope I have explained more clearly what I mean by intrinsic
variables and just why I think that there must be NO relationship to the
learned control systems, save for the one imposed by the natural
environment that relates the state of the world to the basic intrinsic
state of the organism itself. It would seem that you have not understood my
terms here; perhaps now you do.

I don't think I had misunderstood your terms, but I did not have as clear an
idea of how you see the reorganization system as I now do. I stated the
intrinsic variables as being those dealing with body chemistry. That was
sloppy. I think you put it much better:

By "intrinsic," I mean that
these variables have nothing to do with anything going on outside the
organism. They are concerned with the state of the organism itself. They
might BE variables like blood pressure and CO2 tension, or they might be
signals, biochemical or even neural, standing for such variables.
...
For each intrinsic variable or signal, there is some state that is at the
center of a range that is acceptable for life to continue. That is a
_definition_ of an intrinsic variable and its reference level in this
context, that separates such a variable from all the other variables in a
living system.

Nevertheless, your description captures more or less what I had in mind.

You presented very clearly why you think that there should be a reorganizing
system outside the sensory-motor control hierarchy, and why the intrinsic
variables should be represented only in that separate system. But (so far) I
don't buy the argument as truthsaying, and I am not at all sure that the end
product of the argument, your separate reorganization system, is even
plausible. Maybe you can convince me.

[Intrinsic variables]
are variables in the system that, if maintained within certain limits
(Ashby's way of seeing it) or close to certain reference levels (my way),
would assure or at least promote a viable organism.

It was then only a short step to realizing that if a hierarchy of control
were ever to come into being, this process of reorganization had to be
operational from birth, and most likely from early in gestation. This meant
that it could not use any perceptions of intensities, sensations,
configurations, transitions .... system concepts, as the nervous system
would come to perceive such things, before the ability to perceive (and, I
would now add, control) such things had been developed. This immediately
ruled out any principle of reorganization that uses any familiar
perceptions or means of control; particularly, any programs, principles, or
system concepts. Those things would EMERGE from reorganization; they could
not cause it. There could be no reorganizing algorithm.

Total agreement so far, except for a possible question about what you mean by
"algorithm." Surely for reorganization to occur, it must follow some
discernible rules? You describe such rules later (random reconnection, for
example), so you must mean something different from what that sentence seems
to mean.

Now the outlines of a reorganizing system begin to appear. When intrinsic
variables depart from their normal reference levels, something is seriously
wrong; survival is in question. This is "pain,"
generalized. If the organism is to survive, it must do something.

But in the beginning, it doesn't know how to do ANYTHING. It has no
conception even of an external world. It has no idea of how that external
world bears on its well-being. It has no knowledge of how to affect that
external world even if it has the capacity to do so. Therefore, we have to
conclude, whatever is done about the intrinsic error, it must be done at
random -- without any systematic relationship to the outside world.

We can debate, of course, just how much of a head-start evolution actually
provides for this process. In lower organisms, it's quite a large amount.
But I wanted to solve the worst case because that would establish an
important principle; any organization capable of working in the worst case
would naturally work even better with a head start. So we can ignore that
consideration.

Again, total agreement, except that I might be prepared to argue that in
higher organisms, evolution provides even more of a head start. But going for
the worst case is fine. We started there. I made that presumption in the
posting that evoked yours, by questioning where the first ECSs came from.

In my model of the reorganizing system, I posit intrinsic reference signals
and a comparator for each one. This is a metaphor; in fact, all we need
assume is that there is some reference state established by inheritance,
and that deviations of the intrinsic variables from their respective
reference levels lead to reorganization. We don't need to guess at the
mechanisms or even the locations of these processes -- that kind of
question can be answered only by a kind of data that nobody knows how to
obtain yet. We can only speak of functions, not about how they are carried
out. A control-system diagram illustrates the functions and how they must
be related.

Fair enough, but the connotations here are beginning to bring us onto
treacherous ground. You are beginning to assume that the control systems for
intrinsic variables are eventually going to be found to be organized in a
system separate from the system that interacts with the outer world. You
haven't said it yet, but the end of the paragraph is stated in a way that
leads one's thinking in that direction.

Here, I'd like to do a little extrapolation of the discussion before rejoining
yours. Let us assume that there develops (exists?) a control hierarchy for
intrinsic variables. Does this not have a formal structure like that of the
familiar sensory-motor hierarchy, in that higher-level perceptions based on
the values of intrinsic variables are controlled through reference levels
supplied to lower level EICSs (Elementary Intrinsic Control Systems)? If so,
then are these higher-level EICSs not controlling percepts that correspond to
some Complex Internal Variable (CIV), just as higher level ECSs control
perceptions of Complex External (Environmental) Variables? If there exists
such a hierarchy how is it reorganized? If no such hierarchy exists, how are
errors in particular intrinsic variables going to be organized so that they
target parts of the sensory-motor control hierarchy relevant to them? More on
this after the next segment of your posting.

The outputs of the reorganizing system change neural connections, both as
to existence and as to weights. They cause no neural signals in themselves;
they merely change the properties of the neural nets. In doing so, they can
connect sensory inputs to motor outputs, and thus, in the presence of
stimulation, create a motor response to a sensory stimulus. They don't
create any particular response; they only establish a functional connection
so that motor output bears a relationship to sensory input according to the
amount of input, should any such inputs occur.

[...big chunk omitted]
The state of the world affects intrinsic variables.

Therefore if reorganization results in stabilizing certain aspects of the
external world against disturbance, and brings those aspects to specific
states, the result may be -- MAY be -- to bring some intrinsic variables
closer to their reference states. This is purely a side-effect of what the
new control system is doing. What the control system is sensing and
controlling may have nothing directly to do with the side-effect that is
changing an intrinsic variable. But if the result of sensing and
controlling in that way is to lessen intrinsic error, reorganization will
slow or even cease. And that control system will continue in existence.

I assume that the beginning of this passage refers to the creation of an ECS,
since the provision of an S-R connection would normally be useless. But there
is a third connection to be made if this is so--to the reference input of the
new ECS. Where does that come from? Is this new ECS a top-level one, in
which case the reference signal is (so far) undetermined? Or is it
interpolated in an existing hierarchy?

[The intrinsic control system is] a system that may involve hundreds of
intrinsic variables with specific reference levels, and hundreds or
thousands of pathways that connect error signals (if signals they be) to
the target locations for reorganization.

In light of your insistence on rigorous ignorance and random operation, that
word "target" is interesting. I can see that if a sensory-motor hierarchy
exists, but is flailing about because all its connections are as yet randomly
made, then a random connection of the outputs of the reorganizing system would
(eventually) develop stable control, and the reorganizing system would then be
found to be targeted at relevant locations in the sensory-motor hierarchy.
But an organism that contained such a powerful sensory-motor hierarchy
initially would probably be dead long before the reorganization had taken
useful effect. So one must assume that infants do not have a well developed
control hierarchy, or that if they do, then the gain on most ECSs is near
zero. It is not by accident that the young of all species are either
incompetent or already organized with an effective sensory-motor control
hierarchy. If they do not have the control hierarchy, then the reorganizing
system must develop new ECSs, with the issues as raised above. If they do
have an existing hierarchy, then the reorganizing system cannot be randomly
linked to it. So, your argument seems to lead to the situation that you don't
want to allow: the reorganizing system DOES know about specific aspects of the
control hierarchy, whether the two hierarchies are separate or no, or even
whether the reorganizing system is a hierarchy at all.

Now we can see the basic logic of reorganization. The reorganizing system
is not concerned at all with what control systems exist or what variables
in the outside world are brought under control. All it is concerned with is
keeping intrinsic variables near their reference levels. If there is
intrinsic error, reorganization commences, with the result that perceived
variables are redefined, sensitivities and calibrations change, means of
control change, and the external world is stabilized in a new state. The
only significance of that new state to the reorganizing system is that
intrinsic error may be corrected, putting an end, for a while, to
reorganization.

OK. This paragraph rings true, taken on its own. But it does not need the
underlying idea that the reorganizing system is separate from the sensory-
motor hierarchy. If you substitute "controlled percepts" for "intrinsic
variables" in this paragraph, very little need change. Indeed, in
the parts quoted before, intrinsic variables ARE controlled percepts, but not
in the sensory-motor hierarchy. They are controlled through modifications of
the organism's internal environment, not the external world.

Given a reorganizing system that works this way, an organism can learn to
survive in environments having almost completely arbitrary properties. A
pigeon with such a reorganizing system can come to maintain its internal
nutritional state near an inherited reference level by pecking on keys
instead of grain, or even by walking in figure-eight patterns -- it can
learn to control variables that have absolutely no natural, logical, or
rational connection to nutrition, and by doing so, can protect its own
nutritional state. It does not need to reason out why performing such acts
is so vital, or even what the connection is with getting food. It doesn't
even have to know that ingesting little bits of brown stuff has the effect
of keeping it from starving.

Agreed. Important.

If you recall my posts on the origins of life a year or so ago, you'll
realize that this process of reorganization exemplifies exactly the same
process that I proposed as the way the first living molecules came into
being. The main difference is that the variability that creates new
organizations to be retained or evolved away comes not from external forces
but from an active process of random change driven by internal error
signals. The reorganizing system is evolution internalized and made
purposeful.

You mean this? (Powers 910731, quoting me in the first paragraph)

<>What I was trying to get across was that if part of a self-organized
<>feedback system happened one day to evolve so that its self-corrective
<>feedback was modified in response to some environmental disturbance that
<>it could not previously survive, then it would be more likely to exist
<>into a farther future. It would also be a rudimentary control system,
<>with an externally settable reference. That reference would itself be
<>part of a non-control-system stabilized structure, but could later
<>become incorporated in a higher-level control system that might evolve.
<>Then we would have a two-level hierarchic control system. That, in its
<>turn, could evolve a higher layer, and each such layer would contribute
<>to the apparent stability of the entire system. But always at the top
<>there would be a non-control-system feedback complex of some degree of
<>apparent stability.
<
<This is very close to a proposition about the origins of life that I
<posted before you got onto the net. I started a little farther back than
<you do. Suppose you have a chemical reaction going on that is forming
<complex molecules, and that these molecules, during breakdown, interact
<with their substrate so as to influence concentrations of chemicals *on
<which formation of that kind of molecule depends*. This is feedback.
<Obviously, NEGATIVE feedback would be highly favored; modifications of
<the complex molecules that resulted in negative feedback effects on the
<replication-critical substances would lead to increased relative
<concentrations of those molecules. Where feedback is positive, that
<population of molecules would quickly disappear (changes in the critical
<substances would be amplified instead of opposed). This is strictly
<Darwinian evolution; nothing fancy.
<
<[....]
<
<There would, of course, be an unstoppable tendency for this kind of
<negative feedback to become more and more effective and thus more and
<more prevalent. The appearance of catalysts, enzymes, introduces
<amplification that vastly improves the tightness of feedback control.
<Somewhere in here, before or after the enzymes appear, there must also be
<the first appearance of reorganization (and here, Prigogine's concepts
<may glancingly intersect with mine). The system, which must be complex by
<now, becomes capable of reacting to chronic error by *causing* random
<changes in the molecular structure, or the structure of molecular
<relationships. The changes are random, but the selection process is not:
<the rate of random change drops to zero if and only if the error is
<corrected by the new relationship of the molecule/structure to the
<substrate environment. So we have the effect of directed evolution
<without any teleology and without any external direction. This introduces
<a principle of evolutionary progress that Stephen J. Gould would hate:
<evolution plus blind variation and selective systematic retention must
<tend toward tighter and tighter control, and greater and greater
<resistance to external events that tend to affect the accuracy of
<replication. This gives us an evolutionary scale on which to compare
<organisms.
<
<I can now tack your paragraph above onto the end of my exposition, as the
<story of what happens next (actually your story and mine probably
<overlap). Once we have negative feedback, and amplification with enzymes
<and later with neurons, and the capacity to create internal error-driven
<blind variation of organization, we have the ingredients for a system
<that can add levels of control whenever that is the only solution to
<error-correction that is left. And I think we end up with a very pretty
<picture of the whole sweep of evolutionary history from soup to nuts like
<us.

Back to the present posting:

If there's one primary concept that must be understood to understand my
theory of reorganization, it's that the variables controlled by this
process are completely apart from the variables represented as perceptions
in the learned hierarchy. The learned hierarchy is concerned primarily with
sensory data about an outside world, and about those aspects of physiology
that are represented in the sensory world. The reorganizing system is
concerned about variables in the world beyond the senses -- with the actual
state of the physical organism at levels inaccessible to the central
nervous system.

"Beyond the senses" is perhaps misleading. I would substitute "beyond the
externally-directed senses. Clearly if intrinsic error is to be detected,
something must detect the state of the intrinsic variable.

This brings up a little point about reality perception. A few weeks ago we
had a little discussion (which I'm not going to seek out in my Hypercard
stacks) about the difference between haptic and tactile perception, in which
it was pointed out that when touching is being done by the subject rather than
to the subject, the subject perceives an object instead of touch sensations.
The control, and in particular the internal kinaesthetic sensations, were
required if the subject were to perceive an external object. We also
discussed emotion. It seemed likely that internally generated sensations had
to be used in combination with sensations obtained through the external
sensors to create a stable emotional percept.

I do not find it plausible that internal states are unrepresented in the
controlled percepts of the sensory-motor hierarchy. In fact, I think that if
they were not, we would have much more difficulty with real-world control.
And I find it entirely plausible that there should be ECSs within the sensory-
motor hierarchy that derive their perceptions entirely from internal states,
including "intrinsic variables."

It is not necessary to learn from experience
that error signals represent some degree of failure to control; large error
signals indicate serious problems, no matter what perception they relate
to. The reorganizing system could monitor error signals in general, en
masse, without any need to know what they mean, and their mere presence at
large magnitudes could be sufficient to cause reorganization to start. This
would satisfy the requirement that intrinsic variables be inheritable. And
one result would be that loss of control could lead to highly localized
reorganization, precisely in the system that has lost control.

Yes, to the first part of the paragraph. But doesn't the second part begin to
suggest that Occam's razor is a bit blunt? You say that you have held in
reserve the idea that reorganization might be a local ability of an ECS, any
ECS. If sustained error is to be a localizable trigger for a separate
reorganization system, isn't it simpler just to allow any ECS with sustained
error to do its own reorganization by changing some signs at its output, or
perhaps seeking new structural links? We know that new dendritic connections
develop as a consequence of rich active experience in rats, so the generation
of new links from local activity is not absurd neurologically.

Allowing this, it seems to me that the passage quoted from last July leads
naturally to the presumption that the intrinsic variables do contribute to the
perceptions in the sensory-motor hierarchy, that errors in them are strong
determiners of the need for reorganization, and that a single mechanism rather
than a dual one is adequate for the task.

The single localized mechanism has an added advantage to the system as a
whole, not apparent at first glance. If, as you suggest, and as I think is
thermodynamically essential, there is a threshold of error below which the
probability of reorganization is essentially zero, then the fully reorganized
hierarchy will be in a critical state of subdued tension, with minor conflicts
at all levels. It will, in this state, be prepared for rapid reaction to
disturbances in any of its controlled variables. Any greater internal
conflict would lead to further reorganization, any less would lead to slower
reactions to disturbances.

As a final note, we did agree some months ago that reorganization must at
least be modular. This conclusion was based on an argument from degrees of
freedom, and would apply to any mechanism for random restructuring. The
underlying fact is that in a high-dimensional space almost all directions are
nearly orthogonal to any selected direction. If the actual linkage pattern
for optimal control of a particular percept represents a particular direction
in the space described by the signs of the output linkages of all the ECSs
contributing to that percept, then almost any random reorganization will have
a negligible effect if control is poor, but a devastating effect if control is
good. Only by reorganizing in a low-dimensional space can the E. coli effect
be put to good use.
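A quick numerical illustration of that degrees-of-freedom point (my own
snippet, not part of the original argument):

    import numpy as np

    def mean_abs_cosine(dim, trials=2000, seed=0):
        # Average |cos(angle)| between one fixed direction and `trials` random
        # directions in `dim` dimensions; it falls off roughly as 1/sqrt(dim),
        # so in a high-dimensional linkage space a random change is almost
        # orthogonal to the one direction that would actually improve control.
        rng = np.random.default_rng(seed)
        fixed = np.zeros(dim)
        fixed[0] = 1.0
        vecs = rng.standard_normal((trials, dim))
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
        return float(np.mean(np.abs(vecs @ fixed)))

    # mean_abs_cosine(3)   -> about 0.5
    # mean_abs_cosine(300) -> about 0.05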

So, if reorganization is to be done by an ignorant system external to the
sensory-motor hierarchy, it should be done at any one time only in small
localized groups of links, probably relating to a single ECS or a small group
of ECSs controlling a very similar percept. Also, presumably, low-level ECSs
that are able (usually) to provide percepts that match their references are
better supports for the reorganization of a higher-level one than are
incompetent low-level ECSs. One would expect low-level ECSs to settle down to
a stable, productive life long before the higher-level ones that depend on
them. That doesn't mean they don't change over time, but they will usually
change smoothly, and more probably in their perceptual functions than in their
output connections.


-----------
I'm sure you will disagree with a lot of the above, and probably you will
think I have missed your main point. Maybe I have, but it feels right, and it
feels as if everything you wrote made sense, based on an unnecessary
underlying presumption.

Is this enough disagreement to make you happy? Long as this comment is, it
mostly does agree with what you wrote. But not all.

Martin

[From Bill Powers (920702.0800)]

Martin Taylor et al. (920701) --

OK, lots of disagreements here to keep me happy. As I suspected, the basic
concept of the reorganizing system as I propose it is VERY hard to grasp --
it's hard even to see how such a thing could work. Martin Taylor: "I am not
at all sure that the end product of the argument, your separate
reorganization system, is even plausible."

How could a system that is not AT ALL concerned with the form of behavior
end up forming behavior? The problem is very much like trying to understand
how a system concerned with outcomes could be unconcerned with producing
specific outputs that produce just that outcome. Everybody on the net
understands control of outcomes through variations in action that are
systematically related to disturbances but not to the controlled outcome.
Now the problem is to see how an outcome could be controlled by variations
in action that are not related to ANYTHING.

Let's re-examine the lessons of E. coli. Inside E. coli there is a control
system (this is my way to model it, anyway) that senses a time-rate of
change of concentration of some substance. The perceived rate of change,
now just a chemical signal inside the bacterium, is compared with a
reference rate of change, another signal. The difference, the error signal,
acts on an output function, as usual.

This is, however, a very peculiar output function. What it does is not to
cause a systematic and appropriate change of direction, but periodically to
create a tumble, a random change in orientation of the body of the
bacterium. The tumbles themselves have no systematic effect on orientation.
They simply create new directions of swimming, at random.

Clearly it's the direction of swimming that systematically determines
whether the bacterium heads up or down a gradient of attractant. But the
bacterium's error signal does not operate on direction of swimming. It
operates on THE INTERVAL BETWEEN TUMBLES. Big error --> shorten the
interval; small error --> lengthen the interval. The final result is that
the bacterium proceeds quite efficiently up the gradient, in a series of
zig-zags that has longer legs in right directions and shorter legs in wrong
directions.
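Here is a toy simulation of that scheme; the concentration field, the numbers,
and the tumble rule are all invented for the illustration, not taken from
Koshland's data or from any existing model:

    import math
    import random

    def simulate_e_coli(steps=8000, dt=0.05):
        # The "bacterium" senses only the time rate of change of attractant
        # concentration.  Its only output is a tumble -- a random change of
        # heading -- and the error signal merely shortens or lengthens the
        # interval between tumbles.
        def concentration(x, y):          # invented field: highest at the origin
            return 100.0 - math.hypot(x, y)

        x, y = 40.0, 40.0
        heading = random.uniform(0.0, 2.0 * math.pi)
        speed = 1.0
        reference = 0.4 * speed           # desired rate of rise of concentration
        previous = concentration(x, y)

        for _ in range(steps):
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            current = concentration(x, y)
            error = reference - (current - previous) / dt
            previous = current
            # big error -> high tumble probability (short interval between tumbles)
            if random.random() < dt * max(0.05, 2.0 * error):
                heading = random.uniform(0.0, 2.0 * math.pi)

        return math.hypot(x, y)           # typically far smaller than the start

Nothing in the loop refers to direction in space; only the interval between
random changes of heading is being adjusted.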

BUT THIS CONTROL SYSTEM KNOWS NOTHING ABOUT CONTROLLING DIRECTIONS OF
MOVEMENT IN SPACE. It does not sense direction of movement. It senses only
the rate at which a concentration changes. Koshland, in perfusion
experiments, showed that the same effect on intervals between tumbles is
obtained by varying the concentration in a flowing medium while the
bacterium is tethered to a sticky substrate in a fixed orientation.

So the bacterium is sensing one variable, concentration change, while
reorganizing a different effect: speed of swimming times the cosine of the
angle between direction of travel and chemical gradient. It is not
systematically affecting direction of travel, but only adjusting the
interval between random changes in its direction of travel. At no time does it
have any information about which way it is swimming in space. It is
controlling a SIDE-EFFECT of direction of swimming -- time rate of change
of concentration of an attractant. From the viewpoint of the human
observer, and Koshland in particular, this bacterium knows how to
"navigate" in three-dimensional space. But that is entirely the wrong
interpretation. This bacterium knows nothing of three-dimensional space.
Its world has no extension beyond its own membrane. As far as the control
system in the bacterium is concerned, outputing an error signal causes the
time rate of change of concentration to remain more or less at the right
level, and that's all that's happening. The location of the body in space
or its actual direction of movement is outside the world of this control
system.

Let's take a small step upward in complexity. Suppose that we now have a
microorganism that can actually steer: it can sense the direction of the
gradient relative to its body. It compares this sensed direction with a
reference signal of zero (i.e., there isn't any reference signal) and
converts the error to a change in direction of swimming (perhaps by varying
the speed of movement of cilia on different sides of its body). If a
positive error signal causes one side to speed up and the other to slow
down, the body will turn toward the gradient and align to swim exactly up
it; if the sign of the effect is reversed, the body will turn down the
gradient and align in the direction down the gradient. If the loop gain is
set too high in either direction, control will turn into oscillation, so
progress will cease either up or down the gradient. If this is a 3-D
situation, there will be two control systems of this kind. We need to
consider only one of them.

This chemical gradient could be one of a substance that is helpful or
harmful to the bacterium. In other words, PRESENCE of one substance would
have some deleterious effect on the inner workings of the bacterium, while
LACK of the other would have a deleterious effect on the same thing.

Let's now give this organism a simple reorganizing system. The intrinsic
variable is the concentration of some substance inside the bacterium that
is an indicator of health. As long as this substance is near a particular
reference level, all is well. If the surroundings are conducive to health
but at a level lower than the optimum, in terms of this indicator, the
organism should steer up the gradient if it knows what's good for it (the
reorganizing system does).

But let's suppose that the output gain, the effect of the error signal on
differential speed of the cilia, starts at zero. Now, save for good luck,
there will be either an excess of substances that are deleterious to health
or a shortage of those that are conducive to health. In either case, the
indicator, the intrinsic variable, will depart from its reference level.
Reorganization will commence.

In this case, we assume that reorganization can affect only the loop gain.
The loop gain can be increased or decreased by some amount delta through
variations in output sensitivity, one factor in loop gain. Changing enzyme
concentrations could alter output sensitivity and hence loop gain. At
intervals, the size of delta is varied randomly within some small range
between positive and negative limits and is added to the current loop gain.
So the loop gain may increase by a small amount with each episode of
reorganization, or it may decrease.

If an increase in loop gain causes even more error in the reorganizing
system, or fails to decrease it, the next random increment/decrement of
loop gain will occur sooner or at least no later. If the error is lessened,
the next change will be postponed a while. The result will be that the loop
gain will perform a biased random walk in the direction that lessens
intrinsic error. Tom Bourbon has demonstrated that this does in fact work.

If the SIGN of the loop gain is positive, the organism will swim directly
up the gradient. Simply reversing the sign will cause the organism to swim
directly down the gradient (because the relationship between changes in
direction and changes in error reverses).

The result will be not only that the SIGN of the output sensitivity will be
correct for swimming up a gradient of beneficial substances or down a
gradient of noxious substances, but the size of the loop gain will come to
the maximum at which stable control of direction still exists.
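
Here, for what it is worth, is a bare sketch of just the reorganizing step
described above. The whole steering loop and its environment are collapsed
into one made-up function mapping loop gain onto average intrinsic error
(smallest at a gain of +5); the waiting times and the size of delta are also
arbitrary. The only thing the sketch is meant to show is the biased random
walk: changes that reduce intrinsic error are kept longer, changes that
increase it (or fail to reduce it) are soon revised.

/* E. coli-style reorganization of a single parameter, the loop gain.
   intrinsic_error() stands in for the entire steering loop plus its
   environment; its shape (minimum at gain = +5) is a pure assumption. */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double intrinsic_error(double gain)
{
    return fabs(gain - 5.0);          /* pretend "best health" at gain = +5 */
}

int main(void)
{
    double gain = 0.0;                /* output gain starts at zero         */
    double lasterr, wait = 0.2, timer = 0.0, dt = 0.1;
    int step;

    srand(1);
    lasterr = intrinsic_error(gain);

    for (step = 0; step < 20000; ++step)
    {
        timer += dt;
        if (timer >= wait)            /* time for another reorganization    */
        {
            /* random increment/decrement of the loop gain */
            double delta = 0.2 * (rand() - 0.5 * RAND_MAX) / (0.5 * RAND_MAX);
            double err;

            gain += delta;
            err = intrinsic_error(gain);

            /* error lessened: postpone the next change; otherwise, soon */
            wait = (err < lasterr) ? 5.0 : 0.2;
            lasterr = err;
            timer = 0.0;
        }
    }
    printf("final loop gain = %.2f   intrinsic error = %.3f\n",
           gain, intrinsic_error(gain));
    return 0;
}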

If this directional control behavior succeeds in keeping the level of
beneficial substances sufficiently high, or the level of noxious ones
sufficiently low, the reorganizing system will go to sleep -- it will
detect no error, and the rate of reorganization will drop to zero. From
then on, the control system will automatically swim up or down the gradient
and continue to avoid noxious substances or seek beneficial ones (one or
the other, but not both, in this very simple case). It will follow changes
in direction of the gradient in space without any further modification,
resisting disturbances that might make its path deviate from the right
direction.

In this example there's a direct relationship between the chemical gradient
sensed by the steering control system and the effect on the intrinsic
variable. The intrinsic variable that indicates state of health is affected
by the same chemical substance that is used for steering. In fact, it is
the reorganizing system that gives "value" to the substance being sensed
and controlled by the direction-control system. The direction-control
system is not concerned with the meaning of the chemical signals indicating
direction errors: it will just as happily steer the organism up or down the
gradient, depending on the sign of its loop gain. It's the reorganizing
system that decides that this substance is noxious or beneficial, in terms
of effects on some intrinsic variable that is an indicator of the state of
the organism.

But even that is misleading. The reorganizing system doesn't put a label on
the substance used for steering. It simply monitors the EFFECT of that
substance on the organism itself, and if there's intrinsic error it fiddles
with the loop gain until the intrinsic error goes away. The result is that
the organism either seeks or avoids that substance, according to whether
the loop gain ended up positive or negative. We, looking at the final
outcome, decide that this organism likes or dislikes the substance, judging
by the fact that it seeks or avoids it.

With only a small further change we can further generalize this example.
Suppose that this organism steers not by sensing chemical gradients but by
sensing light intensity impinging on receptors on either side of its body.
Depending on the sign of the loop gain, it will swim either toward or away
from the light.

Suppose also that in the medium there are noxious substances that have a
higher concentration where there is light, and that recombine to harmless
forms in the dark. Now the behavioral controlled variable is differential
left-right (or up-down) light intensity, but the basis for reorganization is
some effect of a noxious substance on an internal indicator of health, an
intrinsic variable that has nothing to do with sensed light intensity.

Now reorganization will create a negative loop gain and the organism will
avoid light. The control system in charge of steering knows nothing about
noxious substances, and the reorganizing system knows nothing about light.
Yet the overall effect is that the control system avoids light and thereby
keeps noxious substances from changing the state of the intrinsic variable
that is the basis for reorganization. The organism has adapted to a
photochemical phenomenon that is entirely outside its ken. From the
observer's point of view, the organism has come to assign a negative value
to light, judging from the fact that it avoids light.

Suppose that something in the environment now gradually changes, so that
the physical situation reverses: now noxious substances form in the dark,
and are dissociated into harmless components by light. The organism will
find itself swimming into trouble by avoiding light. The noxious substances
will cause the indicator of health, the intrinsic variable, to depart from
its reference level, and reorganization of loop gain will commence. It will
cease only when the loop gain has become optimally positive, for that will
result in the organism's seeking instead of avoiding light. The steering
control system will now remove the organism from the concentrations of
noxious substances -- without in the slightest intending to do so.

I hope that this sequence heralds a dawning a little nearer in the future.
Martin Taylor says:

Fair enough, but the connotations here are beginning to bring us onto
treacherous ground. You are beginning to assume that the control systems
for intrinsic variables are eventually going to be found to be organized
in a system separate from the system that interacts with the outer world.
You haven't said it yet, but the end of the paragraph is stated in a way
that leads one's thinking in that direction.

Now I have said it much more explicitly. This is precisely what I am
proposing.

···

-------------------------------------------------------------------
I am concerned about some sloppiness that is showing up in various ideas
proposed for how ECSs do things. We have ECSs that can alter their own loop
gain, that communicate with and control other ECSs at the same level
depending on the situation, and that can voluntarily "relinquish control."
This and other such proposals have the effect of putting a great deal of
additional function into an "E"CS. This is all right with me IF IT'S BACKED
UP BY A TESTABLE AND DEMONSTRABLY WORKABLE MODEL SHOWING HOW EACH PROPOSED
NEW FUNCTION WORKS.

That requirement is sort of being ignored. I get sucked into doing it
myself, which is among my main reasons for raising a red flag. New ideas
are a dime a dozen, but new ideas that have been worked out to the level of
modeling aren't. And I don't mean just a schematic diagram -- I mean
something you could demonstrate in a computer (even a very simple version
of it), just to show that it would actually do what you say it would do.
Not everything you draw will actually behave the way you think it will.

What would be necessary in an ECS that could "relinquish control" all by
itself? It would have to detect the conditions under which this is required
-- what kind of detectors, sensing what? How much computation, and what
kind, is needed to recognize the conditions? What kind of actuators would
it need, driven by what, acting on what? Many proposals that sound simple
when expressed in a few words turn out to entail far more complexities than
I, at least, would like to see in an ECS -- some of them require whole
hierarchies of control!

Of course our conjectures are always ahead of what we can actually support;
I'm not trying to put a damper on creativity. But if we get too far ahead
of ourselves, we'll fall back into that Scholastic mode in which all you
have to do is think of something verbally plausible while the possibility
of actually proving that your idea would work goes down the drain (in other
words, standard psychology). That I am vastly uninterested in.

I'll get around to answering some other posts later -- if I ignore anyone
it isn't out of disinterest, but due to shortage of time. Ask again if the
point hasn't been covered.
---------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 920702 13:30]
(Bill Powers 920702.0800)

Bill, you have presented another clear exposition on the aspects of
reorganization with which we are both (all?) familiar. But you haven't
dealt with the primary objection to your proposal, which has nothing whatever
to do with whether the reorganizing system/principle/method "knows"
anything about what it reorganizes. Everything you write about it in this
posting speaks to that non-issue.

I said that I didn't think that what you proposed was plausible for reasons
that I hoped were clear. I tried to specify the area of agreement and the
area of problems, but apparently it didn't get across. Sorry to be obtuse.

Rather than go through it again, let me just put the main objection very
simply. The bacterium whose control has three degrees of freedom can reorganize
with no trouble. In 3-space, a substantial portion of the randomly chosen
directions are "near" (say, within 60 degrees of) a preselected direction.
In higher-dimensional spaces, the proportion of directions near the selected
direction is very small.

If we call the i'th ECS at level n ECS(n,i), there is a vector with elements
link(ECS(n,i),ECS(n-1,k))
where link(p,q) is 1, 0, or -1, depending on the existence and sign of the
connection between p and q (or takes a real value, in the general case).

Taking the simple case in which link() can be 1, 0 or -1, a random
reorganization has a probability 0.33... of doing the right thing if there is
only one dimension, 0.111... in two dimensions, and (1/3)^n in n dimensions.
If there are three ECSs in each of two layers, that is roughly a one in
twenty-thousand chance. All other sets of connections lead to some conflict,
and even if we grant the probability that some other sets provide stability,
the odds are not good that global reorganization by non-targeted random
alteration of link sign will reach an optimum quickly. The problem is the
same as that of molecular evolution as seen by the creationists. You can't
do it that way. You have to grow stably. And that, it seems to me, means
targeted reorganization. But not reorganization in which the reorganizing
system "knows" what or why it is reorganizing, in the sense of "hunger means
arrange to get something to eat." We agree that that is ordinarily nonsense
(caveat: not nonsense under "teaching").
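
For anyone who wants the numbers spelled out, here is a one-screen
calculation of the chance that a single random assignment of link signs hits
one preselected configuration, as a function of the number of links. The
assumption that the links are independent and equally likely to be -1, 0, or
+1 is taken straight from the paragraph above; nothing else is added.

/* Chance that one random assignment of link signs (each independently
   -1, 0, or +1) matches a preselected configuration, versus the number
   of links. Nine links (three ECSs in each of two layers) gives the
   "roughly one in twenty thousand" figure. Compile: cc odds.c -lm */

#include <stdio.h>
#include <math.h>

int main(void)
{
    int n;
    for (n = 1; n <= 9; ++n)
        printf("links = %d   P(match) = %.6f   (1 in %.0f)\n",
               n, pow(1.0 / 3.0, n), pow(3.0, n));
    return 0;
}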

Now think what "quickly" means in respect of reorganization. It means survival.
If a critical feedback loop has positive feedback, the organism may die if the
problem (THAT problem) is not corrected quickly. But the reorganizing system
(we agree) does not know what THAT problem is. All it can know is that there
is error, and it can know that the error is manifest in a particular ECS. Now,
the dimensionality of the output connection set for that ECS is much smaller
than for the net as a whole. It seems reasonable to suppose that the positive
feedback could be corrected more easily and quickly in the low-dimensional
space of one ECS than in the high-dimensional space of the hierarchy. A blunt
instrument to do it might be to change the sign of the comparator output (i.e.
reverse all the link signs at once). Any such reorganization changes the demands
on (reference signals supplied to) some lower-level ECSs, and may result in
them developing uncorrectable errors, and thus reorganizing. Or it may not.
The system is blind to that. But each change is of low dimensionality, and
thus feasible to achieve through random processes.

I am concerned about some sloppiness that is showing up in various ideas
proposed for how ECSs do things. We have ECSs that can alter their own loop
gain, that communicate with and control other ECSs at the same level
depending on the situation, and that can voluntarily "relinquish control."
This and other such proposals have the effect of putting a great deal of
additional function into an "E"CS. This is all right with me IF IT'S BACKED
UP BY A TESTABLE AND DEMONSTRABLY WORKABLE MODEL SHOWING HOW EACH PROPOSED
NEW FUNCTION WORKS.

That requirement is sort of being ignored.

Agreed. I've worried a bit about that, too. But is there any evidence that
the neural system is built of repetitive structures like an ECS at all? ECSs
are very nice units for modelling, and make good predictions of actual behaviour
in simple situations when coupled appropriately. They are functionally very
effective, they implement a principle that has to be true, and they make much
of the net behaviour simple and intuitive to understand. My hunch is that the
real structures are much more distributed, and that we will rarely if ever be
able to identify a functional piece of the brain and say "here's a classic
scalar ECS."

In my mind, the situation is rather like that of the computational linguists
who think they have sets of rules that determine legal and illegal strings in
a natural language. The rules work well for a central core of language, but
do not capture its fluidity. Likewise, I suspect that the simple ECS hierarchy
will capture behaviour in many situations, but will be only an outline of what
happens in the real world.

My worry is that it is too easy to solve problems by giving this cartoon-ish
entity, the ECS, the intrinsic ability needed to solve each particular problem.
In the "similarity-difference" paper (and in the AGARD extension) I proposed
that control could pass between two ECSs, and tried to use non-formal language
to describe it, so that the functional need for such a possibility could be
separated from speculation about mechanism. (I re-posted a possible mechanism
last night, which I hope will answer your concern about one ECS "wresting"
control from another--the functional but informal description of what must
happen).

Not everything you draw will actually behave the way you think it will.

How true!

What would be necessary in an ECS that could "relinquish control" all by
itself? It would have to detect the conditions under which this is required
-- what kind of detectors, sensing what? How much computation, and what
kind, is needed to recognize the conditions? What kind of actuators would
it need, driven by what, acting on what? Many proposals that sound simple
when expressed in a few words turn out to entail far more complexities than
I, at least, would like to see in an ECS -- some of them require whole
hierarchies of control!

Good questions, all. And the last point about hierarchies internal to an ECS
does ring a bell when I think of all the hierarchies that exist within a single
cell. Let's give a try at some kind of answer. Tentative.

Detectors: I can see two reasons why a control system will be unable to
maintain its error near zero when it has been doing so previously: (1) there
is a barrier in the environment stronger than the effectors in the ECS's control
loop--we hit a wall, for example; (2) another controller has newly started
to work on the same CEV, or on one elsewhere in the first ECS's control loop (either in
the effector or the sensor part of the loop). I'm not sure these cases can
be distinguished internally to the ECS having a problem. But the result
might well be the same--relinquishment of control. So the sensor senses the
persistence of error over some period of time, just like the sensor needed
for reorganization (no matter which view of reorganization you take).

Actuator: The actuator could (should be expected to?) act on the gain control,
the "insistence" of the ECS. The initial response might be to increase the
insistence, and after some time in which the error does not reduce appreciably,
to reduce it. A second possibility concerns the imagination connection,
about which I am not so clear. But trivially, rather than the actuator
reducing the gain to near zero, it might switch the output connection into
the imagination mode rather than providing references to lower ECSs that
were not performing as desired. I find this a bit clumsy, and I'm not going
to argue for it. But it's a possibility.
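
To make the tentativeness concrete, here is one way the detector and actuator
just described could be sketched in code. The leaky integrator, the
thresholds, the effort limit, and the "wall" that appears partway through
are all invented for the illustration; nothing in it is claimed to be the
actual mechanism.

/* Sketch of a "relinquish control" mechanism: a leaky integral of error
   magnitude serves as the persistence detector; the actuator first raises
   the ECS's gain ("insistence"), then, if the error still will not go
   away, lowers the gain toward zero. All constants are invented. */

#include <stdio.h>
#include <math.h>

int main(void)
{
    double r = 10.0;                  /* reference signal                  */
    double out = 0.0;                 /* integrated output, effort-limited */
    double gain = 1.0;                /* output gain ("insistence")        */
    double persist = 0.0;             /* leaky integral of |error|         */
    double dt = 0.01;
    int step;

    for (step = 0; step < 4000; ++step)
    {
        double t = step * dt;
        double dist = (t < 15.0) ? 0.0 : -15.0;  /* a "wall": a push the
                                                    effectors cannot beat  */
        double p = out + dist;                   /* perceptual signal      */
        double e = r - p;

        /* integrating output with a hard effort limit */
        out += gain * e * dt;
        if (out >  10.0) out =  10.0;
        if (out < -10.0) out = -10.0;

        /* persistence detector: leaky integrator of error magnitude */
        persist += (fabs(e) - persist) * 0.2 * dt;

        /* actuator: first push harder, then give up */
        if (persist > 2.0 && persist <= 6.0 && gain < 5.0)
            gain += 1.0 * dt;                    /* raise insistence       */
        if (persist > 6.0)
            gain -= 1.0 * dt;                    /* relinquish: gain -> 0  */
        if (gain < 0.0) gain = 0.0;

        if (step % 500 == 0)
            printf("t=%5.1f  error=%6.2f  persist=%5.2f  gain=%5.2f\n",
                   t, e, persist, gain);
    }
    return 0;
}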

Imagination is an issue that is currently nagging me. Perhaps one day I will
post on it.

(Bill Powers 920702.1000)

Tell you what. How about giving us a reference to the similarity-difference
studies that we can look at? I want to see what kind of facts we're talking
about.

The best I can do right now is to list the references I used in the 1983 book
(pp 172ff). These don't seem to include the category search studies that
seem to me to be the most dramatic. The name Juola, and the journal
Perception and Psychophysics, spring to mind for them, but I think Juola
only did one. My referencing system is pretty haphazard, and it isn't easy
to go back for something specific like this. And I haven't looked at the
research since about 1985. But these seemed pretty convincing to me.

Taylor, D.A. (no relation to me or JGT). Holistic and analytic processes in
the comparison of letters. Perception and Psychophysics, 1976, 20, 187-190.

Jones, B. The integrative action of the cerebral hemispheres. P&P, 1982, 32,
423-433.

Cunningham, J.P., Cooper, L.A. and Reaves, C.C. Visual similarity processes:
identity and similarity decisions. P&P, 1982, 32, 50-60.

I note a comment at the end of this section of the book: "Throughout the next
few chapters, we shall find examples of such cooperating process pairs. One
is always a fast, global process preferred by the RH (right hemisphere), the
other a slower, analytic process usually performed by the LH. At higher
levels of processing, the evidence for two processes is stronger and comes
from more diverse sources (Chapter 11)."

For these, I suggest you get the book out of the library. Speaking of which,
the owner of the copy of BCP that I was using has left for a job in Australia,
so I can't normally refer to it any more.

Martin

[Martin Taylor 920706 11:30]
(Bill Powers 920704.0800)

Bill, as usual, caught my sloppy descriptions. In this case it was the
error functions. I described a potential function, not a "force" function.
The error should be the derivative of the functions I gave for the type 2
systems, so that the signs are different on the two sides of zero error.

I won't reply to the rest of Bill's postings until he comes back on the net.

Martin

[From Bill Powers (920711.0800)]

Martin Taylor (920710.1530) --

What I like about your proposal is the idea that there's a random-
output reorganizing process operating in the (potential) CNS
hierarchy, gradually being replaced by (or paralleled by) systematic
control. This reorganizing process could monitor certain built-in
aspects of potential control systems in the CNS, such as the signals
being emitted by comparators (which are so simple that we can assume
them to be part of the initial complement of parts). Also, the brain
is physically organized so that sensory computers tend to be lumped
together, as do motor output computers. There may be aspects of these
more or less localized networks that permit monitoring for invariance,
negative feedback relations between motor and sensory signals, and so
on -- without an implication that the reorganizing process knows in
advance what perceptions or actions will be appropriate for the
current environment.

One bit of confusion was caused by my using "sensing" in an ambiguous
way. I agree, of course, that all control requires sensing. When I
said that intrinsic variables were outside the sensory interface, I
meant that they are outside the CNS sensory interface. I don't think
that chemical sensors and comparators can be counted as part of the
CNS hierarchy. If some biochemical states of the organism become parts
of experience (as in emotions), they must affect neural sensors
connected to the CNS. Under my concept of the reorganizing process, if
biochemical states are to become a basis for reorganizing, they would
be sensed in some other way, probably by chemical or autonomic
sensors. To the CNS, sensed internal states are no different from
sensed external states; they are simple data about the current state
of affairs in the world outside the CNS. They have no value or any
built-in reference states as far as the CNS is concerned.

Let's say that the photon-based reaction affected the concentration
of CO2 in the organism. Then if the wiggle-motor became more
specifically sensitive to deviations in the CO2 level, both above and
below optimum, then the organism would begin to control for light
level.

You're making the photon reaction itself produce the CO2. What I'm
interested in is the case in which the photons arise from some object,
the presence of which implies some OTHER effect on CO2 in the body.
The photons might come from a warning light that indicates a leak in a
CO2 container. Now the organism has to learn to get out of there when
the warning light (whistle, vibration) is sensed, even though the
light itself is not the cause of the elevated CO2 tension in the
bloodstream, and position in space, not CO2 tension, is the variable
that becomes controlled. So the organism must learn to control for a
specific level of one perception as a way of controlling another
variable inside itself that has no necessary connection to the
controlled perception, save for happenstance properties of the
external world.

The pigeon has to be able to learn to walk in a figure eight (itself a
control process) as a means of making food appear. Walking has no
effect on improving nutritional state, save for the mad scientist in
the environment. Something has to make the learning of one control
process depend on an indirect and arbitrary effect of that control
process on the environment, and by that indirect route on the internal
state of the organism. This is the kind of thing that my reorganizing
system is meant to accomplish. I don't think that any localized
reorganizing principle could do it.

···

------------------------------------------------------------------
I think that before we get any further into complicated arguments and
misunderstandings, we should do some work on simulating
reorganization. I'm going step by step on this, and am not ready to
show a whole indirect reorganizing process yet. At the moment, what I
have is a method of solving simultaneous equations by reorganization.
This is not meant to imitate any particular organismic process; that
comes a few steps further on. What it does is illustrate some
principles.

The basic setup is this:

There are 10 perceptual functions, each producing a linear function
of 10 environmental variables. The form of the linear functions is
generated by 10 weights for each of 10 perceptual functions, so that

p[i] = SUM over j of a[i,j]*v[j], where 0 <= i <= 9 and 0 <= j <= 9

The weights a[i,j] are generated at random in the range between -1 and
1 and are fixed. The initial values of v[i] are zero.

There are also 10 fixed reference signals r[i], generated at random in
the range -50 to 50. Thus we can compute 10 error signals e[i] = r[i]
- p[i].

The reorganizing system computes the square root of the sum of squares
of all error signals, which is the distance in 10-dimensional
hyperspace between the point p[i] and the point r[i]. On each
iteration, the distance is compared with the distance on the previous
iteration, to provide a measure of velocity toward or away from the
point r[i].

If the velocity is positive on any iteration, a reorganization takes
place. For all negative velocities, the direction of movement
resulting from the last reorganization is applied over and over.

Reorganization is a two-step process.

First, a 10-element vector delta[j] is filled with random numbers
between -1 and 1. It is normalized by dividing each entry by the
square root of the sum of the squares of all entries; this makes the
entries into direction cosines in a 10-dimensional space.
Reorganization thus changes the direction of this vector randomly in
10-space. This normalization is not essential, but seems to make
convergence faster.

Second, the 10 elements of delta[j] are added to the 10 current values
of the environmental variables v[j] after multiplication by
"stepsize." This causes the complex environmental variable to move a
distance "stepsize" in hyperspace.

The loop is now closed, because that movement in hyperspace results in
a change in perceptions p[i] and therefore in the errors e[i].

The variable stepsize can be a small constant, or (for faster
convergence) can be proportional to the remaining hyperspace distance.
I have used the latter method.

Here is part of a run arbitrarily cut off at 5000 iterations:

Iteration    Error left,        # consecutive    # consecutive
             fraction of max    steps            reorgs

n= 1 e/emax= 0.996 steps= 1 reorgs= 1
n= 14 e/emax= 0.978 steps= 12 reorgs= 1
n= 17 e/emax= 0.977 steps= 2 reorgs= 1
n= 24 e/emax= 0.975 steps= 5 reorgs= 2
n= 29 e/emax= 0.974 steps= 3 reorgs= 2
...
n= 4968 e/emax= 0.011 steps= 8 reorgs= 1
n= 4974 e/emax= 0.011 steps= 3 reorgs= 3
n= 4984 e/emax= 0.011 steps= 8 reorgs= 2
n= 4990 e/emax= 0.011 steps= 1 reorgs= 5
n= 4994 e/emax= 0.011 steps= 3 reorgs= 1
-----------------------------------------------------------------

DATA SUMMARY after run:

Environmental variables (target of reorganization):

49.84 13.35 -24.48 -3.29 25.79 14.76 -0.48 -9.68 67.15 -0.92

Perceptual coefficients (picked at random, normalized to 1.0):

-0.890 -0.460 -0.750 0.370 -0.850 0.840 0.610 0.630 -0.360 0.440
-0.850 0.060 -0.950 -0.950 0.530 -0.910 -0.150 0.370 0.400 0.200
-0.690 0.870 0.790 0.140 -0.570 0.100 -0.740 -0.390 -0.430 0.540
-0.390 -0.760 -0.400 0.720 -0.450 -0.760 0.340 -0.930 -0.560 0.130
-0.460 0.700 0.170 0.190 -0.360 0.550 0.990 -0.090 0.390 -0.800
0.030 -0.810 -0.040 0.220 -0.440 -0.490 0.110 -0.280 0.560 -0.280
0.640 -0.650 0.310 0.670 -0.970 0.050 -0.220 0.310 0.730 0.930
0.650 -0.050 -0.240 -0.460 0.300 0.820 -0.750 -0.570 -0.430 -0.820
0.030 -0.140 0.840 -0.910 0.200 0.140 0.310 0.200 0.790 -0.910
0.750 0.870 -0.780 -0.360 0.470 -0.940 0.640 0.230 0.620 0.650

Perceptual signals:
-20.93 -30.02 9.30 10.08 33.01 -20.97 -0.87 15.00 36.93 -1.97

Reference signals (picked at random):
-21.00 -30.00 10.00 10.00 33.00 -21.00 -1.00 15.00 37.00 -2.00

Error signals:
-0.07 0.02 0.70 -0.08 -0.01 -0.03 -0.13 -0.00 0.07 -0.03

Count = 5001, RMS Err/max = 0.0106
---------------------------------------------------------------------
What's interesting about this control system is that it is
independently controlling 10 perceptual variables with respect to 10
arbitrarily-selected reference signals, by means of randomly altering
10 environmental variables on which all the perceptual variables
depend in different ways. The control employs only the total error,
not the error in each channel by itself.

As I mentioned in a previous post, I have also made this work by
reorganizing the coefficient matrix a[i,j], with the environmental
variables fixed at random settings. So we have proof of principle for
two major ways of reorganizing: reorganizing the input function, and
reorganizing the feedback link (the present case). There's no reason
why both modes of reorganization can't be going on at the same time.
All that matters is whether those two points in hyperspace are getting
closer together or farther apart.

The basic random process occurs between the total error signal and the
set of environmental variables. This set of variables could represent
whole control systems, with the random adjustments being made on the
forms of their input functions, the signs of output connections, and
the various factors influencing loop gain. This arrangement would look
like the one you're suggesting, where the CNS is reorganizing itself.

Here's the C function that does the actual reorganizing. Most of the
variables are globals. The init() routine and the routine that
calculates error signals are also included:

/* requires <stdio.h>, <stdlib.h>, and <math.h>; most variables
   (esq, lastesq, distance, lastdistance, rate, stepsize, maxdistance,
   count, m, n, numvars, and the arrays p, v, a, e, r, delta) are
   globals declared elsewhere */

/* below is called once per iteration */

void reorg(float *e, /* pointer to list of error signals */
     int nerr, /* number of error signals */
     float *delta, /* pointer to motion array */
     float *var, /* pointer to controlled variable array */
     int nvar) /* number of entries, delta & var array */
{
float temp;
static int numsteps = 0;
static int numreorg = 0;
static int newstate = 0; /* state=0 means not reorganizing */
static int oldstate = 0;
lastesq = esq;
esq = 0.0;
for(m=0;m<nerr;++m)
   esq += e[m]*e[m];
distance = sqrt(esq);
rate = distance - lastdistance;
lastdistance = distance;
stepsize = 0.7 * distance/maxdistance;
if(rate > 0.0)
  { /* reorganize delta array */
   newstate = -1;
   ++numreorg;
    for(m=0;m<nvar;++m) /* pick all new deltas at random */
      delta[m] = (rand() - 0.5*RAND_MAX)/(0.5*RAND_MAX);
    temp = 0.0;
    for(m=0;m<nvar;++m) /* normalize to 1.0 */
      temp += delta[m]*delta[m];
    temp = sqrt(temp);
    for(m=0;m<nvar;++m)
      delta[m] /= temp;
   /* delta now is a set of direction cosines */
   }
   else {++numsteps; newstate = 0;}

  if(newstate != oldstate) /* print consecutive # steps, # reorgs */
   {
    if(newstate == 0)
    {
printf("\x0d\x0an= %6d e/emax= %7.3f steps= %3d reorgs= %2d",
  count,distance/maxdistance,numsteps,numreorg);
     numsteps = 0; numreorg = 0;
    }
    oldstate = newstate;
   }

   for(m=0;m<nvar;++m) /* move contr var in hyperspace */
     var[m] += stepsize*delta[m];
}
void calcerr() /* compute all error signals */
{
   for(m=0;m<numvars;++m)
   {
    p[m] = 0;
    for(n=0;n<numvars;++n)
     p[m] += a[m][n]*v[n]; /* perceptual signal: sum over n of a[m][n]*v[n] */
    e[m] = r[m] - p[m]; /* compute error signals */
   }
}

void init()
{
for(m=0;m<numvars;++m)
  {
   r[m] = random(100) - 50;      /* random(n) is Borland's 0..n-1 integer */
   delta[m] = 0;
   for(n=0;n<numvars;++n)
     a[m][n] = 0.01*(random(200) - 100);
  }
  count = 0;
  maxdistance = 0.0;
  for(m=0;m<numvars;++m)
   maxdistance += r[m]*r[m];
  maxdistance = sqrt(maxdistance);
}

Feel free to use, modify, etc.
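
For anyone who wants to run it, the following is a guess at the parts not
posted -- the headers, the global declarations, and a driver loop. Only the
three routines above are Bill's; everything below is a hypothetical scaffold,
meant to be placed ahead of those routines in the same file (the random()
macro stands in for the Borland library function).

/* Hypothetical scaffold for the posted routines: headers, globals, and a
   driver. None of this is from the original post; it is only one way to
   make the three functions above compile and run (cc reorg.c -lm). */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define random(num) (rand() % (num))   /* stand-in for Borland's random() */

#define NV 10

int   m, n, count, numvars = NV;
float p[NV], v[NV], r[NV], e[NV], delta[NV];
float a[NV][NV];
float esq, lastesq, distance, lastdistance;
float maxdistance, rate, stepsize;

void reorg(float *err, int nerr, float *del, float *var, int nvar);
void calcerr(void);
void init(void);

int main(void)
{
    srand(1);
    init();
    for (count = 1; count <= 5000; ++count)
    {
        calcerr();                            /* update p[] and e[]          */
        reorg(e, numvars, delta, v, numvars); /* maybe reorganize, then step */
    }
    printf("\nfinal error fraction = %.4f\n", distance / maxdistance);
    return 0;
}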
---------------------------------------------------------------------
Best

Bill P.

In a message dated 8/7/2000 5:01:46 PM US Mountain Standard Time,
tbourbon@centex.net writes:

<< This really is where we have some different ideas. Error signals
are present all the time in a multi-level LCS like us, with
gazillions of control loops at each level. Error signals are
present all of the time -- but emotions are not. Error signals
are nothing more than differences between p and p* and there is
no suggestion of emotion attached to them, in and of themselves.
Often, we seem to perceive emotions when we cannot eliminate
error, or when we eliminate error that persisted for an extended
time

I have a hard time thinking of error signals as emotions. There
are too many of the former, and too few of the latter. >>

I agree with you, I do. When I am speaking of Reorganization in counseling
and in regard to a LCS, it is a rare event. Also, it is most noticeable in
the higher levels of Perception.

Reorganization takes place when coping skills fail and life styles interfere
with the quality of life and so on. A persistent nagging feeling of anxiety
and/or depression.

Most of the time in our life we are meeting our daily needs and wants. There
is really just small corrections being made, but no need to Reorganize the
hierarchy.

I guess that is what I am getting at: Reorganization is needed to change or
Create System Concepts, Principles, and Programs. To a much lesser extent
Category and Relationship levels.

Any changes to levels at or below the relationship level are most likely
behaviors that came to be or are extinguished because of the higher levels
that have already been Reorganized.

I do think Reorganization happens at the lowest level "intensity" and on up
the hierarchy, but only during the developmental years, as a newborn through
early adolescence.

Really I think I am on shaky ground when I talk about Reorganization
happening for adults at levels lower than the top 3; System Concepts,
Principles, and Programs.

Mark

Mark,

I see that our conversation is going to a wider audience.

In a message dated 8/7/2000 5:01:46 PM US Mountain Standard Time,
tbourbon@centex.net writes:

<< This really is where we have some different ideas. Error signals
are present all the time in a multi-level LCS like us, with
gazillions of control loops at each level. Error signals are
present all of the time -- but emotions are not. Error signals
are nothing more than differences between p and p* and there is
no suggestion of emotion attached to them, in and of themselves.
Often, we seem to perceive emotions when we cannot eliminate
error, or when we eliminate error that persisted for an extended
time

I have a hard time thinking of error signals as emotions. There
are too many of the former, and too few of the latter. >>

I agree with you, I do. When I am speaking of Reorganization in counseling
and in regard to a LCS, it is a rare event. Also, it is most noticeable in
the higher levels of Perception.

Reorganization takes place when coping skills fail and life styles interfere
with the quality of life and so on. A persistent nagging feeling of anxiety
and/or depression.

No problem. I think the idea you just described is already part
of PCT dogma.

My comments have been addressed specifically to your graphic
representation of a generic single-loop PCT model. In it, you
labeled the output of the comparator as "Error Signal or
Emotion." I only questioned the idea that the output of a
comparator is an emotion.

I applaud your attempt to use the control loop as a teaching
device during crisis intervention. I probably agree with the
remainder of your remarks, about the levels at which you think
reorganization is most likely to occur during counseling.

Cheers,
Tom

[From Bill Powers (2000.08.08.0923 MDT)]

Tom Bourbon (2000.08.08) --

<< I have a hard time thinking of error signals as emotions. There
are too many of the former, and too few of the latter. >>

Yes. I prefer to use a phenomenological starting point. As far as I can
tell, all emotions have two perceptual components.

One component is a set of feelings, by which I mean only bodily sensations
such as a pounding heart, the feelings of vasoconstriction, shortness of
breath, and other such things. These add up to sensing the state of the
body in ways that seem related to preparations for action. The sensations
may feel good or bad, but there's nothing in them to distinguish one
emotion from a lot of others. The same sensations might go with emotions we
think of as quite different, such as joy, excitement, anger, and fear.

The other component is cognitive and goal-related. When we're feeling an
emotion, we're wanting to do something: run away, attack, scream at
someone, shout with joy, embrace someone, become invisible, seek safety,
and so on. In other words, there is some reference signal and, most often,
a difference between what we're experiencing and what we want to be
experiencing concerning our relationship to the world.

It seems reasonable to me to see the somatic component of emotions as
representing preparation to perform the actions that the cognitive
component is preparing to carry out to achieve its reference conditions.
The neuroanatomy looks right, with the somatic component branching off at
about the thalamus. This picture seems to square with my own experiences. I
can't find anything in my own emotions other than these two perceptual
components -- even the "good" ones. Of course you could say I'm not looking
hard enough.

The connection of emotions with reorganization that I see is very indirect.
We can feel emotions simply by doing something strenuous for a moment.
Doing anything that requires a lot of effort entails large error signals,
because error signals are all that drive actions. All strenuous actions
require cranking up the body to supply the energy needed and to do other
preparatory things like redistributing blood flow and raising muscle tone.
None of these things necessarily implies intrinsic error, so none of them
necessarily implies reorganization. So emotion doesn't necessarily imply
reorganization.

Reorganization is likely to happen when errors are large and physical
preparation is fairly extreme as well as chronic. So emotion and
reorganization would tend to be linked strongly only at the extremes of
error. This implies that the strongest emotions, and the most likely
occasions for reorganization, would be in cases where the action fails to
correct the error for any of several reasons. The action itself could be
blocked in internal conflict. The action might be ineffective even if
taken. The physical situation might make action more dangerous than
inaction. And of course there could be a lot of more detailed reasons for
failure to correct error, such as Mark Lazare points out.

An observation in passing: it is possible to create recognizable bodily
states by chemical means, such as by injecting adrenaline. When people
experience artificially produced body states, they often perceive them as
if the cognitive component were also there. A sudden jolt of adrenaline
might be perceived as a sudden wish to flee, or a sudden desire to attack
someone or something: fear or anger, as we label such experiences.
Chemically suppressing metabolism or muscular activity or chemical energy
levels in the body might be experienced as depression even when there is
nothing obvious to feel depressed about. When apparent emotions are
produced by chemical interventions or by physiological malfunctions,
attempts to treat them cognitively may well be futile or worse. The
cognitive way of dealing with unwanted emotions is to change the goals that
are driving them, but if the physical feelings are not being produced by
cognitive error signals, this approach not only can't work but can make the
person feel guilty about complaining. If you honestly can't see how you
could be wanting to produce a strongly unpleasant state in yourself,
telling you, in effect, that it's your own fault could only add a cognitive
burden to the already unpleasant situation.

Nice to hear from you, Tom.

Best,

Bill P.

In a message dated 8/8/2000 9:09:45 AM US Mountain Standard Time,
powers_w@FRONTIER.NET writes:

<< Reorganization is likely to happen when errors are large and physical
preparation is fairly extreme as well as chronic. So emotion and
reorganization would tend to be linked strongly only at the extremes of
error. This implies that the strongest emotions, and the most likely
occasions for reorganization, would be in cases where the action fails to
correct the error for any of several reasons. The action itself could be
blocked in internal conflict. The action might be ineffective even if
taken. The physical situation might make action more dangerous than
inaction. And of course there could be a lot of more detailed reasons for
failure to correct error, such as Mark Lazare points out. >>

Tom: << I have a hard time thinking of error signals as emotions. There
are too many of the former, and too few of the latter. >>

Mark:
I agree with you (both), I do. When I am speaking of Reorganization in
counseling and in regard to a LCS, it is a rare event. Also, it is most
noticeable in the higher levels of Perception.

Reorganization takes place when coping skills fail and life styles interfere
with the quality of life and so on. A persistent nagging feeling of anxiety
and/or depression.

Most of the time in our life we are meeting our daily needs and wants. There
is really just small corrections being made, but no need to Reorganize the
hierarchy.

What I am getting at is, Reorganization is needed to change or Create System
Concepts, Principles, and Programs. To a much lesser extent Category and
Relationship levels.

Any changes to levels at or below the relationship level are most likely
behaviors that came to be or are extinguished because of the higher levels
that have already been Reorganized.

I do think Reorganization happens at the lowest levels and on up the
hierarchy, but only during the developmental years, as a newborn through
adolescence.

Really I think I am on shaky ground when I talk about Reorganization
happening for adults at levels lower than the top 3; System Concepts,
Principles, and Programs.

Mark:
> On the assumption that Gain is Lower as you go up the hierarchy on the side
> of Reorganization and larger resistors as you go up the Hierarchy

Tom: This part is not clear to me. Can you clarify it? >>

This is just what I observed -- for example, if Reorganization occurs on the
level of Programs, it may take a few or several things to go wrong over
several weeks or months. But to establish and/or change Principles may take
years of less than adequate control, and many partial failures and partial
successes. But to change System Concepts is by far the lengthiest in time
and needs more error, at even greater degrees of failure, than any other level.

I was thinking it is like the resistors in the stereos my friends and I would
build. I may have chosen the wrong word; "resistor" might better be
"capacitor." The point being, you need bigger and more error for a longer
amount of time as you go up the hierarchy before Reorganization occurs. When
it does happen it is called an epiphany or CRISIS.

Mark

[From Bruce gregory (2000.0808.1308)]

Bill Powers (2000.08.08.0923 MDT)

Doing anything that requires a lot of effort entails large error
signals, because error signals are all that drive actions.

I'm flabbergasted. As long as control is maintained, why should
error signals be large? Clearly I fail to understand something very
fundamental. When I pedal my bike, I assumed that my actions were
driven by changing reference levels rather than by large errors.

BG

In a message dated 8/8/2000 10:09:09 AM US Mountain Standard Time,
bgregory@CFA.HARVARD.EDU writes:

<< I assumed that my actions were driven by changing reference levels rather
than by large errors. Clearly I fail to understand something very
fundamental. >>

Yes, you do.

IF your actions were driven by changing reference levels, your bike ride
would be very unpleasant and, most likely, very painful.

Wishing you a speedy reorganization,
Mark

[From Bruce Gregory (2000.0808.1620)]

8/8/00 5:02:44 PM, "Lazare, Mark Crisis counselor, Phoenix AZ"

IF your actions were driven by changing reference levels, your bike ride
would be very unpleasant and, most likely, very painful.

Wishing you a speedy reorganization,

We have a very different understanding of control systems. I'd
prefer that my organization for riding a bicycle remain just as it
is. It provides all the control I want or need.

BG

···

<DTSDTO@aol.com> wrote:

[From Rick Marken (2000.08.08.2100)]

I think it's highly likely that the economy is a collective
control system because it is the result of the operation of a
large collection of individual control systems.

Bruce Gregory (2000.0808.1652)

Is there any basis for your conviction?

Sure. I mentioned one; the individuals who make up the economy are
controllers themselves. So, for example, individual managers are
clearly controlling for keeping inventories low; this means that
the overall inventory level in the economy will be controlled
(maintained near zero). Business owners are also controlling for
keeping their income matching their outflow (which includes the
amount they pay themselves as profit); this means that the overall
difference between production outflow (GNP) and consumption inflow
(expenditures, E) will be zero.

Of course, the most basic evidence that the economy is a control
system is that nearly everyone in the economy is obviously
controlling one very important variable: money. People have
different references for the amount of money they want (and
different abilities to make the money they get match their
reference for the amount of money they want). But virtually
everyone I know -- even my brother -- will act in whatever way
they can (they produce) in order to get the amount of money they
want (in order to consume) and they act to protect the money
they have from disturbances (like theft or swindling): that is,
they control for having money. At the aggregate level we see
this as a population of people acting (producing Q) to produce
money for themselves (GNP) that can be used to consume (E)
what they produced.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@mindreadings.com
mindreadings.com

[From Bill Powers (2000.08.09.0857 MDT)]

Bruce gregory (2000.0808.1308)--

Doing anything that requires a lot of effort entails large error
signals, because error signals are all that drive actions.

I'm flabbergasted. As long as control is maintained, why should
error signals be large? Clearly I fail to understand something very
fundamental. When I pedal my bike, I assumed that my actions were
driven by changing reference levels rather than by large errors.

The maximum error signal in a normally operating control system is, by
definition, the amount of signal required to produce the maximum possible
effort by the lower systems. If the output function has a very high gain,
this can still be only a small amount of error in comparison with the size
of the reference signal. But it's still not zero error. Some control
systems use an integrating output function, which _in the steady state_ can
produce exactly enough output to make the error literally zero. But as soon
as a disturbance occurs or the reference signal changes, the error is once again
nonzero.

An average control system, I would guess, allows an error of something like
10% of the magnitude of the reference signal when it is producing the
maximum possible output (that's about the amount of error we see in a
tracking task of very high difficulty -- if the error gets greater the
person usually gives up). This is why it would be hard to thread a needle
when both wrists were supporting 50-pound weights.

Your idea that normal movements are produced mostly by changing reference
signals is probably right. But normal conditions involve forces that are
only a small fraction of the maximum force possible, so the error signals
required to produce them are also relatively small.
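
A toy comparison may make the distinction concrete. In the sketch below, both
loops face the same constant disturbance; the proportional output function
settles with an error on the order of ten percent of the reference, while the
integrating output function drives the error essentially to zero. The plant,
the gain of 10, and the disturbance are arbitrary choices for illustration.

/* Proportional vs. integrating output functions facing the same constant
   disturbance. All numbers are illustrative, not from any experiment. */

#include <stdio.h>

int main(void)
{
    double r = 100.0, d = -30.0;      /* reference and constant disturbance */
    double dt = 0.01;
    double p, e, out;
    int i;

    /* proportional output: o = G*e; steady-state error = (r - d)/(1 + G) */
    p = 0.0;
    for (i = 0; i < 5000; ++i)
    {
        e = r - p;
        out = 10.0 * e;                      /* gain of 10                 */
        p += ((out + d) - p) * dt;           /* simple lag in the loop     */
    }
    printf("proportional output: error settles at %.2f (%.1f%% of r)\n",
           e, 100.0 * e / r);

    /* integrating output: output keeps growing as long as any error remains */
    p = 0.0;
    out = 0.0;
    for (i = 0; i < 5000; ++i)
    {
        e = r - p;
        out += 10.0 * e * dt;
        p += ((out + d) - p) * dt;
    }
    printf("integrating output:  error settles at %.2f\n", e);
    return 0;
}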

Best,

Bill P.

[From Bill Powers (2000.08.09.0908 MDT)]

Mark Lazare (2000.08.09)--

IF your actions were driven by changing reference levels, your bike ride
would be very unpleasant and, most likely, very painful.

Huh? To push down on a pedal, you raise the reference signal for the system
that controls sensed pedal pressure. To turn left, you set the reference
signal for the bank angle of the bicycle to a small leftward lean, which is
produced by setting the reference signal for the wheel angle so the wheel
cocks, for a moment, to the right ... and so on. It's reference signals all
the way down.

Best,

Bill P.

[From Erling Jorgensen (2000.08.09.1230 CDT)]

Mark Lazare (Tue, 8 Aug 2000 16:02:44 EDT)

Bruce Gregory (8/8/2000 10:09:09 AM )

<< [When I pedal my bike,] I assumed that my actions were driven
by changing reference levels rather than by large errors. >>

IF your actions were driven by changing reference levels, your
bike ride would be very unpleasant and, most likely, very painful.

I'm not sure I agree. The task that is called "pursuit tracking"
involves keeping a cursor aligned with a moving target (admittedly
moving due to disturbances). But I think it provides an image
for considering the variety of perceptions tracking their references
in the hierarchy.

I think of our current level of focus as a _compensatory tracking_
task. We are trying to stabilize a perceptual variable against
whatever is disturbing it from our environment. But we implement
it, in my way of thinking, through various _pursuit tracking_ tasks.
That is, we smoothly vary the references for those lower levels,
and they just as smoothly (without much pain or unpleasantness)
make their respective perceptions track the required references.
It is *as if* disturbances were being added to the references at
each lower level -- which is no real problem for a well-tooled
control system. Pursuit tracking is as easy as compensatory
tracking.
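
A small simulation makes the same point. In the sketch below the identical
loop is run twice: once holding a fixed reference against a slowly varying
disturbance (compensatory), and once following a slowly varying reference
with no disturbance (pursuit). The loop, the sine wave, and all the constants
are invented for the illustration; the mean error comes out essentially the
same in the two cases.

/* One control loop, two tasks: compensatory (fixed reference, moving
   disturbance) and pursuit (moving reference, no disturbance). All
   constants are arbitrary illustrative choices. Compile: cc track.c -lm */

#include <stdio.h>
#include <math.h>

static double run(int pursuit)
{
    double p = 0.0, out = 0.0, sum = 0.0;
    double dt = 0.01;
    int step, n = 2000;

    for (step = 0; step < n; ++step)
    {
        double t    = step * dt;
        double wave = 10.0 * sin(0.5 * t);
        double r    = pursuit ? wave : 0.0;    /* moving or fixed reference   */
        double d    = pursuit ? 0.0  : wave;   /* fixed or moving disturbance */
        double e    = r - p;

        out += 50.0 * e * dt;                  /* integrating output          */
        p   += ((out + d) - p) * 5.0 * dt;     /* environment: a fast lag     */
        sum += fabs(e);
    }
    return sum / n;                            /* mean absolute error         */
}

int main(void)
{
    printf("compensatory tracking: mean |error| = %.3f\n", run(0));
    printf("pursuit tracking:      mean |error| = %.3f\n", run(1));
    return 0;
}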

So in Bruce's example, "getting to the store" is a stable reference,
needing compensation against disturbances like: "I'm not there
now", "the car's not available", "it's not too far by bicycle", etc.
The sequences of getting, climbing on, and pedaling the bicycle,
etc., which implement that higher level goal, are various pursuit
tracking tasks, involving smoothly varying reference signals.
Now, if he gets there and there's no milk available (a plausible
higher level goal), that then is seen as the stable (compensatory
tracking) goal, for which "trying different stores" becomes the
varying (pursuit tracking) means.

All the best,
        Erling

[From Bruce Gregory (2000.0809.1447)]

Erling Jorgensen (2000.08.09.1230 CDT)

So in Bruce's example, "getting to the store" is a stable reference,
needing compensation against disturbances like: "I'm not there
now", "the car's not available", "it's not too far by bicycle", etc.
The sequences of getting, climbing on, and pedaling the bicycle,
etc., which implement that higher level goal, are various pursuit
tracking tasks, involving smoothly varying reference signals.
Now, if he gets there and there's no milk available (a plausible
higher level goal), that then is seen as the stable (compensatory
tracking) goal, for which "trying different stores" becomes the
varying (pursuit tracking) means.

Yes, that's a good way to describe it.

BG

[From Bruce Gregory (2000.0809.1449)]

Bill Powers (2000.08.09.0857 MDT)

An average control system, I would guess, allows an error of something like
10% of the magnitude of the reference signal when it is producing the
maximum possible output (that's about the amount of error we see in a
tracking task of very high difficulty -- if the error gets greater the
person usually gives up). This is why it would be hard to thread a needle
when both wrists were supporting 50-pound weights.

Your idea that normal movements are produced mostly by changing reference
signals is probably right. But normal conditions involve forces that are
only a small fraction of the maximum force possible, so the error signals
required to produce them are also relatively small.

Thanks for the clarification.

BG