# Bomb in the Hierarchy Simulation (was Back in control)

[Martin Taylor 2005.03.04.20.19]

[From Bill Powers (2005.02.26,1201 MST)]

The question remains, how would reversing the sign of feedback at one level have the effect of reversing the sign of feedback at a higher level? I'm sure there are cases where this would happen, but I can't think of any just now. If a lower-level system runs away, it would do so by making the perceptual signal avoid the reference setting, but that could go in either direction; p < r and p getting smaller, or p > r and p getting larger. One of those cases would make p change the right way for the next system up to retain negative feedback.

Actually, I don't think this is correctly analyzed. Rick's simulation seems to show what actually happens. It's a kind of _intrinsic_ positive feedback that we haven't discussed before and that I never thought through until now. I hope I've thought it through properly now.

Positive feedback means that any small error is increased through the action of the output affecting the perceptual signal so as to move the perceptual signal further from its reference value. We have hitherto talked about a simple loop, in which the output has a defined effect on the perceptual signal.

Now we are talking about a loop in which one element is itself a feedback loop. To talk about it, I need some symbols. Call the higher level perception, reference, error and output P2, R2, E2, and O2 respectively. Let O2 be the reference signal for several lower level loops, so O2 -> R11 = R12 = ... = R1n. Then we have the obvious O11 ... O1n, P11 ... P1n, and E11 ... E1n as the remaining required symbols.

If Loop 11 is a negative feedback loop, then to a first approximation P11 = R11 (and likewise for the other lower loops). P2 = f(P11, ... P1n), and is directly influenced by O2.

If Loop 11 is in positive feedback, there's a problem. Start with Loop 11 being in equilibrium (E11 = 0) and (we assume) no disturbance. Until R11 changes, P11 will maintain its value, and P2 will, as well (assuming no disturbances affect P12 ... P1n).

Now let there be a small disturbance affecting P1k. This will affect P2, and change O2, changing R11 ... R1n. If Loop 11 is in positive feedback, P11 will then go into an exponential runaway (without limit in a linear system), which is likely to affect P2 to an exponentially increasing degree. O2 will need to change to compensate for this runaway.

The only way that O2 could halt the runaway in Loop 11 (and thereby halt its own runaway) would be if it could at some point adjust R11 to match the accelerating value of P11 while at the same time bringing its own error to zero through a fortuitous combination of values of P12 ... P1n.

So without computing a loop gain, it seems that reasonable kinds of function P2 = f(P11, ... P1n) are likely to lead Loop 2 into a runaway if P11 goes into a runaway state.

What happens now if we reverse the sign of the output O2? In the ordinary case, if one sign of the output function leads to positive feedback, the other sign should lead to negative feedback. But it doesn't happen in this case. The runaway happens for either sign of the output function. Whatever fluctuation there is in the output creates an equivalent fluctuation in the reference signal at R11, and causes the perceptual signal P11 to diverge exponentially. If the function P2 = f(P11 ... P1n) allows it, P2 will go into exponential runaway, increasing the error E2, increasing the output O2, usually enhancing the virulence of the exponential runaway of P11.

That's what Rick found to happen in the spreadsheet, I think. He was surprised that changing the sign of the output didn't change the positive feedback to negative. Maybe the above is an appropriate explanation.
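Martin's two-level argument can be checked with a small numerical sketch. Everything here is a hypothetical stand-in (the integrating outputs, the gains, and the choice of P2 as a simple sum of P11 ... P1n are my assumptions, and this is not Rick's spreadsheet model), but it reproduces the claimed behavior: once Loop 11's environmental feedback is reversed, the hierarchy runs away for either sign of the Level-2 output gain.

```python
# Toy two-level hierarchy: n lower loops share the higher output O2 as
# their common reference R11..R1n, and P2 is the sum of the lower
# perceptions. g0 is the environmental feedback sign of Loop 11;
# g0 = -1 turns Loop 11 into a positive-feedback ("runaway") loop.

def run(n=3, k1=5.0, k2=1.0, g0=1.0, r2=10.0, d0=0.1,
        dt=0.01, steps=2000):
    """Simulate the hierarchy; return the final higher perception P2."""
    o1 = [0.0] * n               # lower-level (integrating) outputs
    o2 = 0.0                     # higher-level output = shared reference
    g = [g0] + [1.0] * (n - 1)   # only Loop 11's feedback sign varies
    p2 = 0.0
    for _ in range(steps):
        p1 = [g[i] * o1[i] for i in range(n)]
        p1[0] += d0              # small constant disturbance on P11
        p2 = sum(p1)             # P2 = f(P11 ... P1n), here a plain sum
        e2 = r2 - p2
        o2 += k2 * e2 * dt       # higher loop integrates its error
        for i in range(n):
            o1[i] += k1 * (o2 - p1[i]) * dt   # R1i = O2 for every lower loop
    return p2

print(run(g0=+1.0, k2=+1.0))   # all-negative feedback: P2 settles near R2
print(run(g0=-1.0, k2=+1.0))   # Loop 11 reversed: exponential runaway
print(run(g0=-1.0, k2=-1.0))   # reversing O2's sign does NOT stop the runaway
```

The lower loops are made much faster than the higher one so that the all-negative-feedback case is stable under simple Euler integration; the point is only the qualitative contrast between the three runs.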

Now back to the dog-work!

Martin

[From Bill Powers (2005.03.05.0601 MST)]

Martin Taylor 2005.03.04.20.19 –

Actually, I don't think this is correctly analyzed. Rick's simulation seems to show what actually happens. It's a kind of _intrinsic_ positive feedback that we haven't discussed before and that I never thought through until now. I hope I've thought it through properly now.

What positive feedback does is to make the input variable change in a direction that takes it away from the reference level rather than toward it. Moreover, the change continually increases even if the reference signal remains constant (I take it we're not interested in situations where the loop gain is less than 1).
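As a minimal illustration of the paragraph above (a toy integrator with assumed gain and reference values, not any model from the thread): the same loop, with only the sign of the environmental feedback changed, either converges on its reference or runs away from it.

```python
# One control loop with an integrating output. The environment maps
# output to perception as p = sign * o; sign = +1 gives negative
# feedback, sign = -1 gives positive feedback (loop gain > 1).

def one_loop(sign, k=5.0, r=1.0, dt=0.01, steps=1000):
    o = 0.0
    p = 0.0
    for _ in range(steps):
        p = sign * o              # environmental feedback function
        o += k * (r - p) * dt     # output integrates the error r - p
    return p

print(one_loop(+1))   # perception converges toward the reference r = 1
print(one_loop(-1))   # perception diverges from r without limit
```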

From the standpoint of a higher system, this means that in order to cause a positive change in the higher-order controlled variable, the reference signal must be changed so as to be more negative than the perceptual signal, assuming a positive contribution of the lower perceptual signal to the higher one. If the lower perceptual signal followed the reference signal, the output of the higher system would have to go more positive to cause a positive change in the perceptual signal of the lower system, and thus a positive change in the perceptual signal of the higher system. So in the vicinity of zero error, the relationship between a change in output and a change in the perceptual signal, for the higher system, has reversed because of the reversal in the lower system.

Now let there be a small disturbance affecting P1k. This will affect P2, and change O2, changing R11 ... R1n. If Loop 11 is in positive feedback, P11 will then go into an exponential runaway (without limit in a linear system), which is likely to affect P2 to an exponentially increasing degree. O2 will need to change to compensate for this runaway.

The only way that O2 could halt the runaway in Loop 11 (and thereby halt its own runaway) would be if it could at some point adjust R11 to match the accelerating value of P11 while at the same time bringing its own error to zero through a fortuitous combination of values of P12 ... P1n.

So without computing a loop gain, it seems that reasonable kinds of function P2 = f(P11, ... P1n) are likely to lead Loop 2 into a runaway if P11 goes into a runaway state.

Both your analysis and mine are only rough approximations because they ignore dynamics (while assuming that runaway effects can't be instantaneous). But they do show that the higher system's operation is disrupted because of the lower system's positive feedback. This is caused by the effective reversal of the normal effect of the higher system's action on its own perceptual signal when the error is near zero.

As Rick and I have shown, however, reversal of the sign of feedback effects in a lower system does not necessarily cause a runaway in higher systems (in Homo sapiens, anyway). What happens is that the higher systems very quickly (after a measurable delay during which an exponential runaway can be seen to start) reverse the sign in the output function to restore negative feedback. This was our first experimental sighting of hierarchical control by variation of parameters rather than variations in reference settings (although the latter, of course, continued).

If you try the experiment (Rick probably has it running on the Web, or if he doesn't I'm sure it soon will be running there), you'll see that the runaway condition is very quickly detected -- the reversal of the relationship between direction of mouse movement and direction of cursor movement. As soon as it is realized that this external relationship has reversed, the subject reverses the relationship between visual error and direction of mouse movement, and control returns to normal.
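A hedged sketch of that recovery mechanism (my own heuristic detector and parameters, not the actual model Rick and Bill fit to the data): a controller that notices its error growing steadily despite its own action, and responds by reversing the sign of its output function, restoring negative feedback after a brief runaway.

```python
# One loop whose environment reverses polarity partway through the run
# (the "bomb"). The controller watches its own error magnitude; if the
# error keeps growing for many consecutive samples, it flips the sign
# of its output function to restore negative feedback.

def adaptive_loop(k=5.0, r=1.0, dt=0.01, steps=3000, flip_at=1500):
    o, out_sign, env_sign = 0.0, 1.0, 1.0
    prev_abs_e = 0.0
    grow = 0                     # consecutive samples of growing error
    p = 0.0
    for t in range(steps):
        if t == flip_at:
            env_sign = -1.0      # environmental feedback reverses
        p = env_sign * o
        e = r - p
        # crude runaway detector: |e| increasing for many steps in a row
        grow = grow + 1 if abs(e) > prev_abs_e + 1e-9 else 0
        prev_abs_e = abs(e)
        if grow > 50:            # sustained growth -> reverse output sign
            out_sign, grow = -out_sign, 0
        o += out_sign * k * e * dt
    return p

print(adaptive_loop())   # control recovers after the environmental reversal
```

The detector here is deliberately crude -- fifty consecutive samples of growing error magnitude -- standing in for whatever mechanism actually monitors and reverses the connection; the point is only that a brief, bounded runaway followed by recovery falls out of this kind of parameter switching.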

This is nature’s answer to the bomb in the hierarchy: don’t let positive
feedback exist for very long.

About 40 years ago, Wayne Hershberger did a clever experiment with chicks for his PhD. He arranged a linkage between a chick and a food dish such that whether the chick moved toward or away from the food dish, the food dish moved in the same direction twice as fast. So the only way to get fed was to move away from the dish, thus bringing it closer.

The chicks had great trouble with this reversal and some never learned to circumvent it: they would run toward the food every time, and fail to eat. But some discovered ways around it. In at least one case, the chick learned to look straight up in the air, only glancing occasionally at the dish, so the visual loop was broken most of the time. But no chick, apparently, had the ability simply to reverse the direction of action in the system for controlling the perceived distance to the food dish. That connection, seemingly, was hard-wired.

On the principle that evolution stops when the result is just barely good enough to correct the intrinsic error, we can assume that for any organism, there is some level of organization at which the bomb can exist and there is no higher system that can reverse the sign of the input, comparator, or output function. But I think it's also clear that for most systems, simple reversals that occur naturally in the environment must quickly be handled by corresponding reversals inside the controlling organism. Otherwise organisms would be falling down in convulsions all the time. If there is a mechanism for detecting runaway, and reversing a connection in the system that is running away, then positive feedback simply can't occur in that system for long enough to matter.

Best,

Bill P.

[From Rick Marken (2005.03.05.0835)]

Bill Powers (2005.03.05.0601 MST)--

As Rick and I have shown, however, reversal of the sign of feedback effects in a lower system does not necessarily cause a runaway in higher systems (in Homo sapiens, anyway). What happens is that the higher systems very quickly (after a measurable delay during which an exponential runaway can be seen to start) reverse the sign in the output function to restore negative feedback. This was our first experimental sighting of hierarchical control by variation of parameters rather than variations in reference settings (although the latter, of course, continued).

If you try the experiment (Rick probably has it running on the Web, or if he doesn't I'm sure it soon will be running there), you'll see that the runaway condition is very quickly detected -- the reversal of the relationship between direction of mouse movement and direction of cursor movement. As soon as it is realized that this external relationship has reversed, the subject reverses the relationship between visual error and direction of mouse movement, and control returns to normal.

Yes, the demo (along with a control model that does _not_ recover from the positive feedback created by the polarity reversal) is at http://www.mindreadings.com/ControlDemo/Levels.html

Best

Rick


---
Richard S. Marken
Home 310 474-0313
Cell 310 729-1400

[Martin Taylor 2005.03.05.13.51]

[From Bill Powers (2005.03.05.0601 MST)]

Both your analysis and mine are only rough approximations because they ignore dynamics (while assuming that runaway effects can't be instantaneous). But they do show that the higher system's operation is disrupted because of the lower system's positive feedback. This is caused by the effective reversal of the normal effect of the higher system's action on its own perceptual signal when the error is near zero.

As Rick and I have shown, however, reversal of the sign of feedback effects in a lower system does not necessarily cause a runaway in higher systems (in Homo sapiens, anyway). What happens is that the higher systems very quickly (after a measurable delay during which an exponential runaway can be seen to start) reverse the sign in the output function to restore negative feedback.

If I remember correctly, the sign that was reversed was that of the low-level loop that was immediately affected by the feedback sign reversal, not that of the upper level loop that fed the reference value to the lower-level one. In other words, it was a reversal in the sign of the output of a feedback loop in an ordinary positive feedback state, not that of the loop in what I called an "intrinsic" positive feedback state.

If I'm right in that memory, what was illustrated was a kind of connection that doesn't exist in the strict HPCT structure, in which a higher-level system switches the functioning of a lower level one.

As soon as it is realized that this external relationship has reversed, the subject reverses the relationship between visual error and direction of mouse movement, and control returns to normal.

Yes, that’s what I thought.

This is nature's answer to the bomb in the hierarchy: don't let positive feedback exist for very long.

It may very well be so, for those parts of the system for which this kind of possible reversal has been encountered and for which a corrective control action has been developed (presumably by reorganization on an individual or an evolutionary time scale).

About 40 years ago, Wayne Hershberger did a clever experiment with chicks for his PhD. He arranged a linkage between a chick and a food dish such that whether the chick moved toward or away from the food dish, the food dish moved in the same direction twice as fast. So the only way to get fed was to move away from the dish, thus bringing it closer.

The chicks had great trouble with this reversal and some never learned to circumvent it: they would run toward the food every time, and fail to eat. But some discovered ways around it. In at least one case, the chick learned to look straight up in the air, only glancing occasionally at the dish, so the visual loop was broken most of the time. But no chick, apparently, had the ability simply to reverse the direction of action in the system for controlling the perceived distance to the food dish. That connection, seemingly, was hard-wired.

Yes. Isn't it also true that chicks with prism lenses that move the visual position of a seed a bit to the right or left don't easily learn to peck at the actual seed, but continue pecking where it "ought to be"? I think Dick Held and Alan(?) Hein are names I associate with those experiments.

On the principle that evolution stops when the result is just barely good enough to correct the intrinsic error, we can assume that for any organism, there is some level of organization at which the bomb can exist and there is no higher system that can reverse the sign of the input, comparator, or output function.

I'd partially go along with that. Partially, because I suspect that the corrective possibilities are more modular than "by level". And that they are learned (on an evolutionary or an individual time scale) because a Bomb has rendered the non-correcting system inoperable, inducing reorganization until the problem is fixed. I'd take the reduction in the frequency of temper tantrums as people mature to be a suggestive indication that bomb possibilities do have a chance to get fixed during the life of an individual. Of course, the same kind of evidence suggests that not everybody has been as successful in fixing them as some people are!

If my verbal analysis of yesterday has any merit, the problem can't be fixed (or at least not easily) by altering perceptual functions, weights, or error sign at the higher level. One way (perhaps the only way) of fixing it is to do as in Rick's experiment -- control whose output affects the operation of other (lower level) control systems.

That's a somewhat radical modification to the HPCT concept, but it or something equivalent may be a required modification.

But I think it's also clear that for most systems, simple reversals that occur naturally in the environment must quickly be handled by corresponding reversals inside the controlling organism. Otherwise organisms would be falling down in convulsions all the time.

Exactly! That's the evolutionary point, and possibly also the point within individuals, at higher levels where individual experiences differ sufficiently.

If there is a mechanism for detecting runaway, and reversing a connection in the system that is running away, then positive feedback simply can't occur in that system for long enough to matter.

Right. That is the technical solution. The question is whether that kind of switching is inherently built in to ALL connections between levels in the presumed HPCT hierarchy, or whether it's one option that reorganization has available for its exploration of the space of possible solutions for complex control systems that don't serve well to keep intrinsic variables where they ought to be.

I think this whole area needs to be analyzed more deeply, given the implications.

Martin

[From Bill Powers (2005.03.06.0707 MST)]

Rick Marken (2005.03.05.0835)–

Yes, the demo (along with a control model that does not recover from the positive feedback created by the polarity reversal) is at

Very nice. It worked right the first time, and the model's exponential runaway plotted right on top of mine.

Best,

Bill P.

[From Bill Powers (2005.03.06.0705)]

Martin Taylor 2005.03.05.13.51–

If I remember correctly, the sign that was reversed was that of the low-level loop that was immediately affected by the feedback sign reversal, not that of the upper level loop that fed the reference value to the lower-level one. In other words, it was a reversal in the sign of the output of a feedback loop in an ordinary positive feedback state, not that of the loop in what I called an "intrinsic" positive feedback state.

I'm not sure that the "intrinsic positive feedback" state is really an example of positive feedback. When the reference signal sent to the lower system is more negative than that system's perceptual signal, the perceptual signal increases (making the lower system's error worse). When the reference signal is more positive than the perceptual signal in the lower system, the lower system makes the perceptual signal go negative, making the error worse again. So the sign of the feedback for the higher system depends on the magnitude of its output signal, which is not the case for any real positive feedback system. For either sign of output in the higher system, the feedback can appear to be either positive or negative, depending on the relative magnitudes of the perceptual and reference signals in the lower system.

Even when the sign of the feedback appears correct for negative feedback in the higher system, it's not really negative feedback, because as the perception in the higher system approaches zero, the action of the higher system declines toward zero but the change in the perceptual signal doesn't slow down -- in fact it continues to increase, because the lower system is running away, reversing the sign of the error, which then just gets larger.

What we have here is a mixture of positive feedback, negative feedback, and overwhelming disturbances. It's hard to call that mixture any sort of feedback.

If I'm right in that memory, what was illustrated was a kind of connection that doesn't exist in the strict HPCT structure, in which a higher-level system switches the functioning of a lower level one.

Not in B:CP, but it's been discussed often enough on CSGnet. It's always been in the background as something to be explored, but I ran into unexpected difficulties with explaining how the basic hierarchy worked and never did much modeling with changeable parameters.

This is nature's answer to the bomb in the hierarchy: don't let positive feedback exist for very long.

It may very well be so, for those parts of the system for which this kind of possible reversal has been encountered and for which a corrective control action has been developed (presumably by reorganization on an individual or an evolutionary time scale).

One thing to remember is that positive and negative weightings are a natural property of synaptic connections; either kind can develop (see, in fact, Science (25 Feb 2005), which has an article by Hussein and Sheng, "Control of excitatory and inhibitory synapse formation by neuroligins", in which it is concluded that "neuroligins control the formation and functional balance of excitatory and inhibitory synapses in hippocampal neurons". Of course they mean "affect," not "control", but presumably there is some control system in the background that uses neuroligins, whatever they are, to control the sign of synapses.) It's no great trick to switch signs.

Yes. Isn't it also true that chicks with prism lenses that move the visual position of a seed a bit to the right or left don't easily learn to peck at the actual seed, but continue pecking where it "ought to be"? I think Dick Held and Alan(?) Hein are names I associate with those experiments.

Yes, I've seen those results too. The capacity to reorganize seems to be variable across species, and within single organisms too.

corrective possibilities are more modular than "by level". And that they are learned (on an evolutionary or an individual time scale) because a Bomb has rendered the non-correcting system inoperable, inducing reorganization until the problem is fixed. I'd take the reduction in the frequency of temper tantrums as people mature to be a suggestive indication that bomb possibilities do have a chance to get fixed during the life of an individual. Of course, the same kind of evidence suggests that not everybody has been as successful in fixing them as some people are!

Yes, I see what you mean, and who. I think that "bombiness" may be more like a common transient phase of development than an inherent problem -- it's just the luck of random reorganization that creates it, and eventually cures it.

I think this whole area needs to be analyzed more deeply, given the implications.

Yes. Next generation.

Best,

Bill P.