Back in control

[From Rick Marken (2005.02.17.1030)]

I tried updating the new CSGSite to see if I could do it and, sure enough, I
could! So I put a link to Martin Taylor's site and Dick Robinson's site up
at:

http://www.perceptualcontroltheory.org/links.html

I'll put up information about the meeting (like cost and all that stuff)
once I get it, but I hope people write to Bruce Abbott (babbott@mfire.com)
ASAP and tell him whether or not they will be attending CSG 2005.

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[Martin Taylor 2005.02.17.2354]

[From Rick Marken (2005.02.17.1030)]

I tried updating the new CSGSite to see if I could do it and, sure enough, I
could! So I put a link to Martin Taylor's site and Dick Robinson's site up
at:

http://www.perceptualcontroltheory.org/links.html

Actually you didn't put a link to my site. You put a link to the presentation Bill liked. If you could, I'd prefer it if you put the link to

<http://www.mmtaylor.net/PCT/index.html>

somewhere, I guess among all the other personal sites. It contains four such reports, though at different stages of development. You could say as a cut-line that my site tends to deal with the interactions of multiple control systems in different ways.

Also, when you didn't have control of the CSG site, you recommended that a link should be put to the ECACS site (Explorations of Complex Adapting Control Systems) <http://www.ecacs.net>. It hasn't happened yet.

There's no rush, but I think it's nice to be able to get to and from the various Web sites that talk about PCT.

While I'm on the topic, I think it would be a neat idea to set up a Wiki devoted to PCT. I've recently used MediaWiki (the one that runs Wikipedia <http://en.wikipedia.org/wiki/Main_Page>) to set up three Wikis on Unix systems (including my home Mac). It takes something under half an hour to set up if you have access to the full shell, though filling all the possible articles could take many lifetimes. But having stuff developed through a Wiki would prevent things from getting lost and forgotten in a way that simple archives don't. If the CSG Web site is hosted on a Unix machine, it would be dead simple to set up. A Wiki could even take over much of the function of the mailing list, too, though it's not best suited for the rapid interchange you get in the mailing list.

Martin

[From Rick Marken (2005.02.17.2140)]

Martin Taylor (2005.02.17.2354) --

Actually you didn't put a link to my site. You put a link to the presentation Bill liked. If you could, I'd prefer it if you put the link to

<http://www.mmtaylor.net/PCT/index.html>

somewhere, I guess among all the other personal sites.

As one of my favorite Jews once said "it is accomplished".

Also, when you didn't have control of the CSG site, you recommended that a link should be put to the ECACS site (Explorations of Complex Adapting Control Systems) <http://www.ecacs.net>.

It hasn't happened yet.

OK. I'll make you a deal. You send me the equations for The Bomb in the hierarchy and I'll put ECACs up at the site.

While I'm on the topic, I think it would be a neat idea to set up a Wiki devoted to PCT. I've recently used MediaWiki (the one that runs Wikipedia <http://en.wikipedia.org/wiki/Main_Page>) to set up three Wikis on Unix systems (including my home Mac). It takes something under half an hour to set up if you have access to the full shell, though filling all the possible articles could take many lifetimes. But having stuff developed through a Wiki would prevent things from getting lost and forgotten in a way that simple archives don't. If the CSG Web site is hosted on a Unix machine, it would be dead simple to set up. A Wiki could even take over much of the function of the mailing list, too, though it's not best suited for the rapid interchange you get in the mailing list.

It sounds good but I'd like to learn more about it before I start messing with the CSG server.

Best

Rick


---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400


[Martin Taylor 2005.02.18.01.21]

[From Rick Marken (2005.02.17.2140)]
OK. I'll make you a deal. You send me the equations for The Bomb in the hierarchy and I'll put ECACs up at the site.

What equations? And why should there be a linkage?

The Bomb is something that exists in a complex environment with many feedback paths connecting output and perception. The strength of the feedback paths can vary, and on some occasions the sign of the overall feedback can change. That control system then goes into a positive feedback mode, and if it is in the feedback path of a higher-level control system, the higher one might also go into a positive feedback mode. It's like a classical avalanche. Maybe it stops after one move, maybe it cascades away up the hierarchy. It's a systemic property, not a property of one control system.

Martin

[Martin Taylor 2005.02.18.01.25]

[From Rick Marken (2005.02.17.2140)]

Martin Taylor (2005.02.17.2354) --

While I'm on the topic, I think it would be a neat idea to set up a Wiki devoted to PCT. I've recently used MediaWiki (the one that runs Wikipedia <http://en.wikipedia.org/wiki/Main_Page>) to set up three Wikis on Unix systems (including my home Mac).

It sounds good but I'd like to learn more about it before I start messing with the CSG server.

Sure, you would have to. You need to know what you are doing before you try something like that, even if it is pretty simple if the server runs MySQL and PHP 4 (not 5, yet -- a few unimportant MediaWiki functions don't work properly in 5). But to see the value a Wiki can have, just play with Wikipedia for a while (and while you are at it, do a search for PCT -- you'll be surprised what you find!).

Martin

[From Bill Powers (2005.02.18.0118 MST)]

Rick Marken (2005.02.17.2140)--

OK. I'll make you a deal. You send me the equations for The Bomb in the hierarchy and I'll put ECACs up at the site.

Maybe I can help a little, though I'm not quite sure this is how to do it. What's needed is an environment with two paths between your action and the controlled result. One path has a sign that's the opposite of the other path. If the net effect of an action produces a net negative feedback, you can control. If the negative feedback path is then lost (say it comes up against a limit), the feedback suddenly becomes net positive, and the effect is like a sign reversal in those experiments we did some years ago. In a simple tracking situation, we'd expect about a half second delay, after which the internal sign reversal would take place and control would be regained. But during that half second, you can see the bomb going off -- the exponential runaway.

If this were a more complex situation, it would take longer to make the correction, if you could figure out how to do it at all. It's not necessary to lose the negative feedback path completely; all that has to happen is that the gain through that path becomes less than the gain through the positive feedback path.

Look at the efforts now going on between Israel and Palestine. The positive feedback path is "You hurt me a little, so I'll hurt you a little more to make you stop." When both sides adopt that policy the result is exponential runaway until a limit is reached. At the same time, different people on both sides are looking for ways to reduce the hurt to the other side. The net result depends on which kind of effort outweighs the other. Successful diplomacy would involve the MOL, trying to get both sides to perceive the effect of the positive feedback loop, and abandon that symmetrical policy which is the real cause of the bloodshed.

I'm not familiar with the Wikis, and am conserving my attention right now for the book. But it seems like a good idea.

Best,

Bill P.

[Martin Taylor 2005.02.18.09.26]

[From Bill Powers (2005.02.18.0118 MST)]

Rick Marken (2005.02.17.2140)--

OK. I'll make you a deal. You send me the equations for The Bomb in the hierarchy and I'll put ECACs up at the site.

Maybe I can help a little, though I'm not quite sure this is how to do it. What's needed is an environment with two paths between your action and the controlled result. One path has a sign that's the opposite of the other path. If the net effect of an action produces a net negative feedback, you can control. If the negative feedback path is then lost (say it comes up against a limit), the feedback suddenly becomes net positive, and the effect is like a sign reversal in those experiments we did some years ago. In a simple tracking situation, we'd expect about a half second delay, after which the internal sign reversal would take place and control would be regained. But during that half second, you can see the bomb going off -- the exponential runaway.

I see that as the trigger for the Bomb. The real explosion happens when that reversal causes a higher-level control system to go into positive feedback, and so on up the chain. Meantime, if the systems are nonlinear, the increased error in a higher-level system causes increased output to the reference inputs of different supporting lower-level systems, which increases their error and could drive some of them into unstable conditions. The Bomb is the indefinitely extended propagation of the initial moment of positive feedback.

It's like the classical sandpile avalanche, in which the trigger is analogous to dropping one sand grain onto the pile: usually there's no propagation, often there's a very short-range effect, but once in a very long while, the effect can propagate a long way and create a really big (metaphoric) explosion (or avalanche in the sandpile). I take a temper tantrum to be one such manifestation of the Bomb.

That's why I said to Rick that the difficulty with setting up equations is that the environment needs to be complex enough not only to support multiple feedback paths, but also to require a control hierarchy of several levels.

Martin

[From Rick Marken (2005.02.18.0830)]

Bill Powers (2005.02.18.0118 MST)

Maybe I can help a little, though I'm not quite sure this is how to do it.
What's needed is an environment with two paths between your action and the
controlled result... If the negative feedback path is then lost (say it
comes up against a limit), the feedback suddenly becomes net positive,
and the effect is like a sign reversal in those experiments we did some
years ago... But during that half second, you can see the bomb going
off -- the exponential runaway.

OK. But this is not really a Bomb in the hierarchy. This is just what
happens to any control loop where the net sign of the external feedback path
can reverse.

In his reply to you, Martin Taylor (2005.02.18.09.26) says

I see that [the runaway described in Bill Powers (2005.02.18.0118 MST)]
as the trigger for the Bomb. The real explosion happens when that
reversal causes a higher-level control system to go into positive
feedback, and so on up the chain.

This is what I would want to see demonstrated in a model. I can imagine
runaway error (as per the runaway after sign reversal) increasing error at
higher levels. But I don't see how the reversal-caused runaway could cause a
higher-level control system to go into positive feedback.

My guess is that "Bomb" is nice way to describe the "conflict-proneness" of
negative feedback control systems. If a control system acts to counter a
disturbance to a variable that is also being controlled by another control
system, the feedback path through the controlled variable does become
positive (for both systems): increases in output lead to _increases_ in
error. If there is more than this to the Bomb I would love to see it
demonstrated in some kind of working model.

Best regards

Rick


[From Bill Powers (2005.02.18.1012 MST)]

Rick Marken (2005.02.18.0830) --

My guess is that "Bomb" is nice way to describe the "conflict-proneness" of
negative feedback control systems. If a control system acts to counter a
disturbance to a variable that is also being controlled by another control
system, the feedback path through the controlled variable does become
positive (for both systems): increases in output lead to _increases_ in
error. If there is more than this to the Bomb I would love to see it
demonstrated in some kind of working model.

I think positive feedback is something different from conflict. You don't need two incompatible goals to have positive feedback. What you do need is a system that is trying to reduce error, but doing so by using an action that actually increases the error.

I don't quite see how all the conditions for Martin's multilevel "explosion" could develop. It would seem that every level involved would somehow have to contain the positive feedback mistake, so triggering one level triggers the next, and so on. But the same general effect can be seen if we start with the highest level involved. I used the example of the Israel-Palestine conflict. This is surely a higher-level process. The conflict itself is a cause of many problems, but there is nothing in conflict that demands a runaway result. Kent McClelland showed how the most likely result would be simply a "virtual control system" that wastes a lot of energy but does come to equilibrium.

The bomb shows up when there is a positive feedback effect in the interaction between two parties. That is, when one system acts to reduce its own error (according to its way of understanding what it must do), the effect is actually to increase its own error, not reduce it. That doesn't happen in ordinary conflict. In ordinary conflict the efforts of each system tend to reduce the error in each system, but the effort required to do this increases according to the loop gains and discrepancy of goals. The error simply refuses to get smaller, but it doesn't get larger. In positive feedback, even a slight degree of conflict (or just a disturbance) will (to use Martin's word) "trigger" a runaway condition that escalates immediately as far as it can.

I think it's sufficient that the bomb exist in the means used by the highest pertinent level to correct its own errors. It's not necessary, in my opinion, that any bomblike condition exist at lower levels, other than the fact that the effort by one system to reduce its own error at the highest level disturbs the other system, which counteracts that effort with a larger one that actually increases the first system's error rather than just preventing it from getting smaller. It's that increase that creates the positive feedback.

When the Israelis counter a terrorist move by destroying the homes of the terrorist's family, the result is to increase the terrorist activity, rather than reducing it as was the intended effect of destroying the homes. Oddly, the Israelis seem to have understood that just in the last week or so, and even said that they understood it, and have stopped destroying the homes. So some negative feedback is showing up, and it may in fact reduce the escalation to the level of an ordinary conflict. Then maybe attempts to resolve the conflict will have a chance.

Best,

Bill P.

[From Rick Marken (2005.02.1430)]

Bill Powers (2005.02.18.1012 MST)--

Rick Marken (2005.02.18.0830) --

My guess is that "Bomb" is nice way to describe the "conflict-proneness" of
negative feedback control systems.

I think positive feedback is something different from conflict. You don't
need two incompatible goals to have positive feedback. What you do need is
a system that is trying to reduce error, but doing so by using an action
that actually increases the error.

Yes. I know. I just can't think of natural examples of "falling into" a
positive feedback regime, as in the case of the polarity change runaway,
which is demonstrated on the net at:

Levels of Control

In that experiment, the positive feedback "Bomb" is created by suddenly
changing (via computer programming) the polarity of the connection between
mouse and cursor. I can't think of many everyday situations where something
like that might occur naturally; that is, where you are controlling a
variable and suddenly (or even gradually) your actions have something close
to the opposite effect on the variable.
  

The bomb shows up when there is a positive feedback effect in the
interaction between two parties. That is, when one system acts to reduce
its own error (according to its way of understanding what it must do), the
effect is actually to increase its own error, not reduce it. That doesn't
happen in ordinary conflict. In ordinary conflict the efforts of each
system tend to reduce the error in each system, but the effort required to
do this increases according to the loop gains and discrepancy of goals. The
error simply refuses to get smaller, but it doesn't get larger. In positive
feedback, even a slight degree of conflict (or just a disturbance) will (to
use Martin's word) "trigger" a runaway condition that escalates immediately
as far as it can.

I think it would be hard to discriminate this from the response to a sudden
increase in output from one of the parties to a conflict due to a secular
change in that party's goals.

When the Israelis counter a terrorist move by destroying the homes of the
terrorist's family, the result is to increase the terrorist activity,
rather than reducing it as was the intended effect of destroying the homes.

This could also be seen as both sides pushing against a variable in a
conflict. All you are saying is that as the Israelis push harder against
the terrorists, the terrorists push back harder against the Israelis. So,
yes, destroying homes increases terrorist activity. But terrorism also
increases home destroying.

If the "Bomb" is a positive feedback regime like we see in the polarity
reversal demo pointed to above, then I think a better example of it is the
runaway inflation that was apparently produced by Volcker and Greenspan from
about 1975 through 1985. This positive feedback regime apparently resulted
from the fact that increases in the Fed funds rate actually have a small,
_positive_ (rather than the expected negative) effect on inflation rate.

Volcker and Greenspan wanted to fight inflation, bringing it close to zero.
They believed that the Fed funds rate was inversely related to inflation
rate so they increased the Fed rate to compensate for increases in
inflation. This actually increased inflation, leading to more rate hikes and
more inflation. The nearly exponential runaway is clear in the inflation
data. I don't know how sanity was restored but rates did finally come down
(and inflation along with them). But it looks like we may be going into a
"Bomb" cycle again. There was a report of a larger than expected increase in
inflation last month, probably in response to Fed rate increases over the
last few months, and Greenspan seems poised to keep raising rates to stop
the inflationary spiral that he is creating.

Best

Rick


[Martin Taylor 2005.02.18.17.29]

[From Bill Powers (2005.02.18.1012 MST)]

Rick Marken (2005.02.18.0830) --

My guess is that "Bomb" is nice way to describe the "conflict-proneness" of
negative feedback control systems. If a control system acts to counter a
disturbance to a variable that is also being controlled by another control
system, the feedback path through the controlled variable does become
positive (for both systems): increases in output lead to _increases_ in
error. If there is more than this to the Bomb I would love to see it
demonstrated in some kind of working model.

I think positive feedback is something different from conflict. You don't need two incompatible goals to have positive feedback. What you do need is a system that is trying to reduce error, but doing so by using an action that actually increases the error.

I don't quite see how all the conditions for Martin's multilevel "explosion" could develop. It would seem that every level involved would somehow have to contain the positive feedback mistake, so triggering one level triggers the next, and so on.

No, that's not the way I see it (which means that I don't think it's right).

The way I see it is closely tied to the standard notion of reorganization in classical HPCT. The end point of the argument will be that normal reorganization constructs a hierarchy that is critically stable, like the prototypical sandpile and its avalanches. In other words, the hierarchy controls perfectly well under most conditions similar to those it has previously experienced, because that's what reorganization does -- it changes things until control is established.

I'm not sure whether I can get the full argument across in an e-mail message, or whether I should instead do it (when I have more time) in another Web paper. I'm a little loath to do this, because I won't have much time for any subsequent discussion. But here goes nothing :-)

I'm going to start with the vector representation of control and side-effects associated with a single elementary control unit (ECU), as described in <http://www.mmtaylor.net/PCT/Mutuality/side-effects.html>.

Learning by a single elementary control unit involves altering the orientation of the perceptual vector and/or the vector of output effect on the world, as described a few pages later, in <http://www.mmtaylor.net/PCT/Mutuality/learning.vectors.1.html>.

When the two orientations are close, control is very good and side effects are small. When the orientations are perpendicular (which in a high-dimensional world is almost always quite close to being true for random vectors), there is no effect of output on perception, and all the output energy goes into side-effects. If the angle is greater than 90 degrees, the feedback is positive, and the perceptual signal grows until some limiting non-linearity is reached.
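This angle dependence is easy to check numerically. Here is a minimal sketch (not from any PCT software; the integrator gain, angles, and function name are illustrative) in which the environment simply projects an integrated output onto the perceptual axis with gain cos(theta). Just under 90 degrees the error is driven to zero, slowly; just over 90 degrees the loop runs away:

```python
import math

def residual_error(theta_deg, k=0.1, r=1.0, steps=500):
    """Integrating controller; the environment returns cos(theta) of the
    output along the perceptual axis (theta = angle between the output
    vector and the perceptual vector)."""
    o = p = 0.0
    g = math.cos(math.radians(theta_deg))
    for _ in range(steps):
        o += k * (r - p)   # output integrates the error
        p = g * o          # effective loop gain through the environment
    return r - p

e_stable = residual_error(80.0)    # just under 90 deg: control succeeds
e_runaway = residual_error(100.0)  # just over 90 deg: positive feedback
```

The per-step error factor is (1 - k*cos(theta)), so the sign of cos(theta) alone decides between decay and exponential growth.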

These diagrams are not complete descriptions of the ECU, as they omit the reference input and the comparator. In the classic HPCT hierarchy, this doesn't matter, because only the magnitude of the reference signal is compared with the magnitude of the perceptual signal. In other words, the provider of the reference signal does not "know" what effect its value will have on the outer world. Each higher level elementary unit contributes to the reference signal of many lower-level units, some of which control well (output vector nearly aligned with perceptual vector) and some of which control poorly (angle between output vector and perception vector slightly less than 90 degrees).

Each higher level ECU thus has a multiplicity of parallel environmental feedback paths, just as does the ECU that provided the trigger (described in <http://www.mmtaylor.net/PCT/DFS93/DFS93_8.html>). The global loop gain of the higher level depends on the behaviours of the ECUs to which it provides reference signals. Because of prior reorganization, we can assume that all of these do control their perceptions, which means that reorganization is likely to be slow. Those structures won't change much once reorganization has brought them to a condition in which they do control.

What this means is that many of the higher-level ECUs will be on the verge of being unstable. If a previous trigger event has occurred, control will then have been lost and reorganization will have amended the control structure to nullify that particular positive feedback possibility (or the environment changed back so as to hide the positive feedback element before reorganization did its job).

In the sandpile, each falling grain of sand has some probability of dislodging another, the original and the dislodged one may dislodge others, and so the avalanche may grow. Or not. The more roughly the grains are thrown at the pile, the shallower its slope, and the more robust it is against external disturbances. Likewise with the reorganizing hierarchy. The more it has had to reorganize against environmental variation that has exposed different positive feedback links through the environment, the better will the hierarchy be able to maintain control against other environmental variation.
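The sandpile analogy can be made concrete with the standard Bak-Tang-Wiesenfeld toy model. The sketch below (a model of sand, not of a control hierarchy; grid size and drop count are illustrative) drops grains at random on a small grid and records how many topples each drop causes. Most drops cause none; once the pile is near-critical, an occasional drop cascades widely:

```python
import random

def avalanche(grid, n, x, y):
    """Drop one grain at (x, y); topple any cell holding 4+ grains,
    passing one grain to each neighbour (edges leak); return topple count."""
    grid[x][y] += 1
    size = 0
    stack = [(x, y)]
    while stack:
        i, j = stack.pop()
        if grid[i][j] < 4:
            continue          # stale entry; already relaxed
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    stack.append((ni, nj))
        if grid[i][j] >= 4:
            stack.append((i, j))
    return size

random.seed(1)
n = 10
grid = [[0] * n for _ in range(n)]
sizes = [avalanche(grid, n, random.randrange(n), random.randrange(n))
         for _ in range(3000)]
```

The resulting size list is heavy-tailed: mostly zeros, with rare large avalanches, which is the "critically stable" behaviour the analogy leans on.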

One also has to recognize that different parts of the hierarchy are included in each other's environment, and side effects of the actions of one part will often disturb the perceptions of another part. Reorganization will have had an opportunity to shuffle the structure against variations in the implied feedback loops through other parts of the same hierarchy. (This is all without considering the possible occurrence of conflict among parts of the hierarchy).

As one gets older, the likelihood of temper tantrums usually decreases, but when one does occur, it may well be devastating, rather than the childish tempest in a teapot that dissipates in minutes. But always, if my analysis is reasonable, the structure is only marginally stable against unexpected variations in the way one's output affects one's perceptions. If the range of variation changes, so will the stability of the control structure.

If the environment has been too stable in childhood (a coddled child), one is quite hard put to it to control against a wild environment. But if the environment has been too variable in childhood, one may never have had the opportunity to reorganize and control, so may not even be able to learn to work properly in a normally civilized environment (perhaps an orphan tossed around from foster home to institution and back would be an example of this). Generally, however, one will be able to control pretty well in environments that are about as stable as those in which one grew up.

For Rick, the main point is that even if in our diagrams we draw the link between output and perception as a single line, it isn't unitary. One's actions affect one's perceptions by many routes and over many time scales. Over time, the relative effectiveness of these routes may change, and it may sometimes happen that blocking a negative feedback component could shift the angle between perceptual and output vectors from just below 90 degrees to just over 90 degrees. That's when the Bomb trigger goes off, and therefore when a little or a big Bomb might possibly explode.

Just subjectively -- aren't you a bit more likely to snap at someone if they bug you when you are under some other stress?

Martin

[From Bill Powers (2005.02.19.0343 MST)]

Rick Marken (2005.02.1430) --

Yes. I know. I just can't think of natural examples of "falling into" a
positive feedback regime, as in the case of the polarity change runaway,

Set up two paths in the environment, path a and path b. The controlled variable's value is set by

qi = kd*d + ka*qo + kb*qo.

Let ka = 10 and kb = -5. The net effect on qi is (ka + kb)*qo, or 5*qo.

OK, now set up a control system to control qi in this environment. It will operate normally. But now put a constraint in the environment such that the effect on qi via ka limits when ka*qo = L (negative disturbance assumed). In other words,

if ka*qo > L then qi = kd*d + L + kb*qo
else qi = kd*d + ka*qo + kb*qo

Since kb is negative, once ka*qo hits the value L, any further increases in a negative-going disturbance will cause qi to decrease instead of increase, which will cause greater error, which will cause qi to decrease even further, and so on. The system will now have positive feedback for all larger values of the disturbance; it will run away as soon as the disturbance exceeds the critical value.

You should be able to model this fairly easily. I predict oscillations around the value of qi at which ka*qo = L, for the person will reverse sign to keep from running away, but that will make the feedback positive when ka*qo < L, which will cause another sign reversal, and so on.

On second thought, I predict that oscillations will occur but that the person will soon give up trying to control qi.
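These equations can be run directly. A minimal sketch, assuming a pure-integrator controller with reference zero and a slowly ramping negative disturbance (so there is no adaptive sign reversal as a person would attempt -- the run simply exhibits the runaway once ka*qo reaches L; the gain, limit, and ramp values are illustrative):

```python
def run_bomb(d_final, steps=2000, ka=10.0, kb=-5.0, kd=1.0, L=150.0, g=0.1):
    """Two environmental paths from output qo to input qi, with the
    positive (ka) path saturating at L, per the equations above."""
    qo = 0.0
    trace = []
    for t in range(steps):
        d = d_final * (t + 1) / steps       # slowly ramping disturbance
        if ka * qo > L:
            qi = kd * d + L + kb * qo       # ka path limited: net gain < 0
        else:
            qi = kd * d + (ka + kb) * qo    # normal regime: net gain = +5
        qo += g * (0.0 - qi)                # integrating controller, ref = 0
        trace.append(qi)
        if abs(qi) > 1e9:                   # stop once runaway is evident
            break
    return trace

controlled = run_bomb(-60.0)    # limit never reached: qi stays near zero
runaway = run_bomb(-100.0)      # limit reached: exponential runaway
```

With the milder disturbance the controlled variable tracks its reference with a small lag; with the stronger one, qi runs away exponentially as soon as ka*qo crosses L.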

Martin, I think your analysis has passed the point where it can be carried out reliably without a simulation. When I consider the multiple-level effects of a bomb at the lowest level, I don't imagine the explosion propagating upward. Instead, I see a disturbance of higher-level systems occurring, counteracted by shifts in reference signals for other systems that are not involved in the bomb. The necessity for positive feedback at all levels makes it seem unlikely to me that this propagation would happen. Of course I can't support that idea without a simulation, either.

Your concept of the phase-response vector shifting direction so it's just barely on the side of stability is contradicted by measurements of real control systems. What you're talking about is known in (old-fashioned) control engineering circles as "phase margin." As the phase of the effect of an alternating disturbance approaches 180 degrees, the control system shows symptoms of instability by tending to oscillate longer and longer after the disturbance stops. This easily-observed symptom begins as soon as the phase passes 90 degrees lagging. At 90 degrees you see a pure integral response and a smooth asymptotic approach to zero (or minimum) error. When the phase passes 90 degrees, even by a small amount, you see an overshoot, then an over- and undershoot, then damped oscillations lasting longer and longer, until at phase angle of 180 degrees the oscillations are sustained at a constant amplitude forever.

So if natural control systems evolved to just the brink of instability, with a phase margin approaching zero, they would be "ringing" all the time, showing barely-damped oscillations after every transient disturbance. We don't see anything like that. In fact, what we see is pretty close to minimum-time damping, meaning that the system behaves like a slightly leaky integrator with a phase response of very close to 90 degrees -- about as far from instability as it is possible to get.
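The ringing symptom is easy to reproduce in a toy loop. A sketch, assuming an integrating controller whose perception is its own output passed through a transport delay (the gain, delay, and step counts are illustrative): with no added delay the approach is a smooth asymptote, while a few steps of extra lag produce overshoot and damped oscillation, exactly the shrinking-phase-margin symptom described above.

```python
def track(delay, k=0.2, r=1.0, steps=400):
    """Integrating controller whose perception is its own output
    passed through `delay` extra steps of transport lag."""
    o = 0.0
    buf = [0.0] * (delay + 1)   # delay line; buf[0] is the oldest output
    ps = []
    for _ in range(steps):
        p = buf[0]              # perceived value of the controlled variable
        o += k * (r - p)        # integrate the error
        buf = buf[1:] + [o]
        ps.append(p)
    return ps

ps_smooth = track(0)   # ~90 deg loop phase: asymptotic approach, no overshoot
ps_ringing = track(3)  # extra lag: overshoot and damped ringing
```

The delayed loop still converges (the gain is inside the stability bound), but only after visible over- and undershoot, matching the "damped oscillations lasting longer and longer" description.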

Best,

Bill P.

[Martin Taylor 2005.02.19.09.56]

[From Bill Powers (2005.02.19.0343 MST)]

Martin, I think your analysis has passed the point where it can be carried out reliably without a simulation. When I consider the multiple-level effects of a bomb at the lowest level, I don't imagine the explosion propagating upward. Instead, I see a disturbance of higher-level systems occurring, counteracted by shifts in reference signals for other systems that are not involved in the bomb. The necessity for positive feedback at all levels makes it seem unlikely to me that this propagation would happen. Of course I can't support that idea without a simulation, either.

I knew I shouldn't have attempted this in an e-mail. I think our intuitions really aren't that far apart, even though it may seem like it from the words.

The key is the words "all" and "unlikely" in the second last line above.

I can't spend much time on this, but...

Let's restrict this to a linear two-level system with one second-level unit and two first-level units. One of the first-level units (unit 1 in the following) contains a potential bomb of the kind you described to Rick, except that the positive-feedback strand is unmasked by some environmental event that blocks the negative feedback strand, rather than being induced by the negative strand having reached a limiting influence on the perceptual signal.

The second level perception is P2 = a12*P1 + a22*P2 (to make it dynamic, you can call all the letters Laplace transforms, if you want). Let's say the reference level for this perception is zero, to keep things simple. The output is O2 = g2*(-P2). Some part of that becomes a reference value for first-level units 1 and 2. Let's call them R1 = r21*O2 and R2 = r22*O2. If the first-level units are controlling well, then P1 and P2 will approach R1 and R2, and there is no problem.

What happens now, when the loop gain of unit 1 goes positive? For simplicity again, let's assume that it's a simple exponential runaway, rather than a growing oscillation. If the second-level unit is to control under those conditions, it must do so by imposing a countervailing exponential change in the reference R2, meaning that unless somehow r21 and r22 change, it also provides an exponential change to the reference R1. There is a small chance that the exponential change to the reference R1 will be phased so as to reduce the rate of the exponential runaway, but I don't think it could eliminate it, for the same reason that a simple integrator output can't eliminate a static error until after infinite time.

If P1 is increasing exponentially, the question then is whether first-level control unit 2 has sufficient power to keep a22*P2/a12 in balance with P1 (remember we asserted that the second-level reference is zero). There is a chance that it cannot. If that is true, then the second-level system's perceptual signal will be exponentially increasing -- another way of saying that its loop gain will be positive.

In an assemblage of linear systems, the runaway may be inevitable. But in real systems, nonlinearity soon limits the runaway of first-level unit 1, allowing the still-controlling unit 2 to serve as the controlling path for the upper-level unit if it is strong enough. If it is, the second level unit retains control. Eventually, non-linearity limits the runaway at every level, but it's the structure of linkages that determines which units find their limits in which order. Extrapolating to larger hierarchies, it's that ordering that determines the eventual size of the avalanche (or bomb explosion).
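[This two-level scenario can be put into a few lines of simulation. The following is an editorial sketch with toy gains, unit weights (a12 = a22 = r21 = r22 = 1), and saturation limits; the sign flip on the environmental link k1 stands in for the event that unmasks the positive-feedback strand. Unit 1's output runs away to its rail, and the second-level unit retains control through the stronger unit 2.]

```python
# Sketch of the two-level, two-unit system described above. The sign flip
# on k1 stands in for the environmental event that unmasks the positive-
# feedback strand; output saturation stands in for nonlinearity.

def clip(x, lim):
    return max(-lim, min(lim, x))

def run(n=4000, dt=0.01):
    o1 = o2 = O = 0.0
    d1 = 1.0                        # static disturbance on unit 1
    k1 = 1.0                        # environmental feedback sign, unit 1
    for step in range(n):
        if step == n // 2:
            k1 = -1.0               # the "bomb": unit 1's loop gain goes positive
        p1 = k1 * o1 + d1           # first-level perceptions
        p2 = o2
        P = p1 + p2                 # second-level perception, reference 0
        O += 2.0 * (0.0 - P) * dt   # second-level integrating output
        R1 = R2 = O                 # references handed down
        o1 = clip(o1 + 5.0 * (R1 - p1) * dt, 10.0)  # unit 1 saturates at +/-10
        o2 = clip(o2 + 5.0 * (R2 - p2) * dt, 20.0)  # unit 2 is "stronger"
    return P, o1

P_end, o1_end = run()
print(P_end, o1_end)   # second level back in control; unit 1 stuck on its rail
```

[If unit 2's limit is lowered below what the runaway demands of it (here it must reach o2 = -11), unit 2 rails too and the second-level perception runs away as well -- the ordering of limits deciding the size of the avalanche, as in the paragraph above.]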

Your concept of the phase-response vector shifting direction so it's just barely on the side of stability is contradicted by measurements of real control systems. What you're talking about is known in (old-fashioned) control engineering circles as "phase margin." ...
So if natural control systems evolved to just the brink of instability, with a phase margin approaching zero, they would be "ringing" all the time, showing barely-damped oscillations after every transient disturbance. We don't see anything like that. In fact, what we see is pretty close to minimum-time damping, meaning that the system behaves like a slightly leaky integrator with a phase response of very close to 90 degrees -- about as far from instability as it is possible to get.

Yes, that's part of the problem in dealing through the brief mode of e-mail.

If you follow through the logic, you'll see that the "near 90 degrees" situation could be sustained only in an environment without disturbances, and in which no control unit emitted side-effects that disturbed another unit's perceptual signal. The "near 90 degrees" system is a thought-experiment condition, just like the artificial rule-based language I introduced in my CSG talk in '93. It couldn't survive in any real world. But it does serve to illustrate the mechanism.

Such a "near 90 degree" "blank slate" hierarchy would be like a sandpile in which each sand grain was balanced precisely on top of another, making a pile one grain thick and a mile high. The chance that the next grain would make it collapse would be awfully close to unity, and then following grains would add to a sandpile whose slopes depended on the kinds of grains and on the stability of the table.

The analogy is that reorganization would develop control to the point where it actually worked in the real world, as you describe. At that point, it would be covering the kinds of environmental variations and mutual side-effects encountered by the organism for sufficiently long or sufficiently often to allow reorganization to be effective. And it wouldn't alter control to make it much more stable than that.

That's what I mean by saying that the reorganized hierarchy would be critically stable. Most of the time, it would work. Each setup would be of the kind described in the first half of this message, reorganized to the point where "stable" unit 2 would be adequate to counter the effect of "bomb-trigger" unit 1, but not much more than adequate (and of course, unit 1 probably would be reorganized to make the unmasking of the potentially positive feedback strand less likely or less damaging).

Nevertheless, in such a structure, the avalanche or bomb explosion could still exist. The reorganization following could be considered "learning", and if the avalanche went high enough, the "learning" could even be called "conversion", in the manner of St. Paul.

···

------------------

If this topic requires much more from me, it will have to wait for late May, or be mistreated by my messages being rather more terse than the recent ones (which I should never have taken the time to write -- deadlines loom).

Martin

[From Rick Marken (2005.02.19.0935)]

Bill Powers (2005.02.19.0343 MST)--

Rick Marken (2005.02.1430) --

Yes. I know. I just can't think of natural examples of "falling into" a
positive feedback regime, as in the case of the polarity change runaway,

Set up two paths in the environment, path a and path b. The controlled variable's value is set by

qi = kd*d + ka*qo + kb*qo.

Let ka = 10 and kb = -5. The net effect on qi is (ka + kb)*qo or 5*qo.

OK, now set up a control system to control qi in this environment. It will operate normally. But now put a constraint in the environment such that the effect on qi via the ka path is limited when ka*qo = L (negative disturbance assumed).

You should be able to model this fairly easily.
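[It is indeed easy to model. The following is an editorial sketch with arbitrary values for L, the gain, and the disturbances: with a small disturbance the system controls normally, but once ka*qo is driven against the limit L, the only remaining marginal feedback is kb*qo = -5*qo, and the loop runs away.]

```python
# Sketch of the two-path environment qi = kd*d + ka*qo + kb*qo, with the
# ka path clamped at L. L, gain, and disturbance sizes are illustration
# values, not anything specified in the thread.

def run(d, n=600, dt=0.01, gain=5.0, L=20.0):
    kd, ka, kb = 1.0, 10.0, -5.0
    qo = 0.0
    qi = 0.0
    for _ in range(n):
        qi = kd * d + min(ka * qo, L) + kb * qo   # ka path clamps at L
        e = 0.0 - qi                              # reference is zero
        qo += gain * e * dt                       # integrating output
    return qi, qo

qi_small, qo_small = run(d=-5.0)    # ka*qo stays below L: normal control
qi_large, qo_large = run(d=-30.0)   # clamp engages: net feedback turns positive
print(qi_small, qo_large)           # small d is controlled; large d runs away
```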

You mean, as an actual tracking task? Or as the environmental component of a control loop? Either would, indeed, be pretty easy. And I'm sure you would see a brief excursion into positive feedback (until the subject stops trying to control, just as you predict) when the disturbance exceeds L.

But what I was asking for was a _natural example_ of this. Unless you or Martin can point to a common, real world situation where people are "pushed" into a positive feedback regime by a sufficiently large disturbance (as in your example above) I have to consider the "Bomb in the Hierarchy" nothing more than a model in search of a phenomenon.

Best

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2005.02.19.1225 MST)]

Rick Marken (2005.02.19.0935)--

But what I was asking for was a _natural example_ of this. Unless you or Martin can point to a common, real world situation where people are "pushed" into a positive feedback regime by a sufficiently large disturbance (as in your example above) I have to consider the "Bomb in the Hierarchy" nothing more than a model in search of a phenomenon.

OK, so consider. But I think it suggests some familiar situations, ones in which we strive to maintain control, but to do so must override wishes or urges to behave irrationally, against our own interests, stubbornly insisting on doing things that actually make things worse but not yet prepared to admit that they have no hope of making things better. So we hold things together as long as we can, and when something breaks down the control, the positive feedback takes over. I am thinking specifically of the chief instigator of the recent blowups on CSGnet. Nothing human is foreign to me, some wise man said. I can understand how it is to be that way, and to struggle to behave differently, but without removing the urge to self-destruction (mistaken for an attempt to survive). And how the whole thing just blows up, involuntarily, when the negative feedback fails.

I also think that the Middle East is another example, because there are people there who seem driven to embrace death and destruction, who are constantly taking action to make their own situation worse and using the results as a reason for doing still more of it, apparently thinking their behavior just has GOT to make things better. That bomb is going off all the time; only occasionally and for short periods is it contained by efforts to reduce the tensions and make concessions that will reduce the disturbances and allow some relaxation from the struggle.

I remember another party to the CSG problems saying "You have hurt me, but I am going to hurt you ten times worse." If that were the policy on both sides, wouldn't that cause violent positive feedback? Isn't this the case whenever people try to deal with violence by being even more violent in return? Robert Pirsig coined the term, "Self-stoking cycle," which expresses positive feedback eloquently.

I guess what I'm saying is that we have all sorts of positive feedback loops like these set up inside ourselves and between people. We notice them mainly when the negative feedback that overrides them fails for some reason. Then we see paranoid outbursts, suicidal and homicidal rages, screaming hatred, violence to the limit. It seems out of control, and it is.

The example of putting a limit on the negative feedback was just supposed to illustrate how the bomb effect could happen. I'm sure there are numerous other ways, if you look for them. But I don't think you have to look very far to find real-life examples, like the ones above.

Best,

Bill P.

[From Bill Powers (2005.02.19.1249 MST)]

Martin Taylor [2005.02.19.09.56] --

Let's restrict this to a linear two-level system with one second-level unit and two first-level units. One of the first-level units (unit 1 in the following) contains a potential bomb of the kind you described to Rick, except that the positive-feedback strand is unmasked by some environmental event that blocks the negative feedback strand, rather than being induced by the negative strand having reached a limiting influence on the perceptual signal.

The second level perception is P2 = a12*P1 + a22*P2 (to make it dynamic, you can call all the letters Laplace transforms, if you want). Let's say the reference level for this perception is zero, to keep things simple. The output is O2 = g2*(-P2). Some part of that becomes a reference value for first-level units 1 and 2. Let's call them R1 = r21*O2 and R2 = r22*O2. If the first-level units are controlling well, then P1 and P2 will approach R1 and R2, and there is no problem.

What happens now, when the loop gain of unit 1 goes positive? For simplicity again, let's assume that it's a simple exponential runaway, rather than a growing oscillation.

The problem here is that you're using two levels that control the same kinds of variables. What happens if one level controls the amount of a sensation, and the next level controls the shape of a configuration composed of those sensations? If the intensity of the sensation of edgeness increased to saturation, would that necessarily disturb the perception of a cube made in part of that edge? If the size of the cube increased to the limit of perception of size, would that cause the perception of rate of change of size to grow without limit? If the growth of the cube went to a limit, would that cause an unlimited amount of perception of the event of blowing up?

I think the fact that new dimensions are included at every new level of perception means that positive feedback at one level becomes meaningless at higher levels, and turns into merely a disturbance -- well, maybe not so mere, but still not positive feedback.
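[A minimal numerical illustration of this point, as an editorial sketch with invented numbers: let a higher-level perception be a scale-invariant function of two lower-level magnitudes, such as a normalized ratio. When one magnitude runs away, the higher-level signal merely drifts toward a bounded limit -- a disturbance, not a runaway.]

```python
# Higher-level perception as a scale-invariant function of lower-level
# magnitudes. The function and numbers are purely for illustration.

def shape(x1, x2):
    """A 'configuration' signal: normalized ratio of two magnitudes."""
    return x1 / (x1 + x2)

x2 = 5.0
# x1 runs away, doubling each step, as in a positive-feedback explosion.
values = [shape(2.0 ** k, x2) for k in range(1, 12)]
print(min(values), max(values))   # stays strictly between 0 and 1 throughout
```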

Look, when a disagreement begins to rely on imaginary scenarios, it's time to start modeling and stop talking, isn't it? As you can see, I think your basic concept of the positive feedback bomb concealed by negative feedback is a major advance in understanding some important human problems. Can't we settle for that for now?

Best,

Bill P.

[Martin Taylor 2005.01.19.15.59]

[From Bill Powers (2005.02.19.1249 MST)]

Martin Taylor [2005.02.19.09.56] --

Let's restrict this to a linear two-level system with one second-level unit and two first-level units. One of the first-level units (unit 1 in the following) contains a potential bomb of the kind you described to Rick, except that the positive-feedback strand is unmasked by some environmental event that blocks the negative feedback strand, rather than being induced by the negative strand having reached a limiting influence on the perceptual signal.

The second level perception is P2 = a12*P1 + a22*P2 (to make it dynamic, you can call all the letters Laplace transforms, if you want). Let's say the reference level for this perception is zero, to keep things simple. The output is O2 = g2*(-P2). Some part of that becomes a reference value for first-level units 1 and 2. Let's call them R1 = r21*O2 and R2 = r22*O2. If the first-level units are controlling well, then P1 and P2 will approach R1 and R2, and there is no problem.

What happens now, when the loop gain of unit 1 goes positive? For simplicity again, let's assume that it's a simple exponential runaway, rather than a growing oscillation.

The problem here is that you're using two levels that control the same kinds of variables. What happens if one level controls the amount of a sensation, and the next level controls the shape of a configuration composed of those sensations?

They are only numbers, whatever the external analyst might think they represent. The one level doesn't know it controls the amount of sensation. It only controls the value of a perceptual variable. Likewise for the level that the external observer thinks is controlling the shape of a configuration. It's just another value of a scalar variable. The control system doesn't "know" the meaning of what it controls.

Martin

[From Bill Powers (2005.02.19.1604 MST)]

Martin Taylor 2005.01.19.15.59--

They are only numbers, whatever the external analyst might think they represent. The one level doesn't know it controls the amount of sensation. It only controls the value of a perceptual variable. Likewise for the level that the external observer thinks is controlling the shape of a configuration. It's just another value of a scalar variable. The control system doesn't "know" the meaning of what it controls.

No, but the computations that take place determine that meaning. Each level of perception computes invariants from the level below, so if a particular perception at the level below somehow goes out of control, there is no reason to suppose that anything at the next level will also go out of control. Perceptions at a given level are functions of many at a lower level, not just one.

Furthermore, each level introduces new information. The shape of a cube is computed from where sensations occur in the visual map, not from the magnitudes of the sensations, and the whereness is not indicated in the magnitude of any sensation signal. Sensations must be controlled to change whereness, but the magnitude of a given sensation is not the critical variable. The spatial relationship between two objects is left unchanged if both objects move in the same way, or if they change brightness or orientation or color. A runaway magnitude at one level does not necessarily imply a runaway magnitude at the next level. There can, perhaps, be special cases where that link between levels might exist, but as a general rule I don't think it does.

Well, I still believe that this sort of verbal argumentation doesn't get us anywhere. Better to produce a mathematical demonstration; then the outcome won't depend on who finds the cleverest argument.

Best,

Bill P.

[From Rick Marken (2005.02.19.1740)]

Bill Powers (2005.02.19.1225 MST)--

Rick Marken (2005.02.19.0935)--

But what I was asking for was a _natural example_ of this. Unless you or Martin can point to a common, real world situation where people are "pushed" into a positive feedback regime by a sufficiently large disturbance (as in your example above) I have to consider the "Bomb in the Hierarchy" nothing more than a model in search of a phenomenon.

OK, so consider.

I'm considering.

But I think it suggests some familiar situations, ones in which we strive to maintain control, but to do so must override wishes or urges to behave irrationally, against our own interests, stubbornly insisting on doing things that actually make things worse but not yet prepared to admit that they have no hope of making things better. So we hold things together as long as we can, and when something breaks down the control, the positive feedback takes over. I am thinking specifically of the chief instigator of the recent blowups on CSGnet.

I also think that the Middle East is another example, because there are people there who seem driven to embrace death and destruction, who are constantly taking action to make their own situation worse and using the results as a reason for doing still more of it...

I remember another party to the CSG problems saying "You have hurt me, but I am going to hurt you ten times worse." ...

I guess what I'm saying is that we have all sorts of positive feedback loops like these set up inside ourselves and between people. We notice them mainly when the negative feedback that overrides them fails for some reason.

These seem more like situations where there is little or no feedback rather than negative or positive feedback. These people seem to have little or no effect on the variables they are trying to control. What they are doing looks to me more like what happens in a compensatory tracking task when the connection between mouse and cursor is surreptitiously broken. The subject continues to do what they believe "must" be right, moving the mouse left as the cursor drifts right and right as the cursor drifts left. If the disturbance is oscillatory and not too large, the subject feels that their actions (which are much greater than they would be if the loop were actually closed) are effective, so they keep doing it. What I think you are seeing are the actions of people who have no control but desperately want control and have convinced themselves that they know how to get it. It looks to me more like superstition than a bomb. But perhaps superstition is the bomb.
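[The broken-loop tracking situation is easy to simulate. The following is an editorial sketch with invented parameters: the subject is modeled as an integrating controller, and the same run is compared with the mouse-cursor connection intact and surreptitiously cut.]

```python
# Sketch: compensatory tracking with the mouse-cursor link optionally cut.
# Disturbance, gain, and duration are arbitrary illustration values.
import math

def track(connected, n=4000, dt=0.01, gain=5.0):
    handle = 0.0
    cur_amp, out_amp = 0.0, 0.0
    for step in range(n):
        t = step * dt
        d = math.sin(t)                       # slow oscillatory disturbance
        cursor = (handle if connected else 0.0) + d
        handle += gain * (0.0 - cursor) * dt  # subject: integrating controller
        if step >= n // 2:                    # measure after transients
            cur_amp = max(cur_amp, abs(cursor))
            out_amp = max(out_amp, abs(handle))
    return cur_amp, out_amp

cur_on, out_on = track(connected=True)
cur_off, out_off = track(connected=False)
# With the link cut, the cursor just follows the disturbance, and the
# handle excursions are several times larger than in the closed loop.
print(cur_on, out_on, cur_off, out_off)
```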

Best regards

Rick

Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2005.02.19.1945 MST)]

Rick Marken (2005.02.19.1740) --

I guess what I'm saying is that we have all sorts of positive feedback loops like these set up inside ourselves and between people. We notice them mainly when the negative feedback that overrides them fails for some reason.

These seem more like situations where there is little or no feedback rather than negative or positive feedback.

Is that really how you see the Palestinian suicide attacks on Israel and the Israeli counterattacks on Palestinians? I think both sides are "retaliating" for what the other side has done, with some idea that if you retaliate hard enough the other side will see that it's not worthwhile continuing the attacks. But of course the harder each side retaliates against the other, the greater is the anger from the victims, and the more resolved they become to strike back hard enough to put an end to the attacks. The only limits to the attacks are (on the Israeli side) reluctance to suffer world condemnation for genocide, and (on the Palestinian side) a lack of resources. Of course what each side is doing is maintaining or increasing its own errors, not decreasing them.

What's wrong with that analysis? If what you do increases your error and leads to your doing it even more, that's positive feedback, isn't it?
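[One hedged way to put that analysis in code, as an editorial toy model rather than anything in the thread: give each party a controller for its own "security," which is disturbed only by the other's attacks, and let its output be attacks in return. Each rule looks locally corrective, but around the dyad the feedback is positive and the outputs escalate without bound.]

```python
# Sketch: two "retaliating" controllers forming a positive feedback loop.
# Gains and the initial provocation are arbitrary illustration values.

def escalate(n=500, dt=0.01, gain=2.0, c=1.0):
    o1 = o2 = 0.1                 # a small initial provocation
    for _ in range(n):
        e1 = 0.0 - (-c * o2)      # my insecurity grows with your attacks
        e2 = 0.0 - (-c * o1)
        o1 += gain * e1 * dt      # I "correct" my error by attacking harder
        o2 += gain * e2 * dt      # ...and so do you
    return o1, o2

o1, o2 = escalate()
print(o1, o2)   # both outputs have grown far beyond the 0.1 provocation
```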

Best,

Bill P.
