# Stability and Control

[From Bruce Abbott (2013.12.22.0935 EST)]

Boris Hartman posted something this morning that turned on a light for me: W. Ross Ashby’s definition of stability. According to Ashby, “Every stable system has the property that if displaced from a state of equilibrium and released, the subsequent movement is so matched to the initial displacement that the system is brought back to the state of equilibrium” (Ashby, 1960).

Rick Marken has asserted that stability is a property that properly belongs only to equilibrium systems, to be distinguished from the property of control, which belongs only to control systems. Rick bases this assertion on the definition of stability provided by control engineer Brian Douglas: ‘Stability is a measure of a system’s response to return to zero after being disturbed.’ From this statement Rick concluded that stability refers only to systems that return to their initial state (zero deviation from it) following an impulse disturbance. Control systems, on the other hand, can counteract even continuous disturbances. From this distinction, Rick concluded that stable systems and control systems are different kinds of system.

But Ashby’s definition of stability basically states a ‘Test for the Stable System.’ Equilibrium systems like Douglas’ ball-in-a-bowl example meet this test: push the ball away from the bowl’s center and release it, and the ball returns (after a few oscillations that gradually die out) to the bottom of the bowl. By definition, the ball’s position is stable. But good control systems also meet this test: After an impulse disturbance, the controlled variable returns to its initial value. So control systems that behave this way are stable systems.

The distinction to be made is not between stable systems and control systems, it is between equilibrium systems and control systems. Equilibrium systems (like the ball-in-a-bowl) are passively stable, returning to their initial state after an impulse disturbance by converting the energy in the disturbance to restorative force. When given a continuous disturbance, they simply move to a new equilibrium value rather than returning to their initial value. Control systems are actively stable: they use their own energy source to actively oppose the effect of a disturbance on the controlled perception, whether the disturbance is brief or continuous.
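This distinction can be sketched numerically. The following is an illustrative example only (the parameters and the simple proportional controller are invented, not taken from any post): both systems are modeled by the same one-line difference equation, but the passive system settles at a new equilibrium under a continuous disturbance while the high-gain loop holds its variable near zero.

```python
# Compare a passive equilibrium system with an active control system
# when a CONTINUOUS disturbance is applied. (Invented parameters.)

def simulate(gain, disturbance, steps=2000, dt=0.01):
    """One variable v, pulled toward 0 by a restoring influence of
    strength `gain` and pushed by a constant disturbance."""
    v = 0.0
    for _ in range(steps):
        v += dt * (gain * (0.0 - v) + disturbance)
    return v

# Passive system: unit "spring" stiffness, no energy source of its own.
passive = simulate(gain=1.0, disturbance=1.0)
# Control system: high loop gain, actively opposing the disturbance.
controlled = simulate(gain=100.0, disturbance=1.0)

print(round(passive, 3))     # 1.0  -> settles at a NEW equilibrium (d/k)
print(round(controlled, 3))  # 0.01 -> held near its original value (d/G)
```

Both systems would pass Ashby's impulse test; it takes a continuous disturbance to separate them, which is the point of the paragraph above.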

In the real world, equilibrium and control systems often work together. For example, many aircraft wings have dihedral – looking from the front of the aircraft, they form a shallow V. A gust of wind acting from the side may tip the aircraft’s wing, but as soon as the gust is over, the wing will level itself because the vertical component of the wing’s lift is greater for the more level half. But control systems aboard the aircraft (in the form of a pilot or ‘autopilot’) add active control that can oppose even continuously acting disturbances. Because of passive stability, the control systems aboard the aircraft do not have to work as hard to maintain good control.
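One way to see the "do not have to work as hard" point: at steady state the passive restoring moment carries part of the opposition to a constant disturbance, so the active controller's output can be smaller. A sketch with invented numbers, where `k_passive` stands in for the dihedral effect:

```python
# Roll dynamics sketch: d(roll)/dt = -k_passive*roll + output + disturbance,
# with a proportional controller output = -gain * roll. (Invented numbers.)

def steady_state(k_passive, gain, disturbance):
    # Setting d(roll)/dt = 0 and solving gives:
    roll = disturbance / (k_passive + gain)
    output = -gain * roll
    return roll, output

# No passive stability: the controller must supply all the counterforce.
_, out_bare = steady_state(k_passive=0.0, gain=50.0, disturbance=1.0)
# With dihedral-like passive stability, it supplies only part of it.
_, out_dihedral = steady_state(k_passive=5.0, gain=50.0, disturbance=1.0)

print(round(out_bare, 3))      # -1.0
print(round(out_dihedral, 3))  # -0.909
```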

Bruce

[From Rick Marken (2013.12.22.1210)]

Bruce Abbott (2013.12.22.0935 EST)--

BA: W. Ross Ashby's definition of stability. According to Ashby, "Every stable
system has the property that if displaced from a state of equilibrium and
released, the subsequent movement is so matched to the initial displacement
that the system is brought back to the state of equilibrium" (Ashby, 1960).

Rick Marken has asserted that stability is a property that properly belongs
only to equilibrium systems, to be distinguished from the property of
control, which belongs only to control systems.

RM: Not quite. I am saying that the "stability" described here by
Ashby -- the stability exhibited by what we can call "equilibrium
systems" -- is not the same as the stability exhibited by control
systems. In other words, I'm saying that the stability described by
Ashby is not control.

BA: Rick bases this assertion
on the definition of stability provided by control engineer Brian Douglas:
'Stability is a measure of a system's response to return to zero after being
disturbed.' From this statement Rick concluded that stability refers only
to systems that return to their initial state (zero deviation from it)
following an impulse disturbance. Control systems, on the other hand, can
counteract even continuous disturbances. From this distinction, Rick
concluded that stable systems and control systems are different kinds of
system.

RM: Not correct. What Rick concluded is that the _behavior_ that
Douglas (and Ashby) calls "stability" -- the return of a variable to
the initial, equilibrium or zero state after a transient disturbance
-- is different than the behavior called "control" -- where a variable
remains in an initial, equilibrium or zero state, protected from the
effects of disturbance (sorry Boris).

BA: But Ashby's definition of stability basically states a 'Test for the Stable
System.' Equilibrium systems like Douglas' ball-in-a-bowl example meet this
test: push the ball away from the bowl's center and release it, and the ball
returns (after a few oscillations that gradually die out) to the bottom of
the bowl. By definition, the ball's position is stable. But good control
systems also meet this test: After an impulse disturbance, the controlled
variable returns to its initial value. So control systems that behave this
way are stable systems.

RM: I have no problem saying that both equilibrium and control systems
are stable systems as long as it is understood that control and
stability are two different phenomena and , thus, require quite
different explanations. You are saying that stability is seen when a
variable returns to its initial value after an impulse disturbance.
You correctly note that by this test both an uncontrolled variable --
like the ball in the bowl -- and a controlled variable -- like the
distance between cursor and target -- appear to be stable. But if you
apply a continuous rather than a transient disturbance -- as in the
TCV -- you will quickly see that the ball in the bowl is _only_ stable
while the distance between cursor and target is controlled. This is
the important distinction (from a PCT perspective) because behavior
that is _only_ stable can be explained without the need for control
theory; behavior that IS control can be explained only by control
theory.

Psychologists who don't want to look at behavior as a control
phenomenon (because it would require them to see behavior as
purposeful) are, therefore, happy to see the behavior of living
systems as being like the behavior of systems that are _only_
stable -- like that of the ball in the bowl. That way they don't have
to confront the fact that the phenomenon they are trying to explain --
the behavior of living organisms -- is not just stable but also a
process of control. Bill was always having problems, when we tried to
present PCT to conventional psychologists, with people who said that
control theory is unnecessary because we can use "equilibrium"
theories -- i.e., lineal causal theories -- to explain the behavior.
Indeed, in a talk I gave at UCLA a few years ago, one of
the prominent professors in the audience scoffed when I said that
purposeful behavior is control and gave the example of the ball in the
bowl to show that all this control theory stuff was unnecessary; it's
all just the work of "equilibrium systems."

So while this discussion about the distinction between stability and
control may seem impossibly esoteric and trivial to many on CSGNet, it
is very relevant and real to me because I have experienced the effects
of treating stability as equivalent to control -- in the form of snide
reviewer comments on my articles and in haughty attempts at "gotcha"
remarks at conference presentations. I don't really mind it when it
comes from people who don't understand -- and clearly don't _want_ to
understand PCT -- but it saddens me when it comes from people who are
ostensibly fans of PCT.

BA: The distinction to be made is not between stable systems and control
systems, it is between equilibrium systems and control systems. Equilibrium
systems (like the ball-in-a-bowl) are passively stable, returning to their
initial state after an impulse disturbance by converting the energy in the
disturbance to restorative force. When given a continuous disturbance, they
simply move to a new equilibrium value rather than returning to their
initial value. Control systems are actively stable: they use their own
energy source to actively oppose the effect of a disturbance on the
controlled perception, whether the disturbance is brief or continuous.

RM: Right, two different mechanisms to explain two different
phenomena. [...] would have had no problems with them.

Best

Rick


--
Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
-- Bertrand Russell


[From Bruce Abbott (2013.12.22.1440 EST)]

RM: Rick Marken (2013.12.22.1210) --

Bruce Abbott (2013.12.22.0935 EST)

BA: W. Ross Ashby's definition of stability. According to Ashby,
"Every stable system has the property that if displaced from a state
of equilibrium and released, the subsequent movement is so matched to
the initial displacement that the system is brought back to the state of

equilibrium" (Ashby, 1960).

Rick Marken has asserted that stability is a property that properly
belongs only to equilibrium systems, to be distinguished from the
property of control, which belongs only to control systems.

RM: Not quite. I am saying that the "stability" described here by Ashby --
the stability exhibited by what we can call "equilibrium
systems" -- is not the same as the stability exhibited by control
systems. In other words, I'm saying that the stability described by Ashby is
not control.

Correct. But stability as defined by Ashby (and Douglas) is a property of a
good control system. Control is a process. Stability is a property.

In his lecture on stability, Douglas gives the example of a ball in a bowl,
as we've previously noted. If the bowl is narrow, with steep sides, then
even rather severe and continuous disturbances (such as tipping the bowl to
60 degrees off the vertical) will have little effect on the ball's position
relative to the bowl's bottom center. Similarly, a mass wedged tightly
between two very strong springs whose other ends are anchored to the walls
will hardly move at all if force is exerted against the mass in the
direction of the springs. Similarly, a control system may allow very little
change in the controlled variable if its gain is high. Like the
aforementioned springs, we characterize a high-gain control system as
exhibiting "stiffness." The difference between control systems and
equilibrium systems isn't so much in their resistance to disturbance as in
the way they achieve such resistance -- passively in the case of equilibrium
systems, actively in the case of control systems.
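The stiffness analogy can be put side by side numerically. This is only a sketch with arbitrary numbers: a spring deflects by force/stiffness (Hooke's law), while a proportional control loop leaves a residual deviation of disturbance/(1 + loop gain), the standard steady-state result. Seen from outside, both deviations shrink the same way as stiffness or gain rises.

```python
# Passive stiffness vs. loop gain: both yield a small steady deviation
# under a constant force, though by different means. (Arbitrary numbers.)

def spring_deflection(force, stiffness):
    # Hooke's law: deflection at which the spring balances the force.
    return force / stiffness

def control_deviation(disturbance, loop_gain):
    # Proportional loop steady state: error = disturbance / (1 + gain).
    return disturbance / (1.0 + loop_gain)

for k in (10.0, 100.0, 1000.0):
    print(k, spring_deflection(1.0, k), control_deviation(1.0, k))
```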

The problem with equilibrium systems is that they can make it extremely
difficult to exert control over the variables they stabilize. Douglas gives
the example of an overly stable aircraft. To the pilot, the aircraft may
seem nearly unresponsive to his or her efforts to control it. It "wants" to
fly straight and level. Well-designed aircraft usually exhibit some degree
of passive stability, but to allow sensitive control, this passive stability
cannot be too great. In fact, highly controllable aircraft such as fighters
and stunt planes are designed to be almost unstable in flight, to allow
maximum sensitivity to control actions. Consequently, flying them well
(controlling them well) requires a high degree of skill on the part of the
pilot.

Bruce

[From Rick Marken (2013.12.22.1610)]

Bruce Abbott (2013.12.22.1440 EST)–
RM: Not quite. I am saying that the “stability” described here by Ashby –
the stability exhibited by what we can call “equilibrium
systems” – is not the same as the stability exhibited by control
systems. In other words, I’m saying that the stability described by Ashby is
not control.

BA: Correct. But stability as defined by Ashby (and Douglas) is a property of a
good control system. Control is a process. Stability is a property.
RM: Control is also a phenomenon. A FACT (again, see subtitle to LCS III)! The stability described by Ashby is not control.

BA: In his lecture on stability, Douglas gives the example of a ball in a bowl,
as we’ve previously noted. If the bowl is narrow, with steep sides, then
even rather severe and continuous disturbances (such as tipping the bowl to
60 degrees off the vertical) will have little effect on the ball’s position
relative to the bowl’s bottom center…Similarly, a control system may
allow very little change in the controlled variable if its gain is high…The
difference between control systems and equilibrium systems isn’t so
much in their resistance to disturbance as in the way they achieve
such resistance – passively in the case of equilibrium systems, actively
in the case of control systems.
RM: This is false. The difference between control systems and equilibrium systems is completely in their resistance to disturbance: there is resistance to disturbance in a control system and there is no resistance to disturbance in an equilibrium system. Indeed, in an equilibrium system there is no disturbance at all. A disturbance is defined with respect to a controlled variable. There is no controlled variable in an equilibrium system because there is no control going on in an equilibrium system. None at all.
In the equilibrium system that is the ball in the bowl, the force acting to push the ball up the side of the bowl and the restoring forces that bring it back to the initial state are just physical forces acting on the ball, as Newton explained. These forces are not disturbances to a controlled variable or the counter forces of a control system; they are just physical forces acting on objects according to Newton’s laws.
The difference in the observed stability of the ball in the steep versus shallow bowl is exactly what is predicted from Newton’s laws. From a TCV perspective – thinking of these forces as possible disturbances to a controlled variable – the effects of the forces exerted on the ball, in both the steep and shallow bowls, are exactly what are expected from physical considerations. Remember that a crucial component of The Test is seeing whether disturbances have the expected effect, meaning the effect that would be predicted from a physical analysis of the situation under the assumption that there is no control going on. A first-year physics student (who was good at differential calculus, which eliminates me, though I could probably do it with a computer program using difference equations) could plot the expected path of the ball in the steep and shallow bowls after a force vector of fixed duration was applied to the ball. If, however, the ball was under control, then the effect of the force would be completely different from what was expected based on physical law.
Note that the student could tell that the ball is not controlled in one case and that it is controlled in the other without knowing anything about the process that accomplishes the control. That is, the student could determine the FACT that the ball is under control based simply on observable events – the time course and amplitude of the force vector and the concomitant time course of the movements of the ball. As in the tracking task, the relationship between disturbance and controlled variable (ball movement) would not be what is expected. If the relationship between “disturbance” (force vector) and ball movement was what was expected (predicted by the differential equations), then the student would see nothing more than the fact that objects move under applied force in just the way Newton said they would.
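The student's comparison can be sketched with difference equations, as suggested. All constants here are invented, and the "controlled" case simply adds a strong restoring output proportional to displacement; the point is only that the observed effect of the force departs drastically from the Newton-only prediction when control is present.

```python
# Predict the ball's path from Newton alone, then compare with a case
# where a control system also acts on the ball. (Invented constants.)

def peak_displacement(impulse, control_gain=0.0, steps=3000, dt=0.01):
    """Damped ball-in-bowl: x'' = -k*x - c*x' - control_gain*x.
    The impulse sets the initial velocity; control_gain=0 gives the
    pure physics prediction."""
    x, v = 0.0, impulse
    peak = 0.0
    for _ in range(steps):
        a = -10.0 * x - 1.0 * v - control_gain * x
        v += dt * a          # semi-implicit Euler: update velocity...
        x += dt * v          # ...then position with the new velocity
        peak = max(peak, abs(x))
    return peak

physics_peak = peak_displacement(1.0)                      # Newton alone
observed_peak = peak_displacement(1.0, control_gain=500.0)

# The Test: the observed effect of the force is far smaller than the
# physical analysis predicts, so the ball's position is under control.
print(observed_peak < 0.3 * physics_peak)  # True
```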

BA: The problem with equilibrium systems is that they can make it extremely
difficult to exert control over the variables they stabilize.

RM: Yes, very good point. I even discuss an example of that in my paper on “Degrees of freedom in behavior” (see p. 187 of “Mind Readings”, the paragraph starting with “It is interesting to note…”). The stability of the design of an aircraft affects the nature of the feedback connection between pilot output and aircraft performance variables. Indeed, read the whole paper to see one reason why the idea that equilibrium systems can account for the stability of human behavior (which is the stability that is actually control) has led psychology astray.

Best

Rick


[From Bruce Abbott (2013.12.22.2230 EST)]

Rick Marken (2013.12.22.1610) –

Bruce Abbott (2013.12.22.1440 EST)
RM: Not quite. I am saying that the “stability” described here by Ashby –
the stability exhibited by what we can call “equilibrium
systems” – is not the same as the stability exhibited by control
systems. In other words, I’m saying that the stability described by Ashby is
not control.

BA: Correct. But stability as defined by Ashby (and Douglas) is a property of a
good control system. Control is a process. Stability is a property.
RM: Control is also a phenomenon. A FACT (again, see subtitle to LCS III)! The stability described by Ashby is not control.

Yes: The stability defined by Ashby (and a whole host of engineers) is not control. It is a property exhibited by some types of systems, including control systems.

BA: In his lecture on stability, Douglas gives the example of a ball in a bowl,
as we’ve previously noted. If the bowl is narrow, with steep sides, then
even rather severe and continuous disturbances (such as tipping the bowl to
60 degrees off the vertical) will have little effect on the ball’s position
relative to the bowl’s bottom center…Similarly, a control system may
allow very little change in the controlled variable if its gain is high…The
difference between control systems and equilibrium systems isn’t so
much in their resistance to disturbance as in the way they achieve
such resistance – passively in the case of equilibrium systems, actively
in the case of control systems.
RM: This is false. The difference between control systems and equilibrium systems is completely in their resistance to disturbance: there is resistance to disturbance in a control system and there is no resistance to disturbance in an equilibrium system. Indeed, in an equilibrium system there is no disturbance at all. A disturbance is defined with respect to a controlled variable. There is no controlled variable in an equilibrium system because there is no control going on in an equilibrium system. None at all.

It would be accurate to say that there is no ACTIVE resistance to disturbance in an equilibrium system. But there IS passive resistance to disturbance. Place a bowling ball on a soft mattress and the ball will sink a good way before it comes to rest. Place the same ball on a solid tabletop and the ball will not sink noticeably into the table. Thus the tabletop does a much better job of resisting the disturbance to its surface curvature than the mattress. Air currents that tip the wings of an airplane disturb the aircraft along its roll axis, forcing the left wing to rise or fall relative to the right wing. The air currents and the weight of the bowling ball both act as disturbances to these equilibrium systems; you can’t make those disturbances disappear by redefining the term “disturbance.” You say “a disturbance is defined . . .” – by whom?

In the equilibrium system that is the ball in the bowl, the force acting to push the ball up the side of the bowl and the restoring forces that bring it back to the initial state are just physical forces acting on the ball, as Newton explained. These forces are not disturbances to a controlled variable or the counter forces of a control system; they are just physical forces acting on objects according to Newton’s laws.

I don’t see how this distinguishes an equilibrium system from a control system. The forces that disturb a controlled variable and the counteractions that resist those disturbances are also physical forces that obey the laws of physics, are they not? A real difference between these systems lies in the source of the counterforce, not in whether the forces involved are physical ones. In the equilibrium system the counterforce comes from the (stored) energy of the disturbance; in the control system it is supplied by the control system’s own energy source.

The difference in the observed stability of the ball in the steep versus shallow bowl is exactly what is predicted from Newton’s laws. From a TCV perspective – thinking of these forces as possible disturbances to a controlled variable – the effects of the forces exerted on the ball, in both the steep and shallow bowls, are exactly what are expected from physical considerations. Remember that a crucial component of The Test is seeing whether disturbances have the expected effect, meaning the effect that would be predicted from a physical analysis of the situation under the assumption that there is no control going on. A first-year physics student (who was good at differential calculus, which eliminates me, though I could probably do it with a computer program using difference equations) could plot the expected path of the ball in the steep and shallow bowls after a force vector of fixed duration was applied to the ball. If, however, the ball was under control, then the effect of the force would be completely different from what was expected based on physical law.

I’m tempted to answer simply that the control system ALSO behaves exactly as expected based on a correct understanding of the physics involved, but somehow that seems a bit snarky so I won’t. What I will say is that the ball in the narrow, high bowl resists disturbances to its position better than the one in a wide, shallow bowl, even though its position is not being actively controlled.

Note that the student could tell that the ball is not controlled in one case and that it is controlled in the other without knowing anything about the process that accomplishes the control. That is, the student could determine the FACT that the ball is under control based simply on observable events – the time course and amplitude of the force vector and the concomitant time course of the movements of the ball. As in the tracking task, the relationship between disturbance and controlled variable (ball movement) would not be what is expected. If the relationship between “disturbance” (force vector) and ball movement was what was expected (predicted by the differential equations), then the student would see nothing more than the fact that objects move under applied force in just the way Newton said they would.

That’s all true, but it doesn’t change the fact that the ball in the narrow bowl resists disturbances to its position more strongly than the ball in the wide bowl. That is, the ball’s position is more stable in the former case than in the latter. Its position would be even more stable, relative to its initial position, if we embedded the ball in concrete.

BA: The problem with equilibrium systems is that they can make it extremely
difficult to exert control over the variables they stabilize.

RM: Yes, very good point. I even discuss an example of that in my paper on “Degrees of freedom in behavior” (see p. 187 of “Mind Readings”, the paragraph starting with “It is interesting to note…”). The stability of the design of an aircraft affects the nature of the feedback connection between pilot output and aircraft performance variables. Indeed, read the whole paper to see one reason why the idea that equilibrium systems can account for the stability of human behavior (which is the stability that is actually control) has led psychology astray.

Ah, now you’re talking . . .

Bruce

[From Rick Marken (2013.12.24.0950)]

Bruce Abbott (2013.12.22.2230 EST)--

Rick Marken (2013.12.22.1610) --

> Bruce Abbott (2013.12.22.1440 EST)

RM: Control is also a _phenomenon_. A FACT (again, see subtitle to LCS III)! The stability described by Ashby is not control.

BA: Yes: The stability defined by Ashby (and a whole host of engineers) is not control. It is a property exhibited by some types of systems, including control systems.

RM: I would say that the stability defined by Ashby is not a property of
systems; it's a description of the behavior of systems. Specifically,
it's a description of the behavior of what is presumed to be the
variable that is "stabilized" by the system: the "stable" variable in
open-loop systems, the "controlled" variable in control systems. For
example, it is a description of the behavior of the ball as it
oscillates back to the bottom of the bowl after release (see the first
Douglas lecture) in the ball-in-bowl system. It is also presumably a
description of the behavior of the cursor as it
oscillates around the target in a compensatory tracking task.

I don't think stability is really the same for both these types of
systems. The stability observed in open-loop systems (like the damped
oscillation of the ball in the bowl) seems to me to be quite different
from the stability observed in closed-loop systems (like the temporal
variations of the cursor in a tracking task). One difference is this: The
stability observed in open-loop systems is the behavior of a variable
_after_ application of a stimulus (disturbance, if you wish); the
stability observed in closed-loop systems is the behavior of a
variable _during_ the application of a disturbance. The other
difference is particularly pertinent to living control systems: in a
closed-loop system the observed stability of a variable does not
necessarily reflect the actual stability of the system - where here
"stability" is defined as the system's ability to bring a variable to
its initial state. The problem is that, unlike the behavior of a
"stable" variable in an open-loop system, the behavior of a "stable"
(controlled) variable in a closed-loop system depends, to a large
extent, on secular variation in the reference specification for the
state of the variable. I demonstrate the problem in a paper I
presented years ago at a CSG conference in Wales. The paper is
available here:

https://dl.dropboxusercontent.com/u/31298693/ThermoPeople.doc

The relevant part of the paper is in the section subtitled "Is It
Error or Is It Intended?". In particular, consider the results of a
simulation that are presented in Figure 1. These results show that the
exact same variance of the cursor in a compensatory tracking task --
the exact same "stability" per Ashby (and Douglas) -- could be due
to variance in the reference of a high gain (highly stable) control
system or to the variance in output of a low gain (low stability)
control system.
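A rough re-creation of the kind of demonstration described here (my own invented parameters, deliberately chosen so the two variances come out similar; this is not Marken's actual simulation): one run uses a high-gain loop tracking a slowly wandering reference, the other a low-gain loop with noisy output, yet the cursor variance alone cannot tell them apart.

```python
# Two very different systems producing roughly the SAME cursor variance:
# (a) high gain, slowly wandering reference; (b) low gain, noisy output.
# Parameters are invented and tuned so the variances roughly match.
import math
import random

def cursor_variance(gain, ref_amp, output_noise, steps=5000, dt=0.01, seed=1):
    rng = random.Random(seed)
    c, o = 0.0, 0.0
    trace = []
    for i in range(steps):
        r = ref_amp * math.sin(0.5 * i * dt)    # reference signal
        o += dt * gain * (r - c)                # integrating output
        c = o + output_noise * rng.gauss(0, 1)  # cursor = output + noise
        trace.append(c)
    mean = sum(trace) / len(trace)
    return sum((x - mean) ** 2 for x in trace) / len(trace)

var_a = cursor_variance(gain=50.0, ref_amp=1.0, output_noise=0.0)
var_b = cursor_variance(gain=0.5, ref_amp=0.0, output_noise=0.7)

# Both come out near 0.5; an observer of the cursor alone cannot tell
# the tightly controlled system from the sloppy one.
print(round(var_a, 2), round(var_b, 2))
```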

This little demonstration (which I should re-do and publish for the
sake of those who believe that behavior can be explained in terms of
"passive equilibrium" models [thanks to Matti Kolu (2013.12.24.0920
CET) for posting a recent example of such a paper]) shows that Ashby's
definition of stability (as the observed behavior of a variable after
-- or even during! -- disturbance) tells nothing about the actual
stability of the system that maintains the stability of the variable
(in the sense of the ability of the system to keep a variable in a
reference, initial or pre-selected state).

Now that I think of it, the reason I find Douglas' discussion of
stability to be so infuriating (besides the fact that it encourages
papers like the one by Latash, Levin, Scholz and Schöner that Matti
referred to) is because it violates one of the fundamental tenets of
PCT: you can't tell what a system is doing by just looking at what
it's doing.

But thanks to this discussion I think I know what my next research
project will be about! So thanks Bruce and Martin: Once again you have
given me something fun to work on. I can't think of a better Xmas
present!

Best regards and Merry Xmas to all

Rick

···

RM: This is false. The difference between control systems and equilibrium systems is completely in their resistance to disturbance: there is resistance to disturbance in a control system and there is _no resistance to disturbance_ in an equilibrium system. Indeed, in an equilibrium system there is no disturbance at all. A disturbance is defined with respect to a _controlled variable_. There is no controlled variable in an equilibrium system because there is no _control_ going on in an equilibrium system. None at all.

BA: It would be accurate to say that there is no ACTIVE resistance to disturbance in an
equilibrium system. But there IS passive resistance to disturbance.

RM: The difference in the observed stability of the ball in the steep versus shallow bowl is exactly what is predicted from Newton's laws. From a TCV perspective -- thinking of these forces as possible disturbances to a controlled variable -- the effects of the forces exerted on the ball, in both the steep and shallow bowls, are exactly what are _expected_ from physical considerations. Remember that a crucial component of The Test is seeing whether disturbances have the _expected effect_, meaning the effect that would be predicted from a physical analysis of the situation under the assumption that there is _no_ control going on. A first year physics student (who was good at differential calculus, which eliminates me, though I could probably do it with a computer program using difference equations) could plot the expected path of the ball in the steep and shallow bowls after a force vector of fixed duration was applied to the ball. If, however, the ball was under control, then the effect of the force would be completely different than what was expected based on physical law.
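The difference-equation exercise described here is easy to sketch (a hypothetical illustration: each bowl is approximated as a damped linear restoring force, with the steep bowl given the larger stiffness; all parameter values are assumptions):

```python
def ball_in_bowl(stiffness, damping=0.5, impulse=1.0, dt=0.001, t_end=20.0):
    """Ball in a bowl as a damped oscillator, integrated by difference equations.

    An impulse of fixed size sets the initial velocity; the function returns
    the final displacement and the peak displacement along the way.
    """
    x, v = 0.0, impulse                     # impulse -> initial velocity (unit mass)
    peak = 0.0
    for _ in range(int(t_end / dt)):
        a = -stiffness * x - damping * v    # restoring force plus damping
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return x, peak

x_steep, peak_steep = ball_in_bowl(stiffness=25.0)     # steep, narrow bowl
x_shallow, peak_shallow = ball_in_bowl(stiffness=1.0)  # wide, shallow bowl
print(peak_steep, peak_shallow)  # the steep bowl deflects less
print(x_steep, x_shallow)        # both settle back near the bottom
```

Both balls return to the bottom after the impulse, but the steep bowl resists the displacement more strongly, which is exactly the Newtonian prediction with no control involved.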

I'm tempted to answer simply that the control system ALSO behaves exactly as expected based on a correct understanding of the physics involved, but somehow that seems a bit snarky so I won't. What I will say is that the ball in the narrow, high bowl resists disturbances to its position better than the one in a wide, shallow bowl, even though its position is not being actively controlled.

Note that the student could tell that the ball is not controlled in one case and that it is controlled in the other without knowing anything about the _process_ that accomplishes the control. That is, the student could determine the FACT that the ball is under control based simply on observable events -- the time course and amplitude of the force vector and the concomitant time course of the movements of the ball. As in the tracking task, the relationship between disturbance and controlled variable (ball movement) would not be what is expected. If the relationship between "disturbance" (force vector) and ball movement was what was expected (predicted by the differential equations) then the student would see nothing more than the fact that objects move under applied force in just the way Newton said they would.

That's all true, but it doesn't change the fact that the ball in the narrow bowl resists disturbances to its position more strongly than the ball in the wide bowl. That is, the ball's position is more stable in the former case than in the latter. Its position would be even more stable, relative to its initial position, if we embedded the ball in concrete.

> BA: The problem with equilibrium systems is that they can make it extremely
> difficult to exert control over the variables they stabilize.

RM: Yes, very good point. I even discuss an example of that in my paper on "Degrees of Freedom in Behavior" (see p. 187 of Mind Readings, the paragraph starting with "It is interesting to note..."). The stability of the design of an aircraft affects the nature of the feedback connection between pilot output and aircraft performance variables. Indeed, read the whole paper to see one reason why the idea that equilibrium systems can account for the stability of human behavior (which is the stability that is actually control) has led psychology astray.

Ah, now you're talking . . .

Bruce

--
Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
-- Bertrand Russell

[Martin Taylor 2013.12.24.16.34]

[From Rick Marken (2013.12.22.1210)]

Bruce Abbott (2013.12.22.0935 EST)--
BA: W. Ross Ashby's definition of stability. According to Ashby, "Every stable
system has the property that if displaced from a state of equilibrium and
released, the subsequent movement is so matched to the initial displacement
that the system is brought back to the state of equilibrium" (Ashby, 1960).
Rick Marken has asserted that stability is a property that properly belongs
only to equilibrium systems, to be distinguished from the property of
control, which belongs only to control systems.

RM: Not quite. I am saying that the "stability" described here by
Ashby -- the stability exhibited by what we can call "equilibrium
systems" -- is not the same as the stability exhibited by control
systems. In other words, I'm saying that the stability described by
Ashby is not control.

All men are mortal.
Aristophanes is a man.
Therefore Aristophanes is mortal.

Correct reasoning, but

All men are mortal.
Aristophanes is mortal.
Therefore Aristophanes is a man.

Incorrect reasoning.

I don't remember anyone claiming that all systems that exhibit stability are control systems. Stability is not control in the same way that toughness is not an oak tree. Oak trees have toughness, control systems have stability. Not all things with toughness are oak trees. I know it's dangerous when dealing with Rick, but I leave the corresponding analogy to the reader.

RM: I have no problem saying that both equilibrium and control systems
are stable systems as long as it is understood that control and
stability are two different phenomena

Yep, coldness and ice cream are two different phenomena

and , thus, require quite
different explanations.

and thus require quite different explanations.

Martin

[From Rick Marken (2013.12.24.1830)]

Martin Taylor (2013.12.24.16.34)–
MT: I don’t remember anyone claiming that all systems that exhibit stability are
control systems.
RM: Me neither. I believe the claim was that all control systems exhibit stability. And in my last post I think I showed that this is not the case; a very good control system can appear to be highly unstable. So I disagree with myself when I said:

RM: I have no problem saying that both equilibrium and control systems
are stable systems as long as it is understood that control and
stability are two different phenomena
I do have a problem saying that both equilibrium and control systems are stable systems. I would rather say that control systems can appear to be stable or unstable; an equilibrium system is always stable.

Best

Rick

···


Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
– Bertrand Russell

Well Rick, I think that your provocation was not necessary. But you can't

···

-----Original Message-----
From: Control Systems Group Network (CSGnet)
[mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Richard Marken
Sent: Sunday, December 22, 2013 9:12 PM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Stability and Control

[From Rick Marken (2013.12.22.1210)]

Bruce Abbott (2013.12.22.0935 EST)--

BA: W. Ross Ashby's definition of stability. According to Ashby, "Every
stable system has the property that if displaced from a state of
equilibrium and released, the subsequent movement is so matched to the
initial displacement that the system is brought back to the state of
equilibrium" (Ashby, 1960).

Rick Marken has asserted that stability is a property that properly
belongs only to equilibrium systems, to be distinguished from the
property of control, which belongs only to control systems.

RM: Not quite. I am saying that the "stability" described here by
Ashby -- the stability exhibited by what we can call "equilibrium
systems" -- is not the same as the stability exhibited by control
systems. In other words, I'm saying that the stability described by
Ashby is not control.

BA: Rick bases this assertion on the definition of stability provided
by control engineer Brian Douglas: 'Stability is a measure of a
system's response to return to zero after being disturbed.' From this
statement Rick concluded that stability refers only to systems that
return to their initial state (zero deviation from it) following an
impulse disturbance. Control systems, on the other hand, can
counteract even continuous disturbances. From this distinction, Rick
concluded that stable systems and control systems are different kinds
of system.

RM: Not correct. What Rick concluded is that the _behavior_ that
Douglas (and Ashby) calls "stability" -- the return of a variable to
the initial, equilibrium or zero state after a transient disturbance
-- is different than the behavior called "control" -- where a variable
remains in an initial, equilibrium or zero state, protected from the
effects of disturbance (sorry Boris).

HB: Rick, how can a "variable" remain in an "unchanged state"? Even
without a tiny oscillation?

And you call the mechanism which "keeps" the variable in an unchanged
state, or causes the variable to remain in the same equilibrium,
initial, zero state, control. Did I understand you right?

I'm wondering how you would classify the behavior of Watt's governor
(Corliss engine) or a thermostat? By YOUR "definition" the first is
"protecting" the controlled variable speed and the second is
"protecting" the controlled variable temperature, all the time
"keeping" both variables in an "unchanged state" or, as you say,
"remaining in the initial state", probably for the whole time of
control. How am I doing?

Bill described Watt's governor together with the engine as: "The
flyball governor, together with the engine it governs, belongs to a
large class of devices known as negative feedback control systems.
These control systems ... can explain a fundamental aspect of how
every living thing works, from the tiniest amoeba to the being who is
reading these words. The mechanism by which this self-regulating
machine worked was just about 100 years old" (B:CP, 2005). Notice that
he is describing the mechanism as self-regulating, not "protecting".

You find a similar description of Watt's governor in Ashby's book,
only he is talking about a stable system: "...if any transient
disturbance slows or accelerates the engine, the governor brings the
speed back to the usual value. By this return the system demonstrates
its stability." In both cases "the field" or "all lines of behavior"
show changes with respect to the initial state. Or we could say that
the "lines of behavior" start and end in the initial position (zero
state). Neither author mentions variables "remaining" in the initial
state. The same mechanism goes for the thermostat; you can find it in
the books of both authors.

Oh, I don't know why I bother myself with citations and definitions.
You just got through Bill's book (2005) and probably know the
citations and definitions "by heart". :))

BA: But Ashby's definition of stability basically states a 'Test for
the Stable System.' Equilibrium systems like Douglas' ball-in-a-bowl
example meet this test: push the ball away from the bowl's center and
release it, and the ball returns (after a few oscillations that
gradually die out) to the bottom of the bowl. By definition, the
ball's position is stable. But good control systems also meet this
test: After an impulse disturbance, the controlled variable returns to
its initial value. So control systems that behave this way are stable
systems.

RM: I have no problem saying that both equilibrium and control systems
are stable systems as long as it is understood that control and
stability are two different phenomena and, thus, require quite
different explanations. You are saying that stability is seen when a
variable returns to its initial value after an impulse disturbance.
You correctly note that by this test both an uncontrolled variable --
like the ball in the bowl -- and a controlled variable -- like the
distance between cursor and target -- appear to be stable. But if you
apply a continuous rather than a transient disturbance -- as in the
TCV -- you will quickly see that the ball in the bowl is _only_ stable
while the distance between cursor and target is controlled. This is
the important distinction (from a PCT perspective) because behavior
that is _only_ stable can be explained without the need for control
theory; behavior that IS control can be explained only by control
theory.
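The continuous-disturbance contrast can be sketched like this (an illustrative model with assumed stiffness, damping, and gain values, not anyone's published simulation): the passive system settles at a new offset of disturbance/stiffness, while adding an integrating output returns the variable near its reference despite the same constant push.

```python
def settle(system, disturbance=1.0, dt=0.001, t_end=30.0):
    """Apply a constant (continuous) disturbance and return the final position.

    'equilibrium' is a damped spring only; 'control' adds an integrating
    output that opposes deviation from a reference of zero.
    """
    x, v, o = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        if system == "control":
            o += 5.0 * (0.0 - x) * dt        # integrating output, reference = 0
        a = -4.0 * x - 2.0 * v + disturbance + o
        v += a * dt
        x += v * dt
    return x

print(settle("equilibrium"))  # settles near 0.25 = disturbance / stiffness
print(settle("control"))      # settles near 0 despite the same disturbance
```

Under an impulse both systems look "stable"; under a constant disturbance only the control loop returns the variable to its initial value, which is the point of the TCV.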

Psychologists who don't want to look at behavior as a control
phenomenon (because it would require them to see behavior as
purposeful) are, therefore, happy to see the behavior of living
systems as being like that of the behavior of systems that are _only_
stable -- like that of the ball in the bowl. That way they don't have
to confront the fact that the phenomenon they are trying to explain --
the behavior of living organisms -- is not just stable but also a
process of control. When we tried to present PCT to conventional
psychologists, Bill was always having problems with people who said
that control theory is unnecessary because we can use "equilibrium"
theories -- i.e., lineal causal theories -- to explain the behavior.
Indeed, in a talk I gave at UCLA a few years ago, one of the prominent
professors in the audience scoffed when I said that purposeful
behavior is control and gave the example of the ball in the bowl to
show that all this control theory stuff was unnecessary; it's all just
the work of "equilibrium systems."

So while this discussion about the distinction between stability and
control may seem impossibly esoteric and trivial to many on CSGNet, it
is very relevant and real to me because I have experienced the effects
of treating stability as equivalent to control -- in the form of snide
reviewer comments on my articles and in haughty attempts at "gotcha"
remarks at conference presentations. I don't really mind it when it
comes from people who don't understand -- and clearly don't _want_ to
understand PCT -- but it saddens me when it comes from people who are
ostensibly fans of PCT.

BA: The distinction to be made is not between stable systems and
control systems, it is between equilibrium systems and control
systems. Equilibrium systems (like the ball-in-a-bowl) are passively
stable, returning to their initial state after an impulse disturbance
by converting the energy in the disturbance to restorative force. When
given a continuous disturbance, they simply move to a new equilibrium
value rather than returning to their initial value. Control systems
are actively stable: they use their own energy source to actively
oppose the effect of a disturbance on the controlled perception,
whether the disturbance is brief or continuous.

RM: Right, two different mechanisms to explain two different
would have had no problems with them.

Best

Rick

In the real world, equilibrium and control systems often work
together. For example, many aircraft wings have dihedral - looking
from the front of the aircraft, they form a shallow V. A gust of wind
acting from the side may tip the aircraft's wing, but as soon as the
gust is over, the wing will level itself because the vertical
component of the wing's lift is greater for the more level half. But
control systems aboard the aircraft (in the form of a pilot or
'autopilot') add active control that can oppose even continuously
acting disturbances. Because of passive stability, the control systems
aboard the aircraft do not have to work as hard to maintain good
control.

--
Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
-- Bertrand Russell


[Martin Taylor 2013.12.26.12.03]

This is a very strange thread, almost surreal. Here's my brief abstract of how we got from Bruce's link to show CSGnet readers what is meant by classical control theory to here (or at least "here" as of a couple of days ago). I've been almost incommunicado since an ice storm knocked out our power and cold temperatures induced us to go and stay with our son in an unaffected part of Toronto.

BA: Here's a link if you want to know about classical control theory.

RM: I stopped looking at it very soon because they talked about "stability", which has nothing to do with control.

BA and MT: "Stability" is nothing to do with control. Control is a way of achieving stability.

BA also: What distinguishes control from self-restoring systems like the ball-in-a-bowl is that the disturbance provides the energy that allows the ball to reach its stable position whereas a control system provides the energy separately.

MT: Bruce is right.

RM: Bruce is wrong. Control is ... (a subthread ensued on different possible definitions of "control").

BA stays silent.

MT. Stability and control are quite different concepts.

RM: Wrong. Control and stability are quite different concepts. To know whether control is happening you have to use the TCV.

MT: Shows pictures of two of the three kinds of dynamical attractors that represent the three kinds of stability (fixed point and oscillator, leaving "strange" aside), and points out that by looking at the orbit one could not tell whether the stability is due to a control system or a ball-in-bowl kind of system.

RM: Wrong. Looking at the orbits one can't tell whether the stability is due to a control system or a ball-in-bowl kind of system. To know whether control is happening you have to use the TCV.

MT: We were talking about whether "stability" can be applied to control systems.

RM: No we weren't. We were talking about how to tell whether control is happening.

MT: Stability and control are concepts in a different domain, like "cold" and "ice cream".

As of Xmas Eve:

> Martin Taylor (2013.12.24.16.34)--

> MT: I don't remember anyone claiming that all systems that exhibit stability are
> control systems.

RM: Me neither. I believe the claim was that all control systems exhibit stability. And in my last post I think I showed that this is not the case; a very good control system can appear to be highly unstable.

That was never the claim. The claim was that all control systems are dynamical systems, and all dynamical systems have the _property_ of "stability".

It was RM's denial of this, which he gave as a reason for not wanting to know any more about modern control system theory than he could get from a few minutes of watching just one of a series of videos, that led to the whole thread. The problem, as reported by RM, was that the lecturer used the term "stability", and therefore could know nothing about control, and so it could not be worth wasting RM's time watching more than those few minutes.

Whether a dynamical system (control or otherwise) is positively or negatively stable (technically, whether its Lyapunov exponents are negative or positive) depends on the specific system. But whether it is a control system or not, it has the property of stability.
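For readers who haven't met the term, here is a crude numerical sketch of a Lyapunov exponent (the loop equation and all parameter values are my own assumptions): track how fast two nearby orbits of a one-dimensional loop converge; a negative exponent means nearby orbits approach each other, i.e. the dynamics are stable in this sense.

```python
import math

def lyapunov_estimate(k=3.0, dt=0.01, steps=2000, eps=1e-6):
    """Estimate the Lyapunov exponent of do/dt = k * (r - d - o), with
    r = d = 0, from the separation of two orbits started eps apart."""
    o1, o2 = 0.0, eps
    for _ in range(steps):
        o1 += k * (0.0 - o1) * dt           # orbit 1
        o2 += k * (0.0 - o2) * dt           # orbit 2, started eps away
    # average exponential rate of growth (here: shrinkage) of the separation
    return math.log(abs(o2 - o1) / eps) / (steps * dt)

print(lyapunov_estimate())  # close to -k = -3: negative exponent, stable dynamics
```

For this linear loop the exponent is just the negative of the loop's rate constant; for nonlinear systems the same separation-tracking idea applies but the arithmetic is messier.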

[Obviously, I've left out more than 90% of the thread, but the above is how I understand the gist of it.]

Martin

[From Rick Marken (2013.12.26.1500)]

···

Martin Taylor (2013.12.26.12.03)–

MT: This is a very strange thread, almost surreal.

RM: They often are; but they also often produce such wonderful fruits!

Martin Taylor (2013.12.24.16.34)–

MT: I don’t remember anyone claiming that all systems that exhibit stability are

control systems.

RM: Me neither. I believe the claim was that all control systems exhibit stability. And in my last post I think I showed that this is not the case; a very good control system can appear to be highly unstable.

MT: That was never the claim. The claim was that all control systems are dynamical systems, and all dynamical systems have the property of “stability”.

RM: OK, could you tell me again what the property of stability is.

MT: Whether a dynamical system (control or otherwise) is positively or negatively stable (technically, whether their Lyapunov exponents are negative or positive) depends on the specific system. But whether it is a control system or not, it has the property of stability.

RM: OK, so all dynamical systems have the property of stability. Could you tell me what that property – how to measure it – is so that I can see if, indeed, it is a property of control systems.

Thanks

Best

Rick

Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
– Bertrand Russell

[Martin Taylor 2013.12.27.00.40]

OK, but first we need a base, a couple of loose definitions, from
which to build.

(1) A dynamical system consists of a set of variables that affect
one another so that a change in one variable will directly influence
variation in at least one of the others, and in which each variable
is directly affected by changes in the value of at least one of the
others. Example: a simple control loop, in which every variable is
influenced by its predecessor in the loop, and a couple of variables
are influenced by external variables as well.

(2) A dynamical system has an inside and an outside. The inside
consists of a set of variables that conform to definition 1; the
outside consists of all those variables that influence the inside
but are not directly influenced by inside variables. Example: a
simple control loop, in which the reference and disturbance values
are outside, because they are not directly influenced by the values
of the variables in the loop (though there may be external
connections through which they are indirectly influenced by what
happens in the loop). Another example is the control hierarchy
(including all the environmental feedback paths), whose inside
consists of lots of interconnected control loops and whose outside
consists of all the disturbances (and influences from the
reorganizing system, if you want to include them).
(3) A state of a dynamical system of N variables can be described by
a vector of length 2N that consists of the value and rate of change
of all the variables. That vector defines a location in a
2N-dimensional “phase space”.
(4) The dynamics of a dynamical system are the traces of the state
vector over time in the 2N-dimensional state space.
(5) The dynamics can be divided into two components, intrinsic and
extrinsic.

5a) Sometimes "the dynamics of the system" is taken to refer
only to the intrinsic dynamics. The intrinsic dynamics refer to all
the traces of the state vector when the values and rates of
change of the outside variables are held constant and the values and
rates of change of the inside variables are started at some state.
The intrinsic dynamics traces are called "orbits". There is exactly
one orbit through any point in the state space of the loop.
Example: In a simple control loop, set the
reference and the disturbance to some arbitrary values, define an
initial state for the perceptual value, the output value and all the
other loop values, and see how those internal values change over
time. In a loop with good control, the perceptual value will
approach the reference value and the output value will approach the
reference value minus the disturbance value (the negative of the
disturbance when the reference is zero). The vector that describes the
variables of the loop as a whole follows some orbit towards its
asymptotic value.
5b) The extrinsic dynamics refer to the traces that occur
when the outside variables change in externally determined ways.
These traces follow no predetermined path, as they depend on the
unspecified variation of the external variables.

If those loose definitions and examples are understood, we can talk
about "stability". "STABILITY" CONCERNS ONLY THE INTRINSIC DYNAMICS
OF A DYNAMICAL SYSTEM.
In an earlier message, I showed pictures to suggest two kinds of
stability, fixed point and oscillator. The difference is whether the
attractor is a single point in phase space or a closed orbit. Here's
that picture again. Only two of the possibly many dimensions of the
phase space of the systems are shown. It doesn't matter which two,
but you should imagine all the dimensions converging in the same way
to either the fixed point or a closed orbit.

There is a third kind, a "strange attractor" of fractal dimension,
but it isn't really relevant here, so I will not discuss it further.
It appears in many kinds of nonlinear feedback systems, and is
associated with what is technically called "chaos".

The degree of stability of the system depends on how fast it
approaches the attractor. The technical details are complex and I
will leave them until later, after considering some examples.
Consider a simple feedback loop structured like a trivially simple
control loop with two external variables (in a control loop we call
them reference “r” and disturbance “d”). Maybe our loop is a control
loop, and maybe it isn’t. It is so simple that it has only three
intrinsic variables “p” (perception), “e” (error), and “o” (output).
I’ll use quotes to refer to the variables, and no quotes to refer to
their values.
Set the values of the external variables at r and d, and set their
rates of change at zero. Set the initial values of the intrinsic
variables at some random set of values. The vector of values will
trace some path, called an “orbit”, through the 6-dimensional state
space. There is only one orbit through any location in the state
space. If the orbit approaches a state defined by velocities all
zero (a fixed point), the system is stable. If that state is the
same for some finite set of initial values, and has the asymptotic
values p = r, e = 0, o = r-d, then our feedback loop is a control
loop. It is stable, at least for the region of the state space for
which the orbits converge to a fixed point. That region is called a
"basin of attraction", and has an "attractor" at p = r, e = 0, o =
r-d, and dp/dt, de/dt, do/dt all = 0. It may sometimes be convenient
to translate the axes of the state space so as to set the attractor
at {0, 0, 0, 0, 0, 0} (and it will be convenient to do so when we
consider the "Test for the Controlled Variable" -- TCV).
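A discrete-time sketch of such an orbit (the integrating output function and all parameter values are my own illustrative choices, not Martin's actual system): hold r and d constant, start the intrinsic variables at an arbitrary state, and watch the state vector converge on the fixed point p = r, e = 0, o = r - d.

```python
def loop_orbit(r, d, o0=-3.0, k=3.0, dt=0.01, steps=3000):
    """Trace the intrinsic dynamics of a trivial loop:
    p = o + d, e = r - p, do/dt = k * e, with r and d held constant."""
    o = o0
    orbit = []
    for _ in range(steps):
        p = o + d                           # perception
        e = r - p                           # error
        o += k * e * dt                     # integrating output
        orbit.append((p, e, o))
    return orbit

p, e, o = loop_orbit(r=2.0, d=1.0)[-1]
print(p, e, o)  # approaches the fixed point p = r, e = 0, o = r - d
```

Starting from other initial values of o lands the orbit on the same fixed point, which is what it means for the initial states to lie in one basin of attraction.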
Real control loops have a basin of attraction limited by the ranges
over which the physical parameters can vary, but within the basin of
attraction, all the orbits converge to the same fixed point.

Note carefully that nothing about the orbits or the structure of the
basin of attraction says anything about the mechanism of the dynamic
system. All it says is that if the attractor is a fixed point at the
specified location in the phase space, the dynamic system is acting
as a control system would do, by providing output that counteracts
the disturbance, and that brings the perception near the reference
value. It does not say how this result is achieved. A necessary
condition for a dynamic system to be a "control system" is the
existence of a basin of attraction around the specific fixed point
attractor at p = r, e = 0, o = r-d, and dp/dt, de/dt, do/dt all = 0.
Is this a sufficient condition for whether a dynamical system is a
control system? No it is not. As is often said, “You can’t tell what
someone is doing by watching what they are doing”, and the same goes
for the orbits, which are really just a way of watching what the
system is doing – albeit more precisely than just by watching the
output variable. How, then, can one use the orbits that define the
dynamics of the system to test whether a system is a control system?
The answer is the standard answer – use the Test for the Controlled
Variable (TCV), or at least one aspect of it. The aspect of the TCV
that we need is the change of disturbance value. If d is changed to
d’ and the system exhibits a new set of orbits with a new fixed
point attractor at p = r, e = 0, o = r-d’ and velocities zero no
matter what the new value of d, then the system is a control system.
Why do we not need the rest of the TCV criteria (for example making
sure that the putative environmental variable can be sensed)? The
answer is that we are analyzing a predefined dynamical system, for
which the only influence on p is the combination of influences from
o and d, and the only influence on o is the combination of
influences from r and p by way of e. So we have already defined that
the environmental variable influences (and is the only influence on)
the perception, and that the output influences the environmental
variable. In a more complex situation, such as is always the case in
the real world, we must consider those other criteria.
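This change-of-disturbance version of the test is easy to check numerically. A minimal sketch, assuming unit gains everywhere, an integrating output with rate G, and simple Euler integration (all parameter values and the function name `settle` are illustrative, not from the thread):

```python
# Sketch of the TCV-style criterion from the text: after changing the
# disturbance d to d', the fixed point should move so that p = r,
# e = 0, o = r - d' at the new value of d.
# Assumptions: unit gains, integrating output (rate G), Euler steps.

def settle(r, d, G=10.0, dt=0.001, steps=20000):
    """Run the loop to (near) steady state and return (p, e, o)."""
    o = 0.0
    for _ in range(steps):
        p = o + d          # perception = output effect + disturbance
        e = r - p          # error = reference minus perception
        o += G * e * dt    # integrating output function
    return p, e, o

p, e, o = settle(r=5.0, d=2.0)
print(round(p, 3), round(e, 3), round(o, 3))   # ~5.0, ~0.0, ~3.0 (o = r - d)

p, e, o = settle(r=5.0, d=-4.0)                # change d to d'
print(round(o, 3))                             # ~9.0 (o = r - d')
```

Whatever the new d, the fixed point tracks it with p staying at r, which is the behaviour the text takes as the mark of a control system.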
Now consider the “ball in a bowl” system. It has no extrinsic
variables, although in practice some historical extrinsic force has
set the initial conditions (as is also the case in practice for the
control loop). The initial conditions have the ball at a particular
place on the bowl surface moving with some velocity across the
surface. The state space therefore has 4 dimensions, surfaces being
2-dimensional. In the absence of friction, the ball has, and keeps,
some energy that is proportional to the square of its velocity. That
energy doesn’t change throughout the orbit from any one location in
the state space, so the orbit defines a locus of constant energy in
the state space. After some time, possibly a very long time if the
bowl shape is very complicated, the ball will return arbitrarily
closely to its initial location in the state space. The ball has a
closed orbit, like planets around their star (almost–it’s not
strictly true of planets). Each orbit is separate and the orbits do
not converge. This system has no attractor and no basin of
attraction. It is stable in that none of the orbits fly off toward
infinity, but has stability zero.
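The closed-orbit, zero-stability claim can be illustrated with a simple simulation, assuming a parabolic bowl cross-section so the frictionless ball reduces to a one-dimensional harmonic oscillator (k, dt, and the integration scheme are my illustrative choices):

```python
# Sketch: frictionless "ball in a bowl" as a harmonic oscillator.
# The orbit is closed: energy is conserved and the state returns
# (arbitrarily closely) to its starting point, so the largest
# exponent -- the stability -- is zero.

import math

k = 4.0                                  # curvature of the bowl
dt = 1e-4
x, v = 1.0, 0.0                          # initial displacement, at rest

def energy(x, v):
    return 0.5 * v * v + 0.5 * k * x * x

E0 = energy(x, v)
period = 2 * math.pi / math.sqrt(k)      # one full orbit
for _ in range(int(period / dt)):
    v += -k * x * dt                     # semi-implicit Euler step
    x += v * dt

print(abs(energy(x, v) - E0) < 1e-3)     # energy conserved: True
print(abs(x - 1.0) < 1e-2 and abs(v) < 1e-2)  # orbit closed: True
```

Different initial conditions give different closed orbits that never converge on one another, which is why there is no attractor here.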
If the ball encounters friction as it rolls around the bowl, the
situation changes. (There’s a new intrinsic variable that represents
the energy lost from the ball into heat, but we will ignore that
part of the state space vector). The orbit from any initial location
passes through places in the state space of continually reducing
energy, until it asymptotically approaches a physical location with
velocity zero from which all directions in the phase space lead to
higher energies in the ball. This system has positive stability
(though the exponents that describe it technically are negative).
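The friction case can be sketched the same way. Assuming the ball near the bottom of the bowl behaves like a damped harmonic oscillator (a standard approximation; k, c, and the time step are illustrative), the energy decays roughly as exp(-c t), so the largest exponent is about -c/2 and the "positive stability" of the text is +c/2:

```python
# Sketch: damped oscillator as the friction case. Estimate the largest
# exponent from the decay of the energy over a run; expect it to be
# close to -c/2 (negative exponent = positive stability).

import math

k, c = 4.0, 0.5          # spring constant, friction coefficient
dt, T = 1e-4, 20.0
x, v = 1.0, 0.0          # initial displacement, at rest

def energy(x, v):
    return 0.5 * v * v + 0.5 * k * x * x

E0 = energy(x, v)
for _ in range(int(T / dt)):
    v += (-k * x - c * v) * dt   # semi-implicit Euler step
    x += v * dt

lam = math.log(energy(x, v) / E0) / (2 * T)   # amplitude decay exponent
print(round(lam, 2))   # close to -c/2 = -0.25; "stability" = -lam
```

With c set to zero the same code reproduces the frictionless case: the exponent estimate goes to zero.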
In a bowl of complicated shape, there may be several such locations
(local minima). Each is a fixed point attractor with its own basin
of attraction. Where the basins of attraction of two different
attractors meet, there is a curve, which happens to be an orbit, on
which the ball might balance until moved microscopically into one or
other of the meeting basins of attraction. That orbit is a
“repellor”. At the attractor, there is a region in which the system
is unstable (has a negative stability value).
Returning to the simple feedback loop structured like a control
loop, we know that it can go unstable if the gain is too large for
the transport lag, or if the loop gain is greater than +1. When the
loop is unstable, the orbits diverge. Maybe the perceptual value
increases (decreases) exponentially, maybe it has an ever-increasing
amplitude of oscillation. The other variables in the loop show
similar behaviour. The orbits of such a system look like those in
the above diagrams with the arrows reversed: the orbits diverge.
I think we are now at a point where we can talk about numeric
measures of stability. I’m not going to talk about matrices and
eigenvalues, as I properly should do. Instead, I will just describe
the situation geometrically.
Around any point in the phase space {x1, x2… dx1/dt, dx2/dt,…}
one can imagine a small hypervolume we can label “D” defined by
“deltas” for the 2N values. This hypervolume has a size that is the
product of all the deltas. Through every point of the initial
hypervolume there is an orbit of the intrinsic dynamics of the
system.

Now imagine a whole lot of trials in which the system is started off
at different points within the hypervolume “D”. After a given time t
in any one trial, the system will have arrived at a new location in
the phase space. The set of all these locations over all the trials
determines a new hypervolume “D1”. If the size of D1 is less than
the size of D, the original location is in a part of the state space
where the system is stable. Since we are talking about very small
deltas, which we can conceptually reduce toward zero, we can treat
the local behaviour as linear, and thus the change in size as
exponential over time. The numeric value of the stability is
actually a vector, because the hypervolume could squash in one
direction while expanding in another. But such systems are usually
chaotic, and we are ignoring them for now.

One convenient measure of stability is the negative of the value of
the largest exponent. Within a basin of attraction, this value is
everywhere positive. At a repellor, it is negative. For a ball on a
flat surface, it is zero.

Now consider the stability of our simplest control system. It has a
gain and a transport lag that together affect its stability. If the
gain is zero, its stability is zero because the values of its
variables don’t change unless the external variables change their
values. As the gain increases, the stability first increases, then
decreases until it becomes negative and the signal values explode
toward infinity. Low positive stability means that the system takes
quite a while for the hypervolume D to squish down and for the orbit
to approach the attractor. High stability means that the orbits
converge rapidly, and the system corrects quickly to changing values
of the reference and the disturbance. Low stability because of low
gain means the system reacts sluggishly; low stability because of
excessive gain for the given lag means that the system overshoots
before it settles.
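That gain/lag trade-off can be sketched in simulation. Assuming the loop reduces to u'(t) = -G u(t - L) for the deviation u from the fixed point (integrating output with rate G, pure transport lag L, constant r and d; a simplification, with illustrative parameter values), the stability is the negative of the measured exponential rate of |u|:

```python
# Sketch: sweep the gain G of a lagged loop u'(t) = -G*u(t-L) and
# estimate the exponential rate of |u| by comparing peak |u| in two
# late time windows. Stability = negative of that rate.

import math

def growth_rate(G, L=0.1, dt=1e-3, T=6.0):
    """Estimate the exponential rate of u'(t) = -G*u(t-L)."""
    n_lag = int(L / dt)
    hist = [1.0] * (n_lag + 1)    # u(t) = 1 for t <= 0
    a1 = a2 = 0.0
    t = 0.0
    for _ in range(int(T / dt)):
        u = hist[-1] - G * hist[0] * dt   # Euler step with lagged u
        hist.append(u)
        hist.pop(0)
        t += dt
        if 2.0 <= t < 3.0:
            a1 = max(a1, abs(u))
        elif 5.0 <= t < 6.0:
            a2 = max(a2, abs(u))
    return math.log(a2 / a1) / 3.0

for G in (1.0, 3.7, 10.0, 20.0):
    print(G, round(-growth_rate(G), 2))
# Stability first rises with gain, peaks near G*L = 1/e, then falls;
# past the classical boundary G*L = pi/2 it goes negative (the loop
# oscillates with growing amplitude).
```

The sweep reproduces the pattern described in the text: sluggish at low gain, fastest correction at an intermediate gain, overshoot and then runaway oscillation as gain grows too large for the lag.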
I expect that there are online courses if you want to delve more
deeply into the issues (MIT, for example, might have some).
Martin

···

[From Rick Marken (2013.12.26.1500)]

Martin Taylor (2013.12.26.12.03)–

MT: This is a very strange thread, almost surreal.

RM: They often are; but they also often produce such wonderful
fruits!

Martin Taylor (2013.12.24.16.34)–

MT: I don't remember anyone claiming that all systems that exhibit
stability are control systems.

RM: Me neither. I believe the claim was that all control systems
exhibit stability. And in my last post I think I showed that this is
not the case; a very good control system can appear to be highly
unstable.

MT: That was never the claim. The claim was that all control systems
are dynamical systems, and all dynamical systems have the property of
“stability”.

RM: OK, could you tell me again what the property of stability is.

MT: Whether a dynamical system (control or otherwise) is positively
or negatively stable (technically, whether their Lyapunov exponents
are negative or positive) depends on the specific system. But whether
it is a control system or not, it has the property of stability.

RM: OK, so all dynamical systems have the property of stability.
Could you tell me what that property – how to measure it – is so that
I can see if, indeed, it is a property of control systems.

[Martin Taylor 2013.12.28.11.57]

Some typos in [Martin Taylor 2013.12.27.00.40]:

In "If the ball encounters friction as it rolls around the bowl, the
situation changes. (There’s a new intrinsic variable that represents
the energy lost from the ball into heat, but we will ignore that part
of the state space vector)", “intrinsic” should be “external”.

In "In a bowl of complicated shape, there may be several such
locations (local minima). Each is a fixed point attractor with its
own basin of attraction. Where the basins of attraction of two
different attractors meet, there is a curve, which happens to be an
orbit, on which the ball might balance until moved microscopically
into one or other of the meeting basins of attraction. That orbit is
a “repellor”. At the attractor, there is a region in which the system
is unstable (has a negative stability value)", “At the attractor”
should be “at the repellor”.

There are undoubtedly other typos. I hope you can make sense of it
and note the typos for what they are.

Martin

[From Rick Marken (2013.12.28.1245)]

Martin Taylor (2013.12.27.00.40)–

MT: OK, but first we need a base, a couple of loose definitions, from
which to build…

RM: OK, I read that. Now to the definition, right?

MT: To answer your specific question: Numerically, the stability of a
system depends on how fast it approaches the attractor. The technical
details are complex and I will leave them until later, after
considering some examples.

RM: I guess not. My specific question was “how do I measure
stability?” not “what does stability depend on?”.

MT: Consider a simple feedback loop structured like a trivially
simple control loop with two external variables (in a control loop we
call them reference “r” and disturbance “d”)…

RM: Still no measure of stability.

MT: Does this answer your question, at least conceptually?

RM: No. I just want a formula and a definition of the variables that
go into the formula so I can measure the stability of an equilibrium
system. I can give you the formula for the stability of a simple
control system. How about giving me the formula for the stability of
a simple equilibrium system, like a pendulum or mass-spring system.

Best

Rick

Richard S. Marken PhD
The only thing that will redeem mankind is cooperation.
– Bertrand Russell

[Martin Taylor 2013.12.28.16.49]

RM: OK, so all dynamical systems have the property of stability.
Could you tell me what that property – how to measure it – is so that
I can see if, indeed, it is a property of control systems.

MT: But do you understand the concept? That's all I tried to get
across. If you understand Lyapunov exponents, the matrix algebra and
the implications of the various eigenvalues, you can go to the
website to which I pointed you and its associated links to get the
formulae. I don’t think there’s any point in trying to run a
mechanics course on CSGnet when it is much better done elsewhere on
the Internet.

All I hope is that you will stop making your silly pronunciamentos
about what researchers say.

Martin


[Martin Taylor 2013.12.28.16.56]

[From Rick Marken (2013.12.28.1245)]

RM: No. I just want a formula and a definition of the variables that go into the formula so I can measure the stability of an equilibrium system. I can give you the formula for the stability of a simple control system. How about giving me the formula for the stability of a simple equilibrium system, like a pendulum or mass-spring system.

I seriously doubt you can give a formula for the stability of a control system. But go ahead and try it. I base my scepticism on the level of understanding you have shown to date of the whole CONCEPT of stability. Let's make it very simple, and ask about the formula for the stability of a simple control system in which all the functions and transmission connections are unity gain and no lag, except for two. The output is a simple integrator with a gain rate G/sec, and the environmental feedback path has a lag L seconds. What is the formula that gives the corresponding Lyapunov exponents for the three variables p, e, and o?

I confess I do not know the answer at this moment. But you say you do, so I am very interested to see it.

Martin
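For what it's worth, Martin's challenge can at least be approached numerically. A sketch, not an answer from the thread: assuming the loop he specifies linearizes to the delay equation x'(t) = -G x(t - L) for the deviation from the fixed point, its exponents are the complex roots of the characteristic equation s = -G exp(-sL). The code below (function name `dominant_exponent` is my own) finds the rightmost root by Newton's method:

```python
# Sketch: dominant exponent of x'(t) = -G*x(t-L), i.e. the rightmost
# complex root of f(s) = s + G*exp(-s*L) = 0, found by complex Newton
# iteration from two seeds (one real, one near the crossover
# frequency ~pi/(2L), aimed at the principal root pair).

import cmath

def dominant_exponent(G, L):
    """Rightmost root of s + G*exp(-s*L) = 0, or None if not found."""
    roots = []
    for s in (complex(-G, 0.0), complex(0.0, cmath.pi / (2 * L))):
        for _ in range(100):
            if abs(s) > 1e3:           # Newton wandered off; give up
                break
            f = s + G * cmath.exp(-s * L)
            df = 1 - G * L * cmath.exp(-s * L)
            if abs(df) < 1e-12:
                break
            s -= f / df
        if abs(s) <= 1e3 and abs(s + G * cmath.exp(-s * L)) < 1e-8:
            roots.append(s)
    return max(roots, key=lambda r: r.real) if roots else None

s = dominant_exponent(5.0, 0.1)    # G*L = 0.5 (< pi/2): stable
print(round(s.real, 2), round(abs(s.imag), 2))   # about -7.94, 7.7
s = dominant_exponent(20.0, 0.1)   # G*L = 2.0 (> pi/2): unstable
print(s.real > 0)                  # True
```

This agrees with the classical result that the loop is stable exactly when G·L < π/2; a full Lyapunov analysis would also account for the infinitely many roots further left, which decay faster and do not affect the verdict.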

[From Rick Marken (2013.12.28.1810)]

Martin Taylor (2013.12.28.16.56)--

RM: No. I just want a formula and a definition of the variables that go
into the formula so I can measure the stability of an equilibrium system. I
can give you the formula for the stability of a simple control system. How
about giving me the formula for the stability of a simple equilibrium
system, like a pendulum or mass-spring system.

MT: I seriously doubt you can give a formula for the stability of a control
system. But go ahead and try it.

RM: OK, here goes. I will use the formula for measuring stability that
Bill gives in his "Quantitative Analysis of Purposive Systems" paper
(which is reprinted in LCS I, the formula and explanation is on p
161):

S = 1 - (Vexp/Vobs)^1/2

Where S is the measure of the stability of a variable, Vexp is the
expected variance of the variable if there were no control, and Vobs
is the observed variance of the variable. In a tracking task, the
variable we are interested in is the distance between target and
cursor, t-c. Assuming that the subject has a fixed reference for this
distance then Vobs is just the variance of t-c over the course of a
tracking run. Vexp is computed assuming that there is no control, in
which case the two variables that affect t-c, mouse variations, m, and
disturbance variations, are expected to have independent effects on
t-c. So Vexp = Vm + Vd (where Vm is mouse variance and Vd is
disturbance variance) since the variance of a variable that is the sum
of two independent (uncorrelated) variables is equal to the sum of the
variances of those two variables (apparently Bill was being coy when
he said he didn't know much about statistics).

If there is no control (t-c is not being stabilized), then Vobs will
be equal to Vexp and S will equal 0. If control is very good then Vexp
will be >>Vobs and S will be large and negative (the negative is to
show that the feedback in the system is negative; with positive
feedback Vobs would be >> Vexp and S would be large and positive).

So there you have it; a nice, simple way to measure stability.
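Rick's recipe can be sketched directly in code. The "tracking data" here is simulated with illustrative names and numbers (t-c deviations, mouse, and disturbance arrays), not taken from any actual demo:

```python
# Sketch of S = 1 - sqrt(Vexp/Vobs), with Vexp = Vm + Vd, applied to
# two simulated runs: one with no control (mouse unrelated to the
# disturbance) and one with good control (mouse opposes it).

import random

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def stability_S(tc, m, d):
    """S = 1 - sqrt(Vexp/Vobs); Vexp assumes m and d act independently."""
    v_obs = variance(tc)
    v_exp = variance(m) + variance(d)
    return 1 - (v_exp / v_obs) ** 0.5

random.seed(1)
d = [random.gauss(0, 1) for _ in range(10000)]

m_nc = [random.gauss(0, 1) for _ in d]            # mouse unrelated to d
tc_nc = [mi + di for mi, di in zip(m_nc, d)]      # no control
m_c = [-di + random.gauss(0, 0.05) for di in d]   # mouse opposes d
tc_c = [mi + di for mi, di in zip(m_c, d)]        # good control

print(round(stability_S(tc_nc, m_nc, d), 2))      # near 0
print(round(stability_S(tc_c, m_c, d), 2))        # large and negative
```

With no control the observed variance matches the expected sum and S sits near zero; with good control the observed variance collapses and S goes strongly negative, as the text describes.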

Note that by this measure of stability, the behavior of the ball in a
bowl -- any bowl, regardless of the height of the sides -- would
always have an S value of precisely 0. Vexp for the ball is the damped
oscillation of the ball predicted by the differential equations of
motion based on Newton's laws; Vobs is the observed oscillation of the
ball, which should correspond nearly exactly to the predicted
oscillation. So Vexp will be nearly equal to Vobs and S = 0.

One final note: I include a measure of stability in my tracking demo:
Nature of Control. In this demo
the Stability measure is just (Vexp/Vobs)^1/2. I don't subtract from
1. So if you don't control the cursor - target distance at all the
Stability measure is 1.0. The higher the Stability measure, the more
stable the controlled variable (t-c).

And one even more final note: This stability measure works only when
you can be sure that the reference specification for the controlled
variable is fixed (a constant). If the reference is varying, you have
to know how the reference is varying over time in order to accurately
measure stability. I'll discuss this problem in more detail once
you've replied to this (and probably dismissed it for lack of
reference to people with Russian last names;-)

Best

Rick


--
Richard S. Marken PhD

The only thing that will redeem mankind is cooperation.
-- Bertrand Russell

[Martin Taylor 2013.12.28.23.15]

[From Rick Marken (2013.12.28.1810)]

Martin Taylor (2013.12.28.16.56)--

RM: No. I just want a formula and a definition of the variables that go
into the formula so I can measure the stability of an equilibrium system. I
can give you the formula for the stability of a simple control system. How
about giving me the formula for the stability of a simple equilibrium
system, like a pendulum or mass-spring system.

MT: I seriously doubt you can give a formula for the stability of a control
system. But go ahead and try it.

RM: OK, here goes. I will use the formula for measuring stability that
Bill gives in his "Quantitative Analysis of Purposive Systems" paper
(which is reprinted in LCS I, the formula and explanation is on p
161):

S = 1 - (Vexp/Vobs)^1/2

Where S is the measure of the stability of a variable, Vexp is the
expected variance of the variable if there were no control, and Vobs
is the observed variance of the variable.

I was hoping you would not come back with this, because it shows that you did not read my tutorial, despite commenting on it. I quote the most significant line in it, which I capitalized (something I rarely do) because of its central importance: "'STABILITY' CONCERNS ONLY THE INTRINSIC DYNAMICS OF A DYNAMICAL SYSTEM."

Between the quote above and the quote that immediately follows, I wrote: "I base my scepticism on the level of understanding you have shown to date of the whole CONCEPT of stability." Your answer fully justifies my scepticism. Here's the question I posed. How does your response come close to being relevant?:

MT. Let's make it very simple, and ask about the formula for the stability of a simple control system in which all the functions and transmission connections are unity gain and no lag, except for two. The output is a simple integrator with a gain rate G/sec, and the environmental feedback path has a lag L seconds. What is the formula that gives the corresponding Lyapunov exponents for the three variables p, e, and o?

(I should have added "and their time derivatives", but that's of little matter in this case).

What you describe has nothing AT ALL to do with the stability of the control system. Nor does your formula mention the two variables that determine the stability in the simplest control loop. All you are describing is the behaviour of the variance of one variable of the control system in response to variation of one of the external variables, which is irrelevant to loop stability. In the past, we have called that variance ratio the "Control ratio", which is a good name for it.

In response to Martin Taylor (2013.12.28.14.56)

RM: The fact that control and "ball in a bowl" (equilibrium) behaviors
have different explanations means that they are different phenomena.

Nobody has disputed that. I had surmised that this was actually what you intended to talk about. But I'm not sure why you would want to talk about it, since it is almost universally taken to be self-evident, at least among those who have read CSGnet for more than a few weeks.

RM: From my perspective I find the failure to distinguish behavior
like that of the ball in the bowl (call it "equilibrium" or
"stability" or "attractor" or whatever you like) from control behavior
to be extremely annoying because, for me, the first principle of PCT
is that behavior (the behavior of living systems) IS control.

So far as I know, nobody on CSGnet has failed to make this distinction. Except possibly you.

and

[From Rick Marken (2013.12.28.1245)]

to Martin Taylor (2013.12.27.00.40)--
RM. How about giving me the formula for the stability of a simple equilibrium system, like a pendulum or mass-spring system.

I neglected to answer this in my initial responses to Rick's message. So here's a quick answer now. In a pendulum without friction, stability is zero. With friction, the height of the swing drops (to first order) exponentially with time, assuming the friction loss is proportional to the swing energy. The stability of the pendulum is the negative of that exponent. When you include nonlinearities of friction and the circularity of the pendulum arc, the exponent changes over time, so the stability varies with swing angle. The same approach works for the mass-spring system. With no frictional losses, stability is zero. With frictional losses, the height excursion of the mass drops exponentially over time. The stability is the negative of the exponent.

I know it's hopeless, but it would be REALLY nice if you would fire your arrows at real enemies instead of creating imaginary enemies where they do not exist.

And I know this also is hopeless, but it would also be nice if you would take some time to try to understand when I write one of my infrequent tutorials. I would have no objection if you were to criticise them for what they contain, because that would suggest either that you understood them enough to find an error (and God knows, there are bound to be many of those), or that there was something there that you didn't understand and on which you would like clarification. But to answer with the control ratio my request -- for you to tell me the formula for loop stability of a loop with specified gain and lag -- implies that you didn't even read through the definitions on which the concept of stability is based, let alone try to understand the meaning of the term in dynamic analysis.

I realize I'm sounding like a broken record, but just once in a while would it be possible for you to respond to what is written rather than what you imagine someone would probably have written if they were who you imagine they are? I know "Imagination Mode" is much quicker than control in the real world, but interactions in the real world go much better if controlled perceptions are based on what one's senses detect out there.

Martin

[From Bruce Abbott (2013.12.29.1140 EST)]

Here are some screen shots from the LiveBlock demo of LCS III. In the first one, I’ve set the gains as high as the limits of the program will allow me to set them, giving a loop gain of 500:

Plotted at bottom are the values of the reference and perceptual signals. For the demonstration I used the reference signal slider to rapidly change the reference signal value between +15 and -15 units (green line). Notice how the perceptual signal (blue line) overshoots the reference value, then oscillates around it as the excursions diminish. This behavior is called “ringing.” Although ringing is not a sign of good control, the system is nevertheless stable since the perceptual signal eventually converges to a single value. By reducing the gain a bit we can eliminate the ringing.

In the next screenshot, the gains have been returned to their default values, but the “input delay” has been increased. This change increases the lag in the system.

Increasing the lag has again introduced ringing. Again the perceptual signal eventually converges on a single value so the system is stable, although once again it is on the verge of instability.

Ringing is not confined to poorly tuned control systems. You can also observe it in equilibrium systems like the ball-in-the-bowl example if there is a bit of friction in the system. The ball oscillates in a diminishing pattern until it comes to rest at the minimum energy point, the bottom of the bowl. If you tip the bowl up a bit, the ball will still settle at the lowest point in the bowl, but this will be a different location than before – similar to a change of reference value in a control system. The new location is not a reference value, however; it’s just a new equilibrium position relative to the bowl’s inner surface.

The final screenshot shows what happens when the parameters of gain and/or lag are such that the system becomes unstable.

The system actually started to ring before I had a chance to change the reference. The magnitude of the excursions above and below the reference level increases with each swing. The amplitude is running off to infinity (or at least the maximum output the system is capable of delivering to the CV) and can lead to destruction of a physical output device. What is happening is that the usual negative feedback has changed to positive feedback because the output is getting out of phase with the changes to the CV: by the time the system begins to correct a swing in one direction, the CV is changing in the other direction and the output amplifies that swing rather than attenuating it.

From these demonstrations I think it’s safe to conclude that stability/instability is a property of behavior that can be observed in dynamical systems in general, including both control systems and equilibrium systems.

Bruce

[Martin Taylor 2013.12.29.12.56]

Using the assumption that all the waveforms in the loop have the
same growth rates, which is a pretty good assumption since all the
functions are linear, I have computed stability measures for Bruce’s
conditions. I printed out Bruce’s message and measured as best I
could the parameters based on the values in the panel. The problem
is that the lines are pretty thick compared to the range of data. I
judged the time scale by looking at when the excursion started after
the change in reference value, and assuming I could correctly read
the “Input delay” panel item. However, here are my estimates:

First screenshot (high gain): stability 2.0 per second
Second screenshot (increased lag): stability 0.6 per second
Third screenshot (unstable): stability -0.95 per second
BA: The system actually started to ring before I had a chance to
change the reference.

MT: Actually it probably started to ring as soon as you started the
program, but the excursions were initially infinitesimal.
That was an interesting exercise. Something that would be even more
interesting would be to have a series of these with a given lag and
gain varying upward from zero until the stability goes negative.
Martin
