[From Bill Powers (931229.1900 MST)]

Fort Lewis' computer center will be down all day tomorrow, so
don't expect any more from me until Friday.

Cliff Joslyn (931229) --

I appreciate your attempts to find a mathematical and technical
definition of stability. To help you toward your goal, I have
some technical definitions of a stable control system that you
might find useful. I am concerned here only with "dynamic
stability," not stability in the sense that a system with high
loop gain can "stabilize" a controlled variable against
disturbances.

1. Unstable (the opposite of control)

As soon as you turn on the control system, either the damned
thing starts oscillating in larger and larger cycles until it's
making huge beautiful square waves, or it heads straight for
infinity until it blows a fuse.

2. Piss-poor excuse for stable control

When you turn on the control system, the controlled variable is
brought somewhere in the vicinity of the intended reference
level, but it is continually quivering and oscillating and
jumping back and forth, and never does settle down within specs.

3. Sort of stable control, or OK, yeah, I guess you could say
it's controlling.

Any change in the reference signal, or any disturbance, causes an
error that oscillates in diminishing phase-space circles until
eventually, after a few seconds, hours, or days, it disappears up
its own point attractor.

4. Pretty good stable control; or, it's a good cost-performance
tradeoff so why don't we just go ahead and tell the customer it's
done.

When you click the reference signal to a new setting, the
controlled variable follows it reasonably quickly to the newly
specified value, overshoots and undershoots one or two times in a
minor way, and comes to a steady value before you really start
getting concerned. A sudden disturbance causes a small excursion
of the controlled variable away from the reference level, but it
gets corrected after one or two wobbles, almost before you notice
the error.

5. Stable control; or, that, man, is what I call a control system.

When a step-change in the reference signal occurs, the controlled
variable changes at the same time to a new steady state, without
any overshoot or undershoot, so quickly that you'd have to look
closely to see the transition. When a step-disturbance suddenly
occurs, the controlled variable jumps immediately (by a small
amount) to a new value, without overshoot or undershoot, and
stays there. A dynamically stable control system is extremely
undramatic in its behavior.
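The five categories can be sketched with a toy simulation (my own
construction, not anything from the post): a one-step discrete loop
c[t+1] = c[t] + k*(r - c[t]), where the unit time step acts as a
transport lag, so the gain k alone decides which category the loop
lands in.

```python
def run(k, r=1.0, steps=40):
    """Return the trajectory of the controlled variable c,
    starting from c = 0 with reference level r."""
    c, traj = 0.0, []
    for _ in range(steps):
        c = c + k * (r - c)   # proportional correction of the error
        traj.append(c)
    return traj

for k, label in [(2.5, "1. unstable: growing oscillation"),
                 (1.8, "3. oscillation that damps out eventually"),
                 (1.2, "4. one or two small overshoots, then steady"),
                 (0.8, "5. smooth approach, no overshoot at all")]:
    print(f"k={k}: {label}; final c = {run(k)[-1]:.3g}")
```

Since the error obeys e[t+1] = (1 - k)*e[t], the loop diverges in
growing square-wave fashion for k > 2, rings for 1 < k < 2, and
creeps up smoothly for 0 < k < 1 — a crude stand-in for categories
1 through 5.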



Note that a dynamically stable control system does not
necessarily stabilize the controlled variable against
disturbances very well, or make the controlled variable exactly
equal to the reference value. A control system can be stable in
the sense that after a disturbance or change in reference signal,
the approach to the new final state occurs in one single
reasonably fast move without overshoot, yet the system may have
such a low gain that the final state does not entail exact
cancellation of the disturbance, or entail bringing the
perceptual signal to an exact match with the reference signal. We
could call stability against disturbances "static" stability,
because we measure it by using a steady disturbance. Static
stability is a matter of steady-state loop gain. Dynamic
stability is a matter of handling transient effects.
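The static-stability point can be put in numbers (my numbers, not
Powers'): for a disturbance adding directly to the controlled
variable, standard loop algebra leaves a residual error of
d / (1 + G) at steady state, where G is the steady-state loop gain
— no matter how gracefully the transient was handled.

```python
def residual_error(d, G):
    """Steady-state error left by a steady disturbance d
    in a loop with steady-state gain G."""
    return d / (1.0 + G)

for G in (3, 10, 100):
    print(f"loop gain {G:>3}: residual error = "
          f"{residual_error(5.0, G):.3f} of a disturbance of 5.0")
```

So a loop with gain 3 cancels only three-quarters of a steady
disturbance, while a loop with gain 100 cancels about 99 percent
of it — static stability is purely a gain question.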

Most of the phase-space diagrams I have seen, if they represented
the behavior of a real control system, would fall under 2. or 3.
above. I have never seen enough data on a system represented by
the usual loops and spirals to estimate the static stability that
is represented. The phase-space diagram of what I call a good
control system would not be very interesting (a single arc from
point A to point B, as in Tom Bourbon's recent plots).

Bill P.