The Role of Error in Control Systems

[From Fred Nickols (2003.05.24.0804)] --

I changed the topic line because I'm pulling out a piece of Bill's response
to Bruce G in the Words thread...

[From Bill Powers (2003.05.23.2030 MDT)]

Bruce Gregory (2003.0523.2042)--

I see a problem. When I ride my bike, I find I must pedal. According to
this definition, pedaling is not a goal, since I do not stop pedaling
once I begin (until I reach my destination). Yet I think of pedaling as
a controlled behavior with an ever changing reference level. Where have
I gone wrong?

What's wrong is thinking of control systems as completely correcting error.
In special cases, it's possible to get very close to zero error, but all
that keeps any real control system in action is the remaining error. If the
error ever actually went to zero, the action would eventually run down and
cease.

The reason it seems that errors are corrected is that in many cases the
sensitivity of the output function to error is very high. This does not
mean that large amounts of output are produced, but that errors are kept
very small in relation to the reference signal. The perceptions very nearly
match the reference signal, yet the small discrepancies that remain are
sufficient to cause all the variations in action that prevent the error
from getting any larger.
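As a minimal sketch of that high-gain point (my own illustration, assuming a simple proportional loop in which the perception is just the output plus any disturbance), the closed-form steady state shows the error shrinking as gain grows but never reaching zero:

```python
# Illustrative sketch, not from the post: a proportional loop where the
# perception p satisfies p = gain*(reference - p) + disturbance.
# Solving for the steady state gives a residual error of
# (reference - disturbance)/(1 + gain): small at high gain, never zero.

def steady_state_error(gain, reference, disturbance=0.0):
    """Residual error of the loop p = gain*(reference - p) + disturbance."""
    perception = (gain * reference + disturbance) / (1.0 + gain)
    return reference - perception

for gain in (10, 100, 1000):
    e = steady_state_error(gain, reference=100.0)
    print(f"gain={gain:5d}  residual error={e:.4f}")
```

At a gain of 1000 the perception matches the reference to within a tenth of a percent, yet that tiny discrepancy is exactly what sustains the output.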

Hmm. A long, long time ago, when I first joined this list, I asserted
something similar, drawing on my technical training as a fire control
technician in the Navy. I said that it was my understanding about
servomechanisms (e.g., the kind of systems that control the movement of gun
mounts) that error never went to zero, that these systems always "hunted
about the point of correspondence" (correspondence being between the actual
and ordered position of the gun mount), even if the hunting was ever so
slight. If memory serves, you, Bill, told me that I didn't really
understand how servomechanisms worked. Yet, your comments above seem to
support what I asserted a long time ago. I made my remarks about
servomechanisms in relation to the notion in PCT that zero error calls for
no response. I was making the point that zero error is not likely to occur
and cited the old fire control systems on which I used to work as an example.

So, it seems to me that zero error is indeed the point at which no response
is required; as a practical matter, however, zero error -- in a
servomechanism or in human behavior -- seems unlikely to occur (I'm
speaking of true zero). On the other hand, and again as a practical
matter, there is the notion of "tolerances": values that are plus or
minus some amount in relation to some targeted value. A servomechanism --
and human behavior -- both seem capable of staying within tolerances, that
is, of not actually staying on true zero but of "hunting" about that true
zero inside some upper and lower limit, and staying inside this limit is
good enough for practical purposes.

Do I have it right this time?

Regards,

Fred Nickols
nickols@safe-t.net

[From Bill Powers (2003.05.24.0843 MDT)]

Fred Nickols (2003.05.24.0804) --

Hmm. A long, long time ago, when I first joined this list, I asserted
something similar, drawing on my technical training as a fire control
technician in the Navy. I said that it was my understanding about
servomechanisms (e.g., the kind of systems that control the movement of gun
mounts) that error never went to zero, that these systems always "hunted
about the point of correspondence" (correspondence being between the actual
and ordered position of the gun mount), even if the hunting was ever so
slight.

Hunting is not necessary. That term was used to designate a control system
that was either in a "limit cycle" or was simply on the verge of runaway
instability. The characteristic behavior was a regular variation of the
action occurring at a particular frequency and never ceasing.

Some limit cycles have to do with "bang-bang" control, meaning control that
uses an on-off mechanism somewhere in the loop. Bang, it's on, and bang,
it's off, with no gradations between on and off. Of course the other
variables in the loop would have to change smoothly as macroscopic physical
variables always do, so what we would see would be a smooth rise and fall,
the way the temperature of a room "hunts" around an average temperature
when controlled by an ordinary household thermostat.
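A minimal sketch of such a bang-bang loop (my own illustration, assuming a toy room model in which heat leaks toward 10-degree outdoor air) shows the temperature cycling around the set-point rather than settling on it:

```python
# Illustrative sketch: a bang-bang thermostat with a hysteresis band.
# The furnace is either fully on or fully off, so the smoothly-changing
# room temperature "hunts" around the set-point in a limit cycle.

def simulate_thermostat(setpoint=20.0, band=0.5, steps=2000, dt=0.01):
    temp, furnace = 15.0, False
    history = []
    for _ in range(steps):
        if temp < setpoint - band:
            furnace = True           # bang: heater fully on
        elif temp > setpoint + band:
            furnace = False          # bang: heater fully off
        heat = 10.0 if furnace else 0.0
        loss = 0.5 * (temp - 10.0)   # heat leaking toward 10 C outdoors
        temp += (heat - loss) * dt   # the temperature itself moves smoothly
        history.append(temp)
    return history

tail = simulate_thermostat()[1000:]  # discard the warm-up transient
print(f"temperature cycles between {min(tail):.2f} and {max(tail):.2f}")
```

The oscillation never dies out: it is built into the on-off output function, not a sign of poor tuning elsewhere in the loop.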

In continuous control systems, all variables can change smoothly and
continuously from one value to another value, going through all
intermediate values on the way. Limit cycles (if they occur) result from a
combination of instability and nonlinearity. The control system is actually
unstable, and if it were linear the oscillations would get larger and
larger until the system self-destructed. The nonlinearities, however, cause
a loss of energy when the swings get large enough to go far into an
increasingly nonlinear region, losing more energy on each cycle as the
swings get larger. An equilibrium condition is created in which the losses
during each cycle balance the positive feedback that is producing the
oscillations and the oscillations level out in amplitude.

Finally, continuous linear control systems can be close to instability, the
instability showing up as damped oscillations that occur after any
disturbance. The closer to instability the system is, the longer it will
take for oscillations to die out after a brief disturbance (like hitting a
tuning fork or giving a swing a single shove). In a stable system, the
oscillations never occur; the disturbed variable simply returns smoothly to
its previous state after the disturbance is removed.
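A sketch of that contrast (my own illustration, assuming a simple second-order loop displaced once and then left alone): light damping produces ringing, heavy damping a smooth return with no oscillation at all.

```python
# Illustrative sketch: a second-order system x'' = -k*x - damping*x',
# released from a unit displacement. Counting zero crossings of x
# distinguishes ringing (many crossings) from a smooth return (none).

def ring_count(damping, k=100.0, steps=5000, dt=0.001):
    """Count zero crossings of x after an initial displacement."""
    x, v = 1.0, 0.0
    crossings = 0
    prev = x
    for _ in range(steps):
        v += (-k * x - damping * v) * dt  # semi-implicit Euler step
        x += v * dt
        if prev * x < 0:
            crossings += 1
        prev = x
    return crossings

print("lightly damped :", ring_count(damping=1.0), "zero crossings")
print("heavily damped :", ring_count(damping=25.0), "zero crossings")
```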

Long ago, before control engineers had had much experience, many control
systems showed hunting, the damped oscillation in response to disturbances.
If there were continuous, random, small disturbances, one would see the
controlled variable (like a gun on a mount) going through very small
regular oscillations that never ceased, and were of a particular frequency,
the natural resonant frequency of the unstable system. This is when the
legend started that all control systems had to hunt around their
equilibrium positions. The rumor spread to disciplines outside control
engineering, took on a life of its own, and now is one of those "facts"
about control that everyone knows, despite its being untrue.

Before long, control engineers no longer designed control systems that
hunted; the hunting wasted energy and wore out mechanical parts, so the
engineers didn't stop improving their techniques until they did away with
it. After that, control systems would simply bring their controlled
variables to within a small amount of the set-points or reference levels
and keep them there. The only variations in output were then those that
counteracted varying disturbances, and there was no after-effect of brief
disturbances in the form of damped oscillations.

That, roughly, is the story of hunting. Basically, hunting
exists either because the design is too cheap to prevent it (a home
thermostat) or because some engineer didn't know his business.

If memory serves, you, Bill, told me that I didn't really
understand how servomechanisms worked. Yet, your comments above seem to
support what I asserted a long time ago.

Sorry about the insult, but you have to specify which part of what you said
I was talking about. I was speaking about the "hunting" part. What you said
about error never going exactly to zero was correct. The error goes to a
steady value just large enough that, when amplified and turned into action,
it is just sufficient to maintain the error at that size. This can be a
perfectly steady state when the reference signal is constant, and would be
except for small random disturbances, which are opposed by small and
opposite variations in output. There is, in a properly-designed control
system, no "ringing" or tendency to continue oscillating after a transient
disturbance. There is no hunting. To say that all control systems hunt is
like saying that all wheels squeak.

>So, it seems to me that zero error is indeed the point at which no response
>is required, however, as a practical matter, zero error -- in a
>servomechanism or in human behavior -- seems unlikely to occur (I'm
>speaking of true zero).

That is true; if for any reason the error ever went to zero and stayed
there, the action would go to zero and stay there (with one exception that
will be discussed below). If action from the control system is required to
maintain a controlled variable at a nonzero reference level, the error will
never go to zero because some error is needed to sustain the action. But an
external disturbance could be applied in the same direction as the action,
aiding it. Slowly increasing the disturbance would bring the controlled
variable gradually closer to its reference level. Doing that would decrease
the error and thus decrease the action, so eventually we would see the
controlled variable being maintained exactly at its reference level by the
steady disturbance alone, while the action of the control system had just
dropped to zero. You can work out what would happen after that if the
disturbance kept on increasing.
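That sequence can be worked out numerically (my own illustration, assuming a simple proportional loop in which the perception is the sum of the action and the disturbance):

```python
# Illustrative sketch: a proportional controller holds a variable at a
# nonzero reference. As a disturbance grows in the same direction as the
# controller's own action, error and action both shrink; when the
# disturbance alone holds the variable at the reference, the action is zero.

GAIN, REF = 50.0, 100.0

def settle(disturbance):
    """Steady state of the loop p = GAIN*(REF - p) + disturbance."""
    perception = (GAIN * REF + disturbance) / (1.0 + GAIN)
    action = GAIN * (REF - perception)
    return perception, action

for d in (0.0, 50.0, 100.0):
    p, a = settle(d)
    print(f"disturbance={d:6.1f}  perception={p:8.3f}  action={a:8.3f}")
```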

Finally, the exception mentioned above. There is a type of output function
that gives a control system the ability to sustain an action even when the
error has dropped to zero. This is the "integrating" output function. As
long as there is any error, the output of this function keeps increasing,
raising the action to a higher and higher level. It increases rapidly when
the error is large, and slowly when the error is small, but it always keeps
increasing until the error is in fact zero. Of course it changes similarly
in the opposite direction if the error becomes negative, which is how the
output can be returned to zero when that is needed.
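A sketch of a pure integrating output function (my own illustration, assuming the simplest possible environment in which the perception is just the action itself):

```python
# Illustrative sketch: a loop with an integrating output function.
# The output keeps changing as long as any error remains, so at
# equilibrium the error is exactly zero while the action is still
# nonzero -- the integrator holds the level that was needed.

def integrator_loop(reference=100.0, rate=2.0, steps=5000, dt=0.01):
    output = 0.0
    error = reference
    for _ in range(steps):
        perception = output             # environment: p is just the action
        error = reference - perception
        output += rate * error * dt     # output rises while any error remains
    return error, output

err, out = integrator_loop()
print(f"final error={err:.6f}  sustained action={out:.3f}")
```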

A control system with this kind of output function can sustain a steady
action even after the error has dropped to zero. In human control systems,
there are evidently some integrating output functions because in modeling
tracking behavior we get the best fit by assuming that kind of output
function. Other examples exist. But human integrators are imperfect; a
small steady error will cause the output to increase steadily, but the
larger it gets the slower the increase is, until finally the increase
stops. We call that a "leaky" integrator because it acts like a bucket with
a hole in the bottom. The higher the water level gets, the greater the
water pressure and the faster the water leaks out. So if a faucet is
trickling water into the bucket, there will come a point where the water
level stops increasing. The pressure is great enough to make the rate of
leakage equal the rate at which water is being added. In a leaky output
function, if the error goes exactly to zero, the output will eventually
leak away and the action will go to zero.

Because of the leakiness, human integrator-style control systems do require
some continuous small error in order to sustain a continued steady output.
That is what we find with the modeling; giving the model a perfect
integrator produces behavior that fits the data quite well, but making the
integrator a little leaky produces an even better fit.
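A sketch of the comparison (my own illustration, again assuming a toy loop in which the perception is just the output): with the leak, a small steady error persists at equilibrium; without it, the error goes to zero.

```python
# Illustrative sketch: a leaky integrator, the bucket-with-a-hole of the
# post. The output decays on its own at a rate proportional to its own
# level, so a small steady error must remain to keep topping it up.

def run_loop(leak, reference=100.0, rate=2.0, steps=20000, dt=0.01):
    output = 0.0
    for _ in range(steps):
        error = reference - output           # environment: p is the output
        output += (rate * error - leak * output) * dt
    return reference - output                # residual error at equilibrium

print(f"perfect integrator: residual error = {run_loop(leak=0.0):.4f}")
print(f"leaky integrator  : residual error = {run_loop(leak=0.1):.4f}")
```

The equilibrium residual works out to leak*reference/(rate + leak), so the leakier the integrator, the larger the continuous small error it needs.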

So you win some and you lose some, Fred. What you learned from your
experiences with servomechanisms was mostly correct, but not entirely.

Best,

Bill P.