[From Bill Powers (2008.10.20.1650 MDT)]
Tracy B. Harms (2008-10-20 13:49 Pacific)--
My immediate curiosity is about the difference between discrete models
and continuous systems. You said that "Unless you use a very small
value of dt, chances are good that some of the systems will go into
oscillation because of having too much loop gain." In a non-digital
system there is no approximation occurring over a discrete dt, it
seems to me; there are only the actually continuous changes of the
components. Yet, I'd wager that the same sort of oscillation due to
too much loop gain would be possible. I can't quite put my finger on
the right way to ask the question that floats in front of me as I
compare these two things...
Your instincts are excellent. There are indeed two causes of
oscillation in digital simulations. One is real, and the other is
what I call a "computational oscillation."
The real oscillation can happen, for example, when the underlying
continuous system has more than a single integration in the feedback
loop. With only one integration, all continuous control systems are
unconditionally stable with any loop gain no matter how large (until
you reach limits set by the speed of conduction of signals in a
wire). With two integrations and a high enough loop gain, the lag
added by the second integration (or by any other means) makes it
possible for the loop gain, nominally negative, to become positive
and greater than 1 at some high-enough frequency. When that happens,
the loop will go into spontaneous oscillations which increase in
amplitude until some physical limit is reached. This happens in real
control systems and can shake them to pieces. Needless to say,
oscillating systems do not control.
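To see the real kind in action, here is a minimal sketch in Python.
The structure and every parameter value are my own illustration, not
code from any of our demos: a small lag in the output (the "any other
means" just mentioned) feeding two integrations. The oscillation it
produces survives any reduction of dt, because it belongs to the
continuous system being modeled:

    # Hypothetical sketch: output lag plus TWO integrations in the
    # loop. The extra phase lag pushes the loop past 180 degrees,
    # the nominally negative feedback becomes positive at some
    # frequency, and the oscillation grows. It is "real": it
    # persists no matter how small dt is made.

    dt   = 0.0001    # very small dt, so no computational artifact
    gain = 10.0      # loop gain (assumed value)
    tau  = 0.05      # small extra lag in the output (assumed value)
    r    = 1.0       # reference signal
    u = v = p = 0.0  # lag state, first integral, second integral

    for step in range(100000):
        e = r - p                        # error signal
        u += (gain * e - u) * dt / tau   # output function with a lag
        v += u * dt                      # first integration
        p += v * dt                      # second integration
        if step % 10000 == 0:
            print(f"t = {step*dt:5.1f} s   p = {p:12.4f}")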
The computational oscillation happens only in a digital
implementation of a continuous feedback loop. Basically, there is a
transport lag of one iteration-time (dt) in any closed loop that is
digitally simulated. If the components of the system correct errors
too rapidly, so that signals can change appreciably in the time
represented by one dt, an error signal can, with sufficient loop
gain, produce too large a correction, so that on the next iteration
the error is larger than, and of opposite sign to, the error at the
start of the previous iteration. Once that happens, the
overcorrections simply get larger and larger, each full cycle of the
oscillation having a duration of exactly twice the iteration
interval, 2*dt.
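A sketch of the computational kind, again hypothetical Python with
assumed numbers: a loop with a single integrating output, which in
continuous time is unconditionally stable at any gain, driven with a
dt so coarse that gain*dt exceeds 2:

    dt   = 0.1     # far too coarse: gain * dt = 3
    gain = 30.0    # loop gain (assumed value)
    r    = 1.0     # reference signal
    o    = 0.0     # output quantity

    for step in range(12):
        p = o                  # environment: perception = output
        e = r - p              # error signal
        o += gain * e * dt     # one Euler step of the integrating output
        print(f"step {step:2d}   e = {e:10.1f}")

    # The error is multiplied by (1 - gain*dt) = -2 each iteration:
    # it flips sign every dt and doubles, so one full cycle of the
    # oscillation lasts exactly 2*dt.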
That's the giveaway that the oscillations aren't real. The
oscillation period of 2*dt says that these are artifacts of the
method of computation and do not represent anything in the real
continuous system being modeled. To get rid of them, we simply make
sure that one element in the control loop responds slowly enough so
that the delayed corrections are smaller than the original error. We
usually put that slowing into the output function, though it can go
anywhere. If the environment or any component in the loop contains a
natural slowing factor, such as an integration like the one that
turns a force into a velocity, that is sufficient by itself. We would
then have to be sure that dt is short enough so that with the desired
loop gain, computational oscillations do not occur. Since no more
than one integration is allowed in a control loop if it is to be
unconditionally stable, we can't add another slowing factor; we're
stuck with the one that already exists, because it's part of the
model, and changing it would make the model inaccurate. Shortening
dt or lowering the loop gain is then the only option, and if we're
modeling an existing system we can't alter the loop gain either.
That leaves shortening dt as the only way to stop the computational
oscillations.
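And the remedy, in the same hypothetical loop as before: shorten dt
until gain*dt is comfortably below 1, and the artifact disappears:

    dt   = 0.01    # now gain * dt = 0.3
    gain = 30.0
    r, o = 1.0, 0.0

    for step in range(12):
        e = r - o              # error signal
        o += gain * e * dt     # same integrating output, smaller step
        print(f"step {step:2d}   e = {e:8.5f}")

    # The error now shrinks by the factor (1 - gain*dt) = 0.7 per
    # iteration with no sign reversals: the discrete computation is
    # finally tracking the continuous exponential decay it models.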
Why don't we want to shorten dt if we don't have to? Purely practical
reasons. If dt has to be one millionth of a second, that means we
have to do one million iterations of the simulation to show the
behavior over only one second of time. That could slow down the
computations to the point where the whole project becomes
impractical. Forty years ago, when I was programming on a Digital
Equipment Corporation PDP-8/S, simulating the behavior of 500 control
systems, as we are now doing, was simply beyond reach. Making dt small
enough would have meant using half a semester to do a one-minute run.
One addition of 12-bit numbers took 36 microseconds (instead of
today's couple of nanoseconds for 32-bit numbers).
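For scale, a back-of-envelope using only the two figures just quoted;
everything else about either machine is left out of the estimate:

    dt = 1e-6                 # one millionth of a second
    print(int(1.0 / dt))      # 1,000,000 iterations per simulated second

    t_then = 36e-6            # 36 us per 12-bit addition (PDP-8/S)
    t_now  = 2e-9             # ~2 ns per 32-bit addition (today)
    print(t_then / t_now)     # ~18,000: the old machine's handicap per add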
Here's a new approach, where the columns indicate
the errors which 100%, 75%, 50%, and 25% of the systems are
less-than-or-equal-to. Here are some results from runs on systems
sized at 100 and 500:
It looks as if you've really got the whole model running correctly.
When you have a large number of systems, convergence to a final
steady state can be very, very slow since we're not doing any
optimizations at all. If you made dt or the loop gain adjustable,
you might be able to set them to obtain the fastest feasible
convergence for a given input matrix. But there will always be some
random input matrices that are close to the critical state
(determinant of zero) and contain a lot of conflict between control
systems, and then you'll just have to wait. Maybe you could increase
dt or the loop gain to speed them up, but that would probably cause
trouble with the control systems that happen to have nice parameters.
I don't think you want to get into adjusting gain for each control system.
Remember that my point in writing this demo was to test the idea that
assemblages of control systems could independently control different
aspects of a common pool of environmental variables simply by making
the output matrix the transpose of the input matrix. I didn't know
then that Richard Kennaway would come along and prove that this was
correct with about three steps of mathematical reasoning.
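For anyone who wants to replay the idea, here is a compact sketch of
that kind of demo, assuming integrating output functions; the use of
NumPy, the sizes, the gain, and dt are all my choices, not the
original program's:

    import numpy as np

    rng  = np.random.default_rng(1)
    N    = 100                 # number of control systems and of
                               # shared environmental variables
    dt   = 0.005
    gain = 1.0                 # gain*dt kept small enough to avoid the
                               # computational oscillation discussed above

    A = rng.uniform(-1.0, 1.0, (N, N))   # input (perception) matrix
    r = rng.uniform(-1.0, 1.0, N)        # reference signals
    o = np.zeros(N)                      # output quantities
    q = np.zeros(N)                      # common pool of variables

    for step in range(20001):
        p = A @ q                # each system's perceptual signal
        e = r - p                # error signals
        o += gain * e * dt       # integrating output functions
        q = A.T @ o              # outputs act through the TRANSPOSE
        if step % 5000 == 0:
            print(f"step {step:6d}   max |e| = {np.abs(e).max():.4f}")

    # The errors all shrink together, though slowly: with the output
    # matrix equal to the transpose of the input matrix, the loop
    # reduces to o' = gain*(r - A A^T o), and A A^T is positive
    # semidefinite -- presumably the heart of that short proof. A
    # nearly singular A means tiny eigenvalues of A A^T: the slow,
    # conflicted modes mentioned above.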
Best,
Bill P.