Sampling problems

[From Bill Powers (960119.1930 MST)]

Martin Taylor (960119 14:14) --

In Simcon, every function is assumed to operate at the same time as
all other functions and to have a delay time of 1 iteration. ...

     But this is irrelevant to the question of how Simcon assures that
     the spectrum of any variable has no substantial components at
     frequencies above 1/(2 dt). And THAT is the critical question if your
     Simcon simulation is to tell you anything about the analogue system
     it is simulating.

I think you're looking for a non-problem. As I said, we simply select a
dt such that decreasing it further has no effect on the outcome of the
simulation. Since we're modeling physical systems, there is always some
point in the simulation where physical time enters, and that limits the
actual bandwidth of the whole system. You don't have to think about
spectra and Nyquist criteria and all that fancy stuff. The ultimate test
is whether the simulation behaves like the real system.
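
That test is easy to mechanize. As a minimal sketch (Python; the toy
simulate() below is a stand-in exponential decay, not Simcon or any
real tracking model), keep halving dt until halving it again no longer
changes the outcome:

    # Halve dt until the simulated result stops changing; the surviving
    # dt is small enough for this model.
    def simulate(dt, tau=0.2, t_end=1.0):
        y, t = 1.0, 0.0
        while t < t_end:
            y += -dt * y / tau          # Euler step for dy/dt = -y/tau
            t += dt
        return y

    def converged_dt(dt=0.1, tol=1e-4):
        prev = simulate(dt)
        while True:
            dt /= 2.0
            cur = simulate(dt)
            if abs(cur - prev) < tol:   # further halving has no effect
                return dt, cur
            prev = cur

    dt, y = converged_dt()
    print(dt, y)                        # y approaches exp(-5), about 0.0067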

If you insert filtering into the simulation, you're making a statement
about that part of the physical system -- you're modeling a system with
filters in it.

     The penalties I am concerned with are those that occur with dt
     values big enough to lead to important aliasing, and that's the
     problem that showed up in your computation of "optimum slowing
     factor", as we showed together in an exchange of messages some
     years ago. It's quite different from the RC filter effect that
     stabilizes the analogue filter.

Yes, quite so, but it's not different from the RC filter effect. The
computational oscillation problem was a discovery that explained a
number of anomalous results obtained before I realized it could happen.
It occurred because dt was set to too large a value in trying to model a
system. The problem showed up only when the gain was raised too far, or
when the RC time constant being simulated was made too short. But this
problem is easy to fix.
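
To see the effect concretely, here is a minimal sketch (Python; the
loop equation is the generic leaky-integrator form, and the gain and
time constant are illustrative values, not from any particular model):

    # Leaky-integrator loop: o := o + (dt/tau)*(G*e - o), with p = o and
    # e = r - p. The discrete update is smooth only while (dt/tau)*(G+1)
    # stays well below 1; beyond that the simulated output rings or
    # diverges even though the analog loop being modeled is stable.
    def run_loop(dt, tau=0.5, G=10.0, r=1.0, t_end=2.0):
        o, t = 0.0, 0.0
        while t < t_end:
            e = r - o                       # error signal
            o += (dt / tau) * (G * e - o)   # leaky-integrator output
            t += dt
        return o

    for dt in (0.2, 0.05, 0.001):   # (dt/tau)*(G+1) = 4.4, 1.1, 0.022
        print(f"dt={dt}: final output {run_loop(dt):.4f}")

The first dt diverges, the second rings before settling, and the third
settles smoothly on G*r/(G+1), about 0.909; only dt changes, and only
the oversized values misbehave.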

With dt small enough we can use the same leaky-integrator approach to
model systems containing leaky integrators, making sure that dt is so
small that the leaky integrator is working far from the limit where
computational oscillations might occur. You say essentially the same
thing, but making the slowing factor the variable that needs to be
limited:

     Making the slowing factor small enough provides a low-pass filter
     at one point in the loop, making sure that the simulation actually
     does look like the analogue loop. If the analogue loop has such a
     filter, then dt will be short enough that there is no important
     aliasing. But not having aliasing affect the simulation is
     different from having an analogue filter that is stable because the
     high-frequency gain is low enough.

Yes. But the simple fix that makes the simulation like the real system
is to make dt small enough, not to increase the slowing factor. The
slowing factor has to be set to reproduce the RC time constants in the
real system; that's not an arbitrarily adjustable parameter. The
oscillation problem was discovered because, with the dt's originally
used, setting the slowing factor to the value required to simulate the
real system produced these oscillations. By reducing dt,
we were able to reach the required value without the oscillations. It is
dt that we can choose for convenience and to avoid oscillations. The
slowing factor is part of the simulation, and can't be adjusted at will.

Actually, "aliasing" isn't quite the right word here, as there is no
sampling of an independently varying signal in a simulation. The
simulation itself creates all signals and can't make them vary any
faster than the iteration rate. There's no question of sampling a signal
too few times per cycle; the fastest that the simulation can make a
variable rise and fall is one cycle per two iterations. The Nyquist
criterion is automatically met.
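
In miniature (a trivial Python illustration of that limit, nothing
more):

    # The fastest waveform a simulation can generate flips sign once per
    # iteration: one full cycle per two iterations, i.e. frequency
    # 1/(2*dt), exactly the Nyquist limit for iteration interval dt.
    dt = 0.01                                # iteration interval (hypothetical)
    fastest = [(-1) ** n for n in range(8)]  # +1, -1, +1, -1, ...
    print(fastest, "max frequency:", 1 / (2 * dt), "Hz")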

Where aliasing is possible is in taking real data by sampling analog
variables. When I take data for comparison against a simulation, the
physical variables are sampled at a rate already determined to be more
than fast enough to represent the fastest changes accurately (at least 5
samples for any one-way transition, or 10 per full cycle). The only
borderline cases are those involving Apple computers, where the sampling
rate can be 25 samples per second or lower (depending on the language
the program is written in). On PCs, we normally sample at 63 samples per
second, and in special cases, using 800x600 super VGA graphics, 84
samples per second. Whatever sampling rate is used to take the data is
also used to define dt in simulations. These higher rates exceed the
Nyquist requirement by a large margin. As I think I have mentioned
before, I have tested data sampling rates up to 1000 per second, so I
know what is needed to represent data without aliasing problems. The
problems begin to be noticeable below about 25 samples per second.
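
As a quick check of those margins (Python; the 10-samples-per-cycle
figure is the working criterion stated above):

    # Fastest signal each rate can represent cleanly under the
    # 10-samples-per-full-cycle criterion, versus the Nyquist limit
    # (rate/2) that mere freedom from aliasing would require.
    for rate in (25, 63, 84, 200, 1000):     # samples per second
        print(f"{rate:5d}/s: clean up to ~{rate / 10:.1f} Hz "
              f"(Nyquist limit {rate / 2:.1f} Hz)")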

Martin, there's really nothing new about the considerations you're
bringing up. I've had them in mind since I started doing simulations
back in the '70s. It's the sort of thing any engineer would
automatically think about, and the solutions are much simpler than you
make them out to be.

     All in all, what I'd like to achieve is an understanding on the
     part of simulators that they should do what you said you do, in the
     paragraph quoted above (or anything that has the equivalent effect
     of ensuring that your results are not contaminated by aliasing and
     can therefore be believed as a description of the analogue loop
     being simulated).

Well, good. But isn't this getting to be overkill? The main message to
simulators is "be sure you sample the data often enough to catch all
important changes." And then of course, use that sampling interval as dt
in simulations. I'd say, if you have the memory capacity and the
equipment, sample 200 times per second and forget about this problem.

-----------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 960122 00:15]

Bill Powers (960119.1930 MST)

     Martin, there's really nothing new about the considerations you're
     bringing up. I've had them in mind since I started doing simulations
     back in the '70s. It's the sort of thing any engineer would
     automatically think about, and the solutions are much simpler than
     you make them out to be.

I only brought up the topic because two items showed up again recently--
Rick's two-step loop, and your restatement of the so-called "optimum
slowing factor that corrects the error in one iteration" notion that we
disproved so long ago. Maybe you did think of it in the '70s, and maybe
it is the sort of thing every engineer would automatically think of, but
sometimes it gets forgotten, and erroneous and misleading statements get
made as a consequence, and then some of them get passed on as fact by
those who believe the source rather than the science.

Actually, what is simple is the work-around that you use, not the
solution. It's not really worth pursuing further, provided that we
don't get into other misleading consequences of undersampling. However,
contrary to what you say, the problem is indeed one of aliasing. If
you don't include a specific filter in the loop under study (and an
integrating output stage IS such a filter), then the simulated loop
doesn't have a specifiable bandwidth for its signals, and you don't
know whether the loop dynamics are introducing spectral components that
are aliased until you try the work-around.

There's plenty else to discuss, and if nothing else misleading shows up,
I'm dropping the topic right here.

Martin