[Martin Taylor 970311 11:30]
Hans Blom, 970311
> (Martin Taylor 970310 10:50)
>>Given the formalism of difference equations, where data are defined
>>only at the points T, T+dt, T+2dt, T+3dt, etc., it is indeed
>>impossible to talk about intermediate times. As to the "real"
>>system, it can usually be expressed in this formalism. Shannon says
>>so ;-).
>I find this an obtuse answer, given the context of the original.
Do you find that the performance of Bill's PCT controller can be
adequately expressed with the graphs he presents on the computer's
display?
You cannot tell from one single display whether it is adequate. If the
same display comes up (plotted in simulated seconds, not numeric counts
of dt samples) when dt is reduced substantially, then yes, it has been
adequately expressed.
>The question was whether "one-jump" correction would be found,
>without oscillation, in a correct MCT control system in which dt was
>defined by the observation sample interval of the REAL control
>system being simulated.
Here we have this discrepancy between the "real" thing and its
simulation again. We cannot "know" the real thing; we have to be
satisfied with a "simulation", regrettably. And the simulation is
"good enough" if it adequately captures all the "interesting"
properties of what we could possibly perceive.
You forget that there are THREE worlds to be considered, not two. There
is the world observed by the controller, the world observed by the analyst,
and the world in which the controller acts. There's actually a fourth world,
the unknowable outer world that all this is supposed to simulate.
Every DT msec, your controller observes the world in which it acts. Its
effects on the world in which it acts are continuous, or at least as fine
grained in time as that world allows. The world in which the controller
acts is a simulation of something. Of what? Of some simplified idea of
what the "real" world might be. The simulation is supposed to show how
a controller of something physical (in this case a theodolite) would behave
in the everyday world of tangible objects. Personally, I don't care whether
the time passing in this real world in which I live is quantized into
intervals of about 10^-43 sec. On a time-scale of nanoseconds or greater,
it seems to be continuous.
If you are simulating a world in which time appears to run continuously,
using a simulation in which the situation can be expressed only at those
discrete moments at which the analyst/experimenter observes, you have
to be sure that the behaviour of the simulated entities in the world
doesn't change when you change your observation moments or sampling
interval. This isn't a Heisenberg world in which simply observing it
differently changes the way it behaves.
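To make that check concrete, here is a minimal sketch (in Python, with a
hypothetical mass-spring-damper standing in for the theodolite; none of the
names or parameter values come from your code or Bill's): integrate the same
continuous world at successively halved values of dt, and accept a dt only
when the state at a fixed simulated moment stops changing.

    # Hypothetical plant: x'' = (-k*x - c*v)/m, explicit-Euler integrated.
    def simulate(dt, t_end=5.0, m=1.0, k=4.0, c=1.0):
        x, v = 1.0, 0.0
        for _ in range(int(round(t_end / dt))):
            a = (-k * x - c * v) / m
            x += v * dt
            v += a * dt
        return x

    # Halve dt until the state at the same simulated moment stops changing;
    # only then can the discrete run stand in for the continuous world.
    dt, prev = 0.1, simulate(0.1)
    while True:
        dt /= 2.0
        cur = simulate(dt)
        if abs(cur - prev) < 1e-4:
            break
        prev = cur
    print("dt =", dt, "suffices; x(5.0) =", cur)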
In the simulation world, the one in which you and Bill are testing out
different methods of controlling a simulated theodolite, the theodolite
has a certain (simulated) mass, moment of inertia, viscous drag, springs,
magnetic characteristics..., most of which are taken to be of no interest.
Some of them the two of you have agreed on as specifications that affect
the _continuous_ behaviour of the object in the _continuous_ simulation
world. The ones of no interest are of no interest because they don't affect
the behaviour of the object in the simulated world, though they might
affect the behaviour of a real theodolite if you made a real-world
controller based on the simulation results.
This theodolite moves smoothly in the simulated world, whether you, the
analyst/observer, look at it often or seldom. In the simulation world, you
have constructed the reality. You _know_ what is real in the simulation
world. In it, you are God the Creator. And since you have asserted that this
simulation world is supposed to correspond in important ways to what you
can observe of the tangible world, time in it MUST seem to be continuous.
That means that you can observe it every simulated nanosecond, every
simulated millisecond, or every simulated year, and it won't affect the
value of the observation at (simulated) Feb 10, 1992 at 10:32:16.15672038.
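The point is easy to demonstrate. In the sketch below (a hypothetical world
with a made-up time constant; nothing here is from either of your models),
the simulated world's state is a continuous function of simulated time, so
observing it on a millisecond grid or a one-second grid yields identical
values at the moments the two schedules share.

    import math

    def state(t, tau=2.0):
        # Continuous state of a hypothetical simulated world.
        return math.exp(-t / tau)

    # Two observation schedules over the same ten simulated seconds:
    every_ms  = [state(k * 0.001) for k in range(10001)]
    every_sec = [state(k * 1.0)   for k in range(11)]

    # The value at simulated t = 7 s is the same on either schedule.
    assert math.isclose(every_ms[7000], every_sec[7], rel_tol=1e-9)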
You have chosen to present a model-based control system that observes this
continuously varying simulation world at discrete moments separated by DT
simulated seconds. You assert that if you first look at the simulated
theodolite at time T, then at time T+DT the theodolite will have moved
to its reference position, and will forever thereafter be at its reference
position with zero error. You did not (initially) assert that the theodolite
would be at its reference position only at the moments the modelled
discrete-sampling controller looked at it, although that now seems to be
what you are asserting.
From the point of view of a person in this simulated world who wanted to
make an observation using this simulated theodolite, it would matter greatly
whether between the observation moments T+DT, T+2DT, ... T+kDT (at which
it is at its reference position) the theodolite oscillated wildly, coming
to the reference only at those moments, or whether it came to its reference
at T+DT and stayed there calmly. To see which is the case, you have to observe
the simulation world at moments _between_ the moments when the discrete-
sampling controller observes the world.
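In code, the analyst's test looks something like the sketch below (again,
all names and numbers are hypothetical, not taken from your theodolite
model): the world is integrated every dt, but the controller is allowed to
observe it, and change its output, only every DT = 10*dt. The fine-grained
trajectory then shows whether the plant sits calmly at the reference between
the controller's moments or swings wildly through them.

    DT = 0.1              # controller's observation/control interval
    SUBSTEPS = 10         # analyst looks 10 times per controller interval
    dt = DT / SUBSTEPS

    reference = 1.0
    x, v = 0.0, 0.0       # hypothetical plant state (position, velocity)
    u = 0.0               # controller output, held between its samples
    trajectory = []       # the analyst's fine-grained record

    for step in range(500):
        trajectory.append((step * dt, x))
        if step % SUBSTEPS == 0:
            u = 50.0 * (reference - x)   # controller sees x only now
        a = u - 4.0 * x - 1.0 * v        # plant dynamics, advanced every dt
        x += v * dt
        v += a * dt

    # The controller's own view keeps only every SUBSTEPS-th point; any
    # intersample oscillation is visible only in the full trajectory.
    controller_view = trajectory[::SUBSTEPS]
    print("worst error the analyst sees:",
          max(abs(xi - reference) for t, xi in trajectory[SUBSTEPS:]))
    print("worst error at the controller's own moments:",
          max(abs(xi - reference) for t, xi in controller_view[1:]))

If the two records tell the same story when dt is halved yet again, the
result can be believed; if they diverge, a one-DT "correction" is an
artifact of looking only when the controller looks.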
>The only way to test this is to use a simulation sampling interval
>much shorter than the sampling interval of the real system being
>simulated. If you don't do that, you have no idea whether your
>simulation results apply to the real system or whether they are
>contaminated by huge amounts of aliasing.
Correct. In practice, choosing a "small enough" dt works fine.
Testing whether dt is small enough is usually simple enough: try a
number of ever smaller values and stop when things don't change
anymore.
Right, but if at the same time you change DT, the intervals at which
the modelled controller observes the world, you have changed the entity
you are simulating, and you have to reduce dt yet again to keep it smaller
than DT. You can't win by reducing DT toward dt while requiring dt to stay
smaller than DT forever.
>There's a basic principle here: NEVER believe any simulation results
>that happen in a single simulation sampling interval.
Right!
Well, if you agree with this, why do you present results of simulations
in which the _only_ important result is something that happens in a single
simulation interval?
>As for "Shannon says so"...Shannon says nothing about systems in
>which the observation bandwidth is less than the signal bandwidth,
>except that the signal CANNOT be reconstituted exactly except by a
>pure fluke.
Shannon tells us that going to ever smaller values of dt is useless
from a certain point on. That's the important thing here. Going to
the limit of dt=0, as Bill Powers suggests, is overkill.
You don't go to zero. You go until reducing dt any further makes no
difference that matters to you, the analyst/observer of the simulated
world.
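A few lines make Shannon's restriction concrete (the frequencies here are
invented purely for illustration): a 9 Hz component observed 10 times per
second yields exactly the same samples as a 1 Hz component of opposite sign,
so no analysis of the samples alone can tell which one was in the simulated
world.

    import math

    fs = 10.0                    # observations per (simulated) second
    f_true, f_alias = 9.0, 1.0   # 9 Hz lies above the Nyquist rate fs/2

    for k in range(20):
        t = k / fs
        assert math.isclose(math.sin(2 * math.pi * f_true * t),
                            -math.sin(2 * math.pi * f_alias * t),
                            abs_tol=1e-9)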
>If you don't KNOW that the signal bandwidth has no components
>outside the band set by the sampling rate, you cannot make
>assertions about the behaviour of the simulated world except at the
>sampling moments (at best).
Once again: on the computer's display, only _points_ are plotted. Do
you really want to know what happens in between those points if the
time intervals between points are chosen small enough?
No, not if they are small enough. You've tested whether they are small
enough by looking to see whether they give the same results as larger
or smaller values. Or by a serious analysis of the spectra of your signals.
>Check the behaviour of the MCT model that observes every DT seconds,
>using a simulation that samples every dt seconds, where DT > 10*dt.
Can you be more specific? The MCT model observes AND CONTROLS
(generates a new output) every dt seconds. Do you suggest that the
controller have a far higher observation rate than its control rate?
No--that the ANALYST has a far higher observation rate than the
CONTROLLER's observation rate.
I appreciate your warnings, however. I routinely observe them...
I think you don't understand them, if what you say here represents what
you routinely observe.
Martin