# simulation sampling

[Martin Taylor 970310 10:50]

[Hans Blom, 970310c]

(Martin Taylor 970306 14:10)

>>Hans says that one cannot talk about the varying values between the
>>observed sample points. But the real system has values at all
>>moments in time, whether they are observed or not.

>Given the formalism of difference equations, where data are defined
>only at the points T, T+dt, T+2dt, T+3dt, etc, it is indeed
>impossible to talk about intermediate times. As to the "real" system,
>it can usually be expressed in this formalism. Shannon says so ;-).

I find this an obtuse answer, given the context of the original. The
question was whether "one-jump" correction would be found, without
oscillation, in a correct MCT control system in which dt was defined
by the observation sample interval of the REAL control system being
simulated.

The only way to test this is to use a simulation sampling interval much
shorter than the sampling interval of the real system being simulated. If
you don't do that, you have no idea whether your simulation results apply
to the real system or whether they are contaminated by huge amounts of
aliasing.

There's a basic principle here: NEVER believe any simulation results that
happen in a single simulation sampling interval. They may be right (in
Tracy Harms' word, "true") but they may not. Unlike the real world, in
which we cannot determine "truth", in the simulation world we can. We can
observe and modify the simulation world to match the assumed real world
as closely as we like, and observe whether our results change as we change
the match precision.

If we find that our simulation results change appreciably when we change
the degree of match of the simulation with the (assumed) real world, we have
reason to worry about the "truth" of those results when they are applied
to the "real" real world. Of course, no matter how well the simulation
matches the assumed real world, we know nothing of its "truth" when applied
to the real real world. All we know is whether what we get from the
assumed real world fails us when we use it in the real real world. As,
I think, both Tracy and Bill Powers assert in their different ways in their
recent dialogue.

As for "Shannon says so"...Shannon says nothing about systems in which the
observation bandwidth is less than the signal bandwidth, except that the
signal CANNOT be reconstituted exactly except by a pure fluke. And it's
not Shannon who says that the signal can be reconstituted exactly if the
signal bandwidth is confined to the observation bandwidth (i.e. samples
at least every 1/2W seconds). It's a fact of Fourier analysis.
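To make the aliasing hazard concrete, here is a minimal sketch (the 9 Hz signal and the 10 Hz sampling rate are arbitrary choices for illustration): a tone above half the sampling rate produces exactly the same samples as a lower-frequency tone, so an undersampled record cannot reveal which one was actually present.

```python
import math

# A 9 Hz sine sampled at only 10 Hz (Nyquist would demand a rate above
# 18 Hz) produces exactly the samples of a phase-inverted 1 Hz sine.
f_signal = 9.0    # Hz, true signal frequency (illustrative assumption)
f_sample = 10.0   # Hz, sampling rate, below the Nyquist rate 2*f_signal
f_alias = f_sample - f_signal   # 1 Hz apparent (aliased) frequency

max_gap = 0.0
for n in range(100):
    t = n / f_sample
    true_sample = math.sin(2 * math.pi * f_signal * t)
    alias_sample = -math.sin(2 * math.pi * f_alias * t)  # inverted 1 Hz sine
    max_gap = max(max_gap, abs(true_sample - alias_sample))

# At the sample moments the two signals are indistinguishable.
assert max_gap < 1e-9
```

Nothing in the samples themselves can tell the analyst which signal was present; only knowledge that the signal band was confined below half the sampling rate licenses any claim about the intermediate times.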

If you don't KNOW that the signal bandwidth has no components outside the
band set by the sampling rate, you cannot make assertions about the behaviour
of the simulated world except at the sampling moments (at best). And if you
don't test your model by sampling more often than the period in which it
claims to have generated some significant event, you don't know at all
whether that event occurs even in the assumed real world, let alone the
real real world.

Check the behaviour of the MCT model that observes every Dt seconds, using
a simulation that samples every dt seconds, where Dt > 10*dt.
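A minimal sketch of that check, under assumed stand-ins (a first-order plant with time constant tau and a dead-beat control law; the actual MCT model would differ): the world is integrated at the analyst's fine step dt, while the controller only observes and acts every Dt = 10*dt.

```python
import math

# The controller observes and acts every Dt simulated seconds, while the
# analyst integrates the "world" at a much finer step dt (Dt = 10*dt).
dt = 0.001               # analyst's simulation step, seconds
Dt = 10 * dt             # controller's observation/control interval
tau = 0.05               # plant time constant: tau * dx/dt = u - x (assumed)
a = math.exp(-Dt / tau)  # plant decay over one controller interval

x, r, u = 0.0, 1.0, 0.0
trajectory = []          # fine-grained record only the analyst sees

for n in range(2000):                # 2 simulated seconds
    if n % 10 == 0:                  # controller samples the world
        u = (r - a * x) / (1 - a)    # "one-jump" (dead-beat) output, held
    x += (u - x) / tau * dt          # fine-step Euler integration of plant
    trajectory.append(x)

assert abs(trajectory[-1] - r) < 0.05
```

The point of the structure is that `trajectory` records the world at dt resolution, so any intersample excursion would be visible to the analyst even though the controller, sampling only at intervals of Dt, could never see it.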

Martin

[Hans Blom, 970311]

(Martin Taylor 970310 10:50)

>>Given the formalism of difference equations, where data are defined
>>only at the points T, T+dt, T+2dt, T+3dt, etc, it is indeed
>>impossible to talk about intermediate times. As to the "real"
>>system, it can usually be expressed in this formalism. Shannon says
>>so ;-).

>I find this an obtuse answer, given the context of the original.

Do you find that the performance of Bill's PCT controller can be
adequately expressed with the graphs he presents on the computer's
display? Note that he plots only _points_, not line segments. That
does not seem to matter, however. It _would_ matter if the number of
points plotted were too small to show all the "information" in the
signals. So little that is "interesting" occurs _between_ those points
that the points by themselves are adequate.

>The question was whether "one-jump" correction would be found,
>without oscillation, in a correct MCT control system in which dt was
>defined by the observation sample interval of the REAL control
>system being simulated.

Here we have this discrepancy between the "real" thing and its
simulation again. We cannot "know" the real thing; we have to be
satisfied with a "simulation", regrettably. And the simulation is
"good enough" if it adequately captures all the "interesting"
properties of what we could possibly perceive.

>The only way to test this is to use a simulation sampling interval
>much shorter than the sampling interval of the real system being
>simulated. If you don't do that, you have no idea whether your
>simulation results apply to the real system or whether they are
>contaminated by huge amounts of aliasing.

Correct. In practice, choosing a "small enough" dt works fine.
Testing whether dt is small enough is usually simple enough: try a
number of ever smaller values and stop when things don't change
anymore.
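That halving procedure can be sketched as follows, using an arbitrary stand-in system (x' = -x over one second; the tolerance is likewise an assumption):

```python
import math

# Integrate x' = -x, x(0) = 1, over one second with Euler steps,
# halving dt until the result stops changing to within a tolerance.
def simulate(dt, t_end=1.0):
    x = 1.0
    for _ in range(int(round(t_end / dt))):
        x += -x * dt              # one Euler step of x' = -x
    return x

dt = 0.1
prev = simulate(dt)
while True:
    dt /= 2
    curr = simulate(dt)
    if abs(curr - prev) < 1e-4:   # "things don't change anymore"
        break
    prev = curr

# curr now approximates the exact answer exp(-1) to comparable accuracy
assert abs(curr - math.exp(-1)) < 1e-3
```

Because the Euler error here shrinks roughly linearly with dt, the change between successive halvings is itself a usable estimate of the remaining error.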

>There's a basic principle here: NEVER believe any simulation results
>that happen in a single simulation sampling interval.

Right!

>As for "Shannon says so"...Shannon says nothing about systems in
>which the observation bandwidth is less than the signal bandwidth,
>except that the signal CANNOT be reconstituted exactly except by a
>pure fluke.

Shannon tells us that going to ever smaller values of dt is useless
from a certain point on. That's the important thing here. Going to
the limit of dt=0, as Bill Powers suggests, is overkill.

>If you don't KNOW that the signal bandwidth has no components
>outside the band set by the sampling rate, you cannot make
>assertions about the behaviour of the simulated world except at the
>sampling moments (at best).

Once again: on the computer's display, only _points_ are plotted. Do
you really want to know what happens in between those points if the
time intervals between points are chosen small enough? Do you want
movies and TV shows to be presented at a higher repetition rate?
They, too, consist of still pictures repeated at a certain rate. Yet
we don't miss much, it seems (although occasionally slow motion may
let us see details that otherwise pass by too quickly).

>Check the behaviour of the MCT model that observes every Dt seconds,
>using a simulation that samples every dt seconds, where Dt > 10*dt.

Can you be more specific? The MCT model observes AND CONTROLS
(generates a new output) every dt seconds. Do you suggest that the
controller have a far higher observation rate than its control rate?
Why should we want to do that? In any practical MCT system that I
know of, the two rates are the same (except when occasional inputs
are missing; then the observation rate can be smaller). Limiting the
control rate obviously makes the controller more sluggish; its
control bandwidth will be smaller than its observation bandwidth. But
when both are "high enough" (compared to the dominant time constant
of the system to be controlled), little difference will be observed
in practice.

I appreciate your warnings, however. I routinely observe them...

Greetings,

Hans

[Martin Taylor 970311 11:30]

Hans Blom, 970311
> (Martin Taylor 970310 10:50)

>>Given the formalism of difference equations, where data are defined
>>only at the points T, T+dt, T+2dt, T+3dt, etc, it is indeed
>>impossible to talk about intermediate times. As to the "real"
>>system, it can usually be expressed in this formalism. Shannon says
>>so ;-).

>I find this an obtuse answer, given the context of the original.

>Do you find that the performance of Bill's PCT controller can be
>adequately expressed with the graphs he presents on the computer's
>display?

You cannot tell from one single display whether it is adequate. If the
same display comes up (plotted in simulated seconds, not numeric counts
of dt samples) when dt is reduced substantially, then yes, it had been adequate.

>The question was whether "one-jump" correction would be found,
>without oscillation, in a correct MCT control system in which dt was
>defined by the observation sample interval of the REAL control
>system being simulated.

>Here we have this discrepancy between the "real" thing and its
>simulation again. We cannot "know" the real thing; we have to be
>satisfied with a "simulation", regrettably. And the simulation is
>"good enough" if it adequately captures all the "interesting"
>properties of what we could possibly perceive.

You forget that there are THREE worlds to be considered, not two. There
is the world observed by the controller, the world observed by the analyst,
and the world in which the controller acts. There's actually a fourth world,
the unknowable outer world that all this is supposed to simulate.

Every DT msec, your controller observes the world in which it acts. Its
effects on the world in which it acts are continuous, or at least as fine
grained in time as that world allows. The world in which the controller
acts is a simulation of something. Of what? Of some simplified idea of
what the "real" world might be. The simulation is supposed to show how
a controller of something physical (in this case a theodolite) would behave
in the everyday world of tangible objects. Personally, I don't care whether
the time passing in this real world in which I live is quantized into
intervals of about 10^-43 sec. On a time-scale of nanoseconds or greater,
it seems to be continuous.

If you are simulating a world in which time appears to run continuously,
using a simulation in which the situation can be expressed only at those
discrete moments at which the analyst/experimenter observes, you have
to be sure that the behaviour of the simulated entities in the world
doesn't change when you change your observation moments or sampling
interval. This isn't a Heisenberg world in which simply observing it
differently changes the way it behaves.

In the simulation world, the one in which you and Bill are testing out
different methods of controlling a simulated theodolite, the theodolite
has a certain (simulated) mass, moment of inertia, viscous drag, springs,
magnetic characteristics..., most of which are taken to be of no interest.
Some of them, you have between you agreed as specifications that affect
the _continuous_ behaviour of the object in the _continuous_ simulation
world. The ones of no interest are of no interest because they don't affect
the behaviour of the object in the simulated world, though they might
affect the behaviour of a real theodolite if you made a real-world
controller based on the simulation results.

This theodolite moves smoothly in the simulated world, whether you, the
analyst/observer look at it often or seldom. In the simulation world, you
have constructed the reality. You _know_ what is real in the simulation
world. In it, you are God the Creator. And since you have asserted that this
simulation world is supposed to correspond in important ways to what you
can observe of the tangible world, time in it MUST seem to be continuous.
That means that you can observe it every simulated nanosecond, every
simulated millisecond, or every simulated year, and it won't affect the
value of the observation at (simulated) Feb 10, 1992 at 10:32:16.15672038.

You have chosen to present a model-based control system that observes this
continuously varying simulation world at discrete moments separated by DT
simulated seconds. You assert that if you first look at the simulated
theodolite at time T, then at time T+DT the theodolite will have moved
to its reference position, and will forever thereafter be at its reference
position with zero error. You did not (initially) assert that the theodolite
would be at its reference position only at the moments the modelled
discrete-sampling controller looked at it, although that now seems to be
what you are asserting.

From the point of view of a person in this simulated world who wanted to
make an observation using this simulated theodolite, it would matter greatly
whether between the observation moments T+DT, T+2DT, ... T+kDT (at which
it is at its reference position) the theodolite oscillated wildly, coming
to the reference only at those moments, or whether it came to its reference
at T+DT and stayed there calmly. To see which is the case, you have to observe
the simulation world at moments _between_ the moments when the discrete-
sampling controller observes the world.

>The only way to test this is to use a simulation sampling interval
>much shorter than the sampling interval of the real system being
>simulated. If you don't do that, you have no idea whether your
>simulation results apply to the real system or whether they are
>contaminated by huge amounts of aliasing.

>Correct. In practice, choosing a "small enough" dt works fine.
>Testing whether dt is small enough is usually simple enough: try a
>number of ever smaller values and stop when things don't change
>anymore.

Right, but if at the same time you change DT, the intervals at which
the modelled controller observes the world, you have changed the entity
you are simulating, and have to reduce dt yet again, to keep it smaller
than DT. You can't win by reducing DT until it equals dt while also
requiring dt to stay smaller than DT.

>There's a basic principle here: NEVER believe any simulation results
>that happen in a single simulation sampling interval.

>Right!

Well, if you agree with this, why do you present results of simulations
in which the _only_ important result is something that happens in a single
simulation interval?

>As for "Shannon says so"...Shannon says nothing about systems in
>which the observation bandwidth is less than the signal bandwidth,
>except that the signal CANNOT be reconstituted exactly except by a
>pure fluke.

>Shannon tells us that going to ever smaller values of dt is useless
>from a certain point on. That's the important thing here. Going to
>the limit of dt=0, as Bill Powers suggests, is overkill.

You don't go to zero. You go until reducing dt any further makes no
difference that matters to you, the analyst/observer of the simulated
world.

>If you don't KNOW that the signal bandwidth has no components
>outside the band set by the sampling rate, you cannot make
>assertions about the behaviour of the simulated world except at the
>sampling moments (at best).

>Once again: on the computer's display, only _points_ are plotted. Do
>you really want to know what happens in between those points if the
>time intervals between points are chosen small enough?

No, not if they are small enough. You've tested whether they are small
enough by looking to see whether they give the same results as larger
or smaller values. Or by a serious analysis of the spectra of your signals.

>Check the behaviour of the MCT model that observes every Dt seconds,
>using a simulation that samples every dt seconds, where Dt > 10*dt.

>Can you be more specific? The MCT model observes AND CONTROLS
>(generates a new output) every dt seconds. Do you suggest that the
>controller have a far higher observation rate than its control rate?

No--that the ANALYST has a far higher observation rate than the
CONTROLLER's observation rate.

>I appreciate your warnings, however. I routinely observe them...

I think you don't understand them, if what you say here represents what
you routinely observe.

Martin

[Martin Taylor 970311 11:30]

>In the simulation world, the one in which you and Bill are testing
>out different methods of controlling a simulated theodolite, the
>theodolite has a certain (simulated) mass, moment of inertia,
>viscous drag, springs, magnetic characteristics..., most of which
>are taken to be of no interest.

... resulting in a _model_ of the real thing which is a (usually
severe) simplification of the real thing.

>Some of them, you have between you agreed as specifications that
>affect the _continuous_ behaviour of the object in the _continuous_
>simulation world.

In our case, the problem is different. We have accepted a continuous
model J*a = u, but this continuous model cannot be represented
_exactly_ in (digital) computer code. We can come close, in fact as
close as we want (computational errors allowing). But then saying
that the continuous model is "exact" whereas the discrete one --
which comes arbitrarily close to the continuous one -- isn't, does
not seem quite reasonable to me.
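A sketch of that claim for the model J*a = u itself, with illustrative values of J and u: an Euler discretization under constant torque approaches the continuous solution theta = u*t^2/(2*J) as dt shrinks.

```python
# The continuous model J*a = u (a double integrator under torque u) cannot
# be represented exactly in discrete code, but the discrete version comes
# arbitrarily close. J and u here are illustrative assumptions.
J, u, t_end = 2.0, 1.0, 1.0

def discrete_theta(dt):
    theta, omega = 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        omega += (u / J) * dt     # a = u/J, integrated to angular velocity
        theta += omega * dt       # velocity integrated to angle
    return theta

exact = u * t_end**2 / (2 * J)    # continuous answer: 0.25
err_coarse = abs(discrete_theta(0.1) - exact)     # about 0.025
err_fine = abs(discrete_theta(0.001) - exact)     # about 0.00025
assert err_fine < err_coarse      # finer dt, closer to the continuous model
```

The error here shrinks in proportion to dt, which is the sense in which the discrete code "comes as close as we want" to the continuous model.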

>This theodolite moves smoothly in the simulated world, whether you,
>the analyst/observer look at it often or seldom.

Why can the simulated world not be a discrete one?

>In the simulation world, you have constructed the reality. You
>_know_ what is real in the simulation world. In it, you are God the
>Creator.

Sure. So I, a modest God, create a discrete world ;-).

>And since you have asserted that this simulation world is supposed
>to correspond in important ways to what you can observe of the
>tangible world, time in it MUST seem to be continuous.

This is the fallacy. I can talk about a piece of matter in terms of
discrete entities such as atoms and their positions, expressed in
numbers with a finite number of decimals. I can talk about the world
in terms of a discrete, finite number of words. I can present a movie
as a succession of discrete pictures. I can talk of time passing in
terms of number of days. Or minutes. Or nanoseconds. What's wrong
with all that? I can and do talk about (model) supposedly continuous
notions in languages with a discrete number of terms. And so do you.

Greetings,

Hans

[Martin Taylor 970318 14:00]

Hans Blom Tue, 18 Mar 1997 14:48:38 +0100
[Martin Taylor 970311 11:30]

>And since you have asserted that this simulation world is supposed
>to correspond in important ways to what you can observe of the
>tangible world, time in it MUST seem to be continuous.

>This is the fallacy. I can talk about a piece of matter in terms of
>discrete entities such as atoms and their positions, expressed in
>numbers with a finite number of decimals. I can talk about the world
>in terms of a discrete, finite number of words. I can present a movie
>as a succession of discrete pictures. I can talk of time passing in
>terms of number of days. Or minutes. Or nanoseconds. What's wrong
>with all that? I can and do talk about (model) supposedly continuous
>notions in languages with a discrete number of terms. And so do you.

Nice evasion of the point. You can talk about your theodolite however
you like. But if you want the simulation world in which your simulated
theodolite lives to have some relevance to a real world in which a real
theodolite might be controlled, the way your simulated theodolite behaves
should not change with the way you talk about how it behaves--or by how
often you observe its behaviour. And the simulated world should be like
the real world in its relevant aspects. That's all I'm saying.

To reiterate: There's a world in which the theodolite and its control
system live. That can be as discrete as you wish. You are God the Creator
of this world...as I said before, and you agreed:

>>In the simulation world, you have constructed the reality. You
>>_know_ what is real in the simulation world. In it, you are God the
>>Creator.

>Sure. So I, a modest God, create a discrete world ;-).

There's a set of observations of this world that the theodolite control
system makes. You, the designer of the control system, can choose to
have it make those observations as often or as seldom as you like. What
you CANNOT do is to make the design of the world depend on how often the
theodolite control system observes it--nor on how often the analyst
observes the theodolite.

Whether the simulation world is a good or poor simulation of the real
world is a different matter, which again does not depend on how you talk
about the real world. Whether you can determine how well the simulation
world matches the real world depends on how you observe both. But how
the simulation world actually behaves does not depend on how you observe
it.

If your programs make the behaviour of the simulation world depend
on how often you (experimenter) or the theodolite control system observe
it, then the simulation world is not a consistent world, and is not worth
spending words or CPU cycles on.

Martin