[Hans Blom, 960713]
(Bill Powers (960712.2045 MDT))
But let me try to use "English" and see if I can make the
distinction clear nevertheless. You can consider this discussion as
an informal but useful definition of "disturbance". In the standard
PCT diagram, the "disturbance" enters at some point in the "world",
but it is not specified where:
Hans, you're beating a dead horse.
I assume that I can interpret this as meaning that you agree with
what I said in this post. Good, that gives a basis for common under-
standing. Given that basis, let me tease apart two more things:
uncertainty and frequency, because you confuse the two and their
effects. In other words, you make an unrealistic distinction between
noise and disturbance.
A warning for non-techies: stop reading now. Although I will attempt
to use clear language, I do presuppose some technical knowledge about
electronic filters and probability distributions.
The reason your model could do this was that the
disturbance changed smoothly, and you could accurately estimate the next
value of a compensating change in output to oppose it. If you had included noise terms, however, their frequency of variation
would have included very high frequencies; their value at t would not
have predicted their value at t + dt.
I interpret this thus: a disturbance changes slowly and smoothly,
noise rapidly and unpredictably. Let's see. I am sure that you are
familiar with the fact that any kind of noise (colored, pink) can be
made by filtering white noise, like this:

                    ----------
    white noise     |        |   filtered or colored noise
    ------>---------| filter |---------->---------
                    ----------
You use this method yourself in order to create your disturbances. We
can choose the filter's characteristics any way we want. Thus, we can
create "slowness" or "smoothness", that is a low frequency signal, or
we can choose a high cutoff frequency if we want rapid variations.
Where is the uncertainty? It is purely the white noise source that
causes it; the filter is purely deterministic. White noise does not
discriminate between frequencies; it contains all equally. White
noise sources exist; a "noise diode" (a noisy sort of Zener diode)
provides noise that is white up to gigahertz frequencies. And a
computer's random generator also comes close, although that is a
_sampled_ white noise source.
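To make this concrete, here is a quick sketch in Python (mine, for
illustration only; the filter constant and random seed are arbitrary
choices): white Gaussian noise pushed through a simple first-order
low-pass filter comes out as a slow, smooth colored signal, exactly as
in the diagram above.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 1000
  white = rng.normal(0.0, 1.0, n)   # white noise: all frequencies equally present

  alpha = 0.05                      # small alpha = low cutoff = slow, smooth output
  colored = np.empty(n)
  colored[0] = white[0]
  for t in range(1, n):
      # first-order low-pass filter; the filter itself is purely
      # deterministic, all the uncertainty comes from the white noise input
      colored[t] = (1 - alpha) * colored[t - 1] + alpha * white[t]

  # lag-1 correlation: near 0 for the white input, near 1 for the smooth output,
  # i.e. the filtered value at t does predict the value at t + dt
  print(round(np.corrcoef(white[:-1], white[1:])[0, 1], 2))
  print(round(np.corrcoef(colored[:-1], colored[1:])[0, 1], 2))

Change alpha and you change the "color" of the noise, nothing else.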
Yet we might want to specify the one property that can make white
noise generators different, and that is their probability density
function. A pseudo-random binary sequence, for instance, generates
only the numbers 0 and 1. The computer's random generator generates a
uniform distribution of numbers between 0 and 1 (or between 0 and some
specified N). A noise diode generates a normal (or Gaussian)
distribution, that is, the numbers usually (in about two thirds of the
cases) fall within plus or minus one standard deviation, but are
occasionally larger. For some reason or another, nature seems to like
normal distributions.
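A small sketch (my own; sample sizes arbitrary) of the three sources
just mentioned; the last lines also check the "two thirds" figure for
the normal case:

  import numpy as np

  rng = np.random.default_rng(1)
  prbs     = rng.integers(0, 2, 10000)     # pseudo-random binary sequence: only 0 and 1
  uniform  = rng.uniform(0.0, 1.0, 10000)  # computer's random generator: uniform on [0, 1)
  gaussian = rng.normal(0.0, 1.0, 10000)   # noise diode: normal (Gaussian) distribution

  inside = np.mean(np.abs(gaussian) < gaussian.std())
  print("fraction of normal samples within one standard deviation:", round(inside, 2))

This prints approximately 0.68, about two thirds.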
Where is the frequency dependency? In the filter. Whatever the
noise's distribution, it can be low-pass filtered and result in slow,
smooth signals. Strangely enough, if you do this with noise of any
distribution other than normal, the resulting distribution will be
normal or close to it: each filtered sample is a weighted sum of many
input samples, so the central limit theorem takes over. Normal seems to
be, well, normal... As a consequence, the assumption that the white
noise source has a normal distribution is almost universally made. If
the noise isn't normal, the assumption has little consequence:
deviations from normality will usually result in very small errors.
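A sketch of that claim (again mine; the moving-average filter and the
sample counts are arbitrary): filter decidedly non-normal (uniform)
noise and the output distribution creeps toward normal.

  import numpy as np

  rng = np.random.default_rng(2)
  x = rng.uniform(-1.0, 1.0, 100000)                    # uniform white noise: anything but normal
  y = np.convolve(x, np.ones(50) / 50, mode="valid")    # crude low-pass: 50-sample moving average

  def inside_one_sd(v):
      return np.mean(np.abs(v - v.mean()) < v.std())

  print("uniform input  :", round(inside_one_sd(x), 3))  # about 0.58
  print("filtered output:", round(inside_one_sd(y), 3))  # close to 0.68, the normal value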
What do we do with this knowledge? Well, theoretically one could feed
the colored noise through an "inverse" filter and recreate the
original white noise. In practice this can be done as well, up to the
bandwidth of the sensor that perceives the signal, and up to the
processing capabilities of the circuit or machine that performs the
inversion. But usually this is not necessary. Simpler methods exist
to discover what filter was used: correlation, for instance, will do
the trick; the signal's frequency or power spectrum reveals it as
well.
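A sketch of that "simpler method" (mine, reusing the first-order filter
from the earlier sketch): the power spectrum of the colored signal shows
at a glance that a low-pass filter was used, no inversion required.

  import numpy as np

  rng = np.random.default_rng(3)
  n, alpha = 20000, 0.05
  white = rng.normal(0.0, 1.0, n)
  colored = np.zeros(n)
  for t in range(1, n):
      colored[t] = (1 - alpha) * colored[t - 1] + alpha * white[t]

  power = np.abs(np.fft.rfft(colored)) ** 2
  freqs = np.fft.rfftfreq(n)

  # power piles up at low frequencies: the signature of a low-pass filter
  print("low/high frequency power ratio:",
        round(power[freqs < 0.01].mean() / power[freqs > 0.25].mean(), 1))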
So let me reinterpret your remarks above: "noise" is high frequency;
"disturbance" is low frequency. And you can compensate for low
frequency "disturbances", but not for high frequency "noise".
First on terminology. If we consider the signal only, let us talk
about "noise". Noise can be white or have any coloring, depending on
the filter. And let us talk about "disturbance" when the signal
stands in a relation to something that is disturbed. In my "dead
horse" post, you could say that in case (1) the effect of the action
is disturbed because something or somebody else acts as well; and in
case (2) you could say that the perception is disturbed. If we do
this, we have meaningful definitions of both "noise" and "disturb-
ance".
This is different from what you say:
So "noise" consists of variations that are too fast or too unpredictable
to be opposed moment by moment, while "disturbances" are variations that
can feasibly be opposed by suitable variations in the system output.
As stated, this is too imprecise. The cause of the _unpredictability_
is the noise source, the cause of _rapidity_ is the filter. Whether we
can "control away" a disturbance depends on the capabilities of our
perceptual and motor apparatus. If you define disturbances as those
variations that CAN be opposed, you conflate the colored noise with
the control that the system is capable of. That would mean that what
is a disturbance for one person need not be one for another, or for
that same person in a different mood. I don't like the subjectivity
that this introduces. With this definition, people will never agree on
what is what.
There is no way for the system to distinguish between low-
frequency disturbances of the plant and direct low-frequency
disturbances of perception.
And here you are plainly wrong, as I demonstrated in my "dead horse"
post -- which may therefore not be so dead after all...
I gave the example of tracking a satellite in orbit. If you had
followed my reasoning, you would have seen the difference. Without
control, in case (1) the satellite's height will turn out to be a
random walk with height variations that grow without limit (or until
it crashes); even with perceptual inaccuracies, this will be
discovered sooner or later. In case (2), there is no height
variation, so the perceived height will always remain between the same
fixed limits. So a non-control system can determine the difference.
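For the doubters, a sketch (my numbers, not an orbit model): in case (1)
the uncontrolled "height" is a random walk whose spread keeps growing;
in case (2) only the perception is noisy and the spread stays put.
Watching how the spread evolves is enough to tell the two apart -- no
control required.

  import numpy as np

  rng = np.random.default_rng(4)
  steps = rng.normal(0.0, 1.0, (1000, 2000))   # 1000 time steps, 2000 trials

  case1 = np.cumsum(steps, axis=0)   # case (1): disturbance acts on the plant, accumulates
  case2 = steps                      # case (2): noise added only to the perception

  print("spread at t=10  :", round(case1[10].std(), 1), round(case2[10].std(), 1))
  print("spread at t=1000:", round(case1[-1].std(), 1), round(case2[-1].std(), 1))
  # case (1) grows roughly as the square root of t; case (2) stays near 1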
Now, are you suggesting that a control system is less smart? It need
not be. There are ways in which the difference can be detected and
in which a control system can use the distinction. That was what my
previous post was all about. And it was about how crucial the
distinction is. In case (1), the variations need to be controlled
away. In case (2), we'd better do nothing. If your controller reacts
the same in both cases, I wouldn't want it: it isn't ecologically
acceptable anymore to waste fuel...
What is a "low" frequency? In my model, it is a frequency within the
bandwidth of (primarily) the output process.
This is a good point but, again, it has to be carefully teased apart
from what we considered above. We have to live with our limitations.
If our motor apparatus does not allow actions above some limiting
frequency, it makes no sense for our perceptual apparatus to have a
(much) higher bandwidth, nor for our "processing" apparatus in between
-- the nervous system. Assuming that everything had infinite
bandwidth except our actuators, we would be able to "compute" a
perfectly accurate action, but we wouldn't be able to deliver it. We
know this from control engineering: if coarse control is allowed, an
8-bit processor will do; for very accurate control we will want to
compute with 64-bit double-precision numbers. This can be linked,
maybe, to a finding of comparative anatomists: the motor area and the
perceptual area of the brain are of very similar sizes in all
organisms thus far analyzed.
In your model, there's no
specific limitation on that bandwidth (although realistically there
should be), but there is a limitation on the speed of adaptation of the
Kalman filter.
This raises the question how fast we can -- or should -- adapt. In an
earlier example of mine -- fitting a best straight line through a
number of "noisy" points -- I showed that each subsequent correction
(NOT observation) is given a weight 1/N, where N is the counter. Thus
each new correction is taken less seriously, one could say. And so it
should be: the end result is, nevertheless, that all points are
weighted equally. Considering this process in terms of a bandwidth is
possible, but it easily leads to confusion. A statistical analysis is
much clearer.
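For those who prefer code to statistics, a stripped-down sketch (mine; I
estimate a single constant rather than a line, and the numbers are
arbitrary) of the 1/N weighting: every correction counts for less, yet
the final estimate weights all observations equally -- it coincides with
the plain average.

  import numpy as np

  rng = np.random.default_rng(5)
  observations = 3.7 + rng.normal(0.0, 1.0, 200)   # noisy measurements of a constant

  estimate = 0.0
  for n, y in enumerate(observations, start=1):
      estimate += (y - estimate) / n               # correction weighted 1/N

  print("recursive estimate:", round(estimate, 4))
  print("plain average     :", round(observations.mean(), 4))   # identical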
If you try to make the system oppose arbitrary
disturbances, you also reduce its ability to control "blind."
No. Both can coexist. What can NOT coexist is a good internal model
and rapid changes of the "laws of nature". The latter results in the
fact that we can only average over short periods of time. And that
results in low signal to noise ratios. And that results in inaccurate
models. And that is exactly as it should be: if the "laws of nature"
vary rapidly, we cannot get to know them very well and we cannot rely
on the applicability of that knowledge very well. The very best can
be pretty bad in a world whose laws fluctuate unpredictably. Yet that
bad can be optimal, in the sense that no other system would be able
to do better.
So, in summary, the distinction I am making between "noise" and
"disturbance" is not where in the process the fluctuations are injected.
The distinction is in whether the output of the system changes so as to
oppose the fluctuations instant by instant. Noise is not opposed instant
by instant, but disturbances are (when good control exists).
And when control is not so good? Or absent? Do the disturbances
become noise? I will not start to use this confusing definition. It
is far too subjective for me. How do you like my distinctions? Could
you live with them? Dead horse again?
Hans Blom, 960712 --
Hans, if you think that the outputs by which you thread a needle in the
dark are the same as those you use when controlling visually, you simply
haven't tried it. There is NO RESEMBLANCE. You should have cursed Mary
for her data, not thanked her: her report shows that your claim is
totally wrong.
Which claim precisely? That you CAN thread a needle in the dark?
Please consult the archives about the origin of claim and counter-
claim and what they were about. Or, better yet, let's stop this
altogether.
In the dark, you are controlling a completely
different set of inputs -- not just running the same old model without
the feedback.
Now we're getting somewhere! Because we have done it so infrequently,
we are not very proficient at threading needles in the dark. That's
why I suggested to Rick to contact his local Union of Blind Seam-
stresses ;-). I bet one could have a pretty good control system for
threading needles in the dark once it becomes routine. Why, I'm
hardly able to thread a needle in broad daylight!
Greetings,
Hans