[Martin Taylor 2004.12.25.11.06]
[From Bill Powers (2004.12.25.0600 MST)]
Martin Taylor 2004.12.25.00.44 --

I stand corrected. I'd forgotten that it was intended mainly to
compensate for the impulse response of the feedback path. But now
you say so, it seems obvious that the effect would apply to that
regularity even more strongly than it would to the regularities in
the disturbance, because the bandwidth of changes in that impulse
response would ordinarily be very low.

The result of the learning with the AC is that the control system
can oppose the effects of external disturbances up to its full
bandwidth, for any waveform of disturbance. This, even if it learns
with no external disturbances, strictly through random
experimentation with the reference signal.
Fine, I understand that very well, since you pointed it out.
Just to be annoying, though, I might (tongue firmly in cheek) ask how
the control system randomly experiments with the value of its own
reference signal?
I don't know what you're imagining here, but it doesn't really fit
the case. What happens is that the transfer function of the output
function changes until when combined with the transfer function of
the feedback path it produces close to a single-pole response, a
damped exponential response to a step disturbance. This then sees to
it that for ANY waveform of disturbance, the error is as small as it
can be.
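A minimal numerical sketch of that claim (my own illustration, not from the exchange; the gain, step size, and loop structure are assumed): when the output function integrates the error so that the closed loop has a single pole, the error to a step disturbance decays as a damped exponential, whatever disturbance waveform follows.

```python
# A first-order loop: integrating output function -> single-pole closed loop.
dt = 0.001        # simulation step, seconds (assumed)
k = 50.0          # loop gain, giving a 20 ms time constant (assumed)
out = 0.0         # output quantity
ref = 0.0         # reference signal
d = 1.0           # step disturbance, applied from t = 0
errors = []
for _ in range(2000):
    qi = out + d              # environment: output and disturbance add
    err = ref - qi            # error signal
    out += k * err * dt       # integrating output function
    errors.append(err)
# the error decays roughly as exp(-k * t): a damped exponential,
# regardless of what disturbance waveform is applied later
```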
Ok with this, too.
I've been pondering your idea of the 1 Hz controller controlling
against a 1 kHz disturbance. I don't think that's possible. The
so-called narrow-band disturbance is still causing perturbations of
the controlled variable at a frequency of 1 kHz, and to oppose those
perturbations, the controller would have to produce output
fluctuations at 1 kHz, also. This means that the output function
would have to be a tunable 1 kHz oscillator with controllable
amplitude, adjusted to match the effects of the disturbance closely
enough to cancel the 1 kHz perturbations, and the input function
would have to be able to detect phase and amplitude variations
accurately over the same narrow bandwidth. So all you have is a
narrow-band controller controlling against a narrow-band disturbance
-- the bandwidths have to match.
Right. That's essentially what I have been trying to say. You are
imagining what I am imagining -- a heterodyned system like an AM
radio would be an example. Receiver on the PIF side, transmitter on
the output side. Amplitude and phase change slowly, though the signal
waveform changes fast. The actual shape of the disturbance waveform
shouldn't matter, provided the PIF and the output function are
appropriately tuned. The system predicts through many cycles of the
disturbance waveform, and compensates for those rapid changes.
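To make the receiver/transmitter picture concrete (my own sketch, not anything from the exchange; the quadrature-tracking scheme and all constants are assumptions), a 1 kHz disturbance whose amplitude drifts slowly can be cancelled by tracking only its slowly varying quadrature components and emitting a matched counter-oscillation -- the fast waveform itself is never followed sample by sample.

```python
import math

fc = 1000.0                  # carrier (disturbance) frequency, Hz
dt = 1e-5                    # simulation step, seconds (assumed)
I = Q = 0.0                  # slowly updated quadrature estimates
alpha = 0.002                # slow tracking rate (assumed)
residual = []
for n in range(100000):      # one simulated second
    t = n * dt
    amp = 1.0 + 0.5 * math.sin(2 * math.pi * 0.5 * t)  # slow amplitude drift
    d = amp * math.sin(2 * math.pi * fc * t)           # the fast disturbance
    s = math.sin(2 * math.pi * fc * t)
    c = math.cos(2 * math.pi * fc * t)
    out = -2.0 * (I * s + Q * c)  # transmitter: matched counter-oscillation
    e = d + out                   # residual perturbation of the variable
    I += alpha * e * s            # receiver: slow quadrature tracking
    Q += alpha * e * c
    residual.append(abs(e))
# after the estimates settle, the residual is a small fraction of the
# disturbance, even though I and Q change only on a ~5 ms time scale
```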
I don't think I need say more, do I?
Beyond this, I think we are into semantics, rather than physics or
engineering. But maybe not. We'll see.
Without worrying about the meaning of the word "prediction", one
can still plot out the probability distribution of possible
continuations of any given waveform, knowing its effective
bandwidth. After approximately T = 1/(2W) seconds, the distribution
is effectively the same as it would have been had you initially
been given no specific waveform. We can agree on that,
I assume?

Yes, but the controller doesn't control against effects of the
distribution. It opposes, as near as possible, against the
instantaneous amplitude of the waveform.
True.
That is why no prediction is required, or for that matter, possible.
How can you reconcile the last phrase of this sentence with the "Yes"
that began the same paragraph? You seem to agree that prediction is
indeed possible to some degree out as far in the future as T = 1/(2W).
As an engineer, I don't see how you could think of disagreeing with
that, so I take "Yes" at its face value.
Whether prediction is "required" is another question. Sure, the
controller can "oppose, as near as possible, against the
instantaneous amplitude of the [disturbance] waveform" so far as it
knows at T0 what that instantaneous amplitude had been at
T0-tau(perceptual lag). Its output will then become effective against
the disturbance at time T0+tau(effector lag). So, at T0 +
tau(effector lag), the controller opposes what the disturbance had
been tau(perceptual lag)+tau(effector lag) earlier. Would it not be a
better controller if it could predict, at T0, the most probable value
of the disturbance's instantaneous amplitude tau(perceptual
lag)+tau(effector lag) into the future, and compensate for that?
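A toy comparison (entirely my own construction; the lag values, sampling interval, and two-sine disturbance are assumptions) of opposing the stale, lag-delayed value versus opposing a linear extrapolation across the total lag:

```python
import math

tau = 0.05 + 0.05   # tau(perceptual lag) + tau(effector lag), seconds (assumed)
dt = 0.01           # sampling interval, seconds (assumed)

def d(t):
    # a band-limited "disturbance" with content around 1 Hz
    return math.sin(2 * math.pi * 0.7 * t) + 0.5 * math.sin(2 * math.pi * 1.1 * t + 1.0)

plain = predicted = 0.0
for n in range(2000):
    t = n * dt
    seen = d(t - tau)                 # latest value the output can be based on
    prev = d(t - tau - dt)
    extrap = seen + (seen - prev) / dt * tau  # linear prediction across the lag
    plain += (d(t) - seen) ** 2       # residual error opposing the stale value
    predicted += (d(t) - extrap) ** 2 # residual error opposing the prediction
# the summed squared error with prediction is several times smaller
```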
I've mentioned on CSGnet some time ago an experience I had with doing
exactly that (consciously). At a party, we were playing ping-pong
outdoors by the light of the full moon. In low light, one's vision is
delayed (you can test this yourself; a Google search for "Pulfrich
effect" should tell you how). I knew this, but luckily my opponents
didn't, or didn't take advantage. I quickly learned to hit the ball
early, actually at the moment it seemed to be passing over the net,
rather than waiting till I saw it onto the bat. It's a strange
feeling, but quickly becomes "normal". In other words, my control was
vastly improved by controlling using the predicted point of arrival
of the ball rather than using its instantaneously perceived position.
I think the point does have some importance, when you consider that
the bandwidths of many socially relevant disturbances can be stated
in micro- or nano-Hz, which means that in the absence of control the
time scale of better-than-chance prediction can extend into days,
years, or even, in some cases such as global warming or the decay of
nuclear waste, centuries.
This all is getting hazardously close to the long-dormant issue of
information in control. It seems pointless to let that volcano erupt
again unless there is something new to be said.
Gotta go open presents at my daughter's house.
Enjoy the kids. We are just staying home. Will watch some videos over
a bottle of pseudo-champagne later this afternoon!
Martin