[From Bill Powers (2009.06.09.0528 MDT)]
Martin Taylor 2009.06.08.16.27 –
BP to Rick, earlier: Re: Martin’s comments about zero correlation of a
variable with its integral. I think your answer is correct, that the
closed loop changes things. The main thing it changes is the apparent
time constant of the loop. An integrating system in a feedback loop looks
like a leaky integrator in an open loop. The time constant is determined
by the loop gain, getting shorter as the gain goes up. A short time
constant means that the output follows the disturbance more quickly than
it would if there were only a pure integrator present without the
feedback. The phase relationships inside the loop are also affected by
the feedback. I can’t get any more specific than that because (a) I am
part of an international conspiracy sworn not to reveal the details, and
(b) I haven't done the math.
[MT 2009.06.08]
I wrote the following before Bill Powers’s unexpected repudiation
of the normal equations used to analyze control loops, contained in his
second sentence above. I thought about it for a while, since on the face
of it, Bill’s comment that “the closed loop changes things”
seemed to contradict the basic assumption of the kind of loop analysis
usually used on CSGnet, namely that qi = qo + qd = G(qr-qi) + qd where G
is the output function, and all other pathways are assumed to be unity
multipliers.
If qi is qo + d and qo is integral(qi), which equation are we to believe
if d = 0? With d = 0, we have qi being identical to qo, but we have qo
being the integral of qi. The answer is in the differential
equations.
From the assumption that
when qr = 0 then qo = G(qi), we get the usual result for loop gain and so
forth. That is the basis on which I made my calculations about the
relation between control ratio and the correlation between p (= qi) and
qd. But if Bill is saying that this is no longer to be considered to be
so, then all bets are off. So I have to assume Bill meant something
different, and the usual equations are still considered permissible on
CSGnet.
I don’t think I’m repudiating anything – just speaking a bit loosely,
which sometimes has the same effect.
The equations in a simple form with reference signal = 0 are:
qo = -gain* INTEGRAL(qi)
qi = qo + d
Substituting, we have
qo = -gain*INTEGRAL(qo + d) or, differentiating,
d(qo)/dt = -gain*qo - gain*d, and with zero disturbance,
d(qo)/dt = -gain * qo.
Let qo = exp(-kt).
d(qo)/dt = -k* exp(-kt);
therefore
-k*exp(-kt) = -gain*exp(-kt), or
k = gain.
The time constant is 1/k, so it varies inversely with the loop gain
k.
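The derivation above can be checked numerically. Here is a minimal sketch (not from the original exchange; the gain, step size, and unit step disturbance are arbitrary choices): simulate the loop qi = qo + d with a pure-integrator output function and confirm that qi decays like exp(-gain*t), i.e. with time constant 1/gain.

```python
import math

def simulate(gain, dt=0.001, t_end=2.0):
    """Closed loop: qi = qo + d, with a pure-integrator output function."""
    qo = 0.0
    d = 1.0  # step disturbance applied at t = 0 and held
    trace = []
    t = 0.0
    while t < t_end:
        qi = qo + d              # input quantity
        trace.append((t, qi))
        qo += -gain * qi * dt    # d(qo)/dt = -gain * qi
        t += dt
    return trace

gain = 2.0
trace = simulate(gain)
# qi should decay as exp(-gain * t): at t = 1/gain it should be near 1/e.
t_tc = 1.0 / gain
qi_at_tc = min(trace, key=lambda p: abs(p[0] - t_tc))[1]
print(qi_at_tc)  # close to exp(-1), about 0.37
```

The higher the gain, the faster qi returns toward zero after the step, which is the "shorter apparent time constant" Bill describes.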
If the output is a pure integrator then, as you say, the input qi will be out
of phase with the output by 90 degrees at all frequencies, and the
correlation will be zero because the XY term in the calculation will
average zero. The gain will make no difference. However, if we use a
leaky integrator, there will be less than 90 degrees of phase shift and
some correlation will appear. At low enough frequencies, qo will
vary almost in phase with qi. The amount of “leak” determines
how low that frequency has to be.
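A small numerical illustration of this contrast (hypothetical; the 0.2 Hz frequency and leak rate of 2 are my arbitrary choices): correlate a sinusoid with its pure integral and with a leaky-integrated copy.

```python
import math

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dt = 0.001
# 0.2 Hz sinusoid sampled for 100 s (20 whole cycles)
qi = [math.sin(2 * math.pi * 0.2 * i * dt) for i in range(100000)]

pure, leaky = [], []
p = l = 0.0
leak = 2.0  # leak rate of the leaky integrator (arbitrary)
for x in qi:
    p += x * dt               # pure integrator: 90 degrees of phase lag
    l += (x - leak * l) * dt  # leaky integrator: less than 90 degrees
    pure.append(p)
    leaky.append(l)

r_pure = correlation(qi, pure)
r_leaky = correlation(qi, leaky)
print(round(r_pure, 2))   # near 0
print(round(r_leaky, 2))  # clearly nonzero (about 0.85 here)
```

At this frequency the leaky integrator's phase lag is only about 32 degrees, and the correlation is roughly the cosine of that lag; raise the frequency well above the leak rate and the correlation falls back toward zero.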
The loop gain affects the phase and amplitude of the relationship between
the disturbance and the output quantity.
MT: Argument: (Rick’s point)
When a perception is controlled, the variation in the perceptual variable
due to the disturbance is much smaller than the variation of the
disturbance. …
So. One factor is that relatively small amounts of noise drastically
reduce the correlation between perception and output when control is
good.
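Rick's factor can be shown numerically. The sketch below is hypothetical (gain, leak, disturbance, and noise level are arbitrary choices of mine), and it uses a leaky-integrator output function so that the noise-free correlation between perception and output is high, isolating the effect of the noise: with good control the disturbance-driven part of p is tiny, so sensor noise of only 0.1 units swamps it and collapses the correlation.

```python
import math
import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def run(noise_sd, gain=50.0, leak=2.0, dt=0.005, steps=40000, seed=1):
    """High-gain loop; returns corr(perception, output)."""
    rng = random.Random(seed)
    qo = 0.0
    ps, qos = [], []
    for i in range(steps):
        d = math.sin(0.5 * i * dt)               # slow disturbance
        p = qo + d + noise_sd * rng.gauss(0, 1)  # perception + sensor noise
        qo += (-gain * p - leak * qo) * dt       # leaky-integrator output
        ps.append(p)
        qos.append(qo)
    return correlation(ps, qos)

r_clean = run(noise_sd=0.0)
r_noisy = run(noise_sd=0.1)
print(r_clean)  # strong negative correlation: good control, no noise
print(r_noisy)  # small noise, large drop in |correlation|
```

With gain 50, the disturbance-driven part of p has an amplitude of only a few hundredths, so even this small noise dominates the perceptual signal while barely affecting the smooth, disturbance-opposing output.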
Argument: (My analysis) If any component process or pathway in the loop
decorrelates its input from its output, then variables contributing to
the input to that pathway or process will be decorrelated from the output
of the process.
BP: So far both your argument and Rick’s appear right to me.
Rick says that I have claimed
that the decorrelation is due to a “phase lag due to
integration”. This is true for every Fourier component of the
signal, but then Rick wrongly says: “you can test this by looking
at the lagged correlation between i and o; if it were a phase lag
phenomenon then the lagged correlations should increase as the lag
approached the phase difference.” He is wrong in saying that
this test should work, because the lagged correlation is a time lag, not
a phase lag. If the signals are not sinusoids, you can’t equate a time
lag to a phase lag, because the relation is different for every frequency
component of the signal. You can, however, say that if every component of
the output is uncorrelated (is pi/2 out of phase with) the corresponding
component of the input, then the output signal is uncorrelated with the
input signal. That is what happens with a perfect
integrator.
You are right and so is Rick. True, lagging the correlation will not have
the same effect on phase differences at all frequencies. However, since
in real systems gain decreases with increasing frequency, a lagged
correlation of a real-world integrated signal would increase the
correlation.
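Both points can be seen in a small numerical sketch (hypothetical; the frequencies, durations, and lag range are arbitrary choices of mine). For a single sinusoid, a quarter-period time lag fully undoes the integrator's 90-degree shift and the lagged correlation reaches about 1. Add a second, incommensurate frequency and lagging still raises the correlation above its unlagged value of zero, but no single time lag is a quarter period at both frequencies, so the peak stays well below 1.

```python
import math

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

dt = 0.001
N = 20000  # 20 s of samples

def best_lagged_corr(signal):
    """Integrate the signal, then search lags 0..1 s for the peak correlation."""
    integ, s = [], 0.0
    for x in signal:
        s += x * dt
        integ.append(s)
    best = -1.0
    for lag in range(0, 1001, 10):  # lags 0 .. 1 s in 10 ms steps
        r = correlation(signal[:N - 1001], integ[lag:lag + N - 1001])
        best = max(best, r)
    return best

one_sine = [math.sin(2 * math.pi * 1.0 * i * dt) for i in range(N)]
two_sines = [math.sin(2 * math.pi * 1.0 * i * dt) +
             math.sin(2 * math.pi * 3.3 * i * dt) for i in range(N)]

b_one = best_lagged_corr(one_sine)
b_two = best_lagged_corr(two_sines)
print(round(b_one, 2))  # about 1.0: a quarter-period lag works
print(round(b_two, 2))  # stays well below 1 (about 0.7 here):
                        # no lag is a quarter period at both frequencies
```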
I delayed this posting because
Bill said: “I haven’t done the math”, and I thought it
worthwhile to do the math. I do two forms, a general one using the
correlation angles that we finally got sorted out May 25, and a specific
one for a purely sinusoidal disturbance. Although I present it here, the
first form is actually presented in a more general way on the Web page I
have cited:
<http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html>.
If the disturbance is a sinusoid, does it really follow that the
perception is in phase with the disturbance? You seem to imply this by
saying the control ratio is simply abs(d/p). But the output is in phase
with the disturbance (minus 180 degrees), so the input has to be 90
degrees out of phase with the disturbance and so does the perception. It
could be that I’m misreading your notation – the font you use must be
different from what I have available because there are strange symbols
like a boldface caret and boldface letters like h, c, m, and &
here and there, and what is most probably supposed to be an integration
sign comes out as #.
I have a problem with this: “The correlation between any two vectors
is the cosine of the angle between them.” I thought this was true
only if each vector was an ordered array of successive values of one
variable and the angle was between the two vectors in a multidimensional
space, one dimension per value in the arrays. If you say x = f(t) and y =
g(t), where x and y are the magnitudes of two vectors at right angles to
each other (along the X and Y axes), it does not seem to me that the
correlation between them is necessarily the cosine of the angle between
them, which in that picture is 90 degrees, making the correlation zero.
Either what you say above is false, or you have not said what you mean to
say.
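For what it is worth, the geometric statement does hold in the first sense Bill describes: treat each series as a vector in N-dimensional "sample space" (one axis per observation), mean-center it, and the Pearson correlation is exactly the cosine of the angle between the two centered vectors. A small check with arbitrary made-up numbers:

```python
import math

# Two short data series, viewed as vectors in 4-dimensional sample space.
x = [1.0, 2.0, 4.0, 7.0]
y = [2.0, 1.0, 5.0, 8.0]

def centered(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

cx, cy = centered(x), centered(y)

# Cosine of the angle between the mean-centered vectors.
dot = sum(a * b for a, b in zip(cx, cy))
norm = lambda v: math.sqrt(sum(a * a for a in v))
cos_angle = dot / (norm(cx) * norm(cy))

# Pearson correlation from its usual definition.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = (math.sqrt(sum((a - mx) ** 2 for a in x)) *
       math.sqrt(sum((b - my) ** 2 for b in y)))
r = num / den

print(cos_angle, r)  # the two numbers agree
```

The identity says nothing about two arrows in the XY plane, which is the second reading Bill objects to; there the angle is fixed at 90 degrees regardless of the data.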
MT: The analysis assumes that
there is no time-lag in any pathway, no noise, and the output function G
is a pure integrator.
- Disturbance “d” is an arbitrary waveform
I use the Heaviside operational calculus here.
BP: I think this turns out more or less the same as my manipulations
above, except for the parts about the control ratio.
MT: I think that’s enough for this
message. I hope that everything makes sense, and clarifies the issues
that had been raised in the earlier discussion.
I’m afraid we have different definitions of “clarification.”
The math you use assumes a rather advanced understanding on the part of
those for whom you are trying to clarify things, so for me at least there
is a net loss of clarity.
Got to go.
Best,
Bill P.