[From Bruce Nevin (2006.12.20 09:30 EST)]
Bill Powers (2006.12.16.1515 MST) –
I prefer mathematical derivations that reflect the physical situation properly. What you say is algebraically true. But it is not physically true. If I inject a perceptual signal into a control system, or change the reference signal or G, there will be no effect on the value of d as implied by your equation. If I change d, r, or G, there will be an effect on p as indicated by my equation. I have now said all of that twice, which makes it true.
This argument seems weird to me, and quite unlike you, Bill. What the equation implies is that if p changes (and r and G are unchanged) it must be because there was a change in d – not that a change in p causes a change in d, with the observer somehow extrasystemically injecting a signal into p. The equation says nothing about causality or even temporal antecedence. Neither equation says that the single term on the left is determined by the expression on the right, if you interpret “determine” as “cause”; they simply assert a correspondence (equality).
/Bruce
···
From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.UIUC.EDU] On Behalf Of Bill Powers
Sent: Saturday, December 16, 2006 6:21 PM
To: CSGNET@LISTSERV.UIUC.EDU
Subject: Re.: PCT-Specific Methodology
Martin Taylor 2006.12.16.1514.
Of course it uses feedback effects. It's the usual derivation around the control loop, the same derivation you used to contradict mine. We both arrive at p = Gr - Gp + d, which we then develop in two different ways. I simply move the "G" terms to the other side of the equal sign, giving d = p + Gp - Gr, whereas you combine the "p" terms, to give
       G           1
p =  -----  r +  -----  d.
     1 + G       1 + G

They are exactly the same thing, aren’t they?
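The two rearrangements really are the same equation. A minimal numeric sketch (my own illustration, with arbitrary values for G, r, and d) shows that the p obtained from the combined form reproduces d exactly when plugged back into the other form:

```python
# Arbitrary illustrative values (not from the thread).
G, r, d = 50.0, 3.0, 1.5

# Martin's form: combine the p terms of p = G*r - G*p + d.
p = (G / (1 + G)) * r + (1 / (1 + G)) * d

# Bill's form: move the G terms to the other side instead.
d_back = p + G * p - G * r

# Same starting equation, so the recovered d matches the original.
assert abs(d_back - d) < 1e-9
print(p, d_back)
```

Algebraically interchangeable; the disagreement in the thread is only about which variables are physically independent.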
No. Mine implies that p is determined by G, r, and d, which is physically true. Yours implies that d is determined by p, G, and r, which is physically false. If you vary d, r, or G, p will change. But if you vary p, G or r, d will not change, even if your equation says it will…
Your derivation is algebraically correct, but false as a description of physical relationships. Algebra doesn’t know anything about dependent and independent variables.
So the perceptual signal is a dependent variable which depends on just two independent variables, r and d.
Exactly. You like mathematical derivations... so, given the equation you arrive at (as one does with the usual derivation that I used), d is equally a function of p, G, and r. Equally, r is a function of d, G, and p. Know any three of them and you can derive the fourth.
I prefer mathematical derivations that reflect the physical situation properly. What you say is algebraically true. But it is not physically true. If I inject a perceptual signal into a control system, or change the reference signal or G, there will be no effect on the value of d as implied by your equation. If I change d, r, or G, there will be an effect on p as indicated by my equation. I have now said all of that twice, which makes it true.
Note that G/(1+G) approaches 1 as G becomes much greater than 1. The 90-degree phase shift which you say reduces correlations to zero is greatly modified by this expression (see below for the case in which G is an integrator).
No it isn't. The ONLY reason for the 90 degree phase shift is the assumption that the output function is a perfect integrator.
I was pointing out that the phase shift through the whole control system, or the one implied by the solved equations, is different from the phase shift through the output function. You apparently read my remark to mean that the phase shift in the output function itself is modified, which is not what I meant to say. Or what I meant to mean.
Even with the perfect integrator, the output varies so it remains about equal and opposite to the disturbance, with a phase shift that varies from zero at very low frequencies to 90 degrees at very high frequencies where the amplitude response approaches zero.
The phase shift in question is the phase shift between the error signal and the output signal. A pure integrator gives a 90 degree phase shift at ALL frequencies. The integral of a cosine is the corresponding sine, and vice-versa.
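The claim about the pure integrator is easy to check numerically. In this sketch (my construction, not from the thread), the “error signal” is a cosine and the “output” is its crude running integral; over whole cycles the two are uncorrelated, even though one completely determines the other:

```python
import numpy as np

# A pure integrator shifts any sinusoid by 90 degrees, so over whole
# cycles the error and the integrated output are uncorrelated.
w = 2 * np.pi * 0.5              # arbitrary frequency, 0.5 Hz
t = np.linspace(0, 10, 100001)   # exactly 5 full cycles
e = np.cos(w * t)                # "error signal"
o = np.cumsum(e) * (t[1] - t[0])  # crude running integral of e

r_eo = np.corrcoef(e, o)[0, 1]
print(r_eo)  # close to zero despite the rigid relationship
```

The integral of cos(wt) is sin(wt)/w, and sine and cosine are orthogonal over complete periods, which is exactly why the zero correlation appears.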
Yes, but the next cited part explains what I meant:
The negative feedback makes the frequency response of the whole system different from the frequency response of the integrating output function.
The frequency response of the integrating output function is of the form 1/f, with a 90-degree phase shift over the whole range of frequencies. The frequency response of the whole system, as determined by varying the frequency of a sine-wave disturbance or reference signal and observing the output quantity, is not of that form.
For one thing, if the time constant of a leaky-integrator output function is T seconds, the time constant of a response of the whole system to a disturbance is T/(1+G), where G is the loop gain.
I have a note near the end about the leaky integrator and its effect on the frequency dependence of the correlation. A leaky integrator is not a perfect integrator.
The leakiness is not the point. Even with a perfect integrator, there will be a negative correlation between d and o, higher at lower frequencies but present for any real waveform. What I said would have been clearer if I had just deleted the side-remark about leaky integrators.
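The negative correlation between d and o is simple to demonstrate. Here is a sketch (my own simulation, with an assumed integration rate k and a slow sinusoidal disturbance, r = 0) of a loop whose output function is a perfect integrator:

```python
import numpy as np

k = 10.0                            # assumed integration rate (loop gain factor)
dt = 0.001
t = np.arange(0, 60, dt)
d = np.sin(2 * np.pi * 0.05 * t)    # slow disturbance, 0.05 Hz

o = 0.0
outs = np.empty_like(t)
for i in range(len(t)):
    p = o + d[i]                    # perception = output effect + disturbance
    e = 0.0 - p                     # error, with reference r = 0
    o += k * e * dt                 # perfect integrator as output function
    outs[i] = o

r_do = np.corrcoef(d, outs)[0, 1]
print(r_do)                         # strongly negative at this low frequency
```

At low disturbance frequencies the output remains nearly equal and opposite to the disturbance, so the correlation approaches -1; raising the disturbance frequency toward the loop's bandwidth weakens it.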
With an integrating output function, the factor G/(1+G) becomes G/(G + jw). Separating the real and imaginary parts, we have

   G          G^2              G w
-------- = -----------  -  j -----------
 G + jw    G^2 + w^2         G^2 + w^2

From this we can see that as the integrating factor G increases, and as the frequency decreases (remember that w is 2*pi*frequency), the real part of the factor G/(1+G) approaches 1. As G increases and w *increases*, the imaginary (90-degree phase-shifted) part approaches zero.
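As a quick numeric check of this decomposition (assuming the factor in question is G/(G + jw), with arbitrary values for G and w), Python's complex arithmetic gives the same real and imaginary parts:

```python
# Arbitrary illustrative values for the check (not from the thread).
G, w = 20.0, 2.0

z = G / (G + 1j * w)

# Real part G^2/(G^2 + w^2), imaginary part -G*w/(G^2 + w^2).
assert abs(z.real - G**2 / (G**2 + w**2)) < 1e-12
assert abs(z.imag + G * w / (G**2 + w**2)) < 1e-12
print(z.real, z.imag)
```

With G = 20 and w = 2 the real part is already about 0.99, illustrating how the factor approaches 1 when G dominates w.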
For the loop as a whole, yes. I think my animated diagram illustrates this. Actually, you don't need to go into that kind of complex arithmetic analysis. All you need is the knowledge that the Laplace transform is linear, and you can operate on the transforms as though they were simple scalar variables.
Yes, but their meaning is not all that transparent. To me, anyway. They may look like scalar variables, but they aren’t.
The correlation of the error signal with the output of the integrator will always be zero. However, a correlation lagged 90 degrees will be perfect,
You can't "lag" the correlation 90 degrees, except at one frequency. The correlation is time-domain, and you can only lag it by delta t. There will be a frequency (an infinite set of them, actually) for which a given delta t gives a lagged cross-correlation of unity, but that's a complete red-herring in this discussion.
If you calculate a correlation between sin(wt) and cos(w(t - tau)), where tau is set to correspond to a phase shift near the low end of the observed range of frequencies, there will be a nonzero correlation between those two functions, because the low-frequency amplitudes are greater than the high-frequency amplitudes. So it’s only a partial red herring – say a pale pink herring. I did forget that the correlation for a given lag will not be perfect for other frequencies.
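The pale pink herring can be made concrete. In this sketch (my construction; the two frequencies are arbitrary), tau is chosen to give a 90-degree shift at one frequency; the lagged correlation is perfect there but not at another frequency sharing the same tau:

```python
import numpy as np

w1 = 2 * np.pi * 1.0          # frequency the lag is tuned to (1 Hz, assumed)
tau = (np.pi / 2) / w1        # delta-t giving a 90-degree shift at w1
t = np.linspace(0, 10, 100001)

# At w1, cos(w1*(t - tau)) is exactly sin(w1*t): perfect correlation.
r1 = np.corrcoef(np.sin(w1 * t), np.cos(w1 * (t - tau)))[0, 1]

# At some other frequency the same tau gives the wrong phase shift.
w2 = 2 * np.pi * 3.7
r2 = np.corrcoef(np.sin(w2 * t), np.cos(w2 * (t - tau)))[0, 1]

print(r1, r2)  # r1 near 1; r2 well away from 1
```

A fixed time lag corresponds to a 90-degree phase shift at only one frequency (and its odd multiples), which is the point being argued here.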
I hope mine doesn't, too. I think you have quite misunderstood it. I could be quite wrong, but when I went through it again this morning, I didn't find a mistake. Your comments haven't (yet) helped me to find a mistake.
Perhaps this comment will clarify what I do and don’t consider to be mistakes.
Best,
Bill P.