Re.: PCT-Specific Methodology

[From Bill Powers (2006.12.16.0830 MST)]

Martin Taylor 2006.12.16.10.15 –

Without commenting directly on
the discussion (which I think a bit misinformed) of
“traditional” statistical methods, I suggest that if you are
interested in the question of correlation between observable disturbance
and observable control system output, you have a look at
<http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html>

I remember seeing the original, but don’t remember why I didn’t comment
on it. I think you got misled by working at too high a level of
abstraction and not really working out the details.
In the first place, your analysis shows no sign of feedback effects,
perhaps because you solved for an independent variable in developing your
first starting point.
p = o + d = Ge + d = G(r - p) + d = Gr - Gp + d (where G represents the
output function), which yields d = p + Gp - Gr. That's starting point 1.

The disturbance is not, of course, determined by the perceptual
signal or the reference signal. It is an independent variable. Here is
the correct derivation:

p = o + d
p = Gr - Gp + d, which then leads to
p(1 + G) = Gr + d, or
p = [G/(1 + G)] r + [1/(1 + G)] d.
So the perceptual signal is a dependent variable which depends on just
two independent variables, r and d.
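
If you want to check that algebra mechanically, a few lines of sympy will
reproduce it. This is just a rough sketch; the symbol names simply follow
the letters in the text:

    # Sketch: solve the loop equations symbolically and confirm that
    # p = [G/(1+G)] r + [1/(1+G)] d.
    import sympy as sp

    p, r, d, G = sp.symbols('p r d G')

    # Loop equations: o = G*(r - p) and p = o + d, combined into one equation.
    loop_eq = sp.Eq(p, G * (r - p) + d)

    solution = sp.solve(loop_eq, p)[0]
    print(sp.simplify(solution))                                      # (G*r + d)/(G + 1)
    print(sp.simplify(solution - (G/(1 + G)) * r - (1/(1 + G)) * d))  # 0
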
Note that G/(1+G) approaches 1 as G becomes much greater than 1. The
90-degree phase shift which you say reduces correlations to zero is
greatly modified by this expression (see below for the case in which G is
an integrator).
Even with the perfect integrator, the output varies so it remains about
equal and opposite to the disturbance, with a phase shift that varies
from zero at very low frequencies to 90 degrees at very high frequencies
where the amplitude response approaches zero. The negative feedback makes
the frequency response of the whole system different from the frequency
response of the integrating output function. For one thing, if the time
constant of a leaky-integrator output function is T seconds, the time
constant of a response of the whole system to a disturbance is T/(1+G),
where G is the loop gain.
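
Here is a rough discrete-time sketch of that point (the gain, time
constant, and step size below are arbitrary). It simulates a loop whose
output function is a leaky integrator and estimates the time constant of
the whole-loop response to a step disturbance:

    # Sketch: loop with a leaky-integrator output function of gain G and time
    # constant T.  The whole-loop response to a step disturbance should settle
    # with an effective time constant near T/(1+G).
    import numpy as np

    G, T = 20.0, 1.0              # output-function gain and time constant (s)
    dt = 0.0005                   # integration step (s)
    steps = int(5.0 / dt)

    r = 0.0                       # reference
    d = 1.0                       # step disturbance applied at t = 0
    o = 0.0                       # output
    p_trace = np.empty(steps)

    for k in range(steps):
        p = o + d                 # perception = output effect + disturbance
        e = r - p                 # error
        o += dt * (G * e - o) / T # leaky integrator: T * do/dt = G*e - o
        p_trace[k] = p

    # Effective time constant: time for p to cover 63.2% of the way from its
    # initial value to its final value.
    target = p_trace[0] + 0.632 * (p_trace[-1] - p_trace[0])
    tau_est = dt * np.argmax(p_trace <= target)
    print(f"T/(1+G) = {T/(1+G):.4f} s   estimated loop time constant = {tau_est:.4f} s")
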
In Jagacinski and Flach, Control Theory for Humans, there are discussions
of the relationship between Laplace transforms and the frequency domain.
A system with a first-order lag (in the limit of large lags, an
integrator) is discussed in chapter 4 of that book. Chapter 5 is useful
as well. To see the frequency response implied by a Laplace-domain
expression, simply substitute s = jw (j = square root of minus 1, w =
omega = 2*pi*frequency). So if the output gain is G/s, the
frequency-domain expression would be G/(jw).
Suppose the gain is all contained in the output function as you
assume and all the other functions are unity multipliers. We can then see
what the expression G/(1+G) amounts to in the frequency domain.
The expression G/(1+G) then becomes

(G/jw) / (1 + G/jw) = G/(G + jw).

Multiply numerator and denominator by G - jw to get the imaginary part
into the numerator only:

G(G - jw) / [(G + jw)(G - jw)] = (G^2 - Gjw) / (G^2 + w^2).

Separating the real and imaginary parts, we have

= G^2/(G^2 + w^2) - j * Gw/(G^2 + w^2).
From this we can see that as the integrating factor G increases and as
the frequency decreases (remember that w is 2*pi*frequency), the real
part of the factor G/(1+G) approaches 1. As G increases relative to w,
the imaginary (90-degree phase shifted) part approaches zero.

The tangent of the phase angle is the imaginary part divided by the real
part, or w/G.
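
Both of those statements are easy to check numerically. A quick sketch,
with arbitrary values of G and w:

    # Sketch: evaluate G/(1+G) with G replaced by G/(jw) and confirm that the
    # real part approaches 1, and the phase approaches 0, as G grows relative
    # to w.  The magnitude of the tangent of the phase angle should equal w/G.
    import numpy as np

    w = 2 * np.pi * 0.2                                # w = 2*pi*frequency
    for G in (1.0, 10.0, 100.0):
        factor = (G / (1j * w)) / (1 + G / (1j * w))   # = G/(G + jw)
        phase = np.angle(factor)
        print(f"G={G:6.1f}  real={factor.real:.4f}  imag={factor.imag:+.4f}  "
              f"|tan(phase)|={abs(np.tan(phase)):.4f}  w/G={w/G:.4f}")
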

For a given frequency of disturbance, as the gain factor increases the
perceptual signal approaches equality to the reference signal, and the
effect of the disturbance on it decreases toward zero.

If you like, you can work out a comparable expression for the output o,
and then calculate the ratio of o to d. It will approach -1, with the
imaginary part going to zero at high gains or low frequencies.
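
A rough time-domain sketch of that check (all the numbers are arbitrary)
for an integrating output function and a sinusoidal disturbance:

    # Sketch: with an integrating output function and reasonably high gain,
    # the output o should run nearly equal and opposite to a sinusoidal
    # disturbance d, i.e. the ratio of o to d approaches -1.
    import numpy as np

    G = 50.0                          # integrator gain
    f = 0.2                           # disturbance frequency (Hz)
    dt = 0.001
    t = np.arange(0.0, 60.0, dt)
    d = np.sin(2 * np.pi * f * t)

    r = 0.0
    o = 0.0
    o_trace = np.empty_like(t)
    for k in range(len(t)):
        p = o + d[k]                  # perception
        o += dt * G * (r - p)         # pure integrator output function
        o_trace[k] = o

    keep = t > 10.0                   # discard the start-up transient
    amp_ratio = np.sqrt(np.mean(o_trace[keep]**2) / np.mean(d[keep]**2))
    corr = np.corrcoef(o_trace[keep], d[keep])[0, 1]
    print(f"|o|/|d| = {amp_ratio:.3f}   corr(o, d) = {corr:.3f}")
    # Expect an amplitude ratio near 1 and a correlation near -1.
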

What you say about correlations applies only to the output integrator,
not the whole system. The correlation of the error signal with the output
of the integrator will always be zero. However, a correlation lagged 90
degrees will be perfect, which is not true for any other form of output
function. And the main point here is that the system as a whole does NOT
behave as a pure integrator, but in fact approaches the behavior of a
pure proportional system as the gain increases. Then everything we have
been saying about correlations holds true.
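
That property of the pure integrator is easy to verify for a
single-frequency error signal. A small sketch (the frequency is
arbitrary):

    # Sketch: for a sinusoidal error signal, the correlation between the error
    # and the output of a pure integrator is essentially zero, while the same
    # correlation computed with a quarter-period (90-degree) lag is near 1.
    import numpy as np

    f = 0.5                                    # error-signal frequency (Hz)
    dt = 0.001
    t = np.arange(0.0, 20.0, dt)               # a whole number of periods
    e = np.sin(2 * np.pi * f * t)              # error signal
    o = np.cumsum(e) * dt                      # pure-integrator output

    print("corr(e, o)        =", round(np.corrcoef(e, o)[0, 1], 3))

    lag = int(round(0.25 / f / dt))            # quarter period, in samples
    print("corr(e, o lagged) =", round(np.corrcoef(e[:-lag], o[lag:])[0, 1], 3))
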

Hope my derivations don’t contain mistakes – it’s been a long time since
I last tried to do that stuff with complex numbers, though I had a little
practice in corresponding with Flach.

Best,

Bill P.

Re.: PCT-Specific Methodology
[Martin Taylor 2006.12.16.15.41]

[From Bill Powers (2006.12.16.0830
MST)]

Martin Taylor 2006.12.16.10.15 –

Without commenting directly on the
discussion (which I think a bit misinformed) of “traditional”
statistical methods, I suggest that if you are interested in the
question of correlation between observable disturbance and observable
control system output, you have a look at
<http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html>
(Correlation between perception and disturbance)

I remember seeing the original, but don’t remember why I didn’t
comment on it. I think you got misled by working at too high a level
of abstraction and not really working out the details.

In the first place, your analysis shows no sign of feedback effects,
perhaps because you solved for an independent variable in developing
your first starting point.

Of course it uses feedback effects. It’s the usual derivation
around the control loop, the same derivation you used to contradict
mine. We both arrive at p = Gr - Gp + d, which we then develop in two
different ways. I simply move the “G” terms to the other
side of the equal sign, giving d = p + Gp - Gr, whereas you combine
the “p” terms, to give

p = [G/(1 + G)] r + [1/(1 + G)] d,

They are exactly the same thing, aren’t they?

So the perceptual signal is a dependent
variable which depends on just two independent variables, r and
d.

Exactly. You like mathematical derivations… so, given the
equation you arrive at (as one does with the usual derivation that I
used), d is equally a function of p, G, and r. Equally, r is a function
of d, G, and p. You know any three of them and you can derive the
fourth.

Note that G/(1+G) approaches 1 as G
becomes much greater than 1. The 90-degree phase shift which you say
reduces correlations to zero is greatly modified by this expression
(see below for the case in which G is an integrator).

No it isn’t. The ONLY reason for the 90 degree phase shift is the
assumption that the output function is a perfect integrator.

Even with the perfect integrator, the
output varies so it remains about equal and opposite to the
disturbance, with a phase shift that varies from zero at very low
frequencies to 90 degrees at very high frequencies where the amplitude
response approaches zero.

The phase shift in question is the phase shift between the error
signal and the output signal. A pure integrator gives a 90 degree
phase shift at ALL frequencies. The integral of a cosine is the
corresponding sine, and vice-versa.
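
That is easy to see numerically: the phase of 1/(jw) is -90 degrees at
every frequency, unlike the whole-loop factor G/(G + jw). A trivial
sketch (the frequencies are arbitrary):

    # Sketch: phase of a pure integrator, 1/(jw), at several frequencies.
    import numpy as np

    for f in (0.01, 0.1, 1.0, 10.0):
        w = 2 * np.pi * f
        print(f"f = {f:6.2f} Hz   phase of 1/(jw) = "
              f"{np.degrees(np.angle(1.0 / (1j * w))):.1f} degrees")
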

The negative feedback makes the frequency
response of the whole system different from the frequency response of
the integrating output function. For one thing, if the time constant
of a leaky-integrator output function is T seconds, the time constant
of a response of the whole system to a disturbance is T/(1+G), where G
is the loop gain.

I have a note near the end about the leaky integrator and its effect on
how the correlation depends on frequency. A leaky integrator is not a
perfect integrator.

Separating the real and imaginary parts, we have

= G^2/(G^2 + w^2) - j * Gw/(G^2 + w^2).

From this we can see that as the integrating factor G increases and as
the frequency decreases (remember that w is 2*pi*frequency), the real
part of the factor G/(1+G) approaches 1. As G increases relative to w,
the imaginary (90-degree phase shifted) part approaches zero.

For the loop as a whole, yes. I think my animated diagram
illustrates this. Actually, you don’t need to go into that kind of
complex arithmetic analysis. All you need is the knowledge that the
Laplace transform is linear, and you can operate on the transforms as
though they were simple scalar variables.
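
For instance, a short symbolic sketch that treats the transforms as
scalar algebra (the symbol names are arbitrary, and the output function
is taken as g/s, a pure integrator):

    # Sketch: solve the loop equation for P, treating the transforms P, R, D
    # and the output function g/s as ordinary scalar quantities.
    import sympy as sp

    s, g = sp.symbols('s g')
    P, R, D = sp.symbols('P R D')              # transforms of p, r, and d

    Gs = g / s                                 # pure-integrator output function
    solution = sp.solve(sp.Eq(P, Gs * (R - P) + D), P)[0]

    print(sp.simplify(solution))               # something like (D*s + R*g)/(g + s)
    print(sp.simplify(sp.diff(solution, R)))   # transfer from r to p: g/(g + s)
    print(sp.simplify(sp.diff(solution, D)))   # transfer from d to p: s/(g + s)
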

What you say about correlations applies
only to the output integrator, not the whole system.

No. When I talk about the correlations across the output
integrator, that’s what I am talking about. If I talk about correlations
between d and o, or between p and r, or whatever, those are the
correlations I am talking about (in the context of the feedback loop,
of course).

The correlation of the error signal
with the output of the integrator will always be zero. However, a
correlation lagged 90 degrees will be perfect,

You can’t “lag” the correlation 90 degrees, except at
one frequency. The correlation is time-domain, and you can only lag it
by delta t. There will be a frequency (an infinite set of them,
actually) for which a given delta t gives a lagged cross-correlation of
unity, but that’s a complete red herring in this discussion.
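
A small numerical sketch of that point (the frequencies and the lag are
arbitrary): with a single-frequency signal, a quarter-period lag gives a
lagged cross-correlation near 1 across a pure integrator; mix in a second
frequency and no single lag does the job.

    # Sketch: lagged cross-correlation between a signal and its integral, for a
    # fixed lag equal to a quarter period of the first frequency component.
    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 40.0, dt)

    def lagged_corr(x, y, lag):
        # correlation of x(t) with y(t + lag*dt)
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]

    for freqs in ([0.5], [0.5, 1.3]):
        e = sum(np.sin(2 * np.pi * f * t) for f in freqs)   # error signal
        o = np.cumsum(e) * dt                               # integrator output
        lag = int(round(0.25 / freqs[0] / dt))              # quarter period of first component
        print(freqs, " lagged corr =", round(lagged_corr(e, o, lag), 3))
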

Hope my derivations don’t contain
mistakes – it’s been a long time since I last tried to do that stuff
with complex numbers, though I had a little practice in corresponding
with Flach.

I hope mine doesn’t, too. I think you have quite misunderstood
it. I could be quite wrong, but when I went through it again this
morning, I didn’t find a mistake. Your comments haven’t (yet) helped
me to find a mistake.

Martin

Re.: PCT-Specific Methodology
[Martin Taylor 2006.12.17.22.22]

[From Bill Powers (2006.12.17.1530
MDT)]
Martin Taylor 2006.12.17.14.16 –
Not to drag this out, but I’m very confused about something
you seem to be saying. In a previous post you said the correlation
between two vectors was the cosine of the angle between them. In this
post you say

In the cited analysis, when the reference
signal is uniformly zero, the maximum correlation between the
perceptual signal and the disturbance is 1/CR, where CR is the control
ratio – (fluctuations in the disturbance)/(fluctuations in the
perceptual signal).

This makes it sound as if a correlation is just a partial
derivative.

How do you get such an implication out of this? CR is an
amplitude ratio that has a potential range between 1 and infinity,
and you know the derivation of it quite precisely, since you’ve been
criticizing that same derivation with some authority over the last
couple of days.

Can’t you have a correlation
between x and y of 0.9 even when the slope of the relationship is y =
0.1x? Have we been talking about different things again?

Of course you can. I fail to see where you are coming from.

Are you OK with the result that “the absolute value of the
minimum correlation between d and o must be sqrt(1 - 1/CR^2)” if
there’s no loop transport delay, no noise, and the output function is
a pure integrator?
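
In case it helps, here is a rough simulation sketch of that situation
for a single-frequency disturbance (the gain and frequency are
arbitrary). It measures CR from the simulated signals and compares the
measured correlations with 1/CR and sqrt(1 - 1/CR^2):

    # Sketch: loop with a pure-integrator output function, zero reference, and
    # a sinusoidal disturbance.  CR = (fluctuations in d)/(fluctuations in p).
    import numpy as np

    G = 10.0                           # integrator gain
    f = 0.3                            # disturbance frequency (Hz)
    dt = 0.001
    t = np.arange(0.0, 100.0, dt)
    d = np.sin(2 * np.pi * f * t)

    o = 0.0
    p_trace = np.empty_like(t)
    o_trace = np.empty_like(t)
    for k in range(len(t)):
        p = o + d[k]
        o += dt * G * (0.0 - p)        # reference r = 0
        p_trace[k] = p
        o_trace[k] = o

    keep = t > 20.0                    # drop the start-up transient
    CR = np.std(d[keep]) / np.std(p_trace[keep])
    corr_pd = np.corrcoef(p_trace[keep], d[keep])[0, 1]
    corr_od = np.corrcoef(o_trace[keep], d[keep])[0, 1]

    print(f"CR = {CR:.3f}")
    print(f"corr(p, d)   = {corr_pd:.3f}   vs 1/CR              = {1/CR:.3f}")
    print(f"|corr(o, d)| = {abs(corr_od):.3f}   vs sqrt(1 - 1/CR^2) = {np.sqrt(1 - 1/CR**2):.3f}")
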

Martin