Causality does not imply correlation

[From Rick Marken (2009.05.27.0840)]

Bruce Abbott (2009.05.27.0850 EDT)--

If I may jump in here . . .

Please!

RM: It turns out that if you increase the gain of the system (change the
Gain number from what it is to 90, say) the i-o correlation _does_ go to
0.0.

BA: If the output variations completely cancelled the input variations, the
variance in the input would be zero and the correlation undefined.

I meant that it goes toward zero but doesn't quite get there. The
correlation I got with high gain (and no noise added) was .015. That's
zero in my book. It's certainly a surprisingly low correlation between
variables where one (i) is supposed to be causing the other (o).

But real
systems do not perfectly cancel the disturbance; the remainder is the
portion of variation in the disturbance that is uncorrelated (or nearly so)
with the output.

Maybe. But we're correlating i and o so the remaining tiny
correlation between i and o is the remnants of variance in both i and
o that are correlated. Since the value of i is determined by both d
and o it seems unlikely that the tiny correlation between i and o is
determined only by the portion of variation in the disturbance that is
uncorrelated with output. It's got to be the portion of variation in
d+o that is uncorrelated with o.

RM: This is kind of a cool observation because it suggests that a second,
unknown disturbance is not needed in order to observe the lack of
correlation between i and o in a closed loop system; it just has to be a
high gain closed loop control system.
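Rick's observation is easy to check with a few lines of simulation. The sketch below is not his actual spreadsheet model -- the integrating output function, the smoothed random disturbance, and all the constants are made-up assumptions for illustration (only the gain of 90 echoes his example) -- but it reproduces the effect: with high gain and no added noise, r(i,o) comes out near zero even though r(d,o) stays near -1.

```python
import math
import random

def pearson(x, y):
    # Plain Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def run_loop(gain, steps=20000, dt=0.01):
    # Closed loop with an integrating output function:
    #   i = d + o           (input = disturbance plus output)
    #   do/dt = -gain * i   (output integrates the negated input)
    # The disturbance d is a slow, smoothed random drift. No noise is
    # added anywhere inside the loop.
    random.seed(1)
    o, d = 0.0, 0.0
    d_hist, i_hist, o_hist = [], [], []
    for _ in range(steps):
        d += 0.05 * (random.uniform(-1, 1) - 0.01 * d)
        i = d + o
        o += -gain * i * dt
        d_hist.append(d)
        i_hist.append(i)
        o_hist.append(o)
    return d_hist, i_hist, o_hist

d_hi, i_hi, o_hi = run_loop(gain=90.0)
print("r(i,o) =", pearson(i_hi, o_hi))  # near zero, though i "causes" o
print("r(d,o) =", pearson(d_hi, o_hi))  # near -1: output mirrors disturbance
```

With high gain the input is reduced to a tiny residual that tracks roughly the rate of change of d, while the output tracks -d itself, and those two are nearly uncorrelated.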

Kennaway added the second, small unknown random disturbance to model that
portion of a disturbance that is unknown to the experimenter (in a real
system being measured, as opposed to a simulation where the disturbance is
known perfectly). It lowers the correlations between disturbance and output,
and between disturbance and input, where "disturbance" is the known portion.
The control system "sees" the total disturbance and behaves accordingly,
producing the usual low correlation between input and output. So yes, adding
the second, unknown disturbance is not required in order to demonstrate a
low correlation between input and output. Raising the gain (up to a point)
makes the system more responsive to error, so the output more closely
mirrors the disturbance function, increasing the portion of variation in
output that is correlated with the disturbance and decreasing the
correlation between input and output.

Good points. But I myself had always thought that the startling
finding of a low correlation between i and o -- a finding that
destroys the basis of experimental psychology, by the way, which is
based on an open loop model that assumes a high correlation from IV
(disturbance, d) to sensory input (i) and from i to DV (o) --
required the addition of noise to the model. So did Bill, I believe,
and he was the one who first showed that r.di ~ 0, r.io ~ 0 and r.do ~
-1 as a challenge to the open loop model of psychology. My finding of
a near zero correlation between i and o with no noise (if this result
holds up to further scrutiny) shows that you don't need to assume a
noise component in a closed-loop system in order to see that looking
for one way relationships between variables in such a system (which is
what conventional methods do) is a fool's errand.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

__________ NOD32 4106 (20090526) Information __________

This message has been checked by NOD32 Antivirus.

[From Rick Marken (2009.05.27.1320)]

Two quick notes, a correction and a celebration. First the correction:
In an earlier post I said:

Rick Marken (2009.05.27.0840)--

Since the value of i is determined by both d
and o it seems unlikely that the tiny correlation between i and o is
determined only by the portion of variation in the disturbance that is
uncorrelated with output. It's got to be the portion of variation in
d+o that is uncorrelated with o.

I think that last sentence should say: It's got to be the tiny portion
of variation in d+o that is _correlated_ with o, since we're talking
about that small correlation between i and o in the high gain case.

Now the celebration: Great article on honeybee navigation, Bruce! Nice
find. Be fun to try to model some of this stuff, no?

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[From Bill Powers (2009.05.27.1744 MDT)]

Rick Marken (2009.05.27.1320) --

Re: Martin's comments about zero correlation of a variable with its integral. I think your answer is correct, that the closed loop changes things. The main thing it changes is the apparent time constant of the loop. An integrating system in a feedback loop looks like a leaky integrator in an open loop. The time constant is determined by the loop gain, getting shorter as the gain goes up. A short time constant means that the output follows the disturbance more quickly than it would if there were only a pure integrator present without the feedback. The phase relationships inside the loop are also affected by the feedback. I can't get any more specific than that because (a) I am part of an international conspiracy sworn not to reveal the details, and (b) I haven't done the math.

Best,

Bill P.


[From Bruce Abbott (2009.05.28.0905 EDT)]

Rick Marken (2009.05.27.1320) --

Now the celebration: Great article on honeybee navigation, Bruce! Nice
find. Be fun to try to model some of this stuff, no?

Yes, if we can acquire an adequate set of data to test the model against. In
a follow-on article, Srinivasan (with S. W. Zhang) provides evidence that
the system involved in centering the honeybee as the bee flies between the
walls of a tunnel is insensitive to the direction of motion (!). I'm a bit
confused about this: it implies that a set of vertical stripes moving
backward on one wall and forward at the same speed on the other wall would
result in the bee centering itself in the tunnel. But maybe I'm
misunderstanding Srinivasan's findings. You can find the article via
Google Books, in a book entitled "Orientation and Communication in
Arthropods" by Miriam Lehrer. The article
is available at the link on this page entitled "Visual Control of Honeybee
Flight." The article provides a nice diagram of the experimental setup,
which displayed a checkerboard pattern via a CRT screen as the visual
pattern for one wall of the tunnel. The main purpose of this study was to
identify differences between the mechanisms involved in the centering
response and the "optomotor response." The latter involves the bee turning
to stabilize an image that otherwise would be drifting across the visual
field, thus helping the bee to maintain a straight course against
disturbances that induce a turning motion and thus produce the visual drift.

Whatever the specifics of the model turn out to be, one thing is already
clear from this research -- insects control their perceptions.

Bruce A.


[From Bill Powers (2009.06.09.0528 MDT)]

Martin Taylor 2009.06.08.16.27 –

BP to Rick, earlier: Re: Martin’s comments about zero correlation of a
variable with its integral. I think your answer is correct, that the
closed loop changes things. The main thing it changes is the apparent
time constant of the loop. An integrating system in a feedback loop looks
like a leaky integrator in an open loop. The time constant is determined
by the loop gain, getting shorter as the gain goes up. A short time
constant means that the output follows the disturbance more quickly than
it would if there were only a pure integrator present without the
feedback. The phase relationships inside the loop are also affected by
the feedback. I can’t get any more specific than that because (a) I am
part of an international conspiracy sworn not to reveal the details, and
(b) I haven't done the math.

[MT 2009.06.08]

I wrote the following before Bill Powers’s unexpected repudiation
of the normal equations used to analyze control loops, contained in his
second sentence above. I thought about it for a while, since on the face
of it, Bill’s comment that “the closed loop changes things”
seemed to contradict the basic assumption of the kind of loop analysis
usually used on CSGnet, namely that qi = qo + qd = G(qr-qi) + qd where G
is the output function, and all other pathways are assumed to be unity
multipliers.

If qi is qo + d and qo is integral(qi), which equation are we to believe
if d = 0? With d = 0, we have qi being identical to qo, but we have qo
being the integral of qi. The answer is in the differential
equations.

From the assumption that
when qr = 0 then qo = G(qi), we get the usual result for loop gain and so
forth. That is the basis on which I made my calculations about the
relation between control ratio and the correlation between p (= qi) and
qd. But if Bill is saying that this is no longer to be considered to be
so, then all bets are off. So I have to assume Bill meant something
different, and the usual equations are still considered permissible on
CSGnet.

I don’t think I’m repudiating anything – just speaking a bit loosely,
which sometimes has the same effect.

The equations in a simple form with reference signal = 0 are:

qo = -gain* INTEGRAL(qi)

qi = qo + d

Substituting, we have

qo = -gain*INTEGRAL(qo + d) or, differentiating,

d(qo)/dt = -gain*qo - gain*d, and with zero disturbance,

d(qo)/dt = -gain * qo.

Let qo = exp(-kt).

d(qo)/dt = -k* exp(-kt);

therefore

-k*exp(-kt) = -gain*exp(-kt), or

k = gain.

The time constant is 1/k, so it varies inversely with the loop gain k.
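Bill's result that k = gain (so the time constant is 1/gain) can be checked numerically by integrating the last differential equation directly. This is just an illustrative Euler sketch with arbitrary gains, not anything from the thread:

```python
import math

def decay(gain, t_end=1.0, dt=1e-4):
    # Euler-integrate d(qo)/dt = -gain * qo, starting from qo(0) = 1
    qo = 1.0
    for _ in range(int(t_end / dt)):
        qo += -gain * qo * dt
    return qo

# After one time unit, qo should have fallen to exp(-gain), confirming
# that the decay rate k equals the gain:
for gain in (1.0, 5.0):
    print(gain, decay(gain), math.exp(-gain))
```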

If the output is a pure integrator, as you say, the input qi will be out
of phase with the output by 90 degrees at all frequencies and the
correlation will be zero because the XY term in the calculation will
average zero. The gain will make no difference. However, if we use a
leaky integrator, there will be less than 90 degrees of phase shift and
some correlation will appear. At low enough frequencies, qo will
vary almost in phase with qi. The amount of “leak” determines
how low that frequency has to be.
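The contrast between the pure and leaky integrator can be illustrated with a sinusoid. In this sketch the leak rate, frequency, and step size are arbitrary choices, not values from the discussion:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

N = 100000
dt = 2 * math.pi / N                  # exactly one period of sin(t)
x = [math.sin(k * dt) for k in range(N)]

pure, leaky = [], []
acc = y = 0.0
for v in x:
    acc += v * dt                     # pure integrator: 90-degree phase shift
    y += (v - y) * dt                 # leaky integrator (leak rate 1): < 90 degrees
    pure.append(acc)
    leaky.append(y)

print("pure: ", pearson(x, pure))     # ~0: orthogonal, though fully causal
print("leaky:", pearson(x, leaky))    # clearly nonzero
```

At this frequency (equal to the leak rate) the leaky integrator lags by 45 degrees, so the correlation comes out near cos(45°) ≈ 0.7; lowering the frequency pushes it toward 1, as the text says.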

The loop gain affects the phase and amplitude of the relationship between
the disturbance and the output quantity.

MT: Argument: (Rick’s point)
When a perception is controlled, the variation in the perceptual variable
due to the disturbance is much smaller than the variation of the
disturbance. …

So. One factor is that relatively small amounts of noise drastically
reduce the correlation between perception and output when control is
good.

Argument: (My analysis) If any component process or pathway in the loop
decorrelates its input from its output, then variables contributing to
the input to that pathway or process will be decorrelated from the output
of the process.

BP: So far both your argument and Rick’s appear right to me.

Rick says that I have claimed
that the decorrelation is due to a “phase lag due to
integration”. This is true for every Fourier component of the
signal, but then Rick wrongly says: “you can test this by looking
at the lagged correlation between i and o; if it were a phase lag
phenomenon then the lagged correlations should increase as the lag
approached the phase difference.” He is wrong in saying that
this test should work, because the lagged correlation is a time lag, not
a phase lag. If the signals are not sinusoids, you can’t equate a time
lag to a phase lag, because the relation is different for every frequency
component of the signal. You can, however, say that if every component of
the output is uncorrelated (is pi/2 out of phase with) the corresponding
component of the input, then the output signal is uncorrelated with the
input signal. That is what happens with a perfect
integrator.

You are right and so is Rick. True, lagging the correlation will not have
the same effect on phase differences at all frequencies. However, since
in real systems gain decreases with increasing frequency, a lagged
correlation of a real-world integrated signal would increase the
correlation.

I delayed this posting because
Bill said: “I haven’t done the math”, and I thought it
worthwhile to do the math. I do two forms, a general one using the
correlation angles that we finally got sorted out May 25, and a specific
one for a purely sinusoidal disturbance. Although I present it here, the
first form is actually presented in a more general way on the Web page I
have cited:
<http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html>.

If the disturbance is a sinusoid, does it really follow that the
perception is in phase with the disturbance? You seem to imply this by
saying the control ratio is simply abs(d/p). But the output is in phase
with the disturbance (minus 180 degrees), so the input has to be 90
degrees out of phase with the disturbance and so does the perception. It
could be that I’m misreading your notation – the font you use must be
different from what I have available because there are strange symbols
like a boldface caret and boldface letters like h, c, m, and &
here and there, and what is most probably supposed to be an integration
sign comes out as #.

I have a problem with this: “The correlation between any two vectors
is the cosine of the angle between them.” I thought this was true
only if each vector was an ordered array of successive values of one
variable and the angle was between the two vectors in a multidimensional
space, one dimension per value in the arrays. If you say x = f(t) and y =
g(t), where x and y are the magnitudes of two vectors at right angles to
each other (along the X and Y axes), it does not seem to me that the
correlation between them is necessarily the cosine of the angle between
them, which is zero. Either what you say above is false, or you have not
said what you mean to say.

MT: The analysis assumes that
there is no time-lag in any pathway, no noise, and the output function G
is a pure integrator.

  1. Disturbance “d” is an arbitrary waveform

I use the Heaviside operational calculus here.

BP: I think this turns out more or less the same as my manipulations
above, except for the parts about the control ratio.

MT: I think that’s enough for this
message. I hope that everything makes sense, and clarifies the issues
that had been raised in the earlier discussion.

I’m afraid we have different definitions of “clarification.”
The math you use assumes a rather advanced understanding on the part of
those for whom you are trying to clarify things, so for me at least there
is a net loss of clarity.

Got to go.

Best,

Bill P.

[Martin Taylor 2009.06.09.10.44]

[From Bill Powers (2009.06.09.0528 MDT)]

Martin Taylor 2009.06.08.16.27 –

I can’t fully respond to this message without the answers to a couple
of questions, but I can make a couple of comments about parts of it.

The loop gain affects the phase and amplitude of the relationship
between the disturbance and the output quantity.

Yes. The magnitude of this effect was one intermediate result of my
analysis. Or perhaps its conceptual starting point.

Now the questions.

  1. Did the PDF come through with the proper symbols, because if you are
    referring to the PDF when you mention strange symbols such as “#” when
    there should be an integral sign, I can see why you might be making
    comments such as: “If the disturbance is a sinusoid, does it really
    follow that the
    perception is in phase with the disturbance?” On the other hand, if you
    were able to read the PDF symbols, I can’t see why you would make such
    a comment. Were the strange symbols in the main text or in the PDF?

You are right and so is Rick. True, lagging the correlation will not
have the same effect on phase differences at all frequencies. However,
since in real systems gain decreases with increasing frequency, a lagged
correlation of a real-world integrated signal would increase the
correlation.

  2. Are you sure of this, and if so, can you provide or point to a proof
    or demonstration? It’s not intuitively obvious to me that this should
    be the case. I can see that it might be the case for a band-limited
    signal, or possibly for a high-pass filtered signal, but it is not
    obvious to me that it is true for one with a flat spectrum.

Another comment.

If the disturbance is a sinusoid, does it really follow that the
perception is in phase with the disturbance? You seem to imply this by
saying the control ratio is simply abs(d/p).

Control ratio is the ratio of the amplitudes, or at least it has been
in our discussions over the years. Phase doesn’t come into it.

MT: I think that’s enough for this
message. I hope that everything makes sense, and clarifies the issues
that had been raised in the earlier discussion.

I’m afraid we have different definitions of “clarification.” The math
you use assumes a rather advanced understanding on the part of those
for whom you are trying to clarify things, so for me at least there is
a net loss of clarity.

Might that be because of the problem with the “Symbol” font? I don’t
for a minute believe the math is really too advanced for you. As you
say, the Heaviside notation does the same as your own. I just used it
for typographical clarity. Everything else is, I think, familiar to you
other than the correlation angle part, which you correctly describe:
“‘The correlation between any two vectors is the cosine of the angle
between them.’ I thought this was true only if each vector was an
ordered array of successive values of one variable and the angle was
between the two vectors in a multidimensional space, one dimension per
value in the arrays.” All vectors are such
arrays of successive values, so this applies to any vector pair whose
elements are defined on the same set of dimensions.
But you followed this by saying: "If you say x = f(t) and y = g(t),
where x and y are the magnitudes of two vectors at right angles to each
other (along the X and Y axes), it does not seem to me that the
correlation between them is necessarily the cosine of the angle between
them, which is zero." The problem is that f(t) represents a
function of time, not a magnitude. If f(t) and g(t) are uncorrelated,
then they must be plotted in the vector space at right angles to one
another. If two time sequences are found to be at right angles to each
other when plotted in the space, then they are indeed uncorrelated. The
relationship works both ways, since the calculation of the cosine of
the angle from the sequence of sample values is identical to the
calculation of the correlation. When the functions are continuous, the
space of description becomes infinite-dimensional, but the space that
contains the two vectors is still two-dimensional; to calculate the
angle or the correlation one uses integrals instead of sums, but the
form of the calculations doesn’t change.
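Martin's identity and Bill's objection can be reconciled numerically: the Pearson correlation of two sample sequences equals the cosine of the angle between their mean-centered sample vectors in N-dimensional space -- centering is the step the x/y example leaves out. A sketch with arbitrary made-up data:

```python
import math
import random

def pearson(x, y):
    # Plain Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def cosine(u, v):
    # Cosine of the angle between two vectors of samples
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(2)
x = [random.gauss(0, 1) for _ in range(1000)]
y = [0.5 * a + random.gauss(0, 1) + 3.0 for a in x]  # offset shows why centering matters

mx, my = sum(x) / len(x), sum(y) / len(y)
xc = [a - mx for a in x]
yc = [b - my for b in y]

print(pearson(x, y))   # equals...
print(cosine(xc, yc))  # ...the cosine of the angle between the CENTERED vectors
print(cosine(x, y))    # but not between the raw (uncentered) vectors
```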
I do have something to say about the step function disturbance. It was
important to me, because it was your message about it that led to my
initial delay in responding, thinking “Could Bill and Rick be right
that closing the loop actually changes the function relating o to p?
Certainly the exponential decay of the perception as the new
disturbance value is compensated does not seem to lead to a zero
correlation between perception and output.” That concern was my
reason for wanting to do the math, and for buying the MathMagic
typesetting software to show the sinusoid analysis after I had done the
math. Then after I had done the sinusoidal analysis, I was able to
figure out the problem with the step function.

I had written a paragraph about the step function disturbance for the
message to which you are responding, but I thought it better not to
include it in the initial “mathematical message” for fear of
complicating the issue. The bottom line, however, is that in this
situation, both “control ratio” and “correlation” are meaningless, both
changing continuously as a function of time since the event. Over
infinite time, control ratio becomes infinite, and correlation
infinitesimal. It should be true that their product remains at 1.0 for
all times after the event, but I haven’t looked at that possibility. I
had looked at the possibility of analyzing a repetitive square or
triangular wave, but didn’t do it, because I thought that since they
have perfectly ordinary Fourier transforms, the analysis of the sine
wave was sufficient. The only other disturbance type of interest is
probably random noise, and I haven’t tried to compute CR and
correlation explicitly for that.

At one point in drafting the message, I did include more text and
another correlation-angle figure dealing with noise either in the
perceptual pathway or as an added disturbance, but again I thought the
added complication might detract from the clarity of the main message.
I might submit them as a separate message after the sinusoidal and
generic forms seem to be clear.

Much more relevant to real control loops are the effects of
nonlinearities, time delays, and spectral characteristics, which
interact. When they come into play, the relation
“correlation(perception-disturbance) <= 1/CR” is no longer always
true.

Martin

[From Bill Powers (2009.05.29.0831 MDT)]

Bruce Abbott (2009.05.28.0905 EDT) –

BA: In a follow-on article, Srinivasan (with S. W. Zhang) provides
evidence that the system involved in centering the honeybee as the bee
flies between the walls of a tunnel is insensitive to the direction of
motion (!). I’m a bit confused about this: it implies that a set of
vertical stripes moving backward on one wall and forward at the same
speed on the other wall would result in the bee centering itself in the
tunnel. But maybe I’m misunderstanding Srinivasan’s findings.

BP: I think the author may have misunderstood. Here is Fig. 1 from the
article.

(Attachment 1cda98.jpg is missing)

···


The model I suggest has the bee trying to equalize the perceived angular
velocity of the two gratings. Notice that when the lower grating moves
(lower arrow) in a direction opposite to the direction of flight (upper
arrow in gray band), the gray band showing position in the tunnel moves
toward the upper wall. That would reduce the angular velocity of
the moving (lower) grating and increase that of the stationary (upper)
one, as it should. To correct the imbalance of left and right angular
velocities, the bee would have to move toward the upper wall, as it
does.

Perhaps the authors slipped in their bee-centered interpretation and
thought that the grating velocity of the lower wall, because of being
reversed, would subtract from the flight velocity. Actually it increases
the apparent visual angular velocity of the grating as seen by the bee
and should be added to the flight velocity in computing angular velocity
of the lower wall in the figure.
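The model suggested here -- the bee equalizing the perceived angular velocities of the two gratings -- can be sketched as a simple control loop. Every number below (tunnel width, speeds, gain) and the reciprocal-of-distance form for perceived angular velocity are made-up illustration values, not taken from Srinivasan's article:

```python
# Hypothetical sketch: the bee controls the DIFFERENCE between the
# perceived angular velocities of the two walls, moving laterally to
# drive that difference to zero.

def simulate(v_flight=1.0, s_lower=0.0, width=2.0, gain=0.5,
             steps=5000, dt=0.01):
    y = width / 4  # lateral position: 0 = lower wall, width = upper wall
    for _ in range(steps):
        # Apparent angular velocity of each wall's grating. A lower
        # grating moving AGAINST the flight direction (s_lower > 0)
        # ADDS to the apparent speed of that wall, as argued above.
        upper_ang = v_flight / max(width - y, 1e-6)
        lower_ang = (v_flight + s_lower) / max(y, 1e-6)
        error = lower_ang - upper_ang
        y += gain * error * dt  # move away from the faster-looking wall
    return y

print(simulate(s_lower=0.0))  # stationary walls: settles at the center (1.0)
print(simulate(s_lower=1.0))  # reversed lower grating: settles nearer the upper wall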

Best,

Bill P.

[Martin Taylor 2009.06.08.16.27]

Combining responses to several messages, mostly identified by initials
and date,


[From Rick Marken (2009.05.26.2140)]
It turns out that if you increase the gain of the system (change the
Gain number from what it is to 90, say) the i-o correlation _does_ go
to 0.0. So Martin is right in his guess but, I think, for the wrong reasons. It's
not because of a phase lag due to integration; you can test this by
looking at the lagged correlation between i and o; if it were a phase
lag phenomenon then the lagged correlations should increase as the lag
approached the phase difference. But, in fact, the correlations never
get very far from 0 as lag increases. I think the 0 correlation
results from the fact that, with high gain, the output variations are
completely canceling the perceptual variations that cause them;
the confounding causal relationships cancel each other out so neither
dominates in the i-o correlation.
This is kind of a cool observation because it suggests that a second,
unknown disturbance is not needed in order to observe the lack of
correlation between i and o in a closed loop system; it just has to be
a high gain closed loop control system.

[From Bill Powers (2009.05.27.1744 MDT)] commenting on Rick Marken
(2009.05.27.1320) –
Re: Martin’s comments about zero correlation of a variable with its
integral. I think your answer is correct, that the closed loop changes
things. The main thing it changes is the apparent time constant of the
loop. An integrating system in a feedback loop looks like a leaky
integrator in an open loop. The time constant is determined by the loop
gain, getting shorter as the gain goes up. A short time constant means
that the output follows the disturbance more quickly than it would if
there were only a pure integrator present without the feedback. The
phase relationships inside the loop are also affected by the feedback.
I can’t get any more specific than that because (a) I am part of an
international conspiracy sworn not to reveal the details, and (b) I
haven't done the math.
[MT 2009.06.08] -----------
I wrote the following before Bill Powers’s unexpected repudiation of
the normal equations used to analyze control loops, contained in his
second sentence above. I thought about it for a while, since on the
face of it, Bill’s comment that “the closed loop changes things” seemed
to contradict the basic assumption of the kind of loop analysis usually
used on CSGnet, namely that qi = qo + qd = G(qr-qi) + qd where G is the
output function, and all other pathways are assumed to be unity
multipliers. From the assumption that when qr = 0 then qo = G(qi), we
get the usual result for loop gain and so forth. That is the basis on
which I made my calculations about the relation between control ratio
and the correlation between p (= qi) and qd. But if Bill is saying that
this is no longer to be considered to be so, then all bets are off. So
I have to assume Bill meant something different, and the usual
equations are still considered permissible on CSGnet.
Two independent factors are in play when we talk about the correlation
between perception and output. Rick has been talking about one of them,
while I have been talking about the other. Let me try to sort them out.
Preamble: Every pathway and process in a control loop is causal. Each
pathway and process
works the same whether it is in a control loop or not (this is what
Bill denies above, but I simply cannot see that denial as anything but
a repudiation of everything Bill has taught about how to analyze
control loops, so I have to assume he meant something else, though I
cannot fathom what that something else might be, if it was supposed to
be relevant to the discussion).
For a pathway,
the signal at the output end depends only on the signal at the input
end at some previous moment (or possibly time span, if the pathway acts
like a processor); for a process (such as a perceptual input
function, a comparator, or an output function) the output depends only
on the current state and history of the various inputs to the process.
The fact of control depends on this being at least approximately true.
I say “approximately” because in the real world there is always noise
to be considered, and neural pathways are not consistent when examined
in the finest detail.
Argument: (Rick’s point) When a perception is controlled, the variation
in the perceptual variable due to the disturbance is much smaller than
the variation of the disturbance. The difference is the influence of
the output on the perception. If there were no noise at all in the
system, and if the output were perfectly in phase with the disturbance,
then the disturbance, perception, and output would all be correlated
±1.0. Of course this condition cannot exist in any real world control
system. The situation changes radically
when there is any noise, whether it be in the form of an addition to
the disturbance or anywhere in the loop. Since the “left-over”
uncorrected part of the disturbance is much smaller than the variation
in either the disturbance or the output, the noise can quickly swamp
the part of the perceptual signal due to the disturbance, dramatically
reducing the correlation between perception and disturbance, and hence
between disturbance and output, since if control remains good, the
output continues to match the inverse of the disturbance. Rick is
correct in this.
So. One factor is that relatively small amounts of noise drastically
reduce the correlation between perception and output when control is
good.
Argument: (My analysis) If any component process or pathway in the loop
decorrelates its input from its output, then variables contributing to
the input to that pathway or process will be decorrelated from the
output of the process. A perfect integrator does this perfectly. In the
usual control loop analysis, as above, we have qo = G(qi) (or in the
notation I prefer and will use below, o = G(p)) when the reference is a
constant zero. If G is a perfect integration, then o is orthogonal to
p. It is an example of a completely causal relation (o is completely
causally determined by the history of p and by nothing else other than
the form of G) in which the correlation between the two causally
related variables is zero.
Rick says that I have claimed that the decorrelation is due to a “phase
lag due to integration”. This is true for every Fourier component of
the signal, but then Rick wrongly says: “you can test this by
looking at the lagged correlation between i and o; if it were a phase
lag phenomenon then the lagged correlations should increase as the lag
approached the phase difference.” He is wrong in saying that this
test should work, because the lagged correlation is a time lag, not a
phase lag. If the signals are not sinusoids, you can’t equate a time
lag to a phase lag, because the relation is different for every
frequency component of the signal. You can, however, say that if every
component of the output is uncorrelated (is pi/2 out of phase with) the
corresponding component of the input, then the output signal is
uncorrelated with the input signal. That is what happens with a perfect
integrator.

I delayed this posting because Bill said: “I haven’t done the math”,
and I thought it worthwhile to do the math. I do two forms, a general
one using the correlation angles that we finally got sorted out May 25,
and a specific one for a purely sinusoidal disturbance. Although I
present it here, the first form is actually presented in a
more general way on the Web page I have cited:
<http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html>.
The analysis assumes that there is no time-lag in any pathway, no
noise, and the output function G is a pure integrator.

1. Disturbance “d” is an arbitrary waveform
I use the Heaviside operational calculus here. All that one needs to
know about this is that “D” represents differentiation, 1/D represents
integration, and it is permissible to use the D operator as though it
is an ordinary variable. I don’t know the underlying mathematics of why
this works, but we were taught it in engineering school as a useful and
valid technique. Here, I use it in what is ordinarily a
non-controversial way (though it’s always hard to guess what will turn
out to be controversial on CSGnet :-))
Control loop
p = o+d = G(r-p) + d
Setting r = 0
p = d - Gp
d = (1+G)p
Setting G = 1/D (a pure integrator)
d = (1+1/D)p = ((1+D)/D)p
Dd = p + Dp
Integrating both sides: d = p/D + p
Here we use the correlation angles, and note that the vector
representing the time waveform of Dp is orthogonal to the vector p.
It’s easier to draw the figure than to talk about it, as it is 2-D.

The correlation between p and d is cos θ, where θ is the angle between the d and p vectors:

(Attachment vector_correl_pd.jpg is missing)

correl_pd_sinewave.pdf (20.9 KB)

···

http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html

cos θ = p/d, taking p and d as the vector magnitudes. The control
ratio CR is, by definition, d/p, or, in the other notation, qd/qi.

So, for the conditions stated, the correlation between perception and
disturbance is 1/CR.

To show analysis for the specific example, with a pure sine wave as the
disturbance, I had to buy a math typesetting program, which I did. I
could have used it for the more general form above, but it’s a bit more
cumbersome than I needed for the general derivation. For the sine wave
analysis, the ASCII notation would have been almost unintelligible. I
find that if I try to insert it as an image in-line in the message, it
is very low resolution, and some of the exponents become illegible, so
I’ve attached it as a PDF. Maybe I’ll find a way to create a more
legible in-line form for some later derivation. The bottom line for
this one is that when one does the explicit correlation calculation for
a sine-wave disturbance, the result is again that the correlation
between perception and disturbance is 1/CR.
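As a sanity check on that bottom line, a toy simulation of the same loop (a sketch only, assuming a pure-integrator output, no delays, no noise, and Euler integration; the frequency and step size are arbitrary, and none of this reproduces the PDF derivation itself) can compare the measured correlation between p and d against 1/CR, with CR taken as the RMS ratio of d to p:

```python
import math

dt, T, skip = 0.001, 150.0, 20.0
omega = 2.0                          # disturbance frequency in rad/s (arbitrary)
o, t = 0.0, 0.0
ps, ds = [], []
while t < T:
    d = math.sin(omega * t)          # pure sine-wave disturbance
    p = o + d                        # p = o + d, with r = 0
    if t > skip:                     # discard the start-up transient
        ps.append(p)
        ds.append(d)
    o -= p * dt                      # output function G = 1/D, a pure integrator
    t += dt

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / math.sqrt(sum((u - ma) ** 2 for u in a)
                           * sum((v - mb) ** 2 for v in b))

rms = lambda a: math.sqrt(sum(u * u for u in a) / len(a))
print(corr(ps, ds))        # measured correlation between perception and disturbance
print(rms(ps) / rms(ds))   # 1/CR; for omega = 2 both are near 2/sqrt(5) = 0.894
```

The two printed numbers agree closely, which is the sine-wave result: the correlation between perception and disturbance equals 1/CR.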

Note that both calculations are based on the fact that a waveform is
uncorrelated with its integral (and with its derivative), and use the
assumptions that there are no time delays in the control loop, that
there is no noise, and that all pathways and processes are unit
multipliers except for the output function, which is a pure integrator.

I think that’s enough for this message. I hope that everything makes
sense, and clarifies the issues that had been raised in the earlier
discussion.

Martin

[From Bill Powers (2009.06.09.1350 MDT)]

Martin Taylor 2009.06.09.10.44 –

Now the questions.

  1. Did the PDF come through with
    the proper symbols, because if you are referring to the PDF when you
    mention strange symbols such as “#” when there should be an
    integral sign, I can see why you might be making comments such as:
    “If the disturbance is a sinusoid, does it really follow that the
    perception is in phase with the disturbance?” On the other hand, if
    you were able to read the PDF symbols, I can’t see why you would make
    such a comment. Were the strange symbols in the main text or in the
    PDF?

Here is a screen shot of what I see in the PDF:


This is with the Foxit reader. In Adobe reader 9.0 it’s a little better
for overlap of symbols but the symbols are the same.

Best,

Bill P.

(Attachment 1961dd.jpg is missing)

Bill Powers wrote:

[From Bill Powers (2009.06.09.1350 MDT), quoted in full above]

Very odd. It’s not the Symbol font characters that go wrong – at least
not all of the wrong characters are in symbols font. Ordinary brackets
( and ) become bold c and m most places but not all, and in a couple of
places they become bold ^ and h. The two “&” signs are right arrows
with a double shaft like =>.

I enquired of the software developers, and they suggested a method of
including the image in-line. I’ll try it below: … it didn’t work.
I’ve sent another enquiry, and asked about the PDF problem.

Martin

(Attachment Re Causality does not imply co.jpg is missing)

···

Bill Powers wrote:

[From Bill Powers (2009.06.09.1350 MDT), quoted in full above]

With the help of the MathMagic developers, I think the problem has been
cracked. Here it is, in-line. I hope it shows up nicely for you.

Martin

(Attachment Re Causality does not imply co1.jpg is missing)

(Attachment correl_pd_sinewave.jpg is missing)