Causality does not imply correlation

[Martin Taylor 2009.05.25.00.20]

[From Bill Powers (2009.05.24.1720 MDT)]

[Martin Taylor 2009.05.24.14.52]

Are we talking about the same thing? I'm assuming your "i" is perception.

"i" is the "input" to the controlled variable -- that is, it's the disturbance as we usually define it. I think this happened because of using someone else's equation and preserving the notation. Richard Kennaway introduced "d2", a second disturbance of the CV that is not taken into account but which causes random fluctuations in the controlled variable and destroys the perfect correlation otherwise expected.

Thanks. That explains everything!

Sorry for introducing confusion beyond the normal quota :-)

Martin

[From Rick Marken (2009.05.25.0915)]

Martin Taylor (2009.05.24.14.52)--

I said 1/CR is the maximum correlation you should get, if the output
function is a pure integrator.

Are we talking about the same thing? I'm assuming your "i" is
perception. My calculations were for the maximum correlation between
disturbance and perception.

OK, so 1/CR is the maximum correlation between d and i (i is the
perceptual variable in my model). Here are the results of a non-leaky
integrator model:

1/CR .86 .16 .02 .004 .0002

r.di -.42 -.36 -.34 -.31 -.24

I guess you could say that r.di never gets above 1/CR, but I don't
think that really tells you much about the relationship between r.di
and quality of control. And, besides, I thought it was -1/CR that is
the maximum correlation; if that's true and "maximum correlation"
means most positive, then the first values (which would be -.86 and
-.42) would violate the rule, since -.42 is higher than -.86. If,
however, maximum correlation means the absolute size of the
correlation, then all the values above violate the rule, whether
you're using 1/CR or -1/CR.

It seems very odd to me that, if "i" is indeed perception, it should have
such a high correlation with output. If the output function is an
integrator, then o is uncorrelated with e (error). If the reference value
is fixed, e is (-p + constant), so p is uncorrelated with o. If i is p, then
r.io should also be 0.0. Something is fishy here, and I'm thinking that
maybe "i" is not perception (or its modelling equivalent, sensory input).

Yes, i is definitely the perceptual variable; it's what is compared to
the reference to produce the error that is integrated into the
output. But the fact is the correlation between i and o that I get from
the model is very different depending on the type of model I use. The
results for a pure, non-leaky integrator model controlling at a level
such that CR = 247 (1/CR = .004) look like this:

r.io 0.39
r.do -0.999
r.di -0.36

These are the results with no noise. The moderate positive correlation
between i and o is quite a surprise. This i - o correlation is not
changed much at all when you look at lagged correlation (output
following perception). However, when I put a leak into the integration
I get results that look like this:

r.io -0.1071
r.do -0.9998
r.di 0.1285

This is for a system controlling quite well: CR = 2134. Now I get a
weak negative correlation between i and o. By the way, when I do a
partial correlation on the first results above, r.io|d = .728. When I
do it on the second results above, I get r.io|d = .22: a positive i - o
correlation in both cases. Of course, the actual, true relationship
between i and o is o = -1.0 * i. So if partial correlation worked,
r.io|d should be close to -1.0.
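
As a check, the partial correlation r.io|d follows from the three
pairwise correlations by the standard first-order partial-correlation
formula. Plugging in the first set of numbers above reproduces the .728;
this is just a quick Python check on the reported values, not part of
the spreadsheet model:

from math import sqrt

# Pairwise correlations from the pure-integrator run reported above.
r_io, r_id, r_od = 0.39, -0.36, -0.999

# First-order partial correlation of i and o, controlling for d.
r_io_given_d = (r_io - r_id * r_od) / sqrt((1 - r_id ** 2) * (1 - r_od ** 2))
print(round(r_io_given_d, 3))  # ~0.73, matching the reported .728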

What all this is telling me is that using correlational methods to
analyze the causal links in a closed loop system is just useless. The
reason is, I think, very simple. The observed relationship between i
and o is part of a closed loop: i is having an effect on o while o is
having an effect on i. When we look at the correlation between i and o
we tend to look at it through S-R theory glasses and think of it as
reflecting something about the effect of i on o (because, after all,
perception must be guiding action in a control task). But this is just
a perceptual bias. The correlation between i and o also reflects the
simultaneous effect of o on i. So the i - o correlation you observe in
a closed loop task confounds the forward effect of i on o and the
backward effect of o on i. The actual i-o correlation that is
observed, whether positive, negative, large, or small, depends on the
nature of the i-->o _and_ o-->i connection of the closed loop system
involved.

I guess my basic conclusion from this long and somewhat frustrating
analysis is kind of what I knew already but didn't really understand
quite as clearly as I do now: you simply cannot use open-loop models
to correctly understand the behavior of a closed loop system; and
regression is an open loop model and people are closed loop systems.

Now I want to get back to using multiple regression (MR) to analyze
the results of my Mind Reading demo;-) (What I'm doing on that is an
appropriate use of MR because I am not using it to study causal
relationship in the system; I'm just using it to see which of several
somewhat inter-correlated disturbances is being resisted).

Best

Rick


[From Bill Powers (2009.05.25.1106 MDT)]

Rick Marken (2009.05.25.0915) --

Can you guys remind me what CR means?

Best,

Bill P.


[Martin Taylor 2009.05.25.13.30]

[From Bill Powers (2009.05.25.1106 MDT)]

Rick Marken (2009.05.25.0915) --

Can you guys remind me what CR means?

In my analysis, it is the ratio of the amplitude of the disturbance to the amplitude of the perceptual variable (standard deviation, range, or whatever linear variable). The amplitude of the disturbance is what the amplitude of the perceptual variable would be in the absence of control. It's not a power ratio in my analysis. I hope Rick is using the same. His results seem quite incompatible with my analysis, and I'd like to find out why.

Martin


[From Rick Marken (2009.05.25.1115)]

Martin Taylor (2009.05.25.13.30)

Bill Powers (2009.05.25.1106 MDT)]

Rick Marken (2009.05.25.0915) --

Can you guys remind me what CR means?

In my analysis, it is the ratio of the amplitude of the disturbance to the
amplitude of the perceptual variable (standard deviation, range, or whatever
linear variable).

I used the ratio of disturbance variance to perceptual signal
variance. If your analysis is based on sds rather than variances then
the appropriate values of CR for my analysis are the square root of
the values I posted. So instead of:

1/CR .86 .16 .02 .004 .0002

r.di -.42 -.36 -.34 -.31 -.24

it should be

1/CR .92 .4 .14 .06 .014

r.di -.42 -.36 -.34 -.31 -.24
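
The conversion is just a square root, since a ratio of standard
deviations is the square root of the corresponding ratio of variances.
A two-line check, not part of either analysis, reproduces the corrected
row:

inv_cr_variance = [0.86, 0.16, 0.02, 0.004, 0.0002]
print([round(v ** 0.5, 3) for v in inv_cr_variance])
# -> [0.927, 0.4, 0.141, 0.063, 0.014], i.e. roughly .92, .4, .14, .06, .014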

I hope Rick is using the same.

Yes, except for the square root part. But that didn't seem to make
much difference.

His results seem quite
incompatible with my analysis, and I'd like to find out why.

I just don't think it's that important of a thing to try to figure out
any more. I don't see what knowing the maximum correlation between
disturbance and perceptual signal buys you. Even knowing _exactly_
what that correlation is buys you nothing, at least not in terms of
using this information to determine the causal function relating
perception to output. My analysis shows that even if you can predict
variations in p based on d, you can't determine the causal path from p
to o by looking at the observed relationship between p and o because
this relationship confounds both the forward causal effect of p on o
with the feedback effect of o on p. Put simply, you can't learn about
the causal path from p to o in a closed loop system by looking at the
relationship between p and o. You can only figure out what this causal
path is like by using modeling and comparing the behavior of the model
to the behavior of the system under study.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[Martin Taylor 2009.05.25.17.25]

[From Rick Marken (2009.05.25.1115)]
Martin Taylor (2009.05.25.13.30)

Bill Powers (2009.05.25.1106 MDT)]
Rick Marken (2009.05.25.0915) --
Can you guys remind me what CR means?
In my analysis, it is the ratio of the amplitude of the disturbance to the
amplitude of the perceptual variable (standard deviation, range, or whatever
linear variable).

I used the ratio of disturbance variance to perceptual signal
variance. If your analysis is based on sds rather than variances then
the appropriate values of CR for my analysis are the square root of
the values I posted. So instead of:
1/CR .86 .16 .02 .004 .0002
r.di -.42 -.36 -.34 -.31 -.24
it should be
1/CR .92 .4 .14 .06 .014
r.di -.42 -.36 -.34 -.31 -.24
I hope Rick is using the same.
Yes, except for the square root part. But that didn't seem to make
much difference.
His results seem quite
incompatible with my analysis, and I'd like to find out why.
I just don't think it's that important of a thing to try to figure out
any more.

Any time an analysis fails, I think it is important to find out why.
Whether or not you think the end result is of any interest, a faulty
analysis in one context suggests that related analyses may be faulty in
a context that you do think would be of interest. So I think it is
important to figure out what is wrong.

Bill and you do not agree on what “i” represents. Bill says it is a
contribution to the disturbance, the value that is added to the output
and possibly to a noise signal to generate the sensory input (which is
numerically identical to the perceptual signal). You say it is the
perceptual signal itself. Would you mind confirming that Bill is
misinformed, and that “i” really IS the perceptual signal that is input
to a simple subtracting comparator, and that the resulting error signal
is the input to a pure integrator output function?

I don't see what knowing the maximum correlation between
disturbance and perceptual signal buys you.

The original reason I created the Web page in 1998 was to demonstrate
to people that good control enforces a decorrelation between
disturbance and perception, or, by extension, between perception and
output. It’s to demonstrate mathematically exactly what you have shown
experimentally – that the perceptual input does not drive the output.

More generically, the fact that you don’t see something is no guarantee
that it is hidden from everybody.

Even knowing _exactly_
what that correlation is buys you nothing, at least not in terms of
using this information to determine the causal function relating
perception to output.

Correct. One would not be trying to use it for that purpose, because it
wouldn’t be useful.

My analysis shows that even if you can predict
variations in p based on d, you can't determine the causal path from p
to o by looking at the observed relationship between p and o because
this relationship confounds both the forward causal effect of p on o
with the feedback effect of o on p.

Precisely. My analysis was intended to show exactly that. More to the
point, if you have a pure integrator output function and a simple
subtractor as a comparator function, the correlation between perception
and output ought to be uniformly zero, which would make it rather hard
to use that permanent result for any useful purpose.

Put simply, you can't learn about
the causal path from p to o in a closed loop system by looking at the
relationship between p and o. You can only figure out what this causal
path is like by using modeling and comparing the behavior of the model
to the behavior of the system under study.

So I would think you would be encouraging me to be sure the analysis is
correct, rather than saying it is uninteresting and unimportant.

Martin


[From Bill Powers (2009.05.26.0741 MDT)]

Martin Taylor 2009.05.25.17.25 –

MT: Bill and you do not agree on
what “i” represents. Bill says it is a
contribution to the disturbance, the value that is added to the output
and possibly to a noise signal to generate the sensory input (which is
numerically identical to the perceptual signal).

BP: We’re developing too many different versions of what “i”
is. My understanding, and the way I used it, was to give it exactly the
same meaning we give to “d”, the disturbing variable that
affects the input (controlled) quantity independently of the system’s
output and through a path other than the environment feedback
function.

It is certainly not the same thing as the perceptual signal as I used
it. Here is my summary of all the variables (use Courier
font):

            d2 (kennaway's)           r
            |                         |
            v                         v
 d -->Fd--> qi ----> Fi ----> p ----> C------>e ----> Fo ---->qo
(i)         ^                                                 |
            |                                                 |
             <---------- Ff ----------------------------------

I agree with you that this needs to be straightened out before we drop
it. It will come up again.

Best,

Bill P.


[From Rick Marken (2009.05.26.0845)]

Bill Powers (2009.05.26.0741 MDT)

Martin Taylor 2009.05.25.17.25 --

MT:Bill and you do not agree on what "i" represents...

BP: We're developing too many different versions of what "i" is. My
understanding, and the way I used it, was to give it exactly the same
meaning we give to "d", the disturbing variable that affects the input
(controlled) quantity independently of the system's output and through a
path other than the environment feedback function.

It is certainly not the same thing as the perceptual signal as I used it.
Here is my summary of all the variables (use Courier font):

            d2 (kennaway's)           r
            |                         |
            v                         v
 d -->Fd--> qi ----> Fi ----> p ----> C------>e ----> Fo ---->qo
(i)         ^                                                 |
            |                                                 |
             <---------- Ff ----------------------------------

I agree with you that this needs to be straightened out before we drop it.
It will come up again.

Well, seeing i as equivalent to d is certainly new to me. My i is
actually your qi. In the model runs, i = o + d, where my o is
equivalent to your qo. My i is also equivalent to p because Fi in my
model is the unity multiplier (p = i * 1).

I think this is getting unnecessarily confusing. The idea that
disturbance and input are equivalent in your mind strikes me as an
unnecessarily confusing way of putting it. But if you really thought
this (and were not just indulging Martin again) then why didn't you
(or Martin) note that the r.di should have been 1.0 for all values of
1/CR? I think Martin (and you) knew perfectly well what i was (the
equivalent of qi, the controlled variable in the tracking task) since
I believe I said more than a couple times that i is the distance
between cursor and target. I think Martin hopes that his analysis can
be saved by putting a different perceptual function, Fi, (other than
the identity multiplier) between i and p (or between qi and p, if you
want to be difficult about it). I can easily put any perceptual
function Martin suggests into my model and see if it helps.

By the way, why call the perceptual function Fi if you think of i as
the disturbance variable? Methinks thou dost bend over backwards too
much.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[Martin Taylor 2009.05.26.12.10]

[From Rick Marken (2009.05.26.0845)]

Bill Powers (2009.05.26.0741 MDT)
Martin Taylor 2009.05.25.17.25 --
MT:Bill and you do not agree on what "i" represents...

            d2 (kennaway's)           r
            |                         |
            v                         v
 d -->Fd--> qi ----> Fi ----> p ----> C------>e ----> Fo ---->qo
(i)         ^                                                 |
            |                                                 |
             <---------- Ff ----------------------------------
I agree with you that this needs to be straightened out before we drop it.
It will come up again.
My i is
actually your qi. In the model runs, i =o + d, where my o is
equivalent to your qo. My i is also equivalent to p because Fi in my
model is the unity multiplier (p = i * 1).

Good. That settles part of what I need in order to investigate why my
analysis fails.

The mystery to me is why r.io is not uniformly 0, if the reference is
constant and the output function is a pure integrator. Could you
confirm that both of those are correct (and correctly programmed – no
slowing factor, for example)?

I think Martin (and you) knew perfectly well what i was (the
equivalent of qi, the controlled variable in the tracking task) since
I believe I said more than a couple times that i is the distance
between cursor and target. I think Martin hopes that his analysis can
be saved by putting a different perceptual function, Fi, (other than
the identity multiplier) between i and p (or between qi and p, if you
want to be difficult about it). I can easily put any perceptual
function Martin suggests into my model and see if it helps.

I don’t want any different perceptual function, because the analysis
assumes exactly what you say it is – a unity multiplier.

I don’t want to “save” my analysis. If it’s wrong, I want to know why,
and to stop propagating misinformation to whoever comes across the Web
page in which it occurs. I’m hoping that the resident analysis expert
with the initials RK will look at my 11-year-old analysis and perhaps
find out what might be wrong with it. For reference, it’s at
http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html

It’s quite possible I made an elementary mistake; as I said on the Web
page, it seems too simple to be true, but I couldn’t find any problem
in 1998, and I can’t find any problem now, except that your model, which
seems to be the one used in the analysis, gives conflicting results.

I want to discover the correct analysis, whatever it may be. Perhaps
you could send your code, so that I can be sure that we are using the
same model, rather than having me keep asking for information.

Martin

[From Rick Marken (2009.05.26.1330)]

Martin Taylor (2009.05.26.12.10) --

Rick Marken (2009.05.26.0845)]

[RM] My i is
actually your qi. In the model runs, i =o + d, where my o is
equivalent to your qo. My i is also equivalent to p because Fi in my
model is the unity multiplier (p = i * 1).

[MT] Good. That settles part of what I need in order to investigate why my
analysis fails.

The mystery to me is why r.io is not uniformly 0, if the reference is
constant and the output function is a pure integrator. Could you confirm
that both of those are correct (and correctly programmed -- no slowing
factor, for example).

Sure, I'll do that. I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero. I had expected
r.io to be -1.0 when there was no noise (based only on intuition) but
found it to be generally low and positive, though never 0.0. That's
when I realized that r.io confounds the causal effect of i on o with
that of o on i. r.io can't tell you about the "forward" causal link
from i to o because of this confounding, which (I realized) would also
keep r.io from being -1.0. But I can imagine now that it could be 0.0,
though it never comes out that way in the noiseless model.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[Martin Taylor 2009.05.26.18.05]

[From Rick Marken (2009.05.26.1330)]

   I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero.

o is the integral of e, which is r-p. p is your i. So o = -integral(p) + constant. A variable is uncorrelated with its integral, so o should be uncorrelated with p if the output function is a pure integral.

Martin


[From Bill Powers (2009.05.26.1616 MDT)]

Rick Marken (2009.05.26.1330) --

Martin Taylor (2009.05.26.12.10) --

RM: Sure, I'll do that. I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero.

Martin is talking about r the reference signal, not r the correlation.

What's needed here is an organized development in which you formally define every signal, function, and variable and get the notation straightened out so one symbol always means one thing.

Martin is also talking about a system with an integrating output function while you seem to be modeling it as having an output function that is a proportional gain, k. David Goldstein noticed that and asked me about it in a phone call.

I know that all the formal stages of defining and declaring variables are a drag and a bore, but when it's not done, confusion very quickly mounts and takes over. There's a reason for all that tedious foreplay.

Best,

Bill P.


[From Rick Marken (2009.05.26.1700)]

Martin Taylor 2009.05.26.18.05]

[From Rick Marken (2009.05.26.1330)]

I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero.

o is the integral of e, which is r-p. p is your i. So o = -integral(p) +
constant. A variable is uncorrelated with its integral, so o should be
uncorrelated with p if the output function is a pure integral.

Oy. Martin, you forgot that p = o + d. Your "derivation" neglects
the fact that p and o exist in a closed loop relationship. So you have
two simultaneous equations to work with:

1) o = -integral(p) + constant

and the one you forgot:

2) p = f(o+d)

It makes a difference. Trust me;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[From Rick Marken (2009.05.26.1730)]

Bill Powers (2009.05.26.1616 MDT)

RM: Sure, I'll do that. I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero.

Martin is talking about r the reference signal, not r the correlation.

I think you can see from Martin's last post that this is not the case.
You can tell from where he said: "A variable is uncorrelated with its
integral, so o should be uncorrelated with p if the output function is
a pure integral." It's the "uncorrelated" part that gives it away.
Besides, since we've gone through several e-mail exchanges where r.io
was used to refer to a correlation, it would be beyond weird to have
it suddenly used to refer to a reference signal. And a reference
signal for what? The relationship between i and o? You can do better
than that, Bill.

What's needed here is an organized development in which you formally define
every signal, function, and variable and get the notation straightened out
so one symbol always means one thing.

I think that's a fine idea. Here's the code for my model:

Err = -p

o = o + (Gain * Err - Damping * o) * dt

p = d + o

Err is r-p so the code has r fixed at 0. The output function is an
integral which integrates (accumulates, really) a small fraction, dt,
of the current error-determined output (Gain * Err) into the output
variable o on each iteration of the model. The integrator leaks if
Damping > 0. The data I presented comes from a model that had no leak
(Damping = 0). The leak does make a difference -- with the leak, r.io
can go negative (but then r.di goes positive). Finally, the current
value of o is added to the current values of a filtered noise
disturbance variable, d, to produce the perceptual input, p, that
determines the increment to o on the next iteration of the model.
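
For anyone who wants to replay this outside the spreadsheet, here is a
rough Python transcription of the same loop. The Gain, Damping, dt, run
length, and the smoothed random disturbance are all assumed values (the
actual ones live in the attached spreadsheet), so the exact correlations
will differ, but the loop structure is the one described above:

import numpy as np

def run_model(gain=50.0, damping=0.0, dt=0.01, n=6000, seed=1):
    # Rough sketch of the model described above: r fixed at 0, an
    # (optionally leaky) integrating output function, i = d + o, p = i.
    # Parameter values and the filtered-noise disturbance are assumptions.
    rng = np.random.default_rng(seed)
    d = np.convolve(rng.normal(size=n), np.ones(200) / 200, mode="same")
    o, p = 0.0, 0.0
    i_hist, o_hist = np.zeros(n), np.zeros(n)
    for t in range(n):
        err = -p                                 # Err = r - p with r = 0
        o = o + (gain * err - damping * o) * dt  # (leaky) integrating output
        i_hist[t] = d[t] + o                     # input quantity i (Bill's qi)
        p = i_hist[t]                            # unity perceptual function
        o_hist[t] = o
    return d, i_hist, o_hist

d, i, o = run_model()   # pure integrator: Damping = 0
print("r.io", round(np.corrcoef(i, o)[0, 1], 3))
print("r.do", round(np.corrcoef(d, o)[0, 1], 3))
print("r.di", round(np.corrcoef(d, i)[0, 1], 3))

Setting damping > 0 gives the leaky case; the resulting correlations
move around with gain, damping, dt, and the disturbance bandwidth, which
is part of why different runs of what looks like the same model can give
quite different r values.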

Martin is also talking about a system with an integrating output function
while you seem to be modeling it as having an output function that is a
proportional gain, k. David Goldstein noticed that and asked me about it in
a phone call.

I said a couple of times that the output function was a non-leaky
integrator; I can also switch it to be a leaky integrator; and, as I
noted above, I do get somewhat different results with the leak. I've
also tried a proportional output and that produces results (in terms
of correlations) that are similar to those you get with the integral
controller.

I know that all the formal stages of defining and declaring variables are a
drag and a bore, but when it's not done, confusion very quickly mounts and
takes over. There's a reason for all that tedious foreplay.

I think the confusion only results from your particular approach to
foreplay, which seems to involve dumping on me in order to win the
love of others more coy. It's OK; I've used that ploy myself in the
old days; sometimes it works! ;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[From Bill Powers (2009.05.26.1948 MDT)]

From Rick Marken (2009.05.26.1730) --

I think you can see from Martin's last post that this is not the case.
You can tell from where he said: "A variable is uncorrelated with its
integral, so o should be uncorrelated with p if the output function is
a pure integral." It's the "uncorrelated" part that gives it away.

OK, I probably misread it. Anyway, your code should settle the problem of what the model is. If Martin will post his, that should do it for that issue.

The leaky integrator can also be viewed as a proportional amplifier with a time constant. It will have a steady-state gain that can be represented as k for low-frequency disturbances. The higher the Damping coefficient, the lower the steady-state proportional gain will be, and the wider its bandwidth.
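
Written out from the update rule in Rick's code (o = o + (Gain * Err -
Damping * o) * dt), the steady-state gain and time constant follow
directly; this small derivation is added here for clarity and is not in
the original post:

% Continuous-time form of the leaky integrator, with G = Gain, D = Damping:
\frac{do}{dt} = G\,e - D\,o
% At steady state do/dt = 0, so
o_{ss} = \frac{G}{D}\,e
\quad\Rightarrow\quad
k = \frac{G}{D}, \qquad \tau = \frac{1}{D}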

Besides, since we've gone through several e-mail exchanges where r.io
was used to refer to a correlation, it would be beyond weird to have
it suddenly used to refer to a reference signal. And a reference
signal for what? The relationship between i and o? You can do better
than that, Bill.

No, the reference signal for p, which is the target-cursor difference in positions. It's normally set to zero for tracking. That's what I thought Martin was talking about.

> What's needed here is an organized development in which you formally define
> every signal, function, and variable and get the notation straightened out
> so one symbol always means one thing.

I think that's a fine idea. Here the code for my model:

Err = -p

o = o + (Gain * Err - Damping * o) * dt

p = d + o

I'm a little confused because "i" doesn't appear in these program steps. Which variable is it?

Err is r-p so the code has r fixed at 0.

That's what I thought Martin was referring to when he used the symbol r (He didn't say io.r).

> I know that all the formal stages of defining and declaring variables are a

> drag and a bore, but when it's not done, confusion very quickly mounts and
> takes over. There's a reason for all that tedious foreplay.

I think the confusion only results from your particular approach to
foreplay, which seems to involve dumping on me in order to win the
love of others more coy.

The post was addressed to both Martin and you. You're both getting pretty sloppy about notation, so I'm an equal-opportunity dumper.

Best,

Bill P.


[From Rick Marken (2009.05.26.1927)]

Bill Powers (2009.05.26.1948 MDT)--

your code should settle the problem of what the model is. If Martin
will post his, that should do it for that issue.

Let's hope so!

Rick Marken (2009.05.26.1730) --

Err = -p

o = o + (Gain * Err - Damping * o) * dt

p = d + o

I'm a little confused because "i" doesn't appear in these program steps.
Which variable is it?

Sorry. I just didn't think it was necessary to put it into the code.
Here is the code with i (your qi) in it:

Err = -p
o = o + (Gain * Err - Damping * o) * dt
i = d + o
p = i

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[Martin Taylor 2009.05.26.23.11]

[From Rick Marken (2009.05.26.1700)]
Martin Taylor 2009.05.26.18.05]


[From Rick Marken (2009.05.26.1330)]
  I'd also appreciate it if you would show me the
derivation that leads you to expect r.io to be zero.
o is the integral of e, which is r-p. p is your i. So o = -integral(p) +
constant. A variable is uncorrelated with its integral, so o should be
uncorrelated with p if the output function is a pure integral.

Oy. Martin, you forgot that p = o + d. Your "derivation" neglects
the fact that p and o exist in a closed loop relationship. So you have
two simultaneous equations to work with:
1) o = -integral(p) + constant
and the one you forgot:
2) p = f(o+d)
It makes a difference. Trust me;-)

[MT] It doesn’t make a difference, in that equation 1 still holds,
making (so far as I can see) o uniformly uncorrelated with p. To the
extent that there’s a lag in computing the integral, the correlation
won’t be exactly zero, but if the compute time step is short compared
to the period of the highest frequency of the disturbance, it shouldn’t
be too far off.

You can see the problem with lag by considering a waveform p = cos(wt),
for which the integral is sin(wt), which is orthogonal to p. But if
there's a lag in computing the integral, then what is integrated is
p(t + delta_t) = cos(wt + phi), for which the integral is sin(wt + phi).
Sin(wt + phi) is not orthogonal to cos(wt) except for phi = n*pi, and if
phi is pi/2, the lagged integral would be identical to the original
waveform, correlation 1.0!

It’s the same issue as gives rise to positive feedback oscillations if
the loop gain is too high for the loop delay at high frequencies.
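
Martin's sinusoid example is easy to check numerically: over whole
periods the correlation between cos(wt) and sin(wt + phi) comes out to
sin(phi), which is 0 when phi = n*pi and 1 when phi = pi/2. A small
Python check, added for illustration and not part of the 1998 analysis:

import numpy as np

t = np.linspace(0.0, 2 * np.pi * 10, 10000, endpoint=False)  # ten whole periods, w = 1
p = np.cos(t)
for phi in (0.0, np.pi / 4, np.pi / 2):
    lagged_integral = np.sin(t + phi)   # integral of the lagged cosine
    print(round(phi, 2), round(np.corrcoef(p, lagged_integral)[0, 1], 3))
# prints roughly: 0.0 0.0, then 0.79 0.707, then 1.57 1.0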

[From Rick Marken (2009.05.26.1927)]

 Here is the code with i (your qi) in it:
Err = -p
o = o + (Gain * Err - Damping * o) * dt
i = d + o
p = i
[MT] That isn't actually code. But it should help. I assume that when you say you tested a pure integrator, "Damping" was zero?

Could you attach or copy the actual program? Also, it would be nice to have either a sample of the disturbance signal or an algorithm to construct it (which would be preferable). The point is that loop delays (which are inevitable in discrete code) interact with the spectrum of the disturbance in affecting the correlation. As I said above, if the loop delay is very short compared to the period of the highest frequency component of the disturbance, and Damping = 0, the conditions of my analysis should be fairly close to being met.

[From Bill Powers (2009.05.26.1616 MDT)]

Martin is talking about r the reference signal, not r the correlation.

[MT] I assumed Rick's notation r.di meant correlation between d and i, so I just went with the flow. So far as I was concerned, the only symbol that I felt unsure of was "i", and Rick cleared that up.

Martin


[From Rick Marken (2009.05.26.2120)]

Martin Taylor (2009.05.26.23.11)--

Rick Marken (2009.05.26.1700)

Oy. Martin, you forgot that p = o + d.

It makes a difference. Trust me;-)

[MT] It doesn't make a difference,

Does too! ;-)

in that equation 1 still holds, making
(so far as I can see) o uniformly uncorrelated with p.

But equation 2 still holds, too. At the same time as equation 1 holds.

You can see the problem with lag by considering a waveform p = cos(wt), for
which the integral is sin(wt), which is orthogonal to p. But if there's a
lag in computing the integral, then what is integrated is p(t + delta_t) =
cos(wt + phi), for which the integral is sin(wt+phi). Sin(wt+phi) is not
orthogonal to cos(wt) except for phi = n*pi, and if phi is pi/2, the lagged
integral would be identical to the original waveform, correlation 1.0!

All this is true for an open loop system. It's a closed loop system, I'm afraid!

Rick Marken (2009.05.26.1927)]

Here is the code with i (your qi) in it:

Err = -p
o = o + (Gain * Err - Damping * o) * dt
i = d + o
p = i

[MT]That isn't actually code.

It is so;-) It's Visual Basic code.

But it should help. I assume that when you say
you tested a pure integrator, "Damping" was zero?

Yep.

Could you attach or copy the actual program? Also, it would be nice to have
either a sample of the disturbance signal or an algorithm to construct it
(which would be preferable).

You got it. I've attached the whole spreadsheet analysis. The code is
in the "Run Model" button; just right click the button, select "Assign
Macro" and then select "Edit" from the dialog box. The disturbance is
in the first column of the spreadsheet. The spreadsheet calculates all
the correlations as soon as the model completes its run. I'm sure you
could improve this spreadsheet; for example, the disturbance is fixed;
you could have the spreadsheet compute a different disturbance before
every model run (which is run, of course, by pressing the "Run Model"
button). Maybe you can even use it to see why the correlation between
i and o is not zero, or even close to zero. It's the confounding of
i-->o with o-->i, of course!

Best

Rick

TestI-O.xls (171 KB)

···

--
Richard S. Marken PhD
rsmarken@gmail.com


[From Rick Marken (2009.05.26.2140)]

It turns out that if you increase the gain of the system (change the
Gain number from what it is to 90, say) the i-o correlation _does_ go
to 0.0. So Martin is right in his guess but, I think, for the wrong reasons. It's
not because of a phase lag due to integration; you can test this by
looking at the lagged correlation between i and o; if it were a phase
lag phenomenon then the lagged correlations should increase as the lag
approached the phase difference. But, in fact, the correlations never
get very far from 0 as lag increases. I think the 0 correlation
results from the fact that, with high gain, the output variations are
completely canceling the perceptual variations that cause them;
the confounding causal relationships cancel each other out so neither
dominates in the i-o correlation.

This is kind of a cool observation because it suggests that a second,
unknown disturbance is not needed in order to observe the lack of
correlation between i and o in a closed loop system; it just has to be
a high gain closed loop control system.
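
A gain sweep on the same sort of loop shows the trend directly; this is
a compact Python sketch with assumed parameter values and disturbance,
not the spreadsheet itself, so the exact numbers will differ:

import numpy as np

def r_io(gain, dt=0.01, n=6000, seed=1):
    # Same loop as in the earlier posts: r = 0, pure integrator, i = d + o, p = i.
    rng = np.random.default_rng(seed)
    d = np.convolve(rng.normal(size=n), np.ones(200) / 200, mode="same")
    o, p = 0.0, 0.0
    i_hist, o_hist = np.zeros(n), np.zeros(n)
    for t in range(n):
        o = o + gain * (-p) * dt
        i_hist[t] = d[t] + o
        p = i_hist[t]
        o_hist[t] = o
    return np.corrcoef(i_hist, o_hist)[0, 1]

for g in (5, 20, 90):
    # In the spreadsheet runs, r.io headed toward 0 as the gain went up.
    print(g, round(r_io(g), 3))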

Best

Rick

···


--
Richard S. Marken PhD
rsmarken@gmail.com


[From Bruce Abbott (2009.05.27.0850 EDT)]

Rick Marken (2009.05.26.2140) --

If I may jump in here . . .

RM: It turns out that if you increase the gain of the system (change the
Gain number from what it is to 90, say) the i-o correlation _does_ go to
0.0. So Martin is right in his guess but, I think, for the wrong reasons. It's not
because of a phase lag due to integration; you can test this by looking at
the lagged correlation between i and o; if it were a phase lag phenomenon
then the lagged correlations should increase as the lag approached the phase
difference. But, in fact, the correlations never get very far from 0 as lag
increases. I think the 0 correlation results from the fact that, with high
gain, the output variations are completely canceling the perceptual
variations that cause them; the confounding causal relationships cancel
each other out so neither dominates in the i-o correlation.

If the output variations completely cancelled the input variations, the
variance in the input would be zero and the correlation undefined. But real
systems do not perfectly cancel the disturbance; the remainder is the
portion of variation in the disturbance that is uncorrelated (or nearly so)
with the output.

RM: This is kind of a cool observation because it suggests that a second,
unknown disturbance is not needed in order to observe the lack of
correlation between i and o in a closed loop system; it just has to be a
high gain closed loop control system.

Kennaway added the second, small unknown random disturbance to model that
portion of a disturbance that is unknown to the experimenter (in a real
system being measured, as opposed to a simulation where the disturbance is
known perfectly). It lowers the correlations between disturbance and output,
and between disturbance and input, where "disturbance" is the known portion.
The control system "sees" the total disturbance and behaves accordingly,
producing the usual low correlation between input and output. So yes, adding
the second, unknown disturbance is not required in order to demonstrate a
low correlation between input and output. Raising the gain (up to a point)
makes the system more responsive to error, so the output more closely
mirrors the disturbance function, increasing the portion of variation in
output that is correlated with the disturbance and decreasing the
correlation between input and output.
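
One way to see the "known portion" point is to idealize good control as
o = -(d + d2), with d the known disturbance and d2 Kennaway's unknown
one; the correlation between the known d and the output is then capped
by the relative size of d2. A small Python illustration, with an
arbitrary 0.3 amplitude assumed for d2:

import numpy as np

rng = np.random.default_rng(0)
d = rng.normal(scale=1.0, size=100000)    # known disturbance
d2 = rng.normal(scale=0.3, size=100000)   # unknown (Kennaway-style) disturbance
o = -(d + d2)                             # idealized good control: output cancels both

print(round(np.corrcoef(d, o)[0, 1], 3))     # about -0.958
print(round(-1 / np.sqrt(1 + 0.3 ** 2), 3))  # -sd(d)/sqrt(var(d)+var(d2)) = -0.958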

Bruce A.
