PCT-Specific Methodology

[From Bruce Nevin (2006.12.20 09:30 EST)]

Bill Powers (2006.12.16.1515 MST) –

I prefer mathematical derivations that reflect the physical situation properly. What you say is algebraically true. But it is not physically true. If I inject a perceptual signal into a control system, or change the reference signal or G, there will be no effect on the value of d as implied by your equation. If I change d, r, or G, there will be an effect on p as indicated by my equation. I have now said all of that twice, which makes it true.

This argument seems weird to me, and quite unlike you, Bill. What the equation implies is that if p changes (and r & G are unchanged) it must be because there was a change in d – not that a change in p causes a change in d due to the observer somehow extrasystemically injecting a signal into p. The equation says nothing about causality or even temporal antecedence. Neither equation says that the single term on the left is determined by the expression on the right; if you interpret “determine” as “cause”, they simply assert a correspondence (equality).

/Bruce
···

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.UIUC.EDU] On Behalf Of Bill Powers
Sent: Saturday, December 16, 2006 6:21 PM
To: CSGNET@LISTSERV.UIUC.EDU
Subject: Re.: PCT-Specific Methodology

Martin Taylor 2006.12.16.1514.

Of course it uses feedback effects. It's the usual derivation around the control loop, the same derivation you used to contradict mine. We both arrive at p = Gr - Gp + d, which we then develop in two different ways. I simply move the "G" terms to the other side of the equal sign, giving d = p + Gp - Gr, whereas you combine the "p" terms, to give

        G           1
p  =  ----- r  +  ----- d
      1 + G       1 + G

They are exactly the same thing, aren’t they?

No. Mine implies that p is determined by G, r, and d, which is physically true. Yours implies that d is determined by p, G, and r, which is physically false. If you vary d, r, or G, p will change. But if you vary p, G or r, d will not change, even if your equation says it will…

Your derivation is algebraically correct, but false as a description of physical relationships. Algebra doesn’t know anything about dependent and independent variables.
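The distinction can be seen in a small simulation; the gain, relaxation constant, and step values below are my own illustrative choices, not from the thread. d and r are set from outside the loop; p settles to the value the dependent-variable solution predicts, and nothing inside the loop ever alters d.

```python
# A minimal sketch of the static control equation p = G(r - p) + d,
# simulated with an output that relaxes toward G*e. All parameter
# values are illustrative assumptions, not from the original post.

def run_loop(r, d, G=100.0, tau=0.1, dt=0.001, steps=20000):
    """Return the settled perceptual signal p for given r and d."""
    o = 0.0
    p = 0.0
    for _ in range(steps):
        p = o + d                     # perception = output effect + disturbance
        e = r - p                     # error
        o += (G * e - o) * dt / tau   # output relaxes toward G*e
    return p

# d and r are independent: we set them; the loop only determines p.
p1 = run_loop(r=1.0, d=0.0)   # ~ G/(1+G) * 1.0       = 0.990...
p2 = run_loop(r=1.0, d=0.5)   # ~ (G*1.0 + 0.5)/(1+G) = 0.995...
print(p1, p2)
```

Varying d or r moves p exactly as p = [G/(1+G)]r + d/(1+G) predicts; there is no statement anywhere in the loop that could move d.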

  So the perceptual signal is a dependent variable which depends on just two independent variables, r and d.
Exactly. You like mathematical derivations... so, given the equation you arrive at (as one does with the usual derivation that I used) d is equally a function of p, G, and r. Equally, r is a function of d, G, and p. You know any three of them and you can derive the fourth.

I prefer mathematical derivations that reflect the physical situation properly. What you say is algebraically true. But it is not physically true. If I inject a perceptual signal into a control system, or change the reference signal or G, there will be no effect on the value of d as implied by your equation. If I change d, r, or G, there will be an effect on p as indicated by my equation. I have now said all of that twice, which makes it true.

  Note that G/(1+G) approaches 1 as G becomes much greater than 1. The 90-degree phase shift which you say reduces correlations to zero is greatly modified by this expression (see below for the case in which G is an integrator).
No it isn't. The ONLY reason for the 90 degree phase shift is the assumption that the output function is a perfect integrator.

I was pointing out that the phase shift through the whole control system, or the one implied by the solved equations, is different from the phase shift through the output function. You apparently read my remark to mean that the phase shift in the output function itself is modified, which is not what I meant to say. Or what I meant to mean.

  Even with the perfect integrator, the output varies so it remains about equal and opposite to the disturbance, with a phase shift that varies from zero at very low frequencies to 90 degrees at very high frequencies where the amplitude response approaches zero.
The phase shift in question is the phase shift between the error signal and the output signal. A pure integrator gives a 90 degree phase shift at ALL frequencies. The integral of a cosine is the corresponding sine, and vice-versa.

Yes, but the next cited part explains what I meant:

  The negative feedback makes the frequency response of the whole system different from the frequency response of the integrating output function.

The frequency response of the integrating output function is of the form 1/f, with a 90-degree phase shift over the whole range of frequencies. The frequency response of the whole system, as determined by varying the frequency of a sine-wave disturbance or reference signal and observing the output quantity, is not of that form.

  For one thing, if the time constant of a leaky-integrator output function is T seconds, the time constant of a response of the whole system to a disturbance is T/(1+G), where G is the loop gain.
I have a note about the leaky integrator and its effect on the frequency effects on the correlation near the end. A leaky integrator is not a perfect integrator.

The leakiness is not the point. Even with a perfect integrator, there will be a negative correlation between d and o, higher at lower frequencies but present for any real waveform. What I said would have been clearer if I had just deleted the side-remark about leaky integrators.
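The T/(1+G) claim quoted above can be checked numerically. A minimal sketch, assuming the leaky-integrator form T*do/dt = G*e - o; the values G = 9, T = 1 s, and the Euler step size are my own choices:

```python
# Check that a leaky-integrator output function with time constant T
# gives a whole-system time constant of T/(1+G). G = 9 and T = 1 s
# are assumed illustrative values, not from the original post.
import math

def step_response_t63(G=9.0, T=1.0, d=1.0, dt=1e-4, t_end=2.0):
    """Time for the output to reach 63.2% of its final value after a
    step disturbance, with r = 0, p = o + d, and T*do/dt = G*e - o."""
    o, t = 0.0, 0.0
    o_final = -G * d / (1.0 + G)               # steady-state output
    target = o_final * (1.0 - math.exp(-1))    # 63.2% of the way there
    while t < t_end:
        e = -(o + d)                  # error with r = 0
        o += (G * e - o) * dt / T     # leaky integrator
        t += dt
        if abs(o) >= abs(target):
            return t
    return None

t63 = step_response_t63()
print(t63)   # close to T/(1+G) = 0.1 s, not the output function's T = 1 s
```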

  Separating the real and imaginary parts, we have

                       G^2            Gw
     G/(G + jw)  =  ---------  -  j ---------
                    G^2 + w^2       G^2 + w^2

  From this we can see that as the integrating factor G increases, and as the frequency decreases (remember that w is 2*pi*frequency), the real part of the factor G/(1+G) approaches 1. As G increases and w *increases*, the imaginary (90-degree phase shifted) part approaches zero.
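The separation into real and imaginary parts can be spot-checked with ordinary complex arithmetic; the sample values of G and w below are arbitrary:

```python
# Verify G/(G + jw) = G^2/(G^2 + w^2) - j*G*w/(G^2 + w^2)
# for arbitrary sample values of G and w.
G, w = 5.0, 3.0
lhs = G / (G + 1j * w)
rhs = G**2 / (G**2 + w**2) - 1j * G * w / (G**2 + w**2)
print(abs(lhs - rhs))   # 0.0 (to rounding): the separation is exact

# The limits described in the text, with G made very large:
G_big = 1e6
print(abs((G_big / (G_big + 1j * w)).real - 1.0) < 1e-3)   # real part -> 1
print(abs((G_big / (G_big + 1j * w)).imag) < 1e-3)         # imaginary -> 0
```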
For the loop as a whole, yes. I think my animated diagram illustrates this. Actually, you don't need to go into that kind of complex arithmetic analysis. All you need is the knowledge that the Laplace transform is linear, and you can operate on the transforms as though they were simple scalar variables.

Yes, but their meaning is not all that transparent. To me, anyway. They may look like scalar variables, but they aren’t.

  The correlation of the error signal with the output of the integrator will always be zero. However, a correlation lagged 90 degrees will be perfect,
You can't "lag" the correlation 90 degrees, except at one frequency. The correlation is time-domain, and you can only lag it by delta t. There will be a frequency (an infinite set of them, actually) for which a given delta t gives a lagged cross-correlation of unity, but that's a complete red-herring in this discussion.

If you calculate a correlation between sin(wt) and cos(w(t - tau)), where tau is set to correspond to a phase shift near the low end of the observed range of frequencies, there will be a nonzero correlation between those two functions, because the low-frequency amplitudes are greater than the high-frequency amplitudes. So it’s only a partial red herring – say a pale pink herring. I did forget that the correlation for a given lag will not be perfect for other frequencies.
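This is easy to check numerically: at a single frequency, a quarter-period lag turns a zero correlation into a perfect one. A sketch, where the 1 Hz frequency and 1 kHz sample rate are assumed values of my own:

```python
# Correlation between sin(wt) and cos(wt) over whole periods is zero,
# but a quarter-period lag makes it perfect at that one frequency.
import math

def corr(xs, ys):
    """Ordinary Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

w = 2 * math.pi * 1.0                       # 1 Hz
ts = [i / 1000.0 for i in range(10000)]     # 10 s sampled at 1 kHz
x = [math.sin(w * t) for t in ts]
y = [math.cos(w * t) for t in ts]           # 90 degrees out of phase
tau = 0.25                                  # quarter period at 1 Hz
y_lag = [math.cos(w * (t - tau)) for t in ts]

print(round(corr(x, y), 3))      # ~0.0: unlagged correlation vanishes
print(round(corr(x, y_lag), 3))  # ~1.0: the lag restores it at 1 Hz
```

At any other frequency the same tau corresponds to a lag different from a quarter period, so the lagged correlation would be less than perfect – the "pale pink herring" point.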

I hope mine doesn't, too. I think you have quite misunderstood it. I could be quite wrong, but when I went through it again this morning, I didn't find a mistake. Your comments haven't (yet) helped me to find a mistake.

Perhaps this comment will clarify what I do and don’t consider to be mistakes.

Best,

Bill P.

[From Bill Powers (2006.12.20.0830 MST)]

Bruce Nevin
(2006.12.20 09:30 EST)

Bill Powers (2006.12.16.1515 MST)

I prefer mathematical derivations that reflect the physical
situation properly.

This argument
seems weird to me, and quite unlike you, Bill.
I appreciate your saying that being weird is quite unlike me.
Thousands would disagree.
What the equation implies is that if p changes (and r & G are unchanged) it must be because there was a change in d – not that a change in p causes a change in d due to the observer somehow extrasystemically injecting a signal into p. The equation says nothing about causality or even temporal antecedence.
That last sentence is true, and it’s a limitation of algebra. Of
course it’s not a mathematical limitation, but it does result in
misleading implications when analyzing a physical situation.
The notion of independent and dependent variables arises when analyzing a
physical situation mathematically. You can’t tell from the algebra which
variables are independent – that is, which can actually be varied freely
by an outside agency. You simply have to know that from examining the
environment, or postulate it if you’re designing a model. This commonly
arises when doing word problems.
For a somewhat different case, consider this one. A gas tank holds 10
gallons. The car gets 20 miles per gallon. If you fill the tank, how much
gasoline will be in it (g1) after you drive 100 miles?
miles per gallon = e
miles driven = m
tank capacity = C
gallons in tank = g1
g1 = C - m/e
Let’s compute how many gallons will be in the tank (g2) after the car is
now driven in reverse, still pointed the same way, for 100 miles, back to
the starting point. In the same coordinate system as before, the distance
driven is -m, so
g2 = C - m/e + m/e = C
The tank will be full again.
This is the sort of thing that happens when you let mathematics push you
around and don’t pay attention to what you know about the
situation.
That example isn’t quite the same as the one we saw in solving the
control equations. Here’s one that is closer.
Consider a see-saw with total length L and an adjustable fulcrum placed
at distance x from the end we label 1. If we push end 1 a distance d1
downward, and x = L/2, the other end, end 2, will rise a distance d2. The
general relationship is
            L - x
d2 = d1 * ---------
              x

We can solve this equation for x:

       d1 * L
x = -----------
     d1 + d2

So now we can ask, given values for d1, d2, and L, how far and which way will the fulcrum move if we push end 1 (d1) down one inch?
The right answer is that it will not move at all. Pushing up or down on
the ends of a seesaw does not change the location of the fulcrum. But if
you just plug the numbers into the equation blindly, letting the algebra
think for you, you can come up with an answer to the question. The
answer, unfortunately, does not apply to the original physical situation,
but to a series of physical situations set up one after the other with
the fulcrum being adjusted a little each time. Explanation follows
below.
I was taught to solve word problems in part by distinguishing between
independent and dependent variables. An independent variable is one that
can stay the same while any of the other variables changes, without
changing the physical situation. So an independent variable can be
assumed to have any value at all that is physically possible. But a
dependent variable can’t be “assumed” to have any value at all
– its value is completely determined by the form and constants of the
relationship and the values of all the independent variables in the
relationship.
When you solve an equation for a dependent variable, you’re showing how
the values of other variables and constants in the equation affect
it. But the equation doesn’t tell you which variables are dependent and
which are independent. You have to know that in some other way.
If you accidentally solve an equation for an independent variable, there
is nothing in the mathematics to tell you that you’re about to change the
physical situation. The physical situation is changed the moment you
change one of the other variables without changing all the others by the
necessary amounts. Consider our seesaw, with the equations solved for the
position of the fulcrum:
       d1 * L
x = -----------
     d1 + d2
Suppose we change just d2. Algebraically, that’s fine – we can calculate
the new value of x. But we have changed something else whether we
intended to or not. Suppose we say d2(new) = d2 + e. This gives us
        d1 * L
x = ---------------
     d1 + d2 + e

which reduces to

            L - x
d2 = d1 * --------- - e
              x

Compare this to the equation we started with:

            L - x
d2 = d1 * ---------
              x
Clearly, we have altered the physical situation. To restore the original
equation, we would have to change L to compensate for e, but then we
would have altered the physical situation anyway. There’s no way to get
around it…
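The seesaw trap can be put in numbers; the values L = 10, d1 = 1, and a centered fulcrum are arbitrary illustrative choices:

```python
# The seesaw trap in numbers: "solving for x" and plugging in a changed
# d2 silently describes a different seesaw. Values are illustrative.
L = 10.0
x = 5.0                      # fulcrum at the middle
d1 = 1.0
d2 = d1 * (L - x) / x        # = 1.0: the ends move equally

# Blindly plug a changed d2 into the solved-for-x equation:
d2_new = d2 + 1.0
x_new = d1 * L / (d1 + d2_new)
print(x_new)                 # ~3.33, yet a real fulcrum never moves when
                             # an end is pushed: this x belongs to a
                             # different seesaw, not the one we started with
```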
That’s what happens when you change a dependent variable to see its
supposed effect on an independent variable. It looks as if you get an
answer, but the answer is a lie – if you think it’s telling you
something about the situation you originally modeled. You can’t actually
change a dependent variable in a system without changing all the other
dependent variables at the same time – not if you want to be talking
about the system you started with.
In the control-system equation we had two solutions, one for p (a
dependent variable) and another for d (an independent variable). In the
latter solution, we have
d = p + Gp - Gr.
If we change p to see its effect on d, we will necessarily have to change
r at the same time by exactly the right amount. If we don’t, and we
insist that only p has changed, we will find that the new set of system
equations is not the same as the set we started with. Without knowing it,
we have changed the premises of the model.
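The required compensation can be written out: from d = p + Gp - Gr, a change of dp in p leaves d fixed only if r changes by dp*(1+G)/G. A numeric check, with arbitrary sample values of my own:

```python
# If p "changes" while d (an independent variable) must stay fixed,
# r has to change by exactly (1+G)/G times as much. Values are arbitrary.
G = 100.0
p, r = 0.99, 1.0
d = p + G * p - G * r          # disturbance implied by this (p, r) pair

dp = 0.01                      # suppose p changes by this much
dr = dp * (1 + G) / G          # the compensating change in r
d_check = (p + dp) + G * (p + dp) - G * (r + dr)

print(abs(d - d_check))        # ~0: d is unchanged only because r moved too
```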
The only possible way to use a solution for an independent variable is to
deduce what it must have been from observations of the other
variables. But the values of each of the other variables must be
known, including any other independent variables, to get the right
answer. There is no valid way to compute the way the independent variable
depends on any other variable, because it doesn’t depend on any other
variable in the system. That’s what “independent”
means.

Best,

Bill P.

···

[From Bruce Nevin (2006.12.20 15:35 EST)]

Bill Powers (2006.12.20.0830 MST) –

Yes, applied math requires that we interpret mathematical expressions, and a wrong interpretation can lead to absurd inferences. But that’s not a complaint about algebra, it’s a complaint about its interpretation. Unless your objection is that Martin said that a change in p causes a change in d, the complaint seems beside the point. Or unless you are saying that Martin’s interpretation of the equation with d on the left was a misinterpretation that fundamentally changes the imputed model. That’s what I thought was weird.

There’s an assumption in your reply that when you put a single term on the left and a complex expression on the right of an equation, the single term must be the dependent variable. This is true, if your purpose is to determine the value that the dependent variable takes as a consequence of varying the independent variables. But in that purpose you have already stepped from algebra to its interpretation, from variables in an equation to measured values linked by physical causation. Saying “When you solve an equation for a dependent value … If you accidentally solve an equation for an independent variable …” is firmly in the realm of interpretation. I know you agree with this distinction, because your very complaint is that algebra cannot tell you whether a variable is dependent on or independent of other variables. In an equation, their values are merely correlated over the equals sign. Is it possible for an interpretation to concern only correlation?

You say “The only possible way to use a solution for an independent variable is to deduce what it must have been from observations of the other variables. But the values of each of the the other variables must be known, including any other independent variables, to get the right answer.” But the same assertion can be made of a dependent variable. To get a specific value for any single term on the left of an equation, the values of all variables in the expression on the right side must be known. The only difference is that, in the realm of interpretation, d is a measurable property and r~=p can only be inferred. We can measure qi; we can assume that the observer’s perceived experience of what has been deduced to be the controlled variable corresponds to p; theory tells us that p is the rate of firing of a neuron (or neurons) between an input function and a comparator; but we are not able to measure the perception of another. All these things are true and of critical importance in the domain of interpretation.

I haven’t been able to devote enough attention to the discussion to know Martin’s point, but I kind of doubt that he was asserting that the value of d is determined by the other values in the control loop. And as I was trying to catch up, this caught me up short.

/Bruce

Re: PCT-Specific Methodology
[Martin Taylor 2006.12.20.17.58]

[From Bill Powers (2006.12.20.0830
MST)]

Bruce Nevin (2006.12.20 09:30 EST)

Bill Powers
(2006.12.16.1515 MST) –

I prefer mathematical derivations that
reflect the physical situation properly.

What the equation implies is that if p changes (and r & G are unchanged) it must be because there was a change in d – not that a change in p causes a change in d due to the observer somehow extrasystemically injecting a signal into p. The equation says nothing about causality or even temporal antecedence.

Bill, maybe this whole episode might have been avoided had I written:

p + Gp - Gr = d

instead of

d = p + Gp - Gr.

Am I right?

Martin

[From Bill Powers (2006.12.20.1530 MST)]

Bruce Nevin
(2006.12.20 15:35 EST) –

My extended discussion was meant to show that one has to be very careful
in solving equations for independent variables, because (without
inadvertently changing the model described by the equation) the variables
on the right cannot be varied individually to see how the implied value
of the variable on the left changes. There is nothing but one’s general
knowledge to warn that doing this is an error. When the variable on the
left is a dependent variable, however, this can be done without error,
provided the variables have been separated (appear only on one side of
the equal sign).

The common practice in mathematics is to treat the statement on the right
as a function, and the variable on the left as the value of that function
given the present values of the arguments of the function. We would write
d = f(p,r) for the equation Martin used. But when p depends on r and d is
an independent variable, this statement is false, in that the only
permissible values of r and p are those that leave d unchanged, while all
other combinations of values are forbidden.

In programming languages like Pascal this problem is solved by using the
normal equal sign only in logical expressions and tests. Statements like
X := 25*Y + Z explicitly recognize that the value on the left is being
created by the values of the variables on the right. The
“colon-equal” sign (:=) is used in Pascal, and it is not called
the equal sign but the “replacement operator.” The expression
on the right of this sign is evaluated using the current values of the
variables and constants, and the result replaces whatever value the
variable on the left had. In the C language, the normal equal sign means
replacement, but the logical equality sign is ==.
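Python happens to follow the C convention, so the distinction can be illustrated without Pascal: = is replacement, == is the equality test.

```python
# "=" is replacement (Pascal's :=), "==" is logical equality (C's ==).
x = 3          # evaluate the right side, store the result in x
x = x + 1      # sensible as a replacement, absurd as an algebraic equation
print(x == 4)  # True: an equality *test*, not an assignment
```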

A common bug in computer programs is to assign a value to a variable by
evaluating some expression, and then, before that value is ever used,
setting it AGAIN by evaluating a different expression. Some compilers
will catch this kind of error, flagging it with the statement
“Variable assigned a value that is never used.” In the case of
assigning a value to an independent variable, the first assignment to
that variable would lead to this error message when that variable
appeared again on the left of the replacement operator without ever
having appeared on the right side of a replacement operation, which is
what “using” it means.

The result of all this is that programming languages are more suited to
modeling than plain algebra is, because this problem with solving for
independent variables is explicitly recognized.

Sorry if I’m long-winded about this but I’m working out the exposition as
I go.

Best,

Bill P.

[From Dick Robertson, 2006,12,21,1849CDT]

God, how I wish I had the time, the skill and the
youth to pull all these posts into a textbook for
the next generation. What an endowment it would be.
Is anyone doing it?

Best,

Dick R.

···

----- Original Message -----
From: Bill Powers <powers_w@FRONTIER.NET>
Date: Wednesday, December 20, 2006 5:50 pm
Subject: Re: PCT-Specific Methodology

[From Bill Powers (2006.12.20.1625 MST)]

Martin Taylor 2006.12.20.17.58 --

>Bill, maybe this whole episode might have been avoided had I written:
>
>p + Gp - Gr = d
>
>instead of
>
>d = p + Gp - Gr.

See my latest (today) post to Bruce. Another reason I'm being prolix here is that I'm snowed in without a car, am in a cozy warm apartment looking out at a bleak landscape with snow going by horizontally, and would just as soon be typing as doing anything else.

What I'm most used to, because of years of modeling, is solving equations for single variables on the left as functions of the other variables on the right. Too bad it took me so long to think of the word "function." Of course you could reverse the convention and put the single variable on the right, as you do above, but that wouldn't change anything.

Perhaps the core of this matter as far as I am concerned is the fact that in the expression p + Gp - Gr, p and r cannot vary independently if the original model is to remain unchanged. I'm sure you can see why this is so. On the other hand, when we solve for the dependent variable p, we get

p = [G/(1+G)]r + d/(1+G)

with the one dependent variable on the left, and the two independent variables on the right. This means we can evaluate the expression on the right for any pair of values of r and d, and obtain a value of p that is valid without changing the original model. We can also solve the system of equations for e, qo, and (trivially) qi, with the same result: each one is a function of d and r only, with none of the other dependent variables appearing.

When you isolate an independent variable on one side of the equation, you can no longer think of the expression on the other side as a function, because the arguments of the function are no longer independent. And obviously, the value of an independent variable does not depend on any other variable in the system. In the present case, you have one dependent variable on the other side, along with the other independent variable, r.

Are you sure that in your animated diagram, each frame does not represent the state of a different model?

Best,

Bill P.

[From Rick Marken (2006.12.21.1705)]

Dick Robertson (2006,12,21,1849CDT) --

God, how I wish I had the time, the skill and the
youth to pull all these posts into a textbook for
the next generation. What an endowment it would be.
Is anyone doing it?

Yep!

Best

Rick

Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[Bruce Nevin (2006.12.20 22:19 EST)]

Martin Taylor 2006.12.20.17.35 --
(Replying to Bill Powers 2006.12.19.0916 MST)

It's ironic that now, when I am talking about the correlations
among the signals associated with the control loop, you have
become interested in the uncertainties of the relations among
the signals, the topic of information-theoretic analyses.

From this post, it is clear that Martin is talking about correlations
among variables, rather than causative relations in the interpretation
(either physical or model interpretation) of the equations.

  /B

[From Bill Powers (2006.12.21.0340 MST)]

Bruce Nevin (2006.12.20 22:19 EST) --

Martin Taylor 2006.12.20.17.35 --
(Replying to Bill Powers 2006.12.19.0916 MST)

> It's ironic that now, when I am talking about the correlations
> among the signals associated with the control loop, you have
> become interested in the uncertainties of the relations among
> the signals, the topic of information-theoretic analyses.

From this post, it is clear that Martin is talking about correlations
among variables, rather than causative relations in the interpretation
(either physical or model interpretation) of the equations.

How did you get that out of what he said? I think he meant that the irony is that I am "getting interested in uncertainties," which are NOT causal relationships, whereas he is talking about correlations which, as he uses them, are systematic relationships (they can be less than perfect in systems without noise).

When in the article he originally cited, he speaks (possibly mistakenly) about the output of the system being opposed to disturbances, he is speaking causally, although of course even when speaking causally it is possible to point out that the variations are merely covariations. I don't think "causation" is a very useful concept, anyway. It's much clearer to speak of dependent and independent variables, which may have multiple interactions.

Actually, I think it is possible to talk about uncertainties in relationships without being interested in information-theoretic analysis. In fact, I know it's possible.

Best,

Bill P.

[Martin Taylor 2006.12.21.17.31]

[From Bill Powers (2006.12.21.0300 MST)]

Martin Taylor 2006.12.20.17.58 –

I find, in going back to your cited article, that I can’t make heads
or tails of it.

I’m sorry about that. I had thought it was very straightforward.
Reading your comments in this and previous messages, I think you must
be looking for complications where none exist.

The
correlation between any two vectors is the cosine of the angle between
them. That is why Gp and p are drawn at right angles. Their
correlation is zero. If Gp is large compared to p, d has a correlation
of nearly 1.0 with Gp and nearly zero with p. Since we are dealing
only with the case in which the reference signal is fixed at zero, the
output signal is Ge, which is -Gp. So the disturbance signal is
correlated almost -1.0 with the output signal–as we know to be the
case for good control.

The biggest problem is that when you say “the correlation”
and “the output” you do not indicate which correlation
and which output you mean.

Well, at least in the cited paragraph I do, each time, quite
explicitly say which correlation I am talking about. Maybe there is
some other passage in which I don’t, but I’m not sure where.

The “output”, of course, is the quantity usually
represented by “o”, which has the magnitude Ge, or,
equivalently, -Gp (as stated in the cited paragraph).

It seems that most of the reasoning
stays in your head and never gets out onto the paper.

??

This makes it very hard to
understand what you’re talking about, particularly for someone like me
who is very shaky about Laplace transforms.

I do understand that Laplace transforms allow doing algebraic
manipulations instead of solving differential equations. But in the
examples I have seen, the transformed equations do not resemble the
original equations – for example, the Laplacian of an integration is
1/s, not s, so you end up manipulating a variable in the denominator,
not the numerator.

That happens ONLY if you expand the Laplace transform, which is
what you don’t do when you are treating the operators algebraically.
Forget that, unless you want to get involved in a complicated analysis
in the s-domain.

That makes quite a difference in
the resulting algebra.

No it doesn’t.

If you want to do an s-domain analysis, fine. Go ahead. I’ll stay
clear of that until I really want to know the waveforms involved. I
always have problems with s-domain analyses, at least I do when there
are time delays. The equations get very complicated. Keeping it at the
Laplace operator level allows one to keep things very simple. Just
forget about “s”.

In fact, to transform back to the
normal form of the equations requires partial-fraction expansions and
gets very complicated.

Yes. Don’t even start!

You seem to be vastly
oversimplifying that process.

No, only avoiding it entirely, by using only the algebraic
properties of the Laplace operator. Stop looking for complication
where there should be none.

Of course I could simply be unable
to grasp the ideas you’re talking about. It would help if you didn’t
omit all the intermediate steps in your derivations.

Could you cite a place in the derivation where there is a step
that seems to be missing? I’ve gone over it, and I can’t find any such
place. If you can, I’ll try to fill in the details. It’s hard for me
as the author to see what could be causing you difficulty, unless,
perhaps it is the standard statement that the correlation between two
vectors is the cosine of the angle between them.

For example, I can’t see, after much
trying, how you get the statement that the disturbance signal is
correlated almost -1.0 with the output signal. I presume that by
“output signal” you mean the output of the integrator. The
equation you use is

d = p + Gp - Gr

In that equation, d varies in the same direction as Gp, which is the
output of the control system,

No, Gp isn’t the output of the control system. It’s the negative
of the output of the control system. -Gp is the output of the control
system, as you find in your own derivation. Remember, we both used the
same derivation up to the point of deriving p = d - Gp + Gr. There,
you can see that it is “-Gp” not “Gp” that adds to
d to produce p.

As for your puzzlement with the statement in question, if d is
the sum of Gp and p, which are orthogonal, and p is very small
compared to Gp, then Gp is almost aligned with d. So Gp has a
correlation of almost 1.00 with d, which means that -Gp has a
correlation of almost -1.00 with d. Is that one of the missing
steps?
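Martin's geometric picture here is easy to spot-check numerically. The sketch below (plain Python; the particular amplitudes 100 and 1, and the use of a sine/cosine pair, are illustrative choices, not taken from the thread) builds Gp and p as orthogonal full-period sinusoids with |Gp| much larger than |p|, and forms d = Gp + p:

```python
import math

def correlation(x, y):
    # Correlation of two signals treated as vectors:
    # the cosine of the angle between them.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

N = 1000
t = [i / N for i in range(N)]
Gp = [100.0 * math.sin(2 * math.pi * ti) for ti in t]  # large component
p  = [math.cos(2 * math.pi * ti) for ti in t]          # small, orthogonal to Gp
d  = [a + b for a, b in zip(Gp, p)]                    # d = Gp + p
```

With these values, correlation(d, Gp) comes out just under 1.0, correlation(d, p) comes out near zero, and the output -Gp correlates almost -1.0 with d, exactly the "missing step" described above.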

I think the basic reason I have so much
trouble understanding you is that you state your conclusions without
using normal mathematical derivations to show how you got to the
conclusions from the starting point.

What normal mathematical derivations other than algebra and vector
arithmetic would you like?

Without those intermediate steps, I
have no idea how you arrived at the end-point, and can’t satisfy
myself that I grasp your conclusion, or even that it is
correct.

Well, I’ll help as much as I can, but I’m bewildered as to where
to begin. If it results in a rewrite of the Web page, so much the
better.


=====================================================

[From Bill Powers (2006.12.21.0340 MST)]

Bruce Nevin (2006.12.20 22:19 EST)

Martin Taylor 2006.12.20.17.35
(Replying to Bill Powers 2006.12.19.0916 MST)

It’s ironic that now, when I am talking about the correlations among the signals associated with the control loop, you have become interested in the uncertainties of the relations among the signals, the topic of information-theoretic analyses.

From this post, it is clear that Martin is talking about correlations among variables, rather than causative relations in the interpretation (either physical or model interpretation) of the equations.

How did you get that out of what he said? I think he meant that the irony is that I am “getting interested in uncertainties,” which are NOT causal relationships, whereas he is talking about correlations which, as he uses them, are systematic relationships (they can be less than perfect in systems without noise).

Bruce has it exactly right. I’m talking about correlations, with
no implication of causality, just as statistical uncertainties have no
implications of causality.

As for Bill’s seeming approach to information-theoretic analyses,
I meant that if he pursued the line of thought he followed in arriving
at the Chi-square analysis, he would soon arrive at a more complete
information-theoretic analysis. I did not mean to imply that he was
there yet. One day, knowing his track record for following the
analysis where logic leads, with luck :-)

Martin

[From Bill Powers (2006.12.21.1655 MST)]

Martin Taylor 2006.12.21.17.31 --

I do understand that Laplace transforms allow doing algebraic manipulations instead of solving differential equations. But in the examples I have seen, the transformed equations do not resemble the original equations -- for example, the Laplacian of an integration is 1/s, not s, so you end up manipulating a variable in the denominator, not the numerator.

That happens ONLY if you expand the Laplace transform, which is what you don't do when you are treating the operators algebraically. Forget that, unless you want to get involved in a complicated analysis in the s-domain.

OK, maybe this is getting us closer to teaching me something. Here are the equations we start with ("INT" is the integral sign):

p = o + d

e = r - p

o = G*INT(e)

Combining and substituting:

p = G*INT(e) + d

   = G*INT(r - p) + d

   = G*INT(r) - G*INT(p) + d

OK, that gets us to the equations you used. But here is where I lose you. There are no Laplace transforms in this equation, yet you proceed to manipulate the variables with an integral in them as if they were Laplace variables -- algebraically. I don't think this is legitimate -- perhaps the problem is that I don't know what's legitimate, but let me say what I think and see where that leaves us.

Before you can switch to using algebra, as I understand it, you have to express r, p, and d as Laplacians of their respective waveforms. We can simplify this by letting r = 0, but we still have to supply a waveform for d so we can substitute the required Laplace variable.

Let L be the script Laplacian operator and s the Laplace variable. Then

L(p) = -G*L[INT(p)] + L(d)

The Laplacian of the integral of a variable is 1/s times the Laplacian of the variable, so

L(p) = -G*(1/s)L(p) + L(d)

Now we can switch to algebra: First we collect terms:

L(p)(1 + G/s) = L(d)

Then we solve for L(p) (or L(d) if you know the form of L(p)):

         s
L(p) = --- * L(d)
        G+s

Given the waveform of the disturbance, we can find the Laplacian of p. Let d be D times a unit step, called H(t) in my "Laplace transforms for electrical engineers". L[H(t)] is 1/s, so we have

         D
L(p) = ---
        G+s

Now we can look up the inverse Laplace transform and find the form of p: it will be

p = D*H(t)*exp(-Gt).

The unit step says this solution is good only for t > 0. The solution is clearly correct: it says that after a unit step disturbance, the perceptual signal will initially have the same magnitude as the disturbance, and then decay exponentially toward zero with a time constant of 1/G seconds.
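That closed-form step response is easy to verify by simulating the loop directly. The following sketch (Euler integration; G = 10, D = 1, and the step size are illustrative values, not from the post) integrates o = G*INT(r - p) with r = 0 and a unit-step disturbance, so p should track D*exp(-G*t):

```python
import math

def step_response(G=10.0, D=1.0, dt=1e-4, t_end=0.3):
    """Euler-integrate the loop r = 0, e = r - p, o = G*INT(e), p = o + d,
    with a step disturbance d = D applied at t = 0."""
    o, t = 0.0, 0.0
    samples = []
    while t < t_end:
        p = o + D                  # perceptual signal
        samples.append((t, p))
        o += G * (0.0 - p) * dt    # integrating output function
        t += dt
    return samples

samples = step_response()
# At each sampled t, p stays close to D*exp(-G*t): an initial jump to D,
# then exponential decay with time constant 1/G.
```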

We can do a similar set of operations for a sine-wave disturbance. The example given in my text is for the form sin(Wo*t + phi)*H(t), and the Laplacian of this is

        Wo*cos(phi) + s*sin(phi)
        ------------------------
              s^2 + Wo^2

where Wo means omega or 2*pi*frequency.

I won't try to solve the equation for this form of disturbance -- I might or might not be able to do it.

As you can see, I have a lot of trouble understanding how you could manipulate INT(x) algebraically as you did, and also how this led to plotting p and Gp at right angles to each other. There seems to be no relationship between what you did and what I understand to be the way Laplace transforms are used. That's what I mean by leaving out steps -- I don't see how you got there from here. If you can explain that to me I will have learned something.

Best,

Bill P.

[Martin Taylor 2006.12.22.11.08]

[From Bill Powers (2006.12.21.1655 MST)]

Martin Taylor 2006.12.21.17.31 --

That happens ONLY if you expand the Laplace transform, which is what you don't do when you are treating the operators algebraically. Forget that, unless you want to get involved in a complicated analysis in the s-domain.

OK, maybe this is getting us closer to teaching me something. Here are the equations we start with ("INT" is the integral sign):

p = o + d

e = r - p

o = G*INT(e)

Combining and substituting:

p = G*INT(e) + d

  = G*INT(r - p) + d

  = G*INT(r) - G*INT(p) + d

OK, that gets us to the equations you used. But here is where I lose you. There are no Laplace transforms in this equation, yet you proceed to manipulate the variables with an integral in them as if they were Laplace variables -- algebraically.

I see.

We start off on the wrong foot because I was unable to use the conventional Laplacian script fonts in the Web page. All of p, o, and d, as well as G, are Laplace transforms.

Just as when you use scalar variables, you can say o = G * e, so, when o, G and e are Laplace transforms, you can say o = G * e.

This all started with Heaviside, who discovered he could get the right answers for differential equations by using the symbol "D" to represent differentiation. If he wrote y = dx/dt as y = D * x, then the second derivative would be written y = D * D * x, and the integral would be y = x/D. When he did the algebra, it all worked and the right results came out.

Heaviside could never prove that what he was doing was mathematically legitimate. I forget who it was that did prove that, and that it works also for Laplace transforms. I can't prove to you that the algebra is legitimately used when all the variables are Laplace transforms or operators, but I was taught in engineering school that it has been proved so. I leave it at that.

Now, you wrote "p = G*INT(r) - G*INT(p) + d". That's a time domain representation, in which G is a scalar number. In the Laplace domain representation, G*INT is the expression I labelled "G", and the same expression is written (as we did earlier) as "p = G*r -G*p + d".

Before you can switch to using algebra, as I understand it, you have to express r, p, and d as Laplacians of their respective waveforms.

Indeed, that's what I was doing all along. I perhaps ought to make that fact clearer in the Web page.

Let L be the script Laplacian operator and s the Laplace variable. Then

L(p) = -G*L[INT(p)] + L(d)

The Laplacian of the integral of a variable is 1/s times the Laplacian of the variable, so

L(p) = -G*(1/s)L(p) + L(d)

I'm not going to comment on this analysis, which I think is probably correct. The difficulty in it comes at the end, where you have to assert a particular waveform for d. Treating the Laplacians algebraically, you solve for arbitrary waveforms of d, and can make general statements.

As you can see, I have a lot of trouble understanding how you could manipulate INT(x) algebraically as you did,

I hope the above is a sufficient explanation, summarized as "INT(x) is a time domain expression; converting to the Laplace domain permits the use of algebra among the various Laplacians." I looked up Wikipedia on this, and the best I could find quickly was this, on the page "Operator":

============Wikipedia extract=================
Linear operators

Main article: Linear transformation

The most common kind of operator encountered are linear operators. In talking about linear operators, the operator is signified generally by the letters T or L. Linear operators are those which satisfy the following conditions; take the general operator T, the function acted on under the operator T, written as f(x), and the constant a:

     T(f(x) + g(x)) = T(f(x)) + T(g(x))
     T(af(x)) = aT(f(x))

Many operators are linear. For example, the differential operator and Laplacian operator, which we will see later.


====================================

and also how this led to plotting p and Gp at right angles to each other.

That's an entirely separate question, and it's the one place where I feel a little insecure in the argument. The argument is that if you do a Fourier analysis of a waveform, each cosine component at the input to the integrator comes out as the corresponding sine, which is orthogonal to the original. Here's the questionable statement, of which I'd like a proof or a refutation: "if every Fourier component of one waveform is orthogonal to the corresponding component of another waveform, the two waveforms are themselves orthogonal."

I don't know if this all helps.

Martin

[From Bruce Nevin (2006.12.22.1229 EST)]

Martin Taylor 2006.12.22.11.08 --

  Here's the questionable statement, of which I'd like a
  proof or a refutation: "if every Fourier component of
  one waveform is orthogonal to the corresponding component
  of another waveform, the two waveforms are themselves orthogonal."

If two waveforms can only be orthogonal in one way, this works, but if
there is more than one way (as by Euclidean analogy a perpendicular
above and a perpendicular below are both orthogonal to a horizontal
line, and many horizontal lines rotated about that perpendicular in a
plane are orthogonal to it) then you have to stipulate that the sine
wave Fourier components are all orthogonal in the same "direction". In
radio engineering, the "directions" are such as preclude crosstalk, e.g.

"Minimum frequency-shift keying or minimum-shift keying (MSK) is a
particularly spectrally efficient form of coherent frequency-shift
keying. In MSK the difference between the higher and lower frequency is
identical to half the bit rate. As a result, the waveforms used to
represent a 0 and a 1 bit differ by exactly half a carrier period. This
is the smallest FSK modulation index that can be chosen such that the
waveforms for 0 and 1 are orthogonal. A variant of MSK called GMSK is
used in the GSM mobile phone standard."

Also

  /B

[Martin Taylor 2006.12.22.14.49]

[From Bruce Nevin (2006.12.22.1229 EST)]

Martin Taylor 2006.12.22.11.08 --

  Here's the questionable statement, of which I'd like a
  proof or a refutation: "if every Fourier component of
  one waveform is orthogonal to the corresponding component
  of another waveform, the two waveforms are themselves orthogonal."

If two waveforms can only be orthogonal in one way, this works, ...

Thanks, Bruce.

I don't think you answered the question, but you did suggest a very simple route to the answer, which is that the statement is true. Here's the reasoning.

The Fourier transform of a waveform consists of a series of mutually orthogonal components, which come in pairs: an * sin(nwt) and bn * cos(nwt). The entire waveform is described by adding all the components, as n ranges from zero to a maximum value (which is infinite, if the waveform is infinitely extended in time, but which is otherwise, meaning in all practical cases, finite).

Take two waveforms with the same number of components (i.e. they are equally extended in time). The calculation of their correlation involves their cross-multiplication. We know that any cross-multiplication of a component with index n by a component with index k gives zero when integrated over the whole length of the two waveforms, so that we need consider only the cross-multiplication of components having the same index value n.

By hypothesis (cited at the head of the message), the nth pair of corresponding Fourier components in the two waveforms are orthogonal, meaning that their cross product sums to zero.

Since all the cross-products of all the components are zero, the sum of cross products is zero, and the two entire waveforms are orthogonal.
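The reasoning above can be spot-checked numerically. In this sketch (plain Python; the harmonic amplitudes are arbitrary illustrative values), x is built from sine components and y from the corresponding cosines, so each component pair is orthogonal; the inner product of the whole waveforms then comes out at zero to within rounding error:

```python
import math

N = 2048
t = [i / N for i in range(N)]
amps = [1.0, -0.5, 2.0, 0.25]   # arbitrary amplitudes for harmonics 1..4

# x uses sines, y the corresponding cosines: each component pair is orthogonal.
x = [sum(a * math.sin(2 * math.pi * (n + 1) * ti) for n, a in enumerate(amps))
     for ti in t]
y = [sum(a * math.cos(2 * math.pi * (n + 1) * ti) for n, a in enumerate(amps))
     for ti in t]

dot = sum(xi * yi for xi, yi in zip(x, y))
# dot is ~0: the two whole waveforms are orthogonal.
```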

I'm now much happier that my analysis in the oft-cited Web page <http://www.mmtaylor.net/PCT/Info.theory.in.control/Control+correl.html> is likely to be correct.

Martin

[From Bill Powers (2006.12.23.1030 MST)]

I was too hasty and got the equations a little wrong.


==============================================================================
Martin, note: The Laplace variable "s" (or as it is sometimes called, p, a symbol we do not use for obvious reasons) is a complex variable, a + j*w. The components are "damped sinusoidal oscillations, of angular frequency w (frequency f = w/(2*pi)) and damping coefficient a." The quote is from "Laplace transforms for electrical engineers" by B. J. Starkey (1954), p 33. He calls those "cisoidal" oscillations.

The Laplace transform is a vector in the complex plane. Starkey takes three chapters of working in the stratosphere before actually getting into Laplace transforms per se -- I think you'd like this book. Note the date of publication -- I really tried to learn control theory the way people were doing it back when PCT started.

I wish I knew how people think up things like the Laplace transform.

Start with the basic system equations, all variables being Laplace transforms of algebraic variables with the same names.

p = o + d
e = r - p
o = ke/s     where k is the output integration factor
             and s is the Laplace variable.

Solve for e:

e = r - o - d
e = r - ke/s - d

Collecting terms,

e(1 + k/s) = r - d

        s
e := -----(r - d)
      k + s

Because o = ke/s, o becomes

        k
o := -----(r - d)
      k + s

Because p = o + d, p becomes

        k
p := -----(r - d) + d, or
      k + s

        k(r - d) + d(s + k)
p := -------------------------- or
              k + s

        kr + sd
p := ----------
         k + s

To turn any of these equations into a frequency-domain representation in the complex plane, substitute jw for s (and don't ask me why that works, either). This will enable us to compute the phase shift of o relative to d when d is a sine wave of any frequency. You say that the cosine of the phase shift angle is the correlation between d and o, so this should give you that correlation:

The solution for o was:

        k
o := -----(r - d)
      k + s

Let r be zero for simplicity.

o = -dk/(k + jw)

Multiply numerator and denominator by the complex conjugate of the denominator. This makes the denominator real and transfers imaginary components to the numerator.

      -dk(k - jw)
o = -------------
       k^2 + w^2

The cosine of the phase angle of o relative to d is the real part of o/d divided by its magnitude, i.e. -k/sqrt(k^2 + w^2).

You can see that for w close to zero, the cosine of the angle is close to minus 1.00, showing that at low frequencies the output varies almost 180 degrees out of phase with the disturbance. The correlation would be close to negative 1. For k large compared with omega, that nearly complete opposition holds over a whole band of low frequencies.

You can use this method to calculate any of the other correlations you're interested in. Note that all correlations would be lower if there were noise in the system variables. Only the high-frequency components of the disturbance noise would get through to p unopposed. The low-frequency components of the reference signal noise would affect the output and input of the system.
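This method is easy to tabulate directly from the transfer function. A minimal sketch (k = 100 is an illustrative gain, not from the post): with r = 0 we have o/d = -k/(k + jw), so the cosine of the phase angle of o relative to d works out to -k/sqrt(k^2 + w^2):

```python
import math, cmath

k = 100.0   # integrator gain factor (illustrative value)

def cos_phase(w):
    """Cosine of the phase angle of o relative to d, from o/d = -k/(k + jw)."""
    H = -k / complex(k, w)
    return math.cos(cmath.phase(H))

# For w << k this is close to -1 (the output opposes the disturbance almost
# exactly); for w >> k it tends to 0 (the disturbance passes through to p
# unopposed, as noted for high-frequency noise components).
```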

Best,

Bill P.

[From Bruce Nevin (2006.12.23 14:58 EST)]

Bill Powers (2006.12.23.1030 MST)--

To turn any of these equations into a frequency-domain representation
in the complex plane, substitute jw for s (and don't ask me why that
works, either).

Does that require the damping coefficient to be zero?

  /B


[From Bill Powers (2006.12.23.1610 MST)]

Bruce Nevin (2006.12.23 14:58 EST) --

> To turn any of these equations into a frequency-domain representation
> in the complex plane, substitute jw for s (and don't ask me why that
>works, either).

Does that require the damping coefficient to be zero?

No -- see the example in the next post I sent (2006.12.23.1030 MST). The damping coefficient in that example is just the gain factor in the integrating output function.

Best,

Bill P.

[Martin Taylor 2006.12.23.19.19]

[From Bill Powers (2006.12.23.1030 MST)]
You say that the cosine of the phase shift angle is the correlation between d and o,

I didn't say that. I said that the correlation between any two vectors is the cosine of the angle between the two vectors in their common basis space.

Martin