Disturbance-output as cause-effect

[From Bruce Abbott (2017.10.20.2015 EDT)]

I have asserted that the only external source of variation in the output of a control system is variation in the effect of the disturbance on the input quantity, given a constant reference signal. (I am ignoring internal “noise” or other internal sources of distortion.) Here I provide a worked example.

Proportional control simulator

The following are lines of pseudo-code to implement the simulation:

Input quantity = Disturbance effect + Feedback quantity

Perceptual signal = Input gain * Input quantity

Error signal = Reference signal – Perceptual signal

Output quantity = Old Output quantity + (Output gain * Error signal – Old Output quantity) * dt * Slowing factor

Feedback quantity = Feedback gain * Output quantity

“Old Output quantity” is the value of the Output quantity on the previous time-step.

“dt” is the time-step of the simulation in seconds per iteration.

“Slowing factor” is a value less than 1 that determines the “leak rate” of the leaky integrator.

Constant parameters:

Input gain = 1

Output gain = 100

Feedback gain = 1

dt = 0.1 seconds

Slowing factor = 0.05

Initial conditions at time zero:

Reference signal = 0

Disturbance effect = 0

Perceptual signal = 0

Error signal = 0

Output quantity = 0

Feedback quantity = 0

First iteration:

My claim: Given the system at rest with zero error and therefore zero output, all the changes around the loop are due to the change in disturbance, which propagates around the loop.

Proof:

Disturbance effect = -10

Input quantity = Disturbance effect + Feedback quantity = -10 + 0 = -10

Perceptual signal = Input gain * Input quantity = 1.0 * -10 = -10

Error signal = Reference signal – Perceptual signal = 0 – (-10) = 10

Output quantity = Old Output quantity + (Output gain * Error signal – Old Output quantity) * dt * Slowing factor

= 0 + (100 * 10 – 0) * 0.1 * 0.05 = 0 + (1000 * 0.005) = 5

Feedback quantity = Feedback gain * Output quantity = 1.0 * 5 = 5

Each line of the simulation describes a causal relationship between the variables on the right side of the equal sign (cause) and the variable on the left side of the equal sign (effect). The change in disturbance value produces effects that propagate around the loop. The change in Feedback quantity then feeds back onto the input quantity, changing its value from -10 to -5 for the next iteration.
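The loop traced above can be run as actual code. Here is a minimal Python sketch of the same pseudo-code (the function and variable names are mine; the parameter defaults are the constants listed above). Note that it carries the previous Output quantity forward on each iteration, as the leaky-integrator equation requires:

```python
def simulate(n_steps, disturbance=-10.0, reference=0.0,
             input_gain=1.0, output_gain=100.0, feedback_gain=1.0,
             dt=0.1, slowing=0.05):
    """Run the pseudo-code loop for n_steps iterations; return the outputs."""
    output = 0.0     # Output quantity (becomes "Old Output quantity" next step)
    feedback = 0.0   # Feedback quantity
    outputs = []
    for _ in range(n_steps):
        input_qty = disturbance + feedback
        perception = input_gain * input_qty
        error = reference - perception
        # leaky integrator: output moves a fraction of the way
        # toward (output_gain * error) each time-step
        output = output + (output_gain * error - output) * dt * slowing
        feedback = feedback_gain * output
        outputs.append(output)
    return outputs

print([round(o, 3) for o in simulate(2)])   # → [5.0, 7.475]
```

The first value reproduces the hand calculation above; on the second iteration the old output of 5 is retained inside the integrator, so the output continues to grow rather than restarting from zero.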

Second iteration:

My claim: On the second and subsequent iterations, the fact that the Output quantity (and therefore the Feedback quantity) may be non-zero is due to the continuing, uncompensated residual effect of the disturbance left over from previous iterations.

Proof:

Disturbance effect = -10

Input quantity = Disturbance effect + Feedback quantity = -10 + 5 = -5

Perceptual signal = Input gain * Input quantity = 1.0 * -5 = -5

Error signal = Reference signal – Perceptual signal = 0 – (-5) = 5

Output quantity = Old Output quantity + (Output gain * Error signal – Old Output quantity) * dt * Slowing factor

= 5 + (100 * 5 – 5) * 0.1 * 0.05 = 5 + (495 * 0.005) = 7.475

Feedback quantity = Feedback gain * Output quantity = 1.0 * 7.475 = 7.475

Note that the initial value of +5 for the Feedback quantity was caused by the Disturbance effect during the first iteration. It did not appear by the “magic” of “circular causality.” Circular causality is just an ordinary chain of cause and effect that feeds back onto the input quantity. Given that the Reference signal and Disturbance effect are constant, one can easily trace the sequence around the loop to see that this is true. If either the Reference signal, Disturbance effect, or both are changing, it becomes difficult to attribute current values of the variables around the loop to the Disturbance effect, Reference signal, or combined current and residual effects of each. But only the changes in Reference signal and Disturbance effect are independent causes of those changes.

Loop gain

Just to complete this tutorial, the loop gain of a control system is the product of the gains all around the loop:

Loop gain = Input gain * Output gain * Feedback gain = 1.0 * 100 * 1.0 = 100

A proportional control system experiencing a disturbance will not be able to fully compensate for its effect on the Input quantity, because the output level depends on the error level, and an error level of zero would produce no output and thus no compensation. The Disturbance effect on the Input quantity is reduced by the control system by a factor of 1/(1+Loop gain) relative to the effect it would have in the absence of control. In this example:

Residual Disturbance effect = Disturbance effect * 1/(1+Loop gain) = -10 * 1/(1+100) = -10 * 1/101 = -10 * 0.0099 = -0.099. With the Input gain = 1.0, the Perceptual signal = -0.099. Recall that the Reference signal was set to zero, so this is also the magnitude of the residual error.
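The 1/(1+Loop gain) factor can be checked by running the simulation equations to steady state. A short Python sketch (function name is mine; input and feedback gains are 1, as in the example):

```python
def residual_after(n_steps, d=-10.0, reference=0.0,
                   gain=100.0, dt=0.1, slowing=0.05):
    """Return the residual disturbance effect on the input quantity
    after n_steps iterations of the proportional-control loop."""
    output = feedback = 0.0
    for _ in range(n_steps):
        input_qty = d + feedback        # input gain = 1
        error = reference - input_qty
        output += (gain * error - output) * dt * slowing
        feedback = output               # feedback gain = 1
    return d + feedback                 # what remains of d at the input

print(round(residual_after(2000), 4))   # → -0.099
```

After convergence the input quantity settles at d/(1+Loop gain) = -10/101 ≈ -0.099, matching the hand calculation above.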

I believe my computations are accurate, but if I’ve made any arithmetic errors I will take no offense at having them corrected.

Bruce

[From Rick Marken (2017.10.21.1010)]


Bruce Abbott (2017.10.20.2015 EDT)


BA: I have asserted that the only external source of variation in the output of a control system is variation in the effect of the disturbance on the input quantity, given a constant reference signal. (I am ignoring internal “noise” or other internal sources of distortion.) Here I provide a worked example.

RM: I would count the output of the control system as another external source of variation in output. Both the output and disturbance are external to the controlled quantity, which is the cause of the error that causes the output. I think what you mean is that the only external source of variation in the output of a control system that is independent of the output of the organism itself is the effect of the disturbance on the input quantity.


BA: First iteration:


My claim: Given the system at rest with zero error and therefore zero output, all the changes around the loop are due to the change in disturbance, which propagates around the loop.


Proof:


Disturbance effect = -10

Input quantity = Disturbance effect + Feedback quantity = -10 + 0 = -10

Perceptual signal = Input gain * Input quantity = 1.0 * -10 = -10

Error signal = Reference signal – Perceptual signal = 0 – (-10) = 10

Output quantity = Old Output quantity + (Output gain * Error signal – Old Output quantity) * dt * Slowing factor

= 0 + (100 * 10 – 0) * 0.1 * 0.05 = 0 + (1000 * 0.005) = 5

Feedback quantity = Feedback gain * Output quantity = 1.0 * 5 = 5


BA: Each line of the simulation describes a causal relationship between the variables on the right side of the equal sign (cause) and the variable on the left side of the equal sign (effect). The change in disturbance value produces effects that propagate around the loop. The change in Feedback quantity then feeds back onto the input quantity, changing its value from -10 to -5 for the next iteration.

RM: It seems to me that this shows that, given a system at rest with zero error and zero output, the changes around the loop are due to the change in disturbance, the lack of change in the output and the fact that there is a transport lag in the system so that there is no output at all for dt seconds until the output suddenly happens. But ignoring these details and taking your demonstration as a proof that your claim is correct, what does this mean for how we should go about studying living control systems?

Best

Rick

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Abbott (2017.10.21.1500 EDT]

Rick Marken (2017.10.21.1010) –

Bruce Abbott (2017.10.20.2015 EDT)

BA: I have asserted that the only external source of variation in the output of a control system is variation in the effect of the disturbance on the input quantity, given a constant reference signal. (I am ignoring internal “noise” or other internal sources of distortion.) Here I provide a worked example.

RM: I would count the output of the control system as another external source of variation in output. Both the output and disturbance are external to the controlled quantity, which is the cause of the error that causes the output. I think what you mean is that the only external source of variation in the output of a control system that is independent of the output of the organism itself is the effect of the disturbance on the input quantity.

BA: No, I don’t mean that. I believe you are familiar with the terms “exogenous” and “endogenous” in reference to the variables in a system diagram of causal influences. Exogenous variables come from outside the system (external); endogenous variables are located inside the system (internal). Disturbances and reference signals are exogenous variables in a causal diagram of a control system; all the variables inside the loop are endogenous variables. Ignoring system noise, disturbances and reference signal changes are the only independent sources of variation of the components of the loop, including the output component. They affect, but are not affected by, those in-loop variables.

BA: First iteration:

My claim: Given the system at rest with zero error and therefore zero output, all the changes around the loop are due to the change in disturbance, which propagates around the loop.

Proof:

Disturbance effect = -10

Input quantity = Disturbance effect + Feedback quantity = -10 + 0 = -10

Perceptual signal = Input gain * Input quantity = 1.0 * -10 = -10

Error signal = Reference signal – Perceptual signal = 0 – (-10) = 10

Output quantity = Old Output quantity + (Output gain * Error signal – Old Output quantity) * dt * Slowing factor

= 0 + (100 * 10 – 0) * 0.1 * 0.05 = 0 + (1000 * 0.005) = 5

Feedback quantity = Feedback gain * Output quantity = 1.0 * 5 = 5

BA: Each line of the simulation describes a causal relationship between the variables on the right side of the equal sign (cause) and the variable on the left side of the equal sign (effect). The change in disturbance value produces effects that propagate around the loop. The change in Feedback quantity then feeds back onto the input quantity, changing its value from -10 to -5 for the next iteration.

RM: It seems to me that this shows that, given a system at rest with zero error and zero output, the changes around the loop are due to the change in disturbance, the lack of change in the output and the fact that there is a transport lag in the system so that there is no output at all for dt seconds until the output suddenly happens. But ignoring these details and taking your demonstration as a proof that your claim is correct, what does this mean for how we should go about studying living control systems?

BA: That is correct: no change of output other than that produced by the disturbance (even if the change occurs after a delay).

BA: The point is to refute your oft-stated claim that there is no causal connection between disturbance and change in output. I show in the portion you did not comment on (below) that this is true even after the first iteration: with a constant reference, the only cause of the observed variations in output across subsequent iterations of the loop is the initial change of disturbance value. This is true even though on these iterations the input quantity’s value is affected by a non-zero value of the output quantity, because that non-zero value is a residual of the as-yet not offset effect of the original change in disturbance.

BA: I see you are changing ground from “is it true?” to “does it matter?” It matters if you want to understand how control systems actually do what they do, as opposed to believing that they work according to some kind of magic in which the causal relationship between disturbance and output is abolished by a new form of causality. It doesn’t matter for how we should go about studying living control systems if one understands that the causal relationship between disturbance and output is not what one would expect to find if the system operated in open-loop fashion, but instead is the approximate inverse of the environmental feedback function.

Bruce


[From Rick Marken (2017.10.22.1100)]

···

Bruce Abbott (2017.10.21.1500 EDT)


BA: The point is to refute your oft-stated claim that there is no causal connection between disturbance and change in output.

RM: The causal connection runs from the combined effect of disturbance and output (on the controlled quantity) to the output. This is captured by the organism function of PCT:

o = k (r - p) where p = o + d, so o = kr - k(o + d)

RM: The disturbance, d, is certainly part of the causal connection to output, o. So saying that there is “no causal connection” between disturbance and output is, indeed, incorrect. What is true – and what I should have been saying if I wasn’t – is that the causal connection between d and o cannot be seen in the observed relationship between d and o. What Bill showed is that the observed relationship between d and o is the inverse of the feedback connection between o and the controlled variable, q.i. So taking the relationship between d and o to reflect a causal relationship is an understandable mistake; a behavioral illusion.
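For concreteness: solving o = k(r - (o + d)) for o (input and feedback gains of 1, as in Bruce's example) gives the closed-loop equilibrium o = k(r - d)/(1 + k). A quick numerical check (the function name and the particular d values are mine, chosen arbitrarily):

```python
def closed_loop_output(k, r, d):
    """Closed-loop equilibrium output: the solution of o = k*(r - (o + d))."""
    return k * (r - d) / (1 + k)

k, r = 100.0, 0.0   # output gain and reference from the example
for d in (-10.0, 3.0, 42.0):
    o = closed_loop_output(k, r, d)
    # o satisfies the loop equation...
    assert abs(o - k * (r - (o + d))) < 1e-6
    # ...and, with k large, approximates -d (inverse of the unit feedback function)
    assert abs(o + d) <= abs(d) / (1 + k) + 1e-9
print("closed-loop solution checked")
```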


BA: I show in the portion you did not comment on (below) that this is true even after the first iteration: with a constant reference, the only cause of the observed variations in output across subsequent iterations of the loop is the initial change of disturbance value.

RM: This sequential state “proof” is quite misleading. You say it proves that the only cause of observed variations in output is the disturbance. If this were true then the correct equation describing the relationship between disturbance and output in a control system would be:

o = f(d)

RM: where f() is the organism function that is the causal path through the organism from disturbance (stimulus) to output (response). This is the cause-effect model of behavior that is the basis of experimental psychology. According to Powers (1978), taking the observed relationship between d and o to reflect the causal path from d to o is a mistake when the system under study is a closed loop system. The actual relationship between o and d for a closed loop system is

o = g^-1(d)

That is, the observed relationship between o and d reflects the inverse of the feedback function, not the causal connection between d and o. That’s because, contrary to your sequential state “proof”, d is not the only cause of variations in o. Variations in o are caused by the simultaneous effect of d and o on the controlled quantity, q.i.


BA: I see you are changing ground from “is it true?” to “does it matter?”

RM: Yes, that’s because I would like to know why you believe this. Why do you believe that Powers (1978) is the one paper on PCT that got it all wrong?

BA: It matters if you want to understand how control systems actually do what they do, as opposed to believing that they work according to some kind of magic in which the causal relationship between disturbance and output is abolished by a new form of causality.

RM: PCT explains how control systems work and it isn’t by “magic” (I see you’ve been talking to Martin;-). It works through normal causal processes that happen to be organized in a negative feedback loop. Powers (1978) gives the equations that define the causal connections in a negative feedback loop. When we study real living control systems we are trying to understand how these systems actually do what they do. Essential to understanding this is determining what perceptual variables they are controlling. Once you know that, you can determine how outputs affect inputs (by inspection) and how inputs affect outputs (by fitting the model to data).

RM: Until one understands this, one will not be able to make any contributions to our understanding of how living control systems work.

Best

Rick



Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Abbott (2017.10.23.0920 EDT)]

Rick Marken (2017.10.22.1100) –

Bruce Abbott (2017.10.21.1500 EDT)

BA: The point is to refute your oft-stated claim that there is no causal connection between disturbance and change in output.

RM: This sequential state “proof” is quite misleading. You say it proves that the only cause of observed variations in output is the disturbance. If this were true then the correct equation describing the relationship between disturbance and output in a control system would be:

o = f(d)

RM: where f() is the organism function that is the causal path through the organism from disturbance (stimulus) to output (response). This is the cause-effect model of behavior that is the basis of experimental psychology. According to Powers (1978), taking the observed relationship between d and o to reflect the causal path from d to o is a mistake when the system under study is a closed loop system. The actual relationship between o and d for a closed loop system is

o = g^-1(d)

That is, the observed relationship between o and d reflects the inverse of the feedback function, not the causal connection between d and o. That’s because, contrary to your sequential state “proof”, d is not the only cause of variations in o. Variations in o are caused by the simultaneous effect of d and o on the controlled quantity, q.i.

BA: Two points: (1) It is not true that if my sequential analysis were true, “then the correct equation describing the relationship between disturbance and output in a control system would be o = f(d)”; (2) The only variable affecting o in the equation o = g^-1(d) is d, which proves my point that when the reference signal is constant, d is the only external variable affecting o.

BA: The reason why your analysis fails is that you have made the mistake of using quasi steady-state equations to analyze system dynamics. My analysis uses the same equations we use in our computer simulations to track the behavior of a proportional control system over time as it responds to changes in the disturbance variable. Together these equations, applied iteratively, provide an approximate numerical solution; to derive the exact solution requires the dynamical equations of calculus. When you run the simulation, one result is that the system’s output turns out to vary according to the inverse of the feedback function. This proves that your assertion, that my sequential analysis would imply that o = f(d), is false.
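Bruce's last point can be demonstrated by driving the same simulation equations with a time-varying disturbance: with a feedback gain of 1, the output tracks approximately -d, the inverse of the feedback function. A sketch (the sinusoidal disturbance is an arbitrary choice of mine, slow relative to the system's settling time):

```python
import math

output = feedback = 0.0
worst = 0.0
for n in range(2000):
    d = 5.0 * math.sin(0.01 * n)        # slowly varying disturbance
    error = 0.0 - (d + feedback)        # reference 0, input gain 1
    output += (100.0 * error - output) * 0.1 * 0.05   # leaky integrator
    feedback = output                   # feedback gain 1
    if n > 100:                         # skip the initial transient
        worst = max(worst, abs(output + d))
assert worst < 0.3   # output stays close to -d throughout the run
```

The output continuously mirrors the disturbance (to within the residual error set by the loop gain, plus a small lag), even though no line of the program computes -d directly.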

BA: Having established the points I wished to prove, I am not going to engage with you further on this issue.

Bruce

[From Rick Marken (2017.10.23.2020)]


Bruce Abbott (2017.10.23.0920 EDT)–


BA: Two points: (1) It is not true that if my sequential analysis were true, “then the correct equation describing the relationship between disturbance and output in a control system would be o = f(d)”; (2) The only variable affecting o in the equation o = g^-1(d) is d, which proves my point that when the reference signal is constant, d is the only external variable affecting o.

RM: Though I disagree, I’ll give you point (1). But not point (2). The “behavioral illusion” equation,

o = g^-1(d),

doesn’t say that d is the only variable affecting (causing) o. It says that d is a variable that is not affecting o. The function g in that equation describes the causal connection from o to the controlled input, q.i, as in,

q.i = g(o)

The inverse of this equation describes the causal connection from q.i to o – a causal connection that does not exist (the direction of causality goes from o to q.i). So the observed relationship between d and o is a correlation that does not imply causality. The correlation between d and o exists because of the way causality works when it is organized into a negative feedback loop. Powers (1978) showed that when causality is so organized, there will be a relationship between d and o, but that relationship is not a causal relationship; it is a side-effect of the operation of causality in a negative feedback loop. And the observed relationship between o and d will be an approximation to the inverse of the function relating o to q.i, the goodness of the approximation depending on how well q.i is controlled.

RM: What this means, of course, is that the observed relationship between d and o really tells us nothing about the nature of the organism under study. The observed relationship between d and o doesn’t “conceal” the true causal path from d to o; it doesn’t say anything about that path. You can’t “recover” that causal path – the actual organism function – by taking the inverse of the inverse of the feedback function, for example. So any attempt to understand organisms by looking for relationships between variations in external events (independent variables) and the organism’s responses (dependent variables) – even if this relationship is looked for under controlled conditions – will tell you nothing about the behaving system under study. This is only true, of course, if the system under study is a negative feedback control system (N-system, per Powers (1978)); that is, only if it is a living system. This is the conclusion of the analysis in Powers (1978) and it is obviously pretty revolutionary and not something scientific psychologists particularly want to hear (or understand; to paraphrase Upton Sinclair: “It’s hard to get people to understand something when success in their career depends on their not understanding it.”).

RM: The main thing that looking at the relationship between d and o doesn’t tell you is that the organism is controlling some aspect of its environment, q.i, and what that aspect of the environment is. This can be illustrated using my “What is Size” demo (www.mindreadings.com/ControlDemo/Size.html). The graphs below show the relationship between d (the height of a rectangle) and o (width of that rectangle) in two different runs of the experiment. In the first (upper) run the function relating d to o appears quite non-linear; and, indeed, the relationship between o and d would be fit best by an equation of the form o = k/d. In the second (lower) run the function relating d to o appears to be quite linear; and, indeed, the relationship between o and d would be fit best by an equation of the form o = k-d.


RM: From the experimenter’s perspective, nothing changed from the first to the second run; the d variable (height) is the same and the o variable (width) is the same, yet we see two very different functions relating d to o. The causal path from d to o appears to have changed for no apparent reason. If this were a real psychological experiment aimed at determining the relationship between d and o, the researcher might be inclined to conclude that the “true” causal path from d to o is the average of these two results. But, of course, this misses the true reason for the difference in the shape of the relationship between d and o in the two cases. The difference is the variable that is being controlled, q.i.

RM: In the first run, the area of the rectangle was controlled and in the second the perimeter of the rectangle was controlled. This change in controlled variable – a change invisible to the researcher because it was made only in the mind of the behaving system – resulted in a change in the observed relationship between d and o because it changed the feedback connection from o to q.i. This is a change in the feedback connection that required no change in the physical link between o and q.i.

RM: When area was controlled, q.i = o * d, which is the feedback function from o to q.i. Since q.i was being held in a reference state, k, we have k = o * d, and the inverse of this function is o = k/d, the same as the function that gives the best fit to the upper, non-linear relationship between d and o. And when perimeter was controlled, q.i = o + d; since q.i was again being held in a reference state, k, we have k = o + d, and the inverse is o = k - d, again the same as the function that gives the best fit to the lower, linear relationship between d and o.
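RM: Here is a small simulation sketch of these two cases. This is not the demo’s actual code; the reference k = 100, the gains, and the leaky-integrator output function are assumed values, following the pseudo-code quoted earlier in this thread:

```python
# Hypothetical simulation of the "What is Size" situation: a control
# system keeps q.i at the reference k while the experimenter varies d.
# All parameter values here are my own assumptions.

def run(controlled_variable, k=100.0, gain=1000.0, dt=0.01, slow=0.01, settle=2000):
    pairs = []   # observed (d, o) pairs
    o = 0.0      # output quantity (rectangle width)
    for d in [2.0, 4.0, 5.0, 8.0, 10.0]:    # disturbance (rectangle height)
        for _ in range(settle):             # let the loop settle at each d
            qi = o * d if controlled_variable == "area" else o + d
            error = k - qi                        # reference minus perception
            o += (gain * error - o) * dt * slow   # leaky-integrator output
        pairs.append((d, o))
    return pairs

# When area (o * d) is controlled, the observed relation is o ~ k/d;
# when perimeter (o + d) is controlled, it is o ~ k - d.
for d, o in run("area"):
    print(f"area:      d = {d:4.1f}  o = {o:6.2f}  k/d = {100.0 / d:6.2f}")
for d, o in run("perimeter"):
    print(f"perimeter: d = {d:4.1f}  o = {o:6.2f}  k-d = {100.0 - d:6.2f}")
```

The only thing changed between the two runs is which function of o and d the loop stabilizes; the observed d-o relation changes form accordingly.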

RM: The point is that what is missing when you just look at the relationship between disturbances (stimuli) and outputs (responses) is the controlled variable, q.i, which is the link in the “organism function� relating d to o. You have to know what an organism is controlling in order to understand why it acts as it does in the presence of certain “stimulus variables�. The revolution of PCT results from the “discovery� of controlled variables: the aspects of the environment around which behavior is organized. It’s controlled variables that have been left out of the explanation of behavior. It’s controlled variables that explain the observed relationships between stimulus and response variables. It’s these stimulus-response relationships that seduced psychologists into thinking that they reveal something about the nature of organisms. Unfortunately, they are a red herring that has blinded psychologists to the most important fact about living organisms: they keep various aspects of their environment in reference states, protected (by variations in their outputs) from disturbances (which appear to act as stimuli).

BA: Having established the points I wished to prove, I am not going to engage with you further on this issue.

RM: I thank you for engaging as far as you have. You and Martin always give me good ideas for how to explain Bill’s truly revolutionary discoveries. I won’t engage further on this myself – or at least I’ll try not to. But I will leave you with the final passage from what I believe to be Bill’s foreword to LCS:IV, the book that he hoped would set out the new direction for the revolutionary new science of life based on PCT:

BP: The massive opposition [to PCT] from some quarters and
the passive resistance from others came as a disappointing surprise, but
perhaps it shouldn’t have. Science has a social as well as an intellectual
aspect. Scientists are not stupid. They can look at an idea and quickly work out
where it fits in with existing knowledge and where it doesn’t. And scientists
are all too human: when they see that the new idea means their life’s work
could end up mostly in the trash-can, their second reaction is simply to think
“That idea is obviously wrong.”
That relieves the sinking feeling in
the pit of the stomach that is the first reaction. Being wrong about something
is unpleasant enough; being wrong about something one has worked hard to learn
and has believed, taught, written about, and researched, is close to
intolerable. All scientists of any talent have had that experience. The best of
them have recognized that their own principles require them to put those
personal reactions aside or at least save them for later. They know that any
such upheaval is going to be important, and ignoring it is not an option. But
those who recognize and embrace a revolution in science are the exception. Most
scientists practice ‘normal science’ within a securely established – and
well-defended – paradigm.

BP: That is what we are up against here, and have
been struggling with since before most of you readers were born. We have spent
that time learning more about this new idea and getting better at explaining
it, but no better at persuading others to change their minds in a serious way
when their career commitments are threatened by it. What we had thought would
be a nutritious and deliciously buttered carrot has proven to function like a
stick. The clearer we have made the idea, the more defenses it has aroused.

BP: We are now facing reality. This is going to be a
revolution whether we like it or not. There are going to be arguments,
screaming and yelling or cool and polite. It’s time to sink or swim.
[Italics mine]

RM: Makes me think Bill really meant it when he said PCT overthrows the basis of scientific psychology.

Best

Rick


Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Nevin (2017.10.28.11:12 ET)]

Rick Marken (2017.10.22.1100) –

> o = k (r - p) where p = o + d so o = k*r - k*(o+d)

Because it could be confusing, we should be clear that this is not a general formulation, but rather only for the case where r = 0 as stipulated for the proof in Bruce Abbott (2017.10.20.2015 EDT).

The expression (o + d) represents the net effect of output o and disturbance d upon aspects of the environment that are perceived as p.

If d = 0 and o = 0 (no disturbance, so no output), for any value of r other than zero it is not the case that p = 0.

The informal expression “no disturbance, so no output” expresses the inverse correlation between the disturbance and the output. Correlation does not entail causation. It doesn’t preclude it either.

In my view, this is not what is important. To deny that the negative correlation of output and disturbance is causation is rhetoric of the wrong kind. It is anti-persuasive rhetoric. Is there an arrow of causation between disturbance and output? Suppose there is. The important thing is that the arrow traverses a negative-feedback control loop. That recognition is cognitive jolt enough.


On Mon, Oct 23, 2017 at 11:20 PM, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2017.10.23.2020)]

Bruce Abbott (2017.10.23.0920 EDT)–


BA: Two points: (1) It is not true that, if my sequential analysis were true, “then the correct equation describing the relationship between disturbance and output in a control system would be o = f(d)�; (2) the only variable affecting o in the equation o = g^-1(d) is d, which proves my point that when the reference signal is constant, d is the only external variable affecting o.

RM: Though I disagree, I’ll give you point (1). But not point (2). The “behavioral illusion� equation,

o = g^-1(d),

doesn’t say that d is the only variable affecting (causing) o. It says that d is a variable that is not affecting o. The function g in that equation describes the causal connection from o to the controlled input, q.i, as in,

q.i = g(o)

The inverse of this equation describes the causal connection from q.i to o – a causal connection that does not exist (the direction of causality goes from o to q.i). So the observed relationship between d and o is a correlation that does not imply causality. The correlation between d and o exists because of the way causality works when it is organized into a negative feedback loop. Powers (1978) showed that when causality is so organized there will be a relationship between d and o, but that relationship is not a causal relationship; it is a side-effect of the operation of causality in a negative feedback loop. And the observed relationship between o and d will be an approximation to the inverse of the function relating o to q.i, the goodness of the approximation depending on how well q.i is controlled.

RM: What this means, of course, is that the observed relationship between d and o really tells us nothing about the nature of the organism under study. The observed relationship between d and o doesn’t “conceal� the true causal path from d to o; it doesn’t say anything about that path. You can’t “recover� that causal path – the actual organism function – by taking the inverse of the inverse of the feedback function, for example. So any attempt to understand organisms by looking for relationships between variations in external events (independent variables) and the organism’s responses (dependent variables) – even if this relationship is looked for under controlled conditions – will tell you nothing about the behaving system under study. This is only true, of course, if the system under study is a negative feedback control system (N-system, per Powers (1978)); that is, only if it is a living system. This is the conclusion of the analysis in Powers (1978), and it is obviously pretty revolutionary and not something scientific psychologists particularly want to hear (or understand; to paraphrase Upton Sinclair, “It’s hard to get people to understand something when success in their career depends on their not understanding it�).
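RM: A toy numerical illustration of this point (my own sketch; the linear feedback function q.i = 2*o + d, the pure-integrator output function, and all parameter values are assumptions): the slope of the observed d-o relationship approximates the inverse of the feedback function and is unchanged when the “organism function� (here, the output gain) is changed:

```python
# Hypothetical linear loop: q.i = g(o) + d with g(o) = feedback_gain * o,
# reference r = 0, and an integrating output function. The observed d-o
# slope reflects g, not the organism's output gain.
def observed_slope(output_gain, feedback_gain, dt=0.001, steps=5000):
    pairs = []
    o = 0.0
    for d in [-10.0, -5.0, 0.0, 5.0, 10.0]:
        for _ in range(steps):
            qi = feedback_gain * o + d   # controlled input quantity
            error = 0.0 - qi             # reference r = 0
            o += output_gain * error * dt  # integrate the error
        pairs.append((d, o))
    # least-squares slope of o against d
    n = len(pairs)
    md = sum(d for d, _ in pairs) / n
    mo = sum(o for _, o in pairs) / n
    return (sum((d - md) * (o - mo) for d, o in pairs)
            / sum((d - md) ** 2 for d, _ in pairs))

# Slope is about -1/feedback_gain (-0.5 here) whatever the output gain:
print(observed_slope(output_gain=50.0, feedback_gain=2.0))
print(observed_slope(output_gain=500.0, feedback_gain=2.0))
```

Changing the output gain by a factor of ten leaves the observed d-o relation essentially unchanged; only changing the feedback function g would change it.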


[From Rick Marken (2017.10.28.17435)]


Bruce Nevin (2017.10.28.11:12 ET)

> o = k (r - p) where p = o + d so o = k*r - k*(o+d)

BN: Because it could be confusing, we should be clear that this is not a general formulation, but rather only for the case where r = 0 as stipulated for the proof in Bruce Abbott (2017.10.20.2015 EDT).

RM: Not true. The reference signal, r, is a variable.
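RM: For the simple static loop in the quoted equation (input and feedback gains of 1), solving o = k*(r - p) with p = o + d gives o*(1 + k) = k*(r - d), i.e. o = (k/(1+k))*(r - d). The reference r enters the solution in exactly the same way as the disturbance d. A quick numerical check of that algebra (my own sketch, with assumed values):

```python
# Relax the loop equations o = k*(r - p), p = o + d to their fixed point
# and compare with the closed-form solution o = (k/(1+k))*(r - d).
def steady_output(k, r, d, iters=10000):
    o = 0.0
    for _ in range(iters):
        p = o + d                        # input/feedback gains of 1
        o += 0.001 * (k * (r - p) - o)   # relax toward o = k*(r - p)
    return o

k = 100.0
for r, d in [(0.0, 5.0), (10.0, 5.0), (10.0, 0.0)]:
    closed_form = (k / (1 + k)) * (r - d)
    print(r, d, steady_output(k, r, d), closed_form)
```

Note the third case: with d = 0 but r = 10, the output is nonzero, which is the point at issue.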

BN: The expression (o + d) represents the net effect of output o and disturbance d upon aspects of the environment that are perceived as p.

BN: If d = 0 and o = 0 (no disturbance, so no output), for any value of r other than zero it is not the case that p = 0.

BN: The informal expression “no disturbance, so no output” expresses the inverse correlation between the disturbance and the output. Correlation does not entail causation. It doesn’t preclude it either.

RM: There are many problems with the expression “no disturbance, so no output�, not least of which is that it is wrong. If there were “no disturbance� (a phrase that only makes sense if it means “no effect of any variables on the controlled variable other than the system’s output�) there could still be output (which also only makes sense if it means “variation in the output variable�) due to variations in the reference specification for the controlled variable, r.
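RM: This is easy to demonstrate with the leaky-integrator pseudo-code quoted earlier in this thread (all parameter values here are my own assumptions): hold the disturbance at zero, step the reference signal, and the output varies anyway:

```python
# Hypothetical run of the leaky-integrator controller with d = 0
# throughout; only the reference signal r changes.
def track_reference(refs, gain=100.0, dt=0.1, slow=0.05, steps=500):
    outputs = []
    o = 0.0                                    # output quantity
    for r in refs:                             # reference signal steps
        for _ in range(steps):                 # let the loop settle
            p = o + 0.0                        # perception = feedback + disturbance (d = 0)
            error = r - p
            o += (gain * error - o) * dt * slow  # leaky-integrator output
        outputs.append(o)
    return outputs

# With d = 0, the output settles near (gain/(1+gain)) * r for each r,
# so the output varies even though the disturbance never does.
print(track_reference([0.0, 10.0, -5.0, 20.0]))
```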

BN: In my view, this is not what is important. To deny that the negative correlation of output and disturbance is causation is rhetoric of the wrong kind.

RM: I don’t think it’s rhetoric. It’s just a statement of fact; one of the most important facts about behavior that comes from PCT. What Powers (1978) showed is that the appearance of a causal relationship between disturbance and output in a control system has led scientific psychology astray since its founding as a discipline. It has led scientific psychologists to focus on studying these illusory causal relationships while ignoring the reason why these relationships exist – controlled variables.

BN: It is anti-persuasive rhetoric. Is there an arrow of causation between disturbance and output? Suppose there is. The important thing is that the arrow traverses a negative-feedback control loop. That recognition is cognitive jolt enough.

RM: The goal isn’t cognitive jolt; the goal is to explain why the study of living organisms should focus on describing the variables these organisms control, not on the variables that appear to “cause� behavior.

Best

Rick



Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery
