Ashby's Law of Requisite Variety

[From Bruce Abbott (2012.12.10.2110 EST)]

Rick Marken (2012.12.10.1630)--

Bill Powers (2012.12.10.0805 MST)

BP: My current impression, helped by Bruce A and Martin himself is
that it is only the observer who calculates the information and uses
it to analyze the control system.

Bill is right; the control system doesn't calculate the information and then
use it as a means of control. As both Martin and I have had to repeat almost
incessantly, information theory is a tool for ANALYSIS. Analysis is
performed by a person, not the control system in question. And Martin has
provided a very nice illustration of how such an analysis can be applied to
one type of control system.

RM: That's fine. Then my little demo shows that control can happen when
there is no information about the disturbance observed in the output of the
control system.

Ah, but we've already demonstrated that there is. My post (and Martin's)
explained how it's still there, even in your two-level control example.
Saying it ain't so is just denial on your part. Or perhaps you just didn't
read those posts, or made no attempt to comprehend the analysis.

Now, here's a question for YOU, if you're up to the challenge. When control
is excellent, with high gain, and the reference signal is constant, how is
it that the pattern of variation of the disturbance is mirrored by the
pattern of variation of the feedback to the CV, without that pattern being
evident in the error signal? (I suspect that this is the reason you don't
believe that the information content of the disturbance is transmitted
via the error signal to the feedback variable.)

Follow-up question: In the case described above, the feedback waveform
almost perfectly matches the disturbance waveform. In what sense is it that
the feedback waveform tells you nothing about the disturbance waveform?
(Nothing = no information in the disturbance waveform appears in the
feedback waveform.)

Bruce

[Martin Taylor 2012.12.10.23.28]

[From Rick Marken (2012.12.10.0845)]

MT: It looks as though you intended the two sentences quoted above to be in contradiction with each other, though my "long and elaborate description" argued that they are actually mutually supportive, as you know if you read the message in question.

RM: I did read the message, at least up to the point where you said:

qo(t) = -d0*(1-e^(-G*(t-t0))).

Pretty fancy equation but if it's meant to describe the behavior of
variables in a control loop then it's nonsense because output
variations, qo(t), are not a function of disturbance variations, d, in
a control system. Once again, you have left out the controlled
variable.

This is the second time recently that you have denied the validity of the usual equations for a control system in your obsessive effort to prove --- I'm not sure what you want to prove, other than that you feel compelled to find some way to show that what I say must be wrong.

Here's the derivation of the equation. I use Laplace transforms to make it easy. It's easy this way because for a linear system the equations are the same as for the static analysis, and quite often you can look up the time functions when you have finished the algebra. If you want to check out the specific expressions, you can find them in the Wikipedia article "Laplace transform".

o(s) = G*e(s)/s
       = G*(r(s) - p(s))/s
       = -G*i(s)/s since r is constant zero, and we are taking the perceptual input function to be p(t) = i(t).
       = -G*(o(s) + d(s))/s

o(s)*(1+G/s) = -G*d(s)/s
o(s) = -d(s)*(G/s)/(1+G/s)
       = -d(s)*G/(s+G)

For a step function of size D at time 0, d(s) = D/s. In my example (the one you stopped reading when you decided that the usual control loop equations don't apply in this thread), the step size was d0 and the time of the step was t0.

o(s) = -d0*G/(s*(s+G)), which means
o(t) = -d0*(1-e^(-G*(t-t0)))

which is what I wrote. (When I wrote it, I was actually going from memory of an analysis by Kennaway, which I can't find at the moment, so I'm glad I did get it right.)
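For anyone who wants to check Martin's step response numerically, here is a minimal sketch (my own illustration, not part of the original post; the values of G, d0, and dt are arbitrary choices). It integrates the loop with Euler steps, taking t0 = 0, and compares the result with the analytic solution:

```python
import math

# Minimal numerical check of the step response (illustrative values).
G = 10.0      # loop gain of the pure-integrator output function
d0 = 1.0      # disturbance step size, applied at t0 = 0
dt = 0.001    # Euler integration step
steps = 1000  # simulate 1 second

o = 0.0
for _ in range(steps):
    p = o + d0          # perception = output + disturbance (unity feedback path)
    e = 0.0 - p         # error, with constant zero reference
    o += G * e * dt     # pure integrator output function

t = steps * dt
analytic = -d0 * (1.0 - math.exp(-G * t))
print(o, analytic)      # both close to -d0 = -1
```

The simulated output and the analytic expression agree closely, and both approach -d0 as t grows, which is the long-term result everyone in the thread accepts.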

Now maybe you will be encouraged to read beyond that point in my message. When you have done that you might be able to answer the questions that you asked in the part of the message you didn't read.

You never did answer my question as to whether you understood from the first part of my long message how control and measurement are closely related.

Martin

[From Rick Marken (2012.12.10.2130)]

Bruce Abbott (2012.12.10.2110 EST) --

BP: My current impression, helped by Bruce A and Martin himself is
that it is only the observer who calculates the information and uses
it to analyze the control system.

BA: Bill is right; the control system doesn’t calculate the information and then
use it as a means of control.

RM : You forgot to include Bill’s sentence that preceded your quote:

BP: The impression he [Martin] gives is that the information IS there, and is thus
somehow used by the control system to do its controlling – or that controlling
depends on the control system’s somehow knowing about that information.

RM: This was based on the following from Martin to me: "If you have actually read the message you know that in addition to showing how and how rapidly information gets from the disturbance to the output…"

So at least Martin thinks that information is more than an analysis tool; it is something that actually goes from disturbance to output, presumably guiding the output so that it compensates for the disturbance.

RM: Then my little demo shows that control can happen when
there is no information about the disturbance observed in the output of the
control system.

BA: Ah, but we’ve already demonstrated that there is.

RM: I don’t recall your doing that. I think you did say something about this control task requiring a two level control system. But the graph of the results of this control task shows that there is no information about the disturbance in the output (zero correlation between disturbance and output); and today I’ve also shown that there is no information about the polarity of the changing feedback function (zero correlation between polarity and output). So using information theory as an analysis tool shows that there is nothing to analyze.

BA: My post (and Martin’s) explained how it’s still there

RM: Yes, you both explained why it (information) is still there even though there is no evidence that it is there at all. Doesn’t this kind of go against your “it’s only for analysis” story? My actual analysis of the information transmitted from disturbance to output and from feedback function to output showed that there can be good control with no evidence of information about either the disturbance or feedback function in the output of a controller who is controlling a variable just fine.

BA: Now, here’s a question for YOU, if you’re up to the challenge. When control
is excellent, with high gain, and the reference signal is constant, how is
it that the pattern of variation of the disturbance is mirrored by the
pattern of variation of the feedback to the CV, without that pattern being
evident in the error signal?

RM: First, I have shown in my little demo that control can be excellent, with high gain, and the pattern of variation of the output will not necessarily mirror the disturbance. But the best answer to your question about the relationship between error and the output mirroring the disturbance to a controlled variable (when it does) was given by Bill in one of his recent posts to me:

BP: A negative feedback control system doesn’t need to know anything about the disturbing variable, because all it needs to know is the behavior of the controlled variable. If that variable departs from the reference level for any reason at all, and the reason does not include changes that make the control system stop working, the control system will experience an error which the output function converts into a rate of change of the output in the direction that will cause the error to decrease. The control systems we describe do no calculations using information as a variable.

RM: There is simply no need for information theory as an explanation or as a tool for the analysis of control systems. The only reason I can see for clinging to information theory at all is to maintain one’s belief in the basic tenets of the causal model of behavior; that is, to cling to the belief that PCT requires no fundamental change in the way we go about the business of studying and explaining the behavior of living organisms.

BA: Follow-up question: In the case described above, the feedback waveform
almost perfectly matches the disturbance waveform. In what sense is it that
the feedback waveform tells you nothing about the disturbance waveform?

(Nothing = no information in the disturbance waveform appears in the
feedback waveform.)

RM: I’ve never heard of a feedback waveform; feedback functions, sure, but waveforms?

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2012.12.10.2215)]

Martin Taylor (2012.12.10.23.28) –

RM: I did read the message, at least up to the point where you said:

qo(t) = -d0*(1-e^(-G*(t-t0))).

Pretty fancy equation but if it's meant to describe the behavior of variables in a control loop then it's nonsense because output variations, qo(t), are not a function of disturbance variations, d, in a control system. Once again, you have left out the controlled variable.

MT: Here's the derivation of the equation…

o(s) = G*e(s)/s
     = G*(r(s) - p(s))/s
     = -G*i(s)/s  since r is constant zero, and we are taking the perceptual input function to be p(t) = i(t).
     = -G*(o(s) + d(s))/s

RM: I think this is where the error happens. I think you need another equation here:

      i(s) = F*o(s)/s+d(s)/s

where F is the feedback function connecting output to input. Then we would get

     = -G*(F*o(s)+d(s))/s

or something like that. I don’t know how to do Laplace algebra. But I know that your disturbance-output equation, which is

o(s) = -d(s)*G/(s+G)

should be something like

o(s) = -d(s)*1/(F+s)

That is, the relationship between output, o(s) and disturbance, d(s) is the inverse of the feedback function connecting output to controlled variable, i(s). Your equation says that the organism (via the transfer function or gain, G) transforms the disturbance directly into output. But this is precisely what PCT shows not to be true; Bill showed it algebraically in his 1978 Psych Review paper (see pp. 144-146 in Living Control Systems I).

Maybe Richard Kennaway (whose posts you really should address) and/or Bill can give you the correct Laplace derivation of the disturbance-output relationship in a closed loop control system. But I'm pretty sure that the derivation you are using is wrong, and it's wrong in just the way needed to preserve the S-R view of behavior. Because that's what your derivation says.

Your equation o(s) = -d(s)*G/(s+G) is just a fancy way of saying R = f(S); behavior is caused by external stimuli. And you (and Bruce, who approved of your analysis) certainly don’t believe that…do you?

Best

Rick


[Martin Taylor 2012.12.11.10.32]

[From Rick Marken (2012.12.10.2215)]

It's really extraordinary the lengths to which you will go to try to avoid the conclusion that there is mutual information between the disturbance and the output. But here I think you must have reached the extreme of irrationality.

Martin Taylor (2012.12.10.23.28) --

RM: I did read the message, at least up to the point where you said:

qo(t) = -d0*(1-e^(-G*(t-t0))).

Pretty fancy equation but if it's meant to describe the behavior of variables in a control loop then it's nonsense because output variations, qo(t), are not a function of disturbance variations, d, in a control system. Once again, you have left out the controlled variable.
MT: Here's the derivation of the equation...

o(s) = G*e(s)/s
     = G*(r(s) - p(s))/s
     = -G*i(s)/s  since r is constant zero, and we are taking the perceptual input function to be p(t) = i(t).
     = -G*(o(s) + d(s))/s

RM: I think this is where the error happens. I think you need another equation here:

i(s) = F*o(s)/s+d(s)/s

where F is the feedback function connecting output to input. Then we would get

     = -G*(F*o(s)+d(s))/s
That is almost correct for the more general case; you just missed the fact that the environmental feedback path is likely to have some time dynamics, so it should read

-G*(F(s)*o(s)+d(s))/s

but since in the control system we were analysing the environmental feedback path is a simple connector, F(s) = 1.0, leaving the equation as I stated it.


RM: That is, the relationship between output, o(s), and disturbance, d(s), is the inverse of the feedback function connecting output to controlled variable, i(s).

True. The inverse of 1.0 is 1.0.

RM: Your equation says that the organism (via the transfer function or gain, G) transforms the disturbance directly into output. But this is precisely what PCT shows not to be true; Bill showed it algebraically in his 1978 Psych Review paper (see pp. 144-146 in Living Control Systems I).

OK, let's run through the simple algebraic equations of PCT 101.

qo = G*e
   = G*(r-p)
   = G*(r-qi)       if the perceptual input function is a unity multiplier
   = G*(r-(qo+qd))  if the environmental feedback function is a unity multiplier

qo*(1+G) = G*(r-qd)

qo = r*(G/(1+G)) - qd*(G/(1+G))

if r is a constant zero, as in the case I was analyzing

qo = -qd*(G/(1+G))

For large G (good control) this approaches

qo = -qd
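As a sanity check on that algebra, the loop equations can be relaxed numerically to their simultaneous solution. This is my own sketch, not from the thread; G and qd are arbitrary illustrative values:

```python
# Relax the static loop equations to their fixed point and compare
# with the algebraic result qo = -qd*G/(1+G).
G = 100.0   # loop gain
r = 0.0     # constant zero reference
qd = 3.0    # disturbance value

qo = 0.0
for _ in range(1000):
    qi = qo + qd                   # unity environmental feedback function
    p = qi                         # unity perceptual input function
    e = r - p
    qo += 0.01 * (G * e - qo)      # move qo a small step toward G*e

predicted = -qd * G / (1.0 + G)
print(qo, predicted)               # both about -2.97, i.e. close to -qd
```

The small relaxation step (0.01) is only there to make the iteration converge; the fixed point it lands on is exactly the simultaneous solution of the loop equations, and for large G it is very close to -qd.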
RM: Maybe Richard Kennaway (whose posts you really should address) and/or Bill can give you the correct Laplace derivation of the disturbance-output relationship in a closed loop control system.

I will answer Kennaway's post separately, after changing the subject line. I should be very interested if he were to show that my derivation is wrong in any important way.

RM: But I'm pretty sure that the derivation you are using is wrong, and it's wrong in just the way needed to preserve the S-R view of behavior. Because that's what your derivation says. Your equation o(s) = -d(s)*G/(s+G) is just a fancy way of saying R = f(S); behavior is caused by external stimuli. And you (and Bruce, who approved of your analysis) certainly don't believe that…do you?

No. What I, and I suppose Bruce, Bill and you, believe is that a control system produces output that keeps the controlled perceptual variable near the value of the reference variable by opposing the disturbance.

Also what I, and I suppose Bruce and Bill but not you, believe is that this fact has certain consequences that can be computed from knowledge of the properties of the paths and functions of the control system.

It so happens that when the paths are simple connectors with no time delay and all the functions other than a pure integrator output function are unit multipliers, one consequence is that in the long term the output matches the negative of the disturbance. Considered as a function of time rather than a long-term stable solution, the approach of the output value to the disturbance value is exponential (as is the approach of the input value to the reference value).

Changing the nature of the paths and functions will change the waveforms and the stable values achieved an infinite time after a step change in the disturbance, but they will not change the fact that the final output value will be computable from the disturbance value. The final output value will be F^-1(d) (where F^-1(.) is the inverse of the environmental feedback path and d is the value of the disturbance after the step). The reason this is so is that the system controls its perception and the perception is a function only of the value of the input quantity.

I wonder what objection you will think of next?

Martin

[From Rick Marken (2012.12.11.1315)]

Martin Taylor (2012.12.11.10.32)–

MT: It's really extraordinary the lengths to which you will go to try to avoid the conclusion that there is mutual information between the disturbance and the output. But here I think you must have reached the extreme of irrationality.

RM: Now, now Martin. Let’s let those observing this discussion make their own judgment about my level of irrationality. Most probably already agree with you anyway so don’t worry.

RM: Then we would get

     = -G*(F*o(s)+d(s))/s

MT: That is almost correct for the more general case; you just missed the fact that the environmental feedback path is likely to have some time dynamics, so it should read

-G*(F(s)*o(s)+d(s))/s

but since in the control system we were analysing the environmental feedback path is a simple connector, F(s) = 1.0, leaving the equation as I stated it.

RM: I think that's a bad idea because it leaves out the important fact that it is F (the feedback function) and not G (the organism function) that determines the relationship between o and d. And, indeed, the presence of G in your disturbance-output equation above seems wrong as well. The gain function, G, falls out of my linear solution to the relationship between o and d. Let me show you what I get when I use just linear functions in the analysis.

I start with two equations that define a closed-loop control system:

(1) o = k.o*(r-p)

(2) p = k.f*o + k.e*d

Equation (1) is the “system” or “organism” equation that shows the relationship between output, o, and input, p. k.o is the “organism function” which is assumed to be linear. So the processes in the organism that convert input into output are represented by the coefficient, k.o, of the linear equation that converts error (r-p) into output (o). Equation (2) is the “environment” equation that converts output and disturbances, d, into an input perception, p. k.f is the “feedback function” and k.e is the “disturbance function”. Again, both functions are assumed to be linear. In our basic tracking task k.f and k.e are typically 1. So we often write equation 2 as p = o + d. But, in fact, these functions can take on any value and can even change over time.

Solving (1) and (2) simultaneously for o I get:

(3) o = 1/[(1/k.o)+k.f]*r - k.e/[(1/k.o)+k.f]*d

This equation can be simplified by assuming that k.o is a very large number (it represents loop gain) and r = 0. When we make these assumptions, the term 1/k.o goes to 0 as does r, so the first term in the equation disappears and equation (3) simplifies to:

(4) o = -(k.e/k.f)*d

Notice that the organism function, k.o, does not show up in the relationship between o and d when k.o (gain) is very large. This means that the better the control of p, the less the organism has to do with the observed relationship between o and d. The main determinants of the relationship between o and d when control is good (when the system is high gain) are k.e (the function determining the effect of d on p) and k.f (the function determining the relationship between o and p).

In the typical tracking task, where k.e and k.f are 1, we say (and find) that

(5) o = -d

But if we change k.e or k.f then the relationship between o and d will change. If, for example, we reverse the sign of the effect of the disturbance on the controlled perception – changing k.e from 1 to -1 (leaving k.f =1) we will find that o = -(-1)*d so that

(6) o = d

If we change the values of k.e or k.f through time we will see that o is proportional to the changing values of these coefficients. That's what I did in my changing feedback function demo, where k.f was changed periodically from -1 to 1. Now that I've gone through this derivation again I can see that a much simpler way to have made my point (without the need for assuming a separate control system) would have been to simply vary k.e over time. In that case the correlation between d and o could have easily been made equal to 0.0 with control as good as it would be if k.e had remained equal to 1.0.

Equation (4) is the really important one; it's the equation that defines (for a linear system) the behavioral illusion. It says that for a control system controlling p with high gain, the observed relationship between d and o depends on the feedback and disturbance functions, not on the organism at all. But there is no magic involved; in a control system, the observed relationship between d and o is, indeed, a side effect of the system doing what Bill said it does: it acts to keep the controlled perception from varying away from the reference, regardless of the factors that are influencing the state of that perception. When it does this, the relationship between its output and disturbances will follow equation (4).
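Rick's point about varying k.e can be demonstrated with a short script. The following is my own sketch, not from the original post: it applies equation (4), o = -(k.e/k.f)*d, sample by sample (which amounts to assuming perfect, infinite-gain control at every instant) with a disturbance function k.e that alternates between +1 and -1, and shows that the correlation between d and o comes out near zero even though the controlled quantity sits exactly at its zero reference throughout.

```python
import math

# Apply equation (4), o = -(k.e/k.f)*d, pointwise, with k.e alternating
# between +1 and -1 from sample to sample (illustrative values; assumes
# perfect control at every sample).
k_f = 1.0
d_vals = [math.sin(0.1 * i) for i in range(1000)]
o_vals = []
for i, d in enumerate(d_vals):
    k_e = 1.0 if i % 2 == 0 else -1.0
    o_vals.append(-(k_e / k_f) * d)
    # the controlled quantity k_f*o + k_e*d is exactly 0 at every sample

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

print(corr(d_vals, o_vals))  # near zero, despite perfect control
```

So a flat d-o correlation by itself says nothing about the quality of control; it reflects the disturbance and feedback functions, which is exactly what equation (4) predicts.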

RM: Maybe Richard Kennaway (whose posts you really should address) and/or Bill can give you the correct Laplace derivation of the disturbance-output relationship in a closed loop control system.

MT: I will answer Kennaway's post separately, after changing the subject line. I should be very interested if he were to show that my derivation is wrong in any important way.

RM: Me too!! I suppose it is possible that you get a different answer from the dynamic (Laplace) analysis. I do hope Richard shows the Laplace transform equivalent of my linear (and static) analysis.

Best

Rick


[Martin Taylor 2012.12.11.16.56]

[From Rick Marken (2012.12.11.1315)]

Martin Taylor (2012.12.11.10.32)--

MT: It's really extraordinary the lengths to which you will go to try to avoid the conclusion that there is mutual information between the disturbance and the output. But here I think you must have reached the extreme of irrationality.

RM: Now, now Martin. Let's let those observing this discussion make their own judgment about my level of irrationality. Most probably already agree with you anyway so don't worry.

RM: Then we would get

     = -G*(F*o(s)+d(s))/s

MT: That is almost correct for the more general case; you just missed the fact that the environmental feedback path is likely to have some time dynamics, so it should read

-G*(F(s)*o(s)+d(s))/s

but since in the control system we were analysing the environmental feedback path is a simple connector, F(s) = 1.0, leaving the equation as I stated it.

RM: I think that's a bad idea because it leaves out the important fact that it is F (the feedback function) and not G (the organism function) that determines the relationship between o and d. And, indeed, the presence of G in your disturbance-output equation above seems wrong as well. The gain function, G, falls out of my linear solution to the relationship between o and d. Let me show you what I get when I use just linear functions in the analysis.

I start with two equations that define a closed-loop control system:

(1) o = k.o*(r-p)

(2) p = k.f*o + k.e*d

Equation (1) is the "system" or "organism" equation that shows the relationship between output, o, and input, p. k.o is the "organism function" which is assumed to be linear. So the processes in the organism that convert input into output are represented by the coefficient, k.o, of the linear equation that converts error (r-p) into output (o). Equation (2) is the "environment" equation that converts output and disturbances, d, into an input perception, p. k.f is the "feedback function" and k.e is the "disturbance function". Again, both functions are assumed to be linear. In our basic tracking task k.f and k.e are typically 1. So we often write equation 2 as p = o + d. But, in fact, these functions can take on any value and can even change over time.

Solving (1) and (2) simultaneously for o I get:

(3) o = 1/[(1/k.o)+k.f]*r - k.e/[(1/k.o)+k.f]*d

This equation can be simplified by assuming that k.o is a very large number (it represents loop gain) and r = 0. When we make these assumptions, the term 1/k.o goes to 0 as does r, so the first term in the equation disappears and equation (3) simplifies to:

(4) o = -(k.e/k.f)*d

Notice that the organism function, k.o, does not show up in the relationship between o and d when k.o (gain) is very large. This means that the better the control of p, the less the organism has to do with the observed relationship between o and d. The main determinants of the relationship between o and d when control is good (when the system is high gain) are k.e (the function determining the effect of d on p) and k.f (the function determining the relationship between o and p).

In the typical tracking task, where k.e and k.f are 1, we say (and find) that

(5) o = -d

But if we change k.e or k.f then the relationship between o and d will change. If, for example, we reverse the sign of the effect of the disturbance on the controlled perception – changing k.e from 1 to -1 (leaving k.f = 1) we will find that o = -(-1)*d so that

(6) o = d

If we change the values of k.e or k.f through time we will see that o is proportional to the changing values of these coefficients. That's what I did in my changing feedback function demo, where k.f was changed periodically from -1 to 1. Now that I've gone through this derivation again I can see that a much simpler way to have made my point (without the need for assuming a separate control system) would have been to simply vary k.e over time. In that case the correlation between d and o could have easily been made equal to 0.0 with control as good as it would be if k.e had remained equal to 1.0.

Equation (4) is the really important one; it's the equation that defines (for a linear system) the behavioral illusion. It says that for a control system controlling p with high gain, the observed relationship between d and o depends on the feedback and disturbance functions, not on the organism at all. But there is no magic involved; in a control system, the observed relationship between d and o is, indeed, a side effect of the system doing what Bill said it does: it acts to keep the controlled perception from varying away from the reference, regardless of the factors that are influencing the state of that perception. When it does this, the relationship between its output and disturbances will follow equation (4).

Without going through them line by line, I see nothing wrong with your equations. They seem to give the expected result for the control system you defined up front. Neither can I see the relevance of your exercise. All you have done is put in a couple of extra linear factors to define a slightly more general control system than the one I analyzed, with no change to the conclusion that the value of the output can be computed from knowledge of the loop path parameters and of the value of the disturbance.

In other words, you have shown yourself that the output contains all the information from the disturbance.

RM: Maybe Richard Kennaway (whose posts you really should address) and/or Bill can give you the correct Laplace derivation of the disturbance-output relationship in a closed loop control system.

MT: I will answer Kennaway's post separately, after changing the subject line. I should be very interested if he were to show that my derivation is wrong in any important way.

RM: Me too!! I suppose it is possible that you get a different answer from the dynamic (Laplace) analysis. I do hope Richard shows the Laplace transform equivalent of my linear (and static) analysis.

I could do that, if you want. It's only a matter of replacing all the time-varying signals and functions with the same letter followed by (s), writing out the expression in s explicitly for each, solving the algebra and converting back into time functions. In the case you analyzed, all it wants is an extra pair of multiplicative constants.

No, we don't get a different answer from the Laplace analysis. What we get is the waveform of how whatever variable you are looking at approaches its stable value after a step in the disturbance (and/or the reference). If your k (my G) and your f are time-varying functions, the actual waveform of how the input variable and the output variable change over time may not be exponential, but the result is still computable. You just have to use k(s) and f(s) where you have k and f. And the final values after infinite time don't change by using the Laplace analysis.

It is very interesting that you attempt to contradict me by changing the name of one multiplier and adding another that I have already discussed, thereby making the equations look different although they are the same apart from the added multiplier.

What next, I wonder?

Martin

[From Bill Powers (2012.12.11.1532 MST)]

[Martin Taylor 2012.12.11.16.56]--

(MT to RM): Without going through them line by line, I see nothing wrong with your equations. They seem to give the expected result for the control system you defined up front. Neither can I see the relevance of your exercise. All you have done is put in a couple of extra linear factors to define a slightly more general control system than the one I analyzed, with no change to the conclusion that the value of the output can be computed from knowledge of the loop path parameters and of the value of the disturbance.

BP: If we could just put all the sneering and point-scoring aside for a moment, it would not be hard to see what the problem is here.

The basic observed fact is that, in the final approximations with unity multipliers and all that, and with a constant zero reference signal, qo = -d.

That says that in a very simple high-gain control system, the output varies by very nearly the same amount as and in the opposite direction to the variations in the disturbance. That's what makes control work. So the fact is that the output is very close to being a function of the disturbance: o = F(d). That point is not in question.

What is in question is whether this function represents the forward path through the organism, or the feedback path through the environment, from output back to input. A naive observation of d and o would suggest that the function represents a property of the organism. Stimulus-response theory and behaviorism are the result of not knowing about negative feedback control systems. Without that knowledge, what other interpretation could have been given to the observations? This is one of our strongest arguments: we are not saying the old-time behaviorists were stupid or prejudiced or abnormally subject to illusions. We are saying only that there is a basic principle they didn't know about. Their observations were correct. Their interpretations were wrong.


===========================================
Martin, I could agree that the Laplace forms of the equations are simpler, but that is only in their appearance on paper. I get very little intuitive help from those equations which involve integrals from zero to infinity of "cisoidal oscillations" and make differential equations look like algebra, which they are not. I prefer the differential equation form in the time domain, in which I can almost see the physical behavior of the system. Frequency domain is OK, too, in moderation.

I sidetracked myself in a previous post without getting to an important point. I learned the rule of thumb that for unconditional stability, a control system should have just one simple integration in its loop, all the other relationships being proportional. In an analog computer which deals directly with continuous variables, you can then have almost as much loop gain as you want, as long as the gain falls off as 1/frequency; in a digital computer you have to be sure to make dt as small as it needs to be to preserve stability in the digital approximations. All that comes out as fancy theorems about Nyquist stuff, but it's really easy to understand in terms of continuous variables.

It follows that every stable control system acts like a system with one integration, and that means that the correlation between d and o can be high while the correlations of both d and o with the controlled variable are low to zero. No mystery there. Speaking of information as a fluid, it's easy to understand how information is transmitted from d to o without showing up in the intervening controlled variable and perceptual signal. That is, if I'm understanding correctly what you-all are saying about the relationships between correlations and information theory.
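The correlation claim above can be illustrated with a toy simulation (a sketch of mine; the gain, dt, and noise values are assumptions, not from the thread): corr(d, o) comes out near -1 while the correlations of d and o with the controlled variable stay small.

```python
import math
import random

def corr(xs, ys):
    # Pearson correlation of two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def run(gain=50.0, dt=0.01, steps=5000, seed=3):
    rng = random.Random(seed)
    o, d, r = 0.0, 0.0, 0.0
    ds, cvs, os_ = [], [], []
    for _ in range(steps):
        d += rng.gauss(0, 0.02)    # slowly wandering disturbance
        cv = o + d                 # controlled variable
        o += gain * (r - cv) * dt  # the single integration in the loop
        ds.append(d); cvs.append(cv); os_.append(o)
    return ds, cvs, os_

ds, cvs, os_ = run()
print(corr(ds, os_))    # near -1: the output mirrors the disturbance
print(corr(ds, cvs))    # small: the CV is protected from the disturbance
print(corr(os_, cvs))   # small as well
```

No mystery, as Bill says: the output opposes the disturbance almost perfectly, so their difference (the CV) retains almost none of the disturbance waveform.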

If we just focus on the issues and look for simple clear explanations, we can have a civil discussion without name-calling or putting-on of airs.

Best,

Bill P.

[From Fred Nickols (2012.12.11.1700 AZ)]

I apologize for jumping in here but I'm trying to follow this thread and I'm
really flummoxed by this comment from Bill:

That says that in a very simple high-gain control system, the output
varies by very nearly the same amount as and in the opposite direction
to the variations in the disturbance. That's what makes control work.
So the fact is that the output is very close to being a function of
the disturbance: o = F(d). That point is not in question.

[Fred Nickols] I thought the output varies by very nearly the same amount as
and in the opposite direction to the variations in the controlled variable,
not variations in the "disturbance." The disturbance might vary by X but
its impact on the controlled variable might be something other than X. To
say that the output varies in a way that matches variations in the
disturbance implies (to my uneducated mind) a 1:1 correlation between
variations in the disturbance and variations in the controlled variable.

If I've got this wrong I apologize for butting in.

Best regards,

Fred Nickols
Distance Consulting LLC
www.nickols.us

[From Bill Powers (2012.12.12.0315 MST)]

Fred Nickols (2012.12.11.1700 AZ) --

I apologize for jumping in here but I'm trying to follow this thread and I'm
really flummoxed by this comment from Bill:

> That says that in a very simple high-gain control system, the output
>varies by very nearly the same amount as and in the opposite direction to the
> variations in the disturbance.

[Fred Nickols] I thought the output varies by very nearly the same amount as
and in the opposite direction to the variations in the controlled variable,
not variations in the "disturbance." The disturbance might vary by X but
its impact on the controlled variable might be something other than X. To
say that the output varies in a way that matches variations in the
disturbance implies (to my uneducated mind) a 1:1 correlation between
variations in the disturbance and variations in the controlled variable.

Say the controlled variable is the position of your car in its lane, measured sideways from the center of the lane. The disturbance is a force of 50 kilograms to the right (call that the positive direction) exerted by a crosswind on the mass of the car. How much force and in which direction should the wheels of the car be exerting, if you want to keep the car centered in its lane?

Do not confuse "the disturbance" with "change in the controlled variable." In common language, the term is used both to mean the cause of the change in the CV and the change that results from it. In PCT it means only the cause, and we have to use a different word for the effect on the CV. "Perturbation" has been suggested.

A perfect control system does not allow the controlled variable to change at all when a disturbance acts on it. In fact no real control system can accomplish perfect control, not even the compute-and-act kind being talked about by some cyberneticists that computes the required force and then executes it.

The compute-and-act kind can't do it because (a) no real system can measure the state of the disturbing variable or its effects without error, (b) not all causes of perturbations can be predicted or sensed, (c) even if the precise action required could be calculated from observations, no real actuator could carry it out exactly, and (d) in all real systems the information about the cause of a perturbation takes time to reach the controller, so changes in the disturbance that occur during that lag cannot be sensed and a computed correction cannot be applied to the controlled variable in zero time.

A negative-feedback control system senses the state of the controlled variable and generates an output that produces an effect on the controlled variable equal and opposite to the effect that any disturbance is having. Initially the output effect rises until it is greater than the effect of the disturbance. This causes the controlled variable to begin changing back toward the reference condition (remembering that no physical variable can change instantly from one value to another). As the CV approaches the reference condition, the output decreases. When the dynamic properties of the system are properly adjusted, the CV will come to a final value and stop changing. At this point, the error between the CV and its reference condition will be just large enough to produce enough output to keep the error from either increasing or decreasing. The gain in the negative feedback loop determines how much error is needed to maintain a steady state, with larger gain meaning that less error is needed. In real devices, the gain can be increased until the best measuring devices can no longer report any difference between the CV and its reference condition. If those devices are used by the control system to sense the state of the CV, and the gain is high enough, the final error can be reduced to a level that can't be detected any more even by the best measuring devices.

The effect of the output on the CV has to be equal and opposite to the effect of the disturbance on the CV in order for a final steady state to exist. If the net effect is to be nearly zero, and if the functions connecting the output and the disturbance to the CV are identical, then it follows that the output must end up equal and opposite to the disturbance.

Finally, all real integrators have some amount of leakage. Think of the error as water running from a faucet into a bucket, the bucket being the output function. The bucket has a hole in its bottom. As the water level in the bucket rises, the bucket gets heavier and exerts more force on its surroundings. The water pressure at the bottom increases and the rate of loss of water through the hole increases. Even with a constant trickle (not too fast) going into the bucket, therefore, there will be some water level at which the rate of leakage equals the incoming trickle and the water level stops rising. That is a "leaky integrator." If you increase the trickle a little, the water level will rise to a new equilibrium point. Decreasing the trickle will lower the equilibrium point. So the net effect is that the output increases and decreases as the error increases and decreases, but with a lag due to the time it takes to fill or empty to a new final water level. The leaky integrator acts like an amplifier with a lag built into it that slows the changes down.
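The bucket arithmetic can be sketched in a few lines (my numbers; the leak coefficient and step size are assumptions, not from the post):

```python
# Leaky integrator: d(level)/dt = inflow - leak * level.
# The level settles where leakage equals inflow, i.e. at inflow / leak.
def settle(inflow, leak=0.1, dt=0.01, steps=20000):
    level = 0.0
    for _ in range(steps):
        level += (inflow - leak * level) * dt
    return level

print(settle(1.0))   # settles near 1.0 / 0.1 = 10.0
print(settle(0.5))   # a smaller trickle gives a lower equilibrium, near 5.0
```

The time constant 1/leak is the lag Bill describes: the smaller the hole, the longer it takes to fill or empty to the new final level.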

See if you can fit that description into your intuitive picture of the control system.

Best,

Bill P.

From: Control Systems Group Network (CSGnet)
[mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Bill Powers
Sent: Wednesday, December 12, 2012 4:19 AM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Ashby's Law of Requisite Variety

[From Bill Powers (2012.12.12.0315 MST)]

Fred Nickols (2012.12.11.1700 AZ) --

>I apologize for jumping in here but I'm trying to follow this thread
>and I'm really flummoxed by this comment from Bill:
>
> > That says that in a very simple high-gain control system, the output
> > varies by very nearly the same amount as and in the opposite
> > direction to the variations in the disturbance.

>[Fred Nickols] I thought the output varies by very nearly the same
>amount as and in the opposite direction to the variations in the
>controlled variable, not variations in the "disturbance." The
>disturbance might vary by X but its impact on the controlled variable
>might be something other than X. To say that the output varies in a
>way that matches variations in the disturbance implies (to my
>uneducated mind) a 1:1 correlation between variations in the disturbance
>and variations in the controlled variable.

BP: Say the controlled variable is the position of your car in its lane,
measured sideways from the center of the lane. The disturbance is a force
of 50 kilograms to the right (call that the positive direction) exerted
by a crosswind on the mass of the car. How much force and in which
direction should the wheels of the car be exerting, if you want to keep
the car centered in its lane?

[Fred Nickols] Enough to offset the effects of the wind. I suppose you're
looking for "50 kilograms to the left."

BP: Do not confuse "the disturbance" with "change in the controlled
variable." In common language, the term is used both to mean the cause of
the change in the CV and the change that results from it. In PCT it means
only the cause, and we have to use a different word for the effect on the
CV. "Perturbation" has been suggested.

[Fred Nickols] I don't think I was confusing the two. In your example
above, I believe the disturbance is the wind exerting a force of 50 kg to
the right. The change in the controlled variable (the lane position of
the car) is whatever results from that 50 kg force.

BP: A perfect control system does not allow the controlled variable to
change at all when a disturbance acts on it. In fact no real control
system can accomplish perfect control, not even the compute-and-act kind
being talked about by some cyberneticists that computes the required
force and then executes it.

[Fred Nickols] By "perfect control system" I assume you mean
"theoretically perfect."

BP: The compute-and-act kind can't do it because (a) no real system can
measure the state of the disturbing variable or its effects without
error, (b) not all causes of perturbations can be predicted or sensed,
(c) even if the precise action required could be calculated from
observations, no real actuator could carry it out exactly, and (d) in all
real systems the information about the cause of a perturbation takes time
to reach the controller, so changes in the disturbance that occur during
that lag cannot be sensed and a computed correction cannot be applied to
the controlled variable in zero time.

[Fred Nickols] Which, I'm guessing, is why my car always moves a little
bit before I can correct its position and why gusts are particularly
problematic.

BP: A negative-feedback control system senses the state of the controlled
variable and generates an output that produces an effect on the
controlled variable equal and opposite to the effect that any disturbance
is having.

[Fred Nickols] This is exactly what I thought.

BP: Initially the output effect rises until it is greater than the effect
of the disturbance.

[Fred Nickols] Which I take to be some kind of overcompensation.

BP: This causes the controlled variable to begin changing back toward the
reference condition (remembering that no physical variable can change
instantly from one value to another). As the CV approaches the reference
condition, the output decreases.

[Fred Nickols] I slack off on the steering wheel as the car moves back
into position.

BP: When the dynamic properties of the system are properly adjusted, the
CV will come to a final value and stop changing. At this point, the error
between the CV and its reference condition will be just large enough to
produce enough output to keep the error from either increasing or
decreasing.

[Fred Nickols] In wind from the left, pushing me to the right, I keep the
wheel turned a little more to the left than I would if there were no
wind, to continue compensating.

BP: The gain in the negative feedback loop determines how much error is
needed to maintain a steady state, with larger gain meaning that less
error is needed. In real devices, the gain can be increased until the
best measuring devices can no longer report any difference between the CV
and its reference condition. If those devices are used by the control
system to sense the state of the CV, and the gain is high enough, the
final error can be reduced to a level that can't be detected any more
even by the best measuring devices.

BP: The effect of the output on the CV has to be equal and opposite to
the effect of the disturbance on the CV in order for a final steady state
to exist.

[Fred Nickols] This is the point I was getting at -- the difference
between the disturbance (e.g., the wind) and the effect of the
disturbance (e.g., moving the car).

BP: If the net effect is to be nearly zero, and if the functions
connecting the output and the disturbance to the CV are identical, then
it follows that the output must end up equal and opposite to the
disturbance.

[Fred Nickols] The effect of the disturbance?

BP: Finally, all real integrators have some amount of leakage. Think of
the error as water running from a faucet into a bucket, the bucket being
the output function. The bucket has a hole in its bottom. As the water
level in the bucket rises, the bucket gets heavier and exerts more force
on its surroundings. The water pressure at the bottom increases and the
rate of loss of water through the hole increases. Even with a constant
trickle (not too fast) going into the bucket, therefore, there will be
some water level at which the rate of leakage equals the incoming trickle
and the water level stops rising. That is a "leaky integrator." If you
increase the trickle a little, the water level will rise to a new
equilibrium point. Decreasing the trickle will lower the equilibrium
point. So the net effect is that the output increases and decreases as
the error increases and decreases, but with a lag due to the time it
takes to fill or empty to a new final water level. The leaky integrator
acts like an amplifier with a lag built into it that slows the changes
down.

BP: See if you can fit that description into your intuitive picture of
the control system.

[Fred Nickols] I'll give it my best shot. Thanks for taking the time to
clarify.

···

Best,

Bill P.

[From Rick Marken (2012.12.11.0920)]

Bill Powers (2012.12.11.1532 MST)–

[Martin Taylor 2012.12.11.16.56]--

(MT to RM): Without going through them line by line, I see nothing wrong with your equations. They seem to give the expected result for the control system you defined up front. Neither can I see the relevance of your exercise. All you have done is put in a couple of extra linear factors to define a slightly more general control system than the one I analyzed, with no change to the conclusion that the value of the output can be computed from knowledge of the loop path parameters and of the value of the disturbance.

BP: If we could just put all the sneering and point-scoring aside for a moment, it would not be hard to see what the problem is here.

RM: I think you put your finger on the reason for the sneering and point-scoring in this next paragraph:

BP: What is in question is whether this function represents the forward path through the organism, or the feedback path through the environment, from output back to input. A naive observation of d and o would suggest that the function represents a property of the organism. Stimulus-response theory and behaviorism [and cognitive theory and most applications of control theory as well-- RM] are the result of not knowing about negative feedback control systems. Without that knowledge, what other interpretation could have been given to the observations? This is one of our strongest arguments: we are not saying the old-time behaviorists were stupid or prejudiced or abnormally subject to illusions. We are saying only that there is a basic principle they didn’t know about. Their observations were correct. Their interpretations were wrong.

RM: I’ve bolded the real problem. When you point this out – that observed disturbance-output (S-R) relationships are basically a side effect of control rather than an effect of the disturbance on output via the organism – it is treated by those immersed in the conventional view of behavior as a personal affront. This just recently happened to me with respect to a paper I’ve finally managed to get published showing that the behavior in apparently “open-loop” tasks (psychology experiments) is actually closed-loop. The negative reviews of that paper, at least one of which was written by a person ostensibly committed to PCT, almost invariably suggested that I was saying that conventional psychologists are stupid or abnormally subject to illusions. I actually had to add a sentence to the paper saying specifically that I was not saying that at all.

It is just plain hard to understand (let alone accept) that observed relationships between stimulus inputs (disturbances to controlled variables) and response outputs, especially nearly perfect relationships like those we see in a tracking task, are not a result of the stimulus causing that response via the organism. The idea that the observed causal connection via the organism is an illusion – a side-effect of acting to keep a perception matching a reference specification – seems to violate the basic tenets not only of conventional psychology but of science itself. This is an enormous disturbance to people who, for whatever reason, are controlling for maintaining their conventional ideas about how behavior works; thus the push-back as sneering and point scoring.

But I find the conflict invigorating (like a good game of racquetball) because, whether I win or lose a particular game, I always come out feeling better (with great new ideas for research). So let’s keep up the dialog, if only for my sake;-)

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.12.12.1030 MST)]

>BP: If the net effect is to be nearly zero, and if the functions
>connecting the output and the disturbance to the CV are identical, then
>it follows that the output must end up equal and opposite to the
>disturbance.
>
>[Fred Nickols] The effect of the disturbance?

I thought I was being very careful and complete, yet when I got to this
last question of yours I reread the whole thing and realized that it
seems to mean something different from what I meant to convey. Amazing –
I knew what I meant and failed to see whether the words I wrote said
what I meant.

What I should have said is that the disturbing variable is the wind
velocity relative to the car, and that the effect of the disturbance is
the wind force acting on the car. The driver’s output quantity is the
angle of the steering and front wheels, and the effect of the output
quantity is again a force acting on the car. The net effect is wind
force minus steering force.

When the driver’s output effect is less than the effect of the wind
velocity, the car begins drifting sideways downwind; when the output
effect is greater than the effect of the wind velocity, the car starts
drifting sideways upwind.

If a crosswind suddenly appears, a sideways force appears and the car
starts to drift downwind. After it has moved some distance the driver
reacts and begins turning the steering wheel and front wheels toward the
upwind direction. The amount of turn increases until it is greater than
what is needed to stop the drift; the car starts drifting back upwind.
As the position error decreases, the driver reduces the angle of the
steering wheel, until (1) the car is very close to its reference
position in the center of the lane, and (2) the effect of the steering
force on the car is equal and opposite to the effect of the wind force
on the car, so the sideways drift stops. In this diagram a positive
force acts upwind, or to the left.

(Set display font to Courier)

       wind          steering
      effect          effect
     (-force)        (+force)

Wind -------->DRIFT RATE<------------Front wheel angle <-- steering wheel angle
velocity          |                   (Driver output)
            (INTEGRATION)                    ^
                  v                          |
           POSITION OF CAR                   |
                  |    Position error        |
                  --------------->Comparator --------------------->
                                     ^
                                     | Reference position

At equilibrium, +force = -force, sideways drift rate = zero. The
position of the car continues to change in the upwind direction as long
as the net drift rate is positive, and downwind during negative net
drift rates.
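The car-and-crosswind loop can be put into a few lines of simulation (my own sketch; the gain value and the treatment of net force as directly setting drift rate are assumptions): with a proportional driver output, the steering force ends up equal and opposite to the wind force, leaving a small residual position error that shrinks as the gain grows.

```python
# Crosswind step on a car whose sideways drift rate is set by the net
# sideways force (downwind positive), opposed by a proportional "driver."
def drive(wind_force=50.0, gain=500.0, dt=0.001, steps=5000):
    pos, ref = 0.0, 0.0
    steer = 0.0
    for _ in range(steps):
        steer = gain * (ref - pos)       # driver output: upwind when pos > ref
        drift_rate = wind_force + steer  # net sideways force sets drift rate
        pos += drift_rate * dt           # the single integration in the loop
    return pos, steer

pos, steer = drive()
print(steer)   # close to -50: equal and opposite to the wind force
print(pos)     # residual error wind_force/gain = 0.1; smaller at higher gain
```

At equilibrium the drift rate is zero because the two forces cancel, exactly as the diagram says.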

Best,

Bill P.

···

At 06:44 AM 12/12/2012 -0700, Fred Nickols wrote:

[Martin Taylor 2012.12.10.16.59]

[From Rick Marken (2012.12.10.0845)]

Martin Taylor (2012.12.09.23.49)--

RM: Martin just posted a long and elaborate
description of how a control system extracts information about the
disturbance from variations in the input variable. And I have
demonstrated over and over again that there is absolutely no
information about the disturbance in the controlled input to a control
system.

MT: It looks as though you intended the two sentences quoted above to be in
contradiction with each other, though my "long and elaborate description"
argued that they are actually mutually supportive, as you know if you read
the message in question.

RM: I did read the message, at least up to the point where you said:

qo(t) = -d0*(1-e^(-G*(t-t0))).

Pretty fancy equation but if it's meant to describe the behavior of
variables in a control loop then it's nonsense because output
variations, qo(t), are not a function of disturbance variations, d, in
a control system. Once again, you have left out the controlled
variable.

I think you should ask Richard Kennaway whether it is nonsense, since he is the mathematical god around here. Or work out the differential equations yourself.

MT: One pretty well follows from the other. If you have
actually read the message you know that in addition to showing how and how
rapidly information gets from the disturbance to the output, it also shows
why there is nearly (not "absolutely") no information about the disturbance
waveform in the waveform of the input variable of a system that is
accurately controlling its perception.

RM: I re-read the article and could not find such a conclusion
anywhere.

-----Quote:-------

  tx-t0 = 1/(G*log2(e)) = 1/(G*1.443...) seconds

That is the time it takes for qo to halve its distance to its final value -d0 no matter what its starting value might have been, which means it has gained one bit of information about the value of the disturbance. The bit rate is therefore G*1.443... bits/sec. That is the rate at which the output quantity qo gains information about the disturbance, and also is the rate at which the input quantity loses information about the disturbance. The input quantity must lose information about the disturbance, because it always approaches the reference value, no matter what the value of the disturbance step.

------end quote--------

  So you are saying that your analysis shows how a control
system extracts information about the disturbance from variations in
the input variable and that there is no information about the
disturbance in the controlled input to a control system.

Try reading the message again. Including the passage I just quoted.

I would
really appreciate it if you could give me a nice, succinct review of
how you showed that that is the case.

OK. The key insight is that it takes time for the effect of a change in the disturbance value to have its effect on the output value. During that time, you might say information bleeds from the input value to the output value (what changes is actually the uncertainties at the input and output about the disturbance and the reference -- there's actually no flow, but there are relationships that act like conservation laws that make it seem as though there is). The exponential graphs in my long message show how the signal values change after a step. I explained how their ranges can be used to estimate uncertainties and information (the change in uncertainty).
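Martin's step-response arithmetic can be checked numerically (a sketch; the G and d0 values here are mine, chosen for illustration): for qo(t) = -d0*(1 - e^(-G*t)), the distance to the final value -d0 halves every 1/(G*log2(e)) seconds, i.e. one bit is gained per halving interval.

```python
import math

G, d0 = 10.0, 1.0                              # assumed loop gain and step size
halving_time = 1.0 / (G * math.log2(math.e))   # = ln(2)/G seconds

def dist_to_final(t):
    # distance between qo(t) = -d0*(1 - e^(-G*t)) and its final value -d0
    return abs(-d0 * (1.0 - math.exp(-G * t)) - (-d0))

t = 0.137   # any starting time gives the same halving ratio
ratio = dist_to_final(t + halving_time) / dist_to_final(t)
print(ratio)                    # 0.5: one bit gained per halving interval
print(G * math.log2(math.e))    # the bit rate, G*1.4427... bits/sec
```

This is an observer's calculation over the step response; nothing in the loop computes it.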

In the sense that the uncertainty about the disturbance change at the input increases while the information about the disturbance at the output increases, you can indeed think of information flow. But the information doesn't "guide" the behaviour of the circuit. It's a consequence of the operation of the circuit. The fact that an integrating function passes less of a high-frequency variation in the flow of electrons to its input than it does of a low-frequency variation does not imply that it looks at the electrons coming and going and says to itself "Hmm, these are changing direction pretty fast, so I'll relax a bit and not send out so many". One knows about the electron flow because one observes voltage, knowing the values of relevant resistances, capacitances, inductances, electron charge, and so forth, and knows how to use equations to compute the flow. Likewise, one knows about information flow in the loop by computing it from the observed signal values; the loop itself makes no use of it.

MT: Incidentally, having carefully read (;-) my "long and elaborate
description", did you find the relationship between control and measurement
with which it starts to be interesting or (heaven forbid) conceptually
useful?

RM: I'm afraid not. But Bruce A. did, so maybe it was worth the obvious
effort you must have put into it.

I'm a bit surprised that you didn't find it interesting that control and measurement are so closely related. When you think about it, so far as I can see, all instances of measurement are instances of control, though the reverse may not be true. Whenever you "measure" an item, you are comparing the item with some standard that defines the number you come up with. You come up with the number by comparing more and less of the standard to your item until you arrive at a reference value for the difference between them, typically zero. That seemed interesting to me.

But since it does not specify the controlled variable -- whether you measure length or voltage or area or duration -- I can understand why you aren't interested.

Martin

[From Fred Nickols (2012.12.12.1152 AZ)]

Amazing indeed. I think I get it. However, I still need some clarification on one point (at least one).

BP: What I should have said is that the disturbing variable is the wind velocity relative to the car, and that the effect of the disturbance is the wind force acting on the car. The driver’s output quantity is the angle of the steering and front wheels and the effect of the output quantity is again a force acting on the car. The net effect is wind force minus steering force.

I get what you’re driving at (no pun intended) but is the “driver’s output quantity the angle of the steering and the front wheels” or is it muscular effort applied to the steering wheel?

Best regards,

Fred Nickols

Distance Consulting LLC

www.nickols.us

···


[From Bruce Abbott (2012.12.12.1500 EST)]

Rick Marken (2012.12.11.0920) –

Bill Powers (2012.12.11.1532 MST)

[Martin Taylor 2012.12.11.16.56]

(MT to RM): Without going through them line by line, I see nothing wrong with your equations. They seem to give the expected result for the control system you defined up front. Neither can I see the relevance of your exercise. All you have done is put in a couple of extra linear factors to define a slightly more general control system than the one I analyzed, with no change to the conclusion that the value of the output can be computed from knowledge of the loop path parameters and of the value of the disturbance.

BP: If we could just put all the sneering and point-scoring aside for a moment, it would not be hard to see what the problem is here.

RM: I think you put your finger on the reason for the sneering and point-scoring in this next paragraph:

BP: What is in question is whether this function represents the forward path through the organism, or the feedback path through the environment, from output back to input. A naive observation of d and o would suggest that the function represents a property of the organism. Stimulus-response theory and behaviorism [and cognitive theory and most applications of control theory as well-- RM] are the result of not knowing about negative feedback control systems. Without that knowledge, what other interpretation could have been given to the observations? This is one of our strongest arguments: we are not saying the old-time behaviorists were stupid or prejudiced or abnormally subject to illusions. We are saying only that there is a basic principle they didn’t know about. Their observations were correct. Their interpretations were wrong.

RM: I’ve bolded the real problem. When you point this out – that observed disturbance-output (S-R) relationships are basically a side effect of control rather than an effect of the disturbance on output via the organism – it is treated by those immersed in the conventional view of behavior as a personal affront. This just recently happened to me with respect to a paper I’ve finally managed to get published showing that the behavior in apparently “open-loop” tasks (psychology experiments) is actually closed-loop. The negative reviews of that paper, at least one of which was written by a person ostensibly committed to PCT, almost invariably suggested that I was saying that conventional psychologists are stupid or abnormally subject to illusions. I actually had to add a sentence to the paper saying specifically that I was not saying that at all.

RM: It is just plain hard to understand (let alone accept) that observed relationships between stimulus inputs (disturbances to controlled variables) and response outputs, especially nearly perfect relationships like those we see in a tracking task, are not a result of the stimulus causing that response via the organism. The idea that the observed causal connection via the organism is an illusion – a side-effect of acting to keep a perception matching a reference specification – seems to violate the basic tenets not only of conventional psychology but of science itself. This is an enormous disturbance to people who, for whatever reason, are controlling for maintaining their conventional ideas about how behavior works; thus the push-back as sneering and point scoring.

RM: But I find the conflict invigorating (like a good game of racquetball) because, whether I win or lose a particular game, I always come out feeling better (with great new ideas for research). So let’s keep up the dialog, if only for my sake;-)

[From Bruce Abbott (2012.12.12.1500 EST)]

The “behavioral illusion” has nothing to do with this discussion. Neither Martin nor I have claimed that the relation between disturbance and output reveals anything about the organism, other than the fact that it is controlling the variable in question.

You apparently think that we have been making that claim. I was hoping to gain some insight into how you have come to this conclusion by reading your answers to my two simple questions. Sadly, I am still awaiting them.

Bruce

[From Rick Marken (2012.12.12.1320)]

Bruce Abbott (2012.12.12.1500 EST)–

BA: The “behavioral illusion” has nothing to do with this discussion.

RM: I submit that it has everything to do with it.

BA: Neither Martin nor I have claimed that the relation between disturbance and output reveals anything about the organism, other than the fact that it is controlling the variable in question.

RM: If this were true you certainly wouldn’t need information theory. To determine whether a particular variable is under control, using disturbances to the hypothetical controlled variable to see if there is resistance (per “The Test”), what you should look at is the relationship between disturbances and the hypothetical controlled variable (as I do in my demo of the test at http://www.mindreadings.com/ControlDemo/Mindread.html) rather than the relationship between disturbances and outputs. That is, look at the relationship between d and q.i rather than that between d and o. If the hypothetical controlled variable, q.i, is indeed under control then there will be little or no correlation between d and q.i. You could also look for a high negative correlation between d and o (or, as you say, information about d in o), but if you know about the behavioral illusion – in linear form o = -(k.e/k.f)*d – you would know that the size and sign of this correlation could be influenced by variations in k.e and k.f. So you are better off looking for a lack of correlation between disturbance and hypothetical controlled input.
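A minimal sketch of this point (my own illustration, not the demo at mindreadings.com; the loop gain, the AR(1) disturbance, and the sample sizes are all assumed for the example): simulate a simple discrete-time control loop with i = o + d, then compare the correlation between d and q.i with the correlation between d and o.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def simulate(n=5000, loop_gain=0.5, seed=1):
    """Integral controller: q.i = o + d, e = r - q.i, o += gain * e."""
    random.seed(seed)
    d_hist, qi_hist, o_hist = [], [], []
    o, d, r = 0.0, 0.0, 0.0
    for _ in range(n):
        d = 0.99 * d + random.gauss(0.0, 0.1)  # slowly varying disturbance
        qi = o + d                             # input quantity: i = o + d
        e = r - qi                             # error; constant reference r = 0
        o = o + loop_gain * e                  # integral output function
        d_hist.append(d); qi_hist.append(qi); o_hist.append(o)
    # drop the initial transient before measuring correlations
    return d_hist[500:], qi_hist[500:], o_hist[500:]

d, qi, o = simulate()
print("corr(d, o)   =", round(pearson(d, o), 3))   # strongly negative
print("corr(d, q.i) =", round(pearson(d, qi), 3))  # near zero: q.i is controlled
```

Under good control the d-o correlation is close to -1 (the mirror relation), while the d-q.i correlation is near zero, which is the signature The Test looks for.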

BA: You apparently think that we have been making that claim.

RM: Well, then why use information theory as an analysis tool? When you measure the information in the output about the disturbance to a controlled variable, you are treating the organism as a communication channel: one communicating information about d to o. So you are measuring a characteristic of the organism. But PCT shows that the relationship between disturbance and output has nothing to do with the organism (when control is good). So what you are actually measuring are characteristics of the feedback and disturbance functions. You think you are measuring a characteristic of the organism (its ability to transfer information about the disturbance to its output), but what you are actually measuring is the nature of the feedback and disturbance functions of the control loop. You (and Martin) have fallen for the behavioral illusion hook, line, and sinker.

BA: I was hoping to gain some insight into how you have come to this conclusion by reading your answers to my two simple questions. Sadly, I am still awaiting them.

RM: I thought I answered them. But hopefully this post will give you some idea of how I came to the conclusion that you (and Martin) have fallen (willingly, apparently, since you guys know PCT) for the behavioral illusion.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Chad Green (2012.12.12.1738 EST)]

Perhaps you both are correct in your assumptions, it's just that your
methods by nature of their dimensionality are limiting your
perspectives, similar to two flatlanders disputing similar phenomena
from separate axes.

Best,
Chad

Chad Green, PMP
Program Analyst
Loudoun County Public Schools
21000 Education Court
Ashburn, VA 20148
Voice: 571-252-1486
Fax: 571-252-1633

"If you want sense, you'll have to make it yourself." - Norton Juster


[Snarkey Taylor 2012.12.12.19.01]

[From Bill Powers (2012.12.11.1532 MST)]

[Martin Taylor 2012.12.11.16.56]--

(MT to RM): Without going through them line by line, I see nothing wrong with your equations. They seem to give the expected result for the control system you defined up front. Neither can I see the relevance of your exercise. All you have done is put in a couple of extra linear factors to define a slightly more general control system than the one I analyzed, with no change to the conclusion that the value of the output can be computed from knowledge of the loop path parameters and of the value of the disturbance.

BP: If we could just put all the sneering and point-scoring aside for a moment, it would not be hard to see what the problem is here.

Where is the sneering and point scoring in what you quote?

Rick has spent message after message claiming that if the environmental feedback path is a function other than a simple connector (a multiplication by 1.0), that makes a difference to the informational relationship between the output and the disturbance as compared to the relationship between the disturbance and the influence of the output. It doesn't, as I have equally often pointed out. The point is that if knowing one variable allows you to know another, the information relationship to a third quantity is the same for both. But he persists in claiming that a non-unity multiplier in the environmental feedback function has got to make a difference. It doesn't, unless the function is one that does not allow you to determine the value of the influence on the input variable from the value of the output (e.g. any environmental feedback function that spreads the effect of the output on the input variable over time).
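A small sketch of this point, with assumed numbers (the -0.98 slope, the noise level, and the affine feedback function are my own choices, not anything from the thread): if the environmental feedback function is invertible, a statistic measuring how well one variable determines another is unchanged when the output is passed through it. For Pearson correlation and an affine (non-unity-multiplier) function this invariance is exact.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(2)
d = [random.gauss(0.0, 1.0) for _ in range(2000)]
# output nearly mirrors the disturbance, as it does under good control
o = [-0.98 * x + random.gauss(0.0, 0.05) for x in d]
# an invertible (affine) environmental feedback function applied to the output
fo = [3.7 * v + 2.0 for v in o]

r_do = pearson(d, o)
r_dfo = pearson(d, fo)
print(r_do, r_dfo)  # identical: the invertible function adds no uncertainty
```

The non-unity multiplier changes the scale of the feedback quantity but not its informational relation to d; only a non-invertible function (e.g. one spreading the output's effect over time) would change that.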

I asked Rick whether he didn't find it interesting that measurement and control are so close in structure. I ask you the same question. Personally, I had not been aware of the relationship until I needed an example for my tutorial, and I did find it fascinating, if obvious after the fact.

The basic observed fact is that, in the final approximations with unity multipliers and all that, and with a constant zero reference signal, qo = -d.

That says that in a very simple high-gain control system, the output varies by very nearly the same amount as, and in the opposite direction to, the variations in the disturbance. That's what makes control work. So the fact is that the output is very close to being a function of the disturbance: o = F(d). That point is not in question.
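The algebra behind that approximation, for a purely proportional loop with output gain G (a standard derivation, not quoted from the thread):

```latex
q_i = q_o + d, \qquad e = r - q_i, \qquad q_o = G\,e
\;\Rightarrow\; q_o = G\,(r - q_o - d)
\;\Rightarrow\; q_o = \frac{G}{1+G}\,(r - d)
\;\xrightarrow{\;G \to \infty\;}\; r - d .
```

With r = 0 and high gain, q_o ≈ -d, which is the mirror relation observed between output and disturbance.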

What is in question is whether this function represents the forward path through the organism, or the feedback path through the environment, from output back to input.

Why do you say that is in question? Are you putting it in question? If so, why? I thought that what was in question, at least for Rick, was that to say o = -F(d) meant that one is accepting S-R theory. He has said more than once that in PCT, to say that the output is some function of the disturbance is to revert to S-R beliefs.

A naive observation of d and o would suggest that the function represents a property of the organism. Stimulus-response theory and behaviorism are the result of not knowing about negative feedback control systems. Without that knowledge, what other interpretation could have been given to the observations? This is one of our strongest arguments: we are not saying the old-time behaviorists were stupid or prejudiced or abnormally subject to illusions. We are saying only that there is a basic principle they didn't know about. Their observations were correct. Their interpretations were wrong.

Fair enough, but how is this relevant to the present thread? The behavioural illusion is well known, at least to CSGnet participants, if not to the world in general. It hasn't entered into anything Bruce or I have posted.

===========================================
Martin, I could agree that the Laplace forms of the equations are simpler, but that is only in their appearance on paper. I get very little intuitive help from those equations which involve integrals from zero to infinity of "cisoidal oscillations" and make differential equations look like algebra, which they are not. I prefer the differential equation form in the time domain, in which I can almost see the physical behavior of the system. Frequency domain is OK, too, in moderation.

Fair enough, that's your preference. No problem there. Laplace transforms are not everyone's preference, and their algebra is valid only for linear systems. They aren't differential equations, so they don't make differential equations look like algebra. The algebra of Laplace transforms gets difficult when there are simple transport lags, because lags appear as exponentials in the algebra. There are all sorts of cases in which I would find the use of differential equations preferable. Differential equations can be used in more general situations. But where the use of Laplace transformations is appropriate, they are very handy, and give the same results as the (to me) more complex differential equations. I don't expect you to use them, nor do I expect you to complain when I use them unless I use them improperly.
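For what it's worth, the two styles agree on simple linear cases. A sketch (my own example; the gain and step size are arbitrary choices): Euler-integrate the time-domain equation dq_o/dt = G(r - q_i) with q_i = q_o + d for a unit-step disturbance, and compare with the response obtained by inverting the Laplace transfer function Q_o(s)/D(s) = -G/(s+G).

```python
import math

G = 4.0        # loop gain (arbitrary choice)
dt = 0.001     # Euler integration step
r = 0.0        # constant zero reference
d = 1.0        # unit-step disturbance applied at t = 0

qo = 0.0
max_err = 0.0
t = 0.0
for _ in range(int(2.0 / dt)):
    qi = qo + d                    # i = o + d
    qo += dt * G * (r - qi)        # time domain: dq_o/dt = G(r - q_i)
    t += dt
    # inverse Laplace of Q_o(s) = -G/(s+G) * (1/s) for the same step input
    predicted = -(1.0 - math.exp(-G * t))
    max_err = max(max_err, abs(qo - predicted))

print("final q_o =", round(qo, 4))        # close to -d, i.e. -1
print("max discrepancy =", round(max_err, 5))
```

The exponential approach of the output to -d falls out of either formalism; the numerical and transform solutions differ only by the Euler discretization error.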

Martin

[From Rick Marken (2012.12.12.1645)]

I apparently sent something to Martin directly rather than to CSGNet. Here’s Martin’s reply to my post which I thought went to CSGNet.

Best

Rick

···

On Wed, Dec 12, 2012 at 4:38 PM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2012.12.12.19.31]

[From Rick Marken (2012.12.12.1255)]

Martin Taylor (2012.12.10.16.59) --

MT: The key insight is that it takes time for the effect of a change in the disturbance value to have its effect on the output value.

RM: The disturbance has its effect on the output value via its effect on the input, which is simultaneously affected by both the disturbance and the output. So there is no effect of just the disturbance on the output; the effect is of both the disturbance and output.

True. But we were talking about the effect of a change in the disturbance. Yes, there is the added change in the output. Without taking that into account, the analysis would be quite wrong, and different from the proposed analysis.

MT: During that time, you might say information bleeds from the input value to the output value (what changes is actually the uncertainties at the input and output about the disturbance and the reference --

RM: But the input value is a mix of "information" about the disturbance and the output combined; so the information about disturbance and output bleeds simultaneously.

True.

RM: Your analysis assumes that any change in the input is due only to the disturbance

False. How would we get the exponential approach of the output to the disturbance if we didn't take account of the effect of the output on the input?

RM: ; but it's not. Everything the input does is a simultaneous result of variations (or lack thereof) in both disturbance and output. It's your failure to recognize the simple fact that i = o + d that is the basic failure in your analyses.

No, it's the recognition of that fact that makes the analyses work.

Did you intend to send this to me privately? I would answer the same publicly.

Martin


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com