Ashby's Law of Requisite Variety

[Martin Taylor 2012.12.12.19.39]

It depends what you want to use it for. Usually there's more than
one tool to do any job, but also usually one tool is more convenient
than the others. Different jobs are done better with different
tools. As I have said many times over a long period, if you don’t
find information theory useful for a problem you are trying to
solve, just don’t use it. A tool is only as useful as your expertise
with it in the context of the job at hand.
Yep. No problem there. I don’t think anyone has suggested using
information theory to determine whether a particular variable is
under control. I suppose you could, but why, when there are better
ways?
All good. Actually, here is a case where an information approach
might be better, since the changes you characterize by the
multiplier “k” don’t affect the information measures, whereas they
do affect the variable values. Using the information measures, you
are effectively immune to the behavioural illusion under many
circumstances.
Yes. That’s true. But we haven’t been making that claim, even though
we could. It may become interesting at some point, but at the moment
we are just laying the groundwork, in the same way that the algebra
of static control systems lays the groundwork for what you might see
an infinite time after a step change in the disturbance or
reference. What happens at infinite time isn’t really of much use in dealing
with the dynamic characteristics of control systems. To handle the
dynamic characteristics, you need to analyze both the environmental
feedback function and the internal properties. Differential
equations are one way to do that. But so is information theory,
which looks at a different aspect of the dynamic properties. But as
I say, we haven’t talked about that on CSGnet until now.
PCT doesn’t show that. It happens to be a fact of control systems,
demonstrable through the equations, no matter which way you look for
the influences. Do you take the static values for the functions in
the system and work back through the circuit until you complete the
loop? Do you take the differential equations and work back through
the circuit until you complete the loop? Do you take the Laplace
transforms of the functions and work back through the circuit until
you complete the loop? Do you take the information-handling aspects
of the paths and functions and work back through the loop until you
complete the circuit? Or do you use some other tool? They will all
give valid results to different questions, some of them about the
organism.
All of these tools work when control is good and when it isn’t. None
of them work when control is perfect, because that is an
unattainable system in the real world. One technique that is often
useful in figuring out the properties of complex systems is to
stress them in some way. One way to stress a control system is to
use disturbances that make control difficult. When control is good,
there are lots of possible control architectures that could work. You
might be able to tell them apart when control is not good.
But since you are interested only in what variable is being
controlled in any particular circumstance, you would be interested
in doing that. That problem is much more readily solved when control
is good, for all the reasons you state above.
No. The whole of the loop is measured, but in the particular
analysis I presented, all the functional properties were made very
simple – multiplications by 1.0 except for the pure integrator
output function – so as to clarify what was going on, and why the
whole loop matters.
No.
No. That’s still a mystery.
Martin

···

[From Rick Marken (2012.12.12.1320)]

Bruce Abbott (2012.12.12.1500 EST)–

BA: The “behavioral illusion” has nothing to do with this
discussion.

      RM: I submit that it has everything to do with it.
BA: Neither Martin nor I have claimed that the relation
between disturbance and output reveals anything about
the organism, other than the fact that it is
controlling the variable in question.

RM: If this were true you certainly wouldn't need information
theory.

To determine whether a particular variable is under control
using disturbances to the hypothetical controlled variable to
see if there is resistance (per “The Test”) what you should
look at is the relationship between disturbances and the
hypothetical controlled variable (as I do in my demo of the
test at http://www.mindreadings.com/ControlDemo/Mindread.html )
rather than the relationship between disturbances and outputs.

That is, look at the relationship between d and q.i rather
than that between o and d. If the hypothetical controlled
variable, q.i, is indeed under control then there will be
little or no correlation between d and q.i. You could also
look for a high negative correlation between d and o (or, as
you say, information about d in o) but if you know about the
behavioral illusion – in linear form o = -(k.e/k.f)*d – you
would know that the size and sign of this correlation could be
influenced by variations in k.e and k.f. So you are better off
looking for a lack of correlation between disturbance and
hypothetical controlled input.

BA: You apparently think that we have been making that
claim.

RM: Well, then why use information theory as an analysis tool?
When you measure the information in the output about the
disturbance to a controlled variable you are treating the
organism as a communication channel: communicating information
about d to o. So you are measuring a characteristic of the
organism.

But PCT shows that the relationship between disturbance and
output has nothing to do with the organism (when control is
good).

So what you are actually measuring is characteristics of
the feedback and disturbance functions. You think you are
measuring a characteristic of the organism (its ability to
transfer information about the disturbance to its output) but
what you are actually measuring is the nature of the feedback
and disturbance functions of a control loop.

You (and Martin) have fallen for the behavioral illusion
hook, line and sinker.

BA: I was hoping to gain some insight into how you have
come to this conclusion by reading your answers to my
two simple questions. Sadly, I am still awaiting them.

RM: I thought I answered them. But hopefully this post will give
you some idea of how I came to the conclusion that you (and
Martin) have fallen (willingly, apparently, since you guys know
PCT) for the behavioral illusion.

[From Rick Marken (2012.12.12.1710)]

Snarkey Taylor (2012.12.12.19.01)

ST: Rick has spent message after message claiming that if the environmental feedback path is a function other than a simple connector (a multiplication by 1.0), that makes a difference to the informational relationship between the output and the disturbance as compared to the relationship between the disturbance and the influence of the output. It doesn’t, as I have equally often pointed out.

RM: OK, I’ve put a little tracking demo up on the net at

http://www.mindreadings.com/ControlDemo/InfoDist.html

It’s just a plain old compensatory tracking task. It’s a little tough but after some practice you can get as good at it as I am. I’ve attached the results of one of my tracking runs. The results are presented in the top left. The numbers under C-M, M-D and C-D are the observed correlations between cursor and output (mouse movements), output and disturbance, and cursor and disturbance, respectively. RMS Error and Stability are both measures of the quality of control; it was pretty good. RMS of 17.1 means that on average the cursor deviated from the target by 17 pixels, about 4% of the total possible deviation that would have occurred if I had done nothing. The stability factor is the ratio of expected to observed variance of the cursor; actually, it’s the square root of that ratio. The actual variance of the cursor is about 16 times less than it would have been had I done nothing. Both of these measures of control mean that I was controlling pretty darn well. But the interesting thing is that the correlation between the disturbance and my outputs (the M-D correlation) was -.001, virtually zero. It doesn’t look like there is much information about the disturbance in my outputs. The graph shows a picture of the time variations in the three variables, cursor (C), disturbance (D) and output (M), and you can see that my output variations are completely unrelated to the disturbance yet the cursor stays quite close to the target.
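For anyone who wants to reproduce these statistics from their own logged runs, here is a minimal sketch of how the reported quantities could be computed. The demo doesn't spell out how "expected" variance is defined; taking it to be the variance of the disturbance alone is my assumption, as are all the function and variable names.

```python
import math

def correlation(x, y):
    # Pearson correlation between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def control_metrics(cursor, target, disturbance, output):
    # RMS error: root-mean-square deviation of cursor from target.
    err = [c - t for c, t in zip(cursor, target)]
    rms = math.sqrt(sum(e * e for e in err) / len(err))
    def var(x):
        m = sum(x) / len(x)
        return sum((a - m) ** 2 for a in x) / len(x)
    # Stability factor: sqrt(expected / observed cursor variance).
    # "Expected" (no-control) variance is taken here to be the
    # variance of the disturbance alone -- an assumption.
    stability = math.sqrt(var(disturbance) / var(cursor))
    return {
        'C-M': correlation(cursor, output),
        'M-D': correlation(output, disturbance),
        'C-D': correlation(cursor, disturbance),
        'RMS': rms,
        'Stability': stability,
    }
```

Feeding in the cursor, target, disturbance, and mouse series from a run would yield the same kind of summary table the demo shows.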

So how much information does output contain about the disturbance in the demo?

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.12.20.24]

Missing "not". ....you would NOT be interested in doing that.

Sorry.
Martin

···

[Martin Taylor 2012.12.12.19.39]

  long snip...

But since you are interested only in what variable is being
controlled in any particular circumstance, you would be interested
in doing that. That problem is much more readily solved when
control is good, for all the reasons you state above.

[Martin Taylor 2012.12.12.20.28]

Firefox says that some plugins are needed to view this demo. I don't
like to install unknown plugins, so do you know what those plugins
might be?
Martin

···

[From Rick Marken (2012.12.12.1710)]

Snarkey Taylor (2012.12.12.19.01)

ST: Rick has spent message after message claiming that if the
environmental feedback path is a function other than a simple
connector (a multiplication by 1.0), that makes a difference
to the informational relationship between the output and the
disturbance as compared to the relationship between the
disturbance and the influence of the output. It doesn’t, as I
have equally often pointed out.

RM: OK, I've put a little tracking demo up on the net at

http://www.mindreadings.com/ControlDemo/InfoDist.html

[From Bruce Abbott (2012.12.12.2030 EST)]

RM: Rick Marken (2012.12.12.1320)–

BA: Bruce Abbott (2012.12.12.1500 EST)

BA: The “behavioral illusion” has nothing to do with this discussion.

RM: I submit that it has everything to do with it.

BA: Neither Martin nor I have claimed that the relation between disturbance and output reveals anything about the organism, other than the fact that it is controlling the variable in question.

RM: If this were true you certainly wouldn’t need information theory.

Yes, in exactly the same way that you wouldn’t need either a time-domain or frequency-domain analysis to analyze the operation of a control system! (Think about it.)

To determine whether a particular variable is under control using disturbances to the hypothetical controlled variable to see if there is resistance (per “The Test”) what you should look at is the relationship between disturbances and the hypothetical controlled variable (as I do in my demo of the test at http://www.mindreadings.com/ControlDemo/Mindread.html) rather than the relationship between disturbances and outputs. That is, look at the relationship between d and q.i rather than that between o and d. If the hypothetical controlled variable, q.i, is indeed under control then there will be little or no correlation between d and q.i. You could also look for a high negative correlation between d and o (or, as you say, information about d in o) but if you know about the behavioral illusion – in linear form o = -(k.e/k.f)*d – you would know that the size and sign of this correlation could be influenced by variations in k.e and k.f. So you are better off looking for a lack of correlation between disturbance and hypothetical controlled input.
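The algebra behind the linear form quoted above is easy to check numerically. Reducing every function in the loop to a plain multiplier (a simplifying assumption: q.i = k.f*o + k.e*d, e = r - q.i, o = k.o*e) and solving the static equations shows the d-to-o slope tending to -(k.e/k.f) as the organism gain k.o grows, so k.o drops out:

```python
def steady_state_output(d, k_e, k_f, k_o, r=0.0):
    # Static loop equations (all functions reduced to multipliers):
    #   q_i = k_f * o + k_e * d      (controlled input)
    #   e   = r - q_i                (error)
    #   o   = k_o * e                (organism output)
    # Solving for o gives o = (k_o*r - k_o*k_e*d) / (1 + k_o*k_f).
    return (k_o * r - k_o * k_e * d) / (1.0 + k_o * k_f)

# With d = 1, the output equals the d-to-o slope.  As organism gain
# k_o grows, the slope approaches -(k_e / k_f) = -0.5 regardless of
# the particular value of k_o -- the behavioral illusion.
for k_o in (1e3, 1e6, 1e9):
    slope = steady_state_output(1.0, k_e=2.0, k_f=4.0, k_o=k_o)
```

Varying k_o over six orders of magnitude barely moves the slope, while varying k_e or k_f moves it proportionally.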

True enough, but what does this have to do with an information analysis? We’re not using it to determine whether a particular variable is under control, but to analyze system performance in a system in which a particular variable is under control. Just as you would perform a frequency-domain analysis for a similar purpose.

BA: You apparently think that we have been making that claim.

RM: Well, then why use information theory as an analysis tool? When you measure the information in the output about the disturbance to a controlled variable you are treating the organism as a communication channel: communicating information about d to o. So you are measuring a characteristic of the organism. But PCT shows that the relationship between disturbance and output has nothing to do with the organism (when control is good). So what you are actually measuring is characteristics of the feedback and disturbance function. You think you are measuring a characteristic of the organism (its ability to transfer information about the disturbance to its output) but what you are actually measuring is the nature of the feedback and disturbance functions of a control loop. You (and Martin) have fallen for the behavioral illusion hook, line and sinker.

If you transmit a message over the phone and I measure the information I receive in the message, am I measuring a characteristic of the communication channel, or of the message received? Clearly, I am NOT measuring a characteristic of that channel (unless the channel imposes some limit on the information it transmits). In exactly the same way, when you measure the information in the output about the disturbance to a controlled variable, you are treating the organism as a communication channel, but you are NOT measuring a characteristic of the organism (unless the control system as a communication channel imposes some limit on the information it transmits).

Even more important for your assertion that the behavioral illusion is somehow in play here, you are NOT inferring that the function relating disturbance to output is a characteristic of the organism.

BA: I was hoping to gain some insight into how you have come to this conclusion by reading your answers to my two simple questions. Sadly, I am still awaiting them.

RM: I thought I answered them. But hopefully this post will give you some idea of how I came to the conclusion that you (and Martin) have fallen (willingly, apparently, since you guys know PCT) for the behavioral illusion.

No, it completely evades my questions. To save you the trouble of looking them up, there they are again:

Now, here’s a question for YOU, if you’re up to the challenge. When control is excellent, with high gain, and the reference signal is constant, how is it that the pattern of variation of the disturbance is mirrored by the pattern of variation of the feedback to the CV, without that pattern being evident in the error signal? (I suspect that this is the reason you don’t believe that the information content of the disturbance is transmitted via the error signal to the feedback variable.)

Follow-up question: In the case described above, the feedback waveform almost perfectly matches the disturbance waveform. In what sense is it that the feedback waveform tells you nothing about the disturbance waveform? (Nothing = no information in the disturbance waveform appears in the feedback waveform.)

Over to you, Rick!

Bruce

[From Rick Marken (2012.12.12.1730)]

Martin Taylor (2012.12.12.19.39)

      RM: To determine whether a particular variable is under control

using disturbances to the hypothetical controlled variable to
see if there is resistance (per “The Test”) what you should
look at is the relationship between disturbances and the
hypothetical controlled variable (as I do in my demo of the
test at http://www.mindreadings.com/ControlDemo/Mindread.html )
rather than the relationship between disturbances and outputs.

MT: Yep. No problem there. I don't think anyone has suggested using

information theory to determine whether a particular variable is
under control.

RM: Actually, Bruce did in the statement to which I was replying.

              BA:

Neither Martin nor I have claimed that the relation
between disturbance and output reveals anything about
the organism, other than the fact that it is
controlling the variable in question.

RM: It sounds to me like Bruce is saying that neither of you is using informational analysis of the relationship between disturbance and output for anything other than determining that the system (organism) is controlling the “variable in question” – i.e. the variable that is thought to be under control. In other words, Bruce says that you are not claiming to be using informational analysis for anything other than testing for controlled variables.

RM: That is, look at the relationship between d and q.i rather
than that between o and d. If the hypothetical controlled
variable, q.i, is indeed under control then there will be
little or no correlation between d and q.i. You could also
look for a high negative correlation between d and o (or, as
you say, information about d in o) but if you know about the
behavioral illusion – in linear form o = -(k.e/k.f)*d – you
would know that the size and sign of this correlation could be
influenced by variations in k.e and k.f. So you are better off
looking for a lack of correlation between disturbance and
hypothetical controlled input.

MT: All good. Actually, here is a case where an information approach
might be better, since the changes you characterize by the
multiplier “k” don’t affect the information measures, whereas they
do affect the variable values. Using the information measures, you
are effectively immune to the behavioural illusion under many
circumstances.

RM: Please show me how the informational measures are not changed by changes in k.e and k.f.
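For what it's worth, Martin's claim can be illustrated numerically: mutual information is unchanged by any invertible transformation of either variable, and multiplying o by a constant gain is invertible. A sketch using a simple binned plug-in estimator (the estimator, bin count, and test signal are my choices, not anything from the thread):

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys, bins=8):
    # Plug-in estimate of I(X;Y) in bits from paired samples,
    # using equal-width bins over each variable's observed range.
    def discretize(vs):
        lo, hi = min(vs), max(vs)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in vs]
    bx, by = discretize(xs), discretize(ys)
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log2(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

random.seed(1)
d = [random.gauss(0.0, 1.0) for _ in range(2000)]
o = [-0.5 * v + random.gauss(0.0, 0.1) for v in d]   # o ~ -(k.e/k.f)*d

# Rescaling o -- the effect of changing a loop gain -- leaves the MI
# estimate essentially unchanged, because the bin partition rescales
# with the data; only the variable values change, not the information.
mi1 = mutual_information(d, o)
mi2 = mutual_information(d, [7.3 * v for v in o])
```

The same rescaling would change the regression slope between d and o by a factor of 7.3, which is exactly Rick's point about the correlation-based view; the information measure is indifferent to it.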

RM: Well, then why use information theory as an analysis tool?
When you measure the information in the output about the
disturbance to a controlled variable you are treating the
organism as a communication channel: communicating information
about d to o. So you are measuring a characteristic of the
organism.

MT: Yes. That's true. But we haven't been making that claim, even though
we could. It may become interesting at some point, but at the moment
we are just laying the groundwork, in the same way that the algebra
of static control systems lays the groundwork for what you might see
an infinite time after a step change in the disturbance or
reference.

RM: Well, then like Wile E. Coyote you are laying the groundwork for going into a trompe l’oeil tunnel.

RM: But PCT shows that the relationship between disturbance and
output has nothing to do with the organism (when control is
good).

MT: PCT doesn’t show that.

RM: Does so;-) That’s what the behavioral illusion equation shows: o = -(k.e/k.f) d. The organism function, k.o, does not appear in the relationship between disturbance and output.

MT: It happens to be a fact of control systems

RM: No, it’s not a fact.

MT: ...demonstrable through the equations

RM: Only if you leave out the closed loop effects of output on itself;-)

Best

Rick

···


Richard S. Marken PhD

rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2012.12.12.1740)]

[Martin Taylor 2012.12.12.20.28]

RM: OK, I’ve put a little tracking demo up on the net at

http://www.mindreadings.com/ControlDemo/InfoDist.html

MT: Firefox says that some plugins are needed to view this demo. I don't
like to install unknown plugins, so do you know what those plugins
might be?

RM: I think it must be Java. I didn’t know that was considered a plug-in.

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Re: Ashby's Law of Requisite Variety
[From Erling Jorgensen (2012.12.12.2100 EST)]

It has been hard to keep up with this discussion in real time. And by now
it is tempting just to shrug & figure the discussion has moved on. But on
the other hand, it seems now is the time to get some of my concerns out on
the table, rather than wait (another few years) until this periodic debate
about information theory cycles back around onto CSGNet.

My background is not mathematics. While I have a Ph.D. degree, I always
have to follow closely those who can translate the mathematical concepts
into more intuitive ways of understanding them. However, there are some
points, & indeed fallacies, that have only been lightly touched upon in
this discussion. And I need to see if my understanding matches that of
others.

Fred Nickols (2012.12.2.1530 AZ) initiated the discussion by asking about
how Ashby's "law" fits in with PCT, if at all. Subsequently, Bruce Abbott
(2012.12.3.1910 EST) gave the question a bit more rigor by pointing us to:

Ashby W.R. (1958) "Requisite variety and its implications for
the control of complex systems,"
Cybernetica 1:2, p. 83-99 (available online at
http://pcp.vub.ac.be/books/AshbyReqVar.pdf), in which Ashby shows that his
Law of Requisite Variety is closely related to Shannon's Theorem 10.

When I read through Ashby's paper, a huge red flag reared up for me as he
discussed the error-controlled regulator. On page 9, he stated:

"To go to the extreme: if the regulator is totally successful, the error
will be zero unvaryingly, and the regulator will thus be cut off totally
from the information (about D's value) that alone can make it successful
--which is absurd. The error-controlled regulator is thus fundamentally
incapable of being 100 percent efficient."

While trying to make an argument about so-called perfect control, Ashby
slipped in the misunderstanding that the only thing that makes control
possible is knowledge about the disturbance. This is certainly not how
a basic PCT negative feedback control system goes about its business.

Moreover, I think Ashby's formulation contributed to how some of the
subsequent discussion on CSGNet played itself out, with folks talking at
cross purposes about the notion of "information." Some viewed it almost
in a substantive sense, affecting (supposedly) the control loop itself,
while others insisted it was only a theoretical tool of analysis for the
outside observer.

There is also a logical fallacy, I believe, in Ashby's argument, & that
seemed to get picked up by others here in this discussion. An initial
confusion occurred when looking at what happens when the error signal goes
to zero. For a while, others here continued Ashby's mistake that zero
error meant a zero output signal, which sounds natural enough but it
misunderstands how the control equations work. Zero error simply means
_no further change_ in the output signal. This point was first noted by
Rick Marken (2012.12.05.1000), but it may have gotten lost in some of the
wording he used.

Bruce Abbott (2012.12.05.0740) tried to paraphrase Ashby's argument this
way:

If the output of the system perfectly cancelled the effect of the
disturbance, the error signal would never vary from zero. Consequently,
there would be no variation in the output to cancel out variation in the
CV due to the disturbance. Conclusion: perfect control in such a system is
impossible.

The fallacy here is that time has been removed from the analysis. Notice
the word "never." In this postulated perfect control system, "the error
signal would never vary from zero," supposedly meaning "no variation in
the output," but really only signifying no _further_ variation in the
output.

But then time is reintroduced into the situation, by considering variation
coming from the disturbance. The disturbance is now allowed to vary, but
without allowing subsequent variation in the error signal & the output.
This is a logical fallacy, leading to an erroneous set of conclusions.

Stable PCT control systems converge towards perfect control, but time has
to remain part of the analysis. Any variation from the disturbance
(leading to perturbation of the controlled variable) is counteracted by
the effects of the output. As Bill Powers (2012.12.05.0950) noted:

If the output function is an integrator, the output variable becomes a
time series which converges toward a final value of zero error.
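The distinction between "zero output" and "no further change in output" is easy to demonstrate: give a discrete-time loop with a pure-integrator output function a step disturbance and watch the output settle at the value that cancels it while the error decays to zero. A minimal sketch with gains of my own choosing:

```python
def run_loop(d=1.0, r=0.0, gain=10.0, dt=0.01, steps=5000):
    # Loop: q_i = o + d ; e = r - q_i ; do/dt = gain * e
    # (feedback and disturbance functions are multiplications by 1.0,
    # the output function is a pure integrator).
    o = 0.0
    e = r - d
    for _ in range(steps):
        e = r - (o + d)      # error
        o += gain * e * dt   # integrate the error
    return o, e

o, e = run_loop()
# o settles at -d (here -1.0), exactly cancelling the disturbance;
# e decays to 0, and with zero error the integrator simply holds o
# constant -- "no further change", not "no output".
```

If the disturbance then changes, the error becomes nonzero again and the integrator resumes moving the output, which is Erling's point about keeping time in the analysis.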

"Instant & perfect control" is a fiction, leading to a static analysis &
wrong conclusions. One of those wrong conclusions was that of Ashby, that
an error-controlled regulator would not be as effective as one driven
directly by the disturbance.

When the discussion turned more to Information Theory, the similarity
between a control loop & a communication channel was noted. But it
included confusing language about the information channel being "blocked"
in that (fictional) perfect control system where the error never varied
-- see, for example, Bruce Abbott (2012.12.05.0740). I don't understand
why we wouldn't simply say that the channel is 'not currently conveying
information.' It is not a static, never-changing situation. Just
reintroduce variation into the disturbance, & the error signal starts
changing again.

A more substantive confusion arose for me with this notion of "information
about the disturbance" (appearing in numerous posts). The technical
understanding of information here means reduction in uncertainty. So
isn't the error signal conveying information (i.e., reducing uncertainty)
about the _perception_, not the disturbance, by saying the perception is
not yet equal to the reference?

That's how I read Bruce Abbott (2012.12.05.1545), when he talked about the
error signal:

This signal transmits information to the output function, in that its
values reduce uncertainty about the current state of the perceptual
signal, relative to that of the reference signal.

This even works for the non-static situation of temporary perfect control.
When control is perfect (or we should say _while_ control is perfect, to
emphasize that the situation may change), all uncertainty has been
eliminated, because the perception equals the reference. For as long
as that condition persists, there is no more information to convey. As
soon as the disturbance changes, there is again uncertainty in the form
of the perception not being equal to the reference, which the control
system will work to reduce.

I guess I can also see that while control is perfect, then that means
compensation for the disturbance by the output is also perfect. In that
sense, all the variation of the disturbance is being (inversely) captured
in the output, with no further uncertainty to reduce. But I would almost
say, in that situation, that the flow of information through the channel
of the error signal has been fulfilled, rather than blocked. An alternate
conceptualization, which I found helpful, was that of Martin Taylor
(2012.12.08.1132), that in such a situation the output has perfectly
"measured" the disturbance, not by knowing about it but by arriving at
a perfect counterbalancing amount.

There is a side point that perhaps should be raised regarding colloquial
usage of some of these terms. This is not a claim that I see any of the
participants in this discussion making, but it is a confusion that can
easily arise nonetheless. It is embedded in that phrase "information
about the disturbance."

There is a veritable world hidden behind that word "about," which at least
in ordinary language gives a very misleading impression. Bill Powers
(2012.12.12.0315 MST), in his recent post, anticipates some of where I
was going in my thinking.

Consider the prototypical control example of driving a car, where there
is a crosswind, irregular surface of the road, & uneven tire pressure.
All of those features can cause disturbances to the perception of keeping
the front of the car visually centered in the lane (or thereabouts).

The corrective action to counter those disturbances is brought to bear
by lateral or angular muscular force exerted on the steering wheel.
This is the output (along with all the implementing muscular contractions,
etc.) that is then amplified by the environmental feedback function of
the steering hydraulics & linkage, etc. It is a linguistic stretch to
think that my arm muscles know much of anything "about" winds or tires
or roads.

At most, the pattern of my muscle tensions reproduces something about the
net fluctuating forces (plural) occurring at one point of measurement,
that of the effect on the visual position of the car relative to the
lane. So that vast world of potential disturbances is only reduced as
to its uncertainty at one tiny point of impact, where all those net forces
converge to affect the standing of a controlled perceptual variable.

I think this is similar to the distinction Bill just tried to make, that
maybe "disturbance" should refer to the cause & "perturbation" should
refer to the effect on a controlled variable. If so, then at best the
output accumulates information (i.e., reduced uncertainty) about the net
perturbation.

As an aside, I rather hope we can find a better term than "perturbation,"
seeing as PCT already has an image & marketing problem, so to speak. But
the distinction is an important one, if we want to avoid
misunderstandings.
Personally, I think we should use "disturbance" for the point of impact
on a controlled variable, & "source of disturbances" for what lies behind
them.

At any rate, I needed to put some of my thoughts on this topic into
coherent form. I hope it is not too dated, given how the conversation
has since evolved. I'd appreciate hearing back if any of my formulations
here seem useful to anybody.

All the best,
Erling

[From Rick Marken (2012.12.12.1835)]

[From Bruce Abbott (2012.12.12.2030 EST)]

RM: To determine whether a particular variable is under control using disturbances to the hypothetical controlled variable to see if there is resistance (per “The Test”) what you should look at is the relationship between disturbances and the hypothetical controlled variable (as I do in my demo of the test at http://www.mindreadings.com/ControlDemo/Mindread.html) rather than the relationship between disturbances and outputs…

BA: True enough, but what does this have to do with an information analysis?

RM: You said "Neither Martin nor I have claimed that the relation between disturbance and output reveals anything about the organism, other than the fact that it is controlling the variable in question", which sounded to me like you use informational analysis to test for controlled variables.

RM: When you measure the information in the output about the disturbance to a controlled variable you are treating the organism as a communication channel: communicating information about d to o. So you are measuring a characteristic of the organism. But PCT shows that the relationship between disturbance and output has nothing to do with the organism (when control is good). So what you are actually measuring is characteristics of the feedback and disturbance function. You think you are measuring a characteristic of the organism (its ability to transfer information about the disturbance to its output) but what you are actually measuring is the nature of the feedback and disturbance functions of a control loop. You (and Martin) have fallen for the behavioral illusion hook, line and sinker.

BA: If you transmit a message over the phone and I measure the information I receive in the message, am I measuring a characteristic of the communication channel, or of the message received? Clearly, I am NOT measuring a characteristic of that channel (unless the channel imposes some limit on the information it transmits).

RM: So you are using information theory to measure a characteristic of the disturbance, which is analogous to the message when you are analyzing the information about the disturbance in the output. How much do you learn about the message in the little demo I just posted? (http://www.mindreadings.com/ControlDemo/InfoDist.html)

BA: I was hoping to gain some insight into how you have come to this conclusion [that you and Martin have fallen completely for the behavioral illusion] by reading your answers to my two simple questions. Sadly, I am still awaiting them.

RM: I thought I answered them. But hopefully this post will give you some idea of how I came to the conclusion that you (and Martin) have fallen (willingly, apparently, since you guys know PCT) for the behavioral illusion.

BA: No, it completely evades my questions. To save you the trouble of looking them up, here they are again:

BA: Now, here’s a question for YOU, if you’re up to the challenge. When control is excellent, with high gain, and the reference signal is constant, how is it that the pattern of variation of the disturbance is mirrored by the pattern of variation of the feedback to the CV, without that pattern being evident in the error signal?

RM: I do remember answering them. You probably just didn’t like my answers. I’ll try again.

The first thing to understand is that the mirroring of output (which is what I presume you mean by feedback to the CV) and disturbance occurs only when the feedback and disturbance functions are constants equal to 1.0 (as they typically are in our tracking tasks). The disturbance pattern will be evident in the error signal in this case (though at a phase lag, I believe) if there is no noise in the loop. If there is noise in the loop (as there is in normal human controlling) then the disturbance pattern will not be evident at all in the error signal; the closed-loop integration filters out this noise so that control is still nearly perfect; the output nearly perfectly mirrors the disturbance despite this noise. The reason this works, I believe, is that the noise acts like an additional disturbance to the controlled variable. So, for example, if the noise, n, is added to the error signal, the output will be o + n. So the input will be o + d + n and the error signal will be r - (o + d + n), and the output driven by this error will in effect cancel out its own noise (that's the closed-loop filtering process).
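The noise argument above can be sketched in a short simulation. This is a minimal illustration, not anyone's actual demo: the parameters (integrator gain 50, dt = 0.01, a slow sine disturbance, Gaussian noise injected into the error signal) are arbitrary choices.

```python
import math
import random

def run_loop(dist, noise_sd=0.0, gain=50.0, dt=0.01, r=0.0, seed=1):
    """One elementary control loop with unit feedback and disturbance
    functions: qi = o + d, e = r - qi, integrator output function.
    Optional Gaussian noise is injected into the error signal."""
    rng = random.Random(seed)
    o, out = 0.0, []
    for d in dist:
        qi = o + d                    # controlled input
        e = r - qi                    # error signal
        n = rng.gauss(0.0, noise_sd)  # noise added to the error
        o += gain * (e + n) * dt      # integration of the (noisy) error
        out.append(o)
    return out

def rms_mismatch(out, dist):
    """RMS of (o + d): zero when the output exactly mirrors -disturbance."""
    return math.sqrt(sum((o + d) ** 2 for o, d in zip(out, dist)) / len(out))

dist = [math.sin(2 * math.pi * t / 400) for t in range(2000)]
clean = run_loop(dist)
noisy = run_loop(dist, noise_sd=0.2)

# skip the startup transient before comparing
mismatch_clean = rms_mismatch(clean[200:], dist[200:])
mismatch_noisy = rms_mismatch(noisy[200:], dist[200:])
```

Even with substantial noise on the error signal, the output still closely mirrors the inverted disturbance; the loop integrates away most of the noise it injects into itself.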

BA: Follow-up question: In the case described above, the feedback waveform almost perfectly matches the disturbance waveform. In what sense is it that the feedback waveform tells you nothing about the disturbance waveform? (Nothing = no information in the disturbance waveform appears in the feedback waveform.)

RM: Because the observed output waveform is not necessarily a mirror of the disturbance. For example, suppose the output waveform that you observe is a perfect sine wave. You say that this means that the disturbance is also a sine wave, but 180 degrees out of phase. But it could be that the disturbance is a sine wave that is 0 degrees out of phase with the output. This would be true if, unknown to you, the feedback function were -1. Or if the disturbance function were -1. Or the disturbance might be a sine and a cosine wave that add up to produce the total disturbance to the CV. There are many different reasons why you are seeing a particular output waveform other than that it is a mirror of a disturbance variable. Consider again, for example, my area/perimeter control study. There we see a change in the output waveform with no change at all in the disturbance variable; the change is a result of controlling a different perception.

So these are the reasons why I would say that, in general, the output of a control system contains no information about the disturbance(s) to the variable being controlled by the system.
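The sine-wave ambiguity described above is easy to reproduce numerically. A sketch with made-up parameters: two different environments (an ordinary +1 feedback function with an inverted-sine disturbance, versus a -1 feedback function with the sine itself as disturbance) produce essentially the same output waveform, so the output alone cannot tell you which disturbance was acting.

```python
import math

def simulate(dist, kf=1.0, kd=1.0, gain=50.0, dt=0.01, r=0.0):
    """Loop with explicit feedback (kf) and disturbance (kd) functions.
    The sign of `gain` must match the sign of kf so the overall loop
    feedback stays negative (otherwise the loop is unstable)."""
    o, out = 0.0, []
    for d in dist:
        qi = kf * o + kd * d
        e = r - qi
        o += gain * e * dt
        out.append(o)
    return out

N = 2000
sine = [math.sin(2 * math.pi * t / 400) for t in range(N)]

# Case A: feedback function +1, disturbance is an inverted sine
out_a = simulate([-s for s in sine], kf=1.0, gain=50.0)
# Case B: feedback function -1, disturbance is the sine itself
out_b = simulate(sine, kf=-1.0, gain=-50.0)

# the two observed output waveforms are indistinguishable
max_diff = max(abs(a - b) for a, b in zip(out_a, out_b))
```

Both outputs closely track the same sine wave, yet the two disturbances are exact opposites of each other.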

Hope I passed the audition.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.12.23.10]

[From Rick Marken (2012.12.12.1730)]

Martin Taylor (2012.12.12.19.39)

MT: Yep. No problem there. I don't think anyone has suggested using information theory to determine whether a particular variable is under control.

RM: Actually, Bruce did in the statement to which I was replying.

BA: Neither Martin nor I have claimed that the relation between disturbance and output reveals anything about the organism, other than the fact that it is controlling the variable in question.

RM: It sounds to me like Bruce is saying that neither of you is using informational analysis of the relationship between disturbance and output for anything other than determining that the system (organism) is controlling the “variable in question” – i.e., the variable that is thought to be under control. In other words, Bruce says that you are not claiming to be using informational analysis for anything other than testing for controlled variables.

Funny, I can't see anything in what you quote that says anything about information analysis. I think what he said would apply equally to correlational analysis or "The Test". He mentions "the relationship between disturbance and output".

RM: That is, look at the relationship between o and q.i rather than that between o and d. If the hypothetical controlled variable, q.i, is indeed under control then there will be little or no correlation between d and q.i. You could also look for a high negative correlation between d and o (or, as you say, information about d in o) but if you know about the behavioral illusion – in linear form o = -(k.e/k.f)*d – you would know that the size and sign of this correlation could be influenced by variations in k.e and k.f. So you are better off looking for a lack of correlation between disturbance and hypothetical controlled input.

MT: All good. Actually, here is a case where an information approach might be better, since the changes you characterize by the multiplier “k” don’t affect the information measures, whereas they do affect the variable values. Using the information measures, you are effectively immune to the behavioural illusion under many circumstances.

RM: Please show me how the informational measures are not changed by changes in k.e and k.o.

If y = x, then if you know x = 2, you know y = 2.

If y = 2x, then if you know x = 2, you know y = 4.

If y = kx, then if you know x = 2, you know y = 2k.
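Martin's point is an instance of a standard information-theoretic fact: mutual information is unchanged by any invertible relabeling of a variable's values, linear scaling by a nonzero k included, even though the values themselves change. A toy illustration (the alphabet size and sample count are arbitrary choices):

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

rng = random.Random(0)
xs = [rng.randrange(8) for _ in range(4000)]          # 3 bits of variety
mi_k1 = mutual_information(xs, [x for x in xs])       # y = x
mi_k5 = mutual_information(xs, [-5 * x for x in xs])  # y = -5x
```

Knowing x tells you y = 2 in one case and y = -10 in the other, but the information carried (here, the full 3 bits of the source) is identical for both multipliers.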

Martin
···
RM: Well, then why use information theory as an analysis tool? When you measure the information in the output about the disturbance to a controlled variable you are treating the organism as a communication channel: communicating information about d to o. So you are measuring a characteristic of the organism.

MT: Yes. That's true. But we haven't been making that claim, even though we could. It may become interesting at some point, but at the moment we are just laying the groundwork, in the same way that the algebra of static control systems lays the groundwork for what you might see an infinite time after a step change in the disturbance or reference.

RM: Well, then like Wile E. Coyote you are laying the groundwork for going into a trompe l’oeil tunnel.

RM: But PCT shows that the relationship between disturbance and output has nothing to do with the organism (when control is good).

MT: PCT doesn’t show that.

RM: Does so;-) That's what the behavioral illusion equation shows: o = -(k.e/k.f) d. The organism function, k.o, does not appear in the relationship between disturbance and output.
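The claim can be checked numerically. A sketch of the linear static case with made-up coefficients (k.e = 2, k.f = 0.5, and two organism gains k.o that differ by a factor of ten): with good control, the steady-state output settles at -(k.e/k.f)*d regardless of k.o.

```python
def steady_output(d, ke, kf, ko, steps=20000, dt=0.001, r=0.0):
    """Iterate qi = kf*o + ke*d, e = r - qi, o += ko*e*dt until the
    loop settles, then return the steady-state output."""
    o = 0.0
    for _ in range(steps):
        e = r - (kf * o + ke * d)
        o += ko * e * dt
    return o

d = 3.0
o_low  = steady_output(d, ke=2.0, kf=0.5, ko=100.0)   # weak organism gain
o_high = steady_output(d, ke=2.0, kf=0.5, ko=1000.0)  # strong organism gain
predicted = -(2.0 / 0.5) * d                          # -(k.e/k.f)*d = -12
```

The organism function k.o sets how fast the loop settles, but the settled relationship between d and o is fixed by the environment functions alone.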

MT: It happens to be a fact of control systems

RM: No, it's not a fact.

MT: ...demonstrable through the equations

RM: Only if you leave out the closed loop effects of output on itself;-)

      Best



      Rick
  --

  Richard S. Marken PhD

  rsmarken@gmail.com

  [www.mindreadings.com](http://www.mindreadings.com)

[Martin Taylor 2012.12.12.23.16]

[From Rick Marken (2012.12.12.1740)]

[Martin Taylor 2012.12.12.20.28]

RM: OK, I've put a little tracking demo up on the net at [http://www.mindreadings.com/ControlDemo/InfoDist.html](http://www.mindreadings.com/ControlDemo/InfoDist.html)

MT: Firefox says that some plugins are needed to view this demo. I don’t like to install unknown plugins, so do you know what those plugins might be?

RM: I think it must be Java. I didn't know that was considered a plug-in.

That might be it. Java now being considered a security risk, I think system updates have removed it. I’ll look and see if I can get it back.

Martin

[From Rick Marken (2012.12.12.2140)]

Martin Taylor (2012.12.12.23.10)

RM: That is, look at the relationship between o and q.i rather than that between o and d. If the hypothetical controlled variable, q.i, is indeed under control then there will be little or no correlation between d and q.i. You could also look for a high negative correlation between d and o (or, as you say, information about d in o) but if you know about the behavioral illusion – in linear form o = -(k.e/k.f)*d – you would know that the size and sign of this correlation could be influenced by variations in k.e and k.f. So you are better off looking for a lack of correlation between disturbance and hypothetical controlled input.

MT: All good. Actually, here is a case where an information approach might be better, since the changes you characterize by the multiplier “k” don’t affect the information measures, whereas they do affect the variable values. Using the information measures, you are effectively immune to the behavioural illusion under many circumstances.

RM: Please show me how the informational measures are not changed by changes in k.e and k.o.

MT: If y = x, then if you know x = 2, you know y = 2.

If y = 2x, then if you know x = 2, you know y = 4.

If y = kx, then if you know x = 2, you know y = 2k.

RM: Hey, I’ve got an idea. Why don’t we start the day off right tomorrow with you showing me how you would measure information in a simple control task like the one I just posted (http://www.mindreadings.com/ControlDemo/InfoDist.html) or the one at my web site (http://www.mindreadings.com/ControlDemo/BasicTrack.html). Let’s get tangible!
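For what it's worth, here is one concrete way such a measurement might be set up (an illustration, not Martin's actual method): simulate a tracking run, log the disturbance and output series, and apply a plug-in mutual-information estimate after binning. The bin count and loop parameters are arbitrary choices, and plug-in estimates of this kind are biased upward for small samples.

```python
import math
from collections import Counter

def track(dist, gain=50.0, dt=0.01):
    """Compensatory tracking loop (r = 0, unit feedback and disturbance
    functions); returns the paired (disturbance, output) record."""
    o, log = 0.0, []
    for d in dist:
        e = -(o + d)
        o += gain * e * dt
        log.append((d, o))
    return log

def binned_mi(pairs, bins=8):
    """Plug-in mutual information (bits) after equal-width binning --
    one of many possible estimator choices."""
    def idx(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    ds, os_ = zip(*pairs)
    bd = [idx(v, min(ds), max(ds)) for v in ds]
    bo = [idx(v, min(os_), max(os_)) for v in os_]
    n, cj = len(pairs), Counter(zip(bd, bo))
    cd, co = Counter(bd), Counter(bo)
    return sum((c / n) * math.log2(c * n / (cd[x] * co[y]))
               for (x, y), c in cj.items())

dist = [math.sin(2 * math.pi * t / 500) for t in range(3000)]
mi_bits = binned_mi(track(dist)[200:])   # skip the startup transient
```

When control is good the output nearly mirrors the inverted disturbance, so the binned estimate comes out near the full entropy of the binned disturbance.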

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2012.12.12.2200)]

Erling Jorgensen (2012.12.12.2100 EST)–

EJ: It has been hard to keep up with this discussion in real time. And by now it is tempting just to shrug & figure the discussion has moved on. But on the other hand, it seems now is the time to get some of my concerns out on the table, rather than wait (another few years) until this periodic debate about information theory cycles back around onto CSGNet.

RM: Thanks for sticking with it, Erling. I know it’s been pretty intense and fast but it’s been enormously worth it to me. Even after 30+ years of this I learn new stuff (or re-learn old stuff more deeply). This discussion - and the little demos I built based on it – has helped me understand the “behavioral illusion” as never before. I knew PCT was revolutionary but now I see just how fundamentally revolutionary it is. Maybe it would be less offensive to conventional psychologists to say that they are studying side effects of control rather than an illusion. But that’s really what is going on. Studying the relationship between environmental variables (disturbances) and behavioral responses (output) is just a dead end. No wonder PCT is despised and rejected (I’m doing our annual sing-a-long Messiah next week so that just clicked in;-) I would be astounded if PCT becomes mainstream psychology in my lifetime (though I would not object if it did). In the meantime I’ll just do my best to get that to happen.

There’s not much that I disagree with in your nice post. All I have to say is just hang in there. I think this apparently very academic, mathematical discussion about information theory really gets to the blood and guts, so to speak, of PCT. The illusion of a link (causal, informational, whatever) from disturbance to output when observing the behavior of a control system is a startling consequence of the fact that behavior is the control of perception.

Best

Rick

···

My background is not mathematics. While I have a Ph.D. degree, I always have to follow closely those who can translate the mathematical concepts into more intuitive ways of understanding them. However, there are some points, & indeed fallacies, that have only been lightly touched upon in this discussion. And I need to see if my understanding matches that of others.

Fred Nickols (2012.12.2.1530 AZ) initiated the discussion by asking about how Ashby’s “law” fits in with PCT, if at all. Subsequently, Bruce Abbott (2012.12.3.1910 EST) gave the question a bit more rigor by pointing us to:

Ashby W.R. (1958) "Requisite variety and its implications for the control of complex systems," Cybernetica 1:2, p. 83-99 (available online at http://pcp.vub.ac.be/books/AshbyReqVar.pdf), in which Ashby shows that his Law of Requisite Variety is closely related to Shannon’s Theorem 10.

When I read through Ashby’s paper, a huge red flag reared up for me as he discussed the error-controlled regulator. On page 9, he stated:

"To go to the extreme: if the regulator is totally successful, the error will be zero unvaryingly, and the regulator will thus be cut off totally from the information (about D’s value) that alone can make it successful –which is absurd. The error-controlled regulator is thus fundamentally incapable of being 100 percent efficient."

While trying to make an argument about so-called perfect control, Ashby slipped in the misunderstanding that the only thing that makes control possible is knowledge about the disturbance. This is certainly not how a basic PCT negative feedback control system goes about its business.

Moreover, I think Ashby’s formulation contributed to how some of the subsequent discussion on CSGNet played itself out, with folks talking at cross purposes about the notion of “information.” Some viewed it almost in a substantive sense, affecting (supposedly) the control loop itself, while others insisted it was only a theoretical tool of analysis for the outside observer.

There is also a logical fallacy, I believe, in Ashby’s argument, & that seemed to get picked up by others here in this discussion. An initial confusion occurred when looking at what happens when the error signal goes to zero. For a while, others here continued Ashby’s mistake that zero error meant a zero output signal, which sounds natural enough but it misunderstands how the control equations work. Zero error simply means no further change in the output signal. This point was first noted by Rick Marken (2012.12.05.1000), but it may have gotten lost in some of the wording he used.

Bruce Abbott (2012.12.05.0740) tried to paraphrase Ashby’s argument this way:

If the output of the system perfectly cancelled the effect of the disturbance, the error signal would never vary from zero. Consequently, there would be no variation in the output to cancel out variation in the CV due to the disturbance. Conclusion: perfect control in such a system is impossible.

The fallacy here is that time has been removed from the analysis. Notice the word “never.” In this postulated perfect control system, "the error signal would never vary from zero," supposedly meaning "no variation in the output," but really only signifying no further variation in the output.

But then time is reintroduced into the situation, by considering variation coming from the disturbance. The disturbance is now allowed to vary, but without allowing subsequent variation in the error signal & the output. This is a logical fallacy, leading to an erroneous set of conclusions.

Stable PCT control systems converge towards perfect control, but time has to remain part of the analysis. Any variation from the disturbance (leading to perturbation of the controlled variable) is counteracted by the effects of the output. As Bill Powers (2012.12.05.0950) noted:

If the output function is an integrator, the output variable becomes a time series which converges toward a final value of zero error.
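The point Erling draws from Powers here can be illustrated in a few lines (illustrative gain and step size; a unit step disturbance): the error converges to zero while the output converges to a nonzero value that cancels the disturbance, so zero error does not mean zero output.

```python
def step_response(gain=10.0, dt=0.01, steps=500, r=0.0, d=1.0):
    """Integrator output function driven by error against a step
    disturbance d: zero error means the output stops changing,
    not that the output is zero."""
    o, errors, outputs = 0.0, [], []
    for _ in range(steps):
        e = r - (o + d)         # unit feedback and disturbance functions
        o += gain * e * dt      # output changes only while e != 0
        errors.append(e)
        outputs.append(o)
    return errors, outputs

errors, outputs = step_response()
```

The error starts at -1, decays toward zero, and the output settles at -1, exactly cancelling the disturbance while remaining nonzero.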

“Instant & perfect control” is a fiction, leading to a static analysis & wrong conclusions. One of those wrong conclusions was that of Ashby, that an error-controlled regulator would not be as effective as one driven directly by the disturbance.

When the discussion turned more to Information Theory, the similarity between a control loop & a communication channel was noted. But it included confusing language about the information channel being “blocked” in that (fictional) perfect control system where the error never varied – see, for example, Bruce Abbott (2012.12.05.0740). I don’t understand why we wouldn’t simply say that the channel is 'not currently conveying information.’ It is not a static, never-changing situation. Just reintroduce variation into the disturbance, & the error signal starts changing again.

A more substantive confusion arose for me with this notion of "information about the disturbance" (appearing in numerous posts). The technical understanding of information here means reduction in uncertainty. So isn’t the error signal conveying information (i.e., reducing uncertainty) about the perception, not the disturbance, by saying the perception is not yet equal to the reference?

That’s how I read Bruce Abbott (2012.12.05.1545), when he talked about the error signal:

This signal transmits information to the output function, in that its values reduce uncertainty about the current state of the perceptual signal, relative to that of the reference signal.

This even works for the non-static situation of temporary perfect control. When control is perfect (or we should say while control is perfect, to emphasize that the situation may change), all uncertainty has been eliminated, because the perception equals the reference. For as long as that condition persists, there is no more information to convey. As soon as the disturbance changes, there is again uncertainty in the form of the perception not being equal to the reference, which the control system will work to reduce.

I guess I can also see that while control is perfect, then that means compensation for the disturbance by the output is also perfect. In that sense, all the variation of the disturbance is being (inversely) captured in the output, with no further uncertainty to reduce. But I would almost say, in that situation, that the flow of information through the channel of the error signal has been fulfilled, rather than blocked. An alternate conceptualization, which I found helpful, was that of Martin Taylor (2012.12.08.1132), that in such a situation the output has perfectly “measured” the disturbance, not by knowing about it but by arriving at a perfect counterbalancing amount.

There is a side point that perhaps should be raised regarding colloquial usage of some of these terms. This is not a claim that I see any of the participants in this discussion making, but it is a confusion that can easily arise nonetheless. It is embedded in that phrase "information about the disturbance."

There is a veritable world hidden behind that word “about,” which at least in ordinary language gives a very misleading impression. Bill Powers (2012.12.12.0315 MST), in his recent post, anticipates some of where I was going in my thinking.

Consider the prototypical control example of driving a car, where there is a crosswind, irregular surface of the road, & uneven tire pressure. All of those features can cause disturbances to the perception of keeping the front of the car visually centered in the lane (or thereabouts).

The corrective action to counter those disturbances is brought to bear by lateral or angular muscular force exerted on the steering wheel. This is the output (along with all the implementing muscular contractions, etc.) that is then amplified by the environmental feedback function of the steering hydraulics & linkage, etc. It is a linguistic stretch to think that my arm muscles know much of anything “about” winds or tires or roads.

At most, the pattern of my muscle tensions reproduces something about the net fluctuating forces (plural) occurring at one point of measurement, that of the effect on the visual position of the car relative to the lane. So that vast world of potential disturbances is only reduced as to its uncertainty at one tiny point of impact, where all those net forces converge to affect the standing of a controlled perceptual variable.

I think this is similar to the distinction Bill just tried to make, that maybe “disturbance” should refer to the cause & “perturbation” should refer to the effect on a controlled variable. If so, then at best the output accumulates information (i.e., reduced uncertainty) about the net perturbation.

As an aside, I rather hope we can find a better term than “perturbation,” seeing as PCT already has an image & marketing problem, so to speak. But the distinction is an important one, if we want to avoid misunderstandings.

Personally, I think we should use “disturbance” for the point of impact on a controlled variable, & “source of disturbances” for what lies behind them.

At any rate, I needed to put some of my thoughts on this topic into coherent form. I hope it is not too dated, given how the conversation has since evolved. I’d appreciate hearing back if any of my formulations here seem useful to anybody.

All the best,

Erling


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Fred Nickols (2012.12.13.0642 AZ)]

I’m with Erling. The term “disturbance” should be used to refer to the effect or impact on the controlled variable. In the driving example, the “disturbance” is the sideways force of the wind being exerted on the auto. The source of that disturbance is the wind. If I’m being jostled and lose my balance, the disturbance is the lateral force that pushed me off center; the source of that could be a person bumping into me (and he/she could in turn have been bumped by someone else). Maybe, just as I use proximate and ultimate to refer to controlled variables that are linked or connected, perhaps we need to think of proximate and ultimate disturbances. Hmm. Where did that wind come from?

Best regards,

Fred Nickols

Distance Consulting LLC

www.nickols.us

···

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Richard Marken
Sent: Wednesday, December 12, 2012 11:04 PM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Ashby’s Law of Requisite Variety

[From Rick Marken (2012.12.12.2200)]

Erling Jorgensen (2012.12.12.2100 EST)–

EJ: It has been hard to keep up with this discussion in real time. And by now
it is tempting just to shrug & figure the discussion has moved on. But on
the other hand, it seems now is the time to get some of my concerns out on
the table, rather than wait (another few years) until this periodic debate
about information theory cycles back around onto CSGNet.

RM: Thanks for sticking with it, Erling. I know it’s been pretty intense and fast but it’s been enormously worth it to me. Even after 30+ years of this I learn new stuff (or re-learn old stuff more deeply). This discussion - and the little demos I built based on it – has helped me understand the “behavioral illusion” as never before. I knew PCT was revolutionary but now I see just how fundamentally revolutionary it is. Maybe it would be less offensive to conventional psychologists to say that they are studying side effects of control rather than an illusion. But that’s really what is going on. Studying the relationship between environmental variables (disturbances) and behavioral responses (output) is just a dead end. No wonder PCT is despised and rejected (I’m doing our annual sing-a-long Messiah next week so that just clicked in;-) I would be astounded if PCT becomes mainstream psychology in my lifetime (though I would not object if it did). In the meantime I’ll just do my best to get that to happen.

There’s not much that I disagree with in your nice post. All I have to say is just hang in there. I think this apparently very academic, mathematical discussion about information theory really gets to the blood and guts, so to speak, of PCT. The illusion of a link (causal, informational, whatever) from disturbance to output when observing the behavior of a control system is a startling consequence of the fact that behavior is the control of perception.

Best

Rick

My background is not mathematics. While I have a Ph.D. degree, I always
have to follow closely those who can translate the mathematical concepts
into more intuitive ways of understanding them. However, there are some
points, & indeed fallacies, that have only been lightly touched upon in
this discussion. And I need to see if my understanding matches that of
others.

Fred Nickols (2012.12.2.1530 AZ) initiated the discussion by asking about
how Ashby’s “law” fits in with PCT, if at all. Subsequently, Bruce Abbott
(2012.12.3.1910 EST) gave the question a bit more rigor by pointing us to:

Ashby W.R. (1958) “Requisite variety and its implications for
the control of complex systems,”
Cybernetica 1:2, p. 83-99 ( available online at
http://pcp.vub.ac.be/books/AshbyReqVar.pdf in which Ashby shows that his
Law of Requisite Variety is closely related Shannon’s Theorem 10.

When I read through Ashby’s paper, a huge red flag reared up for me as he
discussed the error-controlled regulator. On page 9, he stated:

“To go to the extreme: if the regulator is totally successful, the error
will be zero unvaryingly, and the regulator will thus be cut off totally
from the information (about D’s value) that alone can make it successful
–which is absurd. The error-controlled regulator is thus fundamentally
incapable of being 100 percent efficient.”

While trying to make an argument about so-called perfect control, Ashby
slipped in the misunderstanding that the only thing that makes control
possible is knowledge about the disturbance. This is certainly not how
a basic PCT negative feedback control system goes about its business.

Moreover, I think Ashby’s formulation contributed to how some of the
subsequent discussion on CSGNet played itself out, with folks talking at
cross purposes about the notion of “information.” Some viewed it almost
in a substantive sense, affecting (supposedly) the control loop itself,
while others insisted it was only a theoretical tool of analysis for the
outside observer.

There is also a logical fallacy, I believe, in Ashby’s argument, & that
seemed to get picked up by others here in this discussion. An initial
confusion occurred when looking at what happens when the error signal goes
to zero. For a while, others here continued Ashby’s mistake that zero
error meant a zero output signal, which sounds natural enough but it
misunderstands how the control equations work. Zero error simply means
no further change in the output signal. This point was first noted by
Rick Marken (2012.12.05.1000), but it may have gotten lost in some of the
wording he used.

Bruce Abbott (2012.12.05.0740) tried to paraphrase Ashby’s argument this
way:

If the output of the system perfectly cancelled the effect of the
disturbance, the error signal would never vary from zero. Consequently,
there would be no variation in the output to cancel out variation in the
CV due to the disturbance. Conclusion: perfect control in such a system is
impossible.

The fallacy here is that time has been removed from the analysis. Notice
the word “never.” In this postulated perfect control system, “the error
signal would never vary from zero,” supposedly meaning “no variation in
the output,” but really only signifying no further variation in the
output.

But then time is reintroduced into the situation, by considering variation
coming from the disturbance. The disturbance is now allowed to vary, but
without allowing subsequent variation in the error signal & the output.
This is a logical fallacy, leading to an erroneous set of conclusions.

Stable PCT control systems converge towards perfect control, but time has
to remain part of the analysis. Any variation from the disturbance
(leading to perturbation of the controlled variable) is counteracted by
the effects of the output. As Bill Powers (2012.12.05.0950) noted:

If the output function is an integrator, the output variable becomes a
time series which converges toward a final value of zero error.

“Instant & perfect control” is a fiction, leading to a static analysis &
wrong conclusions. One of those wrong conclusions was that of Ashby, that
an error-controlled regulator would not be as effective as one driven
directly by the disturbance.

When the discussion turned more to Information Theory, the similarity
between a control loop & a communication channel was noted. But it
included confusing language about the information channel being “blocked”
in that (fictional) perfect control system where the error never varied
– see, for example, Bruce Abbott (2012.12.05.0740). I don’t understand
why we wouldn’t simply say that the channel is ‘not currently conveying
information.’ It is not a static, never-changing situation. Just
reintroduce variation into the disturbance, & the error signal starts
changing again.

A more substantive confusion arose for me with this notion of “information
about the disturbance” (appearing in numerous posts.) The technical
understanding of information here means reduction in uncertainty. So
isn’t the error signal conveying information (i.e., reducing uncertainty)
about the perception, not the disturbance, by saying the perception is
not yet equal to the reference?

That’s how I read Bruce Abbott (2012.1205.1545), when he talked about the
error signal:

This signal transmits information to the output function, in that its
values reduce uncertainty about the current state of the perceptual
signal, relative to that of the reference signal.

This even works for the non-static situation of temporary perfect control.
When control is perfect (or we should say while control is perfect, to
emphasize that the situation may change), all uncertainty has been
eliminated, because the perception equals the reference. For as long
as that condition persists, there is no more information to convey. As
soon as the disturbance changes, there is again uncertainty in the form
of the perception not being equal to the reference, which the control
system will work to reduce.

I guess I can also see that while control is perfect, then that means
compensation for the disturbance by the output is also perfect. In that
sense, all the variation of the disturbance is being (inversely) captured
in the output, with no further uncertainty to reduce. But I would almost
say, in that situation, that the flow of information through the channel
of the error signal has been fulfilled, rather than blocked. An alternate
conceptualization, which I found helpful, was that of Martin Taylor
(2012.12.08.1132), that in such a situation the output has perfectly
“measured” the disturbance, not by knowing about it but by arriving at
a perfect counterbalancing amount.

There is a side point that perhaps should be raised regarding colloquial
usage of some of these terms. This is not a claim that I see any of the
participants in this discussion making, but it is a confusion that can
easily arise nonetheless. It is embedded in that phrase “information
about the disturbance.”

There is a veritable world hidden behind that word “about,” which at least
in ordinary language gives a very misleading impression. Bill Powers
(2012.12.12.0315 MST), in his recent post, anticipates some of where I
was going in my thinking.

Consider the prototypical control example of driving a car, where there
is a crosswind, irregular surface of the road, & uneven tire pressure.
All of those features can cause disturbances to the perception of keeping
the front of the car visually centered in the lane (or thereabouts).

The corrective action to counter those disturbances is brought to bear
by lateral or angular muscular force exerted on the steering wheel.
This is the output (along with all the implementing muscular contractions,
etc.) that is then amplified by the environmental feedback function of
the steering hydraulics & linkage, etc. It is a linguistic stretch to
think that my arm muscles know much of anything “about” winds or tires
or roads.

At most, the pattern of my muscle tensions reproduces something about the
net fluctuating forces (plural) occurring at one point of measurement,
that of the effect on the visual position of the car relative to the
lane. So that vast world of potential disturbances is only reduced as
to its uncertainty at one tiny point of impact, where all those net forces
converge to affect the standing of a controlled perceptual variable.

I think this is similar to the distinction Bill just tried to make, that
maybe “disturbance” should refer to the cause & “perturbation” should
refer to the effect on a controlled variable. If so, then at best the
output accumulates information (i.e., reduced uncertainty) about the net
perturbation.

As an aside, I rather hope we can find a better term than “perturbation,”
seeing as PCT already has an image & marketing problem, so to speak. But
the distinction is an important one, if we want to avoid
misunderstandings.

Personally, I think we should use “disturbance” for the point of impact
on a controlled variable, & “source of disturbances” for what lies behind
them.

At any rate, I needed to put some of my thoughts on this topic into
coherent form. I hope it is not too dated, given how the conversation
has since evolved. I’d appreciate hearing back if any of my formulations
here seem useful to anybody.

All the best,
Erling


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bruce Abbott (2012.12.13.0930 EST)]

[Erling Jorgensen (2012.12.12.2100 EST)]

EJ: It has been hard to keep up with this discussion in real time. And by
now it is tempting just to shrug & figure the discussion has moved on. But
on the other hand, it seems now is the time to get some of my concerns out
on the table, rather than wait (another few years) until this periodic
debate about information theory cycles back around onto CSGNet.

EJ: My background is not mathematics. While I have a Ph.D. degree, I always
have to follow closely those who can translate the mathematical concepts
into more intuitive ways of understanding them.

The same goes for me.

EJ: However, there are some points, & indeed fallacies, that have only been
lightly touched upon in this discussion. And I need to see if my
understanding matches that of others.

. . .

EJ: When I read through Ashby's paper, a huge red flag reared up for me as
he discussed the error-controlled regulator. On page 9, he stated:

EJ: "To go to the extreme: if the regulator is totally successful, the error
will be zero unvaryingly, and the regulator will thus be cut off totally
from the information (about D's value) that alone can make it successful
--which is absurd. The error-controlled regulator is thus fundamentally
incapable of being 100 percent efficient."

EJ: While trying to make an argument about so-called perfect control, Ashby
slipped in the misunderstanding that the only thing that makes control
possible is knowledge about the disturbance. This is certainly not how a
basic PCT negative feedback control system goes about its business.

Ashby is talking here about the possibility of regulating with zero error.
He considers first what he terms an "error-controlled regulator," which is
the sort of control system employed in PCT. He is not saying that this
system requires something within it that "knows" about the disturbance. The
"information about D's value" to which he refers is about how D's value is
changing over time. If the CV were not under control, these variations in D
would be impressed upon the CV (through whatever function relates the
disturbance to its effect on the CV). Those variations in the CV would pass
through the input function to produce variations in the perceptual signal
and, after passing through the comparator, in the error signal. The
variations in the error signal would pass through the output function to the
output, and from there through the environmental feedback function and back
to the CV.

In the control system, the feedback onto the CV will have an effect that
opposes the effect of the D. The magnitude of this effect, which is
determined by the loop gain around the circuit, will depend directly on the
magnitude of the error signal.

But Ashby is assuming perfect control, which means that the error signal is
always zero. If the error is zero, then the feedback must be zero. But if
the disturbance has moved the CV away from its reference value, there must
be error, which the system uses to generate an opposing feedback value.
That's a logical contradiction: the error cannot simultaneously always be
zero (perfect control) and some nonzero value.

This obvious fact is why Ashby refers to the standard control system as
"error-controlled." The variations in compensating feedback come from the
error signal. Since error is what drives the output, error must vary with
the disturbance-driven changes in the CV if the feedback is to vary so as to
compensate for those disturbances.

EJ: Moreover, I think Ashby's formulation contributed to how some of the
subsequent discussion on CSGNet played itself out, with folks talking at
cross purposes about the notion of "information." Some viewed it almost in
a substantive sense, affecting (supposedly) the control loop itself, while
others insisted it was only a theoretical tool of analysis for the outside
observer.

EJ: There is also a logical fallacy, I believe, in Ashby's argument, & that
seemed to get picked up by others here in this discussion. An initial
confusion occurred when looking at what happens when the error signal goes
to zero. For a while, others here continued Ashby's mistake that zero error
meant a zero output signal, which sounds natural enough but it
misunderstands how the control equations work. Zero error simply means _no
further change_ in the output signal. This point was first noted by Rick
Marken (2012.12.05.1000), but it may have gotten lost in some of the wording
he used.

This is incorrect, despite Rick's (2012.12.12.2200) approval. See below.

Bruce Abbott (2012.12.05.0740) tried to paraphrase Ashby's argument this
way:

If the output of the system perfectly cancelled the effect of the
disturbance, the error signal would never vary from zero. Consequently,
there would be no variation in the output to cancel out variation in
the CV due to the disturbance. Conclusion: perfect control in such a
system is impossible.

EJ: The fallacy here is that time has been removed from the analysis.
Notice the word "never." In this postulated perfect control system, "the
error signal would never vary from zero," supposedly meaning "no variation
in the output," but really only signifying no _further_ variation in the
output.

Incorrect. Zero error means zero error. None. If the error is zero, then the
output must necessarily be zero. Remember, the output is computed by
multiplying the error times the gain of the output function. Zero times X is
zero.

EJ: But then time is reintroduced into the situation, by considering
variation coming from the disturbance. The disturbance is now allowed to
vary, but without allowing subsequent variation in the error signal & the
output. This is a logical fallacy, leading to an erroneous set of
conclusions.

Yes, it's a contradiction, which demonstrates, reductio ad absurdum, that
perfect control (zero error at all times) is impossible in this type of
control system. But this does not lead to an erroneous set of conclusions.
(See below)

EJ: Stable PCT control systems converge towards perfect control, but time
has to remain part of the analysis. Any variation from the disturbance
(leading to perturbation of the controlled variable) is counteracted by the
effects of the output. As Bill Powers (2012.12.05.0950) noted:

MT: If the output function is an integrator, the output variable becomes a
time series which converges toward a final value of zero error.
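
A toy numerical sketch of that integrator point (my own illustrative loop and gains, not anyone's posted model): under a step disturbance, the error converges toward zero while the output settles at a value that cancels the disturbance.

```python
# Discrete-time control loop with an integrating output function,
# run under a constant (step) disturbance d.

def run(steps=2000, dt=0.01, gain=10.0, d=5.0, r=0.0):
    o = 0.0                      # output variable (integrator state)
    history = []
    for _ in range(steps):
        q = o + d                # controlled variable: joint effect of output and disturbance
        p = q                    # perceptual function: identity, for simplicity
        e = r - p                # error signal: reference minus perception
        o += gain * e * dt       # integrating output function: output accumulates error
        history.append((e, o))
    return history

final_e, final_o = run()[-1]
# final_e is essentially 0 while final_o is essentially -5: zero error with a
# sustained nonzero output, which a pure "output = gain * error" reading misses.
```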

EJ: "Instant & perfect control" is a fiction, leading to a static analysis &
wrong conclusions. One of those wrong conclusions was that of Ashby, that
an error-controlled regulator would not be as effective as one driven
directly by the disturbance.

Yes, instant and perfect control is a fiction, like frictionless motion.
Ashby introduced this fiction to demonstrate that perfect control is not
even theoretically possible when the control system's output is driven by
error. He goes on to demonstrate that, at least theoretically, such perfect
control IS possible in a different kind of controller, one that senses the
disturbance ahead of its effect on the CV and uses this information to
generate a properly scaled output that opposes the effect of this
disturbance on the CV.

Ashby is entirely correct in these theoretical assertions. But Bill Powers
and I have both noted that in the real world the apparent advantage of
Ashby's "feed forward" controller evaporates, for two reasons. First, a
well-designed error-controlled system can keep the error vanishingly small
(even though control is not "perfect" in Ashby's sense). Second, there are
all sorts of difficulties implementing Ashby's controller, not least of
which is knowing exactly how the disturbance will affect the CV and knowing
how to generate an opposing output that has exactly an equal and opposite
effect on the CV. Even if you do come up with the right adjustments, the
whole thing goes to pieces if those relationships should change.
  
EJ: When the discussion turned more to Information Theory, the similarity
between a control loop & a communication channel was noted. But it included
confusing language about the information channel being "blocked" in that
(fictional) perfect control system where the error never varied
-- see, for example, Bruce Abbott (2012.12.05.0740).

Variation in the disturbance is transmitted to the extent that it is
duplicated in the receiver. (This duplication reduces uncertainty about the
variation in the disturbance.) The first "receiver" is the CV; variation in
the disturbance induces variation in the CV. This variation is then
transmitted via the input function to the perceptual signal, then to the
error signal via the comparator function. But in the perfect controller,
there is never any error. In that case, variation in the disturbance is
not transmitted to the error signal, nor from there to the output, nor from
there through the feedback function to the CV. The channel of information
flow is blocked at the comparator.
  
EJ: I don't understand why we wouldn't simply say that the channel is 'not
currently conveying information.' It is not a static, never-changing
situation. Just reintroduce variation into the disturbance, & the error
signal starts changing again.

In the perfect controller under consideration, the information is NEVER
transmitted because the error signal, under the assumption of perfection,
never varies from zero error.

EJ: A more substantive confusion arose for me with this notion of
"information about the disturbance" (appearing in numerous posts.) The
technical understanding of information here means reduction in uncertainty.
So isn't the error signal conveying information (i.e., reducing uncertainty)
about the _perception_, not the disturbance, by saying the perception is not
yet equal to the reference?

As I mentioned above, information is transmitted from disturbance through
the CV, perceptual signal, error signal, output, feedback, in that order.
Transmission is blocked if the error signal cannot vary from zero.

That's how I read Bruce Abbott (2012.12.05.1545), when he talked about the
error signal:

This signal transmits information to the output function, in that its
values reduce uncertainty about the current state of the perceptual
signal, relative to that of the reference signal.

EJ: This even works for the non-static situation of temporary perfect control.
When control is perfect (or we should say _while_ control is perfect, to
emphasize that the situation may change), all uncertainty has been
eliminated, because the perception equals the reference. For as long as
that condition persists, there is no more information to convey. As soon as
the disturbance changes, there is again uncertainty in the form of the
perception not being equal to the reference, which the control system will
work to reduce.

Well, having no error for a moment is not the definition of perfect control.
In fact, zero error can occur even when there is no control. Maintaining
error at zero at all times is perfect control.

EJ: I guess I can also see that while control is perfect, then that means
compensation for the disturbance by the output is also perfect. In that
sense, all the variation of the disturbance is being (inversely) captured in
the output, with no further uncertainty to reduce. But I would almost say,
in that situation, that the flow of information through the channel of the
error signal has been fulfilled, rather than blocked. An alternate
conceptualization, which I found helpful, was that of Martin Taylor
(2012.12.08.1132), that in such a situation the output has perfectly
"measured" the disturbance, not by knowing about it but by arriving at a
perfect counterbalancing amount.

As I noted, this type of controller cannot achieve perfect control, even
theoretically. A practical controller can nevertheless come very close if
properly designed. But then variation in
the CV induced by the disturbance is being transmitted through the
comparator and output to the feedback variable, even though the error
remains tiny.
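
A toy illustration of that near-perfect case (parameters are my assumptions, not from the thread): a high-gain integrating controller tracking a slowly varying disturbance. The output ends up almost perfectly mirroring the disturbance -- Taylor's sense in which the output "measures" it -- while the error stays small throughout.

```python
import math

def simulate(steps=5000, dt=0.001, gain=200.0):
    o, r = 0.0, 0.0
    ds, os_, es = [], [], []
    for n in range(steps):
        d = math.sin(2 * math.pi * n * dt)   # slowly varying disturbance
        e = r - (o + d)                      # error from controlled variable q = o + d
        o += gain * e * dt                   # integrating output function
        ds.append(d); os_.append(o); es.append(e)
    return ds, os_, es

def corr(x, y):
    # Pearson correlation, computed by hand to keep the sketch dependency-free.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

ds, os_, es = simulate()
c = corr(ds[1000:], os_[1000:])              # skip the initial transient
peak_err = max(abs(e) for e in es[1000:])
# c comes out close to -1 (the output inversely reproduces the disturbance)
# while peak_err stays a few percent of the disturbance amplitude.
```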

EJ: There is a side point that perhaps should be raised regarding colloquial
usage of some of these terms. This is not a claim that I see any of the
participants in this discussion making, but it is a confusion that can
easily arise nonetheless. It is embedded in that phrase "information about
the disturbance."

EJ: There is a veritable world hidden behind that word "about," which at
least in ordinary language gives a very misleading impression. Bill Powers
(2012.12.12.0315 MST), in his recent post, anticipates some of where I was
going in my thinking.

EJ: Consider the prototypical control example of driving a car, where there
is a crosswind, irregular surface of the road, & uneven tire pressure.
All of those features can cause disturbances to the perception of keeping
the front of the car visually centered in the lane (or thereabouts).

EJ: The corrective action to counter those disturbances is brought to bear
by lateral or angular muscular force exerted on the steering wheel.
This is the output (along with all the implementing muscular contractions,
etc.) that is then amplified by the environmental feedback function of the
steering hydraulics & linkage, etc. It is a linguistic stretch to think
that my arm muscles know much of anything "about" winds or tires or roads.

I agree. Information in the sense used in information theory is not
something "known" by the variables and signals. It is just a different way
of quantifying the variation in these variables and signals.

EJ: At most, the pattern of my muscle tensions reproduces something about
the net fluctuating forces (plural) occurring at one point of measurement,
that of the effect on the visual position of the car relative to the lane.
So that vast world of potential disturbances is only reduced as to its
uncertainty at one tiny point of impact, where all those net forces converge
to affect the standing of a controlled perceptual variable.

In other words, uncertainty is reduced only about the actual disturbance
affecting the CV. Potential ones don't factor in.

EJ: I think this is similar to the distinction Bill just tried to make, that
maybe "disturbance" should refer to the cause & "perturbation" should refer
to the effect on a controlled variable. If so, then at best the output
accumulates information (i.e., reduced uncertainty) about the net
perturbation.

Yes, but you still can work backward from perturbation to the information in
the net disturbance that caused the perturbation.

EJ: As an aside, I rather hope we can find a better term than "perturbation,"
seeing as PCT already has an image & marketing problem, so to speak. But
the distinction is an important one, if we want to avoid misunderstandings.

Personally, I think we should use "disturbance" for the point of impact on a
controlled variable, & "source of disturbances" for what lies behind them.

EJ: At any rate, I needed to put some of my thoughts on this topic into
coherent form. I hope it is not too dated, given how the conversation has
since evolved. I'd appreciate hearing back if any of my formulations here
seem useful to anybody.

Thanks, Erling. Your thoughtful comments are much appreciated. I hope I've
succeeded in clarifying these issues for you. But I should raise one other
issue. If there is a constant disturbance acting on the CV (e.g., a constant
side wind pushing against the car), and the driver is functioning as a good
control system, there will be a constant nonzero output (force keeping the
steering wheel, and thus the car's front wheels, turned to generate a
counterforce that equals the force of the wind). This is not a condition of
zero error in the control system compensating for the side wind. If there
were zero error, there would be no error to generate the muscle force
against the steering wheel and keep the wheels turned to oppose the wind's
effects. The error in this case of constant disturbance would itself be
constant and nonzero.
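
A small check of that side-wind point (illustrative numbers, mine): with a purely proportional output function, a constant disturbance settles to a constant, nonzero error, and that residual error is what sustains the opposing output. Algebraically, e = (r - d) / (1 + k) and o = k * e.

```python
def steady_state(k=100.0, d=5.0, r=0.0, alpha=0.01, iters=1000):
    o = 0.0
    e = r - (o + d)
    for _ in range(iters):
        e = r - (o + d)           # error from the controlled variable q = o + d
        o += alpha * (k * e - o)  # output relaxes toward k*e (slowing keeps the loop stable)
    return e, o

e, o = steady_state()
# e settles near -5/101 (about -0.0495) and o near -500/101 (about -4.95):
# the error never reaches zero, yet the wind is almost fully opposed.
```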

Bruce

[From Rick Marken (2012.12.13.0850)]

Fred Nickols (2012.12.13.0642 AZ)–

RM: I’m with Erling. The term “disturbance” should be used to refer to the effect or impact on the controlled variable.

RM: I didn’t notice that in Erling’s post but I disagree. I don’t think we should use any term – “perturbation” or “disturbance” – for the effect of an independent variable (like the wind) on a controlled variable (like the position of a car). The reason is that giving such an effect a name gives the impression that it can be detected (as Martin assumes, incorrectly, in his analysis). The effect of an independent variable on the controlled variable is always combined with the effect of the system’s own output on that variable. The effect of a gust of wind on the position of a car, for example, depends on the state of the output (steering wheel position) at the time of the gust; if you happen to be turning into the gust at the instant the force of the wind increases, the effect of the gust on the position of the car is far less than if you are turning away from it.

Remember, a controlled variable, q.i, such as the perceived position of a car on the road, is always a joint and simultaneous result of the effect of independent variables (disturbances, d, like the wind) and system outputs (o, like steering wheel position): q.i = o + d. A control system perceives and controls only q.i; it has no way of knowing how much any change in the state of the controlled variable, q.i, is due to an effect of d or o.

If you do one of my on-line tracking tasks you will see that much of the movement of the cursor is due to your own outputs (mouse movements), and higher level systems in you can tell that it is often your own outputs that are the main contribution to the sudden movements of the cursor away from the target. Because q.i = o + d, a lot of a control system’s outputs are aimed at reducing error created in part by its own outputs.
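
A toy tracking loop illustrating q.i = o + d (gains and disturbance statistics are my assumptions): the moment-to-moment change in q.i mixes the system's own output changes with disturbance changes, and nothing inside the loop separates the two.

```python
import random

random.seed(1)

def track(steps=4000, dt=0.01, gain=80.0):
    o, r, d = 0.0, 0.0, 0.0
    dq_from_o, dq_from_d = [], []
    for _ in range(steps):
        d_new = 0.99 * d + 0.1 * random.gauss(0, 1)  # smoothed random disturbance
        e = r - (o + d)                              # error from q.i = o + d
        o_new = o + gain * e * dt                    # integrating output function
        dq_from_o.append(o_new - o)                  # output's contribution to the change in q.i
        dq_from_d.append(d_new - d)                  # disturbance's contribution
        o, d = o_new, d_new
    return dq_from_o, dq_from_d

do_, dd = track()
ss_o = sum(x * x for x in do_)
ss_d = sum(x * x for x in dd)
# ss_o comes out the same order of magnitude as ss_d: a substantial share of
# the change in q.i at any moment is the system's own doing.
```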

It’s all closed loop;-)

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2012.12.13.0852)]

[From Rick Marken (2012.12.13.0850)]

Fred Nickols (2012.12.13.0642 AZ)–

RM: I’m with Erling. The term “disturbance” should be used to refer to the effect or impact on the controlled variable.

Boundary issues;-) I meant to use FN rather than RM. I am not talking to myself, at least not on the net;-)

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Fred Nickols (2012.12.13.1005 AZ)]

OK. I’m confused again. If the “system” in question is the driver (me or you) I thought my output (o) was the force I’m exerting on the steering wheel. How did “steering wheel position” get to be my output?

Best regards,

Fred Nickols

Distance Consulting LLC

www.nickols.us

···

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Richard Marken
Sent: Thursday, December 13, 2012 9:49 AM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Ashby’s Law of Requisite Variety


[From Erling Jorgensen (2012.12.13 1235 EST)]

[Fred Nickols (2012.12.13.1005 AZ)]

OK. I'm confused again. If the "system" in question is the driver (me or
you) I thought my output (o) was the force I'm exerting on the steering
wheel. How did "steering wheel position" get to be my output?

My sense is that the force I exert on the steering wheel is the final
output, but that then controls the position of the steering wheel (for
instance, more to the right or more to the left?), which by the
environmental linkage in the car controls the direction of travel of the
wheels, which by the way wheels operate controls the position of the car
in the lane. There may be some implementing layers of control left out
here.

At each level of hierarchical control, the "output" of a given control
system is specifying the reference for the implementing perception that
is next in series lower down. That ultimately turns into a force that
can have an effect in what we traditionally call the environment, and so
that force is the ultimate output.

But it really is control all the way down (or out), which means we have
to look for what perceptual variables (plural) our actions are affecting.

All the best,
Erling

[From Bill Powers (2012.12.13.1108 MST)]

I’m trying to get detached from this long thread, but maybe just one more
peanut…

Bruce Abbott (2012.12.13.0930 EST) –

EJ: When I read through Ashby’s paper, a huge red flag reared up for me as
he discussed the error-controlled regulator. On page 9, he stated:

EJ: "To go to the extreme: if the regulator is totally successful, the error
will be zero unvaryingly, and the regulator will thus be cut off totally
from the information (about D’s value) that alone can make it successful
–which is absurd. The error-controlled regulator is thus fundamentally
incapable of being 100 percent efficient."

EJ: While trying to make an argument about so-called perfect control, Ashby
slipped in the misunderstanding that the only thing that makes control
possible is knowledge about the disturbance. This is certainly not how a
basic PCT negative feedback control system goes about its business.

BA: Ashby is talking here about the possibility of regulating with zero
error.

BA: But Ashby is assuming perfect control, which means that the error signal
is always zero. If the error is zero, then the feedback must be zero. But if
the disturbance has moved the CV away from its reference value, there must
be error, which the system uses to generate an opposing feedback value.
That’s a logical contradiction: the error cannot simultaneously always be
zero (perfect control) and some nonzero value.

The whole problem here is that Ashby seemed to believe that some
kind of control system could achieve perfect control, so the fact that
closed-loop systems can’t achieve it rules them out of the discussion. But Ashby
assumed wrongly. In fact, when you look into all the reasons why no real
systems can control perfectly by any means, you come to the conclusion
that negative feedback controllers are the most accurate, fastest, and
most reliable kinds of control that exist, all others falling farther
short of perfection.

The main reasons are limitations on the accuracy and repeatability of
actions by the components of the system. The state of a disturbance can’t
be sensed perfectly; the sources of many disturbances can’t be sensed at
all. The amount of action required can’t be calculated perfectly because
the properties of the plant or the environment can’t be sensed perfectly
and anyway do not remain constant, and even if the calculation of the
action required for complete control were carried to 18 decimal places,
no real actuator could execute the actions with anything remotely
approaching that kind of accuracy. And finally, while adjusting the
open-loop system for best control, it is impossible to measure perfectly
the final result, the state of the controlled variable, even with our
best measuring devices.

At the same time, we know that a closed-loop control system can, under
conditions of modest disturbance, control to the limits of measurement
set by our best measuring devices. We know that changes in the controlled
variable due to variations in the properties of the output function and
feedback path are divided by the loop gain, which is routinely in the
range of 1,000 to 100,000, with instances of loop gain of one billion. We
know that the frequency response curve of the output actions can be made
almost perfectly flat over a much wider frequency range than the same
components could achieve in an open-loop system. If the output function
has a finite time-constant, the system as a whole acts as if that time
constant has been divided by the loop gain.
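
A static-algebra check (a minimal sketch, not from the post) of the claim that variation in the system's own properties is divided by the loop gain: with closed-loop value q = G / (1 + G) * r, a 10% drift in the forward gain G shifts q by only about 0.1 / (1 + G).

```python
def closed_loop(G, r=1.0):
    return G / (1.0 + G) * r     # static value of the controlled variable

G = 1000.0
q1 = closed_loop(G)
q2 = closed_loop(G * 1.1)        # forward gain drifts by 10%
fractional_shift = abs(q2 - q1) / q1
# fractional_shift is roughly 9e-5: the 10% drift divided by the loop gain.
```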

In short, we know that negative feedback control can be used to improve
on the performance of even the very best possible open-loop systems in
almost every respect. Ashby’s assumption that there is ANY means of
achieving perfect control was erroneous, making his whole argument
pointless.

Aside from the fact that many of us (including me) found inspiration in
Ashby’s writings and ideas and clever devices, we have to conclude that
he set a large number of people onto a false path which they still follow
and which still constitutes a formidable blockade in the way of
acceptance of PCT. It is our unpleasant duty to tell others that this
perfectly nice intelligent man did more to harm science than to help
it.

Best,

Bill P.