Ashby's Law of Requisite Variety

[Richard Kennaway (2012.12.06.15:58 GMT)]

[From Bruce Abbott (2012.12.3.1910 EST]

>>[Richard Kennaway (20121203.0853 GMT)]

[From Fred Nickols (2012.12.2.1530 AZ)]

As I understand it, W. Ross Ashby's Law of Requisite Variety asserts that a control system must be capable of a sufficient variety of actions to control that which is to be controlled. If it's not, control is not possible. Ashby's law is sometimes stated as "the complexity of a control system must equal or exceed that of the system to be controlled." I'm wondering how Ashby's law fits with PCT, if it does. It seems to me that we sometimes bite off more than we can chew, so to speak, and those are instances wherein our complexity is exceeded by that of the situation/variables we try to control. The "disturbances" overwhelm us. Comments, anyone?

I've never understood what this law actually is. Did Ashby ever give a mathematical definition of "variety" and formulate and prove this law as a theorem? I've never found anything that wasn't just words.

You might try Ashby, W. R. (1958), "Requisite variety and its implications for the control of complex systems," Cybernetica 1:2, pp. 83-99 (available online at http://pcp.vub.ac.be/books/AshbyReqVar.pdf), in which Ashby shows that his Law of Requisite Variety is closely related to Shannon's Theorem 10. The former is about the correction of error, induced by disturbances, in a control system; the latter is about the correction of noise in a noisy transmission channel.

Thanks for the paper.

So, if formulated precisely, the law is more or less the theorem of Shannon's that Ashby draws a connection with, which is about the open-loop situation of figure 3 on p. 8 of his paper. To transmit a message over a noisy channel in such a way that the recipient can reconstruct the message despite the noise, the amount of error-correction information ("information" in the sense of Shannon) that must be added to the signal is at least as large as the Shannon information in the noise. Unlike Ashby, Shannon defined his terms and formulated his conclusions precisely enough to prove things like this as mathematical theorems.
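To make the quantitative content of that theorem concrete, here is a small Python sketch for the simplest case, a binary symmetric channel. The 11% crossover probability and the framing as "overhead per bit" are my illustrative choices; nothing in Ashby's paper fixes them.

```python
import math

def h2(p):
    """Binary entropy H2(p) in bits: the Shannon information per symbol
    of a noise source that flips each bit with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For a binary symmetric channel with crossover probability p, reliable
# transmission is possible only at rates below C = 1 - H2(p) bits per
# channel use; equivalently, the error-correction redundancy added per
# transmitted bit must be at least H2(p), the entropy of the noise.
p = 0.11                  # illustrative: 11% of bits flipped in transit
min_overhead = h2(p)      # redundancy required, bits per channel use
capacity = 1.0 - min_overhead

print(f"H2(noise) = {min_overhead:.3f} bits, capacity = {capacity:.3f} bits/use")
```

The point is only the bound: however the error-correcting code is constructed, its per-bit redundancy cannot fall below H2(p).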

Ashby then goes on to claim that closed-loop control can't work as well as open-loop control, because closed-loop control depends on having an error in order to act. He even claims that open-loop control is simpler, which is pretty much refuted by the absence of any such thing as a textbook on open-loop control. Even model-based control (the closest thing that real control theory has to what he's describing, but which is still invariably closed-loop) is treated in control theory courses and textbooks as a more advanced and more complex topic. So the history is against him there, even leaving aside the part of that history that is PCT.

How one might apply the concepts of information theory to closed-loop control is not at all clear to me. I've looked for material on this in the past, but never found anything. I've seen enough words expended on this subject that I'm not interested in seeing any more, only hard mathematics. The mathematical tools to address this would be stochastic calculus, the study of continuous random processes, which did not exist in Ashby's time. Of course, the output of a controller that works will be just what it takes to oppose the disturbances. That is what you will see happening when a controller controls. It is not the mechanism whereby it does so.
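That last observation (the output of a working controller mirrors the disturbance without that being the mechanism) is easy to exhibit numerically. The following Python sketch, with arbitrarily chosen gain, timestep, and disturbance waveform, runs a simple integrating controller and confirms that the settled output trace is nearly the negative of the disturbance trace, even though the controller only ever sees the controlled quantity.

```python
import math

# Minimal simulated control loop: the controller senses qi = o + d,
# compares it to a reference, and integrates the error. All constants
# are illustrative choices, not taken from Ashby or Shannon.
dt, gain, reference = 0.01, 50.0, 0.0
output = 0.0
outputs, disturbances = [], []
for step in range(2000):
    t = step * dt
    d = math.sin(0.5 * t) + 0.3 * math.sin(1.7 * t)  # smooth disturbance
    qi = output + d                  # controlled quantity: qi = o + d
    error = reference - qi           # perception taken to equal qi
    output += gain * error * dt      # integrating output function
    outputs.append(output)
    disturbances.append(d)

# After the loop settles, output ~= -disturbance at each instant.
mismatch = max(abs(o + d) for o, d in zip(outputs[500:], disturbances[500:]))
print(f"max |output + disturbance| after settling: {mismatch:.4f}")
```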

I have a paper currently in the submission process (and now in the rejection process), which I mentioned here some months back, that examines various current methods of discovering causal information from statistical data. It demonstrates that none of these methods work when applied to control systems. Their authors do not claim that they do, of course, and I've no problem with what they actually do. But in every case, the hypotheses they make about the systems they study turn out to be just what is needed to rule out control systems, and yet without any apparent awareness that there is this large class of systems, of great importance in the life and social sciences, to which no tweaking of their general methods can extend them.
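The failure mode is easy to exhibit numerically. The following Python sketch (my construction, not anything quoted from Kennaway's paper) simulates a good control loop and computes the correlations a statistical method would see: the disturbance and the output are almost perfectly anti-correlated, while the disturbance and the controlled quantity (the actual causal pathway) are nearly uncorrelated, which is the opposite of what a correlation-based causal analysis would suggest.

```python
import math, random

# Simulate a well-functioning control loop driven by a drifting
# disturbance, then look at the correlational "evidence".
random.seed(1)
dt, gain = 0.01, 50.0
output, d = 0.0, 0.0
ds, os_, qis = [], [], []
for _ in range(5000):
    d += random.gauss(0, 0.02)        # slowly drifting disturbance
    qi = output + d                   # controlled quantity
    output += gain * (0.0 - qi) * dt  # integrating controller, reference 0
    ds.append(d); os_.append(output); qis.append(qi)

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

print(f"corr(d, output) = {corr(ds, os_):.3f}")  # strongly negative
print(f"corr(d, qi)     = {corr(ds, qis):.3f}")  # near zero
```

A method that takes near-zero correlation as evidence of no causal link would conclude the disturbance does not act through the controlled quantity at all.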

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bill Powers (2012.12.06.1038 MST)]

Richard Kennaway (2012.12.06.15:58 GMT) –

JRK: So, if formulated precisely, the law is more or less the theorem of Shannon's that Ashby draws a connection with, which is about the open-loop situation of figure 3 on p. 8 of his paper. To transmit a message over a noisy channel in such a way that the recipient can reconstruct the message despite the noise, the amount of error-correction information ("information" in the sense of Shannon) that must be added to the signal is at least as large as the Shannon information in the noise. Unlike Ashby, Shannon defined his terms and formulated his conclusions precisely enough to prove things like this as mathematical theorems.

Ashby then goes on to claim that closed-loop control can't work as well as open-loop control, because closed-loop control depends on having an error in order to act. He even claims that open-loop control is simpler, which is pretty much refuted by the absence of any such thing as a textbook on open-loop control. Even model-based control (the closest thing that real control theory has to what he's describing, but which is still invariably closed-loop) is treated in control theory courses and textbooks as a more advanced and more complex topic. So the history is against him there, even leaving aside the part of that history that is PCT.

Thanks, Richard. I’d rather have you answering Martin in this area than
me, because I just don’t have the horsepower for the math.
I think the rejoinder to Ashby’s claim is simply that closed-loop control
works better than open-loop because the accuracy of control depends
primarily on the loop gain and the accuracy of properties in the feedback
path, whereas in the open-loop system the accuracy of control depends on
the accuracy of all the forward components – not only the sensor,
but the computations and the output function where large signals and
forces must be produced with the same accuracy as in the low-energy parts
of the circuit. The 19th century carried that method about as far as
possible. Only when negative feedback was mastered was there a
step-improvement in the accuracy of control.
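Bill's accuracy argument can be put in a line of algebra: a fractional error in the forward gain G shows up full-strength in an open-loop output, but is attenuated by roughly 1/(1 + GH) in a closed loop. A numeric sketch, with gains chosen arbitrarily for illustration:

```python
# Compare sensitivity to a 10% forward-gain error, open vs closed loop.
nominal_G = 100.0          # forward gain (sensors, computation, output stage)
H = 1.0                    # feedback path, assumed accurate
target = 1.0

def open_loop(G):
    # command precomputed assuming the nominal forward gain
    command = target / nominal_G
    return G * command

def closed_loop(G):
    # algebraic steady state of output = G * (target - H * output)
    return G * target / (1.0 + G * H)

G_actual = nominal_G * 1.10   # the forward gain is actually 10% high

open_err = abs(open_loop(G_actual) - target) / target
closed_ref = closed_loop(nominal_G)
closed_err = abs(closed_loop(G_actual) - closed_ref) / closed_ref
print(f"open-loop error: {open_err:.1%}, closed-loop error: {closed_err:.2%}")
```

The open-loop output is off by the full 10%, the closed-loop output by roughly 10%/(1 + GH), which with these numbers is under 0.1%.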

Good luck with what sounds like an important paper.

Best,

Bill P.

[From Bruce Abbott (2012.12.06.1450 EST)]

Richard Kennaway (2012.12.06.15:58 GMT)--

Thanks for your commentary on Ashby's Law of Requisite Variety and the
application of information theory to the analysis of control systems. Much
appreciated.

RK: I have a paper recently in the submission process (and now in the
rejection process) which I mentioned here some months back, which examines
various current methods of discovering causal information from statistical
data. It demonstrates that none of these methods work when applied to
control systems. Their authors do not claim that they do, of course, and
I've no problem with what they actually do. But in every case, the
hypotheses they make about the systems they study turn out to be just what
is needed to rule out control systems, and yet without any apparent
awareness that there is this large class of systems of great importance in
the life and social sciences, for which no tweaking of their general methods
can extend them to.

I'm not clear on what hypotheses you have in mind that are "just what is
needed to rule out control systems." Do you mean the assumption of lineal
cause and effect (i.e., no feedback)? Something else?

Bruce

[Martin Taylor 2012.12.06.15.55]

[Richard Kennaway (2012.12.06.15:58 GMT)]

How one might apply the concepts of information theory to closed-loop
control is not at all clear to me. I've looked for material on this
in the past, but never found anything. I've seen enough words
expended on this subject that I'm not interested in seeing any more,
only hard mathematics.

I really shouldn't cite this before I've read it, but the abstract seems
to satisfy your requirements -- or at least the second half of it does:

Hugo Touchette and Seth Lloyd, "Information-theoretic approach to the
study of control systems", arXiv:physics/0104007v2 (is that an actual
reference?).

Abstract:
We propose an information-theoretic framework for analyzing control
systems based on the close relationship of controllers to communication
channels. A communication channel takes an input state and transforms it
into an output state. A controller, similarly, takes the initial state
of a system to be controlled and transforms it into a target state. In
this sense, a controller can be thought of as an actuation channel that
acts on inputs to produce desired outputs. In this transformation
process, two different control strategies can be adopted: (i) the
controller applies an actuation dynamics that is independent of the
state of the system to be controlled (open-loop control); or (ii) the
controller enacts an actuation dynamics that is based on some
information about the state of the controlled system (closed-loop
control). Using this communication channel model of control, we provide
necessary and sufficient conditions for a system to be perfectly
controllable and perfectly observable in terms of information and
entropy. In addition, we derive a quantitative trade-off between the
amount of information gathered by a closed-loop controller and its
relative performance advantage over an open-loop controller in
stabilizing a system. This work supplements earlier results [H.
Touchette, S. Lloyd, Phys. Rev. Lett. 84, 1156 (2000)] by providing new
derivations of the advantage afforded by closed-loop control and by
proposing an information-based optimality criterion for control systems.
New applications of this approach pertaining to proportional
controllers, and the control of chaotic maps are also presented.

I attach the full paper, which for quite a while I've been meaning to
try to work through.

Martin

TouchetteLloydInfoControl.pdf (714 KB)

[From Rick Marken (2012.12.06.1430)]

I suppose I should reply to Bruce A and Martin T. But because I know that whatever I say is not going to change their minds, I would like to try to move the conversation up a level and ask why they are so interested in information theory and its relationship to control. I know why I'm not interested in information theory: it seems (and has proved) to be completely irrelevant to all of the research I have done on control. The good ol' PCT model has been the basis of all my research and I find it very easy to use the theory to understand everyday behavior. So why all the interest in information theory? Do you use it in your research on control? Does it explain things about control that control theory doesn't?

Martin Taylor (2012.12.05.14.0)–

RM: Oh, no. Not this again!?! I thought we had slayed the "information
about the disturbance in perception" dragon years ago. Suffice it to
say, the error signal can't convey information about the disturbance
because the error signal depends on the difference between r and o+d,
not between r and d. Think about it.

MT: Here's the argument, without the maths that so confused you in the long-ago
discussion. Just think about the situation from the viewpoint of an outside
observer deducing the disturbance waveform from observation of the output
waveform. Information-theoretically, the quality of control is specified by
how much you don't know about the disturbance when you know the output. If
control is perfect, and the observer knows the environmental feedback
function, the output waveform is sufficient for the observer to know the
disturbance waveform, because the output waveform, transformed by the
environmental feedback function, is exactly the negative of the disturbance
waveform.

RM: OK. So far so good. Although you should be clear that it is the net effect of disturbance variables on the controlled variable about which you are getting information. This effect, call it d*, is a function of all the disturbances that are having an effect on the controlled variable:

d* = g(d1, d2, ..., dn)

where g() is the disturbance function that relates the disturbance variables d1, d2, ..., dn to the controlled variable.

MT: How can this happen? The output doesn’t create its waveform spontaneously,
nor are the output and disturbance waveforms both determined by some common
source of variation; yet the output waveform is influenced (determined, if
the reference value remains unchanged) by the waveform of the disturbance.
The disturbance does not influence the output by effects that propagate
backward through the environmental feedback function. The only remaining
possibility is that the output is influenced by the chain of influences
through the sensory system, the perceptual mechanism, the error computation,
through to the output function and output machinery that produces the output
that the outside observer can now use to determine what the disturbance must
be.

RM: OK. So you are saying that it works this way:

disturbance --> sensory system --> perception --> error --> output

Three problems here. First, the disturbance variable has an effect on a physical variable that is the environmental correlate of the controlled perceptual variable; in B:CP it's called the controlled quantity. So the path above should go like this:

disturbance --> controlled quantity --> sensory system --> perception --> error --> output

Second, the controlled quantity is influenced by both the disturbance and output variables simultaneously. So an arrow should loop back from output to controlled quantity.

Finally, there is a (possibly varying) feedback function connecting output to controlled quantity.

Given these last two facts (or either one of them alone), it is impossible for the error signal to contain information about the disturbance which would allow the output to mirror that disturbance. But even assuming that this were possible, there is no way for the system to know what the output should be so as to take the feedback function into account. For example, if the feedback function multiplies the output by 2, then if the output of the system were, based on information about the disturbance, a perfect mirror of it, the output would be twice as big as it should be and there would be no control.
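The gain-of-2 example can be checked numerically. In this sketch (with an arbitrary integrating controller; the constants are my choices) the environmental feedback function doubles the output's effect, qi = 2·o + d. The closed loop is never told this, yet it settles with o ≈ -d/2, because the error keeps adjusting the output until qi ≈ r. A controller that instead emitted a literal "mirror" o = -d would leave qi = 2·(-d) + d = -d: no control at all.

```python
# Closed loop against a feedback function that multiplies output by 2.
dt, gain, r = 0.01, 50.0, 0.0
d = 0.8                          # constant disturbance
o = 0.0
for _ in range(3000):
    qi = 2.0 * o + d             # feedback function doubles the output
    o += gain * (r - qi) * dt    # integrating output function
print(f"closed-loop output: {o:.4f} (what cancels d: {-d/2:.4f})")

# "Mirror the disturbance" without knowing the feedback function:
mirror_qi = 2.0 * (-d) + d
print(f"mirrored output leaves qi = {mirror_qi:.4f} instead of {r}")
```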

I hope you will agree that there is no information about the feedback function that comes into the system. And you yourself said that one has to know the feedback function in order to reconstruct the disturbance from the output. So clearly a control system can generate the output appropriate to compensate for the effect of disturbances on a controlled quantity without any information about the feedback function that connects its own output to the controlled quantity. So if it can do that, why not go ahead and imagine that it can also generate output that compensates for the disturbance without any information about the effect of the disturbance? Imagine there's no information about the disturbance (or feedback function). It's easy if you try.

MT: Information about the disturbance does pass through the interior components
of the control loop, back to the input where that information is used to
cancel the disturbance.

RM: So again I ask, why do you want to believe this? Or better yet, what do I gain if I go ahead and believe it? Would it change the questions I ask? The way I do research? Build models?

MT: I know it’s hard to think of closed loops, but that’s what you must do when
when dealing with control.

RM: I’ll try;-)

Bruce Abbott (2012.12.05.1545 EST)–

BA: If the output of the system perfectly cancelled the effect of the
disturbance, the error signal would never vary from zero.
Consequently, there would be no variation in the output to cancel out
variation in the CV due to the disturbance.

RM: I don’t think this is actually true. Zero error just means no change in
output. So while the output is changing in a way that perfectly opposes the
disturbance there will be control with zero error.

BA: Do you realize what you just said? You said that there is no change in
output (agreed), but the output is changing in a way that perfectly opposes
the disturbance. How can an output that is constant be changing at the same
time?

I was thinking of a system where the error drives a change in output (velocity). But I realized that it doesn't have to be that complex. Since the input is o+d (for simplicity) and the system is trying to keep (o+d) = r, then whenever d = r, error will be zero and output will be zero, as it should be. If d stays equal to r then the error remains at zero and the controlled quantity (o+d) remains at the reference. But I think you were talking about this in terms of changing disturbances. I agree that, with a fixed reference, a changed disturbance to the controlled quantity would not be noticed if the error remained at zero. What I was reacting to is the idea that a control system won't control when the error is zero. That's what's not true. So, yes, the system must be able to detect the state of the controlled quantity (and, therefore, detect whether there is a difference between the state of the controlled quantity and the reference state -- i.e., an error) in order to control it. I don't think you need information theory to know that. That's pretty basic control theory.
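The point being argued over is easy to see with an integrating output function (constants arbitrary, my sketch): after settling against a constant disturbance, the error is essentially zero while the output is constant, nonzero, and exactly what cancels the disturbance. Zero error does not mean "not controlling".

```python
# Integrating controller settling against a constant disturbance.
dt, gain, r = 0.01, 50.0, 0.0
d, o = 0.5, 0.0
for _ in range(5000):
    qi = o + d
    error = r - qi
    o += gain * error * dt       # error drives the *rate of change* of output
print(f"error = {error:.6f}, output = {o:.4f}, qi = {qi:.4f}")
```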

RM: Oh, Bruce. Say it ain’t so.

BA: Oh, Rick, it IS so. Sorry to be the bearer of what apparently is for you
a bit of bad news.

Not that bad. It’s really irrelevant to me. But I would like to know what it does for you?

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.06.23.52]

[From Rick Marken (2012.12.06.1430)]

RM: I suppose I should reply to Bruce A and Martin T. But because I know that whatever I say is not going to change their minds I would like to try to move the conversation up a level and ask why they are so interested in information theory and its relationship to control.

Now that's a good question. I can't speak for Bruce, but my interest is that I like to have as varied a toolbox as I can. If there's a tool that looks as though it might sometimes be useful, I'll keep it in the box.

RM: I know why I'm not interested in information theory: it seems (and has proved) to be completely irrelevant to all of the research I have done on control. The good ol' PCT model has been the basis of all my research and I find it very easy to use the theory to understand everyday behavior. So why all the interest in information theory? Do you use it in your research on control? Does it explain things about control that control theory doesn't?

I don't understand this last question. It's like asking "does [algebra|calculus|Laplace|Fourier transform mathematics] explain things about control that control theory doesn't?" None of those questions would make sense to me, and neither does the question when the mathematical operations involved are information-theoretic.

Martin Taylor (2012.12.05.14.0)--

RM: Oh, no. Not this again!?! I thought we had slayed the "information about the disturbance in perception" dragon years ago. Suffice it to say, the error signal can't convey information about the disturbance because the error signal depends on the difference between r and o+d, not between r and d. Think about it.

MT: Here's the argument, without the maths that so confused you in the long-ago discussion. Just think about the situation from the viewpoint of an outside observer deducing the disturbance waveform from observation of the output waveform. Information-theoretically, the quality of control is specified by how much you don't know about the disturbance when you know the output. If control is perfect, and the observer knows the environmental feedback function, the output waveform is sufficient for the observer to know the disturbance waveform, because the output waveform, transformed by the environmental feedback function, is exactly the negative of the disturbance waveform.

RM: OK. So far so good. Although you should be clear that it is the net effect of disturbance variables on the controlled variable about which you are getting information. This effect, call it d*, is a function of all the disturbances that are having an effect on the controlled variable:

d* = g(d1, d2, ..., dn)

where g() is the disturbance function that relates the disturbance variables d1, d2, ..., dn to the controlled variable.

Red Herring alert! The input quantity is sometimes called qi = d + f(qo), where f is the environmental feedback function and d is the disturbance that you call d*.

MT: How can this happen? The output doesn't create its waveform spontaneously, nor are the output and disturbance waveforms both determined by some common source of variation; yet the output waveform is influenced (determined, if the reference value remains unchanged) by the waveform of the disturbance. The disturbance does not influence the output by effects that propagate backward through the environmental feedback function. The only remaining possibility is that the output is influenced by the chain of influences through the sensory system, the perceptual mechanism, the error computation, through to the output function and output machinery that produces the output that the outside observer can now use to determine what the disturbance must be.

RM: OK. So you are saying that it works this way:

disturbance --> sensory system --> perception --> error --> output

Three problems here. First, the disturbance variable has an effect on a physical variable that is the environmental correlate of the controlled perceptual variable; in B:CP it's called the controlled quantity. So the path above should go like this:

disturbance --> controlled quantity --> sensory system --> perception --> error --> output

Second, the controlled quantity is influenced by both the disturbance and output variables simultaneously. So an arrow should loop back from output to controlled quantity.

Finally, there is a (possibly varying) feedback function connecting output to controlled quantity.

All true, but how does this affect what I said? Was I required to detail every element of the control loop? I would assume that most CSGnet readers would know all of what you call "problems", and not require a basic tutorial every time one wants to sketch something about the influence flow.

RM: Given these last two facts (or either one of them alone), it is impossible for the error signal to contain information about the disturbance which would allow the output to mirror that disturbance.

That is precisely where I showed that you were wrong! I won't repeat the argument, because it is so trivially simple that there is hardly any other way to say it beyond "with good control, the influence of the output on the input quantity mirrors the influence of the disturbance. Information about the disturbance waveform therefore arrives at the output. The information cannot go backwards through the environmental feedback function. The only other possibility is that it goes through the path internal to the control system."

Let's forget "information" in the technical sense for a moment, and talk "correlation". If two variables A and B are correlated, there are several possibilities for the influences involved: (1) A influences B, (2) B influences A, (3) something else we may call X influences both A and B. We can substitute "have non-zero mutual information" for "correlation" and the same thing holds. If there is control, the mutual information between disturbance and output is non-zero. One of the following must be true: (1) the output influences the disturbance, (2) the disturbance influences the output, or (3) some unknown thing influences both output and disturbance. In a control system we know 1 and 3 are false, and from the physical circuitry we know 2 to be true. Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.
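Martin's trichotomy can be checked numerically. Here is a sketch (my construction, not from his post): simulate a simple loop, then estimate the mutual information between disturbance and output with a crude equal-width-bin plug-in estimator. With good control the estimate is far from zero, even though disturbance and output are connected only through the loop.

```python
import math, random
from collections import Counter

# Simulate a loop driven by a drifting disturbance.
random.seed(2)
dt, gain = 0.01, 50.0
o, d = 0.0, 0.0
ds, os_ = [], []
for _ in range(5000):
    d += random.gauss(0, 0.02)
    o += gain * (0.0 - (o + d)) * dt
    ds.append(d); os_.append(o)

def mutual_information(xs, ys, bins=8):
    """Plug-in mutual-information estimate (bits) over equal-width bins."""
    def bin_of(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))
    bx = [bin_of(x, min(xs), max(xs)) for x in xs]
    by = [bin_of(y, min(ys), max(ys)) for y in ys]
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

print(f"I(d; o) ~= {mutual_information(ds, os_):.2f} bits")
```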

RM: But even assuming that this were possible, there is no way for the system to know what the output should be so as to take the feedback function into account.

Why should it? And how is this statement relevant? What control system "knows" (I suppose you mean perceives) its output?

RM: For example, if the feedback function multiplies the output by 2, then if the output of the system were, based on information about the disturbance, a perfect mirror of it, the output would be twice as big as it should be and there would be no control.

True. Why mention it?

RM: I hope you will agree that there is no information about the feedback function that comes into the system.

Yes.

RM: And you yourself said that one has to know the feedback function in order to reconstruct the disturbance from the output. So clearly a control system can generate the output appropriate to compensate for the effect of disturbances on a controlled quantity without any information about the feedback function that connects its own output to the controlled quantity.

Yes.

RM: So if it can do that, why not go ahead and imagine that it can also generate output that compensates for the disturbance without any information about the effect of the disturbance? Imagine there's no information about the disturbance (or feedback function). It's easy if you try.

No it isn't. It is damned difficult, simply because I don't believe in pure magic. I believe control systems work on normal physical principles. [Later] When I wrote that, I mentally substituted "magnitude" for "effect" in your first sentence. With good control, the disturbance has very little effect, no matter how large its magnitude. It's the magnitude of the disturbance that is informationally connected to the output, not the effect of the disturbance.

MT: Information about the disturbance does pass through the interior components of the control loop, back to the input where that information is used to cancel the disturbance.

RM: So again I ask, why do you want to believe this? Or better yet, what do I gain if I go ahead and believe it? Would it change the questions I ask? The way I do research? Build models?

That is up to you. I can't speak for you. But it's not a question of belief or disbelief. It's a question of tool use. If you know how to use a hammer, chisel, saw, and screwdriver, you can do more with wood than if you can use only a hammer and a saw. If you don't believe that a chisel is useful, or don't know what one is when you see it, you won't learn to use it, and your woodwork may be good, but it will miss some effects that can easily be done with a chisel.

One thing thinking in information-theoretic terms does for me is clarify the relationship between disturbance bandwidth, transport lag, and limits on control quality. But there's lots beyond that, especially when you get into the interactions among many control systems. Not, however, anything that interests you, which is fine by me so long as you stop converting your personal lack of interest in the tool into general statements of its irrelevance or impossibility.

Martin

[From Bruce Abbott (2012.12.07.0715 EST)]

Rick Marken (2012.12.06.1430) –

RM: I suppose I should reply to Bruce A and Martin T. But because I know that whatever I say is not going to change their minds, I would like to try to move the conversation up a level and ask why they are so interested in information theory and its relationship to control. I know why I'm not interested in information theory: it seems (and has proved) to be completely irrelevant to all of the research I have done on control. The good ol' PCT model has been the basis of all my research and I find it very easy to use the theory to understand everyday behavior. So why all the interest in information theory? Do you use it in your research on control? Does it explain things about control that control theory doesn't?

As I said before, I see information theory as a potentially useful tool in the analysis of control systems. The fact that you’ve never felt the need to use this tool does not demonstrate that the tool is either useless or inappropriate to this purpose. I’m not aware that you’ve used the mathematical tools of frequency or time-domain analysis in your studies. Consequently, by your logic, these must be useless or perhaps even misleading tools and therefore we should just forget about Bode plots and Nyquist limits.

I personally don't have the mathematical expertise needed to apply information theory in a rigorous way to the analysis of control systems (a state of affairs I much regret), but I recognize its potential in the hands of those who do. The paper by Hugo Touchette and Seth Lloyd (2008) that Martin Taylor (2012.12.06.15:55) posted is a case in point.

Bruce

[From Bill Powers (2012.12.07.0520 MST)]

[Martin Taylor 2012.12.06.23.52]

[From Rick Marken (2012.12.06.1430)]

I know why I'm not interested in information theory: it seems (and has proved) to be completely irrelevant to all of the research I have done on control. The good ol' PCT model has been the basis of all my research and I find it very easy to use the theory to understand everyday behavior. So why all the interest in information theory? Do you use it in your research on control? Does it explain things about control that control theory doesn't?

MT: I don't understand this last question. It's like asking "does [algebra|calculus|Laplace|Fourier transform mathematics] explain things about control that control theory doesn't?" None of those questions would make sense to me, and neither does the question when the mathematical operations involved are information-theoretic.

BP: I think it’s time to resolve this dispute once and for all. All the
arguments on the pro-information-theory side throughout this years-long
discussion have been qualitative and indirect, with no actual worked-out
problems showing how a control system could be designed on the basis of
information theory, with information-blocking and perfect control. All we
have is allusions to computations that could be carried out, and
conditions which, if they obtained, would permit perfect
control.

I want to see an actual example of a control system design based on
Ashby’s way of thinking. I will throw down the gauntlet: I claim that no
such design can be produced, and furthermore that no actual working
device based on Ashby’s idea ever has or ever could be built to work as
Ashby claimed it would work in producing perfect control.

Furthermore, I will claim that Ashby’s concept can’t be used to model the
participant in any of the control demos that we have produced, and that
this concept can’t correctly simulate human behavior in any of the
control tasks such as the tracking experiment. If you, Martin, can come
up with such a design or device, or find one in any literature ever
published by Shannon or any cyberneticist, I will eat my words and
capitulate.

As fair warning, I will tell you how my rejection of the information
theory approach will be justified. I will simply point out that sensing
the disturbance and calculating its effect on the output applied to the
controlled variable is impossible in all these demonstrations and
experiments. Not just “difficult” or “insufficiently
accurate” – impossible. That is because D, the disturbance as Ashby
shows it in his diagrams, is not observable by the person doing the task,
and so is not observable by any model that simulates the human being. The
disturbance (or collection of disturbances) that causes the controlled
variable to depart from the desired value cannot, in our experiments, be
seen at all, nor can whatever function connects disturbing variables to
effects on the controlled variable be observed or deduced. Only the
effects on the controlled variable can be observed, and even then those
effects are caused in part by the controller’s own actions, which can’t
be known in advance.

In the more general case I will show that real controllers can never
achieve perfect control by Ashby’s method, and that in fact even when all
disturbing variables can be observed and each of the connecting functions
is known, perfect control is still impossible. Finally, I will show that
negative feedback control based on sensing the controlled variable will
always produce better control than Ashby’s scheme can produce, because at
least one source of system noise is greatly suppressed.

This is worth doing for one simple reason: we can publish the results and
put an end to this cybernetic travesty that has wasted the time of a
couple of generations.

I sincerely hope that Richard Kennaway is going to back me up as I try to
chew this rather large bite. I hope also that you, Martin, will see what
the outcome is going to be and anticipate me by reaching the right
conclusions on your own.

Best,

Bill P.

[Richard Kennaway (2012.12.07.13:42 GMT)]

[Martin Taylor 2012.12.06.15.55]
I really shouldn't cite this before I've read it, but the abstract seems to satisfy your requirements -- or at least the second half of it does:

Hugo Touchette and Seth Lloyd, "Information-theoretic approach to the study of control systems", arXiv:physics/0104007v2 (is that an actual reference?).

Thanks for that reference. This is the most substantial thing on the subject I've yet seen, and there might be some actual meat in the bun, although I did have to chew through rather a lot of bread.

It starts out unpromisingly, saying that open-loop control is simpler than closed-loop, because the former just has to tell the system what to do, while the latter has to also look at what it did. A lot of the paper is then dealing only with discrete-valued variables in discrete time and simply recapitulates the connection with Shannon's communication theory. So far, no different from Ashby.

But, they do get to the point of considering continuous variables and continuous time, and define a way of deriving rates of information flow by passing to the limit of the discrete time step tending to zero. I haven't worked out how well this actually stands up.

Finally they consider some examples, one of which is (they say) a proportional controller, although it's set out rather oddly (to my mind):

  X' = X - C
  C = hat(X) - r

The terminology means this:

  X is the variable to be controlled
  X' is its rate of change
  C is the output of the controller
  hat(X) is the controller's estimate of X (i.e. it's assumed to have an imperfect sensor)
  r is the reference, assumed zero throughout their discussion. They call it x*, but that's typographically confusing.

(Earlier in the paper there is a diagram of a system in which a disturbance is applied to X instead of to C as it is here, but they don't analyse that situation any further.)

Ok, the second equation looks like a proportional controller with a gain of 1. The first equation is supposed to describe the time evolution of the system. If the controller did nothing (i.e. C=0) then this is a system that on its own does an exponential runaway. Ok, it's just an example, although it's odd they don't describe it in those terms.

They consider two different versions of hat(X). The first is a discretised version of X, which I don't find interesting, but the second has hat(X) = X + Z, where Z is a gaussian variable. In other words, the controller's sensor has white noise added to it. (Qualm: white noise is not a physical process. It has infinite energy per unit time. Here, the white noise is sampled once per time step -- we're still on discrete time here -- so I suspect that the behaviour of this system will depend on the time step. As the time step goes to zero, it's sampling the noise more often and perhaps may be more heavily perturbed by it.)
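This qualm can be checked numerically. The sketch below is an editorial construction, not from the paper: it Euler-integrates the example with sensor noise of fixed per-sample variance, and shows that the variance of X then depends on the simulation time step (all parameter values are arbitrary choices).

```python
import random

def simulate(dt, steps, noise_sd=1.0, g=1.0, seed=0):
    """Euler integration of X' = X - g*(X + Z), with the sensor noise Z
    resampled each step at FIXED per-sample variance (the paper's model)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        z = rng.gauss(0.0, noise_sd)
        x += dt * (x - g * (x + z))
        xs.append(x)
    return xs

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

def mean_var(dt, steps, n_runs=10):
    # Average over several noise seeds so the comparison is not a fluke.
    return sum(variance(simulate(dt, steps, seed=s))
               for s in range(n_runs)) / n_runs

# Same total simulated time T = 10, two different step sizes.
coarse = mean_var(dt=0.01, steps=1000)
fine = mean_var(dt=0.001, steps=10000)
# With g = 1 the update reduces to X += -dt*Z: a random walk whose
# variance after time T is about T*dt*noise_sd**2 -- it shrinks with dt,
# so the system's behaviour is indeed tied to the discretisation.
print(coarse, fine)
```

The point is only that fixed per-sample noise variance is not a time-step-invariant noise model; a physical model would scale the noise with the step size.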

The analysis then becomes conceptually unclear to me. Z is assumed to have some variance N, which is fine, but X is assumed to have some variance P. What does that mean? Uncontrolled, X does an exponential runaway; controlled, surely P will depend on how well the controller works, but the paper is taking P to be a property of X alone. So I don't know what's going on there.

Then they introduce the possibility of varying the gain by modifying the equations thus:

  X' = X - g C
  C = hat(X) - r

where g is the gain. Their criterion of optimality of the controller (which I don't yet understand) leads to the result that the optimal gain is P/(P+N), which in the limit of low noise (N=0) tends to 1.

Ok, let's set the noise to zero, so hat(X) = X. For clarity in what is going on let r be any constant value, and solve the above equations. They become:

  X' = X - g C
  C = X - r

so X' = X(1-g) + rg = (1-g)(X + rg/(1-g)). This is solved by X = a exp( (1-g)t ) - rg/(1-g). For g < 1, this is an exponential runaway, and for g > 1, it tails off to rg/(g-1). For g=1, X' = r, so X ramps linearly; it is constant only in the paper's special case r=0. That is, for the supposedly optimal value of g, X is not brought close to r, and for an even slightly smaller value, X is unstable. For larger values of g, the steady-state error X-r is r/(g-1). This does not look like a reasonable set of properties for an optimal controller. This one clearly works better, the higher g is.
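These claims about the noise-free solution can be checked directly from the closed form. The sketch below is an editorial check, not from the paper; the gain values are arbitrary. It evaluates the solution at large t and shows the steady-state error shrinking toward zero as g grows, i.e. higher gain controls better.

```python
import math

def x_of_t(t, g, r, x0=0.0):
    """Closed-form solution of X' = (1 - g)*X + g*r, the noise-free case
    with proportional controller C = X - r and loop gain g (g != 1)."""
    x_star = g * r / (g - 1)                      # equilibrium value
    return x_star + (x0 - x_star) * math.exp((1 - g) * t)

r = 1.0
# At t = 50 the transient has died away for g > 1; the residual
# steady-state error X - r behaves like r/(g - 1).
for g in (2.0, 5.0, 50.0):
    print(g, x_of_t(50.0, g, r) - r)
```

So the "optimal" gain of 1 from the paper's criterion sits exactly at the boundary between runaway and a large residual error, which is what the analysis above finds odd.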

I looked up Touchette, and this paper is basically his M.Sc. thesis work with Lloyd as his supervisor. I've downloaded more of Touchette's work on this:

His M.Sc. thesis:
http://www.maths.qmul.ac.uk/~ht/archive/mscthesis.pdf
"Information-Theoretic Aspects in the Control of Dynamical Systems"
Hugo Touchette

"Information-Theoretic Limits of Control"
Hugo Touchette and Seth Lloyd
This appears to be a subset of the paper I discussed.

http://www.maths.qmul.ac.uk/~ht/archive/talks/infoconsing1.pdf
"Information in control"
Hugo Touchette
Slides for a presentation of the same work. Might be useful for getting a quick overview.

I've only glanced at these so far, but I expect only the M.Sc. thesis will have anything more.

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[Martin Taylor 2012.12.07.10.21]

My intuition says you are correct. But how does it relate to your first paragraph, which poses a completely legitimate question?

Martin

···

[From Bill Powers (2012.12.07.0520 MST)]

[Martin Taylor 2012.12.06.23.52]

[From Rick Marken (2012.12.06.1430)]

I know why I'm not interested in information theory: it seems (and has proved) to be completely irrelevant to all of the research I have done on control. The good ol’ PCT model has been the basis of all my research and I find it very easy to use the theory to understand everyday behavior. So why all the interest in information theory? Do you use it in your research on control? Does it explain things about control that control theory doesn’t?

MMT: I don't understand this last question. It's like asking “does [algebra|calculus|Laplace|Fourier transform mathematics] explain things about control that control theory doesn’t?” None of those questions would make sense to me, and neither does the question when the mathematical operations involved are information-theoretic.

BP: I think it's time to resolve this dispute once and for all. All the arguments on the pro-information-theory side throughout this years-long discussion have been qualitative and indirect, with no actual worked-out problems showing how a control system could be designed on the basis of information theory, with information-blocking and perfect control. All we have is allusions to computations that could be carried out, and conditions which, if they obtained, would permit perfect control.

I want to see an actual example of a control system design based on Ashby’s way of thinking. I will throw down the gauntlet: I claim that no such design can be produced, and furthermore that no actual working device based on Ashby’s idea ever has or ever could be built to work as Ashby claimed it would work in producing perfect control.

[Martin Taylor 2012.12.07.10.52]

On reading my earlier response, I realize that I didn't correct a misperception.

[Martin Taylor 2012.12.07.10.21]

Can you refer to any occasion on which anyone on this list proposing to analyze control systems using information-theoretic tools has suggested that it might be possible that “a control system could be designed on the basis of information theory, with information-blocking and perfect control”?

So far as I can remember, the only suggestions that have been made for the use of information theory are that bog-standard perceptual control systems can be analyzed using differential equations, simple algebra, Fourier analysis, Laplace transforms, information theory, and combinations of these, depending on which tool best suits the question being asked. Nobody that I remember has suggested that any of these tools could be used to design control systems of a different kind or control systems that have “information-blocking and perfect control”.

I wonder where your idea came from that any of these analytical tools might be used to design control systems that are theoretically impossible.

Martin
···


[Martin Taylor 2012.12.07.10.24]

[Richard Kennaway (2012.12.07.13:42 GMT)]

[Martin Taylor 2012.12.06.15.55]
I really shouldn't cite this before I've read it, but the abstract seems to satisfy your requirements -- or at least the second half of it does:

Hugo Touchette and Seth Lloyd, "Information-theoretic approach to the study of control systems", arXiv:physics/0104007v2 (is that an actual reference?).

Thanks for that reference. This is the most substantial thing on the subject I've yet seen, and there might be some actual meat in the bun, although I did have to chew through rather a lot of bread.

Thanks for your explanation. If Touchette and Lloyd deal only with a proportional controller, it isn't very interesting. The information-theoretic problems that I see are related to the fact that a good controller has a memory such as an integrator output function.

In my view, the informational limit on control has nothing to do with noisy internal channels, though if they were sufficiently noisy they might impose lower limits. The limit has to do with the fact that the disturbance changes unpredictably at a rate that can be specified, in a linear system by its bandwidth and in a general system by its information generation rate. If there is a transport lag of L seconds in the control loop, the effect of the output on the input quantity at time t0 cannot be influenced by any observation of the input quantity later than time t0-L. If the disturbance has an information generation rate of D bits/sec, the uncertainty of how it (and the input quantity) has changed in those L seconds is D*L bits. Those D*L bits are uncorrectable by any controller with transport lag L. Some controllers will do worse, but none will do better.

This argument applies no matter what the form of variation of the disturbance, whether it is a Gaussian random variable (as is usually assumed in linear analyses) or a step-wise variable with steps of Poisson-distributed time and weirdly-distributed size, or whether it is discrete or continuous, provided that a long-term average information generation rate for the disturbance can be computed. Note that if you disallow discrete variation, you can't analyze control at the category level or above, no matter what your analysis technique.
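The qualitative consequence of this argument, that residual error grows with transport lag no matter how the controller is built, can be illustrated with a small simulation. This sketch is an editorial construction, not from the thread; the integrating controller, gain, disturbance bandwidth, and lag values are all arbitrary choices.

```python
import math
import random
from collections import deque

def smoothed_noise(n, dt, tau=1.0, seed=0):
    """Low-pass-filtered Gaussian noise: a band-limited disturbance."""
    rng = random.Random(seed)
    d, out = 0.0, []
    for _ in range(n):
        d += dt / tau * (-d + rng.gauss(0.0, 1.0) / math.sqrt(dt))
        out.append(d)
    return out

def rms_error(lag_steps, n=20000, dt=0.01, gain=2.0):
    """Integrating controller that senses the controlled variable
    through a transport lag of lag_steps * dt seconds (reference = 0)."""
    dist = smoothed_noise(n, dt)
    buf = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
    o, sq = 0.0, 0.0
    for k in range(n):
        cv = o + dist[k]            # controlled variable = output + disturbance
        buf.append(cv)              # oldest entry falls off the front
        p = buf[0]                  # perception, delayed by lag_steps steps
        o += dt * gain * (0.0 - p)  # integrate the (delayed) error
        sq += cv * cv
    return math.sqrt(sq / n)

short = rms_error(lag_steps=2)    # 0.02 s lag
long_ = rms_error(lag_steps=50)   # 0.5 s lag, same disturbance
print(short, long_)
```

The same disturbance is applied in both runs; only the lag differs, and the longer lag leaves more of the disturbance uncorrected, consistent with the D*L accounting above.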

The stumbling block I have run into in my analyses has always been how to take into account the information that is spread over time by the action of the integrator. It is important, because taking into account only the most recently observed value of the disturbance, as a proportional controller does, is functionally equivalent to disregarding the derivatives of a smoothly varying disturbance waveform.

Martin

[Martin Taylor 2012.12.07.10.24]

[Richard Kennaway (2012.12.07.13:42 GMT)]

[Martin Taylor 2012.12.06.15.55]
I really shouldn't cite this before I've read it, but the abstract seems to satisfy your requirements -- or at least the second half of it does:

Hugo Touchette and Seth Lloyd, "Information-theoretic approach to the study of control systems", arXiv:physics/0104007v2 (is that an actual reference?).

Thanks for that reference. This is the most substantial thing on the subject I've yet seen, and there might be some actual meat in the bun, although I did have to chew through rather a lot of bread.

Thanks for your explanation. If Touchette and Lloyd deal only with a proportional controller, it isn't very interesting. The information-theoretic problems that I see are related to the fact that a good controller has a memory such as an integrator output function.

There is integration going on in their example: the output of the controller acts on the rate of change of the controlled variable.

The stumbling block I have run into in my analyses has always been how to take into account the information that is spread over time by the action of the integrator. It is important, because taking into account only the most recently observed value of the disturbance, as a proportional controller does, is functionally equivalent to disregarding the derivatives of a smoothly varying disturbance waveform.

This comes up in the paper. In discretised time, the dependencies of X and C look like this:

  ... X(t-dt) ------> X(t) ------> X(t+dt) ...
           \        ^     \        ^
            \      /       \      /
             v    /         v    /
        ... C(t-dt) ------> C(t) ...

C(t-dt) contains a lot of information common to X(t) and C(t), the more so the shorter the time step. From this they argue that you can pass to a limit and derive time rates of information transfer. But they do not actually prove this. The argument is very lacking in rigour.

So in general the way to handle the influence of past history would be to look at the mutual information among current values of all the variables, conditional on their values a time dt ago. When you're integrating, all of the effect of past history is contained in the values the variables had a moment ago. Then let dt tend to zero. For higher orders of integration or differentiation you might have to look at values a few times dt ago but the general idea is the same.

Transport lags would be trickier.

The non-physicality of their noise source bothers me, though. When I generate randomly varying waveforms as the disturbances applied to simulated control systems, I always construct them so as to be invariant with respect to the time step of the simulation, usually as a form of heavily bandwidth-limited smoothed white noise. This ensures that the time step has no role in the matter other than being a computational approximation to continuous time.
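One common way to build such a time-step-invariant disturbance is as a sum of random-phase sinusoids below a cutoff frequency. The sketch below is an editorial illustration of the idea, not the actual generator used in the simulations described; since the waveform is a function of continuous time, sampling it with any step dt reads the same underlying signal.

```python
import math
import random

def make_disturbance(bandwidth_hz=1.0, n_components=20, seed=42):
    """Band-limited disturbance as a sum of random-phase sinusoids.
    The result is a function of continuous time t, so the simulation
    time step only determines where it is sampled, not what it is."""
    rng = random.Random(seed)
    comps = [(rng.uniform(0.0, bandwidth_hz),   # frequency, below cutoff
              rng.uniform(0.0, 2.0 * math.pi), # random phase
              rng.gauss(0.0, 1.0))             # random amplitude
             for _ in range(n_components)]
    def d(t):
        return sum(a * math.sin(2.0 * math.pi * f * t + ph)
                   for f, ph, a in comps)
    return d

d = make_disturbance()
# The same instant reached with different step sizes gives the
# same disturbance value (up to floating-point rounding of t).
print(d(37 * 0.1), d(370 * 0.01))
```

Contrast this with the paper's fixed per-sample noise, whose effective power per unit time changes when the step size changes.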

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Rick Marken (2012.12.07.1150)]

Martin Taylor (2012.12.06.23.52)–

MT:... Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism. This is just a fancy way of saying that the causal (S-R) model of behavior must be true. But this is not how a control system works. Any “information” about the disturbance in the output is merely a side effect of the action of a negative feedback control system acting to keep a perceptual signal matching a reference signal. The system knows nothing about the reason why the perceptual signal is varying; if the system is organized properly (for negative feedback) it will act (produce output) to keep the perception at the reference level and, as a side effect will be acting in precise opposition to the net disturbance to that perception.

RM: imagine there's no information about the disturbance (or feedback function). It’s easy if you try.

MT: No it isn't. It is damned difficult, simply because I don't believe in pure magic. I believe control systems work on normal physical principles.

RM: So do I. But we differ on what constitutes “normal physical principles”. To me the principles of control, which are reflected in the simultaneous equations that define a negative feedback control system, are normal physical principles. To you it appears that only the one-way causal principles of information theory reflect normal physical principles. The idea that output must be based on information about the disturbance is just a fancy way of saying that output must be caused by the disturbance (via information about the disturbance).

I’m afraid that you have fallen completely for the behavioral illusion. There is no causal (or informational, if you like) connection from disturbance to output; there is no need for information about the disturbance to go through the organism and cause (or allow the organism to select) the right output. And there is no magic about it. The output contains all that information about the disturbance to a controlled variable, not because of magic (or information about the disturbance) but because the output varies in precise opposition to a disturbance when the system acts to keep the controlled variable at the reference specification (that is when it acts to keep the error signal as close to zero as possible).

Organisms are not communication channels; they are closed-loop control systems.
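The side-effect claim is easy to demonstrate in simulation. The sketch below is an editorial construction (all parameter values arbitrary): an integrating controller that senses only p = o + d, never d itself, yet whose output ends up almost perfectly anticorrelated with the disturbance.

```python
import math
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(0)
dt, gain, ref = 0.01, 10.0, 0.0
d_val, o, ds, os = 0.0, 0.0, [], []
for _ in range(20000):
    # slowly varying disturbance (low-pass filtered noise)
    d_val += dt * (-d_val + random.gauss(0.0, 1.0))
    p = o + d_val                 # the system senses ONLY p = o + d
    o += dt * gain * (ref - p)    # integrating output function
    ds.append(d_val)
    os.append(o)

# Output mirrors the disturbance although d itself was never sensed:
# the correlation is close to -1 purely as a side effect of control.
print(corr(ds, os))
```

Nothing in the loop computes anything about d; the near-perfect mirroring falls out of keeping p near the reference.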

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Fred Nickols (2012.12.07.1323 AZ)]

I’m probably in way over my head here but I’m going to chime in anyway (see below).

Regards,

Fred Nickols

···

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On Behalf Of Richard Marken
Sent: Friday, December 07, 2012 12:49 PM
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: Ashby’s Law of Requisite Variety

[From Rick Marken (2012.12.07.1150)]

Martin Taylor (2012.12.06.23.52)–

MT:… Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism. This is just a fancy way of saying that the causal (S-R) model of behavior must be true. But this is not how a control system works. Any “information” about the disturbance in the output is merely a side effect of the action of a negative feedback control system acting to keep a perceptual signal matching a reference signal. The system knows nothing about the reason why the perceptual signal is varying; if the system is organized properly (for negative feedback) it will act (produce output) to keep the perception at the reference level and, as a side effect will be acting in precise opposition to the net disturbance to that perception.

[Fred Nickols] I’m a negative feedback control system and although there are lots of situations in which I don’t know why a particular perceptual signal is varying, there are also situations in which I do. If someone is shoving me and I’m struggling to maintain my balance, I know the source of that disturbance. When the wind blows sideways on my car and I’m working to maintain my lane position, I also know the source of that disturbance. An “engineered” negative feedback control system such as a thermostat clearly doesn’t “know” the source of the open window that caused the temperature to drop or the source of the heat from the furnace that caused it to go up, but I am perfectly willing to accept the notion that people, as “living control systems” often know the source of the “disturbances” to the variables they are attempting to control. I would agree that in the two cases I mentioned that it’s not necessary for me to know the sources of those disturbances in order to maintain my balance or my lane position. But saying that it’s not necessary to know the source of a disturbance to maintain control of a variable and saying that the system knows nothing of the source of a disturbance are, to me, two very different assertions.

[From Rick Marken (2012.12.07.1250)]

[Fred Nickols (2012.12.07.1323 AZ)]

I’m probably in way over my head here but I’m going to chime in anyway (see below).

RM: Not at all. Great points.

[Fred Nickols] I’m a negative feedback control system and although there are lots of situations in which I don’t know why a particular perceptual signal is varying, there are also situations in which I do.

Other control systems in you do. I was talking about this from the perspective of a control system controlling a particular variable. From the perspective of that system all it knows is p. Some other control system may perceive the disturbance, d, to the other system’s p and would then be able to control its perception of d.

*** If someone is shoving me and I’m struggling to maintain my balance, I know the source of that disturbance. When the wind blows sideways on my car and I’m working to maintain my lane position, I also know the source of that disturbance. An “engineered” negative feedback control system such as a thermostat clearly doesn’t “know” the source of the open window that caused the temperature to drop or the source of the heat from the furnace that caused it to go up, but I am perfectly willing to accept the notion that people, as “living control systems” often know the source of the “disturbances” to the variables they are attempting to control. I would agree that in the two cases I mentioned that it’s not necessary for me to know the sources of those disturbances in order to maintain my balance or my lane position. But saying that it’s not necessary to know the source of a disturbance to maintain control of a variable and saying that the system knows nothing of the source of a disturbance are, to me, two very different assertions.***

RM: Right. The system controlling p knows nothing of the disturbance to p, ever. All the system controlling p knows is p, which is o+d. But some other system in you could be perceiving d or o. In the shoving example, one system, controlling for the perception of balance, is correcting for the disturbance to this variable while another system, at a much higher level, that can perceive the relationship between shoving (d) and balance (p) can control the perception of this relationship by shoving back (unless he’s controlling for not escalating conflicts, a still higher order perception).

So it’s only from the point of view of a control system controlling a particular perception that the system knows nothing about the reason for any variation in the perception (has no information about the disturbance(s) causing any variation in the perception). It’s true of the system controlling balance as much as it is of the thermostat controlling temperature.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Fred Nickols (2012.12.07.1359 AZ)]

Aha! I forgot. I’m a hierarchical control system.

Fred Nickols

···


[Martin Taylor 2012.12.07.16.15]

[From Rick Marken (2012.12.07.1150)]

Martin Taylor (2012.12.06.23.52)–

MT:... Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism. This is just a fancy way of saying that the causal (S-R) model of behavior must be true.

No, but each link in the loop, such as perceptual function->comparator, is an S-R system. What appears at the input determines (apart from noise, which we are discounting) what arrives at the output.

RM: But this is not how a control system works. Any “information” about the disturbance in the output is merely a side effect of the action of a negative feedback control system acting to keep a perceptual signal matching a reference signal.

That's a reasonable way of stating it, yes.

RM: The system knows nothing about the reason why the perceptual signal is varying;

True.

RM: if the system is organized properly (for negative feedback) it will act (produce output) to keep the perception at the reference level and, as a side effect will be acting in precise opposition to the net disturbance to that perception.

True. So what is your point? You have just stated that information about the disturbance waveform arrives at the input quantity from the output, and this must necessarily happen by virtue of the signals passing through the internal part of the loop circuitry.

RM: imagine there's no information about the disturbance (or feedback function). It’s easy if you try.

MT: No it isn't. It is damned difficult, simply because I don’t believe in pure magic. I believe control systems work on normal physical principles.

RM: So do I. But we differ on what constitutes “normal physical principles”. To me the principles of control, which are reflected in the simultaneous equations that define a negative feedback control system, are normal physical principles. To you it appears that only the one-way causal principles of information theory…

Huh?

RM: ...reflect normal physical principles. The idea that output must be based on information about the disturbance is just a fancy way of saying that output must be caused by the disturbance (via information about the disturbance).

RM: I'm afraid that you have fallen completely for the behavioral illusion.

Them's fightin' words, buddy!

Are you trying to say that the equations we usually use to describe the operation of a control system are wrong? It certainly sounds like it. Or are you saying that there is no signal path from sensory input through the perceptual function, the comparator, the output function and the environmental feedback path?

RM: There is no causal (or informational, if you like) connection from disturbance to output; there is no need for information about the disturbance to go through the organism and cause (or allow the organism to select) the right output. And there is no magic about it. The output contains all that information about the disturbance to a controlled variable, not because of magic (or information about the disturbance) but because the output varies in precise opposition to a disturbance when the system acts to keep the controlled variable at the reference specification (that is when it acts to keep the error signal as close to zero as possible).

Quite so, for the last part of the paragraph. But you would have to rethink the operation of a control system to justify the first part.

RM: Organisms are not communication channels; they are closed-loop control systems.

And hierarchic ones, at that.

Martin

[From Bill Powers (2012.12.07.1455 MST)]

[Martin Taylor 2012.12.07.16.15]

It's clear to me that I know too little about information theory to say anything even a fraction as useful as what Richard Kennaway can say. I'll be watching and listening with great interest and keeping my mouth pretty much shut from here on in this thread. If I can.

Best,

Bill P.

[From Bruce Abbott (2012.12.07.1830 EST)]

[From Rick Marken (2012.12.07.1150)] –

One of us is confused here – and I don't think it's me! (nor Martin)

Martin Taylor (2012.12.06.23.52)

MT: ... Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism.

How else could it get there? Magic?

This is just a fancy way of saying that the causal (S-R) model of behavior must be true.

Not so. It's a circular loop of causality, but each element within the loop transmits information only one way, from CV to perceptual signal to error signal to output and back to CV. And because of loop delay, these effects do not propagate instantly and simultaneously around the loop.

I suspect that you are confusing what must be done to correctly ANALYZE how the system works (trace the effects backwards around the loop) with how changes in the signals and variables PROPAGATE around the loop (i.e., in the forward direction).

RM: But this is not how a control system works. Any "information" about the disturbance in the output is merely a side effect of the action of a negative feedback control system acting to keep a perceptual signal matching a reference signal.

So you agree that the control system transmits information from the disturbance to the output? Other than by coincidence, that's the only way that the form of one could match the inverse of the other.

RM: The system knows nothing about the reason why the perceptual signal is varying;

And no one has asserted or even suggested that it does, only that at least a part of the variation in the perceptual signal is induced by variation in the disturbance. If we could separate that part of the variation in the perceptual signal from the other sources of variation, we could reduce uncertainty as to the time-varying values of the disturbance to zero. (I am assuming that we know the function through which disturbance values translate into effects on the CV.)
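Bruce's parenthetical point can be illustrated with a toy simulation. The sketch below assumes (purely for illustration) that disturbance and output combine additively into the CV and that this environment function is known; all names and parameter values are the editor's assumptions, not code from anyone in this thread. Under those assumptions the disturbance values can be recovered exactly from the recorded signals:

```python
import math

# Toy illustration: if the environment function is known (here, the
# assumption that disturbance and output add into the CV: qi = qo + d),
# the disturbance can be recovered exactly from the recorded CV and
# output values. GAIN and DT are arbitrary illustrative parameters.
GAIN, DT, r = 50.0, 0.01, 0.0
qo = 0.0
trace = []                        # (qi, qo_into_this_qi, d) per tick
for t in range(500):
    d = math.sin(t * 0.05)        # smoothly varying disturbance
    qo_in = qo                    # output value feeding this tick's CV
    qi = qo_in + d
    qo = qo + GAIN * (r - qi) * DT
    trace.append((qi, qo_in, d))

# Invert the known environment function: d = qi - qo_in.
recovered = [qi - qo_in for (qi, qo_in, d) in trace]
worst = max(abs(rd - d) for rd, (qi, qo_in, d) in zip(recovered, trace))
print(worst < 1e-9)               # prints True: reconstruction is exact
```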

RM: if the system is organized properly (for negative feedback) it will act (produce output) to keep the perception at the reference level and, as a side effect will be acting in precise opposition to the net disturbance to that perception.

I don't see why you call this a side effect. The opposing action is what keeps the perception NEAR (not at) the reference level. I think of a side effect as something the system does to affect something else, which is not intrinsic to the proper operation of the system. For example, the control system acting to position my arm may change my center of gravity as a side effect of the arm movement, disturbing another control system that is maintaining my balance.

RM: imagine there's no information about the disturbance (or feedback function). It's easy if you try.

MT: No it isn't. It is damned difficult, simply because I don't believe in pure magic. I believe control systems work on normal physical principles.

Me, too!

RM: So do I. But we differ on what constitutes "normal physical principles". To me the principles of control, which are reflected in the simultaneous equations that define a negative feedback control system, are normal physical principles.

All the conversions around the loop take place continuously and simultaneously: input values are establishing perceptual values via the input function, error via the comparator function, output via the output function, effect of output on the CV via the feedback function. But because of transmission lags, the error signal going into the output function represents the state of the error signal a short time earlier, and that error signal is based on an even earlier value of the perceptual signal, etc. This is where the overall loop delay comes from. So the output that is being fed back onto the CV is not based on the current value of the disturbance (via its effect on the CV) but on its value one cycle through the loop earlier. The influence of changes in the magnitude of the disturbance propagates around the loop in real time; it does not appear simultaneously at each point in the control loop.
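Bruce's description of lag around the loop can be sketched numerically. The following is only an illustrative discrete-time toy (an integrating output function with an assumed gain and step size; the names qi, p, e, qo, d are conventional PCT-style labels, not anyone's actual model):

```python
# Illustrative discrete-time toy control loop (assumed gain and step size,
# not a specific model from this thread). The CV computed at tick t uses
# the output computed at tick t-1, so the disturbance shows up in the CV
# one tick before the output begins to oppose it.

GAIN = 50.0            # output-function gain (assumption)
DT = 0.01              # integration step (assumption)
r = 0.0                # reference signal

d = [0.0] * 200
for t in range(50, 200):
    d[t] = 1.0         # step disturbance arrives at t = 50

qo = 0.0               # output quantity
outputs = []
for t in range(200):
    qi = qo + d[t]     # CV = effect of previous tick's output + disturbance
    p = qi             # perceptual function (identity here)
    e = r - p          # comparator
    qo = qo + GAIN * e * DT   # integrating output function
    outputs.append(qo)

# The loop settles with the output almost exactly cancelling the disturbance:
print(round(outputs[-1], 2))  # prints -1.0
```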

RM: To you it appears that only the one-way causal principles of information theory reflect normal physical principles.

Two points are in order here: First, as demonstrated above, "one-way causal principles" are at work within a control loop whether you apply an information analysis or not. Second, information theory is just a tool for analyzing the transmission of information through a channel. There is nothing in principle that prevents one from constructing an information analysis of a closed loop system.

RM: The idea that output must be based on information about the disturbance is just a fancy way of saying that output must be caused by the disturbance (via information about the disturbance).

More properly, it's a way of saying that changes in output must be caused by changes in the disturbance. The causal influence is indirect, of course, being mediated by the circular loop of causation operating within the control system.

RM: I'm afraid that you have fallen completely for the behavioral illusion. There is no causal (or informational, if you like) connection from disturbance to output; there is no need for information about the disturbance to go through the organism and cause (or allow the organism to select) the right output. And there is no magic about it. The output contains all that information about the disturbance to a controlled variable, not because of magic (or information about the disturbance) but because the output varies in precise opposition to a disturbance when the system acts to keep the controlled variable at the reference specification (that is, when it acts to keep the error signal as close to zero as possible).

You've presented a circular argument (no pun intended). You're saying that the output contains all the information about the disturbance to a controlled variable because the control system generates an output that varies in exactly the same way as the disturbance (although inverted), which is equivalent to saying that it contains the information because it contains the information. How it gets that information impressed upon it is apparently magic, because you don't explain how it happens, other than to say that "the control system does it."
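For what it's worth, the effect both sides agree on is easy to reproduce in a toy simulation (all names and parameter values below are the editor's assumptions chosen for illustration): with a slowly varying disturbance, the output comes out almost perfectly anticorrelated with the disturbance, even though no stage of the loop receives d as an input.

```python
import math

# Toy simulation (parameters are assumptions) of the effect under dispute:
# the output tracks the inverted disturbance, although only the CV is sensed.
GAIN, DT, r = 50.0, 0.01, 0.0
qo, ds, qos = 0.0, [], []
for t in range(2000):
    d = math.sin(t * 0.01)        # slow disturbance
    qi = qo + d                   # CV: previous output effect + disturbance
    qo = qo + GAIN * (r - qi) * DT
    ds.append(d)
    qos.append(qo)

# Pearson correlation between disturbance and output (skipping warm-up):
xs, ys = ds[100:], qos[100:]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
corr = cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                       * sum((y - my) ** 2 for y in ys))
print(corr < -0.99)               # prints True
```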

RM: Organisms are not communication channels; they are closed-loop control systems.

So, you are saying that there are no communication channels in a control system through which effects or signals are propagated from CV to perceptual signal to error signal to output and back to CV? My, oh my, you really do believe in magic!

Bruce