Ashby's Law of Requisite Variety

[From Bill Powers (2012.12.08.1043 MST)]

Bruce Abbott (2012.12.07.1830 EST) –

BA: One of us is confused here –
and I don’t think it’s me! (nor Martin)
Martin Taylor (2012.12.06.23.52)

MT:… Somehow, the information from the disturbance appears at the
output, and the only path through which this can happen is by way of the
internal circuitry of the control system.

RM: You are assuming that the only way for information about the
disturbance to appear at the output is for this information to have gone
through the organism.

BA: How else could it get there? Magic?

BP: I’m gradually getting the idea (which I wish someone would explain
clearly) that the information-theoretic analysis of a control system
can’t be the basis for designing a control system or explaining its
operation. Martin acts all impatient and disgusted at me for not
understanding this self-evident and obvious fact, but can’t we all just
accept the fact that Bill is old, slow, and ignorant, and try to make all
these obvious facts a little easier for a creaky worn-out brain to
understand? Is all this stuff about information theory actually so
completely worked out that there are no doubts about it at all any more?
If so, why not just explain it all? Or do you have to have a Mensa pin to
understand it?

A point I need to have cleared up is how the information from the
disturbance can appear at the output as Martin, and now Bruce Abbott,
claims it does. How can you verify that?

My problem is very simple. The input quantity (or better, the perceptual
signal) is a physical variable with a certain magnitude at any moment.
That magnitude is the sum of effects from two other signals, d and o,
disturbance and output. But the magnitude of the physical variable is
only a single number, and it could be the sum of, literally, an infinity
of effects from other signals in pairs or larger sets. So what can there
be about a perceptual variable, which is known to the control system only
as a single number at any given moment, that gives it a unique
relationship to a disturbance, a sum over many possible causal
variables? If the controlled variable raises that problem, so does the
output variable.

The Taylor/Abbott claim seems to me like claiming that the sweetness
contributed to a cup of tea by a particular sugar cube can be identified
in the final sweetness resulting from all the sugar cubes dissolved in
the tea. Yes, the particular sugar cube’s effect appears in the
sweetness, but given the sweetness, there is no way to work backward to
deduce the presence of any particular sugar molecule or collection of
molecules. So if you think of a control system as a channel carrying a
message from the disturbance to the output, you have to explain how that
message can have any meaning when its elements are taken more or less at
random from two or more other simultaneous messages (d and o).

It seems to me that the physics of message transmission takes precedence
over the information-theoretic and semantic aspects. If you subtract a
signal carrying one message from another signal carrying a different
message, you get a single signal which has only one magnitude at any
given instant. Perhaps someone up on the mathematics of information
theory can tell me what happens to the information in each of the two
input signals – surely the information in the sum of the signals is not
just the sum over the information in each contributing signal. And the
meaning of the output message surely is not the sum of the meanings of
the incoming messages!

Best,

Bill P.

[From Bruce Abbott (2012.12.08.1635)]

Bill Powers (2012.12.08.1043 MST) –

BA: Bruce Abbott (2012.12.07.1830 EST)

BA: One of us is confused here – and I don’t think it’s me! (nor Martin)

Martin Taylor (2012.12.06.23.52)

MT:… Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism.

BA: How else could it get there? Magic?

BP: I’m gradually getting the idea (which I wish someone would explain clearly) that the information-theoretic analysis of a control system can’t be the basis for designing a control system or explaining its operation. Martin acts all impatient and disgusted at me for not understanding this self-evident and obvious fact, but can’t we all just accept the fact that Bill is old, slow, and ignorant, and try to make all these obvious facts a little easier for a creaky worn-out brain to understand? Is all this stuff about information theory actually so completely worked out that there are no doubts about it at all any more? If so, why not just explain it all? Or do you have to have a Mensa pin to understand it?

BP: A point I need to have cleared up is how the information from the disturbance can appear at the output as Martin, and now Bruce Abbott, claims it does. How can you verify that?

Bill, your intellect is clearly up to the task. Evidently we just haven’t explained it clearly enough. Let’s see if this helps:

Information, in the information-theoretic sense, is all about uncertainty reduction. Imagine a grid of four squares. A coin is positioned on one of them, but you can’t see it. The uncertainty as to the coin’s location, expressed in bits, is 2 bits. That is, it takes two binary digits to represent the specific position of the coin on the grid. (00, 01, 10, 11 are the four possible positions.) If I tell you that the coin is located in the northern half of the grid, I have provided one bit of information, thus reducing the uncertainty about the coin’s location to 2 bits – 1 bit = 1 bit. Information is the number of bits by which uncertainty is reduced.

If I had told you right away that the coin is in the north-east corner of the grid, I would have provided two bits of information and reduced uncertainty concerning the coin’s location to zero. The information transmitted through a channel is equal to the number of bits of uncertainty reduction that the information provides.
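The arithmetic of that example can be put in a few lines of code (illustrative only; `uncertainty_bits` is just a name for the base-2 logarithm):

```python
import math

def uncertainty_bits(n_options):
    """Uncertainty, in bits, of a uniform choice among n equally likely options."""
    return math.log2(n_options)

before = uncertainty_bits(4)   # coin hidden on one of 4 squares: 2.0 bits
after = uncertainty_bits(2)    # "northern half" leaves 2 squares: 1.0 bit
info = before - after          # information = reduction in uncertainty: 1.0 bit
print(before, after, info)     # prints: 2.0 1.0 1.0
```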

That’s the discrete case, but for the continuous case, basically we can think of information as the reduction in uncertainty concerning the waveform of a signal that is transmitted through a communication channel.

In the case of a control system with constant reference, the uncertainty as to the disturbance waveform is reduced to the extent that the output waveform matches that of the disturbance (after inversion). The reduction in that uncertainty is what information theory defines as the information transmitted through the communication channel from disturbance to output.

Because the disturbance waveform is very nearly mirrored by the output waveform when control is good, it follows by definition that information has been transmitted from disturbance to output. For information to have been transmitted, there must have existed a channel of communication between them, however indirect.
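A toy simulation can make this concrete (the gain, time step, and disturbance are arbitrary illustrative choices, not values from anyone's model): when control is good, the output waveform is very nearly the inverted disturbance, which is exactly the sense in which information about the disturbance is present at the output.

```python
import math, random

random.seed(1)

dt, gain = 0.01, 50.0          # integration step and output gain (illustrative)
r, o, d = 0.0, 0.0, 0.0        # constant reference, output, disturbance
d_series, o_series = [], []

for _ in range(5000):
    d += random.gauss(0, 0.02)   # slowly drifting disturbance
    p = o + d                    # perceptual signal = input quantity
    e = r - p                    # error
    o += gain * e * dt           # integrating output function
    d_series.append(d)
    o_series.append(o)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

print(corr(d_series, o_series))  # close to -1: output mirrors the inverted disturbance
```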

That communication channel is provided by the influences each variable and signal have on the values of the variables and signals to which they directly connect.

The information approach is just another way to analyze the performance of a control system. However, whether analyzing the performance of a control system in information-theoretic terms is worth the trouble remains to be demonstrated. I’m not suggesting that it is a particularly useful tool, only that it can be applied validly to the analysis of the performance of any system that involves the communication of information. And that includes control systems.

Did that help?

Bruce

[From Rick Marken (2012.12.08.1800)]

Bruce Abbott (2012.12.08.1635)--

BA: Because the disturbance waveform is very nearly mirrored by the output
waveform when control is good, it follows by definition that information has
been transmitted from disturbance to output. For information to have been
transmitted, there must have existed a channel of communication between
them, however indirect.

RM: What if I showed you a situation where control is good and the
disturbance waveform is not even remotely mirrored by the output?
Would that get you to give up on information theory? I didn't think
so. But I'll develop the demonstration anyway.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bruce Abbott (2012.12.08.2225 EST)]

Rick Marken (2012.12.08.1800)--

Bruce Abbott (2012.12.08.1635)

BA: Because the disturbance waveform is very nearly mirrored by the
output waveform when control is good, it follows by definition that
information has been transmitted from disturbance to output. For
information to have been transmitted, there must have existed a
channel of communication between them, however indirect.

RM: What if I showed you a situation where control is good and the
disturbance waveform is not even remotely mirrored by the output?
Would that get you to give up on information theory? I didn't think so. But
I'll develop the demonstration anyway.

Well, this should be entertaining . . . I'm looking forward to it.

Bruce

[From Rick Marken (2012.12.09.1130)]

Bruce Abbott (2012.12.08.2225 EST)--

RM: What if I showed you a situation where control is good and the
disturbance waveform is not even remotely mirrored by the output?
Would that get you to give up on information theory? I didn't think so. But
I'll develop the demonstration anyway.

BA: Well, this should be entertaining . . . I'm looking forward to it.

RM: Attached is one particularly nice run of a compensatory tracking
task that I am developing (based on the one at
Nature of Control). The
difference between this task and the typical compensatory tracking
task (like the one on the net) is that the feedback function changes
periodically throughout the task. In the attached figure M is mouse
position, which is the output, C is cursor position, which is the
controlled variable, and D is the disturbance. The feedback function
is the connection from M (output) to C (controlled variable). In this
task

C = integral(k * dM) + D

where k is the feedback function and dM is the rate of change in
output at each frame of the program. The value of k is either 1 or -1
at different times during a run; it changes about 5-7 times during a
28 sec run. After some practice it is possible to vary M (output)
appropriately so as to keep C under control. Since k changes from 1 to
-1 and back every so often during a run it is sometimes necessary to
move the mouse left and sometimes right to compensate for disturbances
that push the cursor in the same direction. But you can get pretty
skilled at moving the mouse the right way in order to keep C under
control (near the target). When you do this, there is often zero
correlation between M (output) and disturbance (D). This is what is shown
in the attached graph.

In this run I kept C pretty close to the target (average
deviation was 20 pixels out of a possible range of 300; about 6% of
the total). So control was quite good. Yet the correlation between
output and disturbance (the M-D correlation)
is only .046, virtually zero. The zero correlation between M and D
says that there is no information about the disturbance in the output.
Somehow I was able to move the mouse so that its effect on the
controlled variable (C) almost perfectly compensated for the
disturbance despite the fact that I apparently had no information
about the disturbance as indicated by the fact that my outputs did not
mirror the disturbance.

I don't see how information theory can account for the results shown
here, where there is control without any evidence that there was any
information about the disturbance.
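For what it's worth, here is a rough reconstruction of the setup (not the actual program; the gains and switching schedule are made up, and it idealizes the participant as already adapted to each switch, which is the job a higher-level system would do after a brief lapse):

```python
import math, random

random.seed(2)

dt, gain = 0.01, 30.0
steps, switch_every = 2800, 466   # ~6 sign reversals over a 28 sec run
k, C, M, D = 1.0, 0.0, 0.0, 0.0   # feedback sign, cursor, mouse, disturbance
D_s, M_s, C_s = [], [], []

for step in range(steps):
    if step > 0 and step % switch_every == 0:
        k = -k                    # feedback function reverses sign
    dD = random.gauss(0, 0.3)
    D += dD
    e = 0.0 - C                   # error: cursor relative to target at 0
    dM = gain * e * dt / k        # output sense already adapted to current k
    M += dM
    C += k * dM + dD              # C = integral(k * dM) + D, incrementally
    D_s.append(D); M_s.append(M); C_s.append(C)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

rms_C = math.sqrt(sum(c * c for c in C_s) / len(C_s))
print("control error (rms):", rms_C)          # small: control is good
print("M-D correlation:", corr(M_s, D_s))     # low: output does not mirror D
```

With the sign flips removed (k held at 1), the same code gives an M-D correlation near -1.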

By the way, I will post this tracking program once I have it in a
state I like, which may take a few days since I am working on other
things too. But I just wanted to show these results now because they
came out pretty nice (I have a couple other runs where the M-D
correlation is zero but this was the prettiest).

Best

Rick


[Attachment: ControlwoDistInfo.GIF]

[Martin Taylor 2012.12.09.14.50]

[From Rick Marken (2012.12.09.1130)]

Bruce Abbott (2012.12.08.2225 EST)--

RM: What if I showed you a situation where control is good and the
disturbance waveform is not even remotely mirrored by the output?
Would that get you to give up on information theory? I didn't think so. But
I'll develop the demonstration anyway.

BA: Well, this should be entertaining . . . I'm looking forward to it.

RM: Attached is one particularly nice run of a compensatory tracking
task that I am developing (based on the one at
Nature of Control). The
difference between this task and the typical compensatory tracking
task (like the one on the net) is that the feedback function changes
periodically throughout the task.

A nice experiment on switching rate.

Somehow I was able to move the mouse so that its effect on the
controlled variable (C) almost perfectly compensated for the
disturbance despite the fact that I apparently had no information
about the disturbance as indicated by the fact that my outputs did not
mirror the disturbance.

I'm not sure where the part after "the fact that" comes from. Who, for example, is the "I" in question?

In an earlier study you did with Bill on this kind of switching, you used a second control system to change the sense of the output. You should try to make a single one-level control system model without reorganization to fit the results of this experiment, and then analyze what happens in it. I don't believe you could do that, since the control system only has one perception, which is the magnitude of its input quantity, and one thing it can alter to influence that perception, which is the magnitude of its output quantity. It has no mechanism for altering its own output function to change the sense of the output.

Informationally, there isn't any problem. All that you have done is introduce 5 to 7 bits of noise, one per switch of the sense of the environmental feedback function. Those 5 to 7 bits require compensation. Your second system in your earlier modelling of this situation supplied that compensation. You haven't affected the information about the disturbance at the output, and I don't see why you think you have.
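The accounting here is simple (taking 6 as a representative switch count from the reported 5 to 7):

```python
import math

switches = 6                      # sign reversals per run (reported: 5 to 7)
bits_per_switch = math.log2(2)    # each reversal is one binary event: 1 bit
extra_bits = switches * bits_per_switch
print(extra_bits)                 # uncertainty added by the switching, in bits
```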

I don't see how information theory can account for the results shown
here, where there is control without any evidence that there was any
information about the disturbance.

The evidence is in the fact that the influence of the output on the input quantity opposes the disturbance quite nicely except right after a switch of the environmental feedback function. The influence of the output is determined jointly by the value of the output signal and by the environmental feedback function, and you're not controlling by altering the environmental feedback function to compensate for an information-free output value (though in principle you could control that way).

Martin

[From Rick Marken (2012.12.09.1415)]

Martin Taylor (2012.12.09.14.50)--

RM: Brilliant, Martin. I knew you could wipe me up like apple pie,
just like those rabbis did with Feynman;-)

RM: Somehow I was able to move the mouse so that its effect on the
controlled variable (C) almost perfectly compensated for the
disturbance despite the fact that I apparently had no information
about the disturbance as indicated by the fact that my outputs did not
mirror the disturbance.

MT: I'm not sure where the part after "the fact that" comes from.

RM: It comes from the fact that there is no correlation between
disturbance and output, which I thought was the basis for saying that
there was information about the disturbance in the input to the
control system. But I guess there's a new rule now and I didn't get the
memo. The rule is that there is information about the disturbance no
matter what. There just is, see.

MT: In an earlier study you did with Bill on this kind of switching, you used a
second control system to change the sense of the output. You should try to
make a single one-level control system model without reorganization to fit
the results of this experiment, and then analyze what happens in it. I don't
believe you could do that, since the control system only has one perception,
which is the magnitude of its input quantity, and one thing it can alter to
influence that perception, which is the magnitude of its output quantity. It
has no mechanism for altering its own output function to change the sense of
the output.

RM: You're absolutely right; modeling this task would require two
control systems (at least). But that's PCT, isn't it? Not information
theory.

MT: Informationally, there isn't any problem.

RM: So I've gathered.

MT: All that you have done is
introduce 5 to 7 bits of noise, one per switch of the sense of the
environmental feedback function. Those 5 to 7 bits require compensation.
Your second system in your earlier modelling of this situation supplied that
compensation. You haven't affected the information about the disturbance at
the output, and I don't see why you think you have.

RM: I think I have because there is no correlation between disturbance
and output. But, as I said, I didn't get the memo about that not being
relevant anymore;-)

RM I don't see how information theory can account for the results shown
here, where there is control without any evidence that there was any
information about the disturbance.

MT: The evidence is in the fact that the influence of the output on the input
quantity opposes the disturbance quite nicely except right after a switch of
the environmental feedback function. The influence of the output is
determined jointly by the value of the output signal and by the
environmental feedback function, and you're not controlling by altering the
environmental feedback function to compensate for an information-free output
value (though in principle you could control that way).

RM: Of course;-)

Now let's see how Bruce does. You're a tough act to follow but Bruce
is pretty good too.

Best

Rick


[Martin Taylor 2012.12.09.17.22]

[From Rick Marken (2012.12.09.1415)]

Martin Taylor (2012.12.09.14.50)--

RM: Brilliant, Martin. I knew you could wipe me up like apple pie,
just like those rabbis did with Feynman;-)

RM: Somehow I was able to move the mouse so that its effect on the
controlled variable (C) almost perfectly compensated for the
disturbance despite the fact that I apparently had no information
about the disturbance as indicated by the fact that my outputs did not
mirror the disturbance.

MT: I'm not sure where the part after "the fact that" comes from.

RM: It comes from the fact that there is no correlation between
disturbance and output, which I thought was the basis for saying that
there was information about the disturbance in the input to the
control system.

Well, you should have studied information theory a bit more closely, if you thought that.

It's easy to disprove theories you imagine. Less easy to disprove ones that are out there to be learned.

Look, the point about your experiment is that there are two separate perceptions to be controlled. One is a perception that can be translated as "which way the other control system's environmental feedback path is switched". The disturbance to that perception is a step function, and its information rate is, you say, 5 to 7 bits over the course of a run. The second is the tracking perception whose environmental feedback path is switched. The two disturbances are independent of each other, so their combination provides 5 to 7 more bits of uncertainty to be partitioned between the two outputs than is provided by the tracking disturbance in itself.

  But I guess there's a new rule now and I didn't get the
memo.

No, but you could read the Shannon book, in which it is written (it's only 63 years old, so I guess you could call it a new rule).

  The rule is that there is information about the disturbance no
matter what. There just is, see.

Only to the degree that control is good.

MT: In an earlier study you did with Bill on this kind of switching, you used a
second control system to change the sense of the output. You should try to
make a single one-level control system model without reorganization to fit
the results of this experiment, and then analyze what happens in it. I don't
believe you could do that, since the control system only has one perception,
which is the magnitude of its input quantity, and one thing it can alter to
influence that perception, which is the magnitude of its output quantity. It
has no mechanism for altering its own output function to change the sense of
the output.

RM: You're absolutely right; modeling this task would require two
control systems (at least). But that's PCT, isn't it? Not information
theory.

The fact that you need a second control system has nothing to do with information theory. The information-theoretic analysis of the case does. Your point seemed to be that you could solve the situation using only a single control system. If that were true, then you might have a more convincing argument, because you would have demonstrated control using a single control system with a single-valued perception in a situation where not only is there no one-to-one relationship between output and the effect on the input quantity, but the same output quantity could produce equal and opposite influences on the input quantity.

Informationally, that would mean that the mutual information between the output value and the output influence on the input quantity would be low and yet control would be good. But, as you agree, it is not possible to produce such a situation.

In the real world, related situations do occur, a lot. The environmental feedback path is seldom stable for most perceptions, and that fact is at the heart of what Kent was trying to get at in the "Analysing feedback paths" thread he started.

I find it instructive to observe how easily you move from failed demonstration to satirical comment when defending against disturbances to a tightly controlled belief. I would have thought that as a scientist you might have been more inclined to find either rational arguments or effective demonstrations that 2+2 is not 4.

Martin

[From Bruce Abbott (2012.12.07.1830 EST)]

Rick Marken (2012.12.07.1150) –

BA: One of us is confused here – and I don’t think it’s me! (nor Martin)

Martin Taylor (2012.12.06.23.52)

MT:… Somehow, the information from the disturbance appears at the output, and the only path through which this can happen is by way of the internal circuitry of the control system.

RM: You are assuming that the only way for information about the disturbance to appear at the output is for this information to have gone through the organism.

BA: How else could it get there? Magic?

RM: This is just a fancy way of saying that the causal (S-R) model of behavior must be true.

BA: Not so. It’s a circular loop of causality, but each element within the loop transmits information only one way, from CV to perceptual signal to error signal to output and back to CV. And because of loop delay, these effects do not propagate instantly and simultaneously around the loop.

[Attachment: ExponentialTrajectories2.jpg]

[From Bruce Abbott (2012.12.09.1850 EST)]

Rick Marken (2012.12.09.1415)--

Martin Taylor (2012.12.09.14.50)

RM: Brilliant, Martin. I knew you could wipe me up like apple pie, just like
those rabbis did with Feynman;-)

RM: Somehow I was able to move the mouse so that its effect on the
controlled variable (C) almost perfectly compensated for the
disturbance despite the fact that I apparently had no information
about the disturbance as indicated by the fact that my outputs did
not mirror the disturbance.

MT: I'm not sure where the part after "the fact that" comes from.

RM: It comes from the fact that there is no correlation between disturbance
and output, which I thought was the basis for saying that there was
information about the disturbance in the input to the control system. But I
guess there's a new rule now and I didn't get the memo. The rule is that there
is information about the disturbance no matter what. There just is, see.

...

RM I don't see how information theory can account for the results
shown here, where there is control without any evidence that there
was any information about the disturbance.

MT: The evidence is in the fact that the influence of the output on
the input quantity opposes the disturbance quite nicely except right
after a switch of the environmental feedback function. The influence
of the output is determined jointly by the value of the output signal
and by the environmental feedback function, and you're not controlling
by altering the environmental feedback function to compensate for an
information-free output value (though in principle you could control that way).

RM: Of course;-)

RM: Now let's see how Bruce does. You're a tough act to follow but Bruce is
pretty good too.

Well, you're right: Martin is a tough act to follow. It's impossible to be
righter than right, and I won't try. But let me explain, in a slightly
different way, how your experimental results are actually consistent with an
information analysis.

When I read your description of the experiment, I noticed the same thing
Martin did, which is that it would require a two-level PCT model to account
for your results. The bottom level is controlling for cursor position, while
the top level is trying to maintain an inverse relation between the effect
of the disturbance on the CV (cursor position) and the effect of the output
(mouse movements) on same.

Right after the program reverses the relation between mouse movement and
cursor movement (inverting the sign of the environmental feedback function),
a runaway positive feedback condition sets in and control over the cursor
position is lost. The top-level system then acts by reversing the output of
the lower-level system, restoring negative feedback in the lower system,
after which control is quickly regained.

After the higher-level system inverts the sign of the output function so as
to compensate for the change in the sign of the environmental feedback
function (eff), the output, after passing through the eff, acts on the
controlled variable in the usual way, and its pattern of variation will
again mirror that of the lower-level disturbance. Except for the brief
moments when control is lost, the information in the disturbance is still
being replicated in the output waveform. It is not lost at all.

The output that "receives" information from the disturbance is the output f
from the environmental feedback function, not the output from the output
function. In previous discussions, to keep the presentation simple we've
been using a simplified version of the control system in which the output
feeds back directly onto the CV. That is equivalent to assuming that the
feedback function is simply f = 1*o. In other words, f = o. When you
manipulate the sign of the feedback function as you did in your experiment,
making the distinction between o and f is then crucial. In that case f = -o,
and it is f that reproduces the information in the disturbance. (The same
information is present in o, but to recover the disturbance waveform using o
we would have to reverse its sign each time the program inverted the sign of
the eff. Using f avoids that minor problem.)
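A toy illustration of the o-versus-f distinction (made-up gains; it uses the simplified non-integral feedback f = k*o with k = +/-1, and idealizes the output sense as already matched to the current sign of the eff):

```python
import math, random

random.seed(3)

dt, gain = 0.01, 60.0
k, M, D = 1.0, 0.0, 0.0           # feedback sign, output o, disturbance
D_s, o_s, f_s = [], [], []

for step in range(3000):
    if step > 0 and step % 500 == 0:
        k = -k                    # eff flips between f = o and f = -o
    D += random.gauss(0, 0.1)
    f = k * M                     # f: output after the feedback function
    e = -(f + D)                  # error in the controlled variable (reference 0)
    M += gain * e * dt / k        # output sense matched to current k
    D_s.append(D); o_s.append(M); f_s.append(k * M)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

print("f-D correlation:", corr(f_s, D_s))  # near -1: f reproduces the disturbance
print("o-D correlation:", corr(o_s, D_s))  # weaker: o alone need not mirror D
```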

As Martin noted, a full two-level model can easily accommodate your result
(output sign reversal). The reversal of the sign of the lower-level system's
feedback function constitutes a disturbance to the higher-level system,
which that system corrects by reversing the sign of the lower-level system's
output function. The information in the disturbance to the higher-level
system is thus transmitted, inverted as usual, to its output function.

So there! (;->

Bruce

[From Bruce Abbott (2012.12.08.2000 EST)]

Martin Taylor 2012.12.08.11.32 –

MT: Let’s see if we can take a different tack to make it more intuitively easy to see how information from the disturbance gets to the output even though the input quantity – the controlled environmental variable – is influenced by both the disturbance and the output of the control system. I’ll borrow from the usual way differential calculus is introduced, by treating a smooth waveform as though it was created by making a number of step changes, and reducing the time between changes so they become indefinitely small. But first, I want to illustrate how control can be seen as measurement.

. . .

MT: So long as the disturbance variation produces less than T bits/sec, control is possible, but if the disturbance variation is greater than T bits/sec, control is lost. The spike in the input value at a step change in the disturbance is an example of that, where control is completely lost at the moment of the step, and gradually regained in the period when the disturbance remains steady, not generating uncertainty.

Wonderfully clear exposition, Martin. Thanks for taking the time.

Bruce

[From Bill Powers (2012.12.09.1740 MST)]

[From Rick Marken (2012.12.09.1130)]

RM: Attached is one particularly nice run of a compensatory tracking
task that I am developing (based on the one at
Nature of Control). The
difference between this task and the typical compensatory tracking
task (like the one on the net) is that the feedback function changes
periodically throughout the task.

...

  I don't see how information theory can account for the results shown
here, where there is control without any evidence that there was any
information about the disturbance.

BP: You're working under the same mistaken idea I was using -- that Martin and Bruce are claiming that the control system uses information in the output about the disturbance to achieve control. They are not. Martin agreed several days ago that the information in the output is calculated by the observer, and is not used by the control system. It is merely a description of a relationship between disturbance and output cast in terms of variables appropriate to information theory. We use a different equation to describe that relationship in terms of magnitudes of system variables and functions expressed in physical units: o = Fd(d) - Ff(o), approximately.

It is true that some cyberneticists speak as if information is used by the control system to achieve control. They make the same mistake we have been making. Our argument with them would be the same one Martin and Bruce would make.

A negative feedback control system doesn't need to know anything about the disturbing variable, because all it needs to know is the behavior of the controlled variable. If that variable departs from the reference level for any reason at all, and the reason does not include changes that make the control system stop working, the control system will experience an error which the output function converts into a rate of change of the output in the direction that will cause the error to decrease. The control systems we describe do no calculations using information as a variable.

By the way, speaking of rate of change, in sifting through all the ideas that have been floating around during this discussion, I realized that no real control system can be a proportional controller if the loop gain is more negative than -1. Proportional control uses no functions with time as a variable in them. The output from a function changes as the input changes, with no delay at all. The result is that if you try to run a simulation of a proportional control system using our current programming methods, you will get a runaway oscillator for any loop gain large enough to accomplish some useful degree of control.
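A minimal discrete sketch of the runaway described above (illustrative numbers; the one-step update delay plays the role of the unavoidable loop delay in any simulation):

```python
def simulate(G, steps=20, d=1.0):
    """Proportional controller with a one-step loop delay: o[t+1] = -G * (o[t] + d)."""
    o = 0.0
    history = []
    for _ in range(steps):
        qi = o + d        # input quantity = output effect + disturbance
        o = -G * qi       # proportional output, recomputed once per step
        history.append(o)
    return history

stable = simulate(0.5)    # |loop gain| < 1: settles near -d*G/(1+G) = -1/3
runaway = simulate(2.0)   # |loop gain| > 1: alternating, growing oscillation
print(stable[-1], runaway[-1])
```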

This is another way of saying that no physical variable can change from one value to another in zero time. It must always pass through all the intermediate values. Even in a digital computer where apparently instantaneous switching can seem to happen, close examination shows that the voltages or currents involved take time to change, even though the time can be quite short -- a nanosecond or less in today's computers. The only actual quantization occurs in units of one electronic charge, and even that is smoothed out as the electron approaches the end of a wire and its field begins to be detected by whatever the wire is connected to.

This is all based on ignoring quantum mechanics, which I am still inclined to do.

Best,

Bill P.

[From Rick Marken (2012.12.09.1750)]

Martin Taylor (2012.12.09.17.22)--

MT: It's easy to disprove theories you imagine. Less easy to disprove ones that
are out there to be learned.

RM: I was just trying to show that there could be control with no
correlation between disturbance and output. I thought that the
typically observed correlation between output and disturbance was the
evidence that information about the disturbance was transmitted to the
output by the control system, making control possible (so that the
output compensates for the disturbance).

MT: Look, the point about your experiment is that there are two separate
perceptions to be controlled. One is a perception that can be translated as
"which way the other control system's environmental feedback path is
switched". The disturbance to that perception is a step function, and its
information rate is, you say, 5 to 7 bits over the course of a run. The
second is the tracking perception whose environmental feedback path is
switched. The two disturbances are independent of each other, so their
combination provides 5 to 7 more bits of uncertainty to be partitioned
between the two outputs than is provided by the tracking disturbance in
itself.

RM: OK, I see your point. It was not a perfect demonstration. I used
the change in polarity in order to get the disturbance-output
correlation to zero. I wouldn't call the polarity of the feedback
connection a disturbance, but I see your point; you certainly can
perceive the changed effect of the mouse on the cursor when the
polarity changes. But you don't always see the runaway; it did happen
sometimes, but many of the changes were smooth (and are pretty much
invisible in the trace). I got pretty good at sensing whether I
was having the desired effect on the cursor and didn't need to see the
cursor "run away" at all. But it was something I could perceive -- as
a relationship between the perception of my output efforts and the
effect on the cursor.

But before I go try to develop another demo, perhaps you could just
tell me what the evidence is that a control system uses information
about the disturbance to produce the outputs that counter the
disturbance. Or whatever it is you believe (I've just seen Bill's
[Bill Powers (2012.12.09.1740 MST)] recent post and now I guess I
have no idea what you guys believe). So are there any observations I
could make that would allow me to demonstrate to myself whatever it is
you believe is true about the application of information theory to
control?

Clearly this is way too complex for my little brain. So try to make it
as simple as possible. What can I observe in a simple control task,
such as the compensatory tracking task, that will convince
simple-minded me that information about the disturbance (and/or
feedback function and whatever else) is relevant to control?

Thanks

Best

Rick

···

  But I guess there's a new rule now and I didn't get the memo.

No, but you could read the Shannon book, in which it is written (it's only
63 years old, so I guess you could call it a new rule).

  The rule is that there is information about the disturbance no
matter what. There just is, see.

Only to the degree that control is good.

MT: In an earlier study you did with Bill on this kind of switching, you used a second control system to change the sense of the output. You should try to make a single one-level control system model without reorganization to fit the results of this experiment, and then analyze what happens in it. I don't believe you could do that, since the control system only has one perception, which is the magnitude of its input quantity, and one thing it can alter to influence that perception, which is the magnitude of its output quantity. It has no mechanism for altering its own output function to change the sense of the output.

RM: You're absolutely right; modeling this task would require two
control systems (at least). But that's PCT, isn't it? Not information
theory.

The fact that you need a second control system has nothing to do with
information theory. The information-theoretic analysis of the case does.
Your point seemed to be that you could solve the situation using only a
single control system. If that were true, then you might have a more
convincing argument, because you would have demonstrated control using a
single control system with a single-valued perception in a situation where
not only is there no one-to-one relationship between output and the effect
on the input quantity, but the same output quantity could produce equal and
opposite influences on the input quantity.

Informationally, that would mean that the mutual information between the
output value and the output influence on the input quantity would be low and
yet control would be good. But, as you agree, it is not possible to produce
such a situation.

In the real world, related situations do occur, a lot. The environmental
feedback path is seldom stable for most perceptions, and that fact is at the
heart of what Kent was trying to get at in the "Analysing feedback paths"
thread he started.

I find it instructive to observe how easily you move from failed
demonstration to satirical comment when defending against disturbances to a
tightly controlled belief. I would have thought that as a scientist you
might have been more inclined to find either rational arguments or effective
demonstrations that 2+2 is not 4.

Martin

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2012.12.09.2000)]

Bill Powers (2012.12.09.1740 MST)--

RM: I don't see how information theory can account for the results shown
here, where there is control without any evidence that there was any
information about the disturbance.

BP: You're working under the same mistaken idea I was using -- that Martin
and Bruce are claiming that the control system uses information in the
output about the disturbance to achieve control.

RM: What? I thought they were claiming that the control system uses
information in the INPUT about the disturbance to achieve control and
that the high correlation between disturbance and output (which is
very similar to H(Y|X) the information theory measure of information
transmitted, where Y is output and X is disturbance in this case) is
evidence that information about the disturbance is transmitted to the
output by the control system. My recent demonstration shows (with the
caveat about the information about the feedback function) that there
can still be control even when there is no information about the
disturbance transmitted to the output (no correlation between
disturbance and output).

BP: A negative feedback control system doesn't need to know anything about the
disturbing variable, because all it needs to know is the behavior of the
controlled variable.

RM: Yes, that's what I've been saying but that's not what Bruce and
Martin are saying. In fact, Martin just posted a long and elaborate
description of how a control system extracts information about the
disturbance from variations in the input variable. And I have
demonstrated over and over again that there is absolutely no
information about the disturbance in the controlled input to a control
system.

So, as you say, this "information about the disturbance" dragon is
never going to be slain. But it's fun to try;-)

Oh, and I just thought of a way to deal with Martin's objection to my
recent demonstration where he notes that there is information about
the change in polarity: multiple regression!

I'll be baak;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.09.23.49]

   Martin just posted a long and elaborate
description of how a control system extracts information about the
disturbance from variations in the input variable. And I have
demonstrated over and over again that there is absolutely no
information about the disturbance in the controlled input to a control
system.

It looks as though you intended the two sentences quoted above to contradict each other, though my "long and elaborate description" argued that they are actually mutually supportive, as you would know if you read the message in question. One pretty well follows from the other. If you have actually read the message, you know that in addition to showing how, and how rapidly, information gets from the disturbance to the output, it also shows why there is nearly (not "absolutely") no information about the disturbance waveform in the waveform of the input variable of a system that is accurately controlling its perception.

Incidentally, having carefully read (;-) my "long and elaborate description", did you find the relationship between control and measurement with which it starts to be interesting or (heaven forbid) conceptually useful?

Oh, and I just thought of a way to deal with Martin's objection to my
recent demonstration where he notes that there is information about
the change in polarity: multiple regression!

Beware, there is a pretty tight mapping between analyses based on variance or correlation and analyses based on information. Before you say again that I am changing to new rules, I should point out that this was pretty well known half a century ago. For the mapping from information analysis to analysis of variance, see W.R.Garner and W.J.McGill, The relation between information and variance analyses, Psychometrika, 21 #3, September 1956. I don't know how you plan to use multiple regression, but I thought you should be reminded of this relationship before you do whatever you might be intending.
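For jointly Gaussian variables the mapping has a familiar closed form: the mutual information between two Gaussian variables with Pearson correlation r is -1/2 * log2(1 - r^2) bits. A sketch (this is the standard bivariate-Gaussian formula, stated here for illustration rather than taken from the Garner-McGill article):

```python
import math

def gaussian_mi_bits(r):
    """Mutual information (bits) between two jointly Gaussian variables
    with Pearson correlation r (requires |r| < 1)."""
    return -0.5 * math.log2(1.0 - r * r)

low = gaussian_mi_bits(0.0)    # uncorrelated: 0 bits shared
high = gaussian_mi_bits(0.99)  # tight correlation: about 2.8 bits
```

So, in the Gaussian case, a disturbance-output correlation near zero corresponds to a near-zero information measure, which is why the two styles of analysis tend to rise and fall together.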

Actually, that article might also be useful for you if you want to know how information from multiple separated sources can be analysed together. I can send you an OCR'd PDF scan if you can't find the article in a local library.

Martin

[From Rick Marken (2012.12.10.0845)]

Martin Taylor (2012.12.09.23.49)--

RM: Martin just posted a long and elaborate
description of how a control system extracts information about the
disturbance from variations in the input variable. And I have
demonstrated over and over again that there is absolutely no
information about the disturbance in the controlled input to a control
system.

MT: It looks as though you intended the two sentences quoted above to be in
contradiction with each other, though my "long and elaborate description"
argued that they are actually mutually supportive, as you know if you read
the message in question.

RM: I did read the message, at least up to the point where you said:

qo(t) = -d0*(1-e^(-G*(t-t0))).

Pretty fancy equation but if it's meant to describe the behavior of
variables in a control loop then it's nonsense because output
variations, qo(t), are not a function of disturbance variations, d, in
a control system. Once again, you have left out the controlled
variable.
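For what it's worth, an expression of that form does arise when the controlled variable is kept in the loop: if the output function is an integrator, d(qo)/dt = G*(r - qi) with qi = d + qo, then after a step disturbance of size d0 at t0 (with r = 0) the closed-loop solution is qo(t) = -d0*(1 - e^(-G*(t-t0))). A numerical sketch under those assumptions (the values of G, d0, and the step sizes are illustrative):

```python
import math

def step_response(G=5.0, d0=1.0, dt=0.001, T=2.0):
    """Integrating controller d(qo)/dt = G*(r - qi), qi = d0 + qo, r = 0,
    stepped by Euler's method and compared against the analytic solution."""
    qo = 0.0
    max_dev = 0.0
    for n in range(int(T / dt)):
        qi = d0 + qo
        qo += G * (0.0 - qi) * dt                  # Euler step of the integrator
        t = (n + 1) * dt
        analytic = -d0 * (1.0 - math.exp(-G * t))  # -d0*(1 - e^(-G*t))
        max_dev = max(max_dev, abs(qo - analytic))
    return qo, max_dev

qo_final, dev = step_response()  # qo ends near -d0; dev stays small
```

The simulated output tracks the exponential to within the Euler discretization error, so the formula and the loop equations are not in conflict; the disagreement here is over what the formula should be taken to mean.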

MT: One pretty well follows from the other. If you have
actually read the message you know that in addition to showing how and how
rapidly information gets from the disturbance to the output, it also shows
why there is nearly (not "absolutely") no information about the disturbance
waveform in the waveform of the input variable of a system that is
accurately controlling its perception.

RM: I re-read the article and could not find such a conclusion
anywhere. So you are saying that your analysis shows how a control
system extracts information about the disturbance from variations in
the input variable and that there is no information about the
disturbance in the controlled input to a control system. I would
really appreciate it if you could give me a nice, succinct review of
how you showed that that is the case.

MT: Incidentally, having carefully read (;-) my "long and elaborate description", did you find the relationship between control and measurement with which it starts to be interesting or (heaven forbid) conceptually useful?

RM: I'm afraid not. But Bruce A. did, so maybe it was worth the
obvious effort you must have put into it.

What I really would like to see is what you say you showed in your
analysis: how information gets from the disturbance to the output
and why there is nearly no information about the disturbance waveform
in the waveform of the input variable of a system that is accurately
controlling its perception. While you're at it you might also like to
prove that the Snark was a Boojum -- you see;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Richard Kennaway (2012.12.10 16:54 GMT)]

How would the participants in this discussion handle some questions about the following system?

  Controller: O = k(R-P)
  Environment: P = D + integral(O)

When the controller controls well and the reference is constant, then the perception will also be nearly constant. D and the integral of O will be equal and opposite.

However, this implies zero correlation between D and O. (Because of this theorem: a bounded variable and its first derivative have zero correlation.)
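That zero-correlation claim can be checked numerically. A sketch of the system above, integrated by Euler's method (the gain k, the sine-wave disturbance, and the run length are illustrative assumptions):

```python
import math

def run(k=50.0, dt=0.001, T=16 * math.pi, R=0.0):
    """Simulate O = k*(R - P), P = D + integral(O) with a slow disturbance."""
    I = 0.0                        # running integral of O
    Ds, Os, Is = [], [], []
    for n in range(int(T / dt)):
        t = n * dt
        D = math.sin(0.5 * t)      # slow disturbance: period ~12.6 s >> 1/k
        P = D + I
        O = k * (R - P)
        I += O * dt                # Euler integration of the output
        if t > 5.0:                # discard the start-up transient
            Ds.append(D); Os.append(O); Is.append(I)
    return Ds, Os, Is

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

Ds, Os, Is = run()
r_do = pearson(Ds, Os)   # near zero: D vs. O
r_di = pearson(Ds, Is)   # near -1: D vs. integral(O)
```

When control is good, integral(O) mirrors D almost exactly (correlation near -1), while O itself, being the derivative of that bounded mirror, is nearly uncorrelated with D.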

(1) What definition of mutual information are you using, and what answer does it give when applied to two variables related in this way?

(2) If D varies much faster than the controller can handle (i.e. on a timescale much smaller than 1/k), then P will closely track D. O always tracks P (when R is constant) and there will therefore be a high correlation between D and O. What is the measure of mutual information in this case?

(3) When R and D vary randomly and independently, both on timescales much longer than 1/k (so the controller controls well), it will be found that O correlates well with R and not at all with D, while the integral of O correlates well with D and not at all with R. What are the measures of mutual information among O, R, and D?

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bill Powers (2012.12.10.0805 MST)] --
Rick Marken (2012.12.09.2000) --

> Bill Powers (2012.12.09.1740 MST)--

> BP earlier: You're working under the same mistaken idea I was using -- that Martin
> and Bruce are claiming that the control system uses information in the
> output about the disturbance to achieve control.

RM: What? I thought they were claiming that the control system uses
information in the INPUT about the disturbance to achieve control and
that the high correlation between disturbance and output (which is
very similar to H(Y|X) the information theory measure of information
transmitted, where Y is output and X is disturbance in this case) is
evidence that information about the disturbance is transmitted to the
output by the control system. My recent demonstration shows (with the
caveat about the information about the feedback function) that there
can still be control even when there is no information about the
disturbance transmitted to the output (no correlation between
disturbance and output).

So far, Bruce (and as far as I remember, Martin) has spoken only about information in the output about the disturbance, though I would think this implies that the information must also be in the input quantity since there is no other path from disturbance to output. If the information is in the input quantity it must be very well disguised. I seem to remember from way back that it was input information being discussed at some point in the discussion. But I also remember Martin's warning that information is not a substance flowing around in a system, so that reasoning is probably flawed.

If the output mirrors the disturbance (with a constant reference signal), it does look reasonable to say that there is information about the disturbance in the output, though I'm not sure that is information with the common-sense meaning or with the technical theoretical meaning -- or if the information at the input is retrievable by an observer.

In Martin's reply to your remark above which I just looked at, he says "If you have actually read the message you know that in addition to showing how and how rapidly information gets from the disturbance to the output, it also shows why there is nearly (not "absolutely") no information about the disturbance waveform in the waveform of the input variable of a system that is accurately controlling its perception." I think they have special courses at Cambridge or Oxford or whatever backwater college it was in how to sneer at people when it amuses one to make them feel like, or at least look like, dumbbells.

There he seems to be treating information as a substance flowing through the system as it "gets from the disturbance to the output," but I'm sure that is OK when he is the one saying it.

The impression he gives is that the information IS there, and is thus somehow used by the control system to do its controlling -- or that controlling depends on the control system's somehow knowing about that information. My current impression, helped by Bruce A. and Martin himself, is that it is only the observer who calculates the information and uses it to analyze the control system. That fits with Martin's agreement that there is very little information (but still some) about the disturbance to be found in the input quantity or perceptual signal. If the control system needed to use that information, could the message somehow reappear, restored to accuracy, in the output of the system? It's hard to imagine what less-than-perfect information would look like if it is still identifiable as being about the disturbance, but you and I are too ignorant to understand hard problems like that.

I suspect that it's only the observer who knows these things, and that knowledge about the information is not used in the control system's operation (as we have suspected all along).

So, as you say, this "information about the disturbance" dragon is
never going to be slain. But it's fun to try;-)

Oh, and I just thought of a way to deal with Martin's objection to my
recent demonstration where he notes that there is information about
the change in polarity: multiple regression!

I'll be baak;-)

I don't think we've been talking about the same thing throughout this discussion, so don't waste your energy. Oh, I see Martin has rejected your multiple regression idea and has helpfully included some references which you probably don't, and I surely don't, have convenient access to.

Best,

Bill P.

[Martin Taylor 2012.12.10.16.48]

[From Bill Powers (2012.12.10.0805 MST)] --
Rick Marken (2012.12.09.2000) --

Oh, and I just thought of a way to deal with Martin's objection to my
recent demonstration where he notes that there is information about
the change in polarity: multiple regression!

I'll be baak;-)

I don't think we've been talking about the same thing throughout this
discussion, so don't waste your energy. Oh, I see Martin has rejected
your multiple regression idea and has helpfully included some
references which you probably don't, and I surely don't, have
convenient access to.

How could I reject his idea when I don't know what it is? I merely
suggested that Rick at least read an old article that I offered to send
him if he wanted, since multiple regression is a form of variance
analysis and the article shows the mapping between variance analysis and
information analysis. The thought was that if he planned to use multiple
regression to argue something that would not be possible to analyze by
information theory, he would probably be barking up the wrong tree. To
be sure he is not, he should be sure he knows what he is talking about
before he talks. So far in this thread he has not shown that he does.

Just to make it easier, I attach the article I offered to send him. The
OCR quality isn't fantastic, but you can read the scanned image. Also,
you refer to "references" plural. I assume that the plural refers to
Shannon's book, for which I gave an Amazon URL where it can be purchased
if your local university library doesn't have it (which is most unlikely).

Martin

Garner_McGill.pdf (2.09 MB)

[From Rick Marken (2012.12.10.1630)]

Bill Powers (2012.12.10.0805 MST) --

BP: So far, Bruce (and as far as I remember, Martin) has spoken only about
information in the output about the disturbance

RM: Right, and if an observed correlation between output and
disturbance is an indication that there is information in the output
about the disturbance then my little demo creates a situation where
there is no information in the output about the disturbance.

BP: My current impression, helped by Bruce A and Martin himself is
that it is only the observer who calculates the information and uses it to
analyze the control system.

RM: That's fine. Then my little demo shows that control can happen
when there is no information about the disturbance observed in the
output of the control system.

RM: Oh, and I just thought of a way to deal with Martin's objection to my
recent demonstration where he notes that there is information about
the change in polarity: multiple regression!

BP: I don't think we've been talking bout the same thing throughout this
discussion, so don't waste your energy.

RM: Not much energy required. I was just going to look at both the
disturbance and the feedback function as predictors of the output
variable. I've actually already done several runs where I have
calculated the necessary correlations, and there is really no need to
do a regression analysis; it turns out that on many runs the feedback
function and the disturbance variable both have a low correlation with
the output (low meaning ~0.0), and the correlation between disturbance
and feedback function is also close to zero. So solving the
regression matrix in my head I get a regression equation that looks
something like this:

output = 0.0 * disturbance + 0.0 * feedback + constant.

In other words the output contains no information about what the
disturbance or feedback function was during the 30 sec control task.
So whatever we've been talking about in this discussion -- whether
information is only something an observer calculates or whether it is
something the system uses to produce outputs -- my little
demonstration proves that it has nothing to do with control.
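The regression matrix solved "in my head" above can be written out explicitly. With standardized variables, the two-predictor weights follow directly from the three pairwise correlations; a sketch (the near-zero correlation values are made-up placeholders for the runs described):

```python
def std_betas(r_yd, r_yf, r_df):
    """Standardized regression weights for output ~ disturbance + feedback,
    given correlations r_yd (output-disturbance), r_yf (output-feedback),
    and r_df (disturbance-feedback)."""
    denom = 1.0 - r_df ** 2
    beta_d = (r_yd - r_df * r_yf) / denom
    beta_f = (r_yf - r_df * r_yd) / denom
    return beta_d, beta_f

# With all three correlations near zero, both weights come out near zero,
# i.e. output = 0.0 * disturbance + 0.0 * feedback + constant.
b_d, b_f = std_betas(0.02, -0.01, 0.03)
```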

Best regards

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com