Ashby's Law of Requisite Variety

I’m sorry to jump in so late. I must admit that I didn’t read everything closely, so maybe I misunderstood something. But I really couldn’t pass over some statements, which were quite clear to me.

BP:

I know you admired Ashby’s work in the past. For a long time, so did I, until I worked out my own understandings of control processes and saw all his mistakes. But we can’t ignore those after we know about them, and I long ago saw Ashby as reduced to the status of a highly intelligent and very clever dilettante. It’s still easy to find things in his writings to admire, but he spread more poison than fertilizer and it is still contaminating people.

HB:

I think this is uncalled for toward your long-admired teacher. If you admired him for such a long time (as far as I could conclude, at least until 1998), then I think this statement is beneath your level.

If you needed a long time to work out “your own understanding” of control processes before you could see all of Ashby’s mistakes, then I think your knowledge was built on his knowledge, with improvements (the mistakes eliminated). That’s how I understand it. So his work was the basis of your “working out your own theory.” I conclude also that he was, in some sense, your teacher. And I think he was a good teacher, because the student exceeded his knowledge.

For me he was a pioneer in researching the “design of a brain” on the basis of ultrastability with feedback, and I think we should respect him for that superb vision, which came quite a long time ago (1952). It’s easy to look at work from 60 years ago through the eyes of progress and say, “well, how stupid those dilettantes were back then.”

Can we really look at history with such criticism? Can we really look at the history that laid some of the “supporting stones” in the development of human knowledge and call it something that “spread more poison than fertilizer and is still contaminating people”? Especially when the critic himself was “poisoned and contaminated” by that same knowledge for a long time.

I see Ashby rather as somebody who probably “opened” your eyes, and ours, since you were “blind” before, if I understood you right.

BP (23.12.2010):

Something is coming together that is making sense of some ideas I
have resisted for a long time. It has to do with the brain’s models
of the external world. From the way I have seen those models proposed
by others such as Ashby and Modern Control Theory adherents, I have
thought they were simply impractical, calling for far too much
knowledge, computing power, and precision of action – as indeed they
are and they do, as they have been presented.

But those ideas may nevertheless be right. Some of those other blind
men standing around the elephant are perhaps only a little
nearsighted, and are seeing something going on that looks fuzzily
like modeling, but there’s something funny about it so it isn’t quite
how it seems from this angle or that. This particular blind or
nearsighted man writing these sentences has not seen models; he has
seen a hierarchy of perceptions that somehow represents an external
world, and a large collection of Complex Environmental Variables (as
Martin Taylor calls them) that is mirrored inside the brain in the
form of perceptions.

Briefly, then: what I call the hierarchy of perceptions is the model.
When you open your eyes and look around, what you see – and feel,
smell, hear, and taste – is the model. In fact we never experience
ANYTHING BUT the model. The model is composed of perceptions of all
kinds from intensities on up.

HB:

So I think it’s not right for you to try to diminish Ashby’s contribution to understanding “how the brain works adaptively”; as you can see, you are still discovering details of his “elephant vision.”

I keep repeating that your theory is very good and your idea very promising, but I also have in mind the whole range of the organization of control units, not ONLY the analysis of one control unit to infinity and in all possible “figures.” I think you have to move somewhere, as time is passing very quickly.

Carver and Scheier also describe your theory as only partly useful, and say that you didn’t do enough on the hierarchy (they are probably the only psychologists outside CSGnet who have seriously taken up your theory).

Maybe they didn’t understand you very well. Just as you maybe didn’t fully understand Ashby.

My opinion is that you will never be able to answer what life is, in which control systems play a remarkable part, unless you start to work seriously on the organization of control from “genetic source to system concepts.”

I tried a little to “play” with this hierarchy (since it seems nobody else wants to, as you and Rick mostly “play” with details inside one unit, or with a unit as a whole), and I discovered some interesting things.

One is that I couldn’t work with your principles alone in detail. I also tried Ashby’s “diagram of immediate effects” on its own, and it didn’t work either. Then I tried both theories together (all the principles I could find), and I must say I noticed some progress. So far it is even working.

I think that Ashby had a great vision, and he was a really very intelligent and smart man, not just a “very clever dilettante.” His central question, as I understood him, was:

“The origin of the nervous system’s unique ability to produce adaptive behavior,” or simply: how does the brain produce adaptive behavior?

It seems to me that you and Rick are trying to analyze something else. My impression is that you are researching the control unit in detail through two or three experiments (mostly the tracking experiment). It also seems to me that you are not progressing toward finding out “what life is,” or what the difference is between a live horse and a dead one, as Ashby used to put it. It looks to me as if you want to analyze control in detail while simply overlooking how control units are organized, which is the main problem of life.

I see Ashby’s work as pioneering work. If you think it wasn’t, I’d be grateful if you could show me who, in his time or earlier, wrote about the “design of the brain” on the basis of ultrastability. I am still wondering why he would have written a book at all if everything about the “questions of brain and life” was already so clear.

The concepts of organization, behavior, change of behavior, part-whole, dynamic system, coordination (!!!), equilibrium, local stabilities, etc., as I see it, are still “in the game,” because I don’t see how you have solved some of these problems in the “control processes” of the whole organism.

His whole vision of “eliminating intrinsic errors” is, in my opinion, still right at a high level of abstraction. Yours failed in some parts (the figure on page 191 of B:CP, 2005). Although you both used a “double feedback” (which I think is his idea) to eliminate “intrinsic error,” which in my opinion is a little misleading, I think his version is better at some generalized level.

But at the level of structural abstraction you are, in my opinion, absolutely dominant, with your innovation of organizing identical control units and your perfect analysis of a single control unit. But as I see it, you still haven’t answered how “intrinsic error” is eliminated. Your model (the picture on page 191 of B:CP, 2005) doesn’t show that. I think Ashby managed it, though with weaknesses in some parts, which you successfully improved. Whatever the case, I can work only with both theories. A single theory doesn’t offer me enough conceptual support for “building” the whole picture of how organisms function. Both together do.

So conversations that end in accusing and defending over who is S-O-R and who is not seem useless to me. Equally useless seems criticizing and trying to erase the historical knowledge of eminent people.

I don’t see the problem on the side of Martin and Bruce. But I must admit I wonder why they were “attacked” again. Martin T. helped me with his post, and I quite agree with him.

They are people who perhaps have somewhat different ideas, which could show another perspective on how the nervous system, and the organism as a whole, works in its environment. I think Bruce A. and Martin T. are serious people worthy of respect. I really admire both of them for their past contributions to PCT, and I am thankful to both for the quite long conversations I have had with them. I learned a lot.

In my opinion, Bill, you started the avalanche with your “own understanding of control processes,” and the main work has yet to be done. Why lose energy on endless discussions about whether some piece of knowledge should be included or not? We can discuss it, but with patience and mutual respect, from different perspectives and at different control levels in the organism.

As I noted before, my opinion is that you are the pioneer of the organization of control units, and I am sure the next generations will undoubtedly improve your ideas, just as Ashby’s ideas were improved by you and some others. I see Maturana as one of them.

But what you didn’t do is create the whole picture of how an organism works, with horizontal and hierarchical organization of control units. You are good at the details of the control unit, and at some of the organization of control units (I can easily take my hat off to that), but the main point is missing: how does all your knowledge work in a natural environment, interacting with organisms and their internal organization?

So I see Ashby as a pioneer in the field of the “control of aims”: goal-seeking behavior that minimizes displacement of essential variables (ultrastability), the “design for a brain,” and so on, just as I see you as a pioneer of a different perspective on the problem of the “control” of life. And I also like Martin T.’s perspective, and Bruce A.’s, and even Rick’s, when he is conversing normally.

But I also think that without Ashby’s work, who knows whether you would have come to your own understanding of control processes in organisms after your long admiration of his work, and whether we would have come to our understanding of control processes as we understand them today. I am thankful to both of you.

But I still think it is right to seek other ways as well, to find more and more possible perspectives that will enable more solutions to the problems PCT has. And Bruce and Martin, as I see it, tried to show that, at least to me. Maybe they are not satisfied with all of the “PCT truth,” and they try to seek further possibilities.

So I am thankful to all of you for a thoughtful debate, including Rick (within a certain range). But I would rather see you stay in the range of arguments, not “songs” and other attempts at degradation (control), as Rick did. Could this really be the end point of PCT conversations, proving who is S-O-R? Does such a conversation lead anywhere?

I like Erling Jorgensen’s interpretation of such a debate:

“My background is not mathematics. While I have a Ph.D. degree, I always
have to follow closely those who can translate the mathematical concepts
into more intuitive ways of understanding them. However, there are some
points, & indeed fallacies, that have only been lightly touched upon in
this discussion. And I need to see if my understanding matches that of
others.”

Why not give more consideration to all the possible understandings and perspectives of the problem you were talking about? I think this could then become a multicultural, multilingual, multi-conceptual approach, which could make everybody richer and happier, since they would understand what the conversation is about and could contribute. Isn’t that one of the goals of PCT? Isn’t that one of the possible ways to improve PCT knowledge and spread it among people?

Best,

Boris



----- Original Message -----

From:
Bill Powers

To: CSGNET@LISTSERV.ILLINOIS.EDU

Sent: Wednesday, December 05, 2012 7:03 PM

Subject: Re: Ashby’s Law of Requisite Variety

[From Bill Powers (2012.12.05.0950 MST)]

Bruce Abbott (2012.12.05.0740)

Bill Powers (2012.12.04.1723 MST) --

Bruce Abbott (2012.12.04.1955 EST)

BA previously: I don't find anything in Ashby's paper about behavior being noisy and
statistical. The argument is simply that noise in a transmission channel
(which could take the form of nonrandom variation such as that induced by
channel cross-talk, by the way) is logically equivalent to
disturbance-induced variation in a controlled variable. Thus, an isomorphism
exists between Ashby's Law of Requisite Variety and Shannon's Theorem 10.

BP: But in a noise-free channel, that theorem is irrelevant, isn't it?
BA: You’re missing the point. Perhaps another example will clarify. Assume that you have a control system with a continuously varying reference signal. In addition, a continuously varying disturbance is acting on the controlled variable. The perceptual signal emerging from the input function varies as a function of both. It represents the desired values of the perceptual signal, plus “noise” added to that signal by the disturbance. That’s the “noise” we’re talking about. We’re not talking about any other source of noise in the system.
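Bruce’s scenario can be sketched numerically. The following is a minimal, hypothetical simulation (not from the thread, and with made-up parameters): a loop with an integrating output, a continuously varying reference, and a continuously varying disturbance, where the “noise” left in the perceptual signal is the small residual induced by the disturbance.

```python
import math

def simulate(gain=50.0, dt=0.01, steps=2000):
    """Minimal control loop with varying reference r and disturbance d.

    The controlled variable is output + disturbance; the perceptual
    signal p reports it; the output integrates gain * error.
    """
    o = 0.0
    residuals = []
    for k in range(steps):
        t = k * dt
        r = math.sin(0.5 * t)           # continuously varying reference
        d = 0.5 * math.sin(1.3 * t)     # continuously varying disturbance
        p = o + d                       # perceptual signal
        e = r - p                       # error signal
        o += gain * e * dt              # integrating output function
        residuals.append(abs(p - r))
    # average residual after the loop has settled
    return sum(residuals[steps // 2:]) / (steps - steps // 2)

avg_residual = simulate()
# p tracks r closely; the small variation around r is the
# disturbance-induced "noise" described above, and it shrinks as
# loop gain rises.
```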

BP: OK, I see how this applies to the first part of the paper. You’ll notice that the elements of Ashby’s analysis are not continuous variables, but “events” which either occur or don’t occur. This is on page 2:

[image: excerpt from page 2 of Ashby’s paper]

This is the traditional way of defining probabilities, information, uncertainty, and so on. The variables are occurrences which either happen or don’t happen. If you count all the things that might happen, perhaps with accompanying relative probabilities, you can use that number to compute the probability that a particular one will happen. But nothing ever happens partway.

The main effect of using this discrete-variable approach is to bypass the problem of quantitative accuracy. Ashby does this a lot, by giving examples in which the variables have values that are small integers or just logical states. If the reference signal is 1 and the perceptual signal is 2, the error, r - p, is -1, which in one form of controller should lead to an output effect of -1 that will reduce the perceptual signal to 1 and the error to 0, giving perfect control. Of course no physical system works that way. The example on page 3 is set up and discussed that way, with the outcomes being rated either Good or Bad.
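Bill’s small-integer example can be written out directly; this is only a sketch of the arithmetic, not a model of any physical controller (as the text itself notes, no physical system works this way).

```python
# Ashby-style discrete example: small integers, one-step correction.
r = 1                 # reference signal
p = 2                 # perceptual signal
e = r - p             # error r - p = -1
o = e                 # unit-gain controller: output effect equals error
p = p + o             # the output effect drives the perception down by 1
assert p == 1 and r - p == 0   # "perfect" control in a single step
```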

BA: In a control system, we have access to the reference signal. We can subtract the perceptual signal from the reference signal; what is left is the disturbance waveform, otherwise known as the error signal. The error signal drives the output, which negatively feeds back onto the controlled variable to oppose the effect of the disturbance on the controlled variable.

If the output of the system perfectly cancelled the effect of the disturbance, the error signal would never vary from zero. Consequently, there would be no variation in the output to cancel out variation in the CV due to the disturbance. Conclusion: perfect control in such a system is impossible.

BP: This is the conclusion you reach when thinking in terms of discrete variables or small whole numbers. As you pointed out in yesterday’s post, however, the concept of loop gain is missing from this approach. When you take loop gain into account you find that you can get the error as small as you please. Also, only simple proportional control is considered. If the output function is an integrator, the output variable becomes a time series which converges toward a final value of zero error. How rapidly it converges depends on the gain of the integrator. Strictly speaking, control is still not perfect until you’ve waited an infinite time. But if the first iteration corrects 90% of the error, the second reduces the error to 1% of its initial value, and so on, and at some point the error becomes smaller than the system’s own internal noise level.
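The geometric convergence described here can be sketched as follows, assuming (hypothetically) that a fixed fraction of the remaining error is corrected on each iteration of the integrating output.

```python
def error_sequence(fraction_corrected=0.9, e0=1.0, n=10):
    """Errors remaining after each iteration of an integrating output
    that removes a fixed fraction of the remaining error per step."""
    errors = [e0]
    for _ in range(n):
        errors.append(errors[-1] * (1.0 - fraction_corrected))
    return errors

errs = error_sequence()
# With 90% corrected per step the decay is geometric: 10% of the error
# remains after one step, 1% after two, and so on. The error never
# reaches exactly zero, but it quickly falls below any real system's
# internal noise level.
```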

BA: In information theory, the channel that conducts the error signal to the output function conveys information about the disturbance. In a perfect control system, that channel is blocked with respect to the flow of information because the error signal never varies.

BP: That is an artifact of the method of analysis, and requires ignoring variations in the reference signal. When you neglect all real sources of disturbance in the controller itself, and measure continuous variables as small integers, and assume perfect ability to convert signals into other signals or physical effects of the correct magnitude, of course you come out with perfect control using Ashby’s model, and the problem of zero error (not) producing the right output in the closed-loop model. But Ashby’s model applies only in an imaginary idealized environment, and for control systems built of imaginary perfect materials, with computers of infinite speed and precision inside them.

In the real control system, you can solve the equations to get the steady-state value of the variables, and you find that the system ends up with a specific amount of error which can be reduced to the limits of our devices to detect errors by using high enough loop gain and taking pains to stabilize the loop (a problem never mentioned by Ashby).
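The point about solving the steady-state equations can be illustrated for the simplest case, a purely proportional loop (a hypothetical textbook sketch, not any specific model from the thread): with p = o + d and o = G(r - p), algebra gives p = (G·r + d)/(1 + G) and e = (r - d)/(1 + G), so the residual error shrinks as the loop gain G rises but never reaches zero.

```python
def steady_state(r, d, G):
    """Closed-form steady state of a proportional control loop:
        p = o + d,   o = G * (r - p)
    =>  p = (G*r + d) / (1 + G),   e = r - p = (r - d) / (1 + G)
    """
    p = (G * r + d) / (1.0 + G)
    return p, r - p

p_lo, e_lo = steady_state(r=1.0, d=0.5, G=10.0)     # residual error 0.5/11
p_hi, e_hi = steady_state(r=1.0, d=0.5, G=1000.0)   # residual error 0.5/1001
# Raising loop gain leaves a smaller but still nonzero error, reducible
# to the limits of our instruments, never exactly zero.
```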

BA: Information theory simply offers another way to analyze the operation of a control system. It’s just another tool. Whether it’s a useful tool in this context depends on the purposes of the investigator.

BP: I very strongly dispute those claims in relation to control systems. It is a blunt tool that works only with imaginary systems in an imaginary environment. It comes up with very clear conclusions, which is a strong reason to reject it because the conclusions are wrong for any real control system.

I know you admired Ashby’s work in the past. For a long time, so did I, until I worked out my own understandings of control processes and saw all his mistakes. But we can’t ignore those after we know about them, and I long ago saw Ashby as reduced to the status of a highly intelligent and very clever dilettante. It’s still easy to find things in his writings to admire, but he spread more poison than fertilizer and it is still contaminating people.

His generalizations and “laws” are banalities or trivial truths when translated into simpler engineering language. Of course the output of a control system has to be able to vary enough, and rapidly enough and in the right ways, to counteract the effects of the disturbances that are most likely to happen. Who on Earth would ever have thought otherwise? And who would ever have the nerve to dress that quite obvious fact up as a law with a fancy name?

There have been many people like Ashby in cybernetics and allied occupations who are too smart for their own good. They are used to coming up with their own explanations of things like control systems, and are so confident that they never actually learn how others solved the same problems. They reinvent wheels and often miss the obvious simplifications that are possible. A lot of them simply love complexity, instead of being made suspicious by it that they have gone down an unnecessarily twisty path.

Best,

Bill P.

Re: Ashby’s Law of Requisite Variety
[From Erling Jorgensen (2012.12.20.2130 EST)]

As I have followed & participated in this discussion of the relation between
Information Theory & Perceptual Control Theory, I have often felt as
though everyone else was in the same room playing air hockey in real time,
while I was left to play long-distance chess (with a sizeable transport lag!).

The names of the relevant threads have drifted around a bit, but the bulk
of the discussion has taken place in the thread referring to Ashby’s law…

Martin Taylor 2012.12.14.10.05
Drawing from:
Fwd: Re: Information about (was Ashby’s Law of Requisite Variety)

Erling Jorgensen (2012.12.14.0830 EST)

Another question I tried to address, or at least ask, was uncertainty
about what?

[previously]
EJ: The technical understanding of information here means reduction
in uncertainty. So isn’t the error signal conveying information (i.e.,
reducing uncertainty) about the perception, not the disturbance, by
saying the perception is not yet equal to the reference?

Short answer: Yes, the error signal conveys information about the
reference and the perception; the perception carries information about
the disturbance and the output; the output carries information about the
error and the history of the error (assuming some kind of integration in
the output function). The error signal therefore does carry information
about the disturbance (and the reference and the output, as well as its
own history).
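The “short answer” above can be checked in the simplest case. In a purely proportional, memoryless loop (an illustrative sketch with made-up signals), the loop equations reduce algebraically to e = (r - d)/(1 + G), so with a fixed reference the error signal is a scaled, inverted copy of the disturbance: it carries full information about d even at tiny amplitude.

```python
import math

def loop_error_trace(G=20.0, r=1.0, steps=500):
    """Memoryless proportional loop solved at each tick:
    e = (r - d) / (1 + G);  o = G * e;  p = o + d."""
    ds, es = [], []
    for k in range(steps):
        d = math.sin(0.07 * k) + 0.3 * math.sin(0.23 * k)  # disturbance
        e = (r - d) / (1.0 + G)
        ds.append(d)
        es.append(e)
    return ds, es

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

ds, es = loop_error_trace()
# corr(ds, es) is -1 here (to machine precision): the error is an
# attenuated mirror of the disturbance, "carrying" information about it.
```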

There is an image that has been kicking around in my head for a couple
of days. I believe it may be a way to understand the notion of information
being “carried” (as opposed to “used”) within the control loop.

Martin Taylor 2012.12.15.20.33
The only facts
relevant to a control system analysis are the functions in the loop and
the waveforms of the two inputs (reference and disturbance) together
with any other inputs that a more refined model might add, such as
system noise or gain controls.

This is the picture we need to start with. The canonical PCT analysis
consists of a loop with just two inputs: one from ‘above’, the reference
signal, & one from ‘below’, the disturbance signal. In my head, I’m
thinking of a little vortex or galaxy symbol, with two galactic arms
extending above & below, respectively.

The PCT equations quantify what is happening within this little vortex,
with just the two “independent variables,” reference & disturbance, from
outside the loop.

From an information theoretic standpoint, there is a flow of Uncertainty
going around this loop. If the uncertainty is reduced at any points, we
are justified in saying Information has been gained, in a technical sense;
otherwise, it remains uncertain, or as Entropy, to use the alternate term
from Information Theory.

I think the metaphor of a whirlpool captures this uncertainty situation
quite well. The individual water molecules within a whirlpool came from
somewhere, adjoining currents & eddies & the like. But once within the
whirlpool itself, it is impossible to distinguish them as to original location.
I think the same can be said for any information carried within a control
loop. It is indistinguishable in & of itself, because essentially it is not a
substance at all. It is a statement of the relationship between two
measurements – “before” & “after” – & whether uncertainty has thereby
been reduced.

So the question to ask is what uncertainty gets reduced by means of a
negative feedback control loop. Here I think the PCT answer is very
clear: The value of the perception becomes more & more certain, as it
is made to track the value of the reference. To take some liberties with
Shannon’s communication language, the perceptual signal ‘gets the
message’ about what value the reference signal intended it to take.

The structure of the control loop, with its transport functions &
parameters, is set up to bring about that result. In the process, there
is a side effect that is produced. The output as it is modified by the
environmental feedback function (EFF) ends up approximating the inverse
of the net disturbance. In that sense, it too becomes more & more
certain.

So to paraphrase – out of the changing values of a control loop & this
whirlpool of uncertainty, a message of sorts (“Follow me,”) is sent from
the reference signal to the perceptual signal. As it is received, the
perceptual signal becomes increasingly precise & more like the reference
signal. So its uncertainty has been reduced. The inchoate & entropic
whirlpool has yielded a gain in information, at the point of the perceptual
signal.

Thus, the unknown (to the control loop) & unpredicted variation from the
reference signal accumulates in the perceptual signal. What is left in the
whirlpool, so to speak, is the remaining uncertainty deriving from the
disturbance function, & this accumulates in inverse form in the net output
(& by this I mean, the output as amplified or modified by the EFF.) So
there is a secondary reduction in uncertaintay, which means a gain in
information, at the point of that net output, as it more & more precisely
counteracts the net effect of any disturbances.

It is important to note that these statements about information gain are
outcomes of the operation of the control loop. The loop itself is not
“using” information in the sense of calculating probabilities among the
possible states of the values swirling within it. The loop is arriving at a
more certain state for its perceptual signal, & in the process arriving at
a more certain state for its net output.

A couple of clarifications are in order. The term disturbance here is meant
to refer to the net effect on the controlled environmental variable, from
which the perceptual signal is derived. The control loop knows nothing
about the source of any disturbances, just as a comparator doesn’t know
what the reference signal means in terms of the output of any higher level
control loops.

In the same way, we ought to start talking about a net output after it has
been modified by an environmental feedback function. It is only at that
point of impact, where it is rolled into the controlled environmental
variable, that it truly serves as the inverse to the net disturbance.

I hope this way of thinking about information & uncertainty reduction
in the operation of control loops resonates for other people. I would be
interested in hearing whether that is the case or not, so I can distinguish
between a privatized versus shared corpus of understandings here.

All the best,
Erling


[From Bruce Abbott (2012.12.21.1055 EST)]

Well, the world hasn’t ended yet, so I suppose there’s still time for a brief reply.

Erling Jorgensen (2012.12.20.2130 EST) –

Martin Taylor 2012.12.15.20.33
The only facts
relevant to a control system analysis are the functions in the loop and
the waveforms of the two inputs (reference and disturbance) together
with any other inputs that a more refined model might add, such as
system noise or gain controls.

EJ: This is the picture we need to start with. The canonical PCT analysis
consists of a loop with just two inputs: one from ‘above’, the reference
signal, & one from ‘below’, the disturbance signal. In my head, I’m
thinking of a little vortex or galaxy symbol, with two galactic arms
extending above & below, respectively.

EJ: The PCT equations quantify what is happening within this little vortex,
with just the two “independent variables,” reference & disturbance, from
outside the loop.

EJ: From an information theoretic standpoint, there is a flow of Uncertainty
going around this loop. If the uncertainty is reduced at any points, we
are justified in saying Information has been gained, in a technical sense;
otherwise, it remains uncertain, or as Entropy, to use the alternate term
from Information Theory.

Yes!

EJ: I think the metaphor of a whirlpool captures this uncertainty situation
quite well. The individual water molecules within a whirlpool came from
somewhere, adjoining currents & eddies & the like. But once within the
whirlpool itself, it is impossible to distinguish them as to original location.
I think the same can be said for any information carried within a control
loop. It is indistinguishable in & of itself, because essentially it is not a
substance at all. It is a statement of the relationship between two
measurements – “before” & “after” – & whether uncertainty has thereby
been reduced.

EJ: So the question to ask is what uncertainty gets reduced by means of a
negative feedback control loop. Here I think the PCT answer is very
clear: The value of the perception becomes more & more certain, as it
is made to track the value of the reference. To take some liberties with
Shannon’s communication language, the perceptual signal ‘gets the
message’ about what value the reference signal intended it to take.

EJ: The structure of the control loop, with its transport functions &
parameters, is set up to bring about that result. In the process, there
is a side effect that is produced. The output as it is modified by the
environmental feedback function (EFF) ends up approximating the inverse
of the net disturbance. In that sense, it too becomes more & more
certain.

EJ: So to paraphrase – out of the changing values of a control loop & this
whirlpool of uncertainty, a message of sorts (“Follow me,”) is sent from
the reference signal to the perceptual signal. As it is received, the
perceptual signal becomes increasingly precise & more like the reference
signal. So its uncertainty has been reduced. The inchoate & entropic
whirlpool has yielded a gain in information, at the point of the perceptual
signal.

EJ: Thus, the unknown (to the control loop) & unpredicted variation from the
reference signal accumulates in the perceptual signal. What is left in the
whirlpool, so to speak, is the remaining uncertainty deriving from the
disturbance function, & this accumulates in inverse form in the net output
(& by this I mean, the output as amplified or modified by the EFF.) So
there is a secondary reduction in uncertainty, which means a gain in
information, at the point of that net output, as it more & more precisely
counteracts the net effect of any disturbances.

EJ: It is important to note that these statements about information gain are
outcomes of the operation of the control loop. The loop itself is not
“using” information in the sense of calculating probabilities among the
possible states of the values swirling within it. The loop is arriving at a
more certain state for its perceptual signal, & in the process arriving at
a more certain state for its net output.

EJ: A couple of clarifications are in order. The term disturbance here is meant
to refer to the net effect on the controlled environmental variable, from
which the perceptual signal is derived. The control loop knows nothing
about the source of any disturbances, just as a comparator doesn’t know
what the reference signal means in terms of the output of any higher level
control loops.

EJ: In the same way, we ought to start talking about a net output after it has
been modified by an environmental feedback function. It is only at that
point of impact, where it is rolled into the controlled environmental
variable, that it truly serves as the inverse to the net disturbance.

Exactly right.

EJ: I hope this way of thinking about information & uncertainty reduction
in the operation of control loops resonates for other people. I would be
interested in hearing whether that is the case or not, so I can distinguish
between a privatized versus shared corpus of understandings here.

Very nice clarification, Erling.

Bruce

[From Bill Powers (2012.12.21.0949 MST)]
Erling Jorgensen (2012.12.20.2130 EST) –

BP: It’s good to get down to the details in the discussion of
information, entropy, uncertainty, and so on.
But let me say first that the question about arbitrary control has never,
for me, been a moral question. It’s simply a practical question: if you
succeed in getting a person to behave so as to fit into your own
structure of perceptions and reference signals, what is the most likely
outcome?
I don’t question anyone’s ability to control another person’s behavior.
That depends only on your cleverness and whatever limits you place on
what you’re willing to do to another person to gain that kind of control.
You can use anything from carefully crafted subtle disturbances chosen
after extensive testing for controlled variables, to emotional pressure
or drugs, or anything else up to and including torture.
The problem with succeeding in getting this kind of control is that you
have no idea what errors you are getting the person to create for himself
as a result of changing his behavior to fit your requirements (not even
mentioning behavioral and intrinsic errors induced by extreme methods of
getting control).
No person behaves except for the purpose of controlling one or more
perceptions at one or more levels of control. The behavior we see another
person producing, therefore, is being adjusted continuously so as to keep
a whole hierarchy of perceptual variables matching whatever reference
signals are currently being generated by higher systems.
While we might have a pretty good idea of what some of those variables
are, through communication or through examining our own experiences, I
would say that it’s impossible to know what most of them are, at all
levels or even at just one level. In MOL sessions we find that even the
person in whom those variables exist consciously knows about only some of
them. So we don’t know much about why we see any particular
behavior. We notice mainly the behaviors of others that cause
difficulties for ourselves, or that look particularly helpful to us or to
something we strive for or approve.

I see arbitrary control, therefore, as being very likely to disturb many
variables that the person is currently controlling. If these effects are
not too strong, chances are that the person can make small adjustments
and cancel their effects on controlled variables, so control can go on
successfully as before. Then a mild conflict can be allowed to continue
without escalating into a fight. Meddling Aunt Melba is always poking her
nose into our business, but we just shrug it off and laugh at the old
dear where she can’t hear us.

However, when changing our behavior to suit someone else causes errors in
important variables, and we can’t change our behavior and still maintain
control of them, the inevitable result will be conflict or resistance.
Even if we want to comply, we can’t stop controlling variables that are
vital to our most important goals. We have to keep eating and breathing,
for example, and will always try to do those things if we can. In such
cases, the attempt to control us throws us into internal conflict. Do
what that other person wants us to do, and at the same time do what we
must do. Or it simply creates a conflict between that person and us: we
refuse to change. “Eppur si muove.”

So it’s not that we “shouldn’t” try to control other people’s
behavior. It’s that we don’t know enough to do it without creating a
conflict that works against the attempt to change their
behavior.

···

=====================================================

Re: Information

EJ: Short answer: Yes, the error signal conveys information about the
reference and the perception; the perception carries information about
the disturbance and the output; the output carries information about the
error and the history of the error (assuming some kind of integration in
the output function). The error signal therefore does carry information
about the disturbance (and the reference and the output, as well as its
own history).

BP: While I won’t argue against well-entrenched customs that I don’t have
much chance of changing, I do have difficulty with this concept. In some
technical sense the error signal can be said to carry information about
the perceptual or reference signal, but is this information accessible to
any other system or observer? I say it’s not.

Consider this table showing values of the perceptual, reference, and
error signals:

   P      R      E (= R - P)
   20     37     17
   92.5   109.5  17
   3      20     17
   -16    1      17
   ...

Ok, so given that the error signal magnitude is 17, what information does
that value carry about the perceptual signal? About the reference signal?
I’m sure the quantity in bits can be computed if you have some other
means of finding out the values of R and P as well as knowledge of what
the possible choices are and the distributions, but given only the value
of the error signal one knows nothing at all about the value of R or of
P. If you know either one (in addition to E) you can compute the other
exactly, but E alone is not sufficient. And since the output function
receives no input except the error signal, it can’t make any use of this
hypothetical knowledge, either. There is no way to produce one output if
the error signal is 17, but another output if the error signal is still
17.
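Bill's table can be turned into a few lines of code. This is an illustrative sketch, not anything from the thread (the `output_function` and its gain are my own hypothetical choices): it shows that an output function receiving only E produces the same output for every (P, R) pair in the table.

```python
# Many (P, R) pairs are consistent with the same error signal E = R - P.
pairs = [(20, 37), (92.5, 109.5), (3, 20), (-16, 1)]
for p, r in pairs:
    assert r - p == 17  # identical error in every case

def output_function(e, gain=10.0):
    """A hypothetical output function: it sees only e, never P or R."""
    return gain * e

# Collect the distinct outputs across all four (P, R) pairs.
outputs = {output_function(r - p) for p, r in pairs}
print(outputs)  # → {170.0}: one error value, one output, regardless of P and R
```

There is no way for `output_function` to branch on P or R; the set of outputs collapses to a single value, which is Bill's point about the error signal alone carrying no usable information about either.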

Uncertainty:

My objection to this term is that it is anthropomorphic. People can feel
uncertain if they do not know which of several possibilities is the
truth. But contemporary artificial devices can’t be or feel uncertain,
because they have no knowledge of possibilities, and no feelings. They
can’t be certain, either. They don’t have that level of perception or the
ability to sense and evaluate their own perceptions.

Further, people can be certain when they have no reason to be certain.
They can simply decide to believe, as an act of faith. They then lose
their sense of uncertainty even though nothing happened to give them more
information. Artificial devices, fortunately, don’t do that.

EJ: From an information theoretic standpoint, there is a flow of
Uncertainty going around this loop. If the uncertainty is reduced at any
points, we are justified in saying Information has been gained, in a
technical sense; otherwise, it remains uncertain, or as Entropy, to use
the alternate term from Information Theory.

BP: If that’s what you want to imagine, go ahead, but I don’t want to
imagine it, so I won’t. Or perhaps you have knowledge that someone opened
up a control system, examined its parts, and found that there was
something heretofore unknown flowing from one part into others or being
stored in one or more parts, in a way predicted by the equations you use.
If that were true, my own principles would require me to agree with your
assertions. But I know of no such evidence, so I can’t accept Uncertainty
(or Information) as a real physical variable which exists for all
observers. It’s just an optional calculation, like computations of
“utility” or “intelligence.” There is no independent
confirmation of the existence of any variables of that kind.

I’d say the above propositions would be somewhat more acceptable if we
treated them as metaphors or as hypotheses to be tested. In other words,
as a model we want to try out. But so far all that has been offered is
the theory, without any means of disconfirming it. I’m ready to believe
that there are counterparts to such terms if a proper investigation fails
to disconfirm them, or offers positive confirmation. Otherwise I have to
treat them as unsupported propositions, or at best, convenient fictions.
As Rick said earlier, it would be nice to see an actual example of the
uses to which these ideas are put, to see if there are any interesting
conclusions they would lead to that we can’t get out of conventional
analyses of control systems.

Best.

Bill P.

[From Chad Green (2012.12.21.1803 EST)]

BP: Do you think the volunteer could have controlled the position of the knot while wearing a blindfold?

All you have to do is imagine trying it yourself. It's impossible. If you can't perceive the variable, you can't control it. Obviously, perception plays an essential role in this process we call controlling. The more you consider that fact, the more you will come to appreciate why we call this theory not just control theory, but perceptual control theory.

CG: That was from the Nature of PCT article I found online. Perceptually, yes, it is impossible. Conceptually, however, it is indeed possible. Shall we call it conceptual control theory?

Best,
Chad

Chad Green, PMP
Program Analyst
Loudoun County Public Schools
21000 Education Court
Ashburn, VA 20148
Voice: 571-252-1486
Fax: 571-252-1633

"If you want sense, you'll have to make it yourself." - Norton Juster

[Martin Taylor 2012.12.21.23.05]

So why did you ask that an informational analysis of a control
system should be able to describe a function outside the control
system, and claim that the inability of the informational analysis
to do that showed its worthlessness?

A control system analysis is concerned only with what goes on in the
control system, together with the input variables to the system. A
control system analysis incorporates nothing about the environment
except the properties of the environmental feedback path. It does
not incorporate any consideration of how its input variables, the
reference and the disturbance, are generated. If the system to be
analyzed has other input variables (e.g. gain control, energy
input), then those input variables must be included. But how the
input variables come to be the way they are is never the job of a
control system analysis, whether based in information theory or not.
At one point you said you felt there must be something wrong with a
bog-standard control system analysis because it showed output qo to
be a function of disturbance qd. When Bill pointed out that it was
so, you then produced a little demo that you claimed showed that the
output was not informationally related to the disturbance. It turned
out that the “disturbance” in your demo was not qd at all, but some
other variable modified by a time-varying function to generate qd.

You said (in the quote above) a control system analysis would be
able to figure out the function, whereas I had earlier said that an
informational analysis of the control system could not and had asked
you why you claimed the demo demonstrated the invalidity of
informational analysis. Now you say that a control system analysis
could not, either. So what was the point of the demo, if it was not
to persuade by illegitimate means?
So far, your only actual evidence to support your repeated
statements that the following interchange is valid:
[From Rick Marken (2012.12.15.2210)]
is your own repetition of the same inaccurate claim. Never, in any
way, have you shown any kind of evidence to support your last
statement quoted from this exchange. Every one of your supposed
means of showing has been demonstrated to be based either on your
own misconceptions about information theory or on misleading the
readers about the structure of your demo.
“What I say three times is true”, so I guess that what you say
thirty three times must be more true. Most of what we call science
requires other kinds of evidence. “Because I say so” is an argument
that usually fails when used toward people aged over about three.
Martin

···

On 2012/12/18 12:14 PM, Richard Marken
wrote:

[From Rick Marken (2012.12.18.0915)]

        Martin Taylor

(2012.12.17.23.48)–

Rick Marken (2012.12.15.2240)

                MT: No analysis

of any kind, of any control system could say
anything about functions that are not part of the
control system.

              RM: A control system analysis does this all the time.
        MT: Very interesting. That's something I never heard in the

two decades I thought I understood PCT fairly well. It is
always nice to learn something new.

      RM: I took "say" to mean "include". A control theory analysis

of the situation in my tracking task which you diagram below
would have to include what is known about the environment in
which the system does its controlling. Control theory is a
theory of what goes on inside the organism when it is
controlling; I wouldn’t use control theory to analyze what is
going on in the environment outside the control system; there
are physical measuring instruments for that.

Bruce Abbott (2012.12.15.1730 EST)–

            BA:

My only interest throughout this whole exchange has been
to show that information theory (a la Shannon) can be
legitimately applied to analyze the performance of a
control system.

     RM: And my only interest has been to show that it isn't.
             BA:

I’ve demonstrated, contrary to your repeated assertions,
that in our standard control system, information does
get transmitted from disturbance to the output of the
feedback function, as information theory leads one to
expect.

  RM: And I've shown that it doesn't.

[From Rick Marken (2012.12.21.2200)]

Martin Taylor (2012.12.21.23.05)--

MT: So far, your only actual evidence to support your repeated statements that
the following interchange is valid:

BA: I've demonstrated, contrary to your repeated assertions, that in our
standard control system, information does get transmitted from disturbance
to the output of the feedback function, as information theory leads one to
expect.

RM: And I've shown that it doesn't.

MT: is your own repetition of the same inaccurate claim. Never, in any way, have
you shown any kind of evidence to support your last statement quoted from
this exchange.

RM: I've shown (as has Bill) that information does not get transmitted
from disturbance to output (of the feedback function) of a control
system in many different ways:

1. Because variations in a controlled input are the simultaneous
result of output and disturbance: i = o+d

2. When control is good there is no correlation between input and
output.(Marken, R. S. and Horth, B. (2011) When Causality Does Not
Imply Correlation: More Spadework at the Foundations of Scientific
Psychology, Psychological Reports, 108, 1-12)

3. When control is good the same disturbance produces exactly the same
output even though the input may be completely different.(Marken, R.
S. (1980) The Cause of Control Movements in a Tracking Task.
Perceptual and Motor Skills, 51, 755-758.)

4. When the feedback function varies from negative to positive there
is no correlation between disturbance and output.

5. When the disturbance function varies from negative to positive
there is no correlation between disturbance and output.

6. The error signal carries no information even about the perceptual
signal, as shown by Bill today (Bill Powers (2012.12.21.0949 MST):

BP: While I won't argue against well-entrenched customs that I don't
have much chance of changing, I do have difficulty with this concept.
In some technical sense the error signal can be said to carry
information about the perceptual or reference signal, but is this
information accessible to any other system or observer? I say it's
not.

Consider this table showing values of the perceptual, reference, and
error signals:

   P      R      E (= R - P)
   20     37     17
   92.5   109.5  17
   3      20     17
   -16    1      17
    ...

Ok, so given that the error signal magnitude is 17, what information
does that value carry about the perceptual signal? About the reference
signal? I'm sure the quantity in bits can be computed if you have some
other means of finding out the values of R and P as well as knowledge
of what the possible choices are and the distributions, but given only
the value of the error signal one knows nothing at all about the value
of R or of P. If you know either one (in addition to E) you can
compute the other exactly, but E alone is not sufficient. And since
the output function receives no input except the error signal, it
can't make any use of this hypothetical knowledge, either. There is no
way to produce one output if the error signal is 17, but another
output if the error signal is still 17.

MT: Every one of your supposed means of showing has been
demonstrated to be based either on your own misconceptions about information
theory or on misleading the readers about the structure of your demo.

RM: As Bob Dylan said when he was called "Judas" for going electric "I
don't believe you".

But since you seem to respect Bill's opinion, how about taking his
suggestion in the post I quoted above:

BP: As Rick said earlier, it would be nice to see an actual example of
the uses to which these ideas [information theory] are put, to see if
there are any interesting conclusions they would lead to that we can't
get out of conventional analyses of control systems.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.12.22.0719 MST)]

Rick Marken (2012.12.21.2200)]

RM: I’ve shown (as has Bill) that information does not get transmitted
from disturbance to output (of the feedback function) of a control
system in many different ways:

BP: We have to be cautious about saying this, because the meaning of
“Information” in information theory is not its meaning in
ordinary language. I capitalize Information when I want to indicate the
special meaning in information theory.

RM: 1. Because variations in a controlled input are the simultaneous
result of output and disturbance: i = o+d

BP: Yes, this tells me I can’t deduce either o or d just from knowing i.
That means there is no usable information about o or d in i, but it does
not rule out the idea that the Information content of i, in bits, can be
calculated from the Information in o and in d. This is something like
saying that the standard deviation of i equals the square root of the sum
of the squares of the standard deviations of o and d, but given only the
variance of i, you can’t deduce what the variance of o or of d is. So a sloppy
description might say that the variance of d is transmitted to i, without
really intending to imply that the contribution of d could be somehow
picked out of the total variance of i. But a person listening to that
description might well hear it as claiming that after the variance (or
information) has been transmitted, we ought to be able to find it in the
place that received it, the way we would find a letter in the mailbox
after it had been received.

This has been the chief problem in this whole interchange:
unconsciousness of alternate meanings in one’s words that mislead others
about what one is trying to say. It’s happening on both sides of the
exchange.

I expect that Information does in fact get transmitted from d to i, and
thence to o, just as variance does – but only in that sense. You can’t
look at i and identify which bits of Information in i came from
Information in d or what they mean about d, and ditto for Information in
i about Information in o.

RM: 2. When control is good there is no correlation between input and
output. (Marken, R. S. and Horth, B. (2011) When Causality Does Not
Imply Correlation: More Spadework at the Foundations of Scientific
Psychology, Psychological Reports, 108, 1-12)

BP: It’s not true that there is no correlation between input and output
when control is good. There is no correlation only in the
imaginary case of perfect control, zero error. Throughout
control theory we have to remain conscious of when we are giving an exact
description and when we are approximating to obtain a handy rule of thumb
that is almost true. If you remember only the rule of thumb, you will
come up with wrong conclusions. You will think, for example, that a
control system can work so the error is always exactly zero.

RM: 3. When control is good the same disturbance produces exactly the same
output even though the input may be completely different. (Marken, R.
S. (1980) The Cause of Control Movements in a Tracking Task.
Perceptual and Motor Skills, 51, 755-758.)

BP: This makes it sound as if you can make the input different while the
disturbance remains the same. The only way to make the input different
without altering the control system’s functions is to make the
disturbance different. If the disturbance is exactly the same, the input
will be exactly the same because the output will also be exactly the
same, but for noise in the control system itself (a hidden disturbance,
as Martin Taylor pointed out recently).

  4. When the feedback function varies from negative to positive there
is no correlation between disturbance and output.

  5. When the disturbance function varies from negative to positive
there is no correlation between disturbance and output.

  6. The error signal carries no information even about the perceptual
signal, as shown by Bill today (Bill Powers (2012.12.21.0949 MST)):

All these are approximations and are false when looked at more carefully.
If all these statements were true, the behavior of a control system would
be inexplicable. These rules of thumb are interesting because they seem
both valid and paradoxical, but in nature you can’t have both at once.
Once you understand how a control system really works, using the exact
equations, all paradoxes disappear.

By criticizing what you said I am by no means exempting Martin and Bruce
from the same accusations. The blind are accusing the blind of being
blind. When we hear or see our own communications, we most easily
recognize the meanings we intended to communicate. But others hearing or
seeing the same inputs don’t have that advantage, and they interpret the
words in terms of their own experiences. So A can say one thing and B can
hear another, at the level of meaning. To communicate clearly, we all
have to be aware that our words can have more than one meaning, and take
steps to deny or otherwise cancel alternate but unintended
meanings.

I’m sure that Martin never meant that Information in the disturbance is
conveyed to the output via the input in the same way a letter is conveyed
to a person’s mailbox via a post office, arriving in recognizable form.
I’m sure that even metaphorically he doesn’t mean that, since the
Information at the destination is all jumbled up, Information from
multiple sources being inserted bit by bit where signals meet at a
junction before getting to the output. But nothing he has said has made
that denial clear enough, and he seems no more aware than you are of the
possibility that his words can be read with different meanings. The whole
interchange has consisted of unintended straw men.

And to all of you witnesses to this, standing around and feeling smug,
please think twice before throwing stones at other people’s glass houses.
How do you think I learned about all these fundamental rules of
communication? I learned them by breaking them and suffering the
consequences. You do it, too.

Best,

Bill P.

[From Bill Powers (2012.12.22.0925 MST)]

Chad Green (2012.12.21.1803 EST) --

BP: Do you think the volunteer could have controlled the position of the knot while wearing a blindfold?

CG: That was from the Nature of PCT article I found online. Perceptually, yes, it is impossible. Conceptually, however, it is indeed possible. Shall we call it conceptual control theory?

No, let's not. Are you referring to controlling something in imagination? If you can imagine acting on something, imagine the effects on the something, and imagine the something changing to fit a reference condition, you can "indeed" control conceptually. But you're not controlling anything that someone else could see, are you?

You'd better explain what you mean.

Best,

Bill P.

[From Rick Marken (2012.12.22.1040)]

Bill Powers (2012.12.22.0719 MST)--

RM: This post of yours should be a nice Xmas present to Bruce and
Martin. So I will concede defeat here. I still think information (or
Information) theory has nothing to do with control theory and is
misleading to boot. But it looks like this myth cannot be stopped,
especially if you're going to go out of your way to imagine that the
proponents of the myth are not proponing it;-)

I'll reply to your points though now that you've gotten your opinions
into the "record" whatever I say can be safely ignored. But just for
fun here goes:

RM: I've shown (as has Bill) that information does not get transmitted
from disturbance to output (of the feedback function) of a control
system in many different ways:

BP: We have to be cautious about saying this, because the meaning of
"Information" in information theory is not its meaning in ordinary language.

RM: You leave out that to which my statement is a reply. Martin T.
said that I had presented no evidence that Bruce A.'s statement that
"information does get transmitted from disturbance to the output" is
false. In my post I point to the evidence that information does not
get _transmitted_ from disturbance to output. This evidence doesn't
really depend on the meaning of the word "information" so much as on
the meaning of the word "transmitted". Thanks to the internet I find
that transmitted means:

1. Cause (something) to pass on from one place or person to another.
2. Broadcast or send out (an electrical signal or a radio or
television program).

So whatever information is, I took Bruce and Martin to be saying that
it is transmitted (in either of the above two senses of the word) via
the organism from disturbance to output. Every piece of evidence I
present (that you criticize) shows that no information (or Information
or stuff or ether or whatever) is _transmitted_ from disturbance to
output. A control system knows only the state of the controlled
perception relative to the reference specification for the state of
that perception and (when properly designed) acts to keep that
perception in the reference state. When it does that, and the
disturbance and feedback functions are linear and constant, output
variations will precisely mirror disturbance variations. But this is a
side effect of the control process; it doesn't result from the system
knowing anything about the disturbance (as you, of course, know).
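That side effect is easy to exhibit in a toy loop. The sketch below is my own illustration, not the thread's model: the system portion of the code acts only on the error signal, never on the disturbance variable, yet with linear, constant functions its output ends up (negatively) mirroring the disturbance almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(2)
# Smoothed random disturbance waveform.
d = np.convolve(rng.normal(size=3000), np.ones(80) / 80, mode="same")

o, outputs = 0.0, []
for dd in d:
    p = o + dd   # environment: the input combines output and disturbance
    e = 0.0 - p  # system: compare the perception to the reference (here 0)
    o += 1.0 * e # system: integrating output acts only on e, never on dd
    outputs.append(o)

# Output variations precisely mirror the disturbance as a side effect.
print(float(np.corrcoef(outputs, d)[0, 1]))  # very close to -1
```

The near-perfect negative correlation arises purely from the loop equations; nothing in the system portion ever "knows" the disturbance, which is Rick's point about mirroring being a side effect of control.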

Now to answer your criticisms of my points:

RM: 1. Because variations in a controlled input are the simultaneous
result of output and disturbance: i = o+d

BP: Yes, this tells me I can't deduce either o or d just from knowing i.
That means there is no usable information about o or d in i, but it does not
rule out the idea that the Information content of i, in bits, can be
calculated from the Information in o and in d.

RM: But it does rule out the idea that Information (or information)
about d is TRANSMITTED to o.

BP: This is something like saying
that the standard deviation of i equals the square root of the sum of the squares
of the standard deviations of o and d, but given only the variance of i, you can't deduce
what the variance of o or the variance of d is. So a sloppy description
might say that the variance of d is transmitted to i, without really
intending to imply that the contribution of d could be somehow picked out of
the total variance of i. But a person listening to that description might
well hear it as claiming that after the variance (or information) has been
transmitted, we ought to be able to find it in the place that received it,
the way we would find a letter in the mailbox after it had been received.

RM: Ah. So now you are saying that it is both "information" and
"transmitted" that are being understood differently; "information"
means who knows what and "transmitted" means "deduced" by an outside
analyst.

BP: This has been the chief problem in this whole interchange: unconsciousness
of alternate meanings in one's words that mislead others about what one is
trying to say. It's happening on both sides of the exchange.

RM: That's kind of hard for me to believe. Everything Martin and
Bruce Abbott say points to them believing that the system bases its output
on information transmitted to it about the disturbance. But if all
they mean is that output can be used by an observer to deduce the
disturbance to a controlled variable -- in other words, that
information has no relevance to the functioning of a control system --
then I have no dispute with them.

RM: 2. When control is good there is no correlation between input and
output.(Marken, R. S. and Horth, B. (2011) When Causality Does Not
Imply Correlation: More Spadework at the Foundations of Scientific
Psychology, Psychological Reports, 108, 1-12)

BP: It's not true that there is no correlation between input and output when
control is good.

RM: No correlation is what is _observed_. There would be a correlation
for a noiseless control system, but in actuality no correlation is what
is observed, and that is what is described in the paper.

RM: 3. When control is good the same disturbance produces exactly the same
output even though the input may be completely different.(Marken, R.
S. (1980) The Cause of Control Movements in a Tracking Task.
Perceptual and Motor Skills, 51, 755-758.)

BP: This makes it sound as if you can make the input different while the
disturbance remains the same.

RM: Well, it may sound like that to you but that is not what the paper
is about, as you know.

BP If the disturbance is exactly the same, the input will be exactly
the same because the output will also be exactly the same, but for noise in
the control system itself (a hidden disturbance, as Martin Taylor pointed
out recently).

RM: So what was your point? Yes, there is noise in a normal, human
control system, and that makes the actual input variations different
on trials with the same disturbance. So this is another
piece of evidence that information about the disturbance does not get
transmitted (in the ordinary meaning of "transmitted") to the output.

RM: 4. When the feedback function varies from negative to positive there
is no correlation between disturbance and output.

5. When the disturbance function varies from negative to positive
there is no correlation between disturbance and output.

6. The error signal carries no information even about the perceptual
signal, as shown by Bill today (Bill Powers (2012.12.21.0949 MST):

BP: All these are approximations and are false when looked at more carefully. If
all these statements were true, the behavior of a control system would be
inexplicable. These rules of thumb are interesting because they seem both
valid and paradoxical, but in nature you can't have both at once. Once you
understand how a control system really works, using the exact equations, all
paradoxes disappear.

RM: Martin should have fun with this. But to me it's completely
irrelevant to the points of the demonstrations, which are that it is
possible to create situations where an observer will see that no
information (in any sense) about the disturbance is "transmitted" (in
the ordinary sense of that word) to the output.

BP: By criticizing what you said I am by no means exempting Martin and Bruce
from the same accusations. The blind are accusing the blind of being blind.

RM: I think the rose-colored-glasses wearer (you) is trying to find
some kind of equivalency (even a false one) to make it seem like we're
all on the same page.

BP: I'm sure that Martin never meant that Information in the disturbance is
conveyed to the output via the input in the same way a letter is conveyed to
a person's mailbox via a post office, arriving in recognizeable form.

RM: Well, if Martin never meant that then perhaps he could just say he
never meant it and we would have no problem. It seems to me that if
this is never what Martin (and Bruce A) meant then they should be
comfortable with the idea that information theory really has nothing
to do with how a control system functions. After all, information
theory was developed to deal with situations where information in a
message is conveyed to a receiver in the same way that a letter is
conveyed to a person's mailbox via a post office. Measures of
information transmission were used to determine how effectively the
"post office" (communication channel) could handle various volumes of
letters. Since a control system does not work like a communication
channel Martin and Bruce should be happy to admit that information
theory is irrelevant to the functioning of a control system.

BP: I'm sure that even metaphorically he doesn't mean that, since the Information at
the destination is all jumbled up, Information from multiple sources being
inserted bit by bit where signals meet at a junction before getting to the
output. But nothing he has said has made that denial clear enough, and he
seems no more aware than you are of the possibility that his words can be
read with different meanings. The whole interchange has consisted of
unintended straw men.

RM: Well, it seems to me that we are always testing as best as we can
to get at each other's true meanings. I think Bruce A. got closest to
confirming your idea about what they mean by "transmitting
information" when he said that information theory could be used to
deduce the disturbance from outputs so that an organism could be used
as the equivalent of a voltmeter. Which is a peculiar thing to want
to do, of course, but you could do that just as well with control
theory; no information theory needed.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.22.13.36]

[From Rick Marken (2012.12.21.2200)]

Martin Taylor (2012.12.21.23.05)--
MT: So far, your only actual evidence to support your repeated statements that
the following interchange is valid:

  BA: I've demonstrated, contrary to your repeated assertions, that in our
standard control system, information does get transmitted from disturbance
to the output of the feedback function, as information theory leads one to
expect.

RM: And I've shown that it doesn't.

MT: is your own repetition of the same inaccurate claim. Never, in any way, have
you shown any kind of evidence to support your last statement quoted from
this exchange.

RM: I've shown (as has Bill) that information does not get transmitted
from disturbance to output (of the feedback function) of a control
system in many different ways:

1. Because variations in a controlled input are the simultaneous
result of output and disturbance: i = o+d

True. How is this relevant? If you follow the normal control analysis backwards around the loop, you find that o = f(d), so i = g(d), where g(d) = d + f(d). Since f is deterministic, i is completely determined by d, and hence is informationally equivalent to d. What you have to recognize is that f( ) and g( ) are functions of time, so you have to include the entire history of the waveform. It's analogous to the situation with Fourier transforms: if you sample a waveform at one moment, you have no idea what its Fourier spectrum looks like, and if you sample one frequency from the spectrum, you have no idea what the waveform looks like.
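The determinism claim above can be illustrated with a toy simulation. Everything here is a sketch under my own assumptions: the leaky-integrator output function, the parameter values, and the function names are illustrative, not anything specified in the thread.

```python
import math

def run_loop(disturbance, gain=50.0, leak=0.05, dt=0.01, ref=0.0):
    """Return the input (i) and output (o) traces for a disturbance trace."""
    o = 0.0
    i_trace, o_trace = [], []
    for d in disturbance:
        i = o + d                        # controlled input: i = o + d
        e = ref - i                      # error signal
        o += dt * (gain * e - leak * o)  # leaky-integrator output function
        i_trace.append(i)
        o_trace.append(o)
    return i_trace, o_trace

d1 = [math.sin(0.05 * t) for t in range(2000)]
i_a, o_a = run_loop(d1)
i_b, o_b = run_loop(d1)  # replay the same disturbance history

# Because the loop is deterministic, the whole i waveform is fixed by the
# history of d: replaying d reproduces i (and o) exactly.
assert i_a == i_b and o_a == o_b
```

Note that the equivalence holds between waveform histories, as the Fourier analogy suggests: a single sample of i by itself tells you almost nothing about the sample of d that helped produce it.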

2. When control is good there is no correlation between input and
output.(Marken, R. S. and Horth, B. (2011) When Causality Does Not
Imply Correlation: More Spadework at the Foundations of Scientific
Psychology, Psychological Reports, 108, 1-12)

True, nearly. More true the better the control. I've pointed out why the information-theoretic approach requires this to be true.
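Both halves of this exchange can be sketched in one simulation: the better the control, the more nearly the output mirrors the disturbance while the input-output correlation collapses toward zero. The model and parameter values below are illustrative assumptions, not taken from the cited papers.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

o = 0.0
d_trace, i_trace, o_trace = [], [], []
for t in range(2000):
    d = math.sin(0.05 * t)        # disturbance
    i = o + d                     # controlled input
    o += 0.01 * 50.0 * (0.0 - i)  # integrating output function, reference = 0
    d_trace.append(d)
    i_trace.append(i)
    o_trace.append(o)

print(pearson(d_trace, o_trace))  # close to -1: output mirrors the disturbance
print(pearson(i_trace, o_trace))  # near 0: good control decouples input from output
```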

3. When control is good the same disturbance produces exactly the same
output even though the input may be completely different.(Marken, R.
S. (1980) The Cause of Control Movements in a Tracking Task.
Perceptual and Motor Skills, 51, 755-758.)

True. How is it relevant? Every kind of analysis, informational or otherwise, gives the same conclusion.

4. When the feedback function varies from negative to positive there
is no correlation between disturbance and output.

Two control systems are required for control in these conditions. I showed how the point is irrelevant, and at the time I thought you agreed. One of my tutorials showed how you treat the mutual information relationships among 3 variables, which is the case here. (The Garner and McGill paper does it more technically).

5. When the disturbance function varies from negative to positive
there is no correlation between disturbance and output.

The disturbance function is outside the control system, and has nothing to do with the point at issue. To repeat this is disingenuous.

6. The error signal carries no information even about the perceptual
signal, as shown by Bill today (Bill Powers (2012.12.21.0949 MST):

BP: While I won't argue against well-entrenched customs that I don't
have much chance of changing, I do have difficulty with this concept.
In some technical sense the error signal can be said to carry
information about the perceptual or reference signal, but is this
information accessible to any other system or observer? I say it's
not.

I agree with Bill, whom you seem to contradict.

If you read what Bill says, "In some technical sense the error signal can be said to carry information about the perceptual or reference signal". That's the only point at issue, not whether an external observer can read the perceptual signal, or whether the perceptual signal can carry information about the disturbance signal, because observing only one of the two relevant variables that are related to a third will not allow you to tell much, if anything, about that third variable.

There actually is a possibility for an external observer to determine the output pretty closely knowing only the error variable, if the output function is known to be a leaky integrator, but not if it is a perfect integrator. The value of the output is the integral of the error, with older values discounted so that after a long time their influence on the current output value approaches zero. Refer back to my comment on your first point.
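That observation can be sketched as follows (a toy model with illustrative parameters, my assumptions rather than anything from the thread): a leaky integrator makes the current output a discounted sum of past errors, so an observer who records only the error trace recovers the output even from a wrong initial guess, whereas with a perfect integrator (leak = 0) the discount factor is 1 and the initial-guess error never decays.

```python
import math

dt, gain, leak = 0.01, 50.0, 0.5
decay = 1.0 - dt * leak  # per-step discount on older error contributions

o = 3.0                  # true initial output, unknown to the observer
errors = []
for t in range(5000):
    d = math.sin(0.05 * t)
    e = 0.0 - (o + d)              # error = reference - input
    errors.append(e)
    o = decay * o + dt * gain * e  # leaky integration of the error

# Observer's reconstruction from the recorded errors alone, starting from
# an assumed (wrong) initial output of zero.
o_hat = 0.0
for e in errors:
    o_hat = decay * o_hat + dt * gain * e

print(abs(o - o_hat))  # tiny: the wrong initial guess has been discounted away
```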

MT: Every one of your supposed means of showing has been
demonstrated to be based either on your own misconceptions about information
theory or on misleading the readers about the structure of your demo.

RM: As Bob Dylan said when he was called "Judas" for going electric "I
don't believe you".

I didn't expect that you would. It's the same response that a true believer in any religion would have if you say that he has not provided evidence that his God, whichever it might be, really exists. A Christian might say "I have provided evidence: look at how Christ rose from the dead"; a Mormon might ask how you deal with the fact that God provided a set of inscribed golden tablets that gave the new truth; a Tibetan Buddhist might ask how you explain reincarnation; a Muslim might ask how Mohammed or Moses talked to God if God didn't exist. A rational believer would say that you were correct, but that you had not provided evidence that the God didn't exist, either.

But since you seem to respect Bill's opinion, how about taking his
suggestion in the post I quoted above:

BP: As Rick said earlier, it would be nice to see an actual example of
the uses to which these ideas [information theory] are put, to see if
there are any interesting conclusions they would lead to that we can't
get out of conventional analyses of control systems.

I agree with Bill. But that's not the point. The point is whether one CAN apply information principles to the analysis of control systems, or if you can, whether there are circumstances in which you can and circumstances in which you can't. You can apply Fourier or Laplace analysis if the system is linear, but not if it isn't. That doesn't make them invalid. It just makes them less useful than one might at first expect. I don't know whether information analysis can be used for more than an estimate of the limits of control under certain types of different conditions, or if it can, under what circumstances it might be more useful than some other approach or combination of approaches. There doesn't seem to be any easily accessible literature on the topic, though I recently learned that Mandelbrot has a book "Information and Control", which might be relevant.

However that may be, it is simply not appropriate for you to continue using ignorance of what informational analysis is or does as a justification for repeating that "there is no information about the disturbance in the output", and using your repetitions as a technique to shut off any interest others might have in pursuing the question.

I think that I might follow Bruce Abbott in continuing to consider information-theoretic approaches to control, but at the same time to cease responding to your irrelevant comments on the issue.

Martin

[From Rick Marken (2012.12.22.1220)]

Martin Taylor (2012.12.22.13.36)--

MT: I agree with Bill, whom you seem to contradict.

RM: And he agrees with you. So I think I have been voted off the
island. I believe I am now officially the only one on CSGNet (or,
possibly, in the entire world) who thinks information theory is both
irrelevant and misleading when applied to control theory. So feel
free to go off and develop the information theory of PCT without any
more interference from me.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.12.22.1352 MST)]

Rick Marken (2012.12.22.1040) --

> Bill Powers (2012.12.22.0719 MST)--

RM: This post of yours should be a nice Xmas present to Bruce and
Martin. So I will concede defeat here.

BP: I wasn't picking winners and losers. What were you trying to win?

RM: I still think information (or
Information) theory has nothing to do with control theory and is
misleading to boot. But it looks like this myth cannot be stopped,
especially if you're going to go out of your way to imagine that the
proponents of the myth are not proponing it;-)

BP: When Martin says that information theory cannot explain how control systems work, and that information in the disturbance can't be used to explain why the output opposes it, and that the control system itself does not use information about the disturbance to compute its output, it seems clear to me that he is not thinking of information theory the way you are or I was. Why won't you grant him that? Are you insisting that your interpretation of his meaning is closer than his to what he was trying to convey?

I'll reply to your points though now that you've gotten your opinions
into the "record" whatever I say can be safely ignored. But just for
fun here goes:

> RM: I've shown (as has Bill) that information does not get transmitted
> from disturbance to output (of the feedback function) of a control
> system in many different ways:
>
> BP: We have to be cautious about saying this, because the meaning of
> "Information" in information theory is not its meaning in ordinary language.

RM: You leave out that to which my statement is a reply. Martin T.
said that I had presented no evidence that Bruce A.'s statement that
"information does get transmitted from disturbance to the output" is
false. In my post I point to the evidence that information does not
get _transmitted_ from disturbance to output. This evidence doesn't
really depend on the meaning of the word "information" so much as on
the meaning of the word "transmitted". Thanks to the internet I find
that transmitted means:

1. Cause (something) to pass on from one place or person to another.
2. Broadcast or send out (an electrical signal or a radio or
television program).

When you have to resort to the dictionary to find the "real" meaning of a word you're no longer trying to understand what someone meant. Words don't "have real meanings," or any meanings at all. Words are perceptions which each person connects to nonverbal perceptions in that person's memory, and nobody else's. I imagine a juicy red apple, connect it with the written symbol "apple", and transmit that to you using marks on a screen. You read the word, and connect it to your memories of the company that sells computers. So you think I am referring to the company.

Normally, context minimizes that sort of mistake, but in the current discussion there isn't any such easy solution. I frankly don't know what Martin or Bruce means by the letter pattern "information." However, the way they use the term in sentences shows more agreement than disagreement with the meanings I have for the same letter pattern. I don't know what you mean by "transmit," or what the author of the online dictionary meant, but I join you in disagreeing that information is a thing or substance physically passed from disturbance to input to output, so perhaps "transmit" is just being used metaphorically by Bruce or Martin. I believe they both would agree, though probably for different reasons. You can try to hold them to a literal dictionary meaning, but that would only be for the purpose of winning an argument, not for the purpose of finding out what they do mean.

RM: So whatever information is, I took Bruce and Martin to be saying that
it is transmitted (in either of the above two senses of the word) via
the organism from disturbance to output.

BP: Yes, now you're speaking as I would prefer, by saying that you "take" a word in your own preferred way. If they are taking it differently, then you and they are not disagreeing, you're all just miscommunicating. Now you may see the distinction I'm trying to make.

RM: Every piece of evidence I present (that you criticize) shows that no information (or Information or stuff or ether or whatever) is _transmitted_ from disturbance to output.

BP: I agree with that, and I'm pretty sure that Martin would agree and Bruce might agree. So if you think you are disagreeing with them, that would be evidence that there may be no disagreement but just a difference in word usage. You'd have to ask them, not me, to see if I'm guessing right.

RM: A control system knows only the state of the controlled
perception relative to the reference specification for the state of
that perception and (when properly designed) acts to keep that
perception in the reference state. When it does that, and the
disturbance and feedback functions are linear and constant, output
variations will precisely mirror disturbance variations. But this is a
side effect of the control process; it doesn't result from the system
knowing anything about the disturbance (as you, of course, know).

BP: Are you sure that Bruce and Martin don't know that? I would be astonished if they didn't.

Now to answer your criticisms of my points:

> RM: 1. Because variations in a controlled input are the simultaneous
> result of output and disturbance: i = o+d
>
> BP: Yes, this tells me I can't deduce either o or d just from knowing i.
> That means there is no usable information about o or d in i, but it does not
> rule out the idea that the Information content of i, in bits, can be
> calculated from the Information in o and in d.

RM: But it does rule out the idea that Information (or information)
about d is TRANSMITTED to o.

BP: If you want to take the word "transmitted" in strict literal agreement with a dictionary definition that some person wrote down, then I agree. But I don't take it that way, any more than I take literally the idea that a television station transmits pictures to my television set. Can't you just see all those little pictures flying through the air? But I think you would understand what I mean if I said that the digital broadcasts sometimes transmit poor pictures to my TV.

>BP: This is something like saying
> that the standard deviation of i equals the square root of the sum of the
> squares of the standard deviations of o and d, but given only the variance
> of i, you can't deduce
> what the variance of o or the variance of d is. So a sloppy description
> might say that the variance of d is transmitted to i, without really
> intending to imply that the contribution of d could be somehow picked out of
> the total variance of i. But a person listening to that description might
> well hear it as claiming that after the variance (or information) has been
> transmitted, we ought to be able to find it in the place that received it,
> the way we would find a letter in the mailbox after it had been received.

RM: Ah. So now you are saying that it is both "information" and
"transmitted" that are being understood differently; "information"
means who knows what and "transmitted" means "deduced" by an outside
analyst.

BP: I certainly don't mean Information, in bits -- just anything from which I can gain knowledge, which is a much looser meaning. And I do know what transmission means and how, in electronics, it works, but I still use the word loosely, meaning only that somehow what is sent has a continuous chain of effects that end up somewhere else. We say that neural fibers transmit trains of impulses from one place to another, even though nothing physically makes the journey from source to destination. We speak loosely on the basis of superficial appearances, because, for example, it looks as though there is a physical wave travelling through water, or as if a wave is passing along a row of toppling dominoes. In reality (that is, when we look closer), we see small volumes of water moving in ellipses but no horizontal flows of water, and individual dominoes falling down but no dominoes traveling anywhere.

> BP: This has been the chief problem in this whole interchange: unconsciousness
> of alternate meanings in one's words that mislead others about what one is
> trying to say. It's happening on both sides of the exchange.

RM: That's kind of hard for me to believe. Everything Martin and
Abbott say points to their believing that the system bases its output
on information transmitted to it about the disturbance.

BP: No, you are TAKING everything they say to point that way, assuming whatever meanings are needed to support that conclusion. If instead of just assuming you were to ask them if they mean that, I think they would disagree. But let them speak for themselves; that's the only way you will ever know what they intended you to understand from their words.

RM: But if all
they mean is that output can be used by an observer to deduce the
disturbance to a controlled variable -- in other words, that
information has no relevance to the functioning of a control system --
then I have no dispute with them.

BP: So why not take what they mean that way? Or even better, ask if that's what they mean. It's pretty close to what I have guessed they mean. But I haven't really asked them, either.

> RM: 2. When control is good there is no correlation between input and
> output.(Marken, R. S. and Horth, B. (2011) When Causality Does Not
> Imply Correlation: More Spadework at the Foundations of Scientific
> Psychology, Psychological Reports, 108, 1-12)
>
> BP: It's not true that there is no correlation between input and output when
> control is good.

RM: No correlation is what is _observed_. There will be a correlation
for a noiseless control system, but in actuality no correlation is what
is observed, and that is what is described in the paper.

BP: I don't recall the details and don't have a copy of the paper handy where I can find it, but I seem to recall that what you observed were low correlations, not correlations like r = 0.000000000... In a pure integrator (which doesn't exist in nature), a signal going into it has a truly zero long-term correlation with the signal coming out of it. Of course you can make a mathematical model behave as nearly ideally as you wish, but you can't make a physical system behave that way.

> RM: 3. When control is good the same disturbance produces exactly the same
> output even though the input may be completely different.(Marken, R.
> S. (1980) The Cause of Control Movements in a Tracking Task.
> Perceptual and Motor Skills, 51, 755-758.)
>
> BP: This makes it sound as if you can make the input different while the
> disturbance remains the same.

RM: Well, it may sound like that to you but that is not what the paper
is about, as you know.

BP: Yes, but I was trying to remind you that the words which come out of you do not necessarily evoke the meanings in the recipient that you intended.

> BP If the disturbance is exactly the same, the input will be exactly
> the same because the output will also be exactly the same, but for noise in
> the control system itself (a hidden disturbance, as Martin Taylor pointed
> out recently).

RM: So what was your point? Yes, there is noise in a normal, human
control system and that makes the actual input variations
different on trials with the same disturbance. So this is another
piece of evidence that information about the disturbance does not get
transmitted (in the ordinary meaning of "transmitted") to the output.

BP: But as Martin pointed out, and as I was trying to remind you, the noise is simply another disturbance that was not taken into account. If you include the noise fluctuations in the net disturbance, then repeating the same noise fluctuations as well as the same disturbance of the other kind will result in both the same input and the same output. When you run the tracking model twice with the same disturbance pattern (there is only one disturbance at a time in the model), the repeated input and output traces lie on top of the old traces, exactly.
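The replay point above can be sketched directly (the model and parameters are illustrative assumptions, not the actual tracking model): treat the internal noise as one more component of the net disturbance; replaying both components reproduces the traces exactly, while fresh noise does not.

```python
import math
import random

def run(d_trace, noise):
    """Simulate the loop with noise injected as a hidden disturbance."""
    o, i_trace, o_trace = 0.0, [], []
    for d, n in zip(d_trace, noise):
        i = o + d + n                 # noise adds to the net disturbance
        o += 0.01 * 50.0 * (0.0 - i)  # integrating output function, ref = 0
        i_trace.append(i)
        o_trace.append(o)
    return i_trace, o_trace

rng = random.Random(1)
d = [math.sin(0.05 * t) for t in range(1000)]
noise1 = [rng.gauss(0, 0.05) for _ in range(1000)]
noise2 = [rng.gauss(0, 0.05) for _ in range(1000)]

i1, o1 = run(d, noise1)
i1r, o1r = run(d, noise1)  # replay the same noise: traces repeat exactly
i2, o2 = run(d, noise2)    # fresh noise: traces differ

assert i1 == i1r and o1 == o1r
assert i1 != i2
```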

RM: Martin should have fun with this. But to me it's completely
irrelevant to the points of the demonstrations, which are that it is
possible to create situations where an observer will see that no
information (in any sense) about the disturbance is "transmitted" (in
the ordinary sense of that word) to the output.

>BP: By criticizing what you said I am by no means exempting Martin and Bruce
> from the same accusations. The blind are accusing the blind of being blind.

RM: I think the rose-colored-glasses wearer (you) is trying to find
some kind of equivalency (even a false one) to make it seem like we're
all on the same page.

BP: But we're not: you are very much on a different page because you're not trying to find out what Martin and Bruce mean; you're telling them what they have to mean, and ignoring them when they tell you they mean something different.

>BP: I'm sure that Martin never meant that Information in the disturbance is
> conveyed to the output via the input in the same way a letter is conveyed to
> a person's mailbox via a post office, arriving in recognizable form.

RM: Well, if Martin never meant that then perhaps he could just say he
never meant it and we would have no problem.

But he did say so. Martin, can you find that passage and send it again?

It seems to me that if
this is never what Martin (and Bruce A) meant then they should be
comfortable with the idea that information theory really has nothing
to do with how a control system functions.

Martin has already said that, or near enough to satisfy me.

After all, information
theory was developed to deal with situations where information in a
message is conveyed to a receiver in the same way that a letter is
conveyed to a person's mailbox via a post office. Measures of
information transmission were used to determine how effectively the
"post office" (communication channel) could handle various volumes of
letters. Since a control system does not work like a communication
channel Martin and Bruce should be happy to admit that information
theory is irrelevant to the functioning of a control system.

BP: Did you see Bruce A's example in which a control system is used to transmit a waveform from an input to an output?

RM: Well, it seems to me that we are always testing as best as we can
to get at each other's true meanings. I think Bruce A. got closest to
confirming your idea about what they mean by "transmitting
information" when he said that information theory could be used to
deduce the disturbance from outputs so that an organism could be used
as the equivalent of a voltmeter. Which is a peculiar thing to want
to do, of course, but you could do that just as well with control
theory; no information theory needed.

BP: I can also mention that the Bose noise-cancelling headphones work in a similar way: the music comes in electrically as a reference signal, and the control system makes the net sound-pressure output to the inner ear (from a tiny loudspeaker) match the reference signal variations (the sonic noise plus music is picked up by tiny microphones inside the earphones). So here we have a reference input apparently being transmitted to the sound output. Apparently only, of course, since electronic systems don't actually contain physical things moving from one place to another, except inside transistors and vacuum tubes. In wires with an ampere of current flowing in them apparently close to the speed of light, the average electrons actually drift only a few millimeters per second.

BP: As to your last sentence, that's a different question and I think I agree with you. For everything you can find out about control systems using information theory (other than the bits and uncertainties), you can find out the same thing just using ordinary principles of physics. Of course if you want to express your findings in terms of bits and uncertainty-reduction, for some reason, you use information theory. Chacun a son gout plus a few squigglies.

Best,

Bill P.

[From Bill Powers (2012.12.22.1520 MST)]

Martin Taylor 2012.12.22.13.36--

[MT to RM] I think that I might follow Bruce Abbott in continuing to consider information-theoretic approaches to control, but at the same time to cease responding to your irrelevant comments on the issue.

BP: What I would like to see is an improvement of communication skills all around. This discussion has not exactly been a model of clear and unambiguous verbiage -- there's too much tension in the air and too little attempt to find out what other people mean. Annoyance prevails over reason and friendship. Isn't life a bit too short for us to waste our time like that? Perhaps it just looks that way to me, but you didn't all have white hair when we first met. Mene, mene, tekel, upharsin. Google it.

Best,

Bill P.

[From Rick Marken (2012.12.22.1610)]

Bill Powers (2012.12.22.1352 MST)--

RM: This post of yours should be a nice Xmas present to Bruce and
Martin. So I will concede defeat here.

BP: I wasn't picking winners and losers. What were you trying to win?

RM: The argument about the relevance of information theory to
understanding control. I didn't expect to win but I did expect to
learn something from it and I certainly have.

BP: When Martin says that information theory cannot explain how control
systems work, and that information in the disturbance can't be used to
explain why the output opposes it, and that the control system itself does
not use information about the disturbance to compute its output, it seems
clear to me that he is not thinking of information theory the way you are or
I was. Why won't you grant him that?

RM: For the same reason you have never granted that psychologists
understand behavioral and cognitive psychology differently than you do
and in a way that is actually perfectly compatible with PCT.

BP: When you have to resort to the dictionary to find the "real" meaning of a
word you're no longer trying to understand what someone meant.

RM: The dictionary was just a joke. I know what transmit means as well
as you do, and it doesn't mean "deduce"; "deduce" does.

BP: I join you in disagreeing that information is a thing or substance physically
passed from disturbance to input to output, so perhaps "transmit" is just
being used metaphorically by Bruce or Martin. I believe they both would
agree, though probably for different reasons. You can try to hold them to a
literal dictionary meaning, but that would only be for the purpose of
winning an argument, not for the purpose of finding out what they do mean.

RM: I haven't done this with just words. The demos are an attempt to
implement the words in terms of observable behavior. I thought we all
had agreed, for example, that the reason Martin and Abbott say that
there is information about the disturbance in output is because of the
high (negative) correlation that is typically seen between d and o. So
I created demos where control is good and there is no correlation
between d and o. Soon after I presented this data, which should have
been the end of the argument, Martin starts giving all kinds of post
hoc reasons why the lack of o - d correlation doesn't mean that
information about d is not transmitted to o. This was not a word
problem; this was a changing the rules in the middle of the game
problem.

RM: Every piece of evidence I present (that you criticize) shows that no
information (or Information or stuff or ether or whatever) is _transmitted_
from disturbance to output.

BP: I agree with that, and I'm pretty sure that Martin would agree and Bruce
might agree. So if you think you are disagreeing with them, that would be
evidence that there may be no disagreement but just a difference in word
usage. You'd have to ask them, not me, to see if I'm guessing right.

RM: I think the word meaning we disagree about is the word "agree". If
you think Martin and Abbott have been agreeing that my demos are
evidence that no information is _transmitted_ from disturbance to
output then what you mean by "agree" is something more like what I
mean by "vehemently disagree" ;-)

RM: A control system knows only the state of the controlled
perception relative to the reference specification for the state of
that perception and (when properly designed) acts to keep that
perception in the reference state. When it does that, and the
disturbance and feedback functions are linear and constant, output
variations will precisely mirror disturbance variations. But this is a
side effect of the control process; it doesn't result from the system
knowing anything about the disturbance (as you, of course, know).

BP: Are you sure that Bruce and Martin don't know that? I would be
astonished if they didn't.

RM: I would be astonished if they did. I hope they say whether they
do know it. But it is difficult for me to believe that anyone who
really knew that would have any interest in talking about information
about disturbances being transmitted to the output of a control
system. There is no way I can understand those words ("information
about disturbances being transmitted to the output of a control
system") in a way that is consistent with the idea that an observed
correlation between output and disturbance is a side effect of acting
to keep error small. Apparently you can and that's why I will leave
this discussion to you, if you want to continue it.

The remainder of your replies shows that you can understand Martin and
Bruce to be saying things about information theory that are perfectly
consistent with an understanding of control theory. So I hope that
you will continue the conversation with them. I am clearly incapable
of doing it and I think I might learn a lot about how to understand
ideas that at first seem completely inconsistent with PCT as being
pleasantly consistent with it.

BP: Did you see Bruce A's example in which a control system is used to
transmit a waveform from an input to an output?

RM: Yes, and I complimented him on his innovative approach to
designing a voltmeter (I guess I should have said oscilloscope).

BP: As to your last sentence, that's a different question and I think I
agree with you. For everything you can find out about control systems using
information theory (other than the bits and uncertainties), you can find out
the same thing just using ordinary principles of physics. Of course if you
want to express your findings in terms of bits and uncertainty-reduction,
for some reason, you use information theory. Chacun a son gout plus a few
squigglies.

RM: Righto.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.12.22.23.07]

[From Bill Powers (2012.12.22.1520 MST)]

Martin Taylor 2012.12.22.13.36--

[MT to RM] I think that I might follow Bruce Abbott in continuing to consider information-theoretic approaches to control, but at the same time to cease responding to your irrelevant comments on the issue.

BP: What I would like to see is an improvement of communication skills all around. This discussion has not exactly been a model of clear and unambiguous verbiage -- there's too much tension in the air and too little attempt to find out what other people mean. Annoyance prevails over reason and friendship. Isn't life a bit too short for us to waste our time like that? Perhaps it just looks that way to me, but you didn't all have white hair when we first met. Mene, mene, tekel, upharsin. Google it.

Thanks for that. It's a breath of fresh air. Don't need Google. And thanks for your comments to Rick on the technical side also.

One of the reasons for my delayed response to many messages is that I have been trying instead to put together a tutorial that is plain and easy to understand, as a sequel to my message of [Martin Taylor 2012.12.08.11.32], which I had hoped would already have made clear exactly how "transmit information" should be understood.

That message considered a disturbance that consisted of random steps, and used both the physical waveforms and the information-theoretic implications of those waveforms to show how the information relations changed over time among the variables of interest (disturbance, error, output) in a trivially simple control loop.

The confusion that has followed has surprised me, so I've been trying to think of ways to be clearer in a more general form. It isn't easy, and it isn't made easier by a stream of irrelevant demos, ignorance-based falsehoods, and quasi-religious dogma delivered _ex cathedra_. They constitute noise, and noise seldom enhances communication. I had (and hope I no longer have, as I stated in what you quoted above) an unfortunate tendency to respond to these noise messages. Responding only seems to increase the noise level.

I have written and deleted three versions of the new tutorial so far, because each seemed overly long and technical, and failed to make the important points sufficiently clear. What I am trying for is a version that appeals to the intuition of anyone familiar with other modes of analysing control systems.

The best of the holiday season to all.

Martin

PS. This evening we watched a National Geographic program on PCT. It was about the Pacific Crest Trail, but they always referred to it as PCT. Maybe our PCT journey is as difficult as it was for those following the other PCT!

[From Rick Marken (2012.12.23.0900)]

Martin Taylor (2012.12.22.23.07)--

BP: What I would like to see is an improvement of communication skills all
around...

MT: Thanks for that. It's a breath of fresh air. Don't need Google. And thanks
for your comments to Rick on the technical side also.

One of the reasons for my delayed response to many messages is that I
have been trying instead to put together a tutorial that is plain and easy to
understand, as a sequel to my message of [Martin Taylor 2012.12.08.11.32],
which I had hoped would already have made clear exactly how "transmit
information" should be understood.

RM: Since Bill and I agree that information is not a thing that is
passed physically from disturbance to input to output (as per Bill's
quote here):

BP: I join you in disagreeing that information is a thing or substance
physically passed from disturbance to input to output, so perhaps "transmit"
is just being used metaphorically by Bruce or Martin.

will your tutorial make it clear that you are just using "transmit
information" as a metaphor?

MT: The confusion that has followed has surprised me, so I've been trying to
think of ways to be clearer in a more general form. It isn't easy, and it
isn't made easier by a stream of irrelevant demos, ignorance-based
falsehoods, and quasi-religious dogma delivered _ex cathedra_.

RM: Bill doesn't seem to think of the demos as irrelevant. Indeed, he
seems to think that you (and possibly Bruce, too) agree with the
conclusions of my demos, per this interchange between Bill and me:

RM: Every piece of evidence I present (that you criticize) shows that no
information (or Information or stuff or ether or whatever) is _transmitted_
from disturbance to output.

BP: I agree with that, and I'm pretty sure that Martin would agree and Bruce
might agree.

So, nu? Do you agree that my demos show that no information is
transmitted (in the sense of something being physically moved from one
point to another) from disturbance to output? I didn't think you did
agree but Bill seems to think so. And Bill is an honorable man (not a
noise source like me;-)

Finally, could you confirm that you do, indeed, understand what I say here:

RM: A control system knows only the state of the controlled
perception relative to the reference specification for the state of
that perception and (when properly designed) acts to keep that
perception in the reference state. When it does that, and the
disturbance and feedback functions are linear and constant, output
variations will precisely mirror disturbance variations. But this is a
side effect of the control process; it doesn't result from the system
knowing anything about the disturbance (as you, of course, know).

All the apparently metaphorical talk about transmitting information
about the disturbance led me to thinking that you and Bruce didn't
really understand this fact about how control systems work. Bill
chides me for even considering it possible that you don't understand
this:

BP: Are you sure that Bruce and Martin don't know that? I would be
astonished if they didn't.

But it would be nice to have you confirm it so that I know that all
this talk about information being transmitted is really just a
metaphor and that what you really mean is that when control is good it
looks _as if_ information about the disturbance is being transmitted
via the organism to the output.

Best regards

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.12.23.1045 MST)]

Rick Marken (2012.12.23.0900) --

RM: Since Bill and I agree that information is not a thing that is
passed physically from disturbance to input to output (as per Bill's
quote here):

> BP: I join you in disagreeing that information is a thing or substance
> physically passed from disturbance to input to output, so perhaps "transmit"
> is just being used metaphorically by Bruce or Martin.

will your tutorial make it clear that you are just using "transmit
information" as a metaphor?

BP: Do you recall my use of neural signals as another example of apparent "transmission" that is incorrect? Spikes do not physically travel along a neural fiber, though they seem to, and though we often speak of them as if they did. What happens instead is that a spike occurring in one place triggers, just before it collapses, another one a little further on, each spike drawing on local energy sources. The same thing happens in the apparent transmission of disturbances to outputs. There is no single thing moving from a disturbance to an input variable and then on through a perceptual channel, comparator, error signal, and output function to the final output variable.

However, there is a lawful chain of causation between disturbance and output, which we build into each model we run. The output is a function of the net disturbance, the value of the output some short time in the past, and the reference signal. In our models, we show signals traveling from the output of one function to the inputs of the next ones, but that is only a metaphor, too. Nothing actually moves from one function to the next one. Instead, there is a lawful chain of causation, so one function can affect another one some distance away through a series of intervening effects.

This is also the case in propagation of electromagnetic waves, as in radio transmissions. A changing magnetic field in one place produces an electric field, which changes and generates a changing magnetic field a little further on -- we see this as propagation of alternating waves of electric and magnetic fields. We do speak of transmission of energy, but that is another metaphor since energy is not a physical substance but a capacity to do work.

So in fact there is no such thing as transmission even in many cases where we speak freely of transmission. However, even in cases where there is no actual transmission and no correlation between cause and effect, there can still be a lawful relationship between cause and effect. If the output function of a control system is a pure integrator, the output is the integral of the error signal, so the correlation of error with output is zero. The perceptual signal is the sum of disturbances and effects from the output. For slow changes in the disturbance, the output will vary so as to remain nearly equal and opposite to the disturbance in terms of effects on the perceptual signal. So even though nothing physically passes from disturbance to output, we can still recognize a very definite and precise effect of the disturbance in the changes of the output. When there is some noise in the system, the effects of disturbance on output become somewhat variable, but the overall relationship can still be seen even though the proportion of noise in the error signal is rather large. The integration that follows smooths out a great deal of the noise.
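The pure-integrator case BP describes can be sketched in a few lines of simulation. This is an illustrative sketch, not code from any of the demos under discussion: the gain, step size, unit feedback function, and sine disturbance are all assumptions chosen for simplicity. It shows both of BP's claims at once: the error-output correlation is near zero (output is the integral of error), while the output nearly mirrors the disturbance.

```python
import math

def corr(x, y):
    # Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

G, dt, r = 100.0, 0.001, 0.0        # loop gain, time step, reference (illustrative values)
o = 0.0                             # output quantity
ds, es, os_ = [], [], []
for t in range(20000):              # 20 s of simulated time
    d = math.sin(2 * math.pi * t * dt)  # slowly varying disturbance, period 1 s
    p = o + d                       # perceptual signal (unit feedback function)
    e = r - p                       # error signal
    o += G * e * dt                 # pure-integrator output function
    ds.append(d); es.append(e); os_.append(o)

print(corr(es, os_))    # near zero: output is the integral of the error
print(corr(ds, os_))    # near -1: output remains nearly equal and opposite to d
```

The near-zero error-output correlation and the near-perfect (negative) disturbance-output correlation coexist, which is the point: the "effect" of the disturbance is visible in the output even though nothing passes from one to the other.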

I've been thinking about your demo in which you randomly switch the sign of the feedback function during a run of the simulation, and thus get zero correlation between disturbance and output. I think that occurs only because you are averaging together two control behaviors seen with different feedback functions. If you wrote the feedback function as a function which switches signs at specific times, the solution of the system equations would change its characteristic at those times. With that switching taken into account, the output would be the negative of the disturbance when it should, and have the same sign as the disturbance when it should, so the correlation would again be high. (I don't recall all the details, but I assume you did the switching so the control system retained negative feedback at all times).
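A guess at the structure of the switching demo BP is analyzing here (not Rick's actual code; the flip schedule, gain, and sine disturbance are assumptions) can make the averaging point concrete. The output-function sign is switched along with the feedback sign so the loop retains negative feedback throughout, as BP surmises; control stays good, yet the whole-run disturbance-output correlation collapses toward zero because the two regimes cancel.

```python
import math

def corr(x, y):
    # Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

G, dt, r = 100.0, 0.001, 0.0            # illustrative gain, time step, reference
o = 0.0
N = 40000                               # 40 s of simulated time
ds, os_, errs = [], [], []
for t in range(N):
    f = 1.0 if t < N // 2 else -1.0     # feedback sign flips halfway through the run
    d = math.sin(2 * math.pi * t * dt)  # slowly varying disturbance, period 1 s
    p = f * o + d                       # perceptual signal through the feedback function
    e = r - p
    o += G * e * f * dt                 # output sign matched so feedback stays negative
    ds.append(d); os_.append(o); errs.append(abs(e))

print(max(errs[200:]))   # control remains good in both regimes (small error throughout)
print(corr(ds, os_))     # near zero: o = -d in one half-run, o = +d in the other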

I have to conclude that you're barking up the wrong trees here. Perhaps when Martin finishes his explanation, you'll be able to agree with that.

Best,

Bill P.

[From Rick Marken (2012.12.23.1145)]

Bill Powers (2012.12.23.1045 MST)--

BP: However, there is a lawful chain of causation between disturbance and
output, which we build into each model we run.

RM: I thought the chain of causation went from o+d to o and that this
chain is part of a loop. Yes, there is a path from d to o but it is
completely confounded with the path from o to o. Saying there is a
chain of causation from d to o could suggest to those trying to avoid
doing research based on the Test for Controlled Variables that we can
study the characteristics of this chain using conventional IV-DV
methods.

BP: I've been thinking about your demo in which you randomly switch the sign of
the feedback function during a run of the simulation, and thus get zero
correlation between disturbance and output. I think that occurs only because
you are averaging together two control behaviors seen with different
feedback functions.

RM: That's true but, I think, irrelevant to the point of the demo,
which was simply to show that it is possible to create a situation
where there can be good control with no observed correlation between
disturbance and output, and thus no apparent information about the
disturbance in the output.

BP: I have to conclude that you're barking up the wrong trees here.

RM: I'm barking up the same trees I've been barking up since 1980,
when the implications of PCT for my field (experimental psychology)
hit me squarely between the eyes. If Martin and Bruce would agree that
it is impossible to study the causal path from disturbance (S) to
output (R) in a control loop using the conventional methods of
experimental psychology then I'll stop barking.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.12.23.1915 MST)]

Rick Marken (2012.12.23.1145)]

> Bill Powers (2012.12.23.1045 MST)--
>
> BP: However, there is a lawful chain of causation between disturbance and
> output, which we build into each model we run.

RM: I thought the chain of causation went from o+d to o and that this
chain is part of a loop. Yes, there is a path from d to o but it is
completely confounded with the path from o to o. Saying there is a
chain of causation from d to o could suggest to those trying to avoid
doing research based on the Test for Controlled Variables that we can
study the characteristics of this chain using conventional IV-DV
methods.

BP: That conclusion would demonstrate a misunderstanding. Each function in the loop has an input and an output, and is a straight-through cause-effect function. When you solve the system of equations that describes all the functions and their interconnections, however, you get the behavior of the closed-loop system. In fact you can show that each system variable in the loop, such as the output quantity, can be described as a function of the two independent variables, d and r (where d may be the sum of many disturbances). With r held constant, variations in o are a function of variations in d alone unless system noise is important, which I don't think is often the case.

The main difference between an S-R system and the closed-loop system is that in the latter kind of system, the overall function connecting d and o has a form determined mostly by the feedback function and to a lesser degree (how much less depending on loop gain) by functions acting in the forward direction, the perceptual input function, comparator, and output function.
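BP's point about the dominance of the feedback function can be seen directly from the static loop equations. The following is a sketch under simplifying assumptions (static analysis, proportional output function, constant disturbance; the numbers are illustrative): solving p = f*o + d, e = r - p, o = G*e gives o = G*(r - d)/(1 + G*f), which approaches (r - d)/f as the loop gain grows, so the forward functions drop out of the d-to-o relationship.

```python
def closed_loop_output(d, r, G, f):
    # Static solution of the loop equations:
    #   p = f*o + d   (feedback function f plus disturbance)
    #   e = r - p     (comparator)
    #   o = G*e       (forward gain G)
    # Substituting: o = G*(r - f*o - d)  =>  o = G*(r - d) / (1 + G*f)
    return G * (r - d) / (1.0 + G * f)

# As G grows, o approaches (r - d)/f = (5 - 2)/0.5 = 6: the apparent
# d-to-o "function" is the inverse of the feedback function, with the
# forward functions contributing less and less.
for G in (1.0, 10.0, 1000.0, 1e6):
    print(G, closed_loop_output(d=2.0, r=5.0, G=G, f=0.5))
```

This is why a model with no feedback in it misreads the d-to-o relationship: it attributes to the organism's forward path a form that is actually set by the feedback function in the environment.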

>BP: I've been thinking about your demo in which you randomly switch the sign of
> the feedback function during a run of the simulation, and thus get zero
> correlation between disturbance and output. I think that occurs only because
> you are averaging together two control behaviors seen with different
> feedback functions.

RM: That's true but, I think, irrelevant to the point of the demo,
which was simply to show that it is possible to create a situation
where there can be good control with no observed correlation between
disturbance and output, and thus no apparent information about the
disturbance in the output.

>BP: I have to conclude that you're barking up the wrong trees here.

RM: I'm barking up the same trees I've been barking up since 1980,
when the implications of PCT for my field (experimental psychology)
hit me squarely between the eyes. If Martin and Bruce would agree that
it is impossible to study the causal path from disturbance (S) to
output (R) in a control loop using the conventional methods of
experimental psychology then I'll stop barking.

It is time to update your conception of the implications of PCT, if it is that ancient (unchanged for 32 years?). Your conclusions are, of course, mostly correct, but you tend to express them in terms of idealized approximations that result from letting loop gain go to infinity. When you say "no information" you should be saying "almost no information" and so on. And you probably shouldn't be mentioning information (Information) at all, since those who do understand information theory are saying that you are misrepresenting it.

It is perfectly possible to study the path from disturbance (S) to output (R) using conventional methods of system analysis -- in fact that is exactly what we do. What this analysis does is to show the dominance of the feedback function in determining the apparent properties of that path, a fact that cannot be seen if one uses a model that has no feedback in it. That is the real beef: using the wrong model. This is a very important point, because it shows that S-R theory is not a consequence of stupidity or ancestor worship or some other character defect, but simply a matter of not having discovered the right model. When behaviorism was forming, nobody knew what the right model was because control theory hadn't been invented yet.

This gives us a way to try to enlighten conventional scientists without their having to lose face. They can't be blamed for their teachers not knowing about control systems, nor could their teachers be blamed when nobody at all understood them. A disaster shared by everyone is not nearly as hard to take as a disaster suffered by you alone, especially if it was specifically your fault. We can tell behaviorists that failure to include the facts of control in early analyses of behavior was not specifically anyone's fault. The originators of behaviorism were doing the best they could with what they did know. It just happened to be insufficient for getting the right answers to their questions about organisms.

We have to be very careful not to make Western scientists lose face. They are at least as easy to offend in that regard as our Oriental cousins have been thought to be. Just look at how many of our difficulties in this intellectually advanced forum called CSGnet can be attributed to people trying not to lose face.

Best,

Bill P.