Inconsistent theories (let peace return)

[From Bill Powers (2009.10.01.0703 MDT)]

Martin Taylor 2009.09.30.14.51 –

MT: This message of yours is one
such occasion on which you really have annoyed me, I hope
inadvertently. Perhaps I should use the word “puzzled”
rather than “annoyed”, because I know you are better informed
than you pretended to be in what you wrote – and yet because I know this
I am led to believe you intended to annoy me, even though historical
evidence leads me not to believe that to be the case. I therefore have
two perceptions in direct conflict: (1) that you intended to annoy, and
(2) that you did not intend to annoy. But (2) presupposes that you did
not understand what I wrote, and on this occasion I find that truly hard
to believe.

BP: Perhaps the annoyance is merely reorganization getting under way when
understanding seems impossible. My annoyance has just about gone away now
that I think I see what the problem is. Here it is:

MT: I should point out, though,
that reconstruction was not the object of my presentation. It was an
extreme illustration of the fact that information about the disturbance
waveform is available in the perception waveform.

BP: Of course this is not true and you know it’s not true. It’s true only
if you also know the forms of the input and output functions and the
feedback connection through the external world, as well as the form of
the function connecting the disturbance to the controlled variable, and
take those into account as you compute what the disturbance must have
been. If that is what you meant to say, you didn’t say it (there was one part of one sentence, I recall now, where you did say something about not meaning the perceptual signal alone, but at the time I didn’t see what it meant, if you meant what I am now seeing).
Behind this misunderstanding, if that is what it was, is another: it
seemed to me that this got started because you were saying this
information about the disturbance was of some use to the control system,
as part of its means of controlling. Of course that information is not
available to the control system, because it has no perception of its
output function or the external feedback connection or indeed anything
but its perceptual signal. An external observer who knows the properties
of all the components of the control system and the properties of the
environment in the feedback path can, indeed, work backward (within
limits) to deduce what the net disturbance must have been (though not any
individual disturbance if more than one is present at the same time). I
didn’t realize you were talking about an external observer; I thought you
were talking about how information was involved in the process of
control, and saying that control did depend on knowledge by the control system of the form of the disturbance. That’s what I’ve been
arguing against all along, believing it was what you meant. That
interpretation of what you were getting at influenced my interpretation
of everything else you said.
Clearly, there is no information about the environmental feedback
function or the output function (or for that matter, the perceptual input
function) represented in the perceptual signal. Since that information
would be required in order for the control system to deduce the present
and past values of the disturbance, the control process does not depend
on having any representation or reconstruction of the disturbance
available to the control system. When you spoke of availability,
you meant only availability to the observer. You meant reconstruction by
the observer, not by the control system. That I could agree to.

And now I am wondering if by “the disturbance” you could have
meant the perturbation of the controlled quantity rather than the
variable we call d, which is the cause of the perturbation.

Best,

Bill P.

[Martin Taylor 2009.10.01.12.03]

[From Bill Powers (2009.10.01.0703 MDT)]

Martin Taylor 2009.09.30.14.51 –

Let peace return, indeed!

But not necessarily total agreement :-)

MT: I should point out, though, that reconstruction was not the object of my presentation. It was an extreme illustration of the fact that information about the disturbance waveform is available in the perception waveform.

BP: Of course this is not true and you know it’s not true. It’s true only if you also know the forms of the input and output functions and the feedback connection through the external world, as well as the form of the function connecting the disturbance to the controlled variable, and take those into account as you compute what the disturbance must have been.

I believe that what I said was true. Let’s ask why we disagree about
this. I don’t think you meant to assert that the object of my
presentation was the reconstruction of the disturbance waveform,
contrary to my pointing out that it wasn’t. So the “not true” must
refer to my second sentence.

I could agree with a revision of what you said if you replaced the “It”
at the start of your second sentence by “Reconstruction of the
disturbance waveform from the perceptual waveform”. But then your
comment would not be relevant to mine, since the criteria you mention
are the same criteria I used in setting up the “extreme illustration”
both in 1992/3 and recently. You would simply be reaffirming what I
said.

So you probably mean something different.

I cannot agree with a revision of what you said if you replace the “It”
by “Extracting information about the disturbance waveform from the
perceptual waveform”, because that much-relaxed requirement needs only
that there be some consistency in the functions and pathways concerned.
It doesn’t have to be much consistency – only enough to allow some
control. If this is what you believe to be “not true”, perhaps we could
explore the disagreement.

If that is what you meant to say, you didn’t say it (there was one part of one sentence, I recall now, where you did say something about not meaning the perceptual signal alone, but at the time I didn’t see what it meant, if you meant what I am now seeing).

Maybe it’s a quibble, but I think you should say “most of each message”
rather than “part of one sentence”.

Behind this misunderstanding, if that is what it was, is another: it seemed to me that this got started because you were saying this information about the disturbance was of some use to the control system, as part of its means of controlling.

OK. That misunderstanding is probably my fault. I never intended to say
that, at least not in that form.

Of course that information is not available to the control system, because it has no perception of its output function or the external feedback connection or indeed anything but its perceptual signal. An external observer who knows the properties of all the components of the control system and the properties of the environment in the feedback path can, indeed, work backward (within limits) to deduce what the net disturbance must have been (though not any individual disturbance if more than one is present at the same time). I didn’t realize you were talking about an external observer; I thought you were talking about how information was involved in the process of control, and saying that control did depend on knowledge by the control system of the form of the disturbance.

Yes. My fault. Sorry I did not perceive that source of misunderstanding.

Clearly, there is no information about the environmental feedback function or the output function (or for that matter, the perceptual input function) represented in the perceptual signal. Since that information would be required in order for the control system to deduce the present and past values of the disturbance, the control process does not depend on having any representation or reconstruction of the disturbance available to the control system. When you spoke of availability, you meant only availability to the observer. You meant reconstruction by the observer, not by the control system. That I could agree to.

Good. Now we can be on the same track for future work.

And now I am wondering if by “the disturbance” you could have
meant the perturbation of the controlled quantity rather than the
variable we call d, which is the cause of the perturbation.

I didn’t know we ever had a label for the entity that causes the
perturbation. I have always considered “d” (the label on the arrow that
joins the “o” arrow to form the “qi” variable) to be a label sometimes
for a signal path and sometimes a label for the waveform of the signal
on that path. My understanding of the “perturbation of the controlled
quantity” is the net effect of “o” and “d” in perturbing the value of
“qi”. My use of “d” is what is intended when we use the unit-transform
simplification in order to say “p = d + o”. If the perception is
uncontrolled, p = d. In my recent messages, “the disturbance” is the
waveform of the signal “d”.

Does that help explain what I meant by “the disturbance”?

Thank you for finding the core of the misunderstanding, which had
escaped me.

Peace be with you.

Martin

[From Bill Powers (2009.10.01.1057 MDT)]

Martin Taylor 2009.10.01.12.03 –

BP Earlier: Clearly, there is no information about the environmental feedback function or the output function (or for that matter, the perceptual input function) represented in the perceptual signal. Since that information would be required in order for the control system to deduce the present and past values of the disturbance, the control process does not depend on having any representation or reconstruction of the disturbance available to the control system. When you spoke of availability, you meant only availability to the observer. You meant reconstruction by the observer, not by the control system. That I could agree to.

MT: Good. Now we can be on the same track for future work.

BP earlier: And now I am wondering if by “the disturbance” you
could have meant the perturbation of the controlled quantity rather than
the variable we call d, which is the cause of the
perturbation.

MT: I didn’t know we ever had a label for the entity that causes the
perturbation.

BP: (!!!)

I have spent quite a few posts on this subject, distinguishing D from the
effect of D on qi. There is an ambiguity in the term
“disturbance.” It can mean either the variable that is the
cause of a perturbation in another variable, or it can mean the
perturbation that was caused. I always mean the former, because the
observable perturbation of a variable under control is not the
perturbation that would be seen without the feedback effects. You can
apply a disturbance by changing the variable d, but that might not result
in any perceptible disturbance of qi. You can see the problem.

I have always considered
“d” (the label on the arrow that joins the “o” arrow
to form the “qi” variable) to be a label sometimes for a signal
path and sometimes a label for the waveform of the signal on that path.
My understanding of the “perturbation of the controlled
quantity” is the net effect of “o” and “d” in
perturbing the value of “qi”. My use of “d” is what
is intended when we use the unit-transform simplification in order to say
“p = d + o”. If the perception is uncontrolled, p = d. In my
recent messages, “the disturbance” is the waveform of the
signal “d”.

That could easily account for some of our apparent disagreements. I have
NEVER used d to stand for the arrow between the letter d and qi, OR the
resulting change in qi, even when we assume a unit transform. I might
label that arrow “1”, or insert a box in the middle of the
arrow with the label “x 1” to indicate that there is a
multiplier of 1 in that path. In the rubber-band demo, d stands for the
position of the experimenter’s end of the rubber bands, or using other
representations, the force applied to the experimenter’s end. The arrows
in the environment NEVER stand for signals; they stand for properties of
the environment that convert the measure of d into a measure of the
contribution of d to the state of qi.

In the case of the iris “reflex,” d might be the measured
brightness of a light-bulb, or its output in lumens. The arrow going from
d to the retina indicates the laws of optics including effects of
geometry. In this case the feedback would affect the illumination of the
retina by throttling the input of light energy before it gets to the
retina, so it would alter the function between the disturbance and the
retina. When such complicated effects exist it is probably better to draw
an explicit function box between d and qi, where qi in this case is
the amount of light energy reaching the retina. This can also be
understood, in other cases, as an effect of the variable d on the
feedback function rather than as an additive effect on qi.

Does that help explain what I
meant by “the disturbance”?

It helps, but I really recommend reserving that term for a variable that
has independent existence outside the control system, rather than the
function linking it to the input quantity or the actual perturbation of
the input quantity (which is partly caused by feedback from the output
quantity). Unless a single unambiguous usage is adopted, we can have
confusing statements like “I adjusted the disturbance to 100 units,
but there was only a 10-unit disturbance of the input quantity.”
It’s best to use a term like “change” or
“perturbation” to refer to the net effect on qi. Ambiguity is
the enemy of clear writing.

Best,

Bill P.

[Martin Taylor 2009.10.01.15.06]

[From Bill Powers (2009.10.01.1057 MDT)]

Martin Taylor 2009.10.01.12.03 –

BP Earlier: Clearly, there is no information about the environmental feedback function or the output function (or for that matter, the perceptual input function) represented in the perceptual signal. Since that information would be required in order for the control system to deduce the present and past values of the disturbance, the control process does not depend on having any representation or reconstruction of the disturbance available to the control system. When you spoke of availability, you meant only availability to the observer. You meant reconstruction by the observer, not by the control system. That I could agree to.

MT: Good. Now we can be on the same track for future work.

BP earlier: And now I am wondering if by “the disturbance” you could have meant the perturbation of the controlled quantity rather than the variable we call d, which is the cause of the perturbation.

MT: I didn’t know we ever had a label for the entity that causes the
perturbation.

BP: (!!!)

I have spent quite a few posts on this subject, distinguishing D from the effect of D on qi. There is an ambiguity in the term “disturbance.” It can mean either the variable that is the cause of a perturbation in another variable, or it can mean the perturbation that was caused. I always mean the former, because the observable perturbation of a variable under control is not the perturbation that would be seen without the feedback effects. You can apply a disturbance by changing the variable d, but that might not result in any perceptible disturbance of qi. You can see the problem.

I see an ambiguity quite different from the one you identify. I thought
you were seeing the one I see, because I never considered that the
observable perturbation could be identified as the disturbance, or as
“d”. The ambiguity I saw was between the variable that combines with
the output variable to generate qi and the source of that variable. It
never occurred to me that you might think of the disturbance as “the
effect of D on qi” or that you might think I would. So, if I now
understand you correctly, I think that what I assumed to be “the
disturbance” is what you “always mean”, because it is what I also
always mean.

I have always considered “d” (the label on the arrow that joins the “o” arrow to form the “qi” variable) to be a label sometimes for a signal path and sometimes a label for the waveform of the signal on that path. My understanding of the “perturbation of the controlled quantity” is the net effect of “o” and “d” in perturbing the value of “qi”. My use of “d” is what is intended when we use the unit-transform simplification in order to say “p = d + o”. If the perception is uncontrolled, p = d. In my recent messages, “the disturbance” is the waveform of the signal “d”.

That could easily account for some of our apparent disagreements. I have NEVER used d to stand for the arrow between the letter d and qi, OR the resulting change in qi, even when we assume a unit transform. I might label that arrow “1”, or insert a box in the middle of the arrow with the label “x 1” to indicate that there is a multiplier of 1 in that path.

I don’t see how it can account for any recent disagreement, since I simply said what seems to be how you “always mean” the disturbance – as the waveform of the variable that is added to the output to form the input quantity qi (s in the diagram).

[Figure: Martin’s animated control-loop diagram, ctrl.logo.transp1.png]

I don’t remember you having complained about this figure before now.
It’s been around and used (in its animated form) for many years, and it
was complimented when I first used it.

In the rubber-band demo, d stands for the position of the experimenter’s end of the rubber bands, or, using other representations, the force applied to the experimenter’s end. The arrows in the environment NEVER stand for signals; they stand for properties of the environment that convert the measure of d into a measure of the contribution of d to the state of qi.

I would call the measure of the contribution of d exactly a signal. And
that is what the arrows in diagrams usually stand for, at least as I
was taught. If there is a function that alters the signal at the tail
of the arrow into a different signal at the head of the arrow, there
ought to be a box or something to represent that function. It should
never be a property of the arrow. Hence, if an arrow is labelled “d”,
either the letter identifies the arrow itself, or it specifies the
signal present at both ends of the arrow. To me, an arrow never
represents a property of the environment – well, never is an
exaggeration, since an arrow could represent the direction of wind
flow, or the slope of ground or some other vector field, but we are
talking about arrows that represent signal paths in connected networks,
not vector fields.

In the case of the iris “reflex,” d might be the measured brightness of a light-bulb, or its output in lumens. The arrow going from d to the retina indicates the laws of optics, including effects of geometry. In this case the feedback would affect the illumination of the retina by throttling the input of light energy before it gets to the retina, so it would alter the function between the disturbance and the retina. When such complicated effects exist it is probably better to draw an explicit function box between d and qi, where qi in this case is the amount of light energy reaching the retina. This can also be understood, in other cases, as an effect of the variable d on the feedback function rather than as an additive effect on qi.

Yes. Effects of extrinsic variables on the feedback function are an aspect of PCT that is seldom discussed here, but it is something that I have long thought quite important in social interactions (see http://www.mmtaylor.net/PCT/Mutuality/index.html). I don’t like calling those effects “disturbances” because they can have the effect of enhancing the ability to control, and don’t necessarily affect the value of qi at all.

MT earlier: Does that help explain what I meant by “the disturbance”?

BP: It helps, but I really recommend reserving that term for a variable that has independent existence outside the control system,

MT: As I always do…

BP: rather than the function linking it to the input quantity or the actual perturbation of the input quantity (which is partly caused by feedback from the output quantity).

MT: Yes.

BP: Unless a single unambiguous usage is adopted, we can have confusing statements like “I adjusted the disturbance to 100 units, but there was only a 10-unit disturbance of the input quantity.” It’s best to use a term like “change” or “perturbation” to refer to the net effect on qi. Ambiguity is the enemy of clear writing.

MT: True, but in any natural language, ambiguity is unavoidable. The main reason for dialogue is that monologue is necessarily ambiguous, since without using “the Test” the speaker/writer cannot tell what meaning the listener/reader is getting. That’s why the syntax for writing in academic books is more formal than the syntax for street conversation. Despite this, it is not always true that the reader of an academic book understands precisely what the author intended him to understand. E-mail falls between. Opportunities for detecting and correcting misunderstandings do exist, as we have recently demonstrated. But they are not as immediate as would be the case if we spoke face-to-face around a blackboard.

Martin

[From Rick Marken (2009.10.01.1415)]

Bill Powers (2009.10.01.0703 MDT)–

Martin Taylor 2009.09.30.14.51 –

MT: I should point out, though,
that reconstruction was not the object of my presentation. It was an
extreme illustration of the fact that information about the disturbance
waveform is available in the perception waveform.

BP: Of course this is not true and you know it’s not true. It’s true only
if you also know the forms of the input and output functions and the
feedback connection through the external world, as well as the form of
the function connecting the disturbance to the controlled variable, and
take those into account as you compute what the disturbance must have
been.

Right. That was my point. Since p = h[f(o)+g(d)], if you know h, the input function, f(), the output function, g(), the function connecting d to the controlled variable, as well as the values of o and p, then you can certainly solve for d. This is basic algebra.
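
Spelled out, assuming h and g are invertible (an assumption the algebra requires but that isn’t stated above):

```latex
p = h\!\left[f(o) + g(d)\right] \;\Rightarrow\; d = g^{-1}\!\left(h^{-1}(p) - f(o)\right)
```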

Of course that information is not
available to the control system, because it has no perception of its
output function or the external feedback connection or indeed anything
but its perceptual signal. An external observer who knows the properties
of all the components of the control system and the properties of the
environment in the feedback path can, indeed, work backward (within
limits) to deduce what the net disturbance must have been (though not any
individual disturbance if more than one is present at the same time).

Right. I mentioned this as well. The calculation of d (assuming just one disturbance variable is involved) can’t be done by the control system itself, only by an outside observer. So if “solving for d given p and all this other information” is what is meant by “information about the disturbance in the perception” then it’s not information that can be used by the control system itself. Fortunately, control systems don’t need information about the disturbance in perception because they are “automatically” protecting the controlled perception from disturbance by acting, on the basis of the error signal, to reduce the error signal; it’s just simple closed-loop negative feedback.
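
A minimal simulation of that point — my sketch, with arbitrary gain, timestep, and disturbance waveform: the controller below acts only on its error signal, and d appears nowhere inside it, yet p stays near r.

```python
import math

# Sketch of a closed-loop negative feedback controller. The loop never
# senses or represents the disturbance d; it only integrates its error
# signal e = r - p. Parameters are illustrative.
dt, gain = 0.01, 50.0
r = 1.0                              # reference signal
o = 0.0                              # output quantity
for step in range(5000):
    d = math.sin(0.7 * step * dt)    # disturbance, unknown to the loop
    p = o + d                        # unit transforms: qi = o + d, p = qi
    e = r - p                        # error signal, the only basis for action
    o += gain * e * dt               # integrating output function
print(f"residual error after 50 s: {e:.4f}")   # stays small despite d
```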

This discussion of “information about the disturbance in perception” must seem very arcane to many observers who probably wonder why we care. So what if people want to think that there is information about the disturbance in perception? What could that possibly have to do with the advancement of PCT? I’m sure I’ve discussed this before but, for me, it’s important to get this right (understanding that there is no information about the disturbance in perception) because getting it wrong means missing what is most important about the role of perception in PCT. In PCT, perception is controlled. In information theory, perception is informative. In PCT, perception is “informed” by the agent; the agent’s reference signal “tells” the perception what it should do. In information theory, the agent is “informed” by perception; the perception “tells” the agent what to do.

In PCT, perceptual variables are controlled variables. In information theory, perceptual variables are informing variables. I would expect that when one tries to graft information theory onto PCT the inclination would be to focus on the informative as opposed to the controlled aspect of perception. So interest in explaining behavior in terms of what variables are controlled (using the test for controlled variables) would probably take a back seat to an interest in explaining behavior in terms of the informational content of perception (using conventional causal methodology).

So that’s why I think this is important. Information theory, like other conventional theories of behavior, points in precisely the wrong direction regarding where to look for an explanation of the behavior we see. Information theory points to the world outside the organism; to the information in perception. PCT points to the world inside the organism; to the references for what the states of perceptual variables should be.

Of course, no one can convince someone else to change their mind. The only one who can change a person’s mind is the person themselves. But I think the discussion of the relationship between information theory and PCT is useful, even if it changes no minds (which it won’t) because, if studied carefully, it will help those with an open mind to better understand control theory.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.10.02.01.01]

[From Rick Marken (2009.10.01.1415)]

Bill Powers (2009.10.01.0703 MDT)–

Martin Taylor 2009.09.30.14.51 –

MT: I should point out, though,
that reconstruction was not the object of my presentation. It was an
extreme illustration of the fact that information about the disturbance
waveform is available in the perception waveform.

BP: Of course this is not true and you know it’s not true. It’s true only if you also know the forms of the input and output functions and the feedback connection through the external world, as well as the form of the function connecting the disturbance to the controlled variable, and take those into account as you compute what the disturbance must have been.

Right. That was my point. Since p = h[f(o)+g(d)], if you know h, the
input function, f(), the output function, g(), the function connecting
d to the controlled variable, as well as the values of o and p, then
you can certainly solve for d. This is basic algebra.

Of course that information is not available to the control system, because it has no perception of its output function or the external feedback connection or indeed anything but its perceptual signal. An external observer who knows the properties of all the components of the control system and the properties of the environment in the feedback path can, indeed, work backward (within limits) to deduce what the net disturbance must have been (though not any individual disturbance if more than one is present at the same time).

Right. I mentioned this as well.

I think we all did, though it took me a while to get agreement on it.

The calculation of d (assuming just one disturbance variable is
involved) can’t be done by the control system itself; just by an
outside observer.

As is obvious from the fact that there is only one perceptual signal
internal to an elementary control unit.

So if “solving for d given p and all this other information” is
what is meant by “information about the disturbance in the perception”
then it’s not information that can be used by the control system
itself.

Right, apart from the fact that “solving for d given p and all this other information” is NOT what is meant by “information about the disturbance in the perception”.

Fortunately, control systems don’t need information about the disturbance in perception because they are “automatically” protecting the controlled perception from disturbance by acting, on the basis of the error signal, to reduce the error signal; it’s just simple closed-loop negative feedback.

Right, provided that by “don’t need information” you mean either “don’t
need to calculate the information” or “don’t need to know the values of
the disturbance variable”. On the other hand, if you mean “are not
influenced by variation in the disturbance” then I must disagree,
because it is precisely the deviation from the reference value caused
in the perception by that variation in the disturbance, for which the
informational value can be computed by an analyst, that does matter in
control.

So far. I see nothing (well not much) that I could possibly disagree
with other than the comments noted above.

This discussion of “information about the disturbance in perception”
must seem very arcane to many observers who probably wonder why we
care. So what if people want to think that there is information about
the disturbance in perception? What could that possibly have to do with
the advancement of PCT?

The channel capacities of the different signal paths in the control
loop and its inputs are limiting factors in the ability of the system
to control. In a readily analyzed linear control system, there’s not
much point in dealing with these limits, because the control system
parameters give precise answers rather than limiting conditions. But
when things get complicated with nonlinear and interacting systems,
it’s not so easy to make the straightforward calculations, and then it
can be useful to consider the limiting envelopes.
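
As a concrete reference point — my addition, not part of the original argument — the standard Shannon–Hartley expression for the capacity of a noisy channel of bandwidth B and signal-to-noise power ratio S/N is

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits/s},
```

and no signal path in the loop can convey variation faster than its own capacity, whatever functions sit around it.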

I know this doesn’t suit your mindset, or Bill’s, but I prefer to work
from the general and abstract to the particular rather than the other
way round. In my working career, I often found a good way to get an
insight into a problem was to complement my abstract approach with the
concrete modelling approach of a colleague. Thinking from both ends, we
would meet in the middle with a clearer understanding than either could
have achieved alone. I would like to think the same could happen on
CSGnet, but instead of the abstract approach being welcomed as a useful
complement to the concrete, it tends to be denigrated and dismissed as
useless. This seems to me to be rather self-defeating.

I’m sure I’ve discussed this before but, for me, it’s important
to get this right (understanding that there is no information about the
disturbance in perception)

Did you not read at all the other messages in this thread? You can’t
have, because if you had done, you could not so blithely say something
like that.

because getting it wrong means missing what is most important
about the role of perception in PCT. In PCT, perception is
controlled.

Where do you see any inconsistency between the perceptual waveform
being completely informationally redundant with the disturbance
waveform and the fact that perception is controlled?

Just take a little step back, and imagine in your head a noise-free
control loop with a constant reference signal. It matters not a whit
what the form is of the output function and the environmental feedback
function, or any other component of the loop. The loop is a physical
entity, so it must follow the laws of physics. There is exactly one
input from outside to the loop, and that is the disturbance waveform.
It doesn’t matter what happens in the loop, unless some function is
multivalued in the sense that the same output can come from two
different inputs. Apart from that, everything that happens to every
variable in the loop depends on the disturbance waveform. In the
absence of noise, unless you have some kind of information loss
component such as a function with a non-unique output, every signal
waveform in the loop always carries all the information about the
disturbance waveform.
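
Here is a minimal numerical sketch of that claim — loop form, gain, and disturbance waveform are my assumptions, not from the original post. Only the perceptual waveform is recorded; an observer who also knows the loop functions (and the reference) then recovers d from that record alone, exactly, because the noise-free loop is deterministic.

```python
import math

# Run a noise-free loop and record only the perceptual waveform p.
dt, gain, r = 0.01, 50.0, 1.0

def disturbance(step):
    t = step * dt
    return math.sin(0.7 * t) + 0.3 * math.sin(2.1 * t)

o, p_log = 0.0, []
for step in range(2000):
    p = o + disturbance(step)        # unit transforms: p = qi = o + d
    p_log.append(p)
    o += gain * (r - p) * dt         # integrating output function

# Observer: replay the known output function against the p record to
# recover o at each step, then d = p - o (unit feedback assumed).
o_hat, worst = 0.0, 0.0
for step, p in enumerate(p_log):
    d_hat = p - o_hat
    worst = max(worst, abs(d_hat - disturbance(step)))
    o_hat += gain * (r - p) * dt
print(f"worst reconstruction error: {worst:.2e}")   # ~ 0 (float rounding)
```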

I don’t see how you can get around that fact, and continue proclaiming
like a religious mantra “there is no information about the disturbance
in perception”. It’s what you did when Allan and I first presented the
proof to the contrary, and it’s what you continue to do on any topic
whenever something differs from your preconceived notion of what ought
to be the truth. As I said at the top of this thread when I changed the
subject line, it’s that kind of irrational anti-argumentation that
drives people who like rational discussion away from CSGnet, or at
least away from contributing to CSGnet.

Martin

[From Bill Powers (2009.10.02.1010 MDT)]

Martin Taylor 2009.10.01.15.06 –

MT: I see an ambiguity quite
different from the one you identify. I thought you were seeing the one I
see, because I never considered that the observable perturbation could be
identified as the disturbance, or as “d”. The ambiguity I saw
was between the variable that combines with the output variable to
generate qi and the source of that variable. It never occurred to me that
you might think of the disturbance as “the effect of D on qi”
or that you might think I would. So, if I now understand you correctly, I
think that what I assumed to be “the disturbance” is what you
“always mean”, because it is what I also always
mean.

BP: Sorry, but we’re still not on the same page. If it never occurred to
you that people use the same term, disturbance, to mean both the cause of
a perturbation and the perturbation itself, you must not read much. Have
you ever heard of sunspots causing a disturbance of communications? Or a
weatherman describing a “tropical disturbance?” Or a newsperson
reporting a heckler causing a “disturbance” at a political
rally? In fact, I think that usage may be more common than the other one.
I can think of “disturbing news” and other examples using the
gerund rather than the noun to indicate the cause. The passive form,
“disturbed”, would, I think, be used almost invariably to refer
to the effect instead of the cause. I have tried using “disturbing
variable” to indicate d as a cause, but it hasn’t caught on –
perhaps because there are others who, like you, appear not to have given
this distinction much importance, or perhaps even haven’t understood what
I’m talking about.

MT: I have always considered
“d” (the label on the arrow that joins the “o” arrow
to form the “qi” variable) to be a label sometimes for a signal
path and sometimes a label for the waveform of the signal on that path.

BP: I find this usage of “signal” awkward and contrary to all
technical uses of the term I have encountered. The effect of a signal is
not caused by the signal itself but by some device that detects a
microscopic signal and draws on local energy sources to produce some
macroscopic effect (or, of course, another microscopic signal). Without
the detector and energy conversion, the signal would have no important
effect at all, except on other signals. Signals operate at very low
energy levels compared to the ordinary phenomena of macroscopic physics.
This distinction has been made clear in every subject I have learned
about that involves signal processing.

I think of a disturbing quantity as something that has macroscopic
effects on another physical quantity, as a force affects the velocity of
a mass. Signals operate at the level of microwatts or nanowatts, forces
at watts or kilowatts.

MT: My understanding of the
“perturbation of the controlled quantity” is the net effect of
“o” and “d” in perturbing the value of
“qi”.

That seems to strand the concept halfway between cause and effect. You
refer to o and d as “perturbing” the value of qi, which turns
perturbation into a transitive verb, an action of o and d on something
else, but this seems to eliminate perturbation as the result of the
action, the change in qi. You’ve simply made “perturbation”
into a synonym of “disturbance” with the same ambiguity as
before. I intended the term perturbation to refer only to the change in
qi without any dependence on what caused the change. It is a perturbation
used in that sense that affects a control system, and to which the
control system responds by exerting some action opposed to the
perturbation. If the controlled variable is the position of a mass, the
control system can apply forces to the mass to oppose any perturbations,
departures from a reference position, basing its action solely on the
departures, without regard to their causes. Many different combinations
of disturbances could cause the same perturbation; the control process
would be the same.

I think this gets us closer to the roots of this dispute. I have a notion
that all this started when I criticised Ashby’s concept of a control
system. He proposed that a control system detects the state of a
disturbing variable, and computes how much output to apply to the
controlled variable to cancel the direct effect of the disturbance on the
controlled variable, like this:

[Figure: Ashby-style controller in which the disturbance is sensed and its effect on the controlled variable is computed and cancelled, 5b73fc.jpg]

This is derived from diagrams in Ashby’s “An Introduction to Cybernetics”, various places from p. 210 to p. 222 and thereabouts.

Of course Ashby never explained how this system comes into being; it
clearly has to be designed by some engineer who can see the effects of
the disturbance on the controlled variable, and can design a controller
that can be carefully adjusted (while the engineer watches the controlled
variable) to have just the magnitude, direction, and kind of effect
required to cancel automatically the effect of the disturbance – for
maybe 10 minutes, after which it has to be adjusted again in the same
way, by the same negative feedback control system. In touting the
superiority of this design over negative feedback control, Ashby ignored
every practical problem involved and all the machinery that would be
needed in the background to make this idea work. Oded Maler actually
found a German design for an apartment-building temperature control
system that worked this way. I’m glad I didn’t have to live
there.

I have countered this proposal (to little avail among cyberneticists) by
showing that a negative feedback control system can achieve just as good
stabilization of the controlled variable against disturbances as Ashby’s
would ( or better, in the real world), without needing any information
about the disturbance. I’m pretty sure that this statement is what set
you off on the insistence that information about the disturbance really
did exist inside the control system. It now turns out that you mean that
an external observer could deduce the nature of the disturbance by using
knowledge of all the details of the negative feedback control system,
which of course does not do the control system any good. Neither does it
mean that the control system needs knowledge of the state of the
disturbing variable in order to stabilize the controlled quantity against
the effects of the disturbing variable. The control system senses changes
in the controlled variable and acts directly on the controlled variable
to eliminate them; it does not need to know what caused those changes.
For what seems by now the thousandth time.

The information about the disturbance that does remain inside the control
system consists of the departures of the controlled variable from its
reference level – whatever small departures remain
uncontrolled.

So we got our wires crossed and have been arguing about nothing. Not for
the first time. All the information that the control system needs to
control the controlled variable is contained in the variations of the
controlled variable, without regard to what caused them. But to the
extent that control is imperfect, there is some residual information left
in what small variations are incompletely corrected. In a noise-free
system, the external observer could still reconstruct the waveform of the
effective disturbing variable from the small variations that are left,
but that isn’t how the control system achieves control.

Best,

Bill P.

P.S. In LCS3, Demo 8-1, ArmControlReorg, the control systems can learn to
control using only variations in the disturbances (start with reference
signals set to Constant). After 20 to 30 minutes of learning the errors
have become very small. At that point you can turn off the disturbances,
turn off reorganization, and set the reference signal pattern to Smooth
or Jump. The arm will dutifully move as the reference signals vary,
showing that control is good. You can turn the disturbances back on and
verify that they have only a small effect on the controlled variables,
the joint angles, while the Smooth or Jump reference pattern continues.
This shows that it is not behavior that is learned, but control.

[From Rick Marken (2009.10.02.1525)]

Martin Taylor (2009.10.02.01.01)--

Where do you see any inconsistency between the perceptual waveform being
completely informationally redundant with the disturbance waveform and the
fact that perception is controlled?

I didn't mean to imply that the information and control view are
inconsistent. I meant to imply only that the information view is
wrong. But my main point was that the information view (right or
wrong) points one's attention in the wrong direction; from trying to
determine what perception is controlled to focusing on what it is
about the perception that guides control action. I also noted that I
didn't expect this discussion to change anyone's mind; I was saying
that the discussion can be used by observers to get a better sense of
what control theory is about: it's about control of perception; it's
not about the information in perception that is the basis of control.
The only information that is the basis of control is the perception
itself. The information used in control is not _in_ the perception; it
_is_ the perception.

everything that
happens to every variable in the loop depends on the disturbance waveform.

But what happens to the controlled perceptual variable depends on
_both_ the disturbance and output "waveforms" simultaneously: p = o + d. So it's impossible for the system to tell, just from the variations
in p, the extent to which these variations (or lack thereof) are due
to d or o. In that sense, there is no information in p _that is
available to the system itself_ about d.

I don't see how you can get around that fact, and continue proclaiming like
a religious mantra "there is no information about the disturbance in
perception".

But I agreed that, if "information about the disturbance in
perception" means that given the values, p, k1, k2 and o in the
equation

p = k1o + k2d

you can solve for d, then, indeed, there is information about the
disturbance in perception. Congratulations. You have proven that
algebra works, and I accept that.
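
For completeness, the rearrangement being conceded (assuming k2 is nonzero):

```latex
p = k_1 o + k_2 d \;\Rightarrow\; d = \frac{p - k_1 o}{k_2}
```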

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.10.04/09.56]

[From Bill Powers (2009.10.02.1010 MDT)]

Martin Taylor 2009.10.01.15.06 –

MT: I see an ambiguity quite different from the one you identify. I thought you were seeing the one I see, because I never considered that the observable perturbation could be identified as the disturbance, or as “d”. The ambiguity I saw was between the variable that combines with the output variable to generate qi and the source of that variable. It never occurred to me that you might think of the disturbance as “the effect of D on qi” or that you might think I would. So, if I now understand you correctly, I think that what I assumed to be “the disturbance” is what you “always mean”, because it is what I also always mean.

BP: Sorry, but we’re still not on the same page. If it never occurred to you that people use the same term, disturbance, to mean both the cause of a perturbation and the perturbation itself, you must not read much.

If it never occurred to you that people use the same term “perception”
to mean what they consciously observe, then you must not talk very much
to other people.

MT: I have always considered “d” (the label on the arrow that joins the “o” arrow to form the “qi” variable) to be a label sometimes for a signal path and sometimes a label for the waveform of the signal on that path.

BP: I find this usage of “signal” awkward and contrary to all technical uses of the term I have encountered. The effect of a signal is not caused by the signal itself but by some device that detects a microscopic signal and draws on local energy sources to produce some macroscopic effect (or, of course, another microscopic signal). Without the detector and energy conversion, the signal would have no important effect at all, except on other signals. Signals operate at very low energy levels compared to the ordinary phenomena of macroscopic physics. This distinction has been made clear in every subject I have learned about that involves signal processing.

I think of a disturbing quantity as something that has macroscopic effects on another physical quantity, as a force affects the velocity of a mass. Signals operate at the level of microwatts or nanowatts, forces at watts or kilowatts.

Oh. OK. Then several of the pathways in the control loop are no longer to be designated as “signal paths”, and in control systems working at different power levels, which pathway is a “signal path” differs. I am sorry I had not noted the moment when the word “signal” was disallowed when referring to some parts of the control loop.

I’m sorry, but I find this kind of approach to demonstrating my inability to understand control systems to be rather beneath you. Rather than trying so hard to find possible variant meanings of the words I use in order to show that they could possibly be taken to mean something silly, would it not be more helpful to clarify the obvious meanings that make sense? When you say “p = o + d”, are you saying that the power level in the perceptual pathway is equal to the sum of the power levels in the environmental feedback path and the disturbance path? That is, of course, the natural implication of your assault on my use of the word “signal”.

All the information that the control system needs to control the controlled variable is contained in the variations of the controlled variable, without regard to what caused them. But to the extent that control is imperfect, there is some residual information left in what small variations are incompletely corrected. In a noise-free system, the external observer could still reconstruct the waveform of the effective disturbing variable from the small variations that are left, but that isn’t how the control system achieves control.

The only thing in this with which I can disagree is the implication I
believe you want the reader to get from your use of “residual
information”. You seem to suggest that it is only the imperfection of
control that permits some information about the disturbance waveform to
leak into the control system. This is incorrect, as I have repeatedly
shown, with appropriate explanations.

To Rick: It seems irrelevant how often I point out that “information about the disturbance in the perceptual signal” is quite different from “ability to reconstruct the disturbance waveform from the perceptual waveform”, in that the latter requires knowledge of all the function forms and the former doesn’t. Accordingly, I am not going to respond any more to your comments until you make one that recognizes this fact.
Your comment [From Rick Marken (2009.10.02.1525)]:

I didn't mean to imply that the information and control view are
inconsistent. I meant to imply only that the information view is
wrong.

is exactly of the character of your 1992/3 comments that I characterised as “t’aint so”. Despite clear demonstrations to the contrary, you simply close your mind to all analysis and say “I know what’s right, and you aren’t”, without supporting evidence. Further discussion seems to me to be pointless.

Martin

[From Rick Marken (2009.10.04.0915)]

Martin Taylor (2009.10.04/09.56) –

To Rick: It seems irrelevant how often I point out that “information about the disturbance in the perceptual signal” is quite different from “ability to reconstruct the disturbance waveform from the perceptual waveform”, in that the latter requires knowledge of all the function forms and the former doesn’t. Accordingly, I am not going to respond any more to your comments until you make one that recognizes this fact.
Your comment [From Rick Marken (2009.10.02.1525)]:

I didn't mean to imply that the information and control view are
inconsistent. I meant to imply only that the information view is
wrong.

is exactly of the character of your 1992/3 comments that I characterised as “t’aint so”. Despite clear demonstrations to the contrary, you simply close your mind to all analysis and say “I know what’s right, and you aren’t”, without supporting evidence. Further discussion seems to me to be pointless.

I agree. Pointless, if the point is to convince either of us that the other has a point. But I still think it can have a point if the point is to help others understand the PCT view of perception.

Based on what you say above, it seems that you believe that there is something called “information about the disturbance” in perception. I presume you think this information is there because it is possible to reconstruct the disturbance waveform given the perceptual
and output waveforms as well as the functions that relate these variables. I’m willing to believe in this information stuff, too. But for me it’s like believing in a benevolent god. I can believe in such a god all I want but innocent children are still killed in natural disasters. Similarly, you can believe that there is information about disturbances in perception all you want but there is still no way for the disturbance-resisting control system to get that information. It’s just make-believe.

I think the conventional psychological view of perception that comes closest to the control theory view is something like the Hubel/Wiesel receptive field idea. What they proposed is that individual neurons carry signals that indicate, by their magnitude, the degree to which a particular optical pattern is present on an area of the retina. The neural signal is equivalent to the perceptual signal in PCT. The receptive field is equivalent to the perceptual function. Hubel/Wiesel thought of the magnitude of the neural signal (measured in impulses/sec, just as we do in PCT) as indicating the degree to which a particular pattern, like a horizontal line, is present on the retina. In PCT we would see the magnitude of neural signals as indicating the state of a perceptual variable. So I would guess that, to the extent that it varies with the orientation of a line on the retina, the magnitude of the neural signal would be proportional to that orientation: the fact that the signal fires maximally to a horizontal line and minimally to a vertical line suggests that the magnitude of the signal is a measure of orientation, maximum signal being 0 degrees relative to optical horizontal and minimum signal being 90 degrees relative to it.
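
To make the guess in that last sentence concrete, here is an illustrative perceptual function — the cosine-squared tuning shape is my assumption for the sketch, not Hubel and Wiesel’s actual data: a signal that fires maximally for a horizontal line (0 degrees) and minimally for a vertical one (90 degrees).

```python
import math

def perceptual_signal(theta_deg, max_rate=100.0):
    """Hypothetical firing rate (impulses/sec) vs. line orientation:
    maximal at 0 deg (horizontal), minimal at 90 deg (vertical)."""
    return max_rate * math.cos(math.radians(theta_deg)) ** 2

for theta in (0, 30, 45, 60, 90):
    print(f"{theta:3d} deg -> {perceptual_signal(theta):6.1f} impulses/sec")
```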

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.10.04.13.00]

[From Rick Marken (2009.10.04.0915)]

Martin Taylor
(2009.10.04/09.56) –

To Rick: It seems irrelevant how often I point out that “information about the disturbance in the perceptual signal” is quite different from “ability to reconstruct the disturbance waveform from the perceptual waveform”, in that the latter requires knowledge of all the function forms and the former doesn’t. Accordingly, I am not going to respond any more to your comments until you make one that recognizes this fact.
Your comment [From Rick Marken (2009.10.02.1525)]:

I didn't mean to imply that the information and control view are
inconsistent. I meant to imply only that the information view is
wrong.

is exactly of the character of your 1992/3 comments that I characterised as “t’aint so”. Despite clear demonstrations to the contrary, you simply close your mind to all analysis and say “I know what’s right, and you aren’t”, without supporting evidence. Further discussion seems to me to be pointless.

I agree. Pointless, if the point is to convince either of us that the
other has a point. But I still think it can have a point if the point
is to help others understand the PCT view of perception.

Based on what you say above, it seems that you believe that there is
something called “information about the disturbance” in perception. I
presume you think this information is there because it is possible to
reconstruct the disturbance waveform given the perceptual
and output waveform as well as the functions that relate these
variables.

Really! How many times must I repeat that THIS IS NOT SO? Have you read
absolutely nothing of my messages starting with the one that initiated
this part of the thread [Martin Taylor 2009.09.29.00.29] in which I
said quite explicitly the opposite of what you think I believe: “When
the environmental feedback path is not the unit transform, but is
unchanging over time and is monotonic in the relation between the
output value and the influence of the output on p, d cannot be
reconstructed, but a signal can be generated that is informationally
completely redundant with d.”? I’ve lost track of how many times I have
repeated the same thing: reconstruction is an extreme example, not a
rationale!

you can believe that there is information about
disturbances in perception all you want but there is still no way for
the disturbance resisting control system to get that information.

How many times must I repeat that I have always agreed with this, how
many times must I show why this must be true (because you would have to
change the wiring diagram of the conventional control loop to get the
information), and how many times must I repeat that it’s totally
irrelevant to the issue (because control systems simply don’t compute
the information – though an analyst or a designer of control systems
might)?

Why not at least sometimes read what I write, instead of imagining what
I must be saying (which was what you explicitly claimed you did in one
of your responses a while back)?

I think the conventional psychological view of perception that comes
closest to the control theory view is something like the Hubel/Wiesel
receptive field idea. What they proposed is that individual neurons
carry signals that indicate, by their magnitude, the degree to which a
particular optical pattern is present on an area of the retina. The
neural signal is equivalent to the perceptual signal in PCT. The
receptive field is equivalent to the perceptual function. Hubel/Wiesel
thought of the magnitude of the neural signal (measured in
impulses/sec, just as we do in PCT) as indicating the degree to which a
particular pattern, like a horizontal line, is present on the retina.
In PCT we would see the magnitude of neural signals as indicating the
state of a perceptual variable. So I would guess that, to the extent
that it varies with the orientation of a line on the retina, the
magnitude of the neural signal would be proportional to that
orientation: the fact that the signal fires maximally to a horizontal
line and minimally to a vertical line suggests that the magnitude of
the signal is a measure of orientation, maximum signal being 0 degrees
relative to optical horizontal and minimum signal being 90 degrees
relative to it.

What does this have to do with anything? Why does the specification of
one kind of perceptual input function mean anything with respect to
“the control theory view”? Back in 1958-9 we were arguing from a
functional point of view that the physiologists would have to find
something like this when I was in graduate school half a century ago,
and the physiologists then were saying “no way”. It’s nothing to do
with “the control theory view”. It’s just a passive input function –
though nowadays it begins to seem not so passive as it was once
thought, efferent fibres apparently reaching into very low perceptual
levels (I meant to bring this up in Richard’s thread on object
stability). Even in my first “summer student” term in psychology in
1957 John Ogilvie and I discovered that the so-called “vertical” and
“horizontal” were linked neither to the real-world orientations, the
head orientation, nor the eyeball orientation. In fact they were not
even at right angles for most of the eyes we tested, and the angle
between them changed with head orientation. I think I still have the
data somewhere. I’m not sure whether it was published.

And what does it have to do with the topic of this thread?

When you want to respond to something I actually wrote, or when you say
something sensible that interests me (as you often do), I will engage
in discussion with you. Otherwise, I probably won’t.

Martin

[From Rick Marken (2009.10.04.1130)]

Martin Taylor (2009.10.04.13.00)

Rick Marken (2009.10.04.0915)–

Based on what you say above, it seems that you believe that there is
something called “information about the disturbance” in perception. I
presume you think this information is there because it is possible to
reconstruct the disturbance waveform given the perceptual
and output waveform as well as the functions that relate these
variables.

Really! How many times must I repeat that THIS IS NOT SO? Have you read
absolutely nothing of my messages starting with the one that initiated
this part of the thread [Martin Taylor 2009.09.29.00.29] in which I
said quite explicitly the opposite of what you think I believe: “When
the environmental feedback path is not the unit transform, but is
unchanging over time and is monotonic in the relation between the
output value and the influence of the output on p, d cannot be
reconstructed, but a signal can be generated that is informationally
completely redundant with d.”? I’ve lost track of how many times I have
repeated the same thing: reconstruction is an extreme example, not a
rationale!

I really am trying to figure out what you are saying. I thought I had it there. But I’m willing to keep trying. I’m sorry it’s so frustrating for you, but I’m afraid I’m a bear of little brain.

you can believe that there is information about
disturbances in perception all you want but there is still no way for
the disturbance resisting control system to get that information.

How many times must I repeat that I have always agreed with this,

Then what are we arguing about? Based on what I read here, I would conclude that you believe that there is information about disturbances in perception but that it can’t be used by the control system. If this is what you are saying, then I guess I can agree with that: there is information about the disturbance in perception but it is irrelevant to control. Is that OK?

I think the conventional psychological view of perception that comes
closest to the control theory view is something like the Hubel/Wiesel
receptive field idea…

What does this have to do with anything?..

And what does it have to do with the topic of this thread?

I’m sorry, I should have added that this is relevant to my point about how one’s view of perception can point one in the right (or wrong) direction in terms of research. I believe that the information view – which suggests that there is information in the perceptual signal – points one away from the question of what perceptual variable(s) the behaving system is controlling. Something like the Hubel/Wiesel view points specifically to that question because it conceives of perceptions (which are what control systems control) as functions of variables in the environment: p = f(e1, e2, … en). It’s a short step from there (at least it was for me) to the idea that understanding the observed behavior of a control system means knowing what functions of the variables in the environment the system is controlling. So research on control would be oriented to finding those functions, f(), that define the variables controlled by the system: controlled variables. And this, of course, would be research aimed at testing for controlled variables.
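
To make that last step concrete, here is a minimal sketch of the
research logic (the system, candidate functions, and constants are all
made-up stand-ins, not anything from the catching paper): disturb the
environment and see which candidate function of the environmental
variables the system actually defends.

  # A system that really controls p = e1 + e2 at a reference of 0,
  # acting through e1, while a step disturbance pushes on e2.
  def f_sum(e1, e2):           # candidate 1: is e1 + e2 controlled?
      return e1 + e2

  def f_diff(e1, e2):          # candidate 2: is e1 - e2 controlled?
      return e1 - e2

  def mean_square(candidate, steps=5000, dt=0.01, gain=20.0):
      e1, total = 0.0, 0.0
      for i in range(steps):
          e2 = 3.0 if i > steps // 2 else 0.0   # step disturbance on e2
          p = e1 + e2                           # what is really controlled
          e1 += dt * gain * (0.0 - p)           # output opposes error
          total += candidate(e1, e2) ** 2
      return total / steps

  print(mean_square(f_sum))    # near zero: the disturbance is resisted
  print(mean_square(f_diff))   # large: not defended, so not controlled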

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.10.04.1033 MDT)]

Martin Taylor 2009.10.04.09.56

[From Bill Powers
(2009.10.02.1010 MDT)]

Martin Taylor 2009.10.01.15.06 –

MT: I see an ambiguity quite
different from the one you identify. I thought you were seeing the one I
see, because I never considered that the observable perturbation could be
identified as the disturbance, or as “d”. The ambiguity I saw
was between the variable that combines with the output variable to
generate qi and the source of that variable. It never occurred to me that
you might think of the disturbance as “the effect of D on qi”
or that you might think I would. So, if I now understand you correctly, I
think that what I assumed to be “the disturbance” is what you
“always mean”, because it is what I also always
mean.

BP: Sorry, but we’re still not on the same page. If it never occurred to
you that people use the same term, disturbance, to mean both the cause of
a perturbation and the perturbation itself, you must not read
much.

If it never occurred to you that people use the same term
“perception” to mean what they consciously observe, then you
must not talk very much to other people.

I have certainly noticed that people use the term perception to mean
conscious perception. That’s why I have frequently commented on the
difference between a perceptual signal’s existing in an afferent pathway
and awareness of that perceptual signal (the combination we call
conscious perception). The glossary in B:CP defines perception as
existence of the perceptual signal, and conscious perception as awareness
of the perceptual signal. In ordinary language this distinction is not
made, probably because one is not aware of perceptual signals of which
one is not aware. Your counter to my statement is not a parallel
case.

If you always mean what I mean by “disturbance,” then you never
label the arrow that affects qi as being the disturbance, because the
arrows in the environmental part of the model designate properties of the
environment between the disturbing quantity and the input quantity. If
you apply a force to a mass to alter the velocity of the mass, the force
is a disturbance, the mass is the property that gives force an effect on
velocity, and the controlled quantity is the velocity (if the quantity is
under control). The arrow would be labeled as “mass”, not
“force.”

MT: I have always considered
“d” (the label on the arrow that joins the “o” arrow
to form the “qi” variable) to be a label sometimes for a signal
path and sometimes a label for the waveform of the signal on that path.

BP: I find this usage of “signal” awkward and contrary to all
technical uses of the term I have encountered. The effect of a signal is
not caused by the signal itself but by some device that detects a
microscopic signal and draws on local energy sources to produce some
macroscopic effect (or, of course, another microscopic signal). Without
the detector and energy conversion, the signal would have no important
effect at all, except on other signals. Signals operate at very low
energy levels compared to the ordinary phenomena of macroscopic physics.
This distinction has been made clear in every subject I have learned
about that involves signal processing.

I think of a disturbing quantity as something that has macroscopic
effects on another physical quantity, as a force affects the velocity of
a mass. Signals operate at the level of microwatts or nanowatts, forces
at watts or kilowatts.

Oh. OK. Then several of the pathways in the control loop are no longer to
be designated as “signal paths”, and in control systems working
at different power levels, which pathway is a “signal path”
differs. I am sorry I had not noted the moment when the word
“signal” was disallowed when referring to some parts of the
control loop.

See the glossary in B:CP, which specifies that “signal” is the
term for an information (small-i) flow inside the control system. Inside
the brain, signal paths do not apply any functions; the signal at the
origin of the path is the same as the signal at the destination (save for
branching and a small time delay). You can measure a steady signal
anywhere along the path. The functions represent computing processes
inside neurons or compact networks of neurons.

In the environment, the functions we speak of, like the disturbance
function between the disturbing quantity and the controlled quantity or
the feedback function between the output quantity and the controlled
quantity, take place in the space between variables, while the variables
are most often designated as little circles or boxes at specific
locations. So we have

Inside the brain:

            signal                 signal
  function ------------> function ----------->

Outside the brain:

     function or law          function or law
  ----------------> quantity -----------------> quantity

When I learned about the more sophisticated kinds of signal flow or block
diagrams, these representations were called “duals” of each
other. The arrows inside the controller are labeled as signals, which are
converted into other signals by intervening neural functions. Outside the
controller, the arrows are labeled as functions such that the physical
quantity at the end is a function of the quantity at the beginning of the
arrow, and the arrow itself is labeled as a function (often inside an
inserted box, which is why the quantities are often designated as little
circles so they don’t get confused with the functions). I decided to
refer to environmental variables as “quantities” to associate
them with physical quantities, and variables inside the brain as
“signals” since they carry a value from the output of one
function to the inputs of other functions.

I’m sorry, but I find this kind of approach to demonstrating my
inability to understand control systems to be rather beneath you.
Rather than trying so hard to find possible variant meanings of the
words I use in order to show that they could possibly be taken to
mean something silly, would it not be more helpful to clarify the
obvious meanings that make sense? When you say “p = o + d”, are you
saying that the power level in the perceptual pathway is equal to the
sum of the power levels in the environmental feedback path and the
disturbance path? That is, of course, the natural implication of your
assault on my use of the word “signal”.

No, because I would only use that shorthand when it’s not likely to be
confusing. Pedagogically, I would say p = Fi(qi) and qi = Ff(o) + Fd(d).
A transducer-type function is always needed to turn a physical quantity
outside the controller into a signal inside it, or vice versa. The power
level changes occur inside the input and output functions. Functions in
the environment are simply physical laws that express the effects of
certain physical quantities on other physical quantities and don’t
necessarily involve power level changes, though they might.
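
For concreteness, a minimal numerical sketch of that pedagogical
notation (the particular functions and constants are arbitrary
stand-ins, nothing prescribed by the model): p = Fi(qi),
qi = Ff(o) + Fd(d), with a leaky-integrator output function closing
the loop.

  def Fi(qi):                    # perceptual input function (unit transducer here)
      return qi

  def Ff(o):                     # environmental feedback function
      return 0.5 * o

  def Fd(d):                     # disturbance function
      return 1.0 * d

  r, o = 10.0, 0.0               # reference signal; output quantity
  gain, dt = 50.0, 0.01

  for _ in range(2000):
      d = 5.0                    # a steady disturbance
      qi = Ff(o) + Fd(d)         # controlled (input) quantity
      p = Fi(qi)                 # perceptual signal
      e = r - p                  # error signal
      o += dt * (gain * e - o)   # leaky-integrator output function

  print(round(p, 2))             # p settles near r; error is small but nonzero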

The only thing in this with
which I can disagree is the implication I believe you want the reader to
get from your use of “residual information”. You seem to
suggest that it is only the imperfection of control that permits some
information about the disturbance waveform to leak into the control
system. This is incorrect, as I have repeatedly shown, with appropriate
explanations.

But you have also pointed out that control reduces the amount of
information going from the disturbance into the perceptual signal.
Cyberneticists also have seen this possibility, and Ashby said the same
thing about “variety” – variety destroys variety, he said. I
think you’re right that in a noise-free system, there is no destruction
of information, but real systems are not noise-free because we never can
account for (or predict) all the variables that are disturbing a
controlled quantity or other variables in the loop. When the amplitudes
of the perturbations of the perceptual signal by the known disturbances
have been reduced to the point where they are comparable to the
perturbations due to unknown disturbances, we begin to lose the
information about the known disturbance because it no longer reduces
uncertainty by the same amount.

I think this all becomes clearer if we just drop the term
“noise” and substitute terms having to do with perturbations
that have not been traced to known disturbances. All the control system
knows about are the perturbations; if they have components that vary
slowly enough, the control system reduces those effects whether they come
from a known disturbance or from so-called noise. When the known
component of perturbation becomes small enough, the output variations we
observe will be due mostly to unobserved disturbances, so the
reconstruction of the disturbance waveform will be spurious, as will any
measure of information about the known disturbance in the perceptual
signal. Also, the perturbations unaccounted for will create output
variations that reduce control of the effects of known
disturbances.

I’m just trying to establish a consistent way of analyzing the control
process, not trying to show your “inability to understand control
systems.” I think you do understand control systems, but slip up on
the details sometimes because of your preference for working from the
general to the particular. The devil is in the details.

Best,

Bill P.

[From Bill Powers (2009.10.04.2303 MDT)]
Martin Taylor 2009.10.04.09.56
I seem to be awake (I think), and there are more pieces falling into
place in the information discussion. I am genuinely NOT against the
possibility that information theory can contribute to control theory, but
I haven’t yet seen any demonstration of its usefulness. The reason I am
led to keep generating objections and doubts is that I have seen other
similar generalizations, and can easily invent more, that while
demonstrably true are not helpful.
The problem I see in the role of information measures is that they are
not specific to any one situation. That is what makes them general, but
it also is what makes me doubt their usefulness in the absence of
evidence to dispel that doubt. But rather than sticking my neck out by
challenging your expertise, let me show what I mean by using examples
other than information theory.
The closest relative to information theory is Ashby’s concept of
“variety”, which as far as I can see is computed the same way
as information. In defining variety, Ashby says it is (1) the number of
distinct elements in a set, or (2) the log to the base 2 of that number.
He even says that in the second case, the measure of variety is the
BInary digiT or BIT, which makes one wonder why he had to use a word
different from “information.”
My first problem with this idea is that it’s hard to see how one could
measure the number of distinct elements in the set of real numbers, which
are used to measure continuous variations. There is an infinite number of
different states that can be put in correspondence with the real numbers,
so it would appear that the states of any system involving continuous
functions have infinite variety. In information theory this would seem to
make the computation of information impossible, since messages of any
finite length do not reduce uncertainty at all. But I am evidently making
a mistake there, because people regularly talk about information in
continuous systems. Or perhaps they are making the mistake because they
want to talk about information in continuous systems.
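For what it is worth, the standard resolution in Shannon's treatment of
continuous channels (sketched here from the textbook formula, not from
anything in this thread) is that a noiseless continuous variable does
have unbounded variety, but noise makes only finitely many levels
distinguishable, so the information per sample, C = 1/2 log2(1 + S/N),
is finite:

  import math

  # Shannon capacity of a continuous (Gaussian) channel, per sample:
  # finite, because noise limits how many levels can be told apart.
  def bits_per_sample(signal_power, noise_power):
      return 0.5 * math.log2(1.0 + signal_power / noise_power)

  print(round(bits_per_sample(100.0, 1.0), 2))    # 3.33 bits at 20 dB SNR
  print(round(bits_per_sample(100.0, 1e-12), 1))  # grows without bound as noise -> 0
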
But I’m talking about variety. After several tedious chapters, Ashby gets
to the subject of “Regulation in biological systems,” and
proceeds to develop some general-sounding “laws”.
One is “Regulation blocks the flow of variety.” This is
developed further until finally he arrives at his famous “Law of
Requisite Variety.” And that is the context in which he generates
his model of control, with a regulator that receives information about a
disturbance acting on some variable, and from that information calculates
the counter-disturbance needed to stabilize the variable being
controlled. His generalization: in order for the regulator to control
successfully, its variety must match the variety of the
disturbance.
This is where I depart from Ashby. That may be a necessary condition but
it is not a sufficient condition for achieving control, even though all
successful controllers exhibit the truth of the Law of Requisite Variety.
The reason is simply that for control to happen, one particular set of
values of the variables in the output of the regulator must be achieved,
and that set can’t be identified just by specifying the number of
elements in it. The regulator must produce outputs that vary in a way
that produces effects on the controlled variable that are equal and
opposite
to the variations in the disturbing variables. If all the
variations about the mean in the output of the regulator simply have
their signs inverted, the variety in the set will remain the same but
control will be lost.
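A toy rendering of that point (an invented example, not Ashby's): two
regulator outputs drawn from exactly the same set of values, hence with
identical variety, only one of which controls qi = o + d.

  import random

  random.seed(1)
  d = [random.choice([-2, -1, 0, 1, 2]) for _ in range(1000)]

  o_right = [-x for x in d]     # equal and opposite to the disturbance
  o_wrong = [x for x in d]      # same variety, signs inverted

  qi_right = [o + dd for o, dd in zip(o_right, d)]
  qi_wrong = [o + dd for o, dd in zip(o_wrong, d)]

  print(len(set(o_right)), len(set(o_wrong)))   # 5 5 -- identical variety
  print(max(abs(q) for q in qi_right))          # 0   -- perfect control
  print(max(abs(q) for q in qi_wrong))          # 4   -- control lost
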
Conclusion: the law of requisite variety is an irrelevant side-effect of
the actual properties required for achieving control. You can’t teach a
person to steer a car better by telling the person to wiggle the steering
wheel more often.
Consider another law: the Law of Requisite Mobility, which governs the
operations of a hockey team. A team with the requisite amount of Mobility
blocks the flow of Mobility of the other team, which thereby reduces the
scoring of the other team, and if greater than the minimum amount
required, will increase the scoring of the team with greater Mobility.
Unfortunately, this universal law of hockey does not tell anyone how to
achieve this result, and perhaps even more unfortunately for the
generalist, no hockey team uses this law to prepare to win games. The
conditions of the law are, of course, satisfied by the winning team, but
they are results of the winning and the practicing to win, not
causes.
I see a parallel problem in information theory. To achieve any given end,
it is not sufficient to have a certain quantity of information. It is
necessary to have exactly the right information, and the right
information can’t be discovered by specifying the number of bits of it
that must be present. You can acquire information containing the prescribed
amount of information content and still be unable to achieve the end.
This establishes the quantity of information, in my mind, as a measure of
channel capacity but makes the meaning or usefulness of information into
an orthogonal measure, not constrained or specified by channel capacity.
I think Shannon was of the same opinion.

Best,

Bill P.

[From Bruce Abbott (2009.10.05.0730 EDT)]

Bill Powers (2009.10.04.2303 MDT)

BA: See my comment at the end of the post.

. . .

BP: The closest relative to information theory is Ashby’s concept of
“variety”, which as far as I can see is computed the same way as
information. In defining variety, Ashby says it is (1) the number of
distinct elements in a set, or (2) the log to the base 2 of that number.
He even says that in the second case, the measure of variety is the
BInary digiT or BIT, which makes one wonder why he had to use a word
different from “information.”

. . .
BA: For those who may not be familiar with the source Bill is quoting
from, it’s An Introduction to Cybernetics by W. Ross Ashby (1956). You
can find a full PDF version of this book on the web at
http://pespmc1.vub.ac.be/ASHBBOOK.html.

Bruce A.

[From Rick Marken (2009.10.05.1315)]

Bill Powers (2009.10.04.2303 MDT)--

I see a parallel problem in information theory. To achieve any given end, it
is not sufficient to have a certain quantity of information. It is necessary
to have exactly the right information, and the right information can't be
discovered by specifying the number of bits of it that must be present.

I take "the right information" to mean "the controlled perceptual
variable". This is the way I used the term "information" in my paper
on testing for the controlled variable in baseball catching research.
The paper was called " Optical Trajectories and the Informational
Basis of Fly Ball
Catching"(Journal of Experimental Psychology: Human Perception & Performance, 31
(3), 630 � 634) and the "informational basis" for catching consists of
the perceptual variables controlled by the fielder. So the question
was what information, in terms of a function of the optical image, is
controlled: vertical optical velocity, vertical optical acceleration,
linear optical trajectory, etc. It wasn't a question of how much of
this information the fielder gets; the question was about _what_ the
information is. And that's something that can be determined only by
testing for the controlled variable. Maybe if we called it the "test
for the controlled information" it would make the conventional
psychologists feel better.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.10.06.1405 MDT)]

Rick Marken (2009.10.05.1315) --

BP earlier: I see a parallel problem in information theory. To achieve any given end, it is not sufficient to have a certain quantity of information. It is necessary to have exactly the right information, and the right information can't be discovered by specifying the number of bits of it that must be present.

RM: I take "the right information" to mean "the controlled perceptual
variable". This is the way I used the term "information" in my paper
on testing for the controlled variable in baseball catching research. ...

It wasn't a question of how much of this information the fielder gets; the question was about _what_ the information is. And that's something that can be determined only by testing for the controlled variable. Maybe if we called it the "test for the controlled information" it would make the conventional
psychologists feel better.

BP: I was thinking more from the viewpoint of the controlling system, but along the same lines. "What the information is" invites classification ("it's information about where the baseball is"), whereas I was thinking that the information in the perception of the baseball's position has to be quantitative, so it can be compared with a reference perception to generate an error signal and thus produce the amount of output needed to keep the error as small as it is. Just having the right number of bits is useless unless each bit has the correct value and the bits are put in the right sequence; it's important that the most significant bit come first, and so on.

The "amount of information" is not, as I understand it, the same thing as the magnitude of the quantity being represented; rather, it's the number of bits in the representation regardless of their values, 1 or 0. An ASCII character requires 7 or 8 bits of information to specify it (depending on the character set in use), but saying that doesn't tell us what the character is. The meaning of a message depends on what the characters are; the amount of information in the message doesn't.

This may be the entire basis of the disagreement about "information about the disturbance in the perceptual signal." You and I naturally took this to be "information about the magnitude of the disturbance in the magnitude of the perceptual signal." And we both clearly saw that this is impossible: the magnitude of the controlled quantity is different from the magnitude of the disturbance by an amount that depends on the disturbance function and the magnitude of the output quantity as transformed through the feedback function, neither value being represented in the perceptual signal. So knowing only the value of the perceptual signal, the control system could not deduce what the value of the disturbance is. The external omniscient observer, of course, could do that (but would have no need to do it, being able to perceive all pertinent variables in the environment and the control loop). It is conceivable that the same number of bits of information is present in all the signals and quantities, but that doesn't confer the ability to say what the values of the bits are at any moment.

This is probably the basic reason why Shannon divorced information from meaning.

Best,

Bill P.

[Martin Taylor 2009.10.07.00.40]

[From Bill Powers (2009.10.06.1405 MDT)]

This is probably the basic reason why Shannon divorced information from meaning.

Actually, Shannon said that because the communication channel had to support the communication of meaning for all sorts of people and topics, the communication engineer must not consider meaning. That's a little different.

No time for more. Except to say that Shannon did deal with information and continuous signals. It's no more complicated than for discrete signals.

Martin

[From Arthur Dijkstra 2009.10.07.0858]

[From Bill Powers (2009.10.06.1405 MDT)]

Rick Marken (2009.10.05.1315) --

BP earlier: I see a parallel problem in information theory. To
achieve any given end, it is not sufficient to have a certain
quantity of information. It is necessary to have exactly the right
information, and the right information can't be discovered by
specifying the number of bits of it that must present.

AD: The remarks about the right information needed to achieve any given end
triggered me. I think I only get the feel for this discussion, but it makes
me think of the following requirements for effective control. These
requirements state that the channels carrying 'information' between the
controlled and controlling system must have:
1. Channel capacity: The channels must have sufficient capacity to 'transmit'
a given amount of information relevant to variety selection in a given time.
2. Change or transformation capacity: The channel must support the possible
states (variety) that can be distinguished within the 'information'.
3. Transduction capacity: Wherever the information on a channel capable of
distinguishing a given variety crosses a boundary, it undergoes
transduction; the variety of the transducer must be at least equivalent to
the variety of the channel.

RM: I take "the right information" to mean "the controlled perceptual
variable". This is the way I used the term "information" in my paper
on testing for the controlled variable in baseball catching research. ...

It wasn't a question of how much of this information the fielder
gets; the question was about _what_ the information is. And that's
something that can be determined only by testing for the controlled
variable. Maybe if we called it the "test for the controlled
information" it would make the conventional
psychologists feel better.

BP: I was thinking more from the viewpoint of the controlling system,
but along the same lines. "What the information is" invites
classification ("it's information about where the baseball is"),
whereas I was thinking that the information in the perception of the
baseball's position has to be quantitative, so it can be compared
with a reference perception to generate an error signal and thus
produce the amount of output needed to keep the error as small as it
is. Just having the right number of bits is useless unless each bit
has the correct value and the bits are put in the right sequence;
it's important that the most significant bit come first, and so on.

The "amount of information" is not, as I understand it, the same
thing as the magnitude of the quantity being represented; rather,
it's the number of bits in the representation regardless of their
values, 1 or 0. An ASCII character requires 7 or 8 bits of
information to specify it (depending on the character set in use),
but saying that doesn't tell us what the character is. The meaning of
a message depends on what the characters are; the amount of
information in the message doesn't.

This may be the entire basis of the disagreement about "information
about the disturbance in the perceptual signal." You and I naturally
took this to be "information about the magnitude of the disturbance
in the magnitude of the perceptual signal." And we both clearly saw
that this is impossible: the magnitude of the controlled quantity is
different from the magnitude of the disturbance by an amount that
depends on the disturbance function and the magnitude of the output
quantity as transformed through the feedback function, neither value
being represented in the perceptual signal. So knowing only the value
of the perceptual signal, the control system could not deduce what
the value of the disturbance is. The external omniscient observer, of
course, could do that (but would have no need to do it, being able to
perceive all pertinent variables in the environment and the control
loop). It is conceivable that the same number of bits of information
is present in all the signals and quantities, but that doesn't confer
the ability to say what the values of the bits are at any moment.

This is probably the basic reason why Shannon divorced information
from meaning.

Best,

Bill P.

Does this resolve some aspects of the issue being discussed?
Arthur

[From Bill Powers (2009.10.07.0908 MDT)]

Arthur Dijkstra 2009.10.07.0858

>>BP earlier: I see a parallel problem in information theory. To
>>achieve any given end, it is not sufficient to have a certain
>>quantity of information. It is necessary to have exactly the right
>>information, and the right information can't be discovered by
>>specifying the number of bits of it that must present.
>
AD: The remarks about the right information needed to achieve any given end
triggered me. I think I only get the feel for this discussion, but it makes
me think of the following requirements for effective control. These
requirements state that the channels carrying 'information' between the
controlled and controlling system must have:
1. Channel capacity: The channels must have sufficient capacity to 'transmit'
a given amount of information relevant to variety selection in a given time.
2. Change or transformation capacity: The channel must support the possible
states (variety) that can be distinguished within the 'information'.
3. Transduction capacity: Wherever the information on a channel capable of
distinguishing a given variety crosses a boundary, it undergoes
transduction; the variety of the transducer must be at least equivalent to
the variety of the channel.

....

Does this resolve some aspects of the issue being discussed?

BP: These may all be actual requirements for control, but nobody could build a control system given only these requirements. When I say "right information" I mean the right magnitudes and rates of change of magnitudes of the signals (including sign) -- and a lot more than that. The three requirements you mention, as I interpret them, can be cast in more traditional terms:

1. Channel capacity --> bandwidth. In the frequency domain, the channel must be able to pass a signal with a frequency range from zero (or some lower limit) up to some maximum frequency (see the sketch after this list).

2. Change or transformation capacity ---> dynamic range. The channel must be able to handle variations in signal magnitude from some minimum to some maximum. This is usually a matter of signal-to-noise ratio: the smallest signals can't be distinguished from the noise level, so can't be transmitted accurately. This also determines the minimum difference between two magnitudes that can be accurately detected.

3. Transduction capacity ---> input and output amplifier characteristics. The gain and phase shift of input and output amplifiers must be sufficient to reproduce accurately all waveforms passing through the transducers.
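
The sketch promised under item 1 (a toy example with made-up constants,
not anything from the posts above): one and the same leaky-integrator
controller resists a disturbance well inside its bandwidth and barely
touches one outside it.

  import math

  def residual(freq_hz, gain=100.0, dt=0.001, seconds=10.0):
      o, worst = 0.0, 0.0
      n = int(seconds / dt)
      for i in range(n):
          d = math.sin(2.0 * math.pi * freq_hz * i * dt)   # disturbance
          qi = o + d                    # controlled quantity
          e = 0.0 - qi                  # reference is zero
          o += dt * (gain * e - o)      # leaky-integrator output
          if i > n // 2:                # measure after settling
              worst = max(worst, abs(qi))
      return worst

  print(round(residual(0.1), 3))    # ~0.01: slow disturbance, well controlled
  print(round(residual(50.0), 3))   # ~0.95: fast disturbance, control fails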

These requirements were all well-known long before information theory came into being. They apply to control systems, radio and other transmitters and receivers, and even mechanical systems. The only new aspect in Ashby's way of expressing the requirements is the introduction of the measure called variety or information, which is basically the log to the base 2 of the number of distinguishable states. The connection of this measure to ideas like "entropy" is a metaphor, not a physical principle. The fact that two physical processes are described by the same form of an equation does not mean that they have anything else in common. The motion of a mass on a spring with frictional losses is described by the same second-order differential equations that describe the behavior of a somewhat underdamped negative feedback control system, but there is no other connection between the mass and the control system. We can say that the control system behaves like a mass on a lossy spring, but that does not mean the control system is a mass on a lossy spring. It isn't.

The main shortcoming of these three requirements is that they don't give a clue as to how to build any particular device or system. It's as if the specification manages to focus on peripheral, though necessary, aspects of a system design while leaving out all of the essential details. It's like analyzing a racing car and informing everyone that one basic requirement for achieving maximum speed is that the interface between car and the road has to have a minimum flattening of the supporting arc to produce equilibrium between gravitational and pneumatic forces. The mechanic standing around while this guy generalizes might then explain, "He means you have to pump up the tires." That is true, but it doesn't take you very far toward understanding what makes a racing car go.

Information theory or the law of requisite variety doesn't take you very far toward understanding how a control system works. It's a bit like reminding the engineer designing the car not to forget to put gas in the tank.

Best,

Bill P.