Kennaway paper, information

[From Rick Marken (970418.2340)]

Me:

If I am not able to design an experiment so that it produces clear
and consistent results, I give up on the experiment until a
cleverer person comes along and does it right.

Bruce Abbott (970418.1100 EST)--

Well, that explains why PCT has not advanced beyond simple
tracking experiments in over 25 years. You leave all the
really hard work to others -- and there ain't no others.

How true. But I reassure myself with the knowledge that
reinforcement theory has not advanced beyond simple feeding experiments
in over 60 years.

Me:

Even when there is an observed correlation between d and p it is
still not correct to say that p reduces the system's uncertainty
about d.

Bruce Abbott (970418.2005 EST) --

there is no use arguing about it; it is an empirical question. We
could answer it with appropriate data from a simple tracking study.

I doubt it. We did such studies a couple years ago; Martin is convinced
that they showed that there is information about the disturbance in the
perceptual signal; I (and I think Bill and Tom)
am convinced that they showed that there is no such information in
the perceptual signal.

We can design such studies again if you like. But I think it
might be nice to first discuss a higher level question: Why should
anyone care whether or not there is information about the disturbance in
perception? If there is such information it is certainly not
clear what it would be doing there. If, as you say, the information
about the disturbance is proportional to the correlation between
disturbance and perceptual signal (higher correlation means more
information) then the _better_ the system controls, the _less_
information there is about the disturbance. This is a peculiar
relationship if the information about the disturbance is used by
the control system as the basis for generating the outputs that counter
that disturbance; the more information there is about the disturbance,
the more poorly the system does at generating outputs that counter the
disturbance.

It may be, however, that the information about the disturbance is
something that control systems don't want. All control systems might
have a reference level of 0 for the amount of information about the
disturbance that they want to receive. This seems to be Martin's
position and I have to agree with Bruce Gregory that it seems awfully
strange. It is certainly something I have never heard in the literature
on control theory -- even the literature on MCT.

I can tell you why I _don't_ like the idea that there is information
about the disturbance in perception. I don't like it because it
sounds to me like an attempt to conceptualize control in S-R terms.
The information-based models of control that I have seen assume that
information about the disturbance is an aspect of the sensory input
to the system. This information _guides_ the outputs that compensate
for that disturbance. This conception of the control process is
consistent with the fundamental assumption of conventional psychology --
that inputs guide or cause outputs. It is completely inconsistent with
the way control systems actually work. A control system guides
or _controls_ its input; inputs don't guide or control the control
system.

Of course, there might actually be information about the disturbance in
perception. If so, then I wonder what it is there for? If there
is information in the perceptual signal then how can it be anything
other than a side effect -- like the buzz that a neuron makes when
it is hooked up to an amplifier?

I note that you had _no_ comment about my claim that a system
provided with a sensor to detect the effect of its own actions on
its perception of the CV would be capable of providing almost
full information about the disturbance. Don't want to admit I
have it right?

Both Bill and I noted that if a system (other than the one doing the
controlling) has access to both the perceptual signal, p, and output
variable, o (assuming all the other functions are unity multipliers), then
it would be trivially easy for that system to deduce the disturbance, since
p = o + d and hence d = p - o. The system doing the controlling
could compute the estimated value of d too, of course, and everything
would be fine as long as the system "provided" this information to some
other system and didn't use it itself as its perceptual or
output signal.

Of course, in a real situation, you would have to know a lot more
than just the output signal to deduce the disturbance variable(s);
you would have to know the feedback function, the number of
disturbing variables influencing the controlled variable and the
function relating EACH disturbing variable to the controlled
variable.
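
Rick's unity-multiplier point can be sketched in a short simulation (mine,
not from the thread; the gain, time step, and sine disturbance are
illustrative assumptions). An observer with access to both p and o recovers
d as p - o; with an unknown, non-unity Fd, the same subtraction would only
recover the contribution Fd(qd), never qd itself.

```python
import math

def run_loop(gain=50.0, dt=0.01, steps=2000, r=0.0):
    """Unity Fi, Fe, Fd tracking loop: qi = o + d and p = qi."""
    o, log = 0.0, []
    for t in range(steps):
        d = math.sin(0.5 * t * dt)      # slowly varying disturbance
        p = o + d                       # perceptual signal (unity functions)
        log.append((d, p, o))           # o recorded is the o that formed p
        o += gain * (r - p) * dt        # integrating output function
    return log

log = run_loop()
# An outside observer deduces d from p and o alone: d = p - o.
worst = max(abs((p - o) - d) for d, p, o in log)
print(worst < 1e-9)
```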

Best

Rick

[Martin Taylor 970419 15:00]

Rick Marken (970418.2340) (the "Me:" of what follows) --

I didn't want to get back into the information theory discussion, because
it's a fruitless endeavour until I or someone else can do as Bill Powers
asks, and provide a complete information analysis of the control loop.
However:

Me:

Even when there is an observed correlation between d and p it is
still not correct to say that p reduces the system's uncertainty
about d.

Bruce Abbott (970418.2005 EST) --

there is no use arguing about it; it is an empirical question. We
could answer it with appropriate data from a simple tracking study.

I doubt it. We did such studies a couple years ago; Martin is convinced
that they showed that there is information about the disturbance in the
perceptual signal; I (and I think Bill and Tom)
am convinced that they showed that there is no such information in
the perceptual signal.

The study in question was the following.

Question: Does the perceptual signal carry any information about the
varying waveform of the disturbing influence on the controlled environmental
variable?

Assertion: Properties that do not vary carry no information about
anything that does vary.

Experiment: If, by using only properties that do not vary, together with the
perceptual signal, one can construct a waveform highly correlated with the
waveform of the disturbing influence, then the perceptual signal necessarily
carries information about the waveform of the disturbing influence.

These propositions seem incontrovertible. The output function (including the
environmental feedback function, as is ordinarily done) does not vary--or
at least if it does, it does not do so in any relation with the perceptual
function. What was demonstrated was that by applying the output function
to the perceptual signal, one could construct a duplicate of the disturbance
waveform--at least to the accuracy with which the control system controlled.
This fact seemed to be so self-evident that no demonstration should be
necessary. But it proved necessary, because Marken at the time believed
both (a) that no information about the disturbing waveform passed through
the perceptual signal, and (b) that the experiment would fail.

Marken asserted that we should go ahead and try the demonstration, and we
would see that the disturbance waveform would NOT be reconstructed. But
when the obvious proved to be the truth, what had to go was the notion
that the experiment was correctly stated, not his preconceptions as to what
ought to have happened. He now had to assert that since there is no
information about the disturbing waveform in the perceptual signal (from
prior knowledge), therefore reconstruction of it from the perceptual
signal plus some invariants could not be evidence to the contrary. A
strongly controlled perception resists even the strongest disturbances:-(

As Marken now says, he is convinced that the ability to reconstruct the
disturbance waveform from the perceptual signal, using a constant
function, is evidence that the perceptual signal conveys no information
(Nada) about the disturbance waveform.

Go figure.

The answer probably lies in:

I can tell you why I _don't_ like the idea that there is information
about the disturbance in perception. I don't like it because it
sounds to me like an attempt to conceptualize control in S-R terms.

Yep, so it is--just as is the idea that one can use the Laplace transform
of the different parts of the loop to analyze the loop behaviour;-)
Too bad that the Laplace transform approach works (for linear systems).
It's S-R in exactly the same way. A varying waveform is applied to the
input of the loop component, the transform of the components is multiplied
by the transform of the waveform, and out comes--hey presto S-R--the
transform of the waveform at the output of the component. Put them all
together and one has a loop! How did that happen with all these S-R ideas
floating around?

Don't be so scared, Rick. Loops are loops. They are made of bits and
pieces, sensors and connectors, functions and components--label them
how you will. It's the connections that make the loop out of the bits.
The loop doesn't function any less as a loop because it is made of S-R
components.

Of course, there might actually be information about the disturbance in
perception. If so, then I wonder what it is there for?

Well, a bit of a concession, perhaps. Let me guess at an answer: maybe
because it allows control to happen?


-----------------

Bruce Abbott (970418.1440 EST)

If there is "information in the perceptual signal about the disturbance,"
what exactly does this mean? It means that knowing the value of the
perceptual signal would reduce uncertainty as to the value of the
disturbance. In other words, there would be some variation in the
perceptual signal that is correlated with variation in the disturbance.

Remember that this is taken over time, not sample by sample. The variations
in the perceptual signal, in a control system, depend not only on the current
value of the disturbance, but on that together with all past values of the
disturbing influence. That's one reason why the correlation between
perceptual signal "now" and disturbance "now" is so low. The current
value of the disturbing influence is only a minor component of the effect
of the disturbing influence on the perceptual signal (like P1 of your
four independent variables that affect the test variable, but more so).

In a perfect control system, this correlation would be zero, because any
effect of the disturbance on the CV would be instantly and perfectly
countered by the system's action. Thus, no information about the
disturbance would be "passed" by the perceptual function.

No. That's quite wrong. It's a limit process, and what you are doing is
saying 0/0 = 0, rather than saying 0/0 = <limit of whatever process
gave rise to the values that approached zero>. So long as the reference
value doesn't change, and the output (including environmental feedback
function) doesn't change, ALL the fluctuations in the perceptual signal
are due to fluctuations in the disturbance, present and historical. If
control is not very good, much of the present value of the disturbing
influence will be represented in the value of the perceptual signal. As
control gets better and better, the perceptual fluctuations reflect more
and more the history, which is in the output signal that counters the
disturbing influence. Less and less does the perceptual signal vary, and
less and less do those variations correspond to what the disturbing variable
is doing "now".

Keeping with the situation in which the form and parameters of the control
loop don't change, and the reference value stays constant, the only
possible cause of any fluctuation in the perceptual signal is the disturbance.
The form of these fluctuations is very much affected by the parameters of
the loop, but they are fixed, and therefore don't appear in the information
analysis. Informationally, the perceptual signal is therefore completely
determined by the disturbance waveform, even though the correlation between
perceptual signal waveform and disturbance waveform approaches zero in the
limit of perfect control. The information about the disturbing waveform
passed by the perceptual signal approaches perfection to the degree that
control does.
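
The limit process Martin describes can be illustrated numerically (my
sketch; the integrating loop, the gains, and the waveform are assumptions,
not anything from the thread): as loop gain rises and control improves, the
sample-by-sample correlation between p and d falls toward zero.

```python
import math

def sim(G, dt=0.01, steps=4000):
    """Unity-function loop with integrating output, reference r = 0."""
    o, integ, ps, ds = 0.0, 0.0, [], []
    for t in range(steps):
        d = math.sin(0.3 * t * dt) + 0.5 * math.cos(1.1 * t * dt)
        p = o + d
        ps.append(p); ds.append(d)
        integ += -p * dt
        o = G * integ
    return ps, ds

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

results = {}
for G in (0.2, 2.0, 20.0):
    ps, ds = sim(G)
    results[G] = corr(ps, ds)
    print(G, results[G])    # "now" vs "now" correlation shrinks with gain
```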

Martin

[From Bill Powers (970419.1830 MST)]

Just got back from a quick two-day trip to Grand Junction to get some
warranty work done on my laptop, and see some new parts of Colorado. Now I
find that my modem wasn't transferred to the new case, so I'll have to use
Mary's computer for a few days, which entails writing into my laptop files
and then transferring them to the other computer for transmission. Yech.

Also, I'm pooped, so I won't try to comment on everything, or anything much.
I just sent a post I composed yesterday morning but couldn't send because my
mailbox was hung up.

RE: Martin Taylor 970419 15:00

The whole "info in p about d" thing started when I said that the neat thing
about a control system is that it can correct the effects of disturbances
without knowing what is causing them. I thought "disturbance" would be
unambiguous because of the way I have always diagrammed the basic unit of
control:

                            r
                            |
                  p ---->[Comp] ---- e
                  |                  |
                [Fi]               [Fo]
                  |                  |
                  qi<----[Fe]------- qo
                  ^
                  |
                   --<--[Fd] <---- qd

where
qi = input quantity ("controlled variable")
qo = output quantity
qd = disturbing quantity or "disturbance" for short
Fi = input function
Fo = output function
Fe = environmental (feedback) function
Fd = disturbance function

Clearly, any number of combinations of Fd and qd could produce the identical
contribution to the state of qi. It is not necessary, of course, for the
control system or its designer to know just how disturbances come to affect
qi -- all that matters to the control system is the state of qi, as
represented by p.

Consider that the control system is in equilibrium. The reference signal is
constant, qd is zero and Fd(0) = 0. The perceptual signal p will be nearly
equal to r, and e will drive the output quantity qo to maintain p as it is.

Now let the disturbance qd change to a nonzero value. This will cause an
error which will produce output that brings the error back toward zero. The
input quantity will initially change its value, and then that value will
return almost to its original measure.

Obviously, the initial change in output was caused by a change in the
perceptual signal, which in turn was caused by a change in the disturbance
qd. So you might say that information about the disturbance got into the
system and as a result the output action changed. But using the definitions
above, this is not correct. What has entered the system via the input
function is information about _a perturbation of qi_. It is easy to show
that this is not information about qd.

Consider the following table showing the value of the disturbing variable,
the form of the disturbance function, the contribution to the state of qi,
and the change of output needed to cancel the effect of the disturbance:

   qd      Fd              contribution    approx change in output
    1      qd^2                 1                    -1
    3      qd^3 - 26            1                    -1
   -4      -qd/4                1                    -1
   pi      -cosine(qd)          1                    -1

The contribution of qd to the value of qi would be exactly the same in each
case. The action of the control system would reduce this contribution nearly
to zero, in exactly the same way in each case. Since the variations in qd
and Fd make exactly no difference in the way the control system acts, it is
clear that the control system is getting no information about either qd or
Fd -- nor does it need any.

So what IS it getting information about? A perturbation in qi, and that is
all. The control system can't distinguish between different causes of
perturbations; if the perturbation is the same on several occasions but the
cause is different, the control system will not behave differently.
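
Bill's table can be checked directly (a sketch of mine; the loop gain and
step count are arbitrary, and the cosine row's sign is chosen here so that
its contribution equals 1 like the others, since cos(pi) itself is -1):
each (qd, Fd) pair contributes the same value to qi, so the simulated loop
produces the same perceptual trace in every case.

```python
import math

# Four different (qd, Fd) pairs, each contributing 1 to qi.
cases = [
    (1.0,     lambda q: q ** 2),
    (3.0,     lambda q: q ** 3 - 26),
    (-4.0,    lambda q: -q / 4),
    (math.pi, lambda q: -math.cos(q)),   # sign chosen so Fd(qd) = 1
]

def respond(contrib, gain=50.0, dt=0.01, steps=500):
    """Perceptual trace of a unity-function loop hit by a step perturbation."""
    o, trace = 0.0, []
    for _ in range(steps):
        p = o + contrib          # the loop sees only Fd(qd), never qd
        trace.append(p)
        o += gain * (0.0 - p) * dt
    return trace

traces = [respond(Fd(qd)) for qd, Fd in cases]
same = all(max(abs(a - b) for a, b in zip(t, traces[0])) < 1e-9
           for t in traces[1:])
print(same)   # identical responses: nothing inside depends on qd or Fd
```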

All this seems simple and straightforward to me. Why did it become so
tangled? Because Martin initially interpreted "disturbance" to mean not qd,
but the contribution of qd (via Fd) to the state of qi. OF COURSE, if the
control system is to react properly to the externally-induced change in qi,
it must receive information about the change in qi. It receives this
information in the form of a change in the perceptual signal p; when qi
changes, p changes. This information is used in the comparator, which
changes the value of the error signal when p changes relative to the
reference signal. The change in the error signal is what causes the change
in the output quantity. And the change in the output quantity, acting
through the environmental feedback function Fe, produces a contribution to
the state of the input quantity that is equal and opposite to the
contribution from the disturbance acting through the disturbance function.
None of this requires any knowledge of qd or Fd.

How did qd come to be confused with Fd(qd)? It was my fault. For pedagogical
reasons, I almost always assume that both Fd and Fe are simple unity
multipliers. In our tracking experiments and other control experiments, we
define the functions and variables so we can assume unity multipliers. Among
other advantages of doing this, the data showing the results from tracking
experiments then always show a familiar symmetrical "mirroring" of the
disturbance by the control system's output, which we have called the
"butterfly" figure, from the kind of figure that frequently gets this name
in a Rorschach test.

But this "mirroring" exists only when Fd and Fe are both simple constants of
proportionality, preferably equal. It would have been simple to eliminate
this mirroring effect, by choosing a different Fd. For example, suppose that
Fd had been k*qd*sin(exp(qd)), and Fe, as before, a multiplier of 1. The
control system would continue to work just as before, but now there would be
no nice accurate mirroring of qd by qo. But why should I have done this,
when that complex function k*qd*sin(exp(qd)) means nothing with respect to
the operation of the control system? All that counts is the _contribution_
to the state of qi, the _value of the function_. And by using a simple pair
of unity multipliers, we could see how the output was responding to the
disturbance -- alas for clarity.

If I had used this strange function or anything else but simple small
constants of proportionality, the significance of the fact that the output
mirrors the disturbance would have been seen immediately: there is no
significance in this fact -- it is purely an artifact of my choice of Fd and
Fe. The question of "reconstructing" the waveform of the disturbing quantity
would never have arisen -- in the case above, because no reconstruction is
even possible, the function not having any tractable or unique inverse. This
is of no consequence in the operation of the control system, but it makes a
large difference in preventing confusion of the type we have been struggling
with for the last few years.
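
The no-mirroring claim can be simulated as well (my sketch; k, the loop
gain, and the qd waveform are arbitrary choices): with Fd(qd) =
k*qd*sin(exp(qd)), the output closely mirrors the contribution Fd(qd) but
not qd itself, so the "butterfly" disappears while control is unimpaired.

```python
import math

k, G, dt = 1.0, 50.0, 0.01
Fd = lambda q: k * q * math.sin(math.exp(q))   # Bill's strange Fd

o, integ = 0.0, 0.0
qd_rec, fd_rec, o_rec = [], [], []
for t in range(4000):
    qd = 1.5 * math.sin(0.3 * t * dt)      # slow disturbing quantity
    contrib = Fd(qd)                       # what actually reaches qi
    p = o + contrib                        # unity Fe and Fi
    qd_rec.append(qd); fd_rec.append(contrib); o_rec.append(-o)
    integ += -p * dt                       # r = 0
    o = G * integ

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

print(corr(o_rec, fd_rec))   # high: output mirrors the contribution Fd(qd)
print(corr(o_rec, qd_rec))   # much lower: it does not mirror qd itself
```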

What all this boils down to is that Rick Marken is literally and exactly
right when he says that the information about the disturbance in the
perceptual signal -- or anywhere else in the control system -- is zero, nil,
nada, none. Rick is using the original definition of "disturbance" which has
been used consistently and invariably throughout all technical discussions
on CSGnet since it started: "the disturbance" means qd. All technical
discussions, that is, except those that Martin Taylor introduced, where he
changed the meaning of "the disturbance" from qd (the value of the variable
qd) to Fd(qd) (the value of the function Fd(qd)). If we simply go back to
the original definition of "disturbance" as a technical term in PCT, this
entire argument should disappear.

Best,

Bill P.

[Martin Taylor 970420 23:30]

Bill Powers (970419.1830 MST) --

So what IS it getting information about? A perturbation in qi, and that is
all. The control system can't distinguish between different causes of
perturbations; if the perturbation is the same on several occasions but the
cause is different, the control system will not behave differently.

All this seems simple and straightforward to me.

And has ANYONE ever thought differently?

Why did it become so
tangled? Because Martin initially interpreted "disturbance" to mean not qd,
but the contribution of qd (via Fd) to the state of qi.

Only because it was so clear that it is impossible for the control system
to react to anything else, that the notion of "disturbance" as meaning
something of which the control system could know nothing was a complete
surprise. And, I thought then and think now, completely illogical as a
word usage. But when you made it clear that this nonsense is what you
mean by "disturbance", of course I accepted it and tried to use the
longer form (when I didn't forget). But it is absurd in talking about the
signals that occur in a control loop, to use a significant term to refer
to something that has no relevance to the control loop.

OF COURSE, if the
control system is to react properly to the externally-induced change in qi,
it must receive information about the change in qi.

That's all we have ever talked about, and you know it.

It receives this
information in the form of a change in the perceptual signal p; when qi
changes, p changes. This information is used in the comparator, which
changes the value of the error signal when p changes relative to the
reference signal. The change in the error signal is what causes the change
in the output quantity. And the change in the output quantity, acting
through the environmental feedback function Fe, produces a contribution to
the state of the input quantity that is equal and opposite to the
contribution from the disturbance acting through the disturbance function.
None of this requires any knowledge of qd or Fd.

Right. This paragraph allows us to get back to the program we once upon a
time attempted.

Rick is using the original definition of "disturbance" which has
been used consistently and invariably throughout all technical discussions
on CSGnet since it started: "the disturbance" means qd.

I think you are being very kind to Rick, but I don't mind, if he agrees
that this is the reason he has sustained his objections despite knowing
very well that qd has never been the object of discussion.

All technical
discussions, that is, except those that Martin Taylor introduced, where he
changed the meaning of "the disturbance" from qd (the value of the variable
qd) to Fd(qd) (the value of the function Fd(qd)). If we simply go back to
the original definition of "disturbance" as a technical term in PCT, this
entire argument should disappear.

Well, at least the nonsense of saying that "there's no information about
the disturbance (qi) in the perceptual signal" should disappear.

Thanks for saying this. It makes life much easier. But I'd like to make
a plea for you to return to what you once said you would do: talk about
qd as "the disturbing variable" rather than "the disturbance." qi is what
affects the control system, and it would be much easier if we could
use "the disturbance" generically to refer to something that is part
of the control diagram, rather than to something that isn't.


-------------
Having said all that, I should point out that if x = f(y) where f is a
single-valued function, then y carries perfect information about the
fluctuations in x; and if f is also invertible, then x carries perfect
information about the fluctuations in y. So, if the perceptual signal
passes any information about qi, it passes the same amount of information
about qd, provided Fd is invertible. What it would carry no information
about is Fd itself.
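
A toy case of the invertibility caveat (mine, not Martin's; the gain and
values are arbitrary): with a non-invertible Fd such as Fd(q) = q^2, the
disturbing quantities qd = +2 and qd = -2 yield identical perceptual
histories, so p cannot carry information distinguishing them.

```python
def run(contrib_series, gain=50.0, dt=0.01):
    """Perceptual history of a unity Fe/Fi loop given a contribution series."""
    o, ps = 0.0, []
    for c in contrib_series:
        p = o + c
        ps.append(p)
        o += gain * (0.0 - p) * dt
    return ps

Fd = lambda q: q * q          # non-invertible disturbance function
qd_a = [2.0] * 300
qd_b = [-2.0] * 300
pa = run([Fd(q) for q in qd_a])
pb = run([Fd(q) for q in qd_b])
print(pa == pb)   # prints True: p cannot distinguish qd = +2 from qd = -2
```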

Martin

[Martin Taylor 970421 13:00]

Bill Powers (970419.1830 MST) --

What all this boils down to is that Rick Marken is literally and exactly
right when he says that the information about the disturbance in the
perceptual signal -- or anywhere else in the control system -- is zero, nil,
nada, none. Rick is using the original definition of "disturbance" which has
been used consistently and invariably throughout all technical discussions
on CSGnet since it started: "the disturbance" means qd.

I find this very hard to reconcile with Rick Marken (970418.0900 PDT), in
which "disturbance" seems very clearly to mean the effect of the disturbing
influence on the controlled environmental variable in the absence of the
control system output:

The perceptual function transforms sensory input (s) into a perceptual signal
(p): p = f(s). Assuming the simplest case, where the sensory variable is the
additive result of disturbance and output (as in a tracking task), p = f(o+
d). So I interpret the statement above to mean that the perceptual function,
f(), passes information about d to somewhere (the comparator?). I presume
that this information, once "passed", is carried by some medium (p?) to its
destination. So when you say that there is "imperfect information about the
disturbance passed by the perceptual function" I can't help but think you are
saying that there is information about the state of d in the state of p. But
it is easy to show that there is absolutely no information about d
in p; knowing p tells you nothing about the state of d.

I think you'd better find some other way to exculpate Rick:-)
If he wants to be, that is.

Martin

[From Rick Marken (970421.1100)]

Bill Powers (970419.1830 MST) --

What all this boils down to is that Rick Marken is literally and
exactly right...Rick is using the original definition of
"disturbance"..."the disturbance" means qd.

Martin Taylor (970421 13:00) --

I find this very hard to reconcile

I bet;-)

with Rick Marken (970418.0900 PDT), in which "disturbance" seems
very clearly to mean the effect of the disturbing influence on
the controlled environmental variable in the absence of the
control system output:

How do you get that? I said:

...the sensory variable is the additive result of disturbance and
output (as in a tracking task), p = f(o + d).

Clearly, the word "disturbance" refers to d, which is a variable, not a
"disturbing influence" on the controlled variable. The variable
d is exactly equivalent to qd, as Bill says.

I think you'd better find some other way to exculpate Rick:-)

I think Bill would rather scalp me than exculpate me any day.

Richard Kennaway (970421.1615 BST) --
Richard Kennaway (970421.1735 BST) --
Richard Kennaway (970421.1610 BST) --

I love you, Richard Kennaway!

Best

Rick

[From Bill Powers (970421.1415 MST)]

Martin Taylor 970421 13:00--

I find this very hard to reconcile with Rick Marken (970418.0900 PDT), in
which "disturbance" seems very clearly to mean the effect of the disturbing
influence on the controlled environmental variable in the absence of the
control system output:

[Citing Rick]

The perceptual function transforms sensory input (s) into a perceptual
signal (p): p = f(s). Assuming the simplest case, where the sensory
variable is the additive result of disturbance and output (as in a
tracking task), p = f(o+d).

I think you'd better find some other way to exculpate Rick:-)
If he wants to be, that is.

You missed the fact that Rick is using the simple case where the disturbance
function and the environmental feedback function are both unity multipliers,
as well as using a unity multiplier as the input function Fi. The general
case would be

p = Fi[f(o) + g(d)]

When you use unity multipliers, the contribution of the disturbance d to the
state of the controlled quantity is simply the magnitude of d. However, in
the general case the contribution is g(d), where g is the disturbance
function. In that case, the effect of d is not the same as the magnitude of
d, and it is impossible to deduce the state of d from knowledge of the
variables and functions in the control loop. You must also know the form of
g, which is not deducible from anything inside the control system. In the
general case, there is no information in the control system about d. Rick is
still right.

Best,

Bill P.

[Martin Taylor 970422 00:00]

Rick Marken (970421.1100) --

Bill Powers (970419.1830 MST) --

What all this boils down to is that Rick Marken is literally and
exactly right...Rick is using the original definition of
"disturbance"..."the disturbance" means qd.

Martin Taylor (970421 13:00) --

I find this very hard to reconcile

I bet;-)

with Rick Marken (970418.0900 PDT), in which "disturbance" seems
very clearly to mean the effect of the disturbing influence on
the controlled environmental variable in the absence of the
control system output:

How do you get that? I said:

...the sensory variable is the additive result of disturbance and
output (as in a tracking task), p = f(o + d).

Clearly, the word "disturbance" refers to d, which is a variable, not a
"disturbing influence" on the controlled variable. The variable
d is exactly equivalent to qd, as Bill says.

I think you'd better look at Bill's diagram, in which he makes quite clear
that the "disturbance" (qd) acts through the disturbance function, to
form Fd(qd), which is what acts on the controlled environmental variable
qi. qi = Fd(qd) + Fe(qo). Your "d" is my "disturbing influence", NOT Bill's
"disturbance". It may be your "disturbance", and I do prefer that usage.

But your "d" is not Bill's "qd". Here's the diagram as Bill posted it:


--------------------------------

                            r
                            |
                  p ---->[Comp] ---- e
                  |                  |
                [Fi]               [Fo]
                  |                  |
                  qi<----[Fe]------- qo
                  ^
                  |
                   --<--[Fd] <---- qd

where
qi = input quantity ("controlled variable")
qo = output quantity
qd = disturbing quantity or "disturbance" for short
Fi = input function
Fo = output function
Fe = environmental (feedback) function
Fd = disturbance function
-----------------------------

You don't even see the difference between qd and the variable that is
added to the output effect on qi. How can you possibly make coherent
statements about what conveys information about what? -- Oh, sorry, I
forgot. You don't do that in any case, do you? ;-)

I think Bill would rather scalp me than exculpate me any day.

Yeah--for a really nice guy, your scalp sometimes looks mighty tempting:-)

Martin

[From Bill Powers (970422.0743 MST)]

Martin Taylor 970422 00:00--

Rick said:

Clearly, the word "disturbance" refers to d, which is a variable, not a
"disturbing influence" on the controlled variable. The variable
d is exactly equivalent to qd, as Bill says.

And you said:

I think you'd better look at Bill's diagram, in which he makes quite clear
that the "disturbance" (qd) acts through the disturbance function, to
form Fd(qd), which is what acts on the controlled environmental variable
qi. qi = Fd(qd) + Fe(qo). Your "d" is my "disturbing influence", NOT
Bill's "disturbance". It may be your "disturbance", and I do prefer that
usage.

You have misread Rick's equation. He said

p = f(o+d)

The only possible interpretation of "f" is that it is the perceptual input
function: in other words, he was saying

qi = o+d
p = f(qi)

So he was using the very special case in which the disturbance function and
the environmental feedback function are unity multipliers. That is the only
case in which the perturbation of qi (with qo constant) is equal to d.

I noticed a reference to a message I may have missed, in which you say that
there will be information about qd in p if Fd is single-valued and has an
inverse, or something like that. This is not a requirement that must be met
if the control system is to counteract the effect of qd. There is no need
for Fd to have an inverse or to be single-valued; this makes absolutely no
difference in the operation of the control system.

Best,

Bill P.

[Martin Taylor 970422 13:30]

Bill Powers (970422.0743 MST) --

Martin Taylor 970422 00:00--

Rick said:
>>Clearly, the word "disturbance" refers to d, which is a variable, not a
>>"disturbing influence" on the controlled variable. The variable
>>d is exactly equivalent to qd, as Bill says.

And you said:
>I think you'd better look at Bill's diagram, in which he makes quite clear
>that the "disturbance" (qd) acts through the disturbance function, to
>form Fd(qd), which is what acts on the controlled environmental variable
>qi. qi = Fd(qd) + Fe(qo). Your "d" is my "disturbing influence", NOT
>Bill's "disturbance". It may be your "disturbance", and I do prefer that
>usage.

You have misread Rick's equation. He said

p = f(o+d)

The only possible interpretation of "f" is that it is the perceptual input
function: in other words, he was saying

qi = o+d
p = f(qi)

How have I misread it? That's exactly what I pointed out that he was saying!
That's WHY I said that your argument about qd being possibly different from
the variable that influenced qi was different from his argument!!!

So he was using the very special case in which the disturbance function and
the environmental feedback function are unity multipliers. That is the only
case in which the perturbation of qi (with qo constant) is equal to d.

True, but irrelevant. His (wrong) point is that when there is a summation,
information about the sum conveys no information about the components of
the sum. What's that got to do with the irrelevant qd? Why is it relevant
that qd = Fd(qd) if Fd() is the unity transform? You have a numerical
equality, not a conceptual equivalence.

He's insisted upon his point three times at least in the last couple of days.
It's a good friend who tries so hard to get him off the hook, but it would
be a better friend who tried equally hard to get him to see the problem.

I noticed a reference to a message I may have missed, in which you say that
there will be information about qd in p if Fd is single-valued and has an
inverse, or something like that. This is not a requirement that must be met
if the control system is to counteract the effect of qd. There is no need
for Fd to have an inverse or to be single-valued; this makes absolutely no
difference in the operation of the control system.

Who said it did? I've always maintained that it didn't, haven't I? qd is
not an input to the control system. Fd(qd) is such an input, no matter
what kind of function Fd() might be, single or multiple-valued, differentiable
or not, time-varying or not. Who cares, when we are analyzing THE CONTROL
SYSTEM?

This is REALLY getting out-of-hand ridiculous :-(

What I DID say was that IF you have a function that is single-valued and
so is its inverse, THEN the argument and the function value have a one-to-one
relationship and observation of one is informationally equivalent to
observation of the other. Whether Fd() has those properties is of little
interest, in general. If it does, then if you know Fd(qd), it is
informationally equivalent to qd, but so what when you are interested in the
behaviour of the control system?

I did NOT say (and never have said, despite all your attempts over the
years to persuade people to the contrary) that something that is neither
an input to nor a property of a control system has to be known for the
control system to be analysed. Fd() is neither an input to, nor a property
of, the control system. qd is neither an input to, nor a property of,
the control system. Neither should be shown in any diagram of the control
system, except when it is necessary to emphasize that there can be all
sorts of different sources of influence on the controlled environmental
variable and hence on the controlled perceptual signal. What should be
shown is the input Fd(qd), which I labelled "xd" in my amendment to your
diagram. "xd" is in the spirit of "r", which also is a function of potentially
many input variables. But we don't hear all these irrelevancies about the
control system not having to know which higher-order control systems
contribute what to the reference signal. Why do we have to keep putting
up with hearing about them in respect to the disturbance input?

            ^               |
          p |               | r
            |               V
            +-------C-------+
 perceptual |  comparator   |
   signal   ^               V
          p |               | error signal e
          input           output
         function        function
            |               |
  ==========^===============V=================
    sensory |               | output signal o
     signal |               |
        s   |  environmental|
            +-<--feedback--<+
            ^    function
disturbance |
          d |

That's all you need, at least until you start adding models, imagination
loops, and the like. No Fd(), no qd. I've used single letters for all the
variables here, but I'm not trying to usurp your right to name your variables
how you please. These are variable names we have used before, without
confusion. Relating them to your names, p = p, s = qi, d = Fd(qd), o = qo.

I vote for a campaign for the elimination of red herrings (except as a
component of a delicious breakfast:-)

Martin

[From Bill Powers (970422.1622 MST)]

Martin Taylor 970422 13:30]--

You have misread Rick's equation. He said

p = f(o+d)

The only possible interpretation of "f" is that it is the perceptual
input function: in other words, he was saying

qi = o+d
p = f(qi)

How have I misread it? That's exactly what I pointed out that he was
saying! That's WHY I said that your argument about qd being possibly
different from the variable that influenced qi was different from his
argument!!!

I'll agree that Rick's point is different from mine, but it is not wrong. If
the perceptual signal is o+d, there is no way for the control system to
partition it into the part due to o and the part due to d; p is a single
signal. Rick is wrong in saying that d CANNOT (by any means) be deduced from
knowledge of the variables, parameters and functions in the control loop. It
can -- by an external observer who knows everything about the control
system, and in the case where Fd(d) = 1*d. In making this deduction,
however, the external observer is using knowledge and deductive mechanisms
that are not to be found in the control system itself.
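(A toy illustration of this, my construction rather than Bill's:) from qi and o alone, an observer can recover only the *effect* of the disturbance, Fd(qd); two different disturbance functions can map different values of qd to the same effect, so qd itself stays ambiguous without knowledge of Fd. The functions g1 and g2 below are arbitrary stand-ins invented for the example.

```python
# Two hypothetical disturbance functions, invented for this example:
def g1(d):
    return 2.0 * d            # effect proportional to d

def g2(d):
    return 2.0 * (5.0 - d)    # a different mapping from d to effect

o = 3.0                       # output's contribution to qi
qi = o + g1(1.0)              # environment actually applied g1 with d = 1.0

# An observer who knows qi and o recovers only the effect of the disturbance:
effect = qi - o               # equals g(d), whatever g and d were

# But d = 1.0 through g1 and d = 4.0 through g2 yield the same effect,
# so d itself cannot be deduced without knowing which g applies:
assert g1(1.0) == g2(4.0) == effect
print(f"recovered effect g(d) = {effect}, but d is ambiguous (1.0 or 4.0)")
```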

His (wrong) point is that when there is a summation,
information about the sum conveys no information about the components of
the sum. What's that got to do with the irrelevant qd? Why is it relevant
that qd = Fd(qd) if Fd() is the unity transform? You have a numerical
equality, not a conceptual equivalence.

Conveys information to whom, or what? If the only information you have is
the value of the sum -- if all that the control system can perceive is the
sum of o and d -- there is no indication that o and d even exist, much less
what their relative contributions to p were!

But we have been through this circle of arguments too many times already,
and it gets nowhere.

What I DID say was that IF you have a function that is single-valued and
so is its inverse, THEN the argument and the function value have a
one-to-one relationship and observation of one is informationally
equivalent to observation of the other. Whether Fd() has those properties
is of little interest, in general. If it does, then if you know Fd(qd), it
is informationally equivalent to qd, but so what when you are interested
in the behaviour of the control system?

I tried to explain the "so what" in an earlier post today. In studying
behavior, it is almost unheard of for Fd(qd) or qi to be noticed, much less
measured. What is normally called a "stimulus" is qd, not Fd(qd). The only
importance of this argument for PCT is that it explains why traditional
observations of the effects of "stimuli" on "responses" fail to produce high
correlations, and fail to reveal controlled quantities or their reference
levels. The main point is that control systems do not have to know either qd
or Fd in order to counteract the effects of disturbances, which means that
any connection of responses to apparent stimuli (qd) is coincidental and of
no general importance.

I did NOT say (and never have said, despite all your attempts over the
years to persuade people to the contrary) that something that is neither
an input to nor a property of a control system has to be known for the
control system to be analysed. Fd() is neither an input to, nor a property
of, the control system. qd is neither an input to, nor a property of,
the control system. Neither should be shown in any diagram of the control
system, except when it is necessary to emphasize that there can be all
sorts of different sources of influence on the controlled environmental
variable and hence on the controlled perceptual signal.

That, of course, is the point of showing them in the standard diagram. When
we study behavior, the first thing we see is the relationship of an
organism's actions to events in the environment. The standard diagram
reminds us of the true role of some of these environmental events.

I don't accept your use of "d" in your diagram because it is confusing to
people trying to understand the standard PCT diagram. And that is all I care
about -- what you do in developing your Information-theoretic analysis of
control systems is your own business, but PCT is mine.

Best,

Bill P.

[Martin Taylor 970422 10:45]

Bill Powers (970421.1415 MST)]

I imagine you are a great fan of Gilbert and Sullivan. I also find a lot of
humour in their nonsense. But not in the middle of what I'd prefer to be
a serious scientific discussion.

Martin Taylor 970421 13:00--
>I find this very hard to reconcile with Rick Marken (970418.0900 PDT), in
>which "disturbance" seems very clearly to mean the effect of the disturbing
>influence on the controlled environmental variable in the absence of the
>control system output:

[Citing Rick]
>>The perceptual function transforms sensory input (s) into a perceptual
>>signal (p): p = f(s). Assuming the simplest case, where the sensory
>>variable is the additive result of disturbance and output (as in a
>>tracking task), p = f(o+d).

>I think you'd better find some other way to exculpate Rick:-)
>If he wants to be, that is.

You missed the fact that Rick is using the simple case where the disturbance
function and the environmental feedback function are both unity multipliers,
as well as using a unity multiplier as the input function Fi. The general
case would be

p = Fi[f(o) + g(d)]

When you use unity multipliers, the contribution of the disturbance d to the
state of the controlled quantity is simply the magnitude of d. However, in
the general case the contribution is g(d), where g is the disturbance
function. In that case, the effect of d is not the same as the magnitude of
d, and it is impossible to deduce the state of d from knowledge of the
variables and functions in the control loop. You must also know the form of
g, which is not deducible from anything inside the control system. In the
general case, there is no information in the control system about d. Rick is
still right.

Rick is right because he doesn't mean what he says, is that it? He SAYS that
the reason is that p = f (o+d)--that p contains no information about d because
p is a function of the sum of the two variables. But he doesn't mean that.
Instead he means that p = f(o) + g(d), where d = g(d) but g() is not
always the unity function. Very funny, hah hah! Quoting Rick again, confirming
that this was what he meant:

+Rick Marken (970421.1100)

+How do you get that? I said:

···

+
+>...the sensory variable is the additive result of disturbance and
+> output (as in a tracking task), p = f(o + d).
+
+Clearly, the word "disturbance" refers to d, which is a variable, not a
+"disturbing influence" on the controlled variable. The variable
+d is exactly equivalent to qd, as Bill says.

As Bill does not say.

Those two variables (o and d) are what you (Bill) called Fe(qo) and Fd(qd)
in your diagram (although you give Fd(qd) the label g(d) this time). You
tell me that this is OK because Rick takes g(d) to be unity. And then that
he is right because if g() were not the unity function, then you couldn't
deduce d without knowing g(). I ought to be rolling in the aisles by this
time, shouldn't I? Sorry not to be.

And I am right, and you are right, and all is right as right can be...
(with apologies for misquoting Gilbert:-).

Sir Arthur Powers, meet Sir Richard Gilbert.

I like fun as much as the next person, but could we be serious just a
little, on this issue? You sent a fine posting on it just a couple of
days ago.

It's hard to keep straight your humorous and your serious postings,
but I shouldn't complain, because I'm sure you find the same problem with
mine.
------------------------

It's quite a separate issue, qd being irrelevant to the operation of the
control loop (Fd(qd) being the variable that affects qi), but did you notice
my observation that if x = f(y) and f() and f^-1() are single-valued, then
knowledge of y gives knowledge of x and vice-versa? And that therefore x
carries all the information possible about fluctuations in y, and vice-
versa? And that if Fd() and Fd^-1() are single valued, Fd(qd) carries
all the information there is about qd?

It's quite true that observing Fd(qd) gives you no idea what the value of
qd might be, if you don't know Fd(). But what you lack information
about is Fd(). In an information-theory sense, if Fd() and its inverse
are single-valued, qd and Fd(qd) have mutual information equal to the
uncertainty of either. And in any case, as I repeatedly have pointed out,
and as you keep pointing out to "refute" me:-), the value of qd is quite
irrelevant to the operation of the control loop.

And as for Rick's oft-repeated point that if p is a function of a sum of
two variables, then p conveys information about neither, did you notice
Bruce Abbott's demonstration to the contrary, with four variables?

Or (from earlier today: Martin Taylor 970422 10:10), the number of bits
transmissible by a waveform is B = W*T*log2(1 + P/N), where the waveform is
the sum of two variables, a noise of power N and a "signal" of power P
and bandwidth W. Of course this number depends on the signal and the noise
being uncorrelated, whereas Fd(qd) and Fe(qo) are negatively correlated,
but the principle applies that much information about one component can
be conveyed by the sum.
---------------------

To follow up on that earlier message, it might be interesting to look at
the relation between correlation and transmissible bits.

From Richard Kennaway (970421.1610 BST):

c^2 = 1/(1 + 1/SN) where c is the correlation between two sets of observations.

c could be the correlation between two waveforms, taking successive
statistically independent samples, of which there are 2W per second
for a waveform of bandwidth W. SN is the ratio of the signal and noise
variances, or powers when we consider waveforms. SN = P/N in the
formula for B -- B = W*T*log2(1 + P/N), or W*log2(1 + P/N) bits per
second.

There are two statistically independent samples per second per unit of
bandwidth, so the number of samples per second is 2*W. The bit rate per
sample is R = 0.5*log2(1 + P/N).

Now consider correlation.

If c^2 = 1/(1 + N/P)
c^2 = (P/N)/(1 + P/N)
1 - c^2 = 1/(1 + P/N) (this is the coefficient of alienation, I believe, so
     call it A.)

log2(A) = - log2(1 + P/N)

The number of bits per sample is then -0.5*log2(A). Two signals having a
correlation c therefore convey -W*log2(1 - c^2) bits per second about each other.

Notice that nowhere has anything been said about a causal relation between
the two signals. There's a correlational relation, and that's all.
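(A quick numerical cross-check, mine rather than part of the original post:) the bit rate can be computed two ways -- directly from the correlation via -W*log2(1 - c^2), or by first converting c to the equivalent signal-to-noise ratio P/N = c^2/(1 - c^2) and applying W*log2(1 + P/N). The two routes should agree exactly.

```python
import math

def bits_per_second(c, W):
    """Transmissible bits/sec between two signals of correlation c, bandwidth W Hz."""
    if abs(c) >= 1.0:
        return float("inf")
    return -W * math.log2(1.0 - c * c)

def bits_per_second_via_snr(c, W):
    """Same quantity via the SNR route: c^2 = 1/(1 + N/P) => P/N = c^2/(1 - c^2)."""
    snr = c * c / (1.0 - c * c)
    return W * math.log2(1.0 + snr)

for c in (0.1, 0.5, 0.9):
    assert math.isclose(bits_per_second(c, 5.0), bits_per_second_via_snr(c, 5.0))
    print(f"c = {c}: {bits_per_second(c, 5.0):.3f} bits/sec at W = 5 Hz")
```

The agreement is algebraic, not numerical coincidence: W*log2(1 + c^2/(1 - c^2)) simplifies to -W*log2(1 - c^2).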

Numerical examples: at a bandwidth of 5 Hz there are 10 samples per second,
so R = -5*log2(1 - c^2) bits per second at a correlation of c between the
signals.

    c      R      Time to get 1 bit (seconds)
   0.1   0.07     ~14
   0.2   0.29     3.4
   0.3   0.68     1.5
   0.4   1.26     0.80
   0.5   2.08     0.48
For higher correlations than this, it becomes a bit absurd to compute the
time to get 1 bit of information, since each sample already conveys a fair
fraction of a bit. So we shift to time to get 10 bits.
   0.6   3.22     3.1
   0.7   4.86     2.1
   0.8   7.37     1.4
                  Time to get 100 bits
   0.9   11.98    8.3
   0.95  16.79    6.0
   1.0   infinite 0

As you can see, even when correlations are rather low, it still doesn't
take too long to gather a bit of information about one signal from
observation of the other. How do humans do? In the experiment analyzed
in the paper mentioned in the previous message, subjects were asked
to push one of two buttons that corresponded to two lights, one of which
flashed. The rate for this task was about 140 bits/sec, varying across
subjects. In other experiments we did on simultaneous discrimination of
visual and auditory signals we estimated rates from 3 to 25 bits/second.
It depends on the situation, and presumably on what the subjects are trying
to do.

I write the above with some trepidation, knowing that it usually takes a week
or two to reinstitute our agreement about what happens in a psychophysical
study, each time the question arises.

I guess it's time for the next chorus. Does the music or the text come first?

Martin

[From Bill Powers (970422.2107 MST)]

Martin Taylor 970422 10:45 --

I imagine you are a great fan of Gilbert and Sullivan. I also find a lot
of humour in their nonsense. But not in the middle of what I'd prefer to
be a serious scientific discussion.

I think that's an appropriate note on which to end that particular discussion.

Best,

Bill P.

[From Bruce Gregory (970423.1045 EST)]

Bill Powers (970422.1622 MST)]

But we have been through this circle of arguments too many times already,
and it gets nowhere.

Hear! Hear!

I don't accept your use of "d" in your diagram because it is confusing to
people trying to understand the standard PCT diagram. And that is all I care
about -- what you do in developing your Information-theoretic analysis of
control systems is your own business, but PCT is mine.

Amen!

Bruce Gregory