paradox of control?

[From Bruce Nevin (980303.1300)]

(Martin Taylor 980228 17:50)--

One might indeed
begin to believe in magic if it were _not_ possible to
extract the disturbance waveform from the waveform of the
perceptual signal, since the perceptual signal is the _only_
access the control system has to anything about the external
world. But luckily, the system proves to be physical and
non-paradoxical, after all.

Martin, what is the paradox? I challenge you to state it.

If you don't invoke "information about the waveform of the disturbing
signal/influence being in the perceptual input" how is the control system
paradoxical?

  Bruce Nevin

[Martin Taylor 980305 21:15]

Bruce Nevin (980303.1300)]

(Martin Taylor 980228 17:50)--

One might indeed
begin to believe in magic if it were _not_ possible to
extract the disturbance waveform from the waveform of the
perceptual signal, since the perceptual signal is the _only_
access the control system has to anything about the external
world. But luckily, the system proves to be physical and
non-paradoxical, after all.

Martin, what is the paradox? I challenge you to state it.

You are asking the wrong person. My claim was that there is _no_
paradox.

If you don't invoke "information about the waveform of the disturbing
signal/influence being in the perceptual input" how is the control system
paradoxical?

It is not paradoxical, whether you do or whether you don't.

Why do you pose the question to me? To me the operation of a control
system is straightforward, easily described, and easily understood. I have
the impression that to some people it is complex and paradoxical. Ask
them.

Martin

[From Bruce Nevin (980306.0858)]

Martin Taylor 980305 21:15--

Martin, what is the paradox? I challenge you to state it.

You are asking the wrong person. My claim was that there is _no_
paradox.

What *would be* paradoxical if you were unable to "extract the disturbance
waveform from the waveform of the perceptual signal"?

I do not believe there would be any paradox even then, and that this
derived value buys you nothing. What do you believe it buys you?

  Bruce Nevin

  "Je n'avais besoin de cette hypothese."
  -- LaPlace

[Martin Taylor 980306 10:45]

Bruce Nevin (980306.0858)

Now I understand your question, and why you addressed it to me.

What *would be* paradoxical if you were unable to "extract the disturbance
waveform from the waveform of the perceptual signal"?

The assertion is that the control system is physical, and that there are
only two inputs to it, and two outputs from it (note that this excludes the
insertion of random noise, because that would count as another input).

The control loop's physics are described by a set of functions of one or
two arguments. This means that the output of each function depends _only_
on the (current and historical) value(s) of its input(s). The inputs are
all scalar waveforms (which means each has only one numeric value at
each moment in time).

Symbols: The values of the different signals need to be labelled if we
are to talk about them. Most have conventional uses, but there are two
for which there is a problem. I will use "d" for the external input to
the CCEV, whose output is qi, and "x" for the internal input to the CCEV
(these are the influences from the disturbance and from the output
respectively, and the CCEV is where those influences combine, just as
the comparator is where the perceptual signal and the reference signal
combine).

The functions are connected in a loop. They are, starting from the output
signal as a function of the error signal (so as to be clear about the
notation):

o = Fo(e)
x = Fe(o)
qi = x + d (the CCEV)
p = Fp(qi)
e = r - p (the comparator)

Which completes the loop. If the system is physical, each of these
expressions has a symbol on the left side whose current value can be
determined from the current and historical values of the symbols on the
right side. One can string these together: e = r - Fp(qi) gives the error,
if the reference value (including its history) and the value of the
CCEV (including its history) are known. (Actually, in this specific
case, the history is not needed, because the subtraction operator is
not a time-extended function).

Let's string a few more:

e = r - Fp(x + d) = r - Fp(Fe(o) + d)

If the system is physical, then e is undetermined unless r and d are known.

Let's turn that equation around a bit.

r - e = Fp(Fe(o) + d)

Fp^-1(r - e) = Fe(o) + d

d = Fp^-1(r - e) - Fe(o)

Now the question arises about the paradox. If Fp^-1() is single-valued,
then d is computable from e, r, and o. If Fp^-1() is multiple-valued,
d has a discrete set of possible values.

How does an inverse function of one variable have multiple values? It happens
if the direct function is non-monotonic. In this case, it means that if
the perceptual signal first increases and then decreases as qi increases,
Fp^-1() becomes two-valued over at least part of its range. This is a
range of qi where control is lost, because the feedback becomes positive
(Bill provides a nice example of this, with a cubic perceptual function).

So long as there is control, we don't have to worry about the possibility
of Fp^-1() being multiple-valued, and d can be recovered. We can treat
Fp as being a set of monotonically rising segments, since the positive
feedback that occurs in the declining segments means that qi never stops
in those segments, but flips immediately to the next segment (as Bill's
demo illustrates). Fp^-1() is still multiple-valued, because for a given
value of p we could be on one or the other of the rising segments, but the
historical values of the variables include any transients that occur during
the positive feedback "flip" between segments, so the appropriate segment
for computing d is determinable.

Now, I'm sure you noticed that I haven't closed the loop yet. We need
one more step. Go back to:

e = r - Fp(Fe(o) + d) and backtrack to see what "o" is.

e = r - Fp(Fe(Fo(e)) + d)

Now we do the same as before, to get

d = Fp^-1(r - e) - Fe(Fo(e))

The same argument applies, even when the loop is completed.

But, I hear you say, This requires knowledge of "e", an internal signal in
the loop. "d" is determined only if we know r and e, and we can't know e
by direct measurement. That's OK, too, because e = r - p, so we can write:

d = Fp^-1(p) - Fe(Fo(r-p))

So long as there is control and the reference signal is constant, d can be
extracted from the waveform of the perceptual signal. It would be paradoxical
if there were control and d could not be extracted, because that would
say that at least one of these supposed functions was a non-physical
device (at least until you get down to the quantum level, at which the
quantum uncertainty would take the role of inserted noise from another
external input).
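
A minimal discrete-time sketch of this recovery, written in Python purely for
illustration (the particular choices are assumptions, not part of the argument:
Fp and Fe are taken as identity functions, Fo as an integrating output function
with gain k, and r is held constant). The second loop plays the analyst's role,
using only the recorded p waveform, r, and knowledge of the loop functions to
reconstruct d via d = Fp^-1(p) - Fe(Fo(r - p)):

import math

dt, k, r = 0.01, 50.0, 1.0                # step size, output gain, constant reference
steps = 2000
o = 0.0
p_trace, d_trace = [], []

for t in range(steps):                    # run the control loop
    d = math.sin(2 * math.pi * t * dt)    # an arbitrary disturbance waveform
    qi = o + d                            # CCEV: qi = Fe(o) + d, with Fe = identity
    p = qi                                # Fp = identity
    e = r - p                             # comparator
    o += k * e * dt                       # Fo: integrating output function
    p_trace.append(p)
    d_trace.append(d)

# Analyst's reconstruction from p, r, and the loop functions alone
o_hat, worst = 0.0, 0.0
for t in range(steps):
    d_hat = p_trace[t] - o_hat            # d = Fp^-1(p) - Fe(o), both identities here
    worst = max(worst, abs(d_hat - d_trace[t]))
    o_hat += k * (r - p_trace[t]) * dt    # replay Fo on the inferred error e = r - p
print("max |d_hat - d| =", worst)         # ~0 in this noise-free simulation

The reconstruction is exact here only because the simulation is noise-free and
the loop functions are known exactly; the point is the in-principle
recoverability, not a measurement procedure.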

I do not believe there would be any paradox even then, and that this
derived value buys you nothing. What do you believe it buys you?

The _ability to derive_ the value buys you control. To actually derive
the value buys you nothing.

Martin

[From Bruce Nevin (980306.1210 EST)]

Martin Taylor 980306 10:45--

The _ability to derive_ the value buys you control. To actually
derive the value buys you nothing.

Isn't that backward? It's because of control that the analyst can derive
the value x and show its correspondence with d. The ability of an external
analyst to derive the value x has no effect on the existence or possibility
or actuality of control.

I have not worked through the equations. I will try when I get some time.
But intuitively it does not make sense to me.

Here's a value x that an analyst can derive from internal properties of the
control loop.

Here's a value d that the analyst can derive by factoring out the control
loop output o from the observed state of the controlled variable cv. (Not
generally by measuring all effects on cv other than o.)

o opposes d.

x is a kind of complement of o within the control loop during its control
of cv in resistance to d.

I am not surprised that x corresponds to d. It seems intuitively obvious
that it should. The mathematics of the derivation bears this out: x
corresponds to d, even though there is no transmission of information about
d into the control system, and no way that the control system could use
such information even if there were.

  Bruce Nevin

[Martin Taylor 980307 0:25]

Bruce Nevin (980306.1210 EST)]

Martin Taylor 980306 10:45--

The _ability to derive_ the value buys you control. To actually
derive the value buys you nothing.

Isn't that backward? It's because of control that the analyst can derive
the value x and show its correspondence with d.

No, that's not so. It's because of control that x corresponds to d. That
much is true. But x is not the critical point. d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

The point is that the system is _physical_, meaning that when the independent
variables are known, the dependent ones can be computed. There's one
independent variable from the external world, and that's the one at issue.
So long as the loop is complete, _all_ the variables in it can be derived
from a knowledge of the functions in the loop and of the two independent
variables (the reference and the disturbance signals). If the loop is
broken somewhere, the values beyond the break may not depend on one or
other of the inputs, depending on where the break is. For example, if
Fe(o) = 0, the output doesn't affect qi, and p = Fp(d). But the output
still is a function of d and r. It is o = Fo(r-Fp(d)).

I guess this last is a way to see that the disturbance effect goes around
the loop just like any other effect. Imagine Fe() being not quite a complete
break, so that the output has a tiny effect on qi, and then gradually
imagine the strength of the Fe() link increasing until the output effect
on qi is large and the loop gain high (negative, of course). You go
continuously from no control to full control, but you haven't changed the
way the disturbance signal affects either the perceptual signal or the
output signal. All you have done is to allow any signal in the loop to
affect itself around the loop.
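
A small static sketch of that thought experiment, in Python (assumptions for
illustration only: unit perceptual function, proportional output gain G, and a
feedback weight w standing in for the strength of the Fe() link, so qi = w*o + d):

G, r, d = 100.0, 2.0, -3.0
for w in (0.0, 0.01, 0.1, 0.5, 1.0):
    # With o = G*(r - p) and p = qi = w*o + d, the static algebra gives
    # p = (d + w*G*r) / (1 + w*G)
    p = (d + w * G * r) / (1 + w * G)
    o = G * (r - p)
    print("w =", w, " p =", round(p, 3), " o =", round(o, 2))
# At w = 0 the loop is broken and p simply equals d; as w grows toward 1,
# p is drawn toward r (no control to full control) without any change in
# how d enters the loop.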

There is no place for magic here. Just a straightforward instance of control.

Martin

[From Bill Powers (980307.0649 MST)]

Martin Taylor 980307 0:25--

No, that's not so. It's because of control that x corresponds to d. That
much is true. But x is not the critical point. d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

Martin, that is completely false. The variable d is a physical variable
separate from qi, linked to qi additively through a function Fd. What you
can compute given p and r (or more directly, qi and Fe(o)) is DS, the
disturbing signal under your definitions. If you know the disturbing
signal, however, you can't in general compute d, because there are normally
many disturbances acting through many paths to produce DS. You would have
to know the form of every link between every disturbance and qi.

I am going to object every time you use the symbol d when you should say DS.

If the loop is
broken somewhere, the values beyond the break may not depend on one or
other of the inputs, depending on where the break is. For example, if
Fe(o) = 0, the output doesn't affect qi, and p = Fp(d). But the output
still is a function of d and r. It is o = Fo(r-Fp(d)).

In each case d should be changed to DS if you want your statements to be
correct.

Best,

Bill P.

[From Bruce Nevin (980308.1046 EST)]

Martin Taylor 980307 0:25--

Bruce Nevin (980306.1210 EST)]

Martin Taylor 980306 10:45--

The _ability to derive_ the value buys you control. To actually
derive the value buys you nothing.

Isn't that backward? It's because of control that the analyst can derive
the value x and show its correspondence with d.

No, that's not so. It's because of control that x corresponds to d. That
much is true. But x is not the critical point. d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

This is a misstatement. The externally derived variable d cannot be derived
from the loop-internal variables p and r. The error variable e is derived
from perceptual input p and reference input r.

When you say x is not the critical point perhaps you have forgotten what x
is. It is your "waveform of the disturbance influence" which you can derive
from loop-internal factors, thereby demonstrating to you that information
about the disturbance has been transmitted from the environment into the
control system. For reasons that you have not yet explained, if this were
not so then the control system would (you believe) be paradoxical and/or
non-physical.

The external variable d can be derived from the observed state of the
controlled variable cv less the observed output o of the control loop into
the environment. It is an external variable, that is, it can be derived by
an analyst from things observed outside the control system.

The internal variable x is the one that you can derive from p, r, and "the
loop functions". It requires privileged access by the analyst to these
factors inside the control system.

I said "let's call it x" because DS, "disturbance signal," "waveform of the
disturbing influence," etc. are tendentious, question-begging terms that
lend themselves to confusion. Substituting x in place of "the waveform of
the disturbance signal" here is what you said about its derivation (980228
17:50):

an analyst who knows the loop functions and the reference
signal can indeed recover ... [ x ]
to within the precision of control, if allowed access to the
perceptual signal.

The point of your calling x "the disturbance signal" was that x is derived
without reference to the external variableS summed up as d or Fd(d). I say
variableS because, as Bill remarked (980301.0940 MST)

"Fd(d)" is itself shorthand for the true general
case, in which multiple disturbances contribute to the state of qi via
multiple paths, each embodying its own disturbance function. Just think of
steering the car when it is simultaneously affected by tilts in the
roadbed, winds, offcenter loads, soft tires, ruts, and dragging brakes.

Today you say you need to know d and r plus the loop functions in order to
compute any variable in the loop.

The point is that the system is _physical_, meaning that when the independent
variables are known, the dependent ones can be computed. There's one
independent variable from the external world, and that's the one at issue.
So long as the loop is complete, _all_ the variables in it can be derived
from a knowledge of the functions in the loop and of the two independent
variables (the reference and the disturbance signals).

If the point is to derive x and show that x co-varies with d, then this is
beside the point.

I guess this last is a way to see that the disturbance effect goes around
the loop just like any other effect.

"The disturbance effect." Here's another synonym for "information about the
disturbance," I guess. Information about the disturbance does not get
transmitted around the loop. In general, no "effects" get transmitted
around the loop. Only instantaneous values get transmitted. The effect of o
countering d is constructed by continuous feedback reducing the difference
between p and r moment to moment as instantaneous values.

Take the set of values internal to the control system, p, r, and loop
functions. There is a way I suppose of algebraically rearranging values in
formulae, perhaps adding constants, so that they cancel to a zero sum. From
that formula subtract e. What remains is a value that co-varies with d. It
is a mathematical image, like an inverse. Not because information about d
is transmitted to within the control system. Because continuous feedback
reducing the difference between p and r moment to moment as instantaneous
values inside the control system constructs the effect of o countering d
outside the control system. The mirroring of the environment is a function
of control. The appearance that there is information mirrored is a product
of an external analyst looking at the control system and the environment
and observing the correspondence. The analyst asks how this comes about.

The old answer is to say "The disturbance is an objective fact in the
environment. The senses have transmitted information about the disturbance
from the environment into the organism."

The new answer is "The organism is controlling its perceptual input and
bringing it toward a preferred state. The appearance that there is a
disturbance in the environment is merely a projection by me within my
perceptual universe based upon my perceptions of the organism's behavioral
outputs o and what I think is the controlled variable, cv, as I perceive
it. It enables me to guess at what r is, the reference value within the
organism. The organism does not live in the environment that I perceive,
that is, it does not live in my perceptual universe, it lives in its own
perceptual universe. What I perceive as a disturbance is in my perceptual
universe. The organism perceives its controlled perception, which its
actions affect to bring it into a preferred state. If any information
enters the organism, it is about the controlled variable cv, but that is
only the instantaneous, moment-to-moment state of cv. No waveform or
other construct of information has any reality or any utility except in my
perceptual universe as observer and analyst."

There is no place for magic here. Just a straightforward instance of control.

There is no place for information here. Just a straightforward instance of
control.

  Bruce Nevin

[Martin Taylor 980307 14:13]

Bruce Nevin (980308.1046 EST)

Martin Taylor 980307 0:25--

Bruce Nevin (980306.1210 EST)]

Martin Taylor 980306 10:45--

The _ability to derive_ the value buys you control. To actually
derive the value buys you nothing.

Isn't that backward? It's because of control that the analyst can derive
the value x and show its correspondence with d.

No, that's not so. It's because of control that x corresponds to d. That
much is true. But x is not the critical point. d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

This is a misstatement. The externally derived variable d cannot be derived
from the loop-internal variables p and r. The error variable e is derived
from perceptual input p and reference input r.

Huh? p and r are internal to the loop?

Huh? p = d/(1+G) + Gr/(1+G) and yet you can't derive d from p and r?

Let's try:

(1+G)p = d + Gr

d = (1+G)p - Gr

That looks to me like a derivation of d from p and r. Doesn't it look that
way to you?
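
A quick numerical check of that algebra (Python, under the same static
assumptions as the equations above: unit input and feedback functions, purely
proportional output gain G):

G, r, d = 100.0, 2.0, -3.7
p = (d + G * r) / (1 + G)       # p = d/(1+G) + Gr/(1+G)
print((1 + G) * p - G * r)      # prints -3.7 (up to rounding), recovering d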

I guess your picture of a control loop is way, waaaay, different from mine.
Here's mine, in its simplest form, where the only non-unitary transfer
function is the output function. What does yours look like? I know that
in your picture, p and r are internal to the loop, but I don't know
anything else about your picture.

                      ^ |
perceptual signal (p) | | reference signal (r)
                      | v
                      |---->------- - ---error signal (e)-->|
                      |  comparator                         |
                  (p) ^                                     | output function (G)
                      |                                     |
         perceptual input function                          V
                      |                   output signal (o) |
  input quantity (qi) ^                                     |
                      |                                     |
                 CCEV + --<--output influence (x)---Fe------|
                      |                                     |
disturbance influence ^ (d, or, to please Bill, Fd(d))      |
                      |                                     V side-effects

When you say x is not the critical point perhaps you have forgotten what x
is. It is your "waveform of the disturbance influence"

Sorry, I thought you were using my notation, since that's the only place
"x" had hitherto been introduced into the discussion. I had used "x" to
represent the influence of the output on the CCEV, as in x = Fe(o). I didn't
realize you intended it to mean something else.

Perhaps before we continue this, you might reciprocate by illustrating what
you think a control loop looks like? It's obviously quite different from
the way I see one.

Martin

[Martin Taylor 980307 14:37]

Bill Powers (980307.0649 MST)]

I am going to object every time you use the symbol d when you should say DS.

I don't mind that if you object equally (and retroactively) every time
Rick uses it in the same way. I'll use whatever symbology makes you happy.

But I think it only reasonable that when I reply to a message in which
the formula p = o+d is used, it is easier to understand the answer if the
terminology stays the same in the reply as in the original.

+Bill Powers (980307.0603 MST)

+When Rick
+says p = o + d, he specifically means that Fd = 1. When you say p = o + d,
+you do not mean that.

I thought I did. I said I did, too.

What do I mean, then? I'd love to know.

Martin

[Martin Taylor 980307 14:45]

Bill Powers (980307.0649 MST)]

Martin Taylor 980307 0:25--

No, that's not so. It's because of control that x corresponds to d. That
much is true. But x is not the critical point. d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

Martin, that is completely false. The variable d is a physical variable
separate from qi, linked to qi additively through a function Fd.

I wish you wouldn't try to win arguments by rhetorical tricks. I was _very_
careful in my definitions.

+(Martin Taylor 980306 10:45)
+Symbols: The values of the different signals need to be labelled if we
+are to talk about them. Most have conventional uses, but there are two
+for which there is a problem. I will use "d" for the external input to
+the CCEV, whose output is qi, and "x" for the internal input to the CCEV
+(these are the influences from the disturbance and from the output
+respectively, and the CCEV is where those influences combine, just as
+the comparator is where the perceptual signal and the reference signal
+combine).

Is it completely false? I think it is completely true.

Martin

[From Bruce Nevin (980307.0610 EST)]

Martin Taylor 980307 14:13 --

Perhaps before we continue this, you might reciprocate by illustrating what
you think a control loop looks like? It's obviously quite different from
the way I see one.

Well, I can kick in some branching, but that's not the main difference.

                                  \|/
                     \|/  reference input function
                      ^ |
perceptual signal (p) | | reference signal (r)
                      | v
                      |---->------- - ---error signal (e)-->|
                      |  comparator                         |
                  (p) ^                                     | output function (G)
                      |                                     |
         perceptual input function                          V
                      |                                     |
  input quantity (qi) ^                                     |
                      |                                     |
                     /|\                                   /|\
                   Sensors                              Effectors
..........................................................................
                      |                                     |
                      |               behavioral output (o) |
                      |                                     |
                   cv + --<--output influence (x)---Fe------|
                      |                                     |
disturbance influence ^ (d or Fd(d))                        |
                      |                                     V side-effects
                     /|\
          Many sources of disturbance

The main difference is the dotted line.

Everything below the dotted line is in the environment.

Everything above the dotted line is internal to the loop. I see that I
phrased that in a very misleading way. Perhaps that is what threw you off.
What I mean is, in the portion of the loop that is internal to the control
system.

Values inside the control system (qi, p, r, e) can be called signals.
Values in the environment (o, d, cv) cannot be called signals unless you
are talking about a simulated environment or an artificial environment as
in a tracking task, in which case they are signals in the simulation or in
the computer generating the task, not in a general definition of the
control loop.

cv is the measured state of an aspect of the environment as perceived by
the observer. It is presumed to have a direct relation to qi inside the
control system.

o is the measured activity of the control system as an object in the
observer's environment, or that aspect of it that is judged to affect cv
intentionally. (Intentionally: if the CS scratches her arm with her left
hand, that doesn't count; if that action interferes with mouse movement
with the right hand the interference fraction of it must be accounted a
source of disturbance. Irrelevant outputs that are not side effects of
control of the particular p under investigation are not shown in the diagram.)

In the general case, you can measure o and what you think (in your
perceptual universe) is the cv in the environment, CEV. By factoring o out
of CEV you can approximate d. In a controlled lab setting you might have
some confidence in knowing all the main contributors to d, enough to have
some confidence in metering d directly. The value d is an artifact of
control. If what we observers think is the CEV is in fact not controlled,
then our influence on the state of that aspect of the environment is no
disturbance. If r changes during the course of our applying a given
influence, then the portion of our influence that amounts to d also changes
-- with no change in our contribution we are introducing more or less of a
disturbance. This is why d can only be determined by measuring o and
factoring it out of the measured value cv. It is control that brings d into
being as though it were an inverse projection of r into the environment --
remembering that d exists only in the perceptual universe of the external
observer, not in the perceptual universe of the observed control system, so
it is the observer's projection. This derived term d is the observer's
evidence of what r is inside the control system. There can be no such thing
as information about d being transmitted from the environment into the
control system. The concept is nonsense.

The value that you called "the waveform of the disturbance signal" or "the
waveform of the disturbing influence" derived from p, r, and loop functions
is not shown in the diagram. This is an appropriate omission. It is not an
attribute of the control system or of control.

Perhaps now my objection to your deriving d from p and r makes more sense
to you:

Martin Taylor 980307 0:25--

d can be derived from p
and r whether the loop gain is high or low (whether control is effective
or not). If it could not, there would be no possibility of control.

This is a misstatement. The externally derived variable d cannot be derived
from the [...] variables p and r [inside the CS]. The error variable e is

derived

from perceptual input p and reference input r.

The diagram of course leaves out levels of the hierarchy. These may be
thought of as especially complex input functions and output functions.

  Bruce Nevin

[From Bill Powers (980307.1759 MST)]

Bruce Nevin (980308.1046 EST)--

[YOU TO MARTIN]

The external variable d can be derived from the observed state of the
controlled variable cv less the observed output o of the control loop into
the environment. It is an external variable, that is, it can be derived by
an analyst from things observed outside the control system.

I appreciate your taking my side here. To derive the external variable d,
the observer must know the nature of Fd, the physical connection relating d
to what Martin calls the "disturbance signal." The function Fd can be
determined only by examining the environment -- not the control system. Is
that what you're saying here?

I think it's time to reassess our basic control-system diagram. The problem
that is creating all our difficulties is this variable we're calling qi.
That variable is really a fiction, one we use because we can't observe
anyone else's perceptual signals. As long as people keep their control
models simple, as we do in the tracking experiments, there is no problem.
But as soon as we start looking too hard at qi, and treating it as
something that really exists in the environment, confusion appears.

The real situation (as I see it) is the one I portrayed in the Science
article, where the environment is shown as a collection of variables
designated as "v's". The output of the control system affects some of these
v's, and others of these v's affect the input to the control system's input
function. Effects of output on input are relayed via interactions among
the v's.

The input function creates a perceptual signal p which is a function of
some subset of the v's. Since it is the perceptual signal that is
controlled, that is what we should talk about as the controlled variable.
It doesn't exist in the environment; the environment of this system
consists only of v's.

So where, starting with this model, does the idea of the input quantity
come from, an input quantity that is controlled instead of a collection of
v's? It comes from the observer. The observer receives inputs from the same
or a similar collection of v's, and the observer's perceptual input
function combines them to generate a perceptual signal. It is this
perceptual signal that the observer calls "qi" and projects, conceptually,
into the environment to take the place of the collection of v's.

Example:

The control system receives two signals, one indicating the position of the
target and the other indicating the position of the cursor. The perceptual
input function generates a perceptual signal that represents "target minus
cursor" position along some axis. This "distance" signal is what is
controlled.

When the observer looks at the same display screen, the observer also
receives two signals, representing the positions of target and cursor. The
observer's perceptual input function constructs a perceptual signal p that
also corresponds to a "distance." The observer, like the control system,
thus perceives not just two position signals, but a single "distance" signal.

So the observer and the control system would both agree: there is something
called "distance between cursor and target" out there in the world, and
both would agree that the control system is controlling it. They will even
agree when perturbations of "distance" occur.

But there is no "distance," and that's where the current arguments are
taking us off the track. There are only two perceptual signals at the same
level, the one in the control system and the one in the observer that the
observer refers to as qi, or the "CEV" or "CCEV." We deal with the
environment from the standpoint of a system that receives signals
representing spatial locations: position of the hand and mouse, position of
cursor, position of target. And we combine the positions of the cursor and
target to define a controlled perception that is the distance between them.

This is all about the relationship of directly perceived reality to the
constructs we call "physical reality." I have by no means worked out all
the details of this relationship. I don't think I could model any complex
control process in a way that would completely avoid attributing external
reality to some of my own perceptions. This is something that will take a
long time to work out, and in the process I think we will change not only
psychology but physics.

I began this commentary with a strong sense of insight; most of it has
evaporated, but there is still at least a sense of having identified part
of a problem.

Best,

Bill P.

[From Bill Powers (980308.0012 MST)]

Martin Taylor 980307 14:13 --

Huh? p = d/(1+G) + Gr/(1+G) and yet you can't derive d from p and r?

Let's try:

(1+G)p = d + Gr

d = (1+G)p - Gr

That looks to me like a derivation of d from p and r. Doesn't it look that
way to you?

No, Martin, you're still using d instead of DS. DS is Fd(d), so the
equation you want to start with is

p = Fd(d)/(1+G) + Gr/(1+G), or

p = DS/(1+G) + Gr/(1+G)

In that form, it's clear that you can't derive d from loop variables and
functions alone. You need to know the form of Fd, which is a property of
the environment and can change over time.
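
A sketch of that distinction, in Python (the static loop form and the specific
numbers are assumptions for illustration; the Fd = 5 versus Fd = 1 case mirrors
the example discussed later in this thread):

G, r = 100.0, 2.0

def steady_p(DS):
    # p = DS/(1+G) + G*r/(1+G) for a static loop with unit input/feedback functions
    return DS / (1 + G) + G * r / (1 + G)

# Two physically different disturbances through two different links...
DS_a = 5.0 * (-20.0)     # Fd multiplies by 5, d = -20
DS_b = 1.0 * (-100.0)    # Fd multiplies by 1, d = -100

p_a, p_b = steady_p(DS_a), steady_p(DS_b)
print(p_a == p_b)                    # True: the loop sees the same DS either way
print((1 + G) * p_a - G * r)         # -100.0: what p and r yield is DS, not d

From p and r the analyst can back out DS = -100 in both cases, but cannot tell
whether that was d = -20 through a gain-of-5 link or d = -100 through a unit
link without knowing Fd.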

I think you're also giving the analyst some abilities that a real
experimenter doesn't have. A real experimenter may have a model of the
organism that leads to the solution you give, but the real experimenter
can't directly observe the variables inside the behaving system. The
variables p, r, and e, as well as the functions Fi and Fo (and the
comparator), are hypothetical variables and functions, not directly
observed. If you try to base your arguments on _observables alone_, you
must start with the observed values of qi and qo, and the observed form of
Fe. As I believe we agree now, DS can be derived from them: DS = qi -
Fe(qo). We can't derive DS from p and r, because p and r must themselves be
inferred from observations of qi and qo, as well as the behavior of these
variables when known disturbances are applied through known disturbance
functions to qi. For example, r is determined from the long-term average
value of qi, or else from the more complex treatment I used in the paper on
experimental measurement of purposes in Wayne Hershberger's book. We
usually assume Fi to be some simple function of observables, and Fo is
determined by fitting different models to actual behavior and picking the
Fo that gives the best fit. In no case can we simply measure p, r, or e.
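
For what it is worth, here is the observables-only recipe from the previous
paragraph as a Python sketch (the qi and qo traces are made-up placeholder
data, and Fe is assumed known and of unit gain, both purely for illustration):

qi_trace = [1.02, 0.98, 1.01, 0.99, 1.00]        # observed input quantity (hypothetical)
qo_trace = [100.5, 101.2, 100.8, 101.0, 100.9]   # observed output quantity (hypothetical)

def Fe(qo):
    # the environmental feedback function, determined by examining the
    # environment; assumed here to be unit gain
    return qo

DS_trace = [qi - Fe(qo) for qi, qo in zip(qi_trace, qo_trace)]   # DS = qi - Fe(qo)
r_estimate = sum(qi_trace) / len(qi_trace)       # r inferred from the long-term average of qi
print(DS_trace)
print(r_estimate)

Nothing inside the organism is measured here; p, r, and e enter only as
inferences from qi, qo, and the fitted model.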

Best,

Bill P.

[From Bill Powers (980308.0036)]

Martin Taylor 980307 14:37 --

I am going to object every time you use the symbol d when you should say DS.

I don't mind that if you object equally (and retroactively) every time
Rick uses it in the same way. I'll use whatever symbology makes you happy.

Rick never uses d except as a special case of Fd(d) where Fd is a
multiplier of 1, but could be something else. He simply writes d instead of
1 * d.

In the case where Fe(o) = 101 and qi = 1, Fd(d) or DS = -100. However, if
Fd happens to be a multiplier of 5, then d would be -20, while DS would
still be -100. So d and DS are the same only in one special case.

DS cannot be experimentally manipulated without using a real d acting
through some Fd. This is because you can't arbitrarily manipulate either qi
or qo and still have an operating control system. The only choices are to
manipulate some independent environmental variable which influences qi or
qo additively, or to alter some parameter in Fe in such a way as to leave
qi and qo free to vary.

But I think it only reasonable that when I reply to a message in which
the formula p = o+d is used, it is easier to understand the answer if the
terminology stays the same in the reply as in the original.

+When Rick
+says p = o + d, he specifically means that Fd = 1. When you say p = o + d,
+you do not mean that.

I thought I did. I said I did, too.

But you didn't -- you couldn't mean the same thing. If Fd is anything but
1, it is not true that DS and d have the same value. DS and d are not
interchangeable.

What you're doing here is equivalent to an electrical engineer deciding
that whatever other engineers do, he is going to use the symbol R for
capacitance. If your objective is to spread confusion, then using DS and d
interchangeably is an excellent way to do that.

Best,

Bill P.

[From Bill Powers (980308.0054 MST)]

Martin Taylor 980307 14:45 --

Martin, that is completely false. The variable d is a physical variable
separate from qi, linked to qi additively through a function Fd.

I wish you wouldn't try to win arguments by rhetorical tricks. I was _very_
careful in my definitions.

+(Martin Taylor 980306 10:45)
+Symbols: The values of the different signals need to be labelled if we
+are to talk about them. Most have conventional uses, but there are two
+for which there is a problem. I will use "d" for the external input to
+the CCEV, whose output is qi, and "x" for the internal input to the CCEV
+(these are the influences from the disturbance and from the output
+respectively, and the CCEV is where those influences combine, just as
+the comparator is where the perceptual signal and the reference signal
+combine).

I reject those definitions. In this context "d" is already taken -- you
can't just redefine it and ignore the physical meaning it formerly had. I
know you think your definition is preferable to mine, but if we took a poll
on CSGnet right now, I'll bet that your definition would be turned down
almost unanimously. It is intolerable to have different people using the
same important symbol with different meanings, and I got there first. You
just have to find a different symbol that is recognizably different from
d. In PCT, d means an independent physical variable that affects the
controlled variable through some general function Fd. There may be many d's
acting at the same time. Let's leave it that way. In fact, I insist on that.

Best,

Bill P.

[From Bruce Gregory (980308.1030 EST)]

Bill Powers (980307.1759 MST)

But there is no "distance," and that's where the current arguments are
taking us off the track. There are only two perceptual signals at the same
level, the one in the control system and the one in the observer that the
observer refers to as qi, or the "CEV" or "CCEV." We deal with the
environment from the standpoint of a system that receives signals
representing spatial locations: position of the hand and mouse, position of
cursor, position of target. And we combine the positions of the cursor and
target to define a controlled perception that is the distance between them.

This is all about the relationship of directly perceived reality to the
constructs we call "physical reality." I have by no means worked out all
the details of this relationship. I don't think I could model any complex
control process in a way that would completely avoid attributing external
reality to some of my own perceptions. This is something that will take a
long time to work out, and in the process I think we will change not only
psychology but physics.

I think you are right about what it will take to work this out. One
way to interpret the very puzzling nature of quantum mechanics is
that it shows us the breakdown that occurs when we try to press
concepts from our perception of the world to levels those
perceptions never had occasion to reach before.

Bruce

[From Bruce Nevin (980308.1425 EST)]

Bill Powers (980307.1759 MST)--

Me (980308.1046 EST [should have said 980307 -- BN]) to Martin:

The external variable d can be derived from the observed state of the
controlled variable cv less the observed output o of the control loop into
the environment. It is an external variable, that is, it can be derived by
an analyst from things observed outside the control system.

To derive the external variable d,
the observer must know the nature of Fd, the physical connection relating d
to what Martin calls the "disturbance signal." The function Fd can be
determined only by examining the environment -- not the control system. Is
that what you're saying here?

Yes, I guess I should have said factor out Fe(o) from qi and that leaves
Fd(d), and if you know Fd() as a specification of the relevant physics of
the environment you can derive d, assuming only one source d or a sum of
all sources of disturbance. The point wasn't the derivation of d, it was
just to say that d is in the environment, and this other derived variable
that I was calling x is supposed to be inside the control system.

I was after Martin's claim (repeated in his 980228 17:50) that information
about waveform of the disturbance influence (or disturbance signal) is
present inside a control system, and that if this were not so the control
system would be a paradoxical or nonphysical system involving magic. He
said that a control system uses information about the waveform of the
disturbance influence (or disturbance signal), despite the fact that it
does not have access to such information.

The basis (I thought) for saying this was that he could derive a value
equivalent to "the waveform of the disturbance influence/signal" from p, r,
and loop functions. Because the aim was to show that this information was
inside the control system, I assumed that the loop functions in the
derivation must be system-internal just as p and r are internal.

I suggested that we use x for this derived value inside the control system,
just to distinguish it from "the disturbance influence/signal" itself,
which is in the environment. If x = Fd(d) then I suppose you could say that
information about Fd(d) is present inside the control system.

I didn't look at the derivation that demonstrates the presence of
"information about the waveform of the disturbance influence/signal". I'm
not sure I could find it. I took Martin's word for it.

My response is that information is not required as an explanatory principle
in control theory, it is not a quantum in modelling control, and it is not
required in order for control to happen. If you want to talk about
information and entropy you can say that the process of control creates
information. It does not use it.

I can see that Fd() specifies properties of the environment. However, it
seems obvious that "disturbance" is created (in the observer's perceptual
world) by the control that opposes it. Whether and how much some activity
or state d is a disturbance is a function of the control that is being
disturbed. Hold environmental sources of influence d constant while
changing the reference value r and the effect of Fd(d) as disturbance
varies. (Similarly if you change loop gain.) Cease to control a perceptual
input p and environmental influences on sensors affecting qi and p cease to
be disturbances. This is because qo varies as a consequence, and qi
includes both Fe(o) and Fd(d). So a disturbance is a perception projected
into the environment by the external analyst, a difference that should make
a difference in qi, but which the countervailing actions of the observed
control system cause to make no difference.

  Bruce Nevin

[From Bruce Nevin (980308.1807)]

Bill Powers (980307.1759 MST)--

But there is no "distance," and that's where the current arguments are
taking us off the track. There are only two perceptual signals at the same
level, the one in the control system and the one in the observer that the
observer refers to as qi, or the "CEV" or "CCEV." We deal with the
environment from the standpoint of a system that receives signals
representing spatial locations: position of the hand and mouse, position of
cursor, position of target. And we combine the positions of the cursor and
target to define a controlled perception that is the distance between them.

Bill Powers (980301.1123 MST)

There is no single
environmental variable that corresponds to the perceptual signal. There are
only the component input variables. I've never liked this idea of a CEV,
because it seems to sneak naive realism back into the theory even though it
is basically denied by the theory. It's like saying, "Sure, I know that the
taste of lemonade isn't really there in the mixture of acids, salts, and
oils -- but let's pretend it's really there anyway." That's all it is, a
pretense.

This is all about the relationship of directly perceived reality to the
constructs we call "physical reality."

Sounds like the higher up in the hierarchy you go the more the perceptual
signal is a construct projected onto the environment. There are sensors for
acid, salt, perhaps oil, I don't know; there is no direct sensor for the
taste of lemon. The suggestion seems to be that signals directly from
sensors (all intensity perceptions?) are closest to reality.

Sensors are also input functions. Low-level perceptions are also
constructs. But being as close as we can get to the physical environment we
call these perceptions "directly perceived reality". Am I with you?

I have by no means worked out all
the details of this relationship. I don't think I could model any complex
control process in a way that would completely avoid attributing external
reality to some of my own perceptions. This is something that will take a
long time to work out, and in the process I think we will change not only
psychology but physics.

One direction is to try to attain an objective point of view. I don't hear
you suggesting that. Another direction is to try to attain the point of
view of the control system being modelled. Same problem of projecting one's
own perceptions, but probably more attainable.

I think you are right about what it will take to work this out. One
way to interpret the very puzzling nature of quantum mechanics is
that it shows us the breakdown that occurs when we try to press
concepts from our perception of the world to levels those
perceptions never had occasion to reach before.

The perceptual universe of another organism is a place our perceptions have
scarcely ever had occasion to reach before.

Another direction is not to attribute external reality to theoretical
constructs until they shape up into a model of a domain, then interpret the
model as a whole. I think of this because it's what Harris did with
language, but I know that means nothing to you. Still it might be where
we're headed. We might not be able to resolve these sorts of questions
until our models of hierarchical perceptual control are more fully
articulated even without such answers.

  Bruce Nevin

[From Bruce Abbott (980309.0900 EST)]

Bruce Nevin (980308.1807) --

Bill Powers (980301.1123 MST)

There is no single
environmental variable that corresponds to the perceptual signal. There are
only the component input variables. I've never liked this idea of a CEV,
because it seems to sneak naive realism back into the theory even though it
is basically denied by the theory. It's like saying, "Sure, I know that the
taste of lemonade isn't really there in the mixture of acids, salts, and
oils -- but let's pretend it's really there anyway." That's all it is, a
pretense.

This is all about the relationship of directly perceived reality to the
constructs we call "physical reality."

Sounds like the higher up in the hierarchy you go the more the perceptual
signal is a construct projected onto the environment. There are sensors for
acid, salt, perhaps oil, I don't know; there is no direct sensor for the
taste of lemon. The suggestion seems to be that signals directly from
sensors (all intensity perceptions?) are closest to reality.

In most (all?) cases perception is a construct from the start. The retina
contains four kinds of photoreceptor, three of which together provide the
basis of color vision. These three are "tuned" to different frequency bands
of the electromagnetic spectrum; their peaks overlap somewhat. What we call
yellow usually begins as a particular frequency of light, one that activates
the two lowest-frequency photoreceptors about equally well. Equal
co-activation of these receptors and the near-absence of activity in the
highest-frequency receptor ultimately give rise to the perception we label
as yellow.

In this example "yellow" is "really there" (in the sense of there being
present a particular frequency of light which we label as yellow). However,
shining "red" and "green" light (particular light frequencies) on the same
part of the retina will also produce equal co-activation of the two
lowest-frequency receptors and will also produce the perception of yellow.
In this case the yellow (as a frequency of electromagnetic wave) is not
present at all. This is the method used in color TV to produce all the
colors of perception using only the three "primary colors."
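
A toy numerical sketch of that point (the Gaussian cone sensitivities and the
particular wavelengths below are crude assumptions chosen for illustration, not
real colorimetric data):

import math

def cone(peak_nm, width_nm):
    # toy spectral sensitivity: a Gaussian centered on peak_nm
    return lambda nm: math.exp(-((nm - peak_nm) / width_nm) ** 2)

L, M = cone(565, 50), cone(535, 50)    # long- and middle-wavelength cones (toy values)

target = (L(580), M(580))              # cone activations from 580 nm "yellow" light

# Solve a*[L(650), M(650)] + b*[L(530), M(530)] = target for the mixture weights
a11, a12 = L(650), L(530)
a21, a22 = M(650), M(530)
det = a11 * a22 - a12 * a21
a = (target[0] * a22 - a12 * target[1]) / det    # amount of 650 nm ("red") light
b = (a11 * target[1] - target[0] * a21) / det    # amount of 530 nm ("green") light

mix = (a * L(650) + b * L(530), a * M(650) + b * M(530))
print("580 nm light: ", target)
print("red+green mix:", mix)           # same L and M activations, no 580 nm component

The mixture produces the same pair of cone activations as the single yellow
wavelength, which is the sense in which the yellow on a TV screen is
constructed rather than physically present.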

Similarly, variations in temperature seem to be conveyed as the intensity
signals of a single sense, but in fact the perception of temperature arises
from the co-activation of two, more specific, receptors in the skin. One
receptor becomes more active as skin temperature increases above some value
(the "warm" receptor). The other has an inverted U-function, becoming more
active as skin temperature falls below some value _or_ as the temperature
reaches an upper extreme. This latter receptor is labeled the "cold"
receptor and conveys the impression of "burning" hot when co-active with the
warm receptor. (Aside: I have omitted sensory adaptation effects in this
description, which complicate the picture somewhat.)

Regards,

Bruce