assuming the point of view

[From Bruce Nevin (980228.1047 EST)]

Bill Powers (980228.0308 MST) --
Re: Disturbing Disturbances

The only way to make any deductions about d is from outside the
control system, where the form of Fd can be observed as well as d
itself. That information is not available to the control system;
it is available ONLY to the omniscient external observer.

This is a recurrent issue. PCT research must take the point of view of the
control system. Many people take the point of view of the observer outside
the control system.

When we conduct the Test we are stuck in the external point of view; we
cannot see into the control system. But even then we must do the best we
can to reconstruct the point of view of the control system. We guess what
aspect is being controlled in the complex environmental variable (CEV). The
CEV is perceived from our point of view as observer, of course, but we also
must make a guess at how it is perceived by the control system. We must do
this because the aim is to build a model. To build and study a model we
absolutely must assume the internal point of view of the control system.
The whole point is to get inside the black box.

Why would one want to take the external point of view of the observer? Here
are some guesses.

o It's easier, it's obvious. Well, OK, engagement with PCT
  entails the discipline of setting aside the external observer's
  POV in favor of the internal control system's POV. Maybe we
  just haven't emphasized that enough. Maybe learning to practice
  that discipline just takes time.

o I don't notice that I'm doing it. The discipline of setting
  aside my own POV slips in the heat of argument or in the
  pursuit of a research interest.

o I really believe the external POV is the right way to engineer
  a control system "plant".

These are just guesses. Among those who keep coming back with the external
point of view, can you give us some introspective evidence as to what is
going on? Martin, Bruce Abbott, it's come up explicitly for you in recent
weeks, but I'm not really picking on you. I know there are many more slips
of the viewpoint among us. How does it come about? How can we better help
newcomers to turn this corner?

  Bruce

[From Bruce Gregory (980228.1214 EST)]

Bruce Nevin (980228.1047 EST) --

These are just guesses. Among those who keep coming back with the external
point of view, can you give us some introspective evidence as to what is
going on? Martin, Bruce Abbott, it's come up explicitly for you in recent
weeks, but I'm not really picking on you. I know there are many more slips
of the viewpoint among us. How does it come about? How can we better help
newcomers to turn this corner?

In a post last week I made the modest suggestion, strongly disavowed by Jeff,
that he and Rick live in different perceptual worlds.

Bruce

[From Bruce Nevin (980228.1301 EST)]

Bruce Gregory (980228.1214 EST)--

In a post last week I made the modest suggestion [...]
that [two people] live in different perceptual worlds.

That's a given.

Then what?

The Test is a way to figure out something about the perceptual world of
another, by an iterative trial-and-error process. I think we do this
informally all the time. I think we call on one another to do it. If
something disturbs a perception that person A is controlling, and A
imagines that you can do something about it, A is likely to let you know. A
might disturb aspects of CEVs that you are controlling in the expectation
that you will fix A's problem, if only in the hope that A will go away and
stop bothering you. Seems to be the way kids do it.

Collusion in controlling a disturbance is a basic form of agreement, though
interpersonal conflict is maybe a more basic form--opposite ends defining a
seesaw. Perhaps we use interpersonal conflict and cooperation as means for
learning what perceptions to control and how better to control them. That
includes learning what perceptions to ignore as of no account.

Bruce Gregory (980228.1634 EST)--

... the "many worlds" aspects of PCT
... is very important in making sense of conflict.

What do you have in mind?

  Bruce Nevin

[From Rick Marken (980228.1740)]

Bruce Nevin (980228.1301 EST)

The Test is a way to figure out something about the perceptual world of
another, by an iterative trial-and-error process. I think we do this
informally all the time. I think we call on one another to do it. If
something disturbs a perception that person A is controlling, and A
imagines that you can do something about it, A is likely to let you know.

I agree. In fact, after I posted my earlier post I realized that most
people really _go out of their way_ to try to let other people know
what they are controlling for. I think they do this when they want
other people to join in and control for the same thing. Heck, that's
what we are doing all the time on this net: trying to get people to see
what we are controlling for and trying to recruit their assistance in
controlling for it. Politicians do this; professors do this; business
people do this; even dictators do it (Hitler was pretty up front about
wanting a perception of Europe sans Jews).

We are using language all the time to try to describe what we _want to
perceive_ (what perceptions we are controlling) and to try to convince
others to want the same perception.

Disagreements are probably the result of the difficulty of using
language to describe what we want to perceive and the result of
the fact that people often don't want to perceive the same things. But
I think you are absolutely right, Bruce -- we are using a linguistic
version of The Test all the time to try to figure out what others
want; and, because of the limitations of language, we have to do
this even when others _want_ us to know what they want.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Gregory (980301.0518 EST)]

Bruce Nevin (980228.1301 EST)

Bruce Gregory (980228.1214 EST)--

In a post last week I made the modest suggestion [...]
that [two people] live in different perceptual worlds.

That's a given.

I'm not sure Jeff would agree. Nor am I sure that this outlook is shared by
more than a very few.

Then what?

Two possibilities. You dwell in a world given by the perception that others
are autonomous agents in their own perceptual worlds. Or you don't. You alter
your world, or you try to impress it on others.

Bruce Gregory (980228.1634 EST)--

... the "many worlds" aspects of PCT
... is very important in making sense of conflict.

What do you have in mind?

See above.

Bruce

[Martin Taylor 980228 17:50]

Bruce Nevin (980228.1047 EST)

... Among those who keep coming back with the external
point of view, can you give us some introspective evidence as to what is
going on? Martin, Bruce Abbott, it's come up explicitly for you in recent
weeks, but I'm not really picking on you. I know there are many more slips
of the viewpoint among us. How does it come about? How can we better help
newcomers to turn this corner?

I think it is not only newcomers who have the problem of being unable to
keep straight the viewpoint.

A few years ago there was a long discussion, in which a whole taxonomy of
viewpoints was proposed. I don't remember how the taxonomy went, because
I have usually found only two kinds of view to be important. I call these
"the analyst's (or external observer's) view" and "the control system's
view." There's also a "designer's" (or "engineer's") viewpoint, which looks
at the requirements the environment imposes on a system if it is to control
X, but we rarely are concerned with that (except possibly when considering
the evolution of control, and perhaps reorganization).

It was because originally I unwittingly took the control system's view
while Bill P and Rick were taking the analyst's view that we got into
the recently reawakened hassle about the word "disturbance" and "information
about the disturbance".

In 1992-3, when I first talked about information-theoretic analysis of
the control system, it simply never occurred to me that anyone
might conceive that a control system with only a single scalar value to
represent what it "knows" about the state of the world might be thought
to represent the different sources of disturbing influences on its
perceptual signal. Where the disturbing influence comes from is simply
irrelevant to the control system, so why would anyone be concerned about
it, let alone use the word "disturbance" not to refer to an effect seen
by the control system, but to refer to unknowable and irrelevant distant
sources of influence? But that turned out to be what Bill and Rick were
doing, as I eventually discovered.

Bill and Rick, unknown to me, were taking the analyst's view. The analyst
can indeed see what the control system cannot--that there are possibly many
sources of external influence on the CCEV, and they had used the word
"disturbance" to represent any one of these sources of influence. So they
assumed I was saying that something in the perceptual signal could allow
an analyst looking at that signal to distinguish the different sources of
disturbing influence. Nothing I could say then or subsequently would
disabuse them of this notion, and apparently Bill still believes it
(see Bill Powers (980224.1041 MST) to me:

+ The control system has no way to find out how many different
+disturbances are acting at the same time through different environmental
+connections.


+
+This, of course, is your cue to say "But I never said anything to the
+contrary about that; that you could think I might say anything different is
+insulting.")

The hypothetical quote is, of course, precisely correct.

Eventually, in that earlier discussion, we settled on using "disturbing
signal" or "disturbance waveform", or the like, to refer to the influence
on the CCEV, and "disturbing variable" to refer to the source of that
influence, avoiding the term "disturbance" entirely, as being ambiguous.
Nevertheless, we all do use the term "disturbance" quite frequently,
which provides much scope for misunderstanding, which is useful to some,
when to misunderstand will protect a controlled perception.

It matters whether one takes the analyst's or the control system's viewpoint,
and it may not always be obvious which viewpoint is being taken in a
message--indeed, some messages quite blithely mix the two viewpoints.
When I write a message in which I think there might be ambiguity as to
the point of view, I usually (trying for "always") say which one I am
using. For example, from the control system's view it is not possible
to construct from the perceptual signal the waveform of the disturbing
influence, for the simple reason that the control system has nowhere to
represent such a construction, and nowhere to store it if it could construct
it.

But an analyst who knows the loop functions and the reference signal
(none of which are influenced by the disturbance signal) can indeed
recover the waveform of the disturbance signal to within the precision
of control, if allowed access to the perceptual signal. This seems to
me to be incontrovertible proof that information about the disturbance
waveform is "in" (to use Rick's word) the perceptual signal. The
criticism of that statement is sometimes couched in language like "the
control system doesn't know its own functions." True, it doesn't. It
embodies them--to touch on another current thread.

Neither does it demonstrate that the control system extracts the
disturbance waveform separately in generating the output signal. It
doesn't. There is only one output signal, and again (looking from the
viewpoint of the control system) where would the control system
represent the extracted disturbance waveform if it did extract it
from the perception of the changing CEV, the changes in which are
caused by a combination of the output and the disturbance signals?
There is no place in a canonical Elementary Control System for
storing extra stuff like that.

From the analyst's viewpoint, one can demonstrate the information about
the disturbance waveform to be available from the perceptual signal. The
control system uses this information, but it doesn't extract it.
One might indeed begin to believe in magic if it were _not_ possible
to extract the disturbance waveform from the waveform of the perceptual
signal, since the perceptual signal is the _only_ access the control
system has to anything about the external world. But luckily, the system
proves to be physical and non-paradoxical, after all.
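
(Editor's illustration, not part of the original message.) Martin's claim -- that an analyst who knows the loop functions, the reference, and the initial output can recover the disturbing waveform from the perceptual signal alone -- can be checked in a small simulation. The feedback function Fe, the gain k, and the disturbance waveform below are arbitrary choices for the sketch:

```python
import math

# An integrating control loop is run while only the perceptual signal p is
# recorded; the analyst then replays the known output law offline to rebuild
# qo, and recovers DS as qi - Fe(qo). All constants are illustrative.
k, dt, r = 20.0, 0.001, 0.0

def Fe(qo):                  # environmental feedback function (known to analyst)
    return 0.8 * qo

# --- run the loop, recording only the perceptual signal ---
qo, true_DS, ps = 0.0, [], []
for step in range(5000):
    t = step * dt
    DS = math.sin(2 * math.pi * 0.3 * t)   # disturbing waveform
    p = Fe(qo) + DS                        # unit input function: p = qi
    true_DS.append(DS)
    ps.append(p)
    qo += k * (r - p) * dt                 # integrating output function

# --- offline reconstruction from p alone ---
qo_hat, err = 0.0, 0.0
for p, DS in zip(ps, true_DS):
    err = max(err, abs((p - Fe(qo_hat)) - DS))  # DS_hat = qi - Fe(qo_hat)
    qo_hat += k * (r - p) * dt                  # replay the known output law
print(err < 1e-9)
```

The reconstruction succeeds (err is at floating-point rounding level), which is the sense in which the disturbance information is "in" the perceptual signal; nothing in the code stores or extracts it inside the loop itself.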

Your main question is how to get beginners in PCT to recognize the
existence of the two viewpoints, and to keep them straight when thinking
about control. Other than making explicit in every message which viewpoint
is being taken, and when the viewpoint changes, I have no idea. But it
isn't only beginners who get into trouble, so perhaps we should try always
to emphasize which viewpoint is being taken, rather than just when we
think it might be ambiguous.

Martin

[From Bill Powers (980301.1026 MST)]

Martin Taylor 980228 17:50--

In 1992-3, when I first talked about information-theoretic analysis of
the control system, it simply never occurred to me that anyone
might conceive that a control system with only a single scalar value to
represent what it "knows" about the state of the world might be thought
to represent the different sources of disturbing influences on its
perceptual signal. Where the disturbing influence comes from is simply
irrelevant to the control system, so why would anyone be concerned about
it, let alone use the word "disturbance" not to refer to an effect seen
by the control system, but to refer to unknowable and irrelevant distant
sources of influence? But that turned out to be what Bill and Rick were
doing, as I eventually discovered.

Beautiful, Martin. My admiration for the Oxbridge method of winning
arguments has just doubled. You told me once that you were invincible in
this area, and I guess I have to agree with you.

Just something to think about:

What you call the Disturbance Signal (DS) can be derived this way:

qi = Fe(qo) + DS

Therefore,

DS = qi - Fe(qo)

"Reconstructing" DS is therefore just a matter of subtracting Fe(qo)
(observable) from qi (observable under a given hypothesis about the
controlled variable). Nothing else needs to be known about the control system.
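
(Editor's illustration, not part of the original post.) The subtraction can be watched working in a simulated loop; Fe, Fd, the gain, and the disturbance waveform are arbitrary choices here:

```python
import math

# An external observer recovers DS = qi - Fe(qo) at every instant of a
# running control loop, using only the two observables qi and qo plus a
# hypothesis about Fe. Fd is known only to the simulator.
def Fe(qo): return 0.8 * qo      # environmental feedback function
def Fd(d):  return 0.5 * d       # disturbance function (simulator only)

dt, k, r = 0.01, 5.0, 0.0
qo, worst = 0.0, 0.0
for step in range(2000):
    t = step * dt
    d = math.sin(2 * math.pi * 0.2 * t)   # remote disturbing variable
    qi = Fe(qo) + Fd(d)                   # input quantity
    # the observer's reconstruction, using only qi and qo:
    worst = max(worst, abs((qi - Fe(qo)) - Fd(d)))
    qo += k * (r - qi) * dt               # integrating output, p = qi
print(worst < 1e-9)
```

The observer recovers DS (here Fd(d)) exactly, without knowing d, Fd, or anything inside the control system, which is the point of the derivation.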

Reconstructing DS is a different matter from explaining DS. To explain DS
you have to come up with hypotheses about various disturbance functions and
disturbing variables, of which there may be many acting at once. When we
model control systems, we have to create DS by manipulating physical
variables in the role of the disturbance d, and we have to know the
function through which d acts to produce DS. That explains why Rick and I
do not consider the source of the disturbance to be "unknowable and
irrelevant," and indeed why we adopt the analyst's point of view in
modeling behavior.

Best,

Bill P.

[Martin Taylor 980305 16:58]

Bill Powers (980301.1026 MST)

Martin Taylor 980228 17:50--

In 1992-3, when I first talked about information-theoretic analysis of
the control system, it simply never occurred to me that anyone
might conceive that a control system with only a single scalar value to
represent what it "knows" about the state of the world might be thought
to represent the different sources of disturbing influences on its
perceptual signal. Where the disturbing influence comes from is simply
irrelevant to the control system, so why would anyone be concerned about
it, let alone use the word "disturbance" not to refer to an effect seen
by the control system, but to refer to unknowable and irrelevant distant
sources of influence? But that turned out to be what Bill and Rick were
doing, as I eventually discovered.

Beautiful, Martin. My admiration for the Oxbridge method of winning
arguments has just doubled.

If restating plain facts is the "Oxbridge method of winning arguments", I
guess I do try to use it. I'd much rather be sustained or refuted by a
fact than by a rhetorical trick such as using pejorative labels.

However...

We may have here a place where unconscious assumptions have been causing
a great communication gap, that has lasted over several years. I doubt the
gap can be bridged in one message, but at least the first caisson for a
bridge pier might be built.

DS = qi - Fe(qo)

"Reconstructing" DS is therefore just a matter of subtracting Fe(qo)
(observable) from qi (observable under a given hypothesis about the
controlled variable). Nothing else needs to be known about the control system.

Reconstructing DS is a different matter from explaining DS.

It is, indeed. I've never thought the "explaining" DS was of any interest.
At least not in the analysis of an individual control loop. Where the
disturbance comes from is irrelevant to the loop. What matters is how big DS
is, how fast it varies, and stuff like that. Now a designer wanting to
build a control loop for some purpose in a given environment may want to
know where the disturbances might come from so that the design might
include some shielding, or action paths directed at specific components
of the disturbance. But when the designer has done that, the analyst still
is uninterested in "explaining" DS. The analyst just wants to know how
well the control system will work.

Reconstructing the DS is an issue only insofar as the demonstration that
it can be done assures us that the information from its variation is not
lost in the operation of the loop, and specifically, it is not lost from
the perceptual signal. Also, since the reference signal input to the loop
is unaffected by the disturbance, that information comes through the
perceptual input function, and is available also in the error signal
that is the input to the output function.

To explain DS
you have to come up with hypotheses about various disturbance functions and
disturbing variables, of which there may be many acting at once. When we
model control systems, we have to create DS by manipulating physical
variables in the role of the disturbance d, and we have to know the
function through which d acts to produce DS. That explains why Rick and I
do not consider the source of the disturbance to be "unknowable and
irrelevant,"

I said "unknowable and irrelevant to the operation of the control system"
not "unknowable and irrelevant to everybody." When you model a control system
_in its environment_, certainly you model the possible sources of disturbance
and feed them through signal paths of some functional character, combining
them to influence qi (the value of the CCEV). But when you model the
behaviour of a control system, you don't do that. You need to use only DS.
And that's what I've been doing.

and indeed why we adopt the analyst's point of view in
modeling behavior.

Fair enough. But the analyst's point of view on what? On more than the
control loop. Adopting the analysts view _of_ the control loop, explaining
DS is of no interest. Characterizing it is.

Example: A simple control loop with unitary transforms everywhere except
for the output function, which is a simple integrator with an output gain
rate of k per second (i.e. for an input steady at 1.0 for t seconds, its
output is k*t). DS is a sinusoid. (Why is it a sinusoid? Who cares?). What
is the control ratio (the ratio between the excursions of DS and the
excursions of the perceptual signal)?
Answer: We can't tell, because it depends on the frequency of the sinusoid.
If the sinusoid is very slow, the control ratio will be very high. If the
sinusoid is fast, it will be low, and if there is any transport lag in the
system, at some frequency of DS, the control ratio might even be worse
than unity.

Point of the example? Just a trivial case in which characterizing DS matters
in talking about the performance of the control system. The analyst who
looks at the control loop can readily determine just how the variations
in DS affect the variations in signal value anywhere in the loop, without
any idea of why DS varies sinusoidally, provided that the sinusoid frequency
is known.
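
(Editor's illustration, not part of the original message.) Martin's integrator example simulates directly; the gain, time step, and test frequencies below are arbitrary choices. The control ratio is high for a slow sinusoid and collapses toward unity for a fast one:

```python
import math

def control_ratio(freq_hz, k=20.0, dt=0.001, seconds=30.0):
    """Unit transforms everywhere except an integrating output of gain k.
    Returns peak(DS) / peak(p), measured over the last third of the run
    (DS has unit amplitude, so this is just 1 / peak(p))."""
    qo, r = 0.0, 0.0
    ps = []
    n = int(seconds / dt)
    for step in range(n):
        t = step * dt
        DS = math.sin(2 * math.pi * freq_hz * t)
        p = qo + DS             # qi = Fe(qo) + DS with Fe, Fi = identity
        qo += k * (r - p) * dt  # integrating output function
        ps.append(p)
    tail = ps[2 * n // 3:]      # skip the startup transient
    return 1.0 / max(abs(x) for x in tail)

slow = control_ratio(0.05)   # slow sinusoid: well controlled
fast = control_ratio(5.0)    # fast sinusoid: poorly controlled
print(slow > 20, fast < 3)
```

With k = 20 the loop behaves as a first-order system: analytically the ratio is sqrt(w**2 + k**2)/w, roughly 64 at 0.05 Hz and about 1.2 at 5 Hz, so the same DS waveform is controlled well or badly depending only on its frequency, exactly Martin's point.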

Neither the example nor anything I have written is intended to suggest
that there is a place in the control loop where DS is reconstituted. (I
don't know why I have to say this every time, and I guess I probably don't,
because saying doesn't seem to affect the probability that I will be
claimed to have said that such a place exists.) And neither the example
nor anything I have written is intended to say that the disturbance "causes"
behaviour. The disturbance signal DS, applied to the CCEV of a loop
with known functions, results in a precisely determined pattern of
variations in the signal values anywhere in the loop. Is that "cause-effect?"
Perhaps. But it depends on your philosophical position whether to analyse
the behaviour of the control loop is to "defend the position that
disturbances cause behaviour."

Perhaps this is a small contribution toward a bridge? You are interested
in looking at the sources of disturbances when you model control systems
sitting in modelled environments. I am not interested in the sources of
disturbances when analyzing the behaviour of the control loop itself.

Martin

[Martin Taylor 980306 10:20]

Bill Powers (980301.1026 MST) --

Thinking a bit further about a message to which I already replied:

What you call the Disturbance Signal (DS) can be derived this way:

qi = Fe(qo) + DS

Therefore,

DS = qi - Fe(qo)

That's fair enough. I'm sure it can be derived many ways. It is, after
all, one of the two inputs to the control loop, and any sufficient
set of the control loop equations will be solvable for it.

"Reconstructing" DS is therefore just a matter of subtracting Fe(qo)
(observable) from qi (observable under a given hypothesis about the
controlled variable). Nothing else needs to be known about the control system.

That also is fair enough. In fact, what you need for this derivation is
only what can be observed from outside. Now look at what else you can do
with this particular equation, if we (as the analyst) contemplate a
S-R psychologist running an experiment, applying a variety of "stimuli"
and measuring various "responses."

qi is controlled quite well (or so we assume). So, as a first approximation,
we can set qi to be a constant C. So

DS = C - Fe(qo)

Fe^-1(DS) = -qo + Fe^-1(C)

The experimenter "applies the stimulus" DS, and measures qo. A reliable
relationship is found, at least if Fe^-1(constant) is constant. The
experimenter thinks that a relationship has been found within the
individual. Is this not the "behavioural illusion?"

Now let's go a little further. A reliable relationship is found only if
Fe^-1(constant) is a constant. This is true only for a small class of Fe()
functions (remember that Fe includes time variation--in a linear system
it would be a Laplace transform). In many cases, the effect of Fe^-1(C)
is _not_ constant, so the relationship between DS and qo is measurable,
and perhaps strong, but not reliable (e.g. correlations may be 0.3 rather
than 0.95).

Here we have a simple equation that not only describes the behavioural
illusion and gives a reason for it, but also suggests, without appealing
to the notion of "noise" or random variation, why S-R experimental
results can be erratic.
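
(Editor's sketch, not in the original exchange.) Setting aside the disputed algebra, the behavioural illusion itself is easy to exhibit numerically: with the reference fixed, the steady-state relation between an applied disturbing signal and the output traces the inverse of the environment function, and says nothing about the organism's internal functions. Fe(o) = o**3 and all constants are arbitrary illustrative choices:

```python
# A loop with a nonlinear environment function Fe(o) = o**3 is driven to
# steady state under different constant disturbing signals; the resulting
# "S-R law" relating DS to output o is just the inverse of Fe.
def Fe(o): return o ** 3

def steady_output(DS, k=10.0, dt=0.001, steps=20000, r=0.0):
    o = 0.0
    for _ in range(steps):
        p = Fe(o) + DS            # qi with unit input function
        o += k * (r - p) * dt     # integrating output function
    return o

for DS in (-8.0, -1.0, 1.0, 8.0):
    # with r = 0, control drives Fe(o) + DS toward 0, so o -> Fe^-1(-DS)
    print(round(steady_output(DS), 2))
```

Each printed output is the cube root of -DS (2.0, 1.0, -1.0, -2.0): an experimenter fitting "stimulus" DS against "response" o would recover Fe^-1, a property of the environment, not of the control system.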

This insight (for so I deem it, though it may be well known to Bill and
Rick) supplements another reason I had thought to be the main source of
variability in S-R studies. I bring this up only because it ties in with
another thread (the "teach-your-grandmother-to-suck-eggs" thread--e.g.,
Bill Powers (980303.1327 MST)--about there being many inputs to the
Perceptual Input Function, and many influences on the environment of
the output signal). I had assumed that the experimenter would have
been unable to control or to measure all the sources of disturbance
to the PIF, and all the influences of the output on the CCEV (i.e.
on the value of qi) or on the environment. Omitting variable
disturbances and output influences would reduce the reliability
of the relation between "stimulus" and "response" even if the controlled
value of qi were truly constant (Bill posted a simulation of this effect
in the cited message).


-----------------------

Reconstructing DS is a different matter from explaining DS. To explain DS
you have to come up with hypotheses about various disturbance functions and
disturbing variables, of which there may be many acting at once. When we
model control systems, we have to create DS by manipulating physical
variables in the role of the disturbance d, and we have to know the
function through which d acts to produce DS. That explains why Rick and I
do not consider the source of the disturbance to be "unknowable and
irrelevant," and indeed why we adopt the analyst's point of view in
modeling behavior.

As I said, I see your point. But it still doesn't explain why Rick is
allowed to use the symbol "d" in P = o + d, whereas it seriously disturbs
some perception of yours when I use the same symbol with the same intent.

Martin

[From Bill Powers (980306.1238 MST)]

Martin Taylor 980305 16:58--

We may have here a place where unconscious assumptions have been causing
a great communication gap, that has lasted over several years. I doubt the
gap can be bridged in one message, but at least the first caisson for a
bridge pier might be built.

DS = qi - Fe(qo)

"Reconstructing" DS is therefore just a matter of subtracting Fe(qo)
(observable) from qi (observable under a given hypothesis about the
controlled variable). Nothing else needs to be known about the control system.

Reconstructing DS is a different matter from explaining DS.

Reconstructing the DS is an issue only insofar as the demonstration that
it can be done assures us that the information from its variation is not
lost in the operation of the loop, and specifically, it is not lost from
the perceptual signal. Also, since the reference signal input to the loop
is unaffected by the disturbance, that information comes through the
perceptual input function, and is available also in the error signal
that is the input to the output function.

OK. I think we'd best leave it there. When you work out how to apply
information theory to a closed loop, perhaps we can go on to see what
information theory can add to our understanding of control.

Best,

Bill P.

[From Bill Powers (980306.1251 MST)]

Martin Taylor 980306 10:20 --

qi is controlled quite well (or so we assume). So, as a first approximation,
we can set qi to be a constant C. So

DS = C - Fe(qo)

Fe^-1(DS) = -qo + Fe^-1(C)

The experimenter "applies the stimulus" DS, and measures qo.

There are several problems with this.

1. "The experimenter applies DS" means that the experimenter is determining
qi - Fe(qo). But it is the control system that produces qo and stabilizes
qi so you can call it C. If the experimenter is to apply a disturbance in a
way that doesn't prevent the control system from working, he must do it by
varying some remote variable d that acts through its own path Fd on qi,
leaving the control system free to generate its own contribution to qi.

2. Your derivation of the inverse should be

Fe(o) = C - DS, or

o = Fe^-1(C - DS)

Your derivation is correct only in the special case where Fe^-1(C - DS) =
Fe^-1(C) - Fe^-1(DS) -- in other words, where Fe^-1 is linear and
single-valued.
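
(Editor's numeric check, with arbitrary values; not part of the original message.) The distinction is easy to see with a concrete nonlinear, invertible Fe, say Fe(x) = x**3:

```python
# For nonlinear Fe, applying the inverse term-by-term, Fe^-1(C) - Fe^-1(DS),
# does not equal the correct inverse Fe^-1(C - DS). The numbers are arbitrary.
def Fe_inv(y):                  # inverse of Fe(x) = x**3 (real cube root)
    return abs(y) ** (1 / 3) * (1 if y >= 0 else -1)

C, DS = 8.0, 1.0
lhs = Fe_inv(C - DS)            # correct: o = Fe^-1(C - DS)
rhs = Fe_inv(C) - Fe_inv(DS)    # the hidden linearity assumption
print(round(lhs, 3), round(rhs, 3))
```

The two values (about 1.913 versus 1.0) agree only when Fe^-1 distributes over subtraction, i.e. only for linear Fe.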

A reliable relationship is found, at least if Fe^-1(constant) is constant. The
experimenter thinks that a relationship has been found within the
individual. Is this not the "behavioural illusion?"

Now let's go a little further. A reliable relationship is found only if
Fe^-1(constant) is a constant. This is true only for a small class of Fe()
functions (remember that Fe includes time variation--in a linear system
it would be a Laplace transform). In many cases, the effect of Fe^-1(C)
is _not_ constant, so the relationship between DS and qo is measurable,
and perhaps strong, but not reliable (e.g. correlations may be 0.3 rather
than 0.95).

I think you're arguing verbally where you need a mathematical argument. It
is impossible for the experimenter to directly determine DS; any attempt to
force DS directly to have a particular value will break the control loop.
Remember that DS is not literally a variable; it is a computed difference
between two variables, and can't just be set to a particular value without
arbitrarily setting the two variables (qi and qo) to specific values. But
if you force qi and qo to specific values, you don't have a control system
any more.

This is why the variable d and the function Fd are needed. Using them is
the only way to apply a disturbance that leaves the control system still
working.

Here we have a simple equation that not only describes the behavioural
illusion and gives a reason for it, but also suggests, without appealing
to the notion of "noise" or random variation, why S-R experimental
results can be erratic.

Cancel that, please; your reasoning is flawed.

This insight (for so I deem it, though it may be well known to Bill and
Rick) supplements another reason I had thought to be the main source of
variability in S-R studies.

It is not an insight; it is a mistake.

As I said, I see your point. But it still doesn't explain why Rick is
allowed to use the symbol "d" in P = o + d, whereas it seriously disturbs
some perception of yours when I use the same symbol with the same intent.

Because the real equation is

p = Fi(Fe(o) + Fd(d)),

where for pedagogical purposes, Fi, Fe, and Fd are selected to be unity
multipliers. The meaning of d in PCT is _always_, with no exceptions, a
remote variable that contributes to qi through some function Fd.

When you use the symbol d, you mean DS, the disturbing signal. That you're
talking about a different variable is not evident when you treat the
special case in which Fe, Fi, and Fd are all multipliers of 1. But when
they have another form, it is immediately clear that DS is not, in general,
the same as d.
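
(Editor's sketch; the functions are arbitrary illustrative choices, not from the original message.) With non-unit functions, the remote variable d and the disturbing signal DS = Fd(d) come apart immediately:

```python
# p = Fi(Fe(o) + Fd(d)): once Fi, Fe, Fd stop being unity multipliers,
# DS = Fd(d) is numerically a different variable from d.
def Fi(x): return 0.5 * x     # perceptual input function
def Fe(o): return 2.0 * o     # environmental feedback function
def Fd(d): return 3.0 * d     # disturbance function

o, d = 1.0, 4.0
DS = Fd(d)                    # contribution of d to qi: 12.0, not 4.0
p = Fi(Fe(o) + DS)            # p = Fi(Fe(o) + Fd(d)) = 7.0
print(DS == d, p)
```

In the pedagogical special case Fi = Fe = Fd = 1 the two symbols happen to coincide, which is exactly what hides the ambiguity Bill describes.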

Best,

Bill P.

[Martin Taylor 980307 1:08]

Bill Powers (980306.1251 MST) --

Martin Taylor 980306 10:20 --

qi is controlled quite well (or so we assume). So, as a first approximation,
we can set qi to be a constant C. So

DS = C - Fe(qo)

Fe^-1(DS) = -qo + Fe^-1(C)

The experimenter "applies the stimulus" DS, and measures qo.

There are several problems with this.

1. "The experimenter applies DS" means that the experimenter is determining
qi - Fe(qo). But it is the control system that produces qo and stabilizes
qi so you can call it C. If the experimenter is to apply a disturbance in a
way that doesn't prevent the control system from working, he must do it by
varying some remote variable d that acts through its own path Fd on qi,
leaving the control system free to generate its own contribution to qi.

Of course. No problem here, and no effect on the argument.

2. Your derivation of the inverse should be

Fe(o) = C - DS, or

o = Fe^-1(C - DS)

No. That's wrong. One applies the inverse function to both sides of the
equation, which is what I did. F^-1(F(x)) = x. The assumption that is
hidden in what I did was that F^-1(x + y) = F^-1(x) + F^-1(y), which
is a statement that F^-1 is linear.

Your derivation is correct only in the special case where Fe^-1(C - DS) =
Fe^-1(C) - Fe^-1(DS) -- in other words, where Fe^-1 is linear and
single-valued.

Linear, yes.

A reliable relationship is found, at least if Fe^-1(constant) is constant. The
experimenter thinks that a relationship has been found within the
individual. Is this not the "behavioural illusion?"

Now let's go a little further. A reliable relationship is found only if
Fe^-1(constant) is a constant. This is true only for a small class of Fe()
functions (remember that Fe includes time variation--in a linear system
it would be a Laplace transform). In many cases, the effect of Fe^-1(C)
is _not_ constant, so the relationship between DS and qo is measurable,
and perhaps strong, but not reliable (e.g. correlations may be 0.3 rather
than 0.95).

I think you're arguing verbally where you need a mathematical argument. It
is impossible for the experimenter to directly determine DS; any attempt to
force DS directly to have a particular value will break the control loop.

I don't see that. DS is _not_ qi. To say that fixing DS forces qi to a
particular value is the same as saying that fixing r forces e to a
particular value. qi = DS + Fe(o), remember?

Remember that DS is not literally a variable; it is a computed difference
between two variables, and can't just be set to a particular value without
arbitrarily setting the two variables (qi and qo) to specific values. But
if you force qi and qo to specific values, you don't have a control system
any more.

I don't suppose you remember your posting of a couple of days ago, about
how a function of two inputs and one output could have an infinity of
value pairs of its two inputs for any given value of its output?

This is why the variable d and the function Fd are needed. Using them is
the only way to apply a disturbance that leaves the control system still
working.

In practice, of course, you always have Fd. The argument is unchanged
by including it; the symbolism is just visually harder to follow.

Here we have a simple equation that not only describes the behavioural
illusion and gives a reason for it, but also suggests, without appealing
to the notion of "noise" or random variation, why S-R experimental
results can be erratic.

Cancel that, please; your reasoning is flawed.

Nothing you have said bears on the argument, so I suppose you have some
unstated reason for saying this.

This insight (for so I deem it, though it may be well known to Bill and
Rick) supplements another reason I had thought to be the main source of
variability in S-R studies.

It is not an insight; it is a mistake.

Well, if you know that, perhaps you could present a reason why. Is it
the assumption of linearity in the inverse of the environmental feedback
function? If so, then you could say "it is unproven except in the case
of a linear environmental feedback function" rather than "it is a mistake."
That would be more accurate, unless you have some reason to say it is a
mistake.

As I said, I see your point. But it still doesn't explain why Rick is
allowed to use the symbol "d" in P = o + d, whereas it seriously disturbs
some perception of yours when I use the same symbol with the same intent.

Because the real equation is

p = Fi(Fe(o) + Fd(d)),

where for pedagogical purposes, Fi, Fe, and Fd are selected to be unity
multipliers. The meaning of d in PCT is _always_, with no exceptions, a
remote variable that contributes to qi through some function Fd.

When you use the symbol d, you mean DS, the disturbing signal. That you're
talking about a different variable is not evident when you treat the
special case in which Fe, Fi, and Fd are all multipliers of 1. But when
they have another form, it is immediately clear that DS is not, in general,
the same as d.

That _still_ doesn't explain why it's all right for Rick to say p = o+d,
but all wrong for me to say it.

Martin

[From Bill Powers (980307.0603 MST)]

Martin Taylor 980307 1:08--

2. Your derivation of the inverse should be

Fe(o) = C - DS, or

o = Fe^-1(C - DS)

No. That's wrong. One applies the inverse function to both sides of the
equation, which is what I did: F^-1(F(x)) = x. The assumption that is
hidden in what I did was that F^-1(x + y) = F^-1(x) + F^-1(y), which
is a statement that F^-1 is linear.

Your derivation is correct only in the special case where Fe^-1(C - DS) =
Fe^-1(C) - Fe^-1(DS) -- in other words, where Fe^-1 is linear and
single-valued.

Linear, yes.

Linear AND single-valued. To say that the inverse of the input function is
multiple-valued is only to say that p is a function of several variables.

A reliable relationship is found, at least if Fe^-1(constant) is
constant. The experimenter thinks that a relationship has been found
within the individual. Is this not the "behavioural illusion?"

Yes.

In many cases, the effect of Fe^-1(C)
is _not_ constant, so the relationship between DS and qo is measurable,
and perhaps strong, but not reliable (e.g. correlations may be 0.3 rather
than 0.95).

Please explain what you mean by a relationship that is strong but not
reliable.

I think you're arguing verbally where you need a mathematical argument. It
is impossible for the experimenter to directly determine DS; any attempt to
force DS directly to have a particular value will break the control loop.

I don't see that. DS is _not_ qi. To say that fixing DS forces qi to a
particular value is the same as saying that fixing r forces e to a
particular value. qi = DS + Fe(o), remember?

You're right. The experimenter _can_ arbitrarily vary DS. What breaks the
control loop is assuming qi = constant. Both qi and qo are functions of DS.
Approximations should be made only AFTER all equations have been solved
exactly. Otherwise, you can arrive at spurious conclusions. In this case,
you conclude that o can vary when qi remains constant (and r is constant).

Remember that DS is not literally a variable; it is a computed difference
between two variables, and can't just be set to a particular value without
arbitrarily setting the two variables (qi and qo) to specific values. But
if you force qi and qo to specific values, you don't have a control system
any more.

I don't suppose you remember your posting of a couple of days ago, about
how a function of two inputs and one output could have an infinity of
value pairs of its two inputs for any given value of its output?

That has nothing to do with this case. The "two variables" were the
independent (of each other) input signals that were combined to produce the
perceptual signal. Here we have two variables that are in different places
in the same loop, and their relative values are set by the system equations.

Here we have a simple equation that not only describes the behavioural
illusion and gives a reason for it, but also suggests, without appealing
to the notion of "noise" or random variation, why S-R experimental
results can be erratic.

The equation is exactly the one I do use, and have always used, to describe
the behavioral illusion. I don't see why you think you're saying anything new.

Sorry, but your explanation for the "erratic" results in the absence of
noise or random variations doesn't make any sense. If the variations are
regular, and there is no noise, we can write equations that explain the
output exactly. Your simple equation doesn't explain why, in a mostly
proportional system, the output and disturbance still don't correlate
highly with the controlled variable even though they correlate highly with
each other.

It is not an insight; it is a mistake.

Well, if you know that, perhaps you could present a reason why. Is it
the assumption of linearity in the inverse of the environmental feedback
function?

It is the assumption of linearity AND the assumption that the inverse of
the input function is single-valued AND the premature assumption that qi is
a constant.

If so, then you could say "it is unproven except in the case
of a linear environmental feedback function" rather than "it is a mistake."
That would be more accurate, unless you have some reason to say it is a
mistake.

You're still assuming that the inverse of the perceptual input function is
single-valued, because you're still thinking in terms of a nonexistent CEV
in the environment. The only way for p to correspond in a single-valued way
with the CEV is for the CEV to have real existence in the environment. If p
= Fi(x1,x2, ... xn), then there is no CEV in the environment, but only x1,
x2, ... xn.

If you will look again at my little demo program of day before yesterday,
you will see that there is NO single entity in the environment that
corresponds to p. There is no CEV. What you call a CEV is simply a
perception in the observer who applies the same Fi to the same set
x1,x2,... xn.
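The point can be shown with a two-input perceptual function (a toy Fi of my own choosing, not the one in the demo program referred to above): many different environmental states (x1, x2) yield the same scalar p, so nothing in the environment corresponds one-to-one with the perception.

```python
# Toy perceptual input function: p = Fi(x1, x2) = x1 + 2*x2.
# Many (x1, x2) pairs in the environment give the same scalar p,
# so there is no single environmental entity that p maps back to.
def Fi(x1, x2):
    return x1 + 2 * x2

states = [(4.0, 0.0), (2.0, 1.0), (0.0, 2.0)]
print([Fi(x1, x2) for x1, x2 in states])  # [4.0, 4.0, 4.0]
```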

As I said, I see your point. But it still doesn't explain why Rick is
allowed to use the symbol "d" in P = o + d, whereas it seriously disturbs
some perception of yours when I use the same symbol with the same intent.

Because the real equation is

p = Fi(Fe(o) + Fd(d)),

where for pedagogical purposes, Fi, Fe, and Fd are selected to be unity
multipliers. The meaning of d in PCT is _always_, with no exceptions, a
remote variable that contributes to qi through some function Fd.

When you use the symbol d, you mean DS, the disturbing signal. That you're
talking about a different variable is not evident when you treat the
special case in which Fe, Fi, and Fd are all multipliers of 1. But when
they have another form, it is immediately clear that DS is not, in general,
the same as d.

That _still_ doesn't explain why it's all right for Rick to say p = o+d,
but all wrong for me to say it.

Because you don't mean a remote variable when you use the term d. What you
should write is p = o + DS. If Fd is a multiplier of 3, then p = o + 3*d,
and not p = o + d. If Fd is 3, then it would be wrong for Rick to write p =
o + d. But it would still be right for you to write p = o + DS. When Rick
says p = o + d, he specifically means that Fd = 1. When you say p = o + d,
you do not mean that.
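The distinction is trivial to check numerically (unity Fi assumed, and the multiplier 3 taken from the example above): with Fd(d) = 3*d, the disturbing signal DS that reaches qi is 3*d, not d.

```python
# Tiny numeric check: with Fd(d) = 3*d and Fe(o) = o (unity Fi),
# qi = Fe(o) + Fd(d) = o + 3*d, so DS = qi - Fe(o) = 3*d, not d.
def Fd(d): return 3 * d
def Fe(o): return o

d, o = 2.0, 5.0
qi = Fe(o) + Fd(d)
DS = qi - Fe(o)
print(qi, DS, d)  # 11.0 6.0 2.0 -> DS == 3*d != d
```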

Best,

Bill P.

[Martin Taylor 980307 14:47]

Bill Powers (980307.0603 MST)

Your derivation is correct only in the special case where Fe^-1(C - DS) =
Fe^-1(C) - Fe^-1(DS) -- in other words, where Fe^-1 is linear and
single-valued.

Linear, yes.

Linear AND single-valued. To say that the inverse of the input function is
multiple-valued is only to say that p is a function of several variables.

p is a function of qi, which is scalar. An observer external to the
control loop might observe several variables that could be combined
by some function to form qi, but it is qi that is the variable that
matters in the loop. How qi is formed is as irrelevant to the loop as
is the source of the disturbance(s).

A function whose result is multidimensional is not necessarily multiple
valued. A multiple-valued function is one in which the same set of input
values gives rise to more than one set of output values. If the
dimensionality of the input is lower than that of the result, then the
function is necessarily multiple-valued, but if the dimensionality of
the input is at least as high as that of the result, the function may
or may not be multiple-valued.
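For instance (my own example, using a simple quadratic): a scalar-in, scalar-out function that is non-monotonic has a multiple-valued inverse.

```python
# Fe(o) = o**2 is scalar -> scalar but non-monotonic, so Fe^-1 is
# multiple-valued: both o = 2 and o = -2 produce Fe(o) = 4, and
# Fe^-1(4) cannot pick between them.
def Fe(o):
    return o * o

print(Fe(2.0), Fe(-2.0))  # 4.0 4.0
```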

In the case in question, the function Fe(o), which yields the influence
of the output on the CCEV, has a scalar input and a scalar result.
Nevertheless, Fe^-1() would be multiple-valued if Fe() was non-monotonic
(as in the real world it often is). Does that matter? Yes it does, and I
discussed how, in the context of Fp^-1().

A reliable relationship is found, at least if Fe^-1(constant) is
constant. The experimenter thinks that a relationship has been found
within the individual. Is this not the "behavioural illusion?"

Yes.

In many cases, the effect of Fe^-1(C)
is _not_ constant, so the relationship between DS and qo is measurable,
and perhaps strong, but not reliable (e.g. correlations may be 0.3 rather
than 0.95).

Please explain what you mean by a relationship that is strong but not
reliable.

More or less what you have frequently talked about. "Reliable" means you
can be pretty sure it will be there every time. "Strong" means it is there
a lot of the time. Here, I'm looking at what an S-R psychologist would
be likely to see and to report.

I think you're arguing verbally where you need a mathematical argument. It
is impossible for the experimenter to directly determine DS; any attempt to
force DS directly to have a particular value will break the control loop.

I don't see that. DS is _not_ qi. To say that fixing DS forces qi to a
particular value is the same as saying that fixing r forces e to a
particular value. qi = DS + Fe(o), remember?

You're right. The experimenter _can_ arbitrarily vary DS. What breaks the
control loop is assuming qi = constant. Both qi and qo are functions of DS.
Approximations should be made only AFTER all equations have been solved
exactly. Otherwise, you can arrive at spurious conclusions. In this case,
you conclude that o can vary when qi remains constant (and r is constant).

Indeed, o _must_ vary if qi and r are constant and DS varies. Likewise,
o must vary if qi is nearly constant and r is constant and DS varies.
You aren't saying anything that influences the argument.

Remember that DS is not literally a variable; it is a computed difference
between two variables, and can't just be set to a particular value without
arbitrarily setting the two variables (qi and qo) to specific values. But
if you force qi and qo to specific values, you don't have a control system
any more.

I don't suppose you remember your posting of a couple of days ago, about
how a function of two inputs and one output could have an infinity of
value pairs of its two inputs for any given value of its output?

That has nothing to do with this case. The "two variables" were the
independent (of each other) input signals that were combined to produce the
perceptual signal. Here we have two variables that are in different places
in the same loop, and their relative values are set by the system equations.

Here we have two variables, o and DS, each of which is affected by the
actions of a different control system. One control system attempts to
set DS to a desired value, the other attempts to set qi to a desired
value. Both can achieve their goal, since one can affect qi by varying o,
and the other has no disturbance to affect the setting of DS.

Incidentally, I _love_ the way you go from a personal assertion to
"Remember that..."

Here we have a simple equation that not only describes the behavioural
illusion and gives a reason for it, but also suggests, without appealing
to the notion of "noise" or random variation, why S-R experimental
results can be erratic.

The equation is exactly the one I do use, and have always used, to describe
the behavioral illusion. I don't see why you think you're saying anything new.

Why did you think the reference to the behavioural illusion was supposed
to be new? But I wasn't aware that you used this equation to show why
S-R experimental results can be erratic.

Sorry, but your explanation for the "erratic" results in the absence of
noise or random variations doesn't make any sense.

Oh, you aren't.

If the variations are
regular, and there is no noise, we can write equations that explain the
output exactly.

Indeed we can.

Your simple equation doesn't explain why, in a mostly
proportional system, the output and disturbance still don't correlate
highly with the controlled variable even though they correlate highly with
each other.

"A mostly proportional system" is key, here. We almost never talk about
mostly proportional systems. Usually we talk about systems in which the
output function is mostly an integrator. Even when it isn't, the effect
of the output on qi usually is, as when the output is a force and the
controlled perception is of a position.

I don't know how the correlations work with a mostly proportional system.
Do you have demo examples?

I do know that in a system that is nearly an integrator between the error
signal and qi, the correlation between perception and DS (or o) must
be low when the loop control ratio is high (and I've been preparing a
message about why this is).

It is not an insight; it is a mistake.

Well, if you know that, perhaps you could present a reason why. Is it
the assumption of linearity in the inverse of the environmental feedback
function?

It is the assumption of linearity AND the assumption that the inverse of
the input function is single-valued AND the premature assumption that qi is
a constant.

What I think you are saying is that my original claim

+Here we have a simple equation that not only describes the behavioural
+illusion and gives a reason for it, but also suggests, without appealing
+to the notion of "noise" or random variation, why S-R experimental
+results can be erratic.

is true if the control loop is the simplest possible one with an
integrator output function, but is not true if the loop is more complicated.

In other words, the relation between stimulus and response would be
reliable in S-R experiments in which (unknown to the S-R experimenter)
the control loop has non-linear, multiple-valued functions, even though
it is unreliable for the simple control loop to which the analysis
applies rigorously.

You could be right, I suppose.

Let's rephrase the claim.

An S-R experiment in which the "stimulus" is a disturbance to the
controlled perception in a simple control loop with linear components
and an output or environmental feedback function that includes an integrator
will give erratic results. It is not known whether erratic results will be
obtained if the control system is more complicated.

How's that?

Martin

[From Bill Powers (980308.0107 MST)]

Martin Taylor 980307 14:47 --

Linear AND single-valued. To say that the inverse of the input function is
multiple-valued is only to say that p is a function of several variables.

p is a function of qi, which is scalar.

I can see that we really need to thresh this out. qi is nothing but a
convenience, which allows the observer to label HIS OWN perception of what
is presumed to be under control. There is no literal qi in the environment
where it could be measured. It is p that is the scalar; the actual input to
the perceptual input function is a set of lower-level variables, not a
single variable.

An observer external to the
control loop might observe several variables that could be combined
by some function to form qi, but it is qi that is the variable that
matters in the loop. How qi is formed is as irrelevant to the loop as
is the source of the disturbance(s).

This is where all the confusion is coming from. THERE IS NO QI IN THE
ENVIRONMENT. Instead, there are multiple inputs to a perceptual function,
which yield a scalar output, the perceptual signal. When we, as observers,
apply a similar perceptual input function to the same environment, we see a
qi there, just as the control system does. But all that is actually there
is the set of input variables, variously affected by the system's output
and independent disturbing variables.

I don't know how the correlations work with a mostly proportional system.
Do you have demo examples?

Yes, all our tracking experiments, when modeled the most exactly, involve a
leaky-integrator output function that behaves nearly proportionally when
the disturbance varies slowly. Given a very easy disturbance (low
frequencies only), a person can control exceedingly well -- three nines
correlation between d and o. In that case, the correlation of disturbance
with qi, and qi with output, can be less than 0.1, and is often negative.
In other words, it averages close to zero.
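A minimal version of this tracking situation can be simulated (my own toy loop and parameters: unity Fe and Fd, and a pure integrator rather than the leaky integrator used in the real models): with a slow sinusoidal disturbance, d and o correlate almost perfectly (negatively), while d and qi barely correlate at all.

```python
import math

# Toy tracking loop with unity Fe and Fd, so qi = o + d and DS = d.
# A pure-integrator controller keeps qi near r = 0 against a slow
# sinusoidal disturbance. (Illustrative parameters of my choosing.)
dt, g, r = 0.01, 50.0, 0.0
o = 0.0
ds, qis, os = [], [], []
for i in range(20000):                      # 200 s = 10 disturbance cycles
    d = math.sin(2 * math.pi * 0.05 * i * dt)
    qi = o + d
    o += g * (r - qi) * dt                  # integrating output function
    ds.append(d); qis.append(qi); os.append(o)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

print(round(corr(ds, os), 3))   # close to -1: output mirrors disturbance
print(round(corr(ds, qis), 3))  # close to 0: disturbance barely predicts qi
```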

I do know that in a system that is nearly an integrator between the error
signal and qi, the correlation between perception and DS (or o) must
be low when the loop control ratio is high (and I've been preparing a
message about why this is).

Never mind -- the correlation between a sine and a cosine is zero, so
naturally a pure integrator leads to a low correlation between output and
the cv. But if that were the right explanation, there would be a high
correlation between do/dt and qi. There is not a high correlation between
those measures. The easily-observable explanation for the low correlation
is the presence of random-looking fluctuations that are unrelated to either
disturbance or output fluctuations. The "system noise" explanation is the
correct one, particularly in cases where the disturbance variations are
very slow and the response of the system is essentially proportional.
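The do/dt check is easy to run in a noise-free simulation (again a toy loop with parameters of my own choosing): with a pure-integrator output, do/dt is by construction proportional to (r - qi), so without system noise the two correlate perfectly. The low correlation seen in real tracking data is therefore evidence for the noise explanation.

```python
import math

# Noise-free toy loop: o' = g*(r - qi), qi = o + d. Since do/dt is an
# exact affine function of qi (r constant), corr(do/dt, qi) = -1 here;
# real data falling short of that implicates system noise.
dt, g, r, o = 0.01, 50.0, 0.0, 0.0
qis, dodts = [], []
for i in range(20000):
    d = math.sin(2 * math.pi * 0.05 * i * dt)
    qi = o + d
    dodt = g * (r - qi)
    o += dodt * dt
    qis.append(qi)
    dodts.append(dodt)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

print(round(corr(dodts, qis), 6))  # -1.0 in the noiseless loop
```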

Best,

Bill P.