Disturbances

Suppose that Person A lives and works in what is for him a disturbance-laden
environment.

Suppose that Person B lives and works in what is for her, by comparison, a
disturbance-free environment.

It seems to me that large portions of Person A's time and energy would go
into countering or combating disturbances whereas Person B would not have
such demands made of her.

Person A, then, is caught up in never-ending conflict with his environment
and probably does well simply to hold his ground whilst Person B is free to
focus on new goals and end states (i.e., perceptions to be controlled).

Two questions:

  1 - Have such notions been discussed previously on this list?

  2 - If so, are they in the archives or the literature somewhere?

Thanks...

Regards,

Fred Nickols
The Distance Consulting Company
nickols@worldnet.att.net
http://home.att.net/~nickols/distance.html

[From Bruce Gregory (980113.1204 EST)]

Rick Marken (980113.0800)

When there is conflict, the outputs that would "counter" the effect
of a disturbance to a controlled variable lead to _increases_
in that disturbance. In ordinary disturbance resistance, disturbance
countering output counters disturbance; in conflict situations,
disturbance countering output _increases_ disturbance.

Ah ha! Dawns a light....

Bruce

[From Rick Marken (980113.0800)]

Fred Nickols (980113) --

Suppose that Person A lives and works in what is for him a
disturbance-laden environment.

Suppose that Person B lives and works in what is for her, by
comparison, a disturbance-free environment.

It seems to me that large portions of Person A's time and energy
would go into countering or combating disturbances whereas
Person B would not have such demands made of her.

"Time and energy" are not just required to counter disturbances;
they are also required to make perceptions vary to match varying
references (as when I make the perception of my hand match different
references for chord positions on the guitar); they are also required
to make dynamic perceptions match fixed references (as when I make
my hand move in a circle at a fixed speed; dynamic perception of
motion matches fixed reference for speed).

So you can't compare the "time and energy" involved in controlling
just by comparing the environments in which this controlling is
done. The "time and energy" involved in controlling depends on
_both_ the environment (amplitude of prevailing disturbances to
controlled variables) in which perceptions are controlled _and_ on
what the person wants to do with those perceptions (how much the
person varies the references for controlled perceptions).
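
A quick Python sketch of that point (hypothetical signals; "effort" here
is just the mean squared output, a crude stand-in for time and energy):

import math

def effort(reference, disturbance, steps=2000, gain=30.0, dt=0.01):
    """Mean squared output of a simple integrating controller whose
    controlled quantity is qi = o + d, tracking the given reference."""
    o, total = 0.0, 0.0
    for t in range(steps):
        qi = o + disturbance(t * dt)
        o += gain * (reference(t * dt) - qi) * dt
        total += o * o
    return total / steps

flat = lambda t: 0.0
wavy = lambda t: 2.0 * math.sin(t)      # a varying reference (or disturbance)

print("big disturbance, fixed goal :", round(effort(flat, wavy), 2))
print("no disturbance, varying goal:", round(effort(wavy, flat), 2))
print("no disturbance, fixed goal  :", round(effort(flat, flat), 2))
# Output (hence "effort") is needed either to counter a varying disturbance
# or to chase a varying reference; only the last case is effort-free.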

Person A, then, is caught up in never-ending conflict with his
environment

Disturbance resistance is not conflict. In disturbance resistance,
the output of the control system has no effect on the disturbance
itself; the output only affects the controlled variable. If the
effect of the output on the controlled variable is equal and
opposite to the effect of the disturbance, then the disturbance is
"countered". When there is conflict, the output of the control system
_does_ have an effect on the disturbance itself (via its effect on
the controlled variable, because the disturbance is the output of a
control system that is controlling the same variable).

When there is conflict, the outputs that would "counter" the effect
of a disturbance to a controlled variable lead to _increases_
in that disturbance. In ordinary disturbance resistance, disturbance
countering output counters disturbance; in conflict situations,
disturbance countering output _increases_ disturbance.
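
Here is a rough numerical sketch of the difference (the gains, references
and disturbance values are made up for illustration; this is not one of
our demo programs). In the first case the output converges and cancels an
independent disturbance; in the second case the "disturbance" is another
control system's output, so the harder each system pushes, the bigger the
disturbance the other one sees:

def step(o, qi, r, gain=50.0, dt=0.01):
    """One iteration of a simple integrating control system (p = qi)."""
    e = r - qi                     # error
    return o + gain * e * dt       # integrate error into output

# Case 1: ordinary disturbance resistance -- d is an independent variable.
o, d, r = 0.0, 5.0, 0.0            # constant disturbance of 5 units
for _ in range(1000):
    qi = o + d                     # controlled quantity
    o = step(o, qi, r)
print("resistance: qi ~", round(o + d, 2), " output ~", round(o, 2))
# Output settles near -5, cancelling d; qi sits near its reference (0).

# Case 2: conflict -- two systems control the same qi, different references.
o1, o2, r1, r2 = 0.0, 0.0, 0.0, 10.0
for _ in range(1000):
    qi = o1 + o2                   # each system's output disturbs the other
    o1 = step(o1, qi, r1)
    o2 = step(o2, qi, r2)
print("conflict: qi ~", round(o1 + o2, 2),
      " o1 ~", round(o1, 1), " o2 ~", round(o2, 1))
# qi ends up between the two references while o1 and o2 grow without bound
# in opposite directions: each system's disturbance-countering output just
# increases the disturbance the other system has to counter.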

and probably does well simply to hold his ground

"Holding ground" is really just "controlling". If person A is
controlling in an environment where he is working at the limits of
his ability to exert control (at the limits of his ability to
generate outputs that can counter disturbances) then he is doing
well to just keep his perceptions under control at all; he doesn't
have any output capability left to bring those perceptions to new
reference states. For example, if you are driving a car in a
heavy wind then you are lucky to be able to keep it on the road at
all, let alone in a particular lane; you certainly can't change
lanes as accurately as you can when there is no wind.

whilst Person B is free to focus on new goals and end states
(i.e., perceptions to be controlled).

Yes. Person B can control for different end states (like different
lanes in the road, for example) and even different goals (reading
the road signs, say) better than can Person A.

Two questions:

  1 - Have such notions been discussed previously on this list?

In some form or another, I'm sure they have.

  2 - If so, are they in the archives or the literature somewhere?

Is there an archivist in the house?

Best

Rick

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Rick Marken (970428.1300)]

Bear with me, Ellery;-)

Martin Taylor (970424 09:40) --

Rick is talking about the influence that one or more sources of
disturbance has on a complex controlled environmental variable
(CCEV) defined by the Perceptual Input Function (PIF) of an
Elementary Control Unit (ECU).

No. I was talking about what a disturbance is. And could you please stop
using those goofy abbreviations. I feel like I'm at a meeting
of the Joint Chiefs.

Me:

A disturbance doesn't _contaminate_ a perception; it influences
where the perception sits on its perceptual dimension. A
disturbance doesn't come and go; it's always there. It just
_varies_

Martin:

However, the sources of disturbing influence may come and go.
Lights may be physically removed, loudspeakers taken away.

You missed my point. A light is a disturbance (variable) if it
contributes to the value of a perceptual variable. Removing the
light (turning it off) just changes the value of the disturbance (to 0).
The disturbance is the amount of light from the light source; it is a
variable (d). This variable can take on a range of possible values
(including 0).

The influence of this variable on the controlled perception depends
on (among other things) the nature of the function relating d to the
controlled variable. If the controlled variable is the ambient intensity
of light on the retina then the function relating d to the controlled
variable might be a constant of proportionality: p = kd.
If the controlled variable is a spatial gradient then the function
relating d to the controlled variable might be more complex -- perhaps p
= tan (d). The disturbances I was talking about are environmental
variables; these variables always have some value (including 0).

Before attempting "the Test" an observer cannot do more than guess
what variables another person is controlling

Righto!

"The Test" uses _sources_ of disturbance perceptible by the
experimenter/observer, anticipating that variations in those
sources will induce variations in the true CEV.

Righto!

The experimenter has to deduce the effect of the source variations
on the supposed controlled environmental variable.

Wrongo! The experimenter has to _observe_ the effect of these
disturbances on the supposed controlled variable. This is a VERY crucial
point to understand, Martin.

Bill P. says that it is because the external observer/experimenter
can see only the sources that the word "disturbance" must be
restricted to them, rather than to their influence on the
controlled environmental variable.

Wrongo! Bill makes this distinction because he is talking about two
different things. In the tracking task (for example) the _disturbance_
is the sequence of numbers generated by the computer. The _disturbing
influence_ of these numbers on the cursor depends on what is done to
each number before it is added to cursor position. Typically, we do
nothing to these numbers so the disturbing influence is the same as the
disturbance. But, if we squared the numbers before adding them to the
cursor then the disturbing influence is d^2 rather than d.
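
A few lines of Python (hypothetical numbers, not the actual tracking
program) may help keep the bookkeeping straight:

disturbance = [0.0, 1.0, -2.0, 3.0]   # numbers generated by the computer (d)
output = 0.0                          # handle contribution, held at zero here

# Usual case: nothing is done to d, so the disturbing influence equals d.
cursor_plain = [output + d for d in disturbance]

# Squared case: the disturbing influence is d**2, not d itself.
cursor_squared = [output + d ** 2 for d in disturbance]

print(cursor_plain)     # [0.0, 1.0, -2.0, 3.0]
print(cursor_squared)   # [0.0, 1.0, 4.0, 9.0]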

In Bill P's usage, there may be many "disturbances". But there
can be only one value of the "disturbance signal."

I don't think this is correct. When Bill says "disturbance" I think he
is referring to a variable -- like the sequence of numbers in the
computer. I thought your "disturbance signal" referred to the same
thing; that is "disturbance signal" = "disturbance variable". If so,
then there can be many "disturbance signals" just as there can be
many disturbance variables that influence a controlled variable.

If, however, your "disturbance signal" refers to the _influence_ of
disturbance variable(s) on a controlled variable (that is, if
"disturbance signal" refers to f(d)) then, indeed, while there may be
many disturbance variables (d1, d2...dn) that influence a controlled
variable, there is only one net _influence_ of these variables on the
controlled variable. That is, if qi = f1(d1)+f2(d2)...+fn(dn)+g(o) then
the net _influence_ of all disturbance variables on qi is the
"disturbance signal" (call it ds) which is ds = f1(d1)+f2(d2)...+fn(dn).

Best

Rick

[Martin Taylor 970428 17:45]

Rick Marken (970428.1300)]

Martin Taylor (970424 09:40) --

Rick is talking about the influence that one or more sources of
disturbance has on a complex controlled environmental variable
(CCEV) defined by the Perceptual Input Function (PIF) of an
Elementary Control Unit (ECU).

No. I was talking about what a disturbance is.

Yes, I think I even used the words: "An excellent description of what a
disturbance does and is."

And could you please stop
using those goofy abbreviations. I feel like I'm at a meeting
of the Joint Chiefs.

Why now, Rick? We've been using the same abbreviations for years without
objection. I define them so that new readers can understand, not for your
benefit, because you are used to them by now. CCEV is much easier for writer
and reader than constant repetition of "Controlled Complex Environmental
Variable," provided it's defined once.

Martin:

However, the sources of disturbing influence may come and go.
Lights may be physically removed, loudspeakers taken away.

You missed my point. A light is a disturbance (variable) if it
contributes to the value of a perceptual variable. Removing the
light (turning it off) just changes the value of the disturbance (to 0).
The disturbance is the amount of light from the light source; it is a
variable (d). This variable can take on a range of possible values
(including 0).

Yes. That's what I said. I'm not clear what point it is that I missed.
The disturbing variable can go away, but the disturbance signal does not.

I wish you would take to heart Bill's admonition to use "disturbing
signal" for the influence that is always there (possibly with magnitude
zero), and leave "disturbance" for the sources of that signal. At least
when the difference between these things is the issue under discussion;-)

The experimenter has to deduce the effect of the source variations
on the supposed controlled environmental variable.

Wrongo! The experimenter has to _observe_ the effect of these
disturbances on the supposed controlled variable. This is a VERY crucial
point to understand, Martin.

How on Earth is the experimenter supposed to _observe_ the effect of
the disturbances on something that is a function of the _subject's_
perceptual input function? The experimenter can _observe_ only the
effect on his/her OWN perceptions. This is an EXTREMELY crucial point
to understand, Rick.

Bill P. says that it is because the external observer/experimenter
can see only the sources that the word "disturbance" must be
restricted to them, rather than to their influence on the
controlled environmental variable.

Wrongo! Bill makes this distinction because he is talking about two
different things. In the tracking task (for example) the _disturbance_
is the sequence of numbers generated by the computer. The _disturbing
influence_ of these numbers on the cursor depends on what is done to
each number before it is added to cursor position.

Of course. But I think you should ask Bill whether he said that the
experimenter can or can't see the influence on the subject's controlled
environmental variable. The fact is that the experimenter can't, whatever
Bill said. But I think he did say that. It's a subtle point, but quite
important. The experimenter can see only through the experimenter's
sensor systems. He can only deduce what goes into the subject's
sensor systems, and therefore can only deduce the influence of the
disturbance on the subject's controlled environmental variable.

I don't think this is correct. When Bill says "disturbance" I think he
is referring to a variable -- like the sequence of numbers in the
computer. I thought your "disturbance signal" referred to the same
thing; that is "disturbance signal" = "disturbance variable".

Oh dear. I'm sorry I'm responding to this message of yours, because it is
quite clear that you haven't read either my previous messages OR Bill's
responses to them. So there's really no point in my sending this one
either, is there?

But, just in case...

The control loop has two input signals and two output signals. The two
input signals are called "reference signal" and "disturbance signal" (the
latter name by recent agreement between Bill P and me, more zealously
enforced by him than by me). The outputs are called "perceptual signal"
and "output signal", to complete the list. All of these "signals" are
time-varying scalar waveforms, as are all the signals _within_ the loop.

That is, if qi = f1(d1)+f2(d2)...+fn(dn)+g(o) then
the net _influence_ of all disturbance variables on qi is the
"disturbance signal" (call it ds) which is ds = f1(d1)+f2(d2)...+fn(dn).

You got it. It's what you usually call "d" except when you are trying to
find a way to demonstrate that I know nothing about PCT.

Martin

[From Rick Marken (970428.1555)]

Me:

And could you please stop using those goofy abbreviations.

Martin Taylor (970428 17:45) --

Why now, Rick?

The error finally got big enough.

Martin:

The experimenter has to deduce the effect of the source variations
on the supposed controlled environmental variable.

Me:

Wrongo! The experimenter has to _observe_ the effect of these
disturbances on the supposed controlled variable.

Martin:

How on Earth is the experimenter supposed to _observe_ the effect
of the disturbances on something that is a function of the
_subject's_ perceptual input function?

Try my "Test for the Controlled Variable" demo and see how easy it
is. Remember, the experimenter can perceive the hypothetical controlled
variable too -- though not necessarily as the subject perceives it. An
experimenter can perceive the echo-pattern controlled by a bat even
though she can't perceive it as the bat does.

The experimenter can _observe_ only the effect on his/her OWN
perceptions. This is an EXTREMELY crucial point to understand, Rick.

Yes, indeed. Do you understand it?

But I think you should ask Bill whether he said that the
experimenter can or can't see the influence on the subject's
controlled environmental variable. The fact is that the
experimenter can't, whatever Bill said.

Take the Three Squares (Test for the Controlled Variable) demo and call
me in the morning.

Me:

That is, if qi = f1(d1)+f2(d2)...+fn(dn)+g(o) then the net
_influence_ of all disturbance variables on qi is the "disturbance
signal" (call it ds) which is ds = f1(d1)+f2(d2)...+fn(dn).

Martin:

You got it. It's what you usually call "d"

ds is _not_ what I usually call d; d is an environmental variable;
ds is the influence of that variable (or many such variables) on
a controlled variable.

Best

Rick

[From Bill Powers (96042.1826 MST)]

Rick Marken (970428.1555)--

Just a little addendum:

Martin:

The experimenter has to deduce the effect of the source variations
on the supposed controlled environmental variable.

Me[Rick]:

Wrongo! The experimenter has to _observe_ the effect of these
disturbances on the supposed controlled variable.

Since the controlled variable, the output quantity, and the disturbing
variable are physical variables or functions thereof, to determine the
effect of the disturbing variable on the controlled variable, all one has to
do is remove the putative control system, vary the disturbing variable, and
observe what happens to the controlled variable.

In the rubber band experiment, send the person playing the controller out
for coffee, stick a tack through his end of the rubber bands into the
tabletop, and record how the knot moves as the experimenter's end is moved
around. If it's a nonlinear environment you might have to try several
positions for the controller's end.
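
If you want to see what that measurement amounts to, here is a little
Python sketch. The physics is idealized -- two identical linear rubber
bands, both stretched, massless knot -- so the knot sits at the midpoint
of the two held ends; that simplification is mine, not a measurement:

tack = 0.0                            # controller's end, tacked to the table

def knot_position(experimenter_end, controller_end=tack):
    """Knot position for two identical linear bands (idealized physics)."""
    return 0.5 * (experimenter_end + controller_end)

for x in [2.0, 4.0, 6.0, 8.0]:        # positions of the experimenter's end
    print(x, "->", knot_position(x))  # 2.0 -> 1.0, 4.0 -> 2.0, ...
# With no control system present, the environmental link from the disturbing
# variable (experimenter's end) to the controlled variable (knot) shows up
# directly: here a slope of one half.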

Best,

Bill P.

[Martin Taylor 970428 23:30]

Rick Marken (970428.1555)]

It's sooo tempting, isn't it:-)

Me:

That is, if qi = f1(d1)+f2(d2)...+fn(dn)+g(o) then the net
_influence_ of all disturbance variables on qi is the "disturbance
signal" (call it ds) which is ds = f1(d1)+f2(d2)...+fn(dn).

Martin:

You got it. It's what you usually call "d"

ds is _not_ what I usually call d; d is an environmental variable;
ds is the influence of that variable (or many such variables) on
a controlled variable.

That's why you never add "d" to "o" in computing the influence on qi?

Martin

[From Bill Powers (970428.2215 MST)]

Martin Taylor 970428 23:30--

Marken:

ds is _not_ what I usually call d; d is an environmental variable;
ds is the influence of that variable (or many such variables) on
a controlled variable.

Taylor:

That's why you never add "d" to "o" in computing the influence on qi?

For heaven's sake, Martin, wake up. If you have a disturbing variable d, and
it affects the controlled quantity through a multiplier of 1, its effect on
qi is also going to be d. That is exactly what Rick means when he writes "o
+ d". It's just a convenience that we use when we don't want to get into
complications like saying d^3/2 + log(o). It's simplest to talk about
systems with simple forms of Fd and Fe. The only explanation I can think of
for your obtuseness on this point is that you're getting a kick out of
teasing Rick.

Best,

Bill P.

[Martin Taylor 970429 10:15]

Bill Powers (970428.2215 MST)]

Martin Taylor 970428 23:30--

Marken:

ds is _not_ what I usually call d; d is an environmental variable;
ds is the influence of that variable (or many such variables) on
a controlled variable.

Taylor:

That's why you never add "d" to "o" in computing the influence on qi?

For heaven's sake, Martin, wake up. If you have a disturbing variable d, and
it affects the controlled quantity through a multiplier of 1, its effect on
qi is also going to be d. That is exactly what Rick means when he writes "o
+ d". It's just a convenience that we use when we don't want to get into
complications like saying d^3/2 + log(o). It's simplest to talk about
systems with simple forms of Fd and Fe. The only explanation I can think of
for your obtuseness on this point is that you're getting a kick out of
teasing Rick.

Oh dear. My secret is discovered. Yes, I do get a kick out of teasing Rick.
Perhaps it's too easy and it wastes net bandwidth, so maybe I'll cease and
desist for a while. Sorry if it bothered people (other than Rick).

However, there is a serious point behind the teasing. For all that numerically
the disturbance variable equals the disturbance signal when the transfer
function is the unity function, _conceptually_ they are very different.
What is added to the influence of the control system output to form the
total influence on qi is the disturbance _signal_, not the disturbance
variable. And that is the variable to which I gave the label "d" in my
little diagram of control loop inputs and outputs, using the same label
in the same place in the formula qi = f(o+d) as Rick would have used.

I like your use of qd for the disturbing variable value. I like the use
of "d" in the formula qi = f(o+d). I like the convention "d==Fd(qd)".
But I don't like the confusion of the disturbance signal with the
disturbing variable that can occur so easily when you say "d = qd when
Fd is the unity function, and that justifies the use of 'd' when I mean
'qd'." You've spent years trying to convince people that I have this
mental confusion when I never have had it. I don't see why you shouldn't
be careful to ensure other people don't get confused.

I don't doubt that you and Rick both know the difference between the
disturbing variable, the disturbance transfer function, the disturbing
signal, and the disturbing signal value. But to press the point that the
external observer can see only the disturbing variable and not the
disturbing signal, and at the same time to use the same label for both--
that could mislead the less skilled in PCT.

Furthermore, Rick uses the _word_ "disturbance" to mean both "disturbance
signal" and "disturbance source", and you don't criticize him for it. But
you do criticize me for doing that even once in a posting in which I have
tried to be specific on each occasion. And you spent a whole message to
criticize me for a message clarifying Rick's dual usages, when you
should (from my viewpoint) have posted a message of your own to make
that clarification if you thought my interpretation of what Rick had
said was wrong.

You wrote a fine message describing what you saw as the nub of this issue.
I agreed with every word, and yet Rick saw the necessity of quoting it
back at me as something I should learn and understand! Do you wonder I
get a little kick out of teasing him?

Martin

[From Bill Powers (970429.0923 MST)]

Martin Taylor 970429 10:15--

Oh dear. My secret is discovered. Yes, I do get a kick out of teasing Rick.

Thought so.

Furthermore, Rick uses the _word_ "disturbance" to mean both "disturbance
signal" and "disturbance source", and you don't criticize him for it. But
you do criticize me for doing that even once in a posting in which I have
tried to be specific on each occasion. And you spent a whole message to
criticize me for a message clarifying Rick's dual usages, when you
should (from my viewpoint) have posted a message of your own to make
that clarification if you thought my interpretation of what Rick had
said was wrong.

The problem here is once again the default meaning you give to disturbance.
If you automatically think "disturbing quantity", there is no confusion when
you use the unity multiplier and speak as if the disturbance signal is the
same as the disturbing quantity -- because it is.

I noticed this in a post of yours yesterday: you use "disturbing influence"
to mean "disturbing quantity." That is why you think Rick misused the term,
and why he thinks you are still misusing it. Perhaps you missed my post in
which I explained that "influence" can mean EITHER THE CAUSE OR THE EFFECT.
We can say either "The moon influences the tides," meaning that the
influence is the cause, or "The tides represent the influence of the moon,"
meaning that the influence we see is the effect. I realize that the
dictionary gives only the first meaning, but people often use the term in
the other sense.

Perhaps the problem is that "influence" means a "flowing in" of some
mysterious substance (the old meaning), so it actually lies _between_ cause
and effect. Thus it can be associated either with the source or the
destination. It has the same status as "force", a term that has led to
endless arguments because forces do not exist as separately measurable
quantities.

If we use qd to mean "disturbance" or "disturbing quantity" and "ds" (or
"sd") to mean "disturbing signal," we will at least avoid confusion in the
future. And I suppose that Rick and I are going to have to say Fd(qd) when
we mean the part of qi that is due to the disturbance, and explain that Fd
and Fe are unity multipliers when we write (o + d). And, of course, that o
and d are short for qo and qd -- separately measurable quantities.

Best,

Bill P.

[Martin Taylor 970429 11:50]

Bill Powers (96042.1826 MST)]

Sorry if this dead horse has been beaten into a pulp in a hole in the
ground...

Rick Marken (970428.1555)--

Just a little addendum:

>Martin:

>> The experimenter has to deduce the effect of the source variations
>> on the supposed controlled environmental variable.

>Me[Rick]:
>
>>Wrongo! The experimenter has to _observe_ the effect of these
>>disturbances on the supposed controlled variable.

Since the controlled variable, the output quantity, and the disturbing
variable are physical variables or functions thereof, to determine the
effect of the disturbing variable on the controlled variable, all one has to
do is remove the putative control system, vary the disturbing variable, and
observe what happens to the controlled variable.

Rick's comment was to show that I was wrong in saying that the experimenter
had to deduce the effect of the disturbances on the supposed controlled
variable, and that the experimenter actually _observed_ the effect. In
a way, he's right, because the "supposed" environmental variable is a
supposition inside the experimenter, and what is observed actually has
nothing to do with what the subject perceives. My original comment was
from the viewpoint that the experimenter was interested in deducing
what the subject was controlling for--i.e. determining the perceptual
function that defines the subject's controlled environmental variable.
And that, the experimenter has to _deduce_, not observe.

The controlled _environmental_ variable is a function of physical variables,
for sure (provided there's no contribution from imagination; we are
not talking about those situations, but it's best to avoid possible
red-herring attacks these days:-).

The controlled _variable_ is a perception inside the subject. The function
that produces this perception from the physical variables defines the
controlled _environmental_ variable. OK so far?

The experimenter, observing the physical variables that enter into the
subject's controlled environmental variable, can form a perception using
those same physical quantities, but the _function_ that produces the
perception is in the experimenter. The experimenter's environmental
variable may be a function of the same physical quantities as is the
subject's controlled environmental variable, but the experimenter cannot
know that the _function_ that defines the subject's controlled environmental
variable is the same as the _function_ that defines the environmental
correlate of the experimenter's perception.

The objective of "the Test" is to discover the subject's controlled
environmental variable. The experimenter can manipulate those physical
variables believed to enter into the subject's controlled perceptual
signal--that therefore are elements of the subject's controlled
environmental variable. The experimenter does _not_ observe "what happens
to the controlled variable." The experimenter observes what happens to
his/her _own_ environmental variable using those same physical variables.
The experimenter's hypothesized function may be identical to that of
the subject, highly correlated with that of the subject but different,
moderately correlated with that of the subject, or just "wrong."

In the rubber band experiment, send the person playing the controller out
for coffee, stick a tack through his end of the rubber bands into the
tabletop, and record how the knot moves as the experimenter's end is moved
around. If it's a nonlinear environment you might have to try several
positions for the controller's end.

All of which is beside the point. What it would do is to determine the
degree of control the subject has on the value of the function the
experimenter hypothesizes to be the subject's controlled environmental
variable. If the experimenter's function is close to that of the subject,
and the subject is controlling poorly, the result will be the same as
if the experimenter's hypothesis is not very close to the subject's actual
function, and the subject is controlling well. The best the experimenter
can do is make a lot of hypotheses, by either parametric or structural
variation, and see which shows the best control. Of the hypotheses, the
one showing best control is probably the closest to the subject's actual
controlled environmental variable.
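
As a rough illustration of what "see which shows the best control" can
look like in practice, here is a made-up Python simulation (not data from
any real experiment): the simulated subject controls x + 2y, and the
experimenter scores three candidate functions by how little each varies,
relative to a run with the control loop opened, under the same disturbances.

import random

def run(control=True, steps=2000, gain=30.0, dt=0.01):
    random.seed(1)                   # same disturbance series in both runs
    o, xs, ys = 0.0, [], []
    dx = dy = 0.0
    for _ in range(steps):
        dx += random.gauss(0, 0.05)  # slowly drifting disturbances
        dy += random.gauss(0, 0.05)
        x, y = dx, dy + o            # the output acts only on y
        if control:
            o += gain * (0.0 - (x + 2.0 * y)) * dt   # subject's actual CEV
        xs.append(x); ys.append(y)
    return xs, ys

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

candidates = {"x + y":  lambda x, y: x + y,
              "x + 2y": lambda x, y: x + 2 * y,
              "x - y":  lambda x, y: x - y}

active = run(control=True)
passive = run(control=False)
for name, f in candidates.items():
    v_active = variance([f(x, y) for x, y in zip(*active)])
    v_passive = variance([f(x, y) for x, y in zip(*passive)])
    print(name, "stability ratio:", round(v_passive / max(v_active, 1e-9), 1))
# The hypothesis with the largest ratio (least variation relative to the
# uncontrolled baseline) is probably the closest to the subject's controlled
# environmental variable -- with no guarantee that it is exactly right.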

Rick actually did this in his area-perimeter experiment, and in a dramatic
form in the Three Squares demo (if it's the one I'm thinking of). But
regardless of which hypothesis best fits the data, there's no guarantee
whatever that the subject's perceptual function is actually the one
embodied in the hypothesis.

Martin

[From Bill Powers (970429.1042 MST)]

Martin Taylor 970429 11:50--

The controlled _variable_ is a perception inside the subject. The function
that produces this perception from the physical variables defines the
controlled _environmental_ variable. OK so far?

Yes, but you're going at this in an unnecessarily complicated way. The only
place you need to consider the observer's perceptions is when the observer
sees a possible controlled variable in the environment. We are dealing with
the WHOLE observer, and one who understands PCT and physics as well.

But now we simply _stipulate_ that the environment is really there, and turn
to physical analysis. How does the output of the control system (whichever
one we're testing) affect the observer's perception? This can be determined
by experimenting with the environment -- the control system doesn't have to
be present. The output is simply a physical quantity, like a force or a
position, which the observer can manipulate just as much as the control
system can. Similarly for the disturbing quantity. The only relevant
questions are, on what environmental variables does the proposed perception
depend, and by what physical pathways or laws? The fact that one of these
variables might be used by a control system as an output is irrelevant at
this stage. It's just a physical variable like any other.

So the observer does not need the purported control system during this part
of the process. The observer can manipulate the disturbance and the output
quantity independently of the control system -- the same quantities, with
the control system removed, or a duplicate of them, or a valid simulation of
them. In this way the observer can establish the partial derivatives of qi
with respect to qd and qo, or even establish their nature by physical
inspection. This is just a problem in physics.
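
Here is that bit of physics as a Python sketch, with a stand-in environment
function (the slopes 0.5 and 2.0 are arbitrary assumptions, just to give Fd
and Fe something definite to be):

def environment(qd, qo):
    """Stand-in physical linkage from disturbing quantity qd and output
    quantity qo to the proposed controlled quantity qi."""
    return 0.5 * qd + 2.0 * qo        # Fd has slope 0.5, Fe has slope 2.0

def partials(qd, qo, h=1e-4):
    """Finite-difference estimates of dqi/dqd and dqi/dqo, obtained by
    setting qd and qo directly -- no control system present."""
    dqi_dqd = (environment(qd + h, qo) - environment(qd - h, qo)) / (2 * h)
    dqi_dqo = (environment(qd, qo + h) - environment(qd, qo - h)) / (2 * h)
    return dqi_dqd, dqi_dqo

print(partials(qd=1.0, qo=0.0))       # -> roughly (0.5, 2.0)
# With Fd and Fe pinned down this way, the putative control system can be
# brought back and one can ask whether its output, acting through Fe, is in
# fact cancelling the effect of qd acting through Fd.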

Once Fe and Fd are established, the putative control system can be brought
back and allowed to function. If the output quantity is varying (it may not
be varying!), the question is whether it is cancelling the effect of qd via
Fd by means of qo acting via Fe. Both effects, of course, are effects on the
proposed perception in the observer.

If this condition is found, then the observer has reason to propose that the
control system is perceiving in the same way, and since the observer is not
now controlling that perception, and it IS controlled, the other system must
be doing the controlling.

The other parts of the test are designed to double-check these conclusions.
If the other system is NOT perceiving the controlled variable as defined,
then interrupting the ability to perceive it will not destroy control. It
doesn't matter that the observer is tearing his hair and screaming that the
other system MUST be perceiving that variable -- otherwise, how could it be
controlling it? The fact is that if control continues when the perception is
made impossible, the controlled variable is not defined correctly.

Ditto for double-checking on the assumed output. If you can find no physical
path from the output to the controlled variable you propose, then it can't
be controlled in the way you think it is, even if the other two parts of the
Test are passed. The conductor sits down and opens his lunchbox, and the
orchestra goes right on playing. It's an old joke. Lots of jokes are about
the Test.

What you're forgetting is that as the experimenter you can manipulate ALL the
environmental variables, including the one that might or might not be the
relevant output of the control system. You can always think of SOME way to
do that, even if it's only in simulation. Often a simple inspection of the
situation will tell you what you need to know.

Best,

Bill P.

[Martin Taylor 970429 14:10]

Bill Powers (970429.1042 MST)]

Martin Taylor 970429 11:50--

>The controlled _variable_ is a perception inside the subject. The function
>that produces this perception from the physical variables defines the
>controlled _environmental_ variable. OK so far?
>
Yes, but you're going at this in an unnecessarily complicated way. The only
place you need to consider the observer's perceptions is when the observer
sees a possible controlled variable in the environment. We are dealing with
the WHOLE observer, and one who understands PCT and physics as well.

...
If this condition is found, then the observer has reason to propose that the
control system is perceiving in the same way, and since the observer is not
now controlling that perception, and it IS controlled, the other system must
be doing the controlling.

So far, so good. But before the next stage comes a question: Just _what_ is
the other system controlling? If I (experimenter) am disturbing x-y+z, is
the other system controlling x*y/z (logarithmic summation)? I can tell the
difference between these if I manipulate x, y, and z so that the functions
give measurably different answers, but will I do so if I haven't thought
of the possibility? If the subject is actually controlling x*y/z, I will
see pretty darned good stabilization of x+y-z over a reasonable range of
values of x, y, and z. I'd see better stabilization if I thought to compute
x*y/z, but what I see has already convinced me that x+y-z is being controlled,
so what might induce me to try this other computation (out of an infinity of
related possibilities)?
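
To put a rough number on how easily one could be satisfied with the wrong
function, here is a small Python sketch (the operating ranges are made up):
over a modest range the experimenter's hypothesis and the subject's actual
function are so highly correlated that stabilizing one largely stabilizes
the other.

import random
random.seed(0)

pts = [(2.0 + random.uniform(-0.5, 0.5),
        3.0 + random.uniform(-0.5, 0.5),
        1.5 + random.uniform(-0.3, 0.3)) for _ in range(1000)]

a = [x + y - z for x, y, z in pts]    # the experimenter's hypothesis
b = [x * y / z for x, y, z in pts]    # the subject's actual function

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = (sum((t - mu) ** 2 for t in u) / n) ** 0.5
    sv = (sum((t - mv) ** 2 for t in v) / n) ** 0.5
    return sum((p - mu) * (q - mv) for p, q in zip(u, v)) / (n * su * sv)

print("correlation of x+y-z with x*y/z:", round(corr(a, b), 3))
# Over this range the correlation is well above 0.9, so a Test that only
# checks stabilization of x+y-z will not readily distinguish the two
# hypotheses unless x, y and z are pushed to where the functions diverge.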

The other parts of the test are designed to double-check these conclusions.
If the other system is NOT perceiving the controlled variable as defined,
then interrupting the ability to perceive it will not destroy control.

Yes, but consider the example above--perhaps no more extreme than might be
encountered in practice--both possibilities use the same physical variables.
This second part of the Test would have the same result in either case.

What you're forgetting is that as the experimenter you can manipulate ALL the
environmental variables, including the one that might or might not be the
relevant output of the control system.

No, I'm not forgetting that, though it is a questionable point as to how
the experimenter can determine that all the relevant environmental variables
have been thought of and manipulated. In "conventional" psychology, there
are always implicit assumptions that certain variables don't matter, and
the same holds in PCT. When doing the rubber-band experiment, I really
should vary the phase of Saturn in its orbit, but I don't, because I
don't believe astrological phenomena matter to _this_ controlled
perception. I _believe_ (i.e., have faith) that the environmental
variables are only the position of the knot in the band and the mark on
the table, and that the function controlled is the distance between them.
But I don't _know_ I've got all the variables, even if by making these
assumptions I discover control with a superb stability factor.

However that may be, my main point was that the _function_ relating the
physical variables manipulated by the experimenter is not determined just
by finding that the value of this function is well stabilized by the
actions of the subject. All you really find is that the subject is
controlling some perception generated by a function closely related
to the one whose value you (experimenter) are perceiving.

Martin

[From Bill Powers (970429.1309 MST)]

Martin Taylor 970429 14:10--

So far, so good. But before the next stage comes a question: Just _what_
is the other system controlling? If I (experimenter) am disturbing x-y+z,
is the other system controlling x*y/z (logarithmic summation)? I can tell
the difference between these if I manipulate x, y, and z so that the
functions give measurably different answers, but will I do so if I haven't
thought of the possibility?

What you'll do is humbly say that the controlled variable SEEMS to be x - y
+ z, and that you haven't been able to think of any function that works
better, and go with what you have.

I've been asked many times, in demonstrating the Coin Game and in other
applications of the Test, "But how do you know what hypothesis to try?" If
we knew the answer to that question, science would be a snap, wouldn't it?

Sometimes you can go by experience -- in the past, systems that look like
this and behave like this have controlled variables of this kind. Or maybe
you can come up with some systematic search algorithm, starting with some
general form and varying the parameters, seeing how each change improves the
fit. And there's always random reorganization, although it's not very
efficient. Also, if you have a controlled variable that works pretty well,
you can start playing about with it, looking for improvements in the fit, at
least over the local landscape. And don't forget your own control systems.
We can sometimes guess that if _we_ were doing what we see the other person
doing, we would probably be controlling X. Not infallible, but it's a start.

But if we really knew how the human mind comes up with new hypotheses, we
would know a lot more about it than we do. Obviously, the chances of coming
up with one that works are a lot better than seems reasonable on the basis
of what we do know.

If the subject is actually controlling x*y/z, I will
see pretty darned good stabilization of x+y-z over a reasonable range of
values of x, y, and z. I'd see better stabilization if I thought to
compute x*y/z, but what I see has already convinced me that x+y-z is being
controlled, so what might induce me to try this other computation (out of
an infinity of related possibilities)?

All I can say is, "practice." The first thing we have to learn is that when
a hypothesis works, we should be suspicious of it. Most of us do that, and
we don't just accept a "pretty good" fit as the final answer. We may start
out by doing that, but after having lived a while, we learn that there is
usually a better answer and it's worth the trouble to go on looking at least
for a while. Even a really good answer is sometimes wrong, and we learn that
from experience. So the basic answer is really that after you've been doing
this for a while, you won't have to be "induced" to try something else that
might work better. You'll be smart enough to know that you had _better_ keep
looking.

The other parts of the test are designed to double-check these
conclusions. If the other system is NOT perceiving the controlled
variable as defined, then interrupting the ability to perceive it will
not destroy control.

Yes, but consider the example above--perhaps no more extreme than might be
encountered in practice--both possibilities use the same physical
variables. This second part of the Test would have the same result in
either case.

So how would you determine that (x+y) instead of (x-y) must be perceived? I
really ought to challenge you to come up with an answer. Remember that this
experiment has three parts, and you must use the information from all of
them. They must be consistent with each other.

I don't know what your solution would be, but I would start by establishing
that it's necessary to perceive _both_ x and y. That might be sufficient,
depending on how you do your tests in phase 1. Phase 1 suggests what the
controlled function of x and y is -- you can show that it couldn't be x-y or
x/y, or any other simple function of x and y but x+y. So to block the
perception of x+y, it is sufficient to block perception of x or y or both
(and naturally, you'd try all three). This will show that there's no _other_
perception involved. That's a double-check on what you found by applying
disturbances. In this way you eliminate the possibility that there's a z
also affected by the output, which affects x and y and which is perceived
instead of x and y.

If you can think of a way in which you could be fooled when doing the test,
you can no longer be fooled. You just test to see if that alternative is
actually happening. So it pays to think of ways in which you could be fooled.

What you're forgetting is that as the experimenter you can manipulate ALL
the environmental variables, including the one that might or might not
be the relevant output of the control system.

No, I'm not forgetting that, though it is a questionable point as to how
the experimenter can determine that all the relevant environmental
variables have been thought of and manipulated.

You don't have to. The only case you have to worry about is the one in which
the output variable _fails_ to account for the observations, yet control
still goes on. If you can trace the physical connection from the output
variable to the controlled variable, that's all you need in order to say
that the Test has been passed as far as that phase is concerned. In the case
of perception, once you've shown that blocking access to the controlled
variable as defined in phase 1 destroys control, you don't need to consider
any other perceptions. You've nailed it.

In "conventional" psychology, there
are always implicit assumptions that certain variables don't matter, and
the same holds in PCT.

In conventional psychology, those other variables show up by disrupting the
behavior. The conventional approach is organized to try to _disprove_ the
hypothesis that there's _no_ relationship. An extraneous variable can create
the impression that there's a relationship when none exists.

In the Test, we're trying to disprove the hypothesis that there IS a
relationship -- that control exists. All we need is a sufficient disproof;
we don't need to consider every circumstance that could show that no control
exists. We don't need the simplest, most direct, or most elegant disproof.
One is enough.

When doing the rubber-band experiment, I really
should vary the phase of Saturn in its orbit, but I don't, because I
don't believe astrological phenomena matter to _this_ controlled
perception.

No. Saturn is irrelevant. To disprove control, all you have to show is that
blocking vision of the knot doesn't disrupt control, or that dropping your
end of the rubber bands doesn't disrupt it, or that the knot behaves just as
it should when there is no control. You don't have to go all the way to
Saturn to disprove control; it's much easier than that.

To _explain_ control, of course, you look for immediate relationships that
might be germane. If you like, you can include the influence of Saturn, but
when you solve the system equations you will find the Saturn coefficient to
be pretty small.

I _believe_ (i.e., have faith) that the environmental
variables are only the position of the knot in the band and the mark on
the table, and that the function controlled is the distance between them.
But I don't _know_ I've got all the variables, even if by making these
assumptions I discover control with a superb stability factor.

If you want to _know_, go to church.

Best,

Bill P.

[Martin Taylor 970430 14:20]

Bill Powers (970429.1309 MST)

Up till half-way through your message, I thought our ideas meshed exactly,
but somewhere a divergence showed up toward the end, and I don't know
quite where. Perhaps these comments will help.

Martin Taylor 970429 14:10--

So far, so good. But before the next stage comes a question: Just _what_
is the other system controlling? If I (experimenter) am disturbing x-y+z,
is the other system controlling x*y/z (logarithmic summation)? I can tell
the difference between these if I manipulate x, y, and z so that the
functions give measurably different answers, but will I do so if I haven't
thought of the possibility?

What you'll do is humbly say that the controlled variable SEEMS to be x - y
+ z, and that you haven't been able to think of any function that works
better, and go with what you have.

Yes, that's exactly what I was trying to say, and so is most of the rest,
a better written gloss on my points. Such as:

Sometimes you can go by experience -- in the past, systems that look like
this and behave like this have controlled variables of this kind. Or maybe
you can come up with some systematic search algorithm, starting with some
general form and varying the parameters, seeing how each change improves the
fit. And there's always random reorganization, although it's not very
efficient. Also, if you have a controlled variable that works pretty well,
you can start playing about with it, looking for improvements in the fit, at
least over the local landscape. And don't forget your own control systems.
We can sometimes guess that if _we_ were doing what we see the other person
doing, we would probably be controlling X. Not infallible, but it's a start.

But if we really knew how the human mind comes up with new hypotheses, we
would know a lot more about it than we do. Obviously, the chances of coming
up with one that works are a lot better than seems reasonable on the basis
of what we do know.

So far, we think alike.

If the subject is actually controlling x*y/z, I will
see pretty darned good stabilization of x+y-z over a reasonable range of
values of x, y, and z. I'd see better stabilization if I thought to
compute x*y/z, but what I see has already convinced me that x+y-z is being
controlled, so what might induce me to try this other computation (out of
an infinity of related possibilities)?

All I can say is, "practice."

"Practice" hardly seems to be an answer for the question of selecting out
of an infinite range of possibilities. But my question was rhetorical,
making the point that you already expanded on in the section quoted above.

The other parts of the test are designed to double-check these
conclusions. If the other system is NOT perceiving the controlled
variable as defined, then interrupting the ability to perceive it will
not destroy control.

Yes, but consider the example above--perhaps no more extreme than might be
encountered in practice--both possibilities use the same physical
variables. This second part of the Test would have the same result in
either case.

So how would you determine that (x+y) instead of (x-y) must be perceived? I
really ought to challenge you to come up with an answer. Remember that this
experiment has three parts, and you must use the information from all of
them. They must be consistent with each other.

Yes, exactly. And I don't know the reason for your "challenge." It's the
same question as above. Once you've thought of a counter-hypothesis, testing
which better fits the data isn't a big deal. The real question is to know
that x and y are the _only_ variables that enter into the perceptual
function of the subject, given that they _are_ the only variables in the
perceptual function of the experimenter.

Remember that this thread started from the difference of opinion between
me and Rick as to whether the experimenter could _observe_ the subject's
controlled environmental variable. Rick said it was what was actually done.
I said it was impossible. You seemed earlier to side with Rick, but in the
message to which I am responding, you side with me in saying "No, the
experimenter cannot know the subject's controlled environmental variable."
And you challenge me to prove the contrary of what I assert!

-------------

I don't know what your solution would be, but I would start by establishing
that it's necessary to perceive _both_ x and y. That might be sufficient,
depending on how you do your tests in phase 1. Phase 1 suggests what the
controlled function of x and y is -- you can show that it couldn't be x-y or
x/y, or any other simple function of x and y but x+y. So to block the
perception of x+y, it is sufficient to block perception of x or y or both
(and naturally, you'd try all three). This will show that there's no _other_
perception involved.

Huh? If the real function is p = x + y + z, and you block x or y, you
block p, don't you? How does what you propose show that no z is involved?
Showing that perception of x and of y are individually necessary is
sufficient to show that perception of z is not needed? Funny logic, I say.

That's a double-check on what you found by applying
disturbances. In this way you eliminate the possibility that there's a z
also affected by the output, which affects x and y and which is perceived
instead of x and y.

I must be missing something. I don't remember anyone talking about a
function that is an "OR" of functions of different variables. You seem
to be saying that p = (x+y) || z. In any case, even if we were talking
about this kind of an alternative route to p, if you didn't know what
z was, you couldn't know whether the manipulations that blocked x or y
didn't also block z.

If you can think of a way in which you could be fooled when doing the test,
you can no longer be fooled. You just test to see if that alternative is
actually happening. So it pays to think of ways in which you could be fooled.

Yes. It's the magic of thinking of all the ways you could be fooled that
is the problem.

-----------

What you're forgetting is that as the experimenter you can manipulate ALL
the environmental variables, including the one that might or might not
be the relevant output of the control system.

No, I'm not forgetting that, though it is a questionable point as to how
the experimenter can determine that all the relevant environmental
variables have been thought of and manipulated.

You don't have to. The only case you have to worry about is the one in which
the output variable _fails_ to account for the observations, yet control
still goes on. If you can trace the physical connection from the output
variable to the controlled variable, that's all you need in order to say
that the Test has been passed as far as that phase is concerned. In the case
of perception, once you've shown that blocking access to the controlled
variable as defined in phase 1 destroys control, you don't need to consider
any other perceptions. You've nailed it.

This is the point where our notions seem to part company. I really don't
see how eliminating the possibility of computing p by eliminating one of
its arguments shows you that the function generating p has no other
arguments that, if blocked, would also block p. If p = x + y + z + w,
I can block any one of them and reduce the accuracy of p. The fact that
I concentrated on x and y and found that they matter doesn't seem to me
to be evidence that there is no z or w.

In "conventional" psychology, there
are always implicit assumptions that certain variables don't matter, and
the same holds in PCT.

In conventional psychology, those other variables show up by disrupting the
behavior. The conventional approach is organized to try to _disprove_ the
hypothesis that there's _no_ relationship. An extraneous variable can create
the impression that there's a relationship when none exists.

In the Test, we're trying to disprove the hypothesis that there IS a
relationship -- that control exists.

You are addressing a different question. The discussion started with the
assertion that control exists. The question is _what_ is being controlled.
Every function not orthogonal to the one actually being controlled will
appear to show some degree of control. Every function in that class that
has an overlapping set of arguments with the true function will show
loss or deterioration of control when one of the common arguments is
blocked.

All we need is a sufficient disproof;
we don't need to consider every circumstance that could show that no control
exists.

That's backwards. We start (this discussion) at the point where we know
that control exists.

When doing the rubber-band experiment, I really
should vary the phase of Saturn in its orbit, but I don't, because I
don't believe astrological phenomena matter to _this_ controlled
perception.

No. Saturn is irrelevant. To disprove control, all you have to show is that
blocking vision of the knot doesn't disrupt control, or that dropping your
end of the rubber bands doesn't disrupt it, or that the knot behaves just as
it should when there is no control. You don't have to go all the way to
Saturn to disprove control; it's much easier than that.

But if Saturn were a part of the perception, then running the Test at a
different phase of Saturn might give you a different answer. Not thinking
that Saturn comes into play, you don't try that variation, so you don't
find out that it is an argument to the perceptual function (of the subject).

To _explain_ control, of course, you look for immediate relationships that
might be germane. If you like, you can include the influence of Saturn, but
when you solve the system equations you will find the Saturn coefficient to
be pretty small.

Exactly. That's what you would find once you looked for the amount of effect
(or so we believe, nobody having tried the experiment). That's the point
about implicit assumptions. We believe them strongly enough that we can
reduce the experiment by ignoring the fact that they _are_ assumptions.

If you want to _know_, go to church.

My point, I believe.

Thanks.

Martin

[From Bill Powers (970430.1155 MST)]

Martin Taylor 970430 14:20--

So how would you determine that (x+y) instead of (x-y) must be perceived?
I really ought to challenge you to come up with an answer. Remember that
this experiment has three parts, and you must use the information from
all of them. They must be consistent with each other.

Yes, exactly. And I don't know the reason for your "challenge."

I was just wondering if you might want to try to solve the problem you posed
instead of waiting for me to think of a solution.

It's the
same question as above. Once you've thought of a counter-hypothesis,
testing which better fits the data isn't a big deal. The real question is
to know that x and y are the _only_ variables that enter into the
perceptual function of the subject, given that they _are_ the only
variables in the perceptual function of the experimenter.

Suppose they are not the only variables involved. In that case, when you
disturb the perceived function of x and y (with the control system acting)
you will not find that the output exactly cancels the effect of the
disturbance, except by chance. On different occasions you will find that
applying the same disturbance will not result in the same opposing value of
the output. If you haven't found all the underlying physical variables (of
consequence) on which the controlled perception depends, sooner or later
natural disturbances will result in the output changing when you haven't
applied any disturbance, and when f(x,y) hasn't changed. This will tell you
that you've left something out.
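
Here is a toy Python version of that symptom. The linkages are assumed for
the sake of the example: the simulated subject controls x + y + z, the
output acts on y, control is treated as ideally tight, and the experimenter
is modelling only f(x,y) = x + y.

import random
random.seed(3)

def trial(disturb_x, z):
    """Steady-state outcome assuming tight control of x + y + z toward 0
    (idealized: the output on y fully cancels the effects of x and z)."""
    x = disturb_x
    output = -(x + z)              # what the output settles to
    y = output
    return output, x + y           # the output, and the experimenter's f(x,y)

for occasion in range(4):
    z = random.uniform(-1.0, 1.0)  # natural disturbance the model leaves out
    out, f_xy = trial(disturb_x=2.0, z=z)
    print("occasion", occasion, ": same disturbance, output =", round(out, 2),
          ", f(x,y) =", round(f_xy, 2))
# The same applied disturbance is met by a different output on each occasion,
# and f(x,y) does not stay put either -- the sign that the proposed controlled
# variable is missing at least one underlying physical variable.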

Remember that this thread started from the difference of opinion between
me and Rick as to whether the experimenter could _observe_ the subject's
controlled environmental variable. Rick said it was what was actually done.

I think that is a misinterpretation of whatever Rick said. How could Rick
propose that we can observe the subject's perceptions?

I said it was impossible. You seemed earlier to side with Rick, but
in the message to which I am responding, you side with me in saying "No,
the experimenter cannot know the subject's controlled environmental variable."

At all? You offer only a binary interpretation here: either we know the
controlled variable, or we don't. When the test works out well, what we know
is a good approximation to the subject's controlled variable; that's all I
say, and all I would presume that Rick would say.

Of course knowing the controlled variable doesn't imply that we know how the
subject perceives it.

And you challenge me to prove the contrary of what I assert!

Why not? Is this a game where we each make assertions and then try to prove
that they're right? The person with the readiest access to the assumptions
you're making is you; I have to try to deduce what they are, and I could get
them wrong.

-------------

I don't know what your solution would be, but I would start by establishing
that it's necessary to perceive _both_ x and y. That might be sufficient,
depending on how you do your tests in phase 1. Phase 1 suggests what the
controlled function of x and y is -- you can show that it couldn't be x-y
or x/y, or any other simple function of x and y but x+y. So to block the
perception of x+y, it is sufficient to block perception of x or y or both
(and naturally, you'd try all three). This will show that there's no
_other_ perception involved.

Huh? If the real function is p = x + y + z, and you block x or y, you
block p, don't you? How does what you propose show that no z is involved?

That problem is solved in Phase 1; you don't need to solve it again in Phase 3.

Suppose that in Phase 1, you propose that the perceptual variable is simply
f(x), when it is really g(x,y). When you disturb x, both x and y will change
to oppose the effect of the disturbance, leaving g(x,y) undisturbed. But you
will then _not_ see that your f(x) is undisturbed! How could it be? The
effect of the disturbance is being partly cancelled by a change in y, so x
does not have to be affected in a way equal and opposite to the effect of
the disturbance. I presume you can extrapolate this to the case of proposing
f(x,y) when it is actually f(x,y,z). If you propose too few variables, there
will be one degree of freedom unaccounted for. The control system will be
using this degree of freedom, but your model won't. So the model will not
behave like the real system.

I think you're assuming that if f(x,y) is stabilized, x and y are also
stabilized individually. This is not true. If (x+y) is being controlled, and
you disturb x, the control system's action will change both x and y,
stabilizing (x+y) but neither x nor y.
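A minimal sketch, assuming the output acts on both x and y and a simple leaky-integrator controller (the numbers are only illustrative), makes the point:

    import numpy as np

    T = 1000
    r, gain, slow = 0.0, 20.0, 0.02
    d_x = np.where(np.arange(T) > 300, 1.0, 0.0)   # step disturbance applied to x only

    o = 0.0
    for t in range(T):
        x = o + d_x[t]            # output and disturbance both act on x
        y = o                     # output also acts on y; no disturbance here
        p = x + y                 # the perception actually under control
        o += slow * (gain * (r - p) - o)

    # After the step, x and y each settle at new values; only their sum
    # comes back close to the reference.
    print("final x, y, x+y:", round(x, 3), round(y, 3), round(x + y, 3))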

Showing that perceptions of x and of y are individually necessary is
sufficient to show that perception of z is not needed? Funny logic, I say.

That, plus the observations you made in phase 1. All that checking out the
perception is needed for is to take care of possibilities you hadn't
considered -- for example, that there is some OTHER system you hadn't
noticed that is really doing the controlling, or that the output is being
generated by some open-loop method of predicting the disturbance, or (as I
pointed out) that there is some extra variable on which x and y both
depend, which is really being affected by the output and being sensed.

Phase 3 just interrupts the path by which it is assumed control takes place,
to make sure control is lost as it should be.

That's a double-check on what you found by applying
disturbances. In this way you eliminate the possibility that there's a z
also affected by the output, which affects x and y and which is perceived
instead of x and y.

I must be missing something. I don't remember anyone talking about a
function that is an "OR" of functions of different variables. You seem
to be saying that p = (x+y) || z. In any case, even if we were talking
about this kind of an alternative route to p, if you didn't know what
z was, you couldn't know whether the manipulations that blocked x or y
didn't also block z.

No, I'm proposing that in the _real_ system, p = z, x = f1(z), and y =
f2(z). Of course that would contradict what was found in Phase 1, so I don't
really think we would find this when we examine the environment. Actually,
it's very hard to think of a way in which Phase 1 could be passed and Phase
3 be failed -- but that's why we do Phase 3. The fact that I can't think of
a way means that I am likely to miss this possibility if it happens to be true.

If you can think of a way in which you could be fooled when doing the
test, you can no longer be fooled. You just test to see if that
alternative is actually happening. So it pays to think of ways in which
you could be fooled.

Yes. It's the magic of thinking of all the ways you could be fooled that
is the problem.

It's not a problem at all. If you worried about this problem you would never
do anything. What you do is go ahead, knowing that you're going to be
fooled. When you are fooled, you will learn something new. The worst choice
is to assume that you know something so well that you can never be fooled.
Then, when you are fooled, you are no longer able to admit it.

This is the point where our notions seem to part company. I really don't
see how eliminating the possibility of computing p by eliminating one of
its arguments shows you that the function generating p has no other
arguments that, if blocked, would also block p.

You know the number of arguments from Phase 1. That question has been
settled by the time you get to Phase 3.

If p = x + y + z + w,
I can block any one of them and reduce the accuracy of p. The fact that
I concentrated on x and y and found that they matter doesn't seem to me
to be evidence that there is no z or w.

Again, that is settled by Phase 1. If you neglect z and w in phase 1, you
will not find that your proposed function f(x,y) is stabilized against
disturbances of x, y, or both, by the outputs that you observe. And you will
see the output change when neither x nor y is disturbed.

You are addressing a different question. The discussion started with the
assertion that control exists. The question is _what_ is being controlled.

No, not at all. You start with the _proposal_ that control exists. You
_propose_ a variable that is being controlled. That proposal implies certain
observable consequences of experiments, which you then try out in every way
you can think of to see if the proposal can be disproven.

Every function not orthogonal to the one actually being controlled will
appear to show some degree of control.

Yes, this is true. This is why I insist on getting very good data. If your
correlations are 0.95 or better, you know that your model is fairly well
aligned with the real one, in terms of orthogonality. The orthogonal
component is small. You set your standards so that weakly-predictive models
are discarded.
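In practice that standard can be as simple as a screening function like the following (Python; the 0.95 criterion and the variable names are only illustrative). The traces compared are whatever the experimenter records from the subject and the corresponding traces produced by running the proposed model against the same disturbances:

    import numpy as np

    def accept_model(observed, predicted, criterion=0.95):
        """Keep a proposed model of the controlled variable only if its
        simulated behaviour correlates at least `criterion` with what the
        subject actually did (both arguments are time series)."""
        r = np.corrcoef(observed, predicted)[0, 1]
        return r >= criterion, r

    # Hypothetical use, with traces recorded from the subject and produced
    # by running the proposed model against the same disturbances:
    #   ok, r = accept_model(subject_output, model_output)
    #   if not ok: discard this candidate controlled variable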

Every function in that class that
has an overlapping set of arguments with the true function will show
loss or deterioration of control when one of the common arguments is
blocked.

That's why I wouldn't stop with merely an indication that control exists.
The better the match of the model to the data, the less this problem matters.

All we need is a sufficient disproof;
we don't need to consider every circumstance that could show that no
control exists.

That's backwards. We start (this discussion) at the point where we know
that control exists.

On the contrary, the whole point of the Test is to prove that control
DOESN'T exist. You do your best to model the control system you think might
be there, and then you forget about that and do your best to prove that the
model doesn't agree with what you observe. If your test is designed to catch
every way in which the model might fail, passing the test says that the
model at least isn't definitely wrong. If you want some indication of its
rightness, you have to look to other kinds of analysis. The Test is designed
only to catch wrong models and eliminate them before you waste any more time
on them.

But if Saturn were a part of the perception, then running the Test at a
different phase of Saturn might give you a different answer. Not thinking
that Saturn comes into play, you don't try that variation, so you don't
find out that it is an argument to the perceptual function (of the subject).

To _explain_ control, of course, you look for immediate relationships
that might be germane. If you like, you can include the influence of
Saturn, but when you solve the system equations you will find the Saturn
coefficient to be pretty small.

Exactly. That's what you would find once you looked for the amount of
effect (or so we believe, nobody having tried the experiment). That's the
point about implicit assumptions. We believe them strongly enough that we
can reduce the experiment by ignoring the fact that they _are_ assumptions.

I guess I have to finish out that thought. If you want to explain control,
then maybe you have to consider Saturn. But the Test isn't for explaining
control; it's for testing explanations. If your model says that control
should be lost when you interrupt perception of x and y, and control goes
right on as before, you don't have to worry about Saturn or anything else:
you have shown that control does not depend on sensing x and y, and that
trashes your model. That's all the Test has to accomplish. It won't help you
fix your model.

Best,

Bill P.

[Martin Taylor 970430 18:00]

Bill Powers (970430.1155 MST)

I think we are getting back on the same track again. The clue is in the
juxtaposition of these two sections of your message:

It's the
same question as above. Once you've thought of a counter-hypothesis,
testing which better fits the data isn't a big deal. The real question is
to know that x and y are the _only_ variables that enter into the
perceptual function of the subject, given that they _are_ the only
variables in the perceptual function of the experimenter.

Suppose they are not the only variables involved. In that case, when you
disturb the perceived function of x and y (with the control system acting)
you will not find that the output exactly cancels the effect of the
disturbance, except by chance. On different occasions you will find that
applying the same disturbance will not result in the same opposing value of
the output. If you haven't found all the underlying physical variables (of
consequence) on which the controlled perception depends, sooner or later
natural disturbances will result in the output changing when you haven't
applied any disturbance, and when f(x,y) hasn't changed. This will tell you
that you've left something out.

and

Every function not orthogonal to the one actually being controlled will
appear to show some degree of control.

Yes, this is true. This is why I insist on getting very good data. If your
correlations are 0.95 or better, you know that your model is fairly well
aligned with the real one, in terms of orthogonality. The orthogonal
component is small. You set your standards so that weakly-predictive models
are discarded.

Putting those two together largely resolves the problem. Accepting
that the control stability observed is well modelled while you vary the
presumed disturbance doesn't leave much room for important extra variables,
unless they are modulators on the perceptual effects of the ones you are
assuming to be the complete set of important ones.

The key is that it is the whole control _model_ that provides the
correlation of 0.95 with the observation, not that the output correlates
0.95 with the disturbance signal. The control model starts with the qd
generated by the experimenter, has an Fd that is part of the model, and
a disturbance signal sd == Fd(qd) that is _known_ to the experimenter
because the model is the experimenter's creation. If the observed
behaviour of the variable the experimenter models as being controlled
by the subject matches the behaviour of the model, there is probably
not too much wrong with the experimenter's putative environmental
variable as a model of what the subject is controlling.

I had forgotten this "key."
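As a sketch of that procedure (Python; the controller form and parameter values are placeholders to be fitted, not claims about any particular subject), the experimenter's whole model can be run against the known qd and its trajectory compared with the recording:

    import numpy as np

    def run_model(qd, Fd, gain=20.0, slow=0.04, ref=0.0):
        """Run the experimenter's whole control model against the disturbance
        qd that the experimenter generated. Fd maps qd into its effect on the
        modelled controlled quantity (sd = Fd(qd)); that quantity is taken
        here as output + Fd(qd). Gain, slowing, and reference are placeholders
        to be fitted."""
        o, cv = 0.0, []
        for q in qd:
            v = o + Fd(q)                      # modelled controlled quantity
            o += slow * (gain * (ref - v) - o) # leaky-integrator output function
            cv.append(v)
        return np.array(cv)

    # qd_trace    = ...  # the disturbance waveform the experimenter generated
    # observed_cv = ...  # the putative controlled quantity, as actually recorded
    # modelled_cv = run_model(qd_trace, Fd=lambda q: q)
    # print(np.corrcoef(observed_cv, modelled_cv)[0, 1])  # the whole-model correlation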

···

-----------------

Remember that this thread started from the difference of opinion between
me and Rick as to whether the experimenter could _observe_ the subject's
controlled environmental variable. Rick said it was what was actually done.

I think that is a misinterpretation of whatever Rick said. How could Rick
propose that we can observe the subject's perceptions?

That's what I couldn't figure out. Here's what he said. Maybe you can
interpret it differently. I can't.

+Rick Marken (970428.1555)

+Martin:
+
+> The experimenter has to deduce the effect of the source variations
+> on the supposed controlled environmental variable.
+
+Me:
+
+>Wrongo! The experimenter has to _observe_ the effect of these
+>disturbances on the supposed controlled variable.
+
+Martin:
+
+> How on Earth is the experimenter supposed to _observe_ the effect
+> of the disturbances on something that is a function of the
+> _subject's_ perceptual input function?
+
+Try my "Test for the Controlled Variable" demo and see how easy it
+is. Remember, the experimenter can perceive the hypothetical controlled
+variable too -- though not necessarily as the subject perceives it.

If the experimenter doesn't perceive it as the subject perceives it, the
experimenter doesn't perceive the subject's controlled environmental
variable. Period. I don't see how Rick's reiterated claim that the
experimenter observes the subject's environmental variable can be
construed as anything but a claim that the experimenter observes the
subject's environmental variable. And that's plain wrong.

(Powers again):

I said it was impossible. You seemed earlier to side with Rick, but
in the message to which I am responding, you side with me in saying "No,
the experimenter cannot know the subject's controlled environmental variable."

At all? You offer only a binary interpretation here: either we know the
controlled variable, or we don't. When the test works out well, what we know
is a good approximation to the subject's controlled variable; that's all I
say, and all I would presume that Rick would say.

Well, that's what I would say, too, and it is what I _did_ say. It's what
Rick objected to, and that objection led to this little thread.

Martin

[From Bill Powers (970430.1706 MST)]

Martin Taylor 970430 18:00--

I think we are getting back on the same track again. The clue is in the
juxtaposition of these two sections of your message ...

Yes, I think the issue is settled for now.

···

------------------------------

Remember that this thread started from the difference of opinion between
me and Rick as to whether the experimenter could _observe_ the subject's
controlled environmental variable. Rick said it was what was actually done.

I think that is a misinterpretation of whatever Rick said. How could Rick
propose that we can observe the subject's perceptions?

That's what I couldn't figure out. Here's what he said. Maybe you can
interpret it differently. I can't.

+Rick Marken (970428.1555)

+Martin:
+
+> The experimenter has to deduce the effect of the source variations
+> on the supposed controlled environmental variable.
+
+Me:
+
+>Wrongo! The experimenter has to _observe_ the effect of these
+>disturbances on the supposed controlled variable.
+
+Martin:
+
+> How on Earth is the experimenter supposed to _observe_ the effect
+> of the disturbances on something that is a function of the
+> _subject's_ perceptual input function?

It's pretty clear what you missed in Rick's statement. The observer does
indeed _observe_ the effect of the disturbances on the SUPPOSED controlled
variable. The observer constructs a function of observable physical
variables: that is how he explicitly "supposes" the nature of the controlled
variable. Then he observes the effects of disturbances on the value of this
function. If he supposes, for example, that the putative control system is
controlling (x+y), he measures x and y while the disturbance and output
vary, and computes (x+y) to see if the sum is stabilized.
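One crude way to summarize that check from the recorded data (a sketch, assuming the disturbance would move x + y one-for-one if it were unopposed -- an assumption the experimenter has to justify):

    import numpy as np

    def stabilization_ratio(x, y, disturbance):
        """How much less the supposed controlled quantity (here x + y) varies
        than the disturbance alone would have made it vary. Values well below
        1 indicate stabilization; values near 1 indicate no control of x + y."""
        supposed_cv = np.asarray(x) + np.asarray(y)
        return np.std(supposed_cv) / np.std(disturbance)

    # x, y        = ...  # measured physical variables, recorded during the run
    # disturbance = ...  # the disturbance waveform the experimenter applied
    # print(stabilization_ratio(x, y, disturbance))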

You must have been giving some special interpretation to "supposed
controlled variable", perhaps displacing "supposed" to imply "supposedly
observed controlled variable."

Is that problem taken care of, too, now?

Best,

Bill P.

[Martin Taylor 970501 10:20]

Bill Powers (970430.1706 MST)

It's pretty clear what you missed in Rick's statement. The observer does
indeed _observe_ the effect of the disturbances on the SUPPOSED controlled
variable. The observer constructs a function of observable physical
variables: that is how he explicitly "supposes" the nature of the controlled
variable. Then he observes the effects of disturbances on the value of this
function. If he supposes, for example, that the putative control system is
controlling (x+y), he measures x and y while the disturbance and output
vary, and computes (x+y) to see if the sum is stabilized.

You must have been giving some special interpretation to "supposed
controlled variable", perhaps displacing "supposed" to imply "supposedly
observed controlled variable."

Is that problem taken care of, too, now?

Yes, if Rick agrees that this is indeed what he meant. In that case, I
apologize to all and sundry, and Rick, for wasting net bandwidth.

Martin