[From Bill Powers (970430.1155 MST)]
Martin Taylor 970430 14:20--
So how would you determine that (x+y) instead of (x-y) must be perceived?
I really ought to challenge you to come up with an answer. Remember that
this experiment has three parts, and you must use the information from
all of them. They must be consistent with each other.
Yes, exactly. And I don't know the reason for your "challenge."
I was just wondering if you might want to try to solve the problem you posed
instead of waiting for me to think of a solution.
It's the
same question as above. Once you've thought of a counter-hypothesis,
testing which better fits the data isn't a big deal. The real question is
to know that x and y are the _only_ variables that enter into the
perceptual function of the subject, given that they _are_ the only
variables in the perceptual function of the experimenter.
Suppose they are not the only variables involved. In that case, when you
disturb the perceived function of x and y (with the control system acting),
you will not find that the output exactly cancels the effect of the
disturbance, except by chance. On different occasions, applying the same
disturbance will not produce the same opposing value of the output. If you
haven't found all the underlying physical variables (of consequence) on
which the controlled perception depends, sooner or later natural
disturbances will make the output change when you haven't applied any
disturbance and f(x,y) hasn't changed. That will tell you that you've left
something out.
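The point can be sketched numerically. Below is a minimal simulation of my own (the gains, reference value, and loop structure are illustrative assumptions, not anything from this exchange): a control loop whose perception is p = x + y, with the output and the experimenter's disturbance both acting on x. If p really is the controlled variable, a change in the settled output exactly mirrors the change in the disturbance, time after time.

```python
# Hypothetical sketch: an integrating control loop controlling p = x + y.
# All numeric values (reference r, gain k, fixed y) are assumed for
# illustration only.

def run(d, y=2.0, r=5.0, k=0.2, steps=200):
    o = 0.0
    for _ in range(steps):
        x = o + d          # output and disturbance combine at x
        p = x + y          # the controlled perception
        o += k * (r - p)   # integrating output opposes the error
    return o

o1, o2 = run(d=1.0), run(d=3.0)
# The change in output is equal and opposite to the change in disturbance:
print(round(o2 - o1, 3))   # → -2.0
```

If the subject's perception actually depended on some further variable z, this exact cancellation would fail whenever z varied: the same disturbance would meet different outputs on different occasions.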
Remember that this thread started from the difference of opinion between
me and Rick as to whether the experimenter could _observe_ the subject's
controlled environmental variable. Rick said it was what was actually done.
I think that is a misinterpretation of whatever Rick said. How could Rick
propose that we can observe the subject's perceptions?
I said it was impossible. You seemed earlier to side with Rick, but
in the message to which I am responding, you side with me in saying "No,
the experimenter cannot know the subject's controlled environmental variable."
At all? You offer only a binary interpretation here: either we know the
controlled variable, or we don't. When the test works out well, what we know
is a good approximation to the subject's controlled variable; that's all I
say, and all I would presume that Rick would say.
Of course knowing the controlled variable doesn't imply that we know how the
subject perceives it.
And you challenge me to prove the contrary of what I assert!
Why not? Is this a game where we each make assertions and then try to prove
that they're right? The person with the readiest access to the assumptions
you're making is you; I have to try to deduce what they are, and I could get
them wrong.
-------------
I don't know what your solution would be, but I would start by establishing
that it's necessary to perceive _both_ x and y. That might be sufficient,
depending on how you do your tests in phase 1. Phase 1 suggests what the
controlled function of x and y is -- you can show that it couldn't be x-y
or x/y, or any simple function of x and y other than x+y. So to block the
perception of x+y, it is sufficient to block perception of x or y or both
(and naturally, you'd try all three). This will show that there's no
_other_ perception involved.
Huh? If the real function is p = x + y + z, and you block x or y, you
block p, don't you? How does what you propose show that no z is involved?
That problem is solved in Phase 1; you don't need to solve it again in Phase 3.
Suppose that in Phase 1, you propose that the perceptual variable is simply
f(x), when it is really g(x,y). When you disturb x, both x and y will change
to oppose the effect of the disturbance, leaving g(x,y) undisturbed. But you
will then _not_ see that your f(x) is undisturbed! How could it be? The
effect of the disturbance is being partly cancelled by a change in y, so x
does not have to be affected in a way equal and opposite to the effect of
the disturbance. I presume you can extrapolate this to the case of proposing
f(x,y) when it is actually f(x,y,z). If you propose too few variables, there
will be one degree of freedom unaccounted for. The control system will be
using this degree of freedom, but your model won't. So the model will not
behave like the real system.
I think you're assuming that if f(x,y) is stabilized, x and y are also
stabilized individually. This is not true. If (x+y) is being controlled, and
you disturb x, the control system's action will change both x and y,
stabilizing (x+y) but neither x nor y.
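This behavior is easy to demonstrate in a toy simulation (my own hypothetical setup: the 0.5 output weights, gain, and reference value are assumed purely for illustration). The output feeds both x and y; when x is disturbed, the sum x + y returns to its reference while x and y individually shift.

```python
# Hypothetical sketch: the output contributes to BOTH x and y, so control
# of (x + y) stabilizes the sum without stabilizing either variable alone.

def settle(d, r=10.0, k=0.1, steps=300):
    o = 0.0
    for _ in range(steps):
        x = 0.5 * o + d    # disturbance d acts on x only
        y = 0.5 * o
        o += k * (r - (x + y))   # integrating control of the sum
    return x, y, x + y

x1, y1, s1 = settle(d=0.0)
x2, y2, s2 = settle(d=4.0)
# The sum is unchanged; x rises by 2 and y falls by 2:
print(round(abs(s2 - s1), 3), round(x2 - x1, 3), round(y2 - y1, 3))
# → 0.0 2.0 -2.0
```

So an experimenter who proposes f(x) alone would see x move under disturbance and wrongly conclude that nothing is being controlled there.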
Showing that perception of x and of y is individually necessary is
sufficient to show that perception of z is not needed? Funny logic, I say.
That, plus the observations you made in phase 1. All that checking out the
perception is needed for is to take care of possibilities you hadn't
considered -- for example, that there is some OTHER system you hadn't
noticed that is really doing the controlling, or that the output is being
generated by some open-loop method of predicting the disturbance, or (as I
pointed out) that there is some extra variable on which x and y both
depend, one that is really being affected by the output and being sensed.
Phase 3 just interrupts the path by which it is assumed control takes place,
to make sure control is lost as it should be.
That's a double-check on what you found by applying
disturbances. In this way you eliminate the possibility that there's a z
also affected by the output, which affects x and y and which is perceived
instead of x and y.
I must be missing something. I don't remember anyone talking about a
function that is an "OR" of functions of different variables. You seem
to be saying that p = (x+y) || z. In any case, even if we were talking
about this kind of an alternative route to p, if you didn't know what
z was, you couldn't know whether the manipulations that blocked x or y
didn't also block z.
No, I'm proposing that in the _real_ system, p = z, x = f1(z), and y =
f2(z). Of course that would contradict what was found in Phase 1, so I don't
really think we would find this when we examine the environment. Actually,
it's very hard to think of a way in which Phase 1 could be passed and Phase
3 be failed -- but that's why we do Phase 3. The fact that I can't think of
a way means that I am likely to miss this possibility if it happens to be true.
If you can think of a way in which you could be fooled when doing the
test, you can no longer be fooled. You just test to see if that
alternative is actually happening. So it pays to think of ways in which
you could be fooled.
Yes. It's the magic of thinking of all the ways you could be fooled that
is the problem.
It's not a problem at all. If you worried about this problem you would never
do anything. What you do is go ahead, knowing that you're going to be
fooled. When you are fooled, you will learn something new. The worst choice
is to assume that you know something so well that you can never be fooled.
Then, when you are fooled, you are no longer able to admit it.
This is the point where our notions seem to part company. I really don't
see how eliminating the possibility of computing p by eliminating one of
its arguments shows you that the function generating p has no other
arguments that, if blocked, would also block p.
You know the number of arguments from Phase 1. That question has been
settled by the time you get to Phase 3.
If p = x + y + z + w,
I can block any one of them and reduce the accuracy of p. The fact that
I concentrated on x and y and found that they matter doesn't seem to me
to be evidence that there is no z or w.
Again, that is settled by Phase 1. If you neglect z and w in phase 1, you
will not find that your proposed function f(x,y) is stabilized against
disturbances of x, y, or both, by the outputs that you observe. And you will
see the output change when neither x nor y is disturbed.
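A small simulation makes this failure mode visible (again an illustrative sketch of my own, with all numbers assumed): the real perception is x + y + z, but the experimenter's proposed variable is x + y. Disturbing z alone, with no disturbance to x or y, changes both the output and the proposed f(x,y), exposing the neglected variable.

```python
# Hypothetical sketch: the REAL controlled perception includes a variable z
# that the experimenter's proposed function f(x, y) = x + y leaves out.

def settle(dx, z, r=8.0, k=0.1, steps=300):
    o = 0.0
    for _ in range(steps):
        x = o + dx         # output and any x-disturbance combine at x
        y = 1.0            # y held constant for clarity
        p = x + y + z      # the real perception depends on z as well
        o += k * (r - p)
    return o, x + y        # settled output, and the proposed f(x, y)

o1, f1 = settle(dx=0.0, z=0.0)
o2, f2 = settle(dx=0.0, z=3.0)   # a "natural disturbance" of z alone
# Output changes, and the proposed variable fails to stay stabilized,
# even though neither x nor y was disturbed:
print(round(o2 - o1, 3), round(f2 - f1, 3))   # → -3.0 -3.0
```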
You are addressing a different question. The discussion started with the
assertion that control exists. The question is _what_ is being controlled.
No, not at all. You start with the _proposal_ that control exists. You
_propose_ a variable that is being controlled. That proposal implies certain
observable consequences of experiments, which you then try out in every way
you can think of to see if the proposal can be disproven.
Every function not orthogonal to the one actually being controlled will
appear to show some degree of control.
Yes, this is true. This is why I insist on getting very good data. If your
correlations are 0.95 or better, you know that your model is fairly well
aligned with the real one, in terms of orthogonality. The orthogonal
component is small. You set your standards so that weakly-predictive models
are discarded.
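The orthogonality point can be made concrete with a simulation (my own illustration; every numeric choice is assumed). With x + y really controlled, x + y stays almost perfectly still across disturbed trials, the orthogonal combination x - y varies freely, and a non-orthogonal candidate such as x + 0.5y varies, but less, so it appears partially controlled.

```python
# Hypothetical sketch: how well various candidate functions appear
# "controlled" when the real controlled variable is x + y.
import random

random.seed(1)

def settle(d1, d2, r=6.0, k=0.1, steps=300):
    o = 0.0
    for _ in range(steps):
        x = 0.5 * o + d1   # independent disturbances act on x and y
        y = 0.5 * o + d2
        o += k * (r - (x + y))   # the REAL controlled variable is x + y
    return x, y

trials = [settle(random.uniform(-3, 3), random.uniform(-3, 3))
          for _ in range(200)]

def spread(f):
    """Standard deviation of a candidate function across trials."""
    vals = [f(x, y) for x, y in trials]
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

print(spread(lambda x, y: x + y) < 0.01)                         # → True
print(spread(lambda x, y: x + 0.5 * y)
      < spread(lambda x, y: x - y))                              # → True
```

Only the fully aligned candidate is held near-constant; the partially overlapping one shows partial stabilization, which is why a high standard for the data matters.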
Every function in that class that
has an overlapping set of arguments with the true function will show
loss or deterioration of control when one of the common arguments is
blocked.
That's why I wouldn't stop with merely an indication that control exists.
The better the match of the model to the data, the less this problem matters.
All we need is a sufficient disproof;
we don't need to consider every circumstance that could show that no
control exists.
That's backwards. We start (this discussion) at the point where we know
that control exists.
On the contrary, the whole point of the Test is to prove that control
DOESN'T exist. You do your best to model the control system you think might
be there, and then you forget about that and do your best to prove that the
model doesn't agree with what you observe. If your test is designed to catch
every way in which the model might fail, passing the test says that the
model at least isn't definitely wrong. If you want some indication of its
rightness, you have to look to other kinds of analysis. The Test is designed
only to catch wrong models and eliminate them before you waste any more time
on them.
But if Saturn were a part of the perception, then running the Test at a
different phase of Saturn might give you a different answer. Not thinking
that Saturn comes into play, you don't try that variation, so you don't
find out that it is an argument to the perceptual function (of the subject).
To _explain_ control, of course, you look for immediate relationships
that might be germane. If you like, you can include the influence of
Saturn, but when you solve the system equations you will find the Saturn
coefficient to be pretty small.
Exactly. That's what you would find once you looked for the amount of
effect (or so we believe, nobody having tried the experiment). That's the
point about implicit assumptions. We believe them strongly enough that we
can reduce the experiment by ignoring the fact that they _are_ assumptions.
I guess I have to finish out that thought. If you want to explain control,
then maybe you have to consider Saturn. But the Test isn't for explaining
control; it's for testing explanations. If your model says that control
should be lost when you interrupt perception of x and y, and control goes
right on as before, you don't have to worry about Saturn or anything else:
you have shown that control does not depend on sensing x and y, and that
trashes your model. That's all the Test has to accomplish. It won't help you
fix your model.
Best,
Bill P.