[From Bill Powers (951129.0600 MST)]
Peter J. Burke (11/28/95 13:37) --
This raises an interesting question, for me. I am not sure
(philosophically, even) whether it is possible to discover "the"
controlled variable (perception), since it is likely (as in this
case) to be a gestalt which is made of some weighted combination of
stimuli from the external world (and possibly from internal states
as well).
This is basically what the Test is designed to do. Defining a controlled
variable for purposes of the Test entails not just observing some single
variable in the environment, but _proposing_ a function of observable
variables that might be under control. In Rick's "area" experiment, the
controlled variable was defined in two ways: x + y and x * y, where x
and y were the individual dimensions of the figure on the screen. He
could also have tested for x/y, x^y, and x^2 + y^2. A control model is
set up in which the perceptual input function performs the proposed
functional operation on the inputs x and y to produce a perceptual
signal. The remainder of the model compares the perception with a
reference signal and turns the error signal into a rate of change of
handle position (our usual output model, used as long as it works). Then
the behavior of the model is matched to the real behavior as well as
possible by varying the two available parameters, the multiplier in the
output function and the (assumed constant) setting of the reference
signal. This is done for each proposed perceptual function, and the
perceptual function of the best-fitting model is taken to define the
controlled variable. Rick found that "x*y" was a significantly better
proposal than "x + y".
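The fitting procedure just described can be sketched in code. This is a hypothetical illustration, not Rick's actual program: the disturbance waveforms, parameter ranges, and the "subject" run (generated here by the model itself, standing in for recorded human data) are all invented for the sketch.

```python
import numpy as np

def run_model(f, d1, d2, k, r, dt=0.1):
    """Control model: perception p = f(x, y); handle velocity = k * (r - p)."""
    h = 0.0
    trace = np.empty(len(d1))
    for i in range(len(d1)):
        x, y = h + d1[i], d2[i]        # environment: handle plus disturbances
        h += k * (r - f(x, y)) * dt    # integrate error into handle position
        trace[i] = h
    return trace

t = np.arange(600)
d1 = 0.5 * np.sin(2 * np.pi * t / 200)         # disturbance acting on x
d2 = 2.0 + 0.5 * np.sin(2 * np.pi * t / 140)   # disturbance acting on y

area = lambda x, y: x * y     # proposed perceptual function: x * y
summ = lambda x, y: x + y     # proposed perceptual function: x + y

# Stand-in for the subject's recorded handle movements: a run that
# actually controlled the product, with k = 2 and r = 6.
subject = run_model(area, d1, d2, k=2.0, r=6.0)

def best_rms(f):
    """Vary the two free parameters (output gain k, reference r) and
    return the best RMS match between model and 'subject'."""
    return min(
        np.sqrt(np.mean((run_model(f, d1, d2, k, r) - subject) ** 2))
        for k in np.linspace(0.5, 4.0, 15)
        for r in np.linspace(2.0, 10.0, 33))

print(best_rms(area) < best_rms(summ))   # the x*y model fits better
```

The perceptual function of the better-fitting model is then taken to define the controlled variable, exactly as in the text above.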
There is no reason in principle why this approach can't be extended to
perceptual variables that are functions of any number of dimensions of
the observed environment. We don't use the concept of "gestalt," because
it's too qualitative to quantify. But we do use the concept of
perceptions that are functions of multiple input variables (weighted
summation being one possible kind of function).
We can discover what disturbs the controlled perception (by
disturbing it), but I am not sure we can ever know what the "it" is
in this case.
In the Test, a disturbance is used which affects the observable
variables in the environment. From the known physical influences of the
disturbing variable or variables, we can compute the effect on the
perceptual signal by applying the proposed perceptual function to the
environmental variables that are its inputs. There are some kinds of
disturbances that would (if unopposed) affect the perceptual signal;
others would not. The Test entails making both kinds of predictions; no
resistance to disturbances that change the environment but do not change
the perception, and resistance to disturbances that do change the
perception. In Rick's "area" experiment, for example, a disturbance that
increases x and decreases y by the same amount would not affect the
perception x + y, so we would predict that it would not be resisted if x
+ y were being controlled. On the other hand, it _would_ disturb a
perception computed from x * y, and we would expect resistance to that
disturbance, if that were the variable under control.
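The arithmetic behind this prediction is easy to check directly. A minimal sketch, with x, y, and the disturbance size chosen arbitrarily for illustration:

```python
x, y = 3.0, 5.0   # the two observable dimensions
d = 1.0           # disturbance: push x up and y down by the same amount
x2, y2 = x + d, y - d

print(x2 + y2 == x + y)   # sum unchanged: no resistance predicted if x+y is controlled
print(x2 * y2 != x * y)   # product changed: resistance predicted if x*y is controlled
```

Algebraically, (x + d) + (y - d) = x + y always, while (x + d)(y - d) = xy + d(y - x) - d^2 differs from xy whenever d is nonzero (except in the special case d = y - x), which is what makes this disturbance diagnostic.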
Do we ever know exactly what combinations and in what proportions?
Is there some way around this? Should we be content with the more
molar analysis? I am sort of leaning in this latter direction since
it is the one that "works," (at least so far).
I've just sketched out the procedures for finding the combinations and
proportions. The "molar" approach is useful, but the more detailed
approach is to be preferred when you can do it. The problem with the
molar approach is that it leaves even the experimenter unaware of the
underlying perceptual transformations, and leads to mushy results.
-----------------------------------------------------------------------
Dag Forssell (951128 1420) --
Thanks for the long reprise of the "science or mush" debate. I hope
Bruce is now aware that we've been around on the IV-DV matter a few
times before.
I highly approve of your idea that any true explanation must include a
proposed mechanism.
-----------------------------------------------------------------------
Martin Taylor 951128 17:30 --
Excellent reply to Chris Cherpas on Herrnstein's girlfriend.
I would like to add a question to that discussion: How did Herrnstein
discover that the controlled variable involved the presence of
Herrnstein's girlfriend in the picture? The pigeon appeared to be
controlling for "(girl present and YES key) or (girl not present and NO
key)". But perhaps the pigeon saw something else in common among the
pictures in which the girl was present, other than recognizing that a
particular human being was there (you brought up the same point).
Herrnstein evidently assumed that because HE saw the same person in the
YES pictures, the pigeon also saw the same person. This explanation is
_sufficient_ to explain the results, but is it _necessary_? A
considerably more careful experiment would have to be done to answer
that question, at least as thorough as the experiments that exposed the
Clever Hans hoax.
-----------------------------------------------------------------------
Bruce Abbott (951128.1750 EST) --
People are reluctant to give up a theory that seems to make sense
of a great number of observations in favor of a new view whose
ability to account for at least some of these phenomena has yet to
be demonstrated.
If you allow people to know the results of an experiment before they are
asked to explain them, most theories can be made to work.
An example: How about autoshaping? Pigeons don't normally peck at
pigeon-keys, so researchers used to have to spend time with each
new bird training it to peck the key using the technique of
successive approximations, otherwise known as response "shaping."
Then in 1968, Brown and Jenkins discovered a procedure that
automatically got the pigeons to peck at the key. The key was
illuminated for 20 seconds, and this was immediately followed by
grain reinforcement (no keypecking required). The keylight then
went dark for about 40 seconds. This sequence was then repeated,
over and over. If the pigeon pecked at the key, reinforcement was
delivered immediately. By the end of the session, the pigeon was
pecking happily away and thereby earning its meals. Both the
procedure and the phenomenon were labeled "autoshaping," for
"automatic shaping."
At what point was reinforcement theory used to predict this result? Was
it before or after the phenomenon had been observed? From what you say,
it was afterward:
The explanation that was eventually developed for autoshaping goes
like this. Hungry pigeons normally exhibit an innate (unlearned)
response toward things that look like they might be edible: they
peck at them. Repeated pairing of key-illumination with grain
produces classical conditioning of this pecking response. The
pigeon therefore directs its pecking at the CS (illuminated key),
which has become a kind of substitute stimulus for eliciting the
response. This gets the pigeon pecking at the key. The keypeck
closes the key's electrical circuit to produce immediate grain
reinforcement as in traditional operant conditioning, further
strengthening the response in the presence of the illuminated key,
which thus comes to serve as a discriminative stimulus for operant
keypecking (as opposed to the classically conditioned variety).
This explanation works because new assumptions are introduced to make it
work. An illuminated key looks as if it might be edible. Classical
conditioning is the cause of the initial key-pecking. The role of the
illuminated key changed from CS to discriminative stimulus.
None of these assumptions is itself the outcome of a test conducted
during the same experiment. Rather, they are brought in in order to make
the theory explain what has already happened, because the theory, by
itself, offers no explanation and could not have predicted this effect.
If you can pick and choose your imagined facts and which other theories
will be invoked when needed (and then be dropped again when no longer
needed), you don't have an explanation at all; you have a plausible
story and nothing more. You can "predict" only after you know what
actually happened, which is no prediction.
Suppose I were to ask that reinforcement theory explain behavior in a
new situation strictly on the basis of assumptions which are stated
beforehand, including any adjustable parameters you wish to provide. How
many phenomena would you then claim that reinforcement theory has
explained and can explain in the future? I think that the number would
shrink dramatically. It might even turn out to be zero.
The point is that autoshaping can be understood within the
framework of what we (think) we already know about conditioning,
without adding any new rules, whereas it is not clear that the same
can be said with respect to the framework provided by PCT.
But you DO have to add new rules, when and as needed. What determines
beforehand whether you will use classical conditioning as part of an
explanation? What determines when you will assume that a lighted key is
treated as something edible? On what basis will you decide that a CS has
turned into a discriminative stimulus? You have so many wild cards up
your sleeve that you can hardly go wrong -- if you know in advance what
results you will have to account for.
In PCT, when we set up a new experiment we normally predict its outcome
from the model before it is ever tried. I have written up many an
experiment before ever doing it, leaving only blanks for specific
numbers to be obtained from the data, and space for the figures which I
have already described qualitatively in the text. Very seldom does the
actual result depart from what we expected from the model. The
predictions of PCT are true predictions; the explanations given after
the fact are the same as those given before the fact, with no new
assumptions except, of course, in the cases where we simply used the
wrong model and had to start over. But those cases are important too,
because all predictions, if they are really predictions, can go wrong.
An explanation given after the fact, with new assumptions allowed that
were not originally part of the explanation, is only a fabrication, with
no scientific value. It has very little danger of failing, because you
can always come up with a new assumption that makes the theory right
after all. The problem is that you never know WHICH assumptions to make
before the experiment is done. You have to wait until you see the
result, and THEN choose the assumptions that make the explanation fit.
That is truly cargo-cult science: the appearance without the substance.
-----------------------------------------------------------------------
Rick Marken (951128.2130) --
... the thermostat experiment Bill Powers described recently. In
the thermostat experiment the experimenter tries to determine
whether a variable -- the degree of openness of the window --
affects the response of the thermostat -- whether the heater goes
on or off. The experimenter finds that the heater goes on when the
window is open and off when it is closed.
You must be referring to an older post. In my latest thermostat puzzle,
the window remained closed. Only the shades and drapes were opened to
let the sunshine in. And anyway, it was cold outside: what effect would
cold air have on the hot-air output of the register? Hint: the room got
cold when the shades were raised because the register output went to
zero.
------------------------------------------------------------------------
i.kurtzer (951128.2200) --
To label an empirical fact this would not be a boon but a
tautology; we are trying to explain a fact (control) for which one
explanation is what you said previously. Maybe we should
commission some of the cyberneticists who gave us such gems as
"autopoiesis" and "meta-system transition" or "Negentropy" (just
kidding ;-)). i am partial to a term that includes "stability"
since that might end the present or absent discussions that ensue
with control (and is not solved by invoking system gain); maybe
"hyper-stability" ?
Maybe, since we're breaking with traditions, we should flout the
tradition of naming things with Greek or Latin roots. What's the Inuit
word for control? Avery? Bruce Nevin? Anybody?
----------------------------
Also, i knew what the persons meant when they said "natural"; it's
just that that word leaves me feeling ill ... Just a pet peeve,
but a rectifiable one.
Sorry to hear that you've turned into an S-R device. However, I'm glad
to hear that it's a rectifiable problem. Let me know how it works out.
-----------------------------------------------------------------------
Best to all,
Bill P.