[From Bill Powers (980101.0754 MST)]
Hmm. Looks just like 1997 out there. Oh, well.
Martin Taylor 971231 17:45 --
S.S. Stevens actually thought that
perceptions were quantized; a control-system experiment would show that
they are not.

It's interesting to know that even if the perceptual input function is
quantized, a control system experiment would show that it is not. Why
bother with the experiment, if the result would be the same no matter
what the fact of the matter investigated turned out to be?
I should have said "low-order" perceptions. Of course higher-order
perceptions such as categories are quantized. But as far as I know,
lower-order perceptions are continuously variable. The lower and upper
points between which a JND is measured would not repeat from one experiment
to another (unless the experimenter forced them to repeat). At least they
didn't when other people tried to replicate S.S. Stevens' results.
Signal-to-noise ratio is not the same thing as perceptions
that occur in discrete steps.

No, indeed. That's why a d' (d-prime) measure is substituted for the notion
of jnd in respectable psychophysics. Nevertheless, this says nothing about
whether or not perception is quantized. That's a different issue.
I should know better than to use physicists' terms. What S. S. Stevens thought
was that subjective estimates of stimulus magnitude were such that the true
plot would be a staircase rather than a smooth line. In other words, he
asserted that changes in perceived stimulus magnitude would occur only at
specific repeatable discretely-separated magnitudes of the stimulus.
(Parenthetically, d' is a measure of the separability of the signal state
from the non-signal state. It is more akin to the magnitude of the
perceptual signal than it is to "the slope of the input function"
(Bill Powers 971230.1012 MST). "Bias" (usually labelled "beta") is the
willingness of the person to assert that a signal is present. It is
not the threshold, though changes in beta are associated with changes
in what would be called a threshold in a casual "is it there or is it not"
study.)
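The quantities Martin describes can be made concrete. Below is a minimal sketch of the standard equal-variance Gaussian signal-detection computation of d' and beta from hit and false-alarm rates; the rates used are invented for illustration, not data from any experiment.

```python
# Sketch: d' and beta under the standard equal-variance Gaussian
# signal-detection model. The hit and false-alarm rates below are
# made-up example values.
import math
from statistics import NormalDist

def d_prime_and_beta(hit_rate, fa_rate):
    z = NormalDist().inv_cdf           # inverse of the standard normal CDF
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa             # separability of signal from noise
    # beta: likelihood ratio at the criterion (response bias)
    beta = math.exp((z_fa ** 2 - z_hit ** 2) / 2)
    return d_prime, beta

d, b = d_prime_and_beta(0.85, 0.20)    # d ~ 1.88, b ~ 0.83
```

A beta below 1, as here, corresponds to a liberal observer: one more willing to assert that a signal is present.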
Just out of curiosity, what in standard parlance would correspond to the
slope of the stimulus-magnitude-to-perception function?
Psychophysical experiments _are_ control system experiments, though not in
the sense you intend. In a psychophysical experiment (other than one using
the method of adjustment) the subject is unable to influence what the
experimenter thinks of as the "stimulus," but is able to control an
important perception--the perception of the experimenter's satisfaction
with the subject's performance. The _mechanism_ for controlling this
perception is consistency of relation between stimulus and response. The
overt experiment is S-R. If the subject can hear the tone in an auditory
sensitivity experiment and randomly says "yes" or "no", the experimenter
will not be perceived as very satisfied.
Unfortunately, the degree of the experimenter's satisfaction is a second
perception with its own unknown relation between actual and perceived
satisfaction. This has been the problem with psychophysical experiments all
along. We can estimate the scaling of one perception relative to another,
but there is no absolute scale. If all perceptual input functions share the
same nonlinearity (in addition to differences between functions of
different kinds), there is no psychophysical way to find out what that
nonlinearity is.
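That last point can be demonstrated numerically. In the sketch below, f1, f2, and the shared nonlinearity g are all arbitrary invented functions: a matching experiment (adjust the second stimulus until its perception matches the first) finds exactly the same external match point whether or not both perceptions pass through g, so a shared nonlinearity leaves no psychophysical trace.

```python
# Demonstration that a nonlinearity shared by all input functions is
# invisible to matching experiments. f1, f2, g are illustrative only.
import math

f1 = lambda s: 3.0 * s ** 0.5          # one hypothetical input function
f2 = lambda s: 1.5 * s ** 0.8          # another hypothetical input function
g  = lambda p: math.log(1.0 + p)       # hypothetical shared nonlinearity

def match(target, f, lo=0.0, hi=1e6, iters=60):
    """Find s with f(s) == target by bisection (f monotone increasing)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return (lo + hi) / 2

s1 = 4.0
plain   = match(f1(s1), f2)                      # match point without g
wrapped = match(g(f1(s1)), lambda s: g(f2(s)))   # same g on both sides
# plain and wrapped agree: the shared nonlinearity cannot be detected
```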
There are real problems with a lot of psychophysical studies, but not being
control system experiments is not among them.
What a control-system experiment (method of adjustment) can do that no
other kind of experiment can do is reveal the actual level of input that
the subject considers to have some characteristic. If you say "Now make the
stimulus twice as large as before," you can measure the external correlate
of what the subject perceives as "twice as large." This doesn't tell you
what the ratio of perceptions is, but it does tell you what the subject
means by the verbal statement "twice as large" in terms of the external
measurement.
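As a toy illustration of that last sentence: if the input function happened to follow a power law p = k * s**n (Stevens' own proposal), the external correlate of "twice as large" would be the stimulus ratio 2**(1/n). The exponents below are illustrative textbook values for Stevens' power law, not measurements from any particular subject.

```python
# What "make it twice as large" implies externally, assuming (purely
# for illustration) a power-law input function p = k * s**n.
def stimulus_ratio_for_doubling(n):
    """External stimulus ratio that doubles p when p = k * s**n."""
    return 2.0 ** (1.0 / n)

for name, n in [("brightness", 0.33), ("loudness", 0.67), ("shock", 3.5)]:
    print(f"{name}: external ratio ~ {stimulus_ratio_for_doubling(n):.2f}")
```

Under these assumed exponents, "twice as bright" would correspond to roughly an eightfold change in luminance, while "twice the shock" would correspond to only about a 22 percent change in current.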
If you try to do the same thing open-loop, you may or may not get the same
results. The only way to find out is to re-do all the open-loop experiments
in a closed-loop form and see if you do get the same results.
One way of studying any
feedback structure is to break the loop somewhere and to look at the
input-output relations among the components of the loop. That's what
a psychophysical experiment (other than method of adjustment) does.
Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.
Think about trying this with a tracking experiment. If you actually break
the loop in the middle of a run and present a convincingly similar picture
of cursor and target, you will see essentially the same output actions for
some short period of time. But the difference will soon magnify and the
person will discover that control has actually been lost. At that point
you'll see the control handle start to produce experimental wiggles, and
then go into some completely different mode of movement, or stop.
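A toy simulation, under invented gains and time constants, shows the first part of this story: while the loop is intact the tracking error stays small, and once the cursor the controller "sees" is frozen, the real error grows rapidly. (The simulated controller, unlike a person, never notices the loss of control or switches modes; it only shows how fast the divergence sets in.)

```python
# Toy tracking run with the loop broken halfway through. A
# leaky-integrator controller tracks a moving target; after step 1000
# the cursor it perceives is frozen while the real cursor still
# follows the handle. All parameters are illustrative.
import math

dt, gain, leak = 0.01, 50.0, 0.05
handle, real_cursor, seen_cursor = 0.0, 0.0, 0.0
errors_closed, errors_open = [], []

for step in range(2000):
    t = step * dt
    target = math.sin(0.5 * t)                 # moving target
    if step < 1000:                            # loop intact
        seen_cursor = real_cursor              # after: frozen picture
    error = target - seen_cursor               # perceived error
    handle += dt * (gain * error - leak * handle)
    real_cursor = handle                       # environment: cursor = handle
    (errors_closed if step < 1000 else
     errors_open).append(abs(target - real_cursor))

# Closed-loop, the real error stays small; once the loop is broken,
# the real error grows even though the perceived picture looks normal.
```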
If you simply tried to measure the dependence of handle movement on
cursor-target separation without first having the person actively tracking
for some considerable time, the results would be completely meaningless.
What would you tell the subject to do? "When you see the cursor and target,
move the handle in the way you think it should move?" It would be very hard
to avoid giving instructions that tell the subject how you want to see the
handle moving.
The
loop component we call "environmental feedback function" is broken,
permitting the internal component to be studied in isolation. I think
that is useful, and the results help to establish parameters for the
functioning of completed control loops using the same perceptions.
This would be possible only in special circumstances.
The question arises as to whether the perceptual input functions operate
the same way when the resulting perception is being controlled as when it
isn't. This issue is not ordinarily considered within HPCT, since normally
the perceptual input function is taken to be whatever it is, and only the
magnitude of its output is controlled. But it is an issue, one that might
invalidate the uncritical use of the results of psychophysical studies
to assess the elements of related control loops.
Yes. But it's not only the input function you have to worry about. Suppose
a tracking system could go on working in the same way without the
environmental feedback function. If you were to present the cursor a
certain distance from the target, the output action would be quite extreme;
it would become something like 30 times as large as needed to correct that
error, and it would change along something like a decelerating exponential
path. I think you must know that we would actually observe nothing of the
sort. I can tell you we wouldn't, because I've tried it, but I suppose that
my observations could be considered biased. You should try it yourself. You
can't just break the loop when a living control system is involved, and
assume that you're still looking at the same system.
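The numbers in the paragraph above follow from assuming a leaky-integrator output function, dh/dt = k*e - lam*h. Held open-loop on a constant error, the handle would rise along a decelerating exponential toward the static asymptote (k/lam) times the error; with the illustrative k and lam below, that asymptote is 30 times the one-unit movement that would have corrected the error.

```python
# Numerical check of the open-loop claim, assuming a leaky-integrator
# output function dh/dt = k*e - lam*h. k and lam are illustrative.
import math

k, lam, e, dt = 6.0, 0.2, 1.0, 0.001
h = 0.0
trace = []
for step in range(int(20.0 / dt)):      # 20 simulated seconds
    h += dt * (k * e - lam * h)
    trace.append(h)

asymptote = (k / lam) * e               # 30x the needed correction
# trace follows (k/lam)*e*(1 - exp(-lam*t)): a decelerating exponential
```

Nothing like this extreme, ever-rising output is seen in real subjects, which is the point: breaking the loop changes the system being observed.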
------------------------------------
To whom it may concern:
I don't think that my real message is getting across here. Some people are
acting as if I said that all psychological facts discovered through
traditional methods are wrong. That's not my point at all. What I'm saying
is that because control-system considerations were not taken into account
when those experiments were done, WE DON'T KNOW WHICH RESULTS ARE RIGHT AND
WHICH ARE WRONG. There might be selected cases in which we could review the
methodology and look at the data and conclude that a control-system
experiment was in fact (unwittingly) done, or that if the loop were closed
we could reasonably expect that the results would be the same. But even to
do that requires that we review everything.
My mother used to come up with little jokes that had a point I didn't get
until much later in life. One was about a bank teller who was counting a
stack of dollar bills that was supposed to have $300 in it. He counted
"one, two, three, four ... 151, 152, 153 ... well, it's right so far, it
must be right the rest of the way."
Perhaps my point would be easier to accept if we stipulate that we're
talking only about _other people's_ results in fields of psychology _other
than your own_. And specifically, whoever is reading this, we're exempting
from consideration any experiment you have done yourself, or have publicly
accepted as methodologically correct and factually reliable. All I'm asking
is that we revisit experiments done in other fields with the idea of seeing
whether control processes were properly taken into account, and whether a
re-design of the experiments as closed-loop experiments might lead to
different outcomes.
And in all cases where there is any doubt, I'm asking that we either
actually re-do the experiments using PCT methods, or put the findings on
the shelf until such time as the tests can be done, not using them as facts
until then.
Does that sound like a reasonable proposal?
Best,
Bill P.