Bridges

[From Bill Powers (980101.0754 MST)]

Hmm. Looks just like 1997 out there. Oh, well.

Martin Taylor 971231 17:45 --

S.S. Stevens actually thought that
perceptions were quantized; a control-system experiment would show that
they are not.

It's interesting to know that even if the perceptual input function is
quantized, a control-system experiment would show that it is not. Why
bother with the experiment, if the result would be the same no matter
what the fact of the matter is?

I should have said "low-order" perceptions. Of course higher-order
perceptions such as categories are quantized. But as far as I know,
lower-order perceptions are continuously variable. The lower and upper
points between which a JND is measured would not repeat from one experiment
to another (unless the experimenter forced them to repeat). At least they
didn't when other people tried to replicate S.S. Stevens' results.

Signal-to-noise ratio is not the same thing as perceptions
that occur in discrete steps.

No, indeed. That's why a d' (d-prime) measure is substituted for the notion
of jnd in respectable psychophysics. Nevertheless, this says nothing about
whether or not perception is quantized. That's a different issue.

I should know better than to use physicists' terms. What S. S. Stevens thought
was that subjective estimates of stimulus magnitude were such that the true
plot would be a staircase rather than a smooth line. In other words, he
asserted that changes in perceived stimulus magnitude would occur only at
specific repeatable discretely-separated magnitudes of the stimulus.

(Parenthetically, d' is a measure of the separability of the signal state
from the non-signal state. It is more akin to the magnitude of the
perceptual signal than it is to "the slope of the input function"
(Bill Powers 971230.1012 MST). "Bias" (usually labelled "beta") is the
willingness of the person to assert that a signal is present. It is
not the threshold, though changes in beta are associated with changes
in what would be called a threshold in a casual "is it there or is it not"
study.)
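For concreteness, the d' and beta that Martin describes can be computed from a subject's hit and false-alarm rates in a yes/no detection task. This sketch illustrates the standard equal-variance Gaussian model of signal detection theory, which is an assumption of that textbook treatment rather than anything established in this thread:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """d' (separability of the signal state from the non-signal state) and
    beta (response bias) under the equal-variance Gaussian model."""
    z = NormalDist().inv_cdf            # inverse of the standard normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf                   # distance between the two distributions
    # beta = likelihood ratio at the criterion; > 1 means a conservative
    # ("no"-biased) subject, < 1 a liberal one
    beta = NormalDist().pdf(zh) / NormalDist().pdf(zf)
    return d_prime, beta

# A subject with 84% hits and 16% false alarms: d' near 2, unbiased (beta near 1)
d, b = sdt_measures(0.84, 0.16)
```

Note that changing beta alone shifts what a casual "is it there or not" study would call the threshold, while leaving d' (the separability) untouched, which is Martin's point above.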

Just out of curiosity, what in standard parlance would correspond to the
slope of the stimulus-magnitude-to-perception function?

Psychophysical experiments _are_ control system experiments, though not in
the sense you intend. In a psychophysical experiment (other than one using
the method of adjustment) the subject is unable to influence what the
experimenter thinks of as the "stimulus," but is able to control an
important perception--the perception of the experimenter's satisfaction
with the subject's performance. The _mechanism_ for controlling this
perception is consistency of relation between stimulus and response. The
overt experiment is S-R. If the subject can hear the tone in an auditory
sensitivity experiment and randomly says "yes" or "no", the experimenter
will not be perceived as very satisfied.

Unfortunately, the degree of the experimenter's satisfaction is a second
perception with its own unknown relation between actual and perceived
satisfaction. This has been the problem with psychophysical experiments all
along. We can estimate the scaling of one perception relative to another,
but there is no absolute scale. If all perceptual input functions share the
same nonlinearity (in addition to differences between functions of
different kinds), there is no psychophysical way to find out what that
nonlinearity is.
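The claim that a nonlinearity shared by all perceptual input functions is psychophysically undiscoverable can be checked numerically. In the sketch below (the functions and gain constants are invented purely for illustration), a cross-modal matching task yields exactly the same external match point whether the shared input function is linear or strongly compressive:

```python
import math

def match(s1, f, k1=2.0, k2=0.5):
    """Find s2 such that f(k2*s2) == f(k1*s1), by bisection.
    f is the (unknown) nonlinearity shared by both input functions;
    k1 and k2 are the modality-specific gains."""
    target = f(k1 * s1)
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(k2 * mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

linear = lambda x: x
compressive = lambda x: math.log(1 + x)   # an arbitrary shared nonlinearity

# The match point is s2 = (k1/k2)*s1 = 4*s1 regardless of f:
m_lin = match(3.0, linear)
m_log = match(3.0, compressive)
```

Any monotonic f cancels out of the comparison, so matching data constrain only the relative scaling of the two external variables, never the common nonlinearity itself.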

There are real problems with a lot of psychophysical studies, but not being
control system experiments is not among them.

What a control-system experiment (method of adjustment) can do that no
other kind of experiment can do is reveal the actual level of input that
the subject considers to have some characteristic. If you say "Now make the
stimulus twice as large as before," you can measure the external correlate
of what the subject perceives as "twice as large." This doesn't tell you
what the ratio of perceptions is, but it does tell you what the subject
means by the verbal statement "twice as large" in terms of the external
measurement.

If you try to do the same thing open-loop, you may or may not get the same
results. The only way to find out is to re-do all the open-loop experiments
in a closed-loop form and see if you do get the same results.

One way of studying any
feedback structure is to break the loop somewhere and to look at the
input-output relations among the components of the loop. That's what
a psychophysical experiment (other than method of adjustment) does.

Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.

Think about trying this with a tracking experiment. If you actually break
the loop in the middle of a run and present a convincingly similar picture
of cursor and target, you will see essentially the same output actions for
some short period of time. But the difference will soon magnify and the
person will discover that control has actually been lost. At that point
you'll see the control handle start to produce experimental wiggles, and
then go into some completely different mode of movement, or stop.
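A minimal simulation (all parameters invented for illustration) shows what breaking the loop mid-run looks like. While the loop is closed, a simple integrating controller tracks a moving target with small error; once the cursor is frozen, the error grows and the model's output integrates without limit. A real subject, as noted above, would instead notice the loss of control and switch modes:

```python
def run(steps=200, break_at=100, gain=0.3):
    """One-dimensional compensatory tracking with an integrating output.
    After break_at, handle movements stop affecting the cursor."""
    cursor, handle = 0.0, 0.0
    errors, handles = [], []
    for t in range(steps):
        target = 0.05 * t                 # steadily moving target
        error = target - cursor           # perceived cursor-target separation
        handle += gain * error            # integrating output function
        if t < break_at:
            cursor = handle               # loop closed: handle moves the cursor
        # else: loop broken -- cursor frozen; the model keeps integrating
        errors.append(abs(error))
        handles.append(handle)
    return errors, handles

errs, hnds = run()
closed_err = errs[99]    # small: control is working
open_err = errs[-1]      # large: the difference has magnified
```

The handle position after the break grows far past anything seen during control, which is the "quite extreme" open-loop output discussed below.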

If you simply tried to measure the dependence of handle movement on
cursor-target separation without first having the person actively tracking
for some considerable time, the results would be completely meaningless.
What would you tell the subject to do? "When you see the cursor and target,
move the handle in the way you think it should move?" It would be very hard
to avoid giving instructions that tell the subject how you want to see the
handle moving.

The
loop component we call "environmental feedback function" is broken,
permitting the internal component to be studied in isolation. I think
that is useful, and the results help to establish parameters for the
functioning of completed control loops using the same perceptions.

This would be possible only in special circumstances.

The question arises as to whether the perceptual input functions operate
the same way when the resulting perception is being controlled as when it
isn't. This issue is not ordinarily considered within HPCT, since normally
the perceptual input function is taken to be whatever it is, and only the
magnitude of its output is controlled. But it is an issue, one that might
invalidate the uncritical use of the results of psychophysical studies
to assess the elements of related control loops.

Yes. But it's not only the input function you have to worry about. Suppose
a tracking system could go on working in the same way without the
environmental feedback function. If you were to present the cursor a
certain distance from the target, the output action would be quite extreme;
it would become something like 30 times as large as needed to correct that
error, and it would change along something like a decelerating exponential
path. I think you must know that we would actually observe nothing of the
sort. I can tell you we wouldn't, because I've tried it, but I suppose that
my observations could be considered biased. You should try it yourself. You
can't just break the loop when a living control system is involved, and
assume that you're still looking at the same system.
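Bill's open-loop prediction can be made concrete. If the output function is a leaky integrator with a loop gain around 30 (the illustrative figure from the text above; the time constant is likewise an assumption), a fixed cursor-target error drives the output along a decelerating exponential toward roughly 30 times the correction actually needed:

```python
def open_loop_output(error, steps, gain=30.0, tau=20.0):
    """Leaky-integrator output function driven by a constant error signal.
    Closed-loop, this error would be corrected almost immediately; open-loop,
    the output creeps toward gain*error along a decelerating exponential."""
    out = 0.0
    history = []
    for _ in range(steps):
        out += (gain * error - out) / tau   # leaky integration toward gain*error
        history.append(out)
    return history

h = open_loop_output(error=1.0, steps=300)
# successive increments shrink: a decelerating exponential approach to 30x
```

This is the response that, per the text, is never actually observed when the loop is broken on a living subject.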

···

------------------------------------
To whom it may concern:

I don't think that my real message is getting across here. Some people are
acting as if I said that all psychological facts discovered through
traditional methods are wrong. That's not my point at all. What I'm saying
is that because control-system considerations were not taken into account
when those experiments were done, WE DON'T KNOW WHICH RESULTS ARE RIGHT AND
WHICH ARE WRONG. There might be selected cases in which we could review the
methodology and look at the data and conclude that a control-system
experiment was in fact (unwittingly) done, or that if the loop were closed
we could reasonably expect that the results would be the same. But even to
do that requires that we review everything.

My mother used to come up with little jokes that had a point I didn't get
until much later in life. One was about a bank teller who was counting a
stack of dollar bills that was supposed to have $300 in it. He counted
"one, two, three, four ... 151, 152, 153 ... well, it's right so far, it
must be right the rest of the way."

Perhaps my point would be easier to accept if we stipulate that we're
talking only about _other people's_ results in fields of psychology _other
than your own_. And specifically, whoever is reading this, we're exempting
from consideration any experiment you have done yourself, or have publicly
accepted as methodologically correct and factually reliable. All I'm asking
is that we revisit experiments done in other fields with the idea of seeing
whether control processes were properly taken into account, and whether a
re-design of the experiments as closed-loop experiments might lead to
different outcomes.

And in all cases where there is any doubt, I'm asking that we either
actually re-do the experiments using PCT methods, or put the findings on
the shelf until such time as the tests can be done, not using them as facts
until then.

Does that sound like a reasonable proposal?

Best,

Bill P.

[From Rick Marken (980101.0930)]

Bill Powers (980101.0754 MST) --

What S. S. Stevens thought was that subjective estimates of
stimulus magnitude were such that the true plot would be a
staircase rather than a smooth line.

I'm not familiar with this aspect of S. S. Stevens' psychophysics.
I believe that neither Stevens nor Fechner viewed the perceptual
representation of stimulus magnitude as discrete. Fechner concluded
that perception (p) was a _continuous_ log function of stimulus
magnitude (s); Fechner's law -- p = k log (s) -- was derived under
the _assumption_ that just noticeable differences in stimulus magnitude
(Weber's jnd's, measured in physical units) are subjectively equal
(same size in perceptual units). Stevens wanted to measure the
relationship between p and s "directly" -- without assuming that
jnd's are perceptually equal. He developed a very simple
technique called "magnitude estimation" to do this; in magnitude
estimation a person uses numbers (or some other perceptual
variable) to indicate the perceived magnitude of s. The results
of magnitude estimation of various physical stimulus dimensions
showed that magnitude estimates varied as a power function of s.
Stevens assumed that the magnitude estimates were direct measures
of perceptual magnitude and concluded that the actual relationship
between p and s was p = s ^ k -- where the exponent, k, varied
across stimulus dimensions.
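The two laws Rick summarizes are easy to contrast numerically: under Fechner's p = k log(s), equal stimulus ratios produce equal perceptual increments, while under Stevens' p = s^k they produce equal perceptual ratios. A small sketch (the constants are illustrative; 0.33 is in the range Stevens reported for brightness):

```python
import math

def fechner(s, k=1.0):
    """Fechner: p = k * log(s). Doubling s always *adds* the same amount."""
    return k * math.log(s)

def stevens(s, k=0.33):
    """Stevens: p = s ** k. Doubling s always *multiplies* p by the same factor."""
    return s ** k

# Under Fechner, doubling s adds a constant increment to p:
inc1 = fechner(20) - fechner(10)
inc2 = fechner(200) - fechner(100)
# Under Stevens, doubling s multiplies p by a constant ratio:
rat1 = stevens(20) / stevens(10)
rat2 = stevens(200) / stevens(100)
```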

Psychophysics will unquestionably benefit from an understanding
of control theory and the methods of studying perceptual control
implied by that theory. But I don't think Fechner and Stevens
can be blamed for the non-analog view of lower level perception
that seems to be prevalent in psychology today. I think the
real culprit was the computer model of the brain, with its
emphasis on "information processing" and "symbol manipulation".
Stevens was actually a rather analog guy -- as, of course, was
Fechner. Both did their work well before the unfortunate "digital
revolution" in psychology.

Martin Taylor (971231 17:45) --

Psychophysical experiments _are_ control system experiments

This is not true at all. Psychophysical experiments are no more
control system experiments than are any other kinds of experiments
in psychology. Of course, people are controlling variables in
psychophysical (and _all_) experiments -- because people control.
The reason these are not control _experiments_ is that you have
no way of telling _what_ subjects are controlling. There is no
systematic attempt to test (by disturbing hypothetical controlled
variables) what perceptions are under control.

We can identify possible controlled variables in _any_
psychological experiment. This is particularly easy to do
in operant experiments where the organism is given the ability
to produce certain results. But identifying _possible_ controlled
variables (like "seeing the experimenter pleased") is nothing
like finding out what variable(s) the organism is _actually_
controlling. It is simply not a control experiment unless we
can identify a controlled variable, monitor its behavior
under disturbance and monitor the way in which the subject
manages to keep the variable under control. That is why there
are hardly any conventional psychology experiments _of any kind_
that are _control experiments_, Bruce Abbott notwithstanding.

Happy New Year

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (980101.1217 MST)]

Rick Marken (980101.0930)--

I'm not familiar with this aspect of S. S. Stevens' psychophysics.
I believe that neither Stevens nor Fechner viewed the perceptual
representation of stimulus magnitude as discrete.

I don't have a reference on this; it's something that came up late in
Stevens' career, and as I remember it didn't convince anyone else. If you
had a suitable library handy you could probably dredge it up.

Fechner concluded
that perception (p) was a _continuous_ log function of stimulus
magnitude (s); Fechner's law -- p = k log (s) -- was derived under
the _assumption_ that just noticeable differences in stimulus magnitude
(Weber's jnd's, measured in physical units) are subjectively equal
(same size in perceptual units).

I don't think that can be established. The way you measure JNDs is to make
a change in the stimulus smaller and smaller until the person is wrong half
of the time. But that doesn't tell you how large the perceptual signal is.
Even if the perceptual signal were linear with stimulus magnitude, you'd
see an increasing JND just because of signal statistics.
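Bill's point that a linear perceptual signal could still show a growing JND "just because of signal statistics" can be illustrated. If perception is linear in s but the signal's noise grows with magnitude (a Poisson-like assumption made here purely for illustration, with sd proportional to the square root of s), then the stimulus change needed to reach a fixed discriminability grows with s even though the mean transfer function is a straight line:

```python
import math

def jnd(s, d_prime=1.0):
    """Stimulus increment needed for criterion discriminability d' when the
    perceptual signal is linear in s but its noise sd grows as sqrt(s)."""
    sd = math.sqrt(s)          # Poisson-like signal statistics (assumed)
    return d_prime * sd        # delta-s giving (delta-p)/sd == d'

# The JND grows with magnitude despite a perfectly linear input function:
small = jnd(100.0)
large = jnd(10000.0)
```

So an increasing JND by itself cannot distinguish a compressive input function from a linear one with magnitude-dependent noise.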

Stevens wanted to measure the
relationship between p and s "directly" -- without assuming that
jnd's are perceptually equal. He developed a very simple
technique called "magnitude estimation" to do this; in magnitude
estimation a person uses numbers (or some other perceptual
variable) to indicate the perceived magnitude of s. The results
of magnitude estimation of various physical stimulus dimensions
showed that magnitude estimates varied as a power function of s.

That means only that they vary _relative to each other_ as power functions.
How big is "2"? There's no way to measure that in absolute terms. Maybe all
perceptions are essentially linear functions of stimulus magnitude, but in
converting analog signals to symbolic numbers we insert a power function.
There's just no way to tell. It's a non-question.

... I don't think Fechner and Stevens
can be blamed for the non-analog view of lower level perception
that seems to be prevalent in psychology today. I think the
real culprit was the computer model of the brain, with its
emphasis on "information processing" and "symbol manipulation".
Stevens was actually a rather analog guy -- as, of course, was
Fechner. Both did their work well before the unfortunate "digital
revolution" in psychology.

I agree with all that. Stevens' downfall probably came about through
wanting to say something "digitally correct" in the atmosphere that was
building up around him. Fechner, I think, simply didn't realize that his
methods couldn't uncover a nonlinearity common to _all_ perceptions.

Best,

Bill P.

[Martin Taylor 980102 01:35]

[From Bill Powers (980101.0754 MST)]

Just out of curiosity, what in standard parlance would correspond to the
slope of the stimulus-magnitude-to-perception function?

I think it is sometimes called "the psychophysical function." But hardly
any psychophysics is concerned with that, anyway. Most psychophysics is
concerned with the conditions under which people can tell whether A is
different from B. Is the area of that triangle bigger than of this disk;
was there a beep in that time interval or not; is this colour redder
than that one or not...

Psychophysical experiments _are_ control system experiments, though not in
the sense you intend. In a psychophysical experiment (other than one using
the method of adjustment) the subject is unable to influence what the
experimenter thinks of as the "stimulus," but is able to control an
important perception--the perception of the experimenter's satisfaction
with the subject's performance. The _mechanism_ for controlling this
perception is consistency of relation between stimulus and response. The
overt experiment is S-R. If the subject can hear the tone in an auditory
sensitivity experiment and randomly says "yes" or "no", the experimenter
will not be perceived as very satisfied.

Unfortunately, the degree of the experimenter's satisfaction is a second
perception with its own unknown relation between actual and perceived
satisfaction.

That hardly matters, since it is the subject's perception of the
experimenter's satisfaction that is being compared with the subject's
reference for the experimenter's satisfaction--as with any other
perceptual control. If "behaving well" in the experiment brings the
subject's perception of the experimenter's satisfaction closer to its
reference, the mechanism has served its purpose in controlling the
relevant perception.

···

--------------------

This has been the problem with psychophysical experiments all
along. We can estimate the scaling of one perception relative to another,
but there is no absolute scale.

That's another question entirely, and one that has been of concern since
at least my summer student days in the late 50's. I tried to work around
it in a series of experiments in which I looked for "anchoring effects"
in different perceptual continua--time, grey-scale, colour along an arc
in colour space, location of a dot along a line, location of a nonsense
syllable in an incompletely learned list, and even cardinal number. Some
of these continua were open at one end, some closed at both ends (and as
you may have observed, some even had discrete stimulus possibilities, yet
acted as if the perceptual space was continuous--the opposite of what
you say Stevens thought).

The basis of my experiment was the idea that you propose later in your
message:

Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.

My idea was that when the subject had control of a stimulus it would look
different from when the stimulus was just "presented" by an experimenter.
What I did was to use one of the continua as "stimulus" and another as
"response". In the classic Stevens magnitude estimation, a number would
be the response. I might instead use a number as the stimulus, and ask
the person to make a grey that matched the number. Or to make the
angle of a pointer in a quadrant match the timing of a light flash between
two others. Anyway, the result was clearly that the perception _was_
different when the subject controlled it as compared to when the
experimenter controlled it (though, of course I was not thinking in PCT
terms).

For magnitude estimation studies, then, your comments are clearly justified.
And they might even be justified for the more usual type of psychophysical
studies in which the ability to discriminate is what is tested. However,
whether or not that is in fact so, any study, by any method, that shows
a subject _can_ discriminate A from B also shows that the difference
between A and B could correspond to a controllable attribute of perception.
It provides at least an upper bound on the least ability to distinguish,
and the ability to distinguish is fundamental to the ability to control.
That is why I think that such studies remain useful. And despite what I
say in the next paragraph, I don't think that there is much likelihood
of large discrepancies in the measured ability of people to discriminate,
regardless of the method.

Now, why I think it possible that direct control studies _might_ give
a different result even in discrimination experiments: Experimental
techniques known as "adaptive threshold" methods, especially one called
PEST that I developed (with Doug Creelman), give results that seem to show
subjects to be a bit more sensitive to delicate discriminations than the
results of fixed level studies. In a PEST study, the task gets easier
if the subject is performing worse than a target level, and harder if the
subject is performing better. It's a control study from the experimenter's
viewpoint, with a reference that the subject should get, say, 80% correct.
Under these conditions, the subject gets 80% correct with a more
difficult task than would be the case if the experimenter just randomly
chose a set of difficulty levels and found by interpolation the level
corresponding to 80% correct. The same might be true if the subject
directly controlled the difficulty. It's hard to know. But the differences
in the PEST and fixed-level studies are large only when the discrimination
gets _very_ difficult, and I don't think there would be much difference
between an optimum direct control study and PEST, since the results
with PEST are usually within about 4 or 5 dB of a mathematically ideal
observer, if the subject is well trained, which gives not much room for
further improvement.
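The adaptive idea behind PEST can be sketched with a much simpler rule. The code below is a weighted up-down staircase (Kaernbach's rule), used here only as a stand-in for Taylor and Creelman's actual PEST step-size logic; the simulated observer's psychometric function is likewise an invented assumption. The task gets harder after a correct response and easier after an error, with step sizes chosen so the level settles where performance equals the target percent correct:

```python
import math, random

def pcorrect(level, threshold=0.0, slope=2.0, guess=0.5):
    """Simulated observer: logistic psychometric function for a 2AFC task."""
    p = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess) * p

def staircase(trials=2000, target=0.80, step=0.05, seed=1):
    """Weighted up-down rule: after a correct response the level drops by
    `down`, after an error it rises by `up`; with down/up = (1-p)/p the
    track hovers where pcorrect(level) == target."""
    rng = random.Random(seed)
    down = step * (1.0 - target)
    up = step * target
    level, track = 1.0, []
    for _ in range(trials):
        correct = rng.random() < pcorrect(level)
        level += -down if correct else up
        track.append(level)
    return sum(track[trials // 2:]) / (trials // 2)   # mean of the settled half

converged = staircase()   # level at which the simulated observer gets ~80% correct
```

From the experimenter's viewpoint this is exactly the control loop Martin describes: a reference of 80% correct, disturbed by the subject's trial-to-trial performance, corrected by adjusting difficulty.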

----------------

One way of studying any
feedback structure is to break the loop somewhere and to look at the
input-output relations among the components of the loop. That's what
a psychophysical experiment (other than method of adjustment) does.

Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.

Yes. That's what happens when the person is actually trying to control
the perception in question. But if control is not intended (reference
level zero, as you discussed in another message yesterday), asking the
subject to provide output relating to the stimulus is not _necessarily_
going to provide a different result than would be the case on another
(non-experimental) occasion when the subject is controlling the perception
corresponding to that stimulus.

The question arises as to whether the perceptual input functions operate
the same way when the resulting perception is being controlled as when it
isn't. This issue is not ordinarily considered within HPCT, since normally
the perceptual input function is taken to be whatever it is, and only the
magnitude of its output is controlled. But it is an issue, one that might
invalidate the uncritical use of the results of psychophysical studies
to assess the elements of related control loops.

Yes. But it's not only the input function you have to worry about.

Sure. But if you are trying to study the input function, it is what you
are worrying about at that moment.

------------------------------------
To whom it may concern:

I don't think that my real message is getting across here. Some people are
acting as if I said that all psychological facts discovered through
traditional methods are wrong. That's not my point at all. What I'm saying
is that because control-system considerations were not taken into account
when those experiments were done, WE DON'T KNOW WHICH RESULTS ARE RIGHT AND
WHICH ARE WRONG. There might be selected cases in which we could review the
methodology and look at the data and conclude that a control-system
experiment was in fact (unwittingly) done, or that if the loop were closed
we could reasonably expect that the results would be the same. But even to
do that requires that we review everything.

I can go along with this as stated. But you usually come across a bit
stronger. I think that we can make a fairly easy triage, separating
results/observations into a class that we are reasonably safe to accept,
a class that is obviously untrustworthy or makes no sense in the PCT
way of looking at things, and a class that needs to be examined closely
to see whether the results might be worth rechecking. How large those
three classes are might be a topic of some controversy, given the recent
Marken-Abbott interchange:-) I'd start by putting most results from
discriminative psychophysics into the "safe to accept" category.

Happy New Year.

Martin

i.kurtzer (980102.0300)

Psychophysical experiments _are_ control system experiments, though not in
the sense you intend.

What up, Martin? You knew someone was going to freak on this one. So this must
be some weirdo intro thread, but to what?

In a psychophysical experiment (other than one using
the method of adjustment) the subject is unable to influence what the
experimenter thinks of as the "stimulus," but is able to control an
important perception--the perception of the experimenter's satisfaction
with the subject's performance.

Who knows how important? In determining the believability of the data no one
checks the "degree of importance" a subject has adopted. And so it's always and
only implicit. But this applies to all studies involving willing humans--they
are willing to try to follow instructions, we assume, as a means to an end.
So what gives?

The _mechanism_ for controlling this
perception is consistency of relation between stimulus and response.

For the subject it is percept A and percept B, not percept A and response B;
now, as a tester we have one _more_ unknown.
Before, the tester wanted to know the relation between "objective" state 1
(wavelength) and the subject's percept A (red). But the introduction of an
interpretive protocol lessens the finding's tenability. Now there is also state
2 (tester's heard "yes") and the subject's percept B (yes). This is bad.

The
overt experiment is S-R.

It is S-R to its core. It is S-R because they cast the PIF in behavioral
terms.

There are real problems with a lot of psychophysical studies, but not being
control system experiments is not among them.

This effectively rewrites every conceivable experiment in the history of
psychology as control-system experiments.
Is this the bridge?
Is an experiment a control-system experiment if the authors acted as if it
wasn't? If so, then that's replacing ontology with methodology and makes it an
apologist to all theories thereafter as being anticipating, but misunderstood.

i.

[Martin Taylor 980102 01:35]

[From Bill Powers (980101.0754 MST)]

Just out of curiosity, what in standard parlance would correspond to the
slope of the stimulus-magnitude-to-perception function?

I think it is sometimes called "the psychophysical function." But hardly
any psychophysics is concerned with that, anyway. Most psychophysics is
concerned with the conditions under which people can tell whether A is
different from B. Is the area of that triangle bigger than of this disk;
was there a beep in that time interval or not; is this colour redder
than that one or not...

So most psychophysics experiments are really concerned with perception of
relationships? That's interesting; the impression I always had was that
much lower levels of perception were being investigated. But from your
descriptions, I can see that you're right.

Unfortunately, the degree of the experimenter's satisfaction is a second
perception with its own unknown relation between actual and perceived
satisfaction.

That hardly matters, since it is the subject's perception of the
experimenter's satisfaction that is being compared with the subject's
reference for the experimenter's satisfaction--as with any other
perceptual control. If "behaving well" in the experiment brings the
subject's perception of the experimenter's satisfaction closer to its
reference, the mechanism has served its purpose in controlling the
relevant perception.

I should think that if the subject can tell how well the experimenter is
satisfied by the subject's performance, a great possibility would exist for
the subject to provide the experimenter with the results that are wanted or
expected -- in other words, for the experimenter to control the results
simply by withholding or expressing satisfaction. The "Clever Hans" effect.

This has been the problem with psychophysical experiments all
along. We can estimate the scaling of one perception relative to another,
but there is no absolute scale.

That's another question entirely, and one that has been of concern since
at least my summer student days in the late 50's. I tried to work around
it in a series of experiments in which I looked for "anchoring effects"
in different perceptual continua--time, grey-scale, colour along an arc
in colour space, location of a dot along a line, location of a nonsense
syllable in an incompletely learned list, and even cardinal number. Some
of these continua were open at one end, some closed at both ends (and as
you may have observed, some even had discrete stimulus possibilities, yet
acted as if the perceptual space was continuous--the opposite of what
you say Stevens thought).

In his Power-Law work, Stevens was investigating a continuum. The stepwise
JND work was something else, and I believe, later.

The basis of my experiment was the idea that you propose later in your
message:

Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.

My idea was that when the subject had control of a stimulus it would look
different from when the stimulus was just "presented" by an experimenter.
What I did was to use one of the continua as "stimulus" and another as
"response". In the classic Stevens magnitude estimation, a number would
be the response. I might instead use a number as the stimulus, and ask
the person to make a grey that matched the number. Or to make the
angle of a pointer in a quadrant match the timing of a light flash between
two others. Anyway, the result was clearly that the perception _was_
different when the subject controlled it as compared to when the
experimenter controlled it (though, of course I was not thinking in PCT
terms).

I think that in PCT terms we have to assume that a subject can never just
"produce a response," open-loop. When one continuum is used as the response
measure and another as the stimulus measure, the subject must be
controlling a relationship between the two continua. The controlled
perception is a function of both continua, but the subject can affect only
one of the two contributing lower-level perceptions. If a perception in
continuum 1 is varied by the experimenter, the relationship is disturbed and
is corrected by actions that alter the perception from continuum 2.

The behavioral illusion is that the manipulated perception is a stimulus,
and the action that alters the other perception is a response. The actual
controlled variable is something that is a function of both the stimulus
and the response; here, the relationship between the two (lower-level)
perceptions (the simplest relationship is equality).
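The relationship-control account in the last two paragraphs can be simulated with a few lines. The subject controls the difference between the two continua against a reference of zero; the experimenter varies continuum 1; the recorded "response" on continuum 2 ends up mirroring the "stimulus" one-for-one, producing the S-R appearance. All parameters here are illustrative:

```python
def session(stimuli, gain=0.5, settle=50):
    """Subject controls the relation r = p1 - p2 at reference 0 by
    adjusting p2. The experimenter sets p1 on each trial; the recorded
    'response' is where p2 settles."""
    p2 = 0.0
    responses = []
    for p1 in stimuli:
        for _ in range(settle):
            error = (p1 - p2) - 0.0     # controlled relation minus its reference
            p2 += gain * error          # action that reduces the error
        responses.append(p2)
    return responses

stimuli = [1.0, 3.0, 2.0, 5.0]
resp = session(stimuli)
# responses track the stimuli: the observed regularity reflects the
# controlled relationship, not a stimulus-to-response transfer function
```

This is the behavioral illusion in miniature: the apparent S-R law is really the inverse of the relationship the subject is controlling.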

When a number is used as the response, it is really (according to PCT) a
perception of magnitude (elicited by the numeral or spoken word) that is
under control at the lower level. Hence my question yesterday: "How big is
'2'"? Between hearing or seeing the number-symbol and getting a sense of
analog magnitude there is a perceptual transformation. The subject utters
the name of a number such that the resulting sense of magnitude seems equal
to the sense of magnitude of some other perception. We are therefore still
comparing two perceptual transformations, with any properties common to
both of them being invisible.

For magnitude estimation studies, then, your comments are clearly justified.

Good. I'm glad that we agree that this is a case where re-examination of an
older experiment leads to a different interpretation. The observations
themselves are not altered in this case, but the conclusions are definitely
altered by PCT analysis.

And they might even be justified for the more usual type of psychophysical
studies in which the ability to discriminate is what is tested. However,
whether or not that is in fact so, any study, by any method, that shows
a subject _can_ discriminate A from B also shows that the difference
between A and B could correspond to a controllable attribute of perception.
It provides at least an upper bound on the smallest discriminable
difference, and the ability to distinguish is fundamental to the ability
to control.
That is why I think that such studies remain useful. And despite what I
say in the next paragraph, I don't think that there is much likelihood
of large discrepancies in the measured ability of people to discriminate,
regardless of the method.

The experiments as you describe them, which involve comparing two different
perceptions, would seem to be tests of relationship perceptions, rather
than basic abilities to discriminate, say, intensity changes in a single
level-one perception. To test the ability to discriminate small changes in
intensity, all we need to do is give the person control over a single
intensity perception and determine the smallest disturbance that will
produce a corrective action. The concept of a hierarchy of perceptions
suggests that this kind of discrimination is quite different from the kind
in which a person tries to say whether, for example, a light intensity is
greater than a sound intensity, or a sound intensity is greater than the
rate of rotation of something.
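The proposed test can be sketched as a simulation (mine; the dead zone standing in for a discrimination limit, and all other constants, are assumptions for illustration): give the model control of a single intensity, apply step disturbances of increasing size, and find the smallest step that provokes any corrective action at all.

```python
# Hedged sketch of the proposed test (illustrative constants throughout):
# the input function has a dead zone of +/-0.5 units standing in for the
# system's discrimination limit. Sub-threshold disturbances never provoke
# a corrective action; the smallest step that does recovers the limit.

def responds_to(step, dead_zone=0.5, steps=100, gain=10.0, slow=0.05):
    o, moved = 0.0, 0.0
    for _ in range(steps):
        qi = o + step                              # disturbed controlled quantity
        p = qi if abs(qi) > dead_zone else 0.0     # sub-threshold input is invisible
        o += slow * (gain * (0.0 - p) - o)         # leaky-integrator output
        moved = max(moved, abs(o))
    return moved > 1e-6                            # any corrective action at all?

# Scan step disturbances from 0.1 to 2.0; the first one that moves the
# output marks the measured threshold (here, just above the dead zone).
threshold = min(s / 10.0 for s in range(1, 21) if responds_to(s / 10.0))
```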

I think this discussion gives us an idea of just how PCT is likely to alter
our view of previous experimental results. There are many traditional
categories, like "discrimination," which lump together a wide range of
phenomena that are seen as different in PCT. As soon as we introduce the
idea of a hierarchy of perceptions, we can no longer say that
discrimination is discrimination is discrimination. We might expect JNDs
for intensity perception to be quite different from JNDs for relationship
or logical perceptions, even though the same input stimuli are involved.

Now, why I think it possible that direct control studies _might_ give
a different result even in discrimination experiments: Experimental
techniques known as "adaptive threshold" methods, especially one called
PEST that I developed (with Doug Creelman), give results that seem to show
subjects to be a bit more sensitive to delicate discriminations than the
results of fixed level studies. ...

This falls into the category of experimental results that you have obtained
or have accepted, so by my modest proposal it is exempt from the need for
re-evaluation by you. Your expectation is that a PCT experiment would not
make any material difference in the facts you found, so there is no need
for you to do the experiments, and you can continue to accept those facts.

This may not be true of others thinking about the same experiments and
facts, but you are relieved of responsibility for doing any re-evaluation.
If somebody else becomes interested, they can do the required work.

----------------

Breaking the loop works fine for an artificial nonadaptive control
organization. It is much harder to use with a living control system,
because breaking the loop means loss of control, and loss of control
usually leads to an immediate switch to a different mode of control. So
when you think you're measuring the same system open-loop, you're probably
looking at a different system.

Yes. That's what happens when the person is actually trying to control
the perception in question. But if control is not intended (reference
level zero, as you discussed in another message yesterday), asking the
subject to provide output relating to the stimulus is not _necessarily_
going to provide a different result than would be the case on another
(non-experimental) occasion when the subject is controlling the perception
corresponding to that stimulus.

The question is not whether, in general, a person _might_ produce the same
output with and without control, but whether in a given case the person
_does_ produce the same output. It's not enough to say that your open-loop
results aren't _necessarily_ different from corresponding closed-loop
results -- that doesn't warrant concluding that they would in fact be the
same in any particular case.

And don't forget that the subject can't really "provide output" open-loop.
I showed above how such situations need to be analyzed in terms of PCT, as
controlling a relationship between a perception you can control and one
that is arbitrarily varied. The result is not the same as simply breaking a
loop that was formerly closed. The organization of the system becomes
different.

Yes. But it's not only the input function you have to worry about.

Sure. But if you are trying to study the input function, it is what you
are worrying about at that moment.

I was speaking of confounding factors. If the output function changes, you
may only be concerned with the input function, but that will lead you to
attribute any changes to changes in the input function. In dealing with
living control systems, there is no way for an experimenter to isolate the
input function by breaking the loop. When you eliminate the environmental
feedback function, what is left is the path through the organism consisting
of input function, comparator, and output function. And in a hierarchical
system, you may or may not eliminate feedback at all levels. You might
find, in fact, that eliminating what you see as the EFF for a given control
system has no discernible effect on control -- you have not eliminated ALL
EFFs. The only way to do that would be to immobilize the person or cut all
afferent pathways.

------------------------------------
To whom it may concern:

I don't think that my real message is getting across here. Some people are
acting as if I said that all psychological facts discovered through
traditional methods are wrong. That's not my point at all. What I'm saying
is that because control-system considerations were not taken into account
when those experiments were done, WE DON'T KNOW WHICH RESULTS ARE RIGHT AND
WHICH ARE WRONG. There might be selected cases in which we could review the
methodology and look at the data and conclude that a control-system
experiment was in fact (unwittingly) done, or that if the loop were closed
we could reasonably expect that the results would be the same. But even to
do that requires that we review everything.

I can go along with this as stated. But you usually come across a bit
stronger.

I have never said anything in this regard but that all psychological facts
need to be re-examined in the light of control theory.

In other contexts, I have also said that facts which have a low probability
of being true in any given case (whatever theory they come from) are not
sufficient for building a science. I think this seive needs to be applied
to all facts, even facts from PCT, first. This would greatly reduce the
workload in reviewing past findings.

I think that we can make a fairly easy triage, separating
results/observations into a class that we are reasonably safe to accept,
a class that is obviously untrustworthy or makes no sense in the PCT
way of looking at things, and a class that needs to be examined closely
to see whether the results might be worth rechecking. How large those
three classes are might be a topic of some controversy, given the recent
Marken-Abbott interchange:-) I'd start by putting most results from
discriminative psychophysics into the "safe to accept" category.

Perhaps my comments above might suggest a different conclusion. But as I
say, this is not your responsibility, since you are speaking of results
obtained by you, or that you have accepted and used as factual. It would be
helpful if you were to review work in _other_ fields in the way you
suggest. The more classes of results that we can eliminate by consensus,
the less labor will be involved in dealing with the rest.

I trust that we would also agree that where there are disputes over whether
or not to re-examine some kind of result, that would be sufficient reason
to re-examine it, although the person who thinks it unnecessary would not
have any obligation to do the work. Sort of on the principle that we all
have the right not to incriminate ourselves.

Best,

Bill P.


At 02:19 AM 1/2/98 -0500, you wrote:

[From Bill Powers (980102.1237 MST)]

i.kurtzer (980102.0300)

(replying to Martin Taylor):

This effectively rewrites every conceivable experiment in the history of
psychology as control-system experiments.
Is this the bridge?
Is an experiment a control-system experiment if the authors acted as if it
wasn't? If so, then that's replacing ontology with methodology and makes
it an apologist to all theories thereafter as being anticipating, but
misunderstood.

Well said, Isaac. If the goal is to show that PCT is simply an extension of
existing psychology, this is certainly a good way to get there.

Best,

Bill P.

[Martin Taylor 980102 21:30]

Bill Powers 980102

[Martin Taylor 990102 01:35]

Most psychophysics is
concerned with the conditions under which people can tell whether A is
different from B. Is the area of that triangle bigger than that of this disk;
was there a beep in that time interval or not; is this colour redder
than that one or not...

So most psychophysics experiments are really concerned with perception of
relationships? That's interesting; the impression I always had was that
much lower levels of perception were being investigated. But from your
descriptions, I can see that you're right.

I suppose you could see it as perception of relationship. But it's a
funny kind of relationship, since it could be a relationship between
variables at any level of the hierarchy: is the tone in this interval
louder than the tone in that? Is the first sequence the same as the second?
Is that chord in tune? Is this noise a /p/ or a /b/? Is the line bent
upward or downward?

I should think that if the subject can tell how well the experimenter is
satisfied by the subject's performance, a great possibility would exist for
the subject to provide the experimenter with the results that are wanted or
expected -- in other words, for the experimenter to control the results
simply by withholding or expressing satisfaction. The "Clever Hans" effect.

I don't know how long experimenters have been aware of this possibility,
but it was drilled into us in my first introduction to experimental
psychology that we had to use methods that made it nearly impossible
for the subject to do this. I imagine that the techniques for making it
hard for the subject to cheat in this way were developed a few decades
earlier than that. As for the "clever Hans" effect, most psychophysical
experiments use machines to present the stimuli, and the subject is isolated
from the experimenter, who doesn't know the correct answer for a particular
presentation, anyway. Maybe it was Clever Hans who made psychophysicists
aware of the problem and started them devising methods to counter it.

I think that in PCT terms we have to assume that a subject can never just
"produce a response," open-loop. When one continuum is used as the response
measure and another as the stimulus measure, the subject must be
controlling a relationship between the two continua.

I'm not clear how you mean this. Surely every action a person does is
(part of) the output action that controls some perception(s). So in that
sense it cannot be "open loop." But I don't think that's what you mean,
is it?

The prototypical trial cycle in a psychophysical experiment is like
this: The experimenter presents something for the subject to see (or
hear, or taste, or...), and the subject says "Yes" or "No", or identifies
the presentation as belonging to a prespecified category. Alternatively,
the experimenter presents two things, one of which contains the item
of interest, and the subject says "Number 1" or "Number 2". (Actually,
in all these cases, the subject is much more likely to push a button
than to say anything, and in some experiments a "magnitude estimation"
element is added, in that the subject may give a number or move a slider
to indicate how sure the answer is; but that's by the way). In no case
can the subject affect what has been presented, or what will be the
"correct" answer on the next trial. I'd call that open loop. Isaac,
in a message I find hard to comprehend (i.kurtzer (980102.0300)), calls
it "S-R to its core." You say it is not open loop. I guess I'm in the
middle between you and Isaac. I say it is a mechanism forming part of
the output path of a control loop.
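The trial cycle described above can be mocked up to show what "open loop from the subject's point of view" means: nothing the simulated subject answers ever affects the next presentation. (My sketch; the Gaussian-noise "subject" is a stand-in assumption, not anyone's model of a real listener.)

```python
import random

# Toy two-interval forced-choice session (illustrative). The machine picks
# which interval carries the signal; the "subject" is a stand-in detector
# with Gaussian internal noise. The answer never feeds back into what is
# presented next, so the loop through the environment stays open.

def run_session(signal=1.0, noise=1.0, trials=1000, seed=7):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        target = rng.randint(1, 2)                    # interval holding the signal
        obs = [rng.gauss(0.0, noise), rng.gauss(0.0, noise)]
        obs[target - 1] += signal
        answer = 1 if obs[0] > obs[1] else 2          # report the stronger interval
        correct += (answer == target)
    return correct / trials

pc = run_session()   # proportion correct, near the theoretical 2IFC value
                     # of about 0.76 for a signal-to-noise ratio of 1
```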

When a number is used as the response, it is really (according to PCT) a
perception of magnitude (elicited by the numeral or spoken word) that is
under control at the lower level.

I don't see how this applies in the usual psychophysical experiment. "One"
is not bigger than "two" when they are identifiers for the first or second
time interval of a trial--and in any case, the response is usually a choice
of which button to push, since it is hard for the machine (at least until
recent years) to recognize what the subject says.

The experiments as you describe them, which involve comparing two different
perceptions, would seem to be tests of relationship perceptions, rather
than basic abilities to discriminate, say, intensity changes in a single
level-one perception.

I don't understand why you say "rather than" instead of "which depend on"
at least in the case where it is the intensity of a sound or light that
is being compared.

To test the ability to discriminate small changes in
intensity, all we need to do is give the person control over a single
intensity perception and determine the smallest disturbance that will
produce a corrective action.

I dispute that. It depends totally on having a control function that is
linear through zero. There are two places in the loop that might put a
lower bound on the smallest disturbance to produce a corrective action:
the discriminative ability of the input function, and the shape of the
output function. It is surely better to dissociate them if you are trying
to examine one of them.

The concept of a hierarchy of perceptions
suggests that this kind of discrimination is quite different from the kind
in which a person tries so say whether, for example, a light intensity is
greater than a sound intensity, or a sound intensity is greater than the
rate of rotation of something.

Oh, quite different. I don't see much sense in these comparisons as absolute
statements. I do see some sense in asking whether, if sound A is perceived
as being as strong as light B, then is sound 2*A greater or less than light
2*B. I don't find this latter question interesting, but at least it seems
to have some possible internal consistency.

I think this discussion gives us an idea of just how PCT is likely to alter
our view of previous experimental results. There are many traditional
categories, like "discrimination," which lump together a wide range of
phenomena that are seen as different in PCT. As soon as we introduce the
idea of a hierarchy of perceptions, we can no longer say that
discrimination is discrimination is discrimination. We might expect JNDs
for intensity perception to be quite different from JNDs for relationship
or logical perceptions, even though the same input stimuli are involved.

I don't think most traditional psychophysicists would disagree, and there
have indeed been studies in the area of phonemic/phonetic discrimination
that show you to be correct in principle. But not using the idea of "JND",
please:-)

But correct or not, I fail to see the relevance of the comment.

Now, why I think it possible that direct control studies _might_ give
a different result even in discrimination experiments: Experimental
techniques known as "adaptive threshold" methods, especially one called
PEST that I developed (with Doug Creelman), give results that seem to show
subjects to be a bit more sensitive to delicate discriminations than the
results of fixed level studies. ...

This falls into the category of experimental results that you have obtained
or have accepted, so by my modest proposal it is exempt from the need for
re-evaluation by you. Your expectation is that a PCT experiment would not
make any material difference in the facts you found, so there is no need
for you to do the experiments, and you can continue to accept those facts.

No. You didn't quote my reason, which is that in many of those cases there
is only about 4 or 5 dB further _possible_ improvement. Since the question
of interest in analyzing the control system's input function is how
discriminating (sensitive) it can be, experiments that show poorer
sensitivity are of no interest, and no experiment could show much greater
sensitivity. Therefore the results can be treated as useful.

And in any case, you quoted my introduction to the reasons why a PCT
experiment might indeed make a difference, even though that difference
_could not be_ large, at least not in the direction of demonstrating greater
sensitivity than is found in a non-control study.
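For readers who haven't met adaptive threshold methods: PEST's actual run-based step rules (Taylor and Creelman's) are not reproduced here, but a much simpler transformed up-down staircase shows the general idea of homing in on a threshold. Everything below is an illustrative toy, including the logistic "listener."

```python
import math
import random

# Illustrative 2-down/1-up staircase (NOT the PEST rules). The stimulus
# level converges toward the intensity at which the simulated listener is
# right about 70.7% of the time. All constants are arbitrary.

def staircase(true_threshold=2.0, trials=400, start=8.0, step=0.25, seed=1):
    rng = random.Random(seed)
    level, run, levels = start, 0, []
    for _ in range(trials):
        # Toy logistic psychometric function standing in for the listener.
        p_detect = 1.0 / (1.0 + math.exp(-(level - true_threshold) / 0.5))
        if rng.random() < p_detect:
            run += 1
            if run == 2:                    # two correct in a row: harder next trial
                level, run = level - step, 0
        else:                               # one miss: easier next trial
            level, run = level + step, 0
        levels.append(level)
    return sum(levels[-100:]) / 100.0       # late-trial average estimates threshold

estimate = staircase()   # settles near the listener's 70.7%-correct level
```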


---------------------------

And don't forget that the subject can't really "provide output" open-loop.
I showed above how such situations need to be analyzed in terms of PCT,

I think you should show it again, in a way relevant to the way the
experiments are ordinarily done. As I see it, the experiment is quite
simply open loop from the subject's point of view. What is not open-loop
is the subject's control of the perception of the experimenter's satisfaction.

I trust that we would also agree that where there are disputes over whether
or not to re-examine some kind of result, that would be sufficient reason
to re-examine it,

Yes, we can agree on that.

-------------------
I'm a bit puzzled both by your message and (more so) by Isaac's. I'm
perceiving a subtext that seems to say that a person cannot choose to act
in a way coordinated with a perception that is not being controlled. If that
were so, one could not beat time to an orchestral recording any more than
one could say "Yes, there was a tone in the first time interval." But
people can and do do both. Where's the problem in that? If we wanted to
know how accurately people could perceive the timing of the orchestral
beat, we could put an upper bound on that capability by measuring the
discrepancy between the visible hand-waving and the acoustic beat. Likewise,
if we want to know how well a person can hear a tone, we can put an upper
bound on that ability by measuring how reliably the subject says "Yes"
when the machine actually presented a tone.

What's the problem?

Other than the issue that I raised in my first message on the topic.

+Martin Taylor 971231 17:45
+The question arises as to whether the perceptual input functions operate
+the same way when the resulting perception is being controlled as when it
+isn't. This issue is not ordinarily considered within HPCT, since normally
+the perceptual input function is taken to be whatever it is, and only the
+magnitude of its output is controlled. But it is an issue, one that might
+invalidate the uncritical use of the results of psychophysical studies
+to assess the elements of related control loops.

----------------------------

<i.kurtzer (980102.0300)
<>Psychophysical experiments _are_ control system experiments, though not in
<>the sense you intend.
<
<What up, Martin? You knew someone was going to freak on this one. So this
<must be some weirdo intro thread, but to what?

I'm sorry, Isaac. I'm afraid I understand only small portions of your
message. This part, I think I do understand. The answer is that "freak"
is a response to a stimulus, isn't it? And we have only one person
on CSGnet who acts like an S-R system. I didn't control the perception
(in imagination) of someone freaking.

I said that psychophysical experiments are control system experiments
inasmuch as the subject actually has to be controlling some perception
in order for the experiment to function in its intended manner. But I
said the overt experiment is S-R, and you agreed.

What I did was control not for a perception of someone freaking, but against
a disturbance to my perception that psychophysical studies, by and large,
are useful and provide data that can be helpful in the analysis of
control systems. From time to time comments are made that they can't
be useful because they are not done using control methodology. I do
have a dead-band in my control against this disturbance, but my
output system is at least partly an integrator, and when the comments
are repeated often enough, I provide a countering output (message).

I'm afraid I'm at a loss to respond to your other comments. For example,

<>In a psychophysical experiment (other than one using
<>the method of adjustment) the subject is unable to influence what the
<>experimenter thinks of as the "stimulus," but is able to control an
<>important perception--the perception of the experimenter's satisfaction
<>with the subject's performance.
<
<Who knows how important?

The subject does. I'm afraid that was self-evident in the original, so I'm
sure it doesn't answer your comment, but I can't guess what would.

Likewise with:

<>There are real problems with a lot of psychophysical studies, but not being
<>control system experiments is not among them
<
<This effectively rewrites every conceivable experiment in the history of
<psychology as control-system experiments.

What, precisely, does this mean? I think I said that the overt psychophysical
experiment was not a control-system experiment, and I think you agreed.
I said that this was not among the real problems of the studies, and
somehow this means rewriting every conceivable experiment in the history
of psychology as a control system experiment. Does this _really_ make sense
to you? It doesn't, to me.

Perhaps I might be able to respond more satisfactorily if you were to
restate what you see as issues. Sorry not to be more helpful.

Martin

[From Rick Marken (980102.2310)]

Martin Taylor says:

Psychophysical experiments _are_ control system experiments,
though not in the sense you intend.

I already explained why this is not even close. It's because
psychophysical experiments _never_ involve a test for the
controlled variable. However, i.kurtzer (980102.0300) says:

What up, Martin? You knew someone was going to freak on this one.

Martin Taylor (980102 21:30) replies:

I'm sorry, Isaac. I'm afraid I understand only small portions
of your message.

I think that's the problem, Martin.

Martin repeats the fiction:

There are real problems with a lot of psychophysical studies,
but not being control system experiments is not among them

isaac astutely replies:

This effectively rewrites every conceivable experiment in the
history of psychology as control-system experiments.

And Martin says:

What, precisely, does this mean?

Martin, do you know the difference between a control system
experiment and a conventional psychology experiment (hint:
read my "Dancer..." paper)? Do you know the difference between
a psychophysical study and a conventional experiment (hint: none)?
Do you know the kind of experiments that have been done throughout
the history of psychology (hint: conventional psychology
experiments)? Now do you understand precisely what isaac's
comment means?

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (980103.0418 MST)]

Martin Taylor 980102 21:30--

So most psychophysics experiments are really concerned with perception of
relationships? That's interesting; the impression I always had was that
much lower levels of perception were being investigated. But from your
descriptions, I can see that you're right.

I suppose you could see it as perception of relationship. But it's a
funny kind of relationship, since it could be a relationship between
variables at any level of the hierarchy: is the tone in this interval
louder than the tone in that? Is the first sequence the same as the second?
Is that chord in tune? Is this noise a /p/ or a /b/? Is the line bent
upward or downward?

I'm sure you know that in HPCT, a perception at any level can be a function
of perceptions from any lower levels, at least as the model stands now. In
this latest batch of examples, however, you've picked some new cases. For
the tone and the sequence, you present a perception (which is presumably
remembered) and then ask if a new example of it matches the first example.
This could correspond to setting up the first perception as a reference
condition and stating whether the second occurrence of the same perception
creates an error. Since we don't postulate that error signals can be
perceived consciously, a better interpretation might be that a remembered
perceptual signal is perceived along with a present-time one, and the
person is judging the two signals in terms of a relationship of equality or
of inequality (higher or lower than, in pitch).

That's different from judging the curvature of a line; curvature can be
understood as a single perception with a variable state. Now the person is
being asked to judge whether curvature is zero, positive, or negative. A
threshold can be detected in this way, but not the relation of the sense of
curvature to the actual magnitude of curvature.

As to relationships between sequences, the levels I have proposed have a
problem with this, because sequences are supposedly of a higher level than
relationships. One possible answer is that the actual perception being
judged is at the event level, rather than the level concerned with ordering
per se. But without some specific experimental evidence on this, I'd just
have to admit ignorance.

As you can see, the PCT analysis of these situations would get us into many
questions that don't have any counterparts in the traditional way of
slicing the pie. Experiments that look the same in traditional terms can be
very different in PCT terms.

I should think that if the subject can tell how well the experimenter is
satisfied by the subject's performance, a great possibility would exist for
the subject to provide the experimenter with the results that are wanted or
expected -- in other words, for the experimenter to control the results
simply by withholding or expressing satisfaction. The "Clever Hans" effect.

I don't know how long experimenters have been aware of this possibility,
but it was drilled into us in my first introduction to experimental
psychology that we had to use methods that made it nearly impossible
for the subject to do this. I imagine that the techniques for making it
hard for the subject to cheat in this way were developed a few decades
earlier than that. As for the "clever Hans" effect, most psychophysical
experiments use machines to present the stimuli, and the subject is isolated
from the experimenter, who doesn't know the correct answer for a particular
presentation, anyway. Maybe it was Clever Hans who made psychophysicists
aware of the problem and started them devising methods to counter it.

Do I take it, then, that you are retracting the explanation that involves
the subject's controlling for the experimenter's being satisfied by the
results? According to the above, experimenters are very careful to prevent
the subject from being able to perceive the experimenter's satisfaction, so
obviously this can't be a controlled variable.

You do point out a basic problem with experimenters being their own subjects.

I think that in PCT terms we have to assume that a subject can never just
"produce a response," open-loop. When one continuum is used as the response
measure and another as the stimulus measure, the subject must be
controlling a relationship between the two continua.

I'm not clear how you mean this. Surely every action a person does is
(part of) the output action that controls some perception(s). So in that
sense it cannot be "open loop." But I don't think that's what you mean,
is it?

What I mean is that saying "yes" or "no" (as in your following paragraph)
is not just an open-loop version of a control action by which the subject
might control the perception in question. Saying yes or no is controlling
the perception of hearing oneself say yes or no, in the relationship to the
perception that is specified in the instructions. The perceptual condition
under control is something like "sound implies yes, no sound implies no."

In no case
can the subject affect what has been presented, or what will be the
"correct" answer on the next trial. I'd call that open loop.

What is being controlled is a function of a perception of the required
action and the perception being manipulated. So a different perception is
being controlled than when the subject has some means of altering the
perception now being manipulated by the experimenter. In the situation you
describe, a higher-level perception is being controlled (since the
lower-level perception can't be controlled). Some of the characteristics of
what you measure will be those of the higher-level system. Are we measuring
the threshold of intensity-perception or sensation-perception, or of the
system that senses the satisfaction of a logical condition?

When differences are involved, what you may be measuring is the transition
level of perception, rather than the sensation level. Of course these
distinctions didn't exist in the old psychophysics; it was assumed that if
you present one tone, then another tone, all that is being judged is
magnitude.

Isaac,
in a message I find hard to comprehend (i.kurtzer (980102.0300)), calls
it "S-R to its core." You say it is not open loop. I guess I'm in the
middle between you and Isaac. I say it is a mechanism forming part of
the output path of a control loop.

But the "reponse" to which you refer does not affect the "stimulus" -- you
aren't just opening the loop of an existing control system to check its
open-loop behavior. You're setting up a _different_ control system that
uses -- you hope -- the same input function. It certainly doesn't use the
same output function.

Anyway, all you measure in the way described above is the lower threshold
of the perceptual function, distinguishing the smallest detectable amount
of signal from no signal. This gives no information about the relation of
signal to stimulus for above-threshold stimuli, which is the sort of thing
that the power law or the logarithmic law is about.
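For reference, the two laws mentioned here (standard psychophysics; the constants below are arbitrary) are Stevens' power law, psi = k * phi^n, and Fechner's logarithmic law, psi = k * ln(phi/phi0). As the paragraph notes, a threshold-only measurement cannot choose between them, since they diverge only well above threshold:

```python
import math

# Reference sketch (standard psychophysics, arbitrary constants): both laws
# map stimulus magnitude phi to perceived magnitude psi, but they differ
# mainly well above the threshold phi0, which is why threshold measurements
# say nothing about the suprathreshold signal-to-stimulus relation.

def stevens(phi, k=1.0, n=1.0 / 3.0):   # n near 1/3 is the classic brightness exponent
    return k * phi ** n

def fechner(phi, k=1.0, phi0=1.0):
    return k * math.log(phi / phi0)

# A tenfold stimulus increase behaves very differently under the two laws:
power_ratio = stevens(10.0) / stevens(1.0)   # multiplicative: 10 ** (1/3)
log_growth = fechner(10.0) - fechner(1.0)    # additive: ln(10)
```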

When a number is used as the response, it is really (according to PCT) a
perception of magnitude (elicited by the numeral or spoken word) that is
under control at the lower level.

I don't see how this applies in the usual psychophysical experiment. "One"
is not bigger than "two" when they are identifiers for the first or second
time interval of a trial--and in any case, the response is usually a choice
of which button to push, since it is hard for the machine (at least until
recent years) to recognize what the subject says.

No, but as magnitude estimates, numbers satisfy the example I gave. You're
bringing up, de novo, the use of numbers as cardinal numbers, which is a
different subject. In that application, 1 and 2 are no different from A and
B. They aren't numbers, they're just arbitrary symbols.

The experiments as you describe them, which involve comparing two different
perceptions, would seem to be tests of relationship perceptions, rather
than basic abilities to discriminate, say, intensity changes in a single
level-one perception.

I don't understand why you say "rather than" instead of "which depend on"
at least in the case where it is the intensity of a sound or light that
is being compared.

Because I am assuming HPCT, in which there are different levels of
perception, each level having its own characteristics. In addition to the
ability to detect intensities, we have the ability to detect intensity
transitions (which occurs at a different level), and at still higher levels
the ability to detect relationships between intensities, which introduces
still different characteristics. As far as I know, psychophysics has never
dealt with levels of perception, so even when thresholds are measured, we
don't know which level of perception is associated with the threshold. It's
possible that when we look at thresholds for relationship perceptions, we
are seeing higher thresholds than exist at the level of intensity
perception. It's not hard to think of examples in which a higher-level
perception isn't altered until a lower-level perception has undergone a
considerable and easily-detected change.

To find out whether this effect has been occurring in psychophysical
experiments, we'd just have to do them again.

To test the ability to discriminate small changes in
intensity, all we need to do is give the person control over a single
intensity perception and determine the smallest disturbance that will
produce a corrective action.

I dispute that. It depends totally on having a control function that is
linear through zero.

I don't see why. If the function is nonlinear near zero, we will find that
a larger disturbance is required to produce a corrective action at low
values of the controlled variable than at high values. In fact, by using
small oscillating disturbances at different mean values of the controlled
variable, you could map out the static input-output curve and also measure
the system noise at each input level, as well as getting the noise
spectrum. Of course this map would include both the input and output
functions of the control system; there's no way to separate them that I can
see at the moment (outside neural measurements). Perhaps the different
reference levels that would be involved would permit the separation.
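
The mapping procedure suggested here can be sketched in simulation. This is a minimal sketch, not a model of any real subject: the square-root input function, the pure-integrator output function, the `probe` name, and all parameter values are assumptions for illustration. Different reference levels set different mean values of the controlled variable; a small, slow sinusoidal disturbance is applied at each operating point and the residual swing in the controlled variable is recorded.

```python
import math

def probe(ref, amp=0.05, gain=20.0, dt=0.01, steps=40000):
    """Hold the perception f(cv) at the reference `ref` with an
    integrating output function while a small, slow sinusoidal
    disturbance acts on the controlled variable cv.  The residual
    peak-to-peak swing in cv reflects the local slope of the
    (assumed) input function at that operating point."""
    out = ref ** 2                        # start near equilibrium (cv = ref^2)
    cvs = []
    for k in range(steps):
        d = amp * math.sin(2 * math.pi * 0.05 * k * dt)
        cv = out + d                      # disturbance adds to the output
        p = math.sqrt(max(cv, 0.0))       # hypothetical compressive input function
        out += dt * gain * (ref - p)      # pure-integrator output function
        if k > steps // 2:                # discard the transient
            cvs.append(cv)
    return max(cvs) - min(cvs)

# Where the combined characteristic is steep (low cv), control is tight
# and the residual swing is small; where it is shallow (high cv), the
# same disturbance leaves a larger swing -- tracing out the static curve.
for ref in (0.5, 1.0, 2.0):
    print(f"reference {ref:3.1f}: residual cv swing = {probe(ref):.5f}")
```

Because loop gain at each operating point is proportional to the local slope of the nonlinearity, the measured swing maps the curve -- but, as noted above, the behavioral measurement alone cannot separate the input function's nonlinearity from the output function's.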

There are two places in the loop that might put a
lower bound on the smallest disturbance to produce a corrective action:
the discriminative ability of the input function, and the shape of the
output function. It is surely better to dissociate them if you are trying
to examine one of them.

I agree, but I don't see how to do that right now in a behavioral
experiment. We also have the question of thresholds at the comparator.

The concept of a hierarchy of perceptions
suggests that this kind of discrimination is quite different from the kind
in which a person tries to say whether, for example, a light intensity is
greater than a sound intensity, or a sound intensity is greater than the
rate of rotation of something.

Oh, quite different. I don't see much sense in these comparisons as absolute
statements. I do see some sense in asking whether, if sound A is perceived
as being as strong as light B, then is sound 2*A greater or less than light
2*B. I don't find this latter question interesting, but at least it seems
to have some possible internal consistency.

But this is basically what all psychophysical estimates of magnitude are
doing. Is perception A greater or less than perception B? There is no way
to get an absolute scale.

I don't think most traditional psychophysicists would disagree, and there
have indeed been studies in the area of phonemic/phonetic discrimination
that show you to be correct in principle. But not using the idea of "JND",
please:-)

All right. No more JNDs. I'm not interested in them, anyway.

Now, why I think it possible that direct control studies _might_ give
a different result even in discrimination experiments: Experimental
techniques known as "adaptive threshold" methods, especially one called
PEST that I developed (with Doug Creelman), give results that seem to show
subjects to be a bit more sensitive to delicate discriminations than the
results of fixed level studies. ...

This falls into the category of experimental results that you have obtained
or have accepted, so by my modest proposal it is exempt from the need for
re-evaluation by you. Your expectation is that a PCT experiment would not
make any material difference in the facts you found, so there is no need
for you to do the experiments, and you can continue to accept those facts.

No. You didn't quote my reason, which is that in many of those cases there
is only about 4 or 5 dB further _possible_ improvement.

Of course not. I gave _my_ reason. You accept the facts you cite as true,
and you yourself have done many experiments of this kind. So you have no
responsibility, under my modest proposal, for applying PCT to them to see
if anything needs to be re-evaluated. Under the rules I propose, you're not
only exempt from the requirement of re-evaluation, you're disqualified.

Since the question
of interest in analyzing the control system's input function is how
discriminating (sensitive) it can be, experiments that show poorer
sensitivity are of no interest, and no experiment could show much greater
sensitivity. Therefore the results can be treated as useful.

So you agree with me. You see no reason actually to re-do the experiments
because you consider them to have been done so perfectly, in technique and
conception, that PCT could not possibly make any material difference in the
results. You may be quite right, for all I know. But because of your belief
in these results, you are not the right person to do the re-evaluation. If
anyone wants to do one.

---------------------------

I trust that we would also agree that where there are disputes over whether
or not to re-examine some kind of result, that would be sufficient reason
to re-examine it,

Yes, we can agree on that.

Good. That will save a lot of arguments.

-------------------
I'm a bit puzzled both by your message and (more so) by Isaac's. I'm
perceiving a subtext that seems to say that a person cannot choose to act
in a way coordinated with a perception that is not being controlled.

Not at all. But "coordination" is a different kind of controlled variable.
It involves higher levels of perception.

If that
were so, one could not beat time to an orchestral recording any more than
one could say "Yes, there was a tone in the first time interval." But
people can and do do both. Where's the problem in that?

No problem. Both involve higher levels of perception and control than those
associated with the individual elements of the controlled perceptions. I
assume that the higher level perceptual systems will introduce new
characteristics, including perhaps different thresholds and sensitivities.
Perhaps not, but we can't settle that without experiments.

+Martin Taylor 971231 17:45
+The question arises as to whether the perceptual input functions operate
+the same way when the resulting perception is being controlled as when it
+isn't. This issue is not ordinarily considered within HPCT, since normally
+the perceptual input function is taken to be whatever it is, and only the
+magnitude of its output is controlled. But it is an issue, one that might
+invalidate the uncritical use of the results of psychophysical studies
+to assess the elements of related control loops.

To this, I add the fact that in the so-called "open loop" situations, we
actually are studying different control systems, not the same control
system with its EFF removed.

Best,

Bill P.

[Martin Taylor 980103 11:45] following up

[Martin Taylor 980102 21:30] to Bill Powers 980102

To test the ability to discriminate small changes in
intensity, all we need to do is give the person control over a single
intensity perception and determine the smallest disturbance that will
produce a corrective action.

I dispute that. It depends totally on having a control function that is
linear through zero. There are two places in the loop that might put a
lower bound on the smallest disturbance to produce a corrective action:
the discriminative ability of the input function, and the shape of the
output function. It is surely better to dissociate them if you are trying
to examine one of them.

When I awoke this morning I remembered the very first study I was exposed
to in my transition from being an engineer into becoming a psychologist.
It was a study of the rangefinder of some kind of gun. I think it was a
stereo view that was adjusted through a gear train, though it might have
been a vernier view. Part of the experiment involved seeing what effect
the gear ratio might have. The subject adjusted the sight as accurately
as he could, and we determined the variability at different ranges, with
different gear ratios. It was, in fact, exactly the kind of study you
propose, since the subject was adjusting a perception of stereo or vernier
disparity to a reference value of zero, as accurately as possible, and
we were measuring the smallest disturbance that would produce a corrective
action.

What we found was that the setting accuracy depended strongly on the gear
ratio, with an optimum that was neither too high (moving the image too
quickly) nor too low (moving it too slowly). The setting accuracy was
therefore clearly not limited by the ability of the person to perceive
the disparity, at least not for most gear ratios and perhaps for none.
Changing the environmental feedback function should not affect the
perceptual input function--the ability to discriminate small changes--but
it does affect the result of the experiment when it is performed as a
control system experiment.

Separately from that, I have a personal (anecdotal) comment, based on my
experience in setting up the tracking for the sleep study. With the
square-wave disturbances, I found that I very frequently would not
correct a 1 or 2 pixel error that was clearly perceptible, the conscious
reason being that a new disturbance would come along soon enough and it
wasn't "worth" fixing such a small error. If this happened to me, who
had a strong motivation to get the best results I could, I think it
quite probable that it would happen to at least some of the experimental
subjects (which is why I tried using a dead-band parameter in some of
the model fits for the study).

If you want to study input functions, it seems important not to obtain
results that depend strongly on the environment or the output function.

Martin

[Martin Taylor 980103 11:55]

[From Rick Marken (980102.2310)]

Martin Taylor says:

Psychophysical experiments _are_ control system experiments,
though not in the sense you intend.

I already explained why this is not even close. It's because
psychophysical experiments _never_ involve a test for the
controlled variable.

That's why I said "Not in the sense you intend". I suppose you didn't read
that bit. I also pointed out that the overt experiment is S-R, which
the "astute" isaac agreed with. I'm not clear why you need to explain
that to me. Could you elucidate?

However, i.kurtzer (980102.0300) says:

What up, Martin? You knew someone was going to freak on this one.

Martin Taylor (980102 21:30) replies:

I'm sorry, Isaac. I'm afraid I understand only small portions
of your message.

Actually, I answered that I was not concerned whether our resident S-R
system would emit the "freak" response to my stimulus. It was not a
perception for which I was controlling. You quoted my answer to the
message as a whole, rather than to the bit you quoted. But that's normal:-)

Martin repeats the fiction:

There are real problems with a lot of psychophysical studies,
but not being control system experiments is not among them

I'm not clear which is the fiction--whether there are problems with
a lot of psychophysical studies, or whether the fact that they are not
control studies is not the problem.

isaac astutely replies:

This effectively rewrites every conceivable experiment in the
history of psychology as control-system experiments.

And Martin says:

What, precisely, does this mean?

Martin, do you know the difference between a control system
experiment and a conventional psychology experiment

Yes. Since I pointed out that the overt psychophysical experiment is
pure S-R, I think I showed that at least I knew that there just tiddly
might be a teensy difference.

hint:

... Now do you understand precisely what isaac's
comment means?

No. It remains incomprehensible.

Neither do I understand why you think I need lecturing on the difference
between studies of controlled variables and studies of the individual
(input-output) components of control systems using S-R methods.

The issue isn't there at all. The issue is whether the components of
the control systems perform the same way when the relevant perception
is actively under control as when it isn't. I pointed that out in my
original message, noting that there seems to be some neurological
evidence that they may not. It is this, not the question of experimental
method, that may determine whether the results of psychophysical studies
are useful in teasing apart the functions of individual loop components.

Martin

[From Bill Powers (980103.1101 MST)]

[Martin Taylor 980103 11:45] --

When I awoke this morning I remembered the very first study I was exposed
to in my transition from being an engineer into becoming a psychologist.
It was a study of the rangefinder of some kind of gun. I think it was a
stereo view that was adjusted through a gear train, though it might have
been a vernier view. Part of the experiment involved seeing what effect
the gear ratio might have. The subject adjusted the sight as accurately
as he could, and we determined the variability at different ranges, with
different gear ratios. It was, in fact, exactly the kind of study you
propose, since the subject was adjusting a perception of stereo or vernier
disparity to a reference value of zero, as accurately as possible, and
we were measuring the smallest disturbance that would produce a corrective
action.

Nice find. It would be interesting to do the same experiment with
instructions to maintain different nonzero distances between the crosshairs
and the target (or whatever). This would explore the control function for
values of input different from zero.

What we found was that the setting accuracy depended strongly on the gear
ratio, with an optimum that was neither too high (moving the image too
quickly) nor too low (moving it too slowly). The setting accuracy was
therefore clearly not limited by the ability of the person to perceive
the disparity, at least not for most gear ratios and perhaps for none.
Changing the environmental feedback function should not affect the
perceptual input function--the ability to discriminate small changes--but
it does affect the result of the experiment when it is performed as a
control system experiment.

The performance of the whole loop depends on the properties of the whole
loop. When you increase the gear ratio (give the control knob less effect
on the controlled variable), the loop gain decreases by the same factor. In
a leaky-integrator system, which behaves like a proportional system for
slow changes, there can still be error at the point where the output ceases
to change. Only in a pure integrator will the output keep changing until no
error remains. As for decreasing the gear ratio, the effect is to increase
loop gain, and since you start out doing the best you can, the immediate
effect is likely to be instability and a longer time to reach zero error.
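
The leaky-versus-pure-integrator distinction can be illustrated in a few lines. This is a sketch with arbitrary gain and leak values, not a model of the rangefinder task: under a constant disturbance, the leaky integrator settles with a residual error of roughly d*leak/(gain+leak), while the pure integrator keeps changing its output until no error remains.

```python
def settle(gain, leak, d=1.0, ref=0.0, dt=0.01, steps=10000):
    """Return the steady-state error of a (possibly leaky) integrating
    controller holding cv = out + d at the reference `ref`."""
    out = 0.0
    for _ in range(steps):
        e = ref - (out + d)                  # error at the comparator
        out += dt * (gain * e - leak * out)  # leaky-integrator output
    return ref - (out + d)

leaky_err = settle(gain=10.0, leak=1.0)   # leaky integrator: residual error
pure_err = settle(gain=10.0, leak=0.0)    # pure integrator: error vanishes
print(f"leaky residual error: {leaky_err:.4f}")   # about -d*leak/(gain+leak)
print(f"pure residual error:  {pure_err:.6f}")    # essentially zero
```

The leaky case settles where the integrator's leak exactly balances the error-driven drive, so some error must persist; only the pure integrator has no such balance point short of zero error.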

Separately from that, I have a personal (anecdotal) comment, based on my
experience in setting up the tracking for the sleep study. With the
square-wave disturbances, I found that I very frequently would not
correct a 1 or 2 pixel error that was clearly perceptible, the conscious
reason being that a new disturbance would come along soon enough and it
wasn't "worth" fixing such a small error.

I'm glad you said "conscious reason." I've had the same experiences, and
have come up with the same rationalization. Fitting models is a much more
trustworthy way of characterizing what's happening. When a simulation fit
to the same data shows the same uncorrected small errors, one becomes
rather suspicious of "reasons." There are no "reasons" in the model!

I think that in these low-order tracking tasks, the higher-level systems
keep themselves busy by thinking up explanations for what's going on, but
that these explanations have little to do with what is actually happening.

Best,

Bill P.

[From Bill Powers (980103.1136 MST)]

Martin Taylor 980103 11:55--

[From Rick Marken (980102.2310)]

  [Also replying to i. kurtzer]

I think that all you guys might be missing the point that if PCT is
correct, there is no such thing as an open-loop experiment because there is
no such thing as open-loop behavior.

Best,

Bill P.

[Martin Taylor 980104 01:20]

Bill Powers (980103.1136 MST)

Martin Taylor 980103 11:55--

[From Rick Marken (980102.2310)]

[Also replying to i. kurtzer]

I think that all you guys might be missing the point that if PCT is
correct, there is no such thing as an open-loop experiment because there is
no such thing as open-loop behavior.

Actually, it was saying that that got me into trouble with Isaac. At
least I think it was, since I still have trouble understanding Isaac's
message, despite Rick's "helpful" intervention.

I hope it doesn't get you into trouble with Isaac as well.

Martin

[From Rick Marken (980103.2350)]

Bill Powers (980103.1136 MST) --

I think that all you guys might be missing the point that if
PCT is correct, there is no such thing as an open-loop experiment
because there is no such thing as open-loop behavior.

Martin Taylor (980104 01:20) --

Actually, it was saying that that got me into trouble with Isaac.
At least I think it was, since I still have trouble understanding
Isaac's message, despite Rick's "helpful" intervention.

What got you into trouble with isaac (and me -- and Bill for that
matter) was your claim that:

Psychophysical experiments _are_ control system experiments,
though not in the sense you intend.

isaac and I (and I think Bill too) thought you were saying that
psychophysical experiments are control system experiments in
the sense that they are appropriate tests of control system
behavior. We thought this because you went on to claim that
the results of psychophysical experiments, because they are
"control system experiments", can be retained as part of the
PCT canon.

I think your current position, however, is that psychophysical
experiments _are_ control system experiments in the sense
that they are experiments carried out on control systems. My
"helpful" intervention was aimed at pointing out that (as Bill
notes) if PCT is right, then every experiment that has ever been
done in psychology is a "control system experiment" in this sense
because organisms are control systems. But this would not make
these experiments "control system experiments" in the sense that
they are _appropriate_ tests of control system behavior since
all these experiments were done with no understanding of the
_nature_ of control and no awareness of the possibility that
the subjects in these experiments were busy controlling various
perceptual input variables; simply put, these experiments include
no tests for controlled variables.

Yes, if PCT is correct then there is no such thing as an open-
loop experiment in the sense that there is no experiment in
which the subjects can behave "open loop". But there are
experiments (like psychophysical experiments and all other
conventional psychology experiments) that _ignore_ the
possibility that the subjects are closed-loop control systems.
So even if subjects are closed-loop systems (as per PCT)
psychophysical and other conventional experiments tell us
virtually nothing about the behavior of these systems because
these experiments tell us nothing about the perceptual variables
that these systems are controlling.

I presume that you agree with all this, now that I understand
what you really meant to say.

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

i.kurtzer (980104.0200)

night-owling into the new year!

I'm a bit puzzled both by your message and (more so) by Isaac's. I'm
perceiving a subtext that seems to say that a person cannot choose to act
in a way coordinated with a perception that is not being controlled.

That makes us one to one, I suppose; could you say this differently?

<i.kurtzer (980102.0300)
<>Psychophysical experiments _are_ control system experiments, though not in
<>the sense you intend.
<
<What up, Martin? You knew someone was going to freak on this one. So this
<must be some weirdo intro thread, but to what?

I'm sorry, Isaac. I'm afraid I understand only small portions of your
message. This part, I think I do understand. The answer is that "freak"
is a response to a stimulus, isn't it? And we have only one person
on CSGnet who acts like an S-R system. I didn't control the perception
(in imagination) of someone freaking.

Who is that person?

I said that psychophysical experiments are control system experiments
inasmuch as the subject actually has to be controlling some perception
in order for the experiment to function in its intended manner. But I
said the overt experiment is S-R, and you agreed.

And i would say further that this "overt" experiment means simply the type of
experiment. To rewrite the experiment to account for how _we_ conceive of the
scenario and say _that_ is the experiment is to confuse methodology with
theory--or if the theory is true then with ontology.

Methods and theories go hand in hand, right? These experiments were cast in a
specific way consistent with their notions of how we tick, right? That the
implicit basis of their methods was wrong does not mean they were studying what
we focus to. That would be like calling Descartes a neurophysiologist.
In fact, the leap might be even greater.

<>In a psychophysical experiment (other than one using
<>the method of adjustment) the subject is unable to influence what the
<>experimenter thinks of as the "stimulus," but is able to control an
<>important perception--the perception of the experimenter's satisfaction
<>with the subject's performance.
<
<Who knows how important?

The subject does. I'm afraid that was self-evident in the original, so I'm
sure it doesn't answer your comment, but I can't guess what would.

Well, since that was introduced I assumed that it might stand that "importance"
would be something the tester would want independent verification of; thereby
the tester might find _independent_ reasons for discarding or keeping the data
other than confirmation with his ideas. We could assume the "importance" might
function such that the gain would be tighter in such cases, and since we would
want that very much, we should confirm or disconfirm the state of "importance".
But this is again obviated by the Test. Its requirements of "instructional"
protocol are of a far different sort, ones that side-step the
"interpretational". We simply ask the person, for example, to keep something
the same size. This is their size, not the tester's foisting of "size
protocals" which are known to be extremely flexible. I know for a fact that
this situation pervades vowel studies. Make the protocol with two choices or
self-generated, or embedded in a common word, or a set of orthographic
symbols....and what we find is unacceptable variations to the _same_
presentation. Taken literally one might suspect that persons really heard
something different _because_ of the scenario.
i believe this is false.
i believe instead that the individual acted differently to coincide with the
different instructions.
This is a major problem with these and all such studies.

>Perhaps I might be able to respond more satisfactorily if you were to
>restate what you see as issues.

did this help?

[Martin Taylor 980104 12:40]

i.kurtzer (980104.0200)

Thanks, Isaac, for a rather clearer message. I think I can respond to
most of it.

I'm a bit puzzled both by your message and (more so) by Isaac's. I'm
perceiving a subtext that seems to say that a person cannot choose to act
in a way coordinated with a perception that is not being controlled.

That makes us one to one, I suppose; could you say this differently?

I read you and Bill as saying that it was not possible for a person to
say "Yes" when she heard a tone and "No" when she didn't, and for those
statements to be used as evidence for or against the person's ability
to hear the tone at that intensity.

Methods and theories go hand in hand, right?

Right. But different theories can accept the same methods. If that were not
so, there would be no possibility of believing in the unity of science.
I happen to believe (purely on faith) that there is a real world out there,
and that there are various ways of observing it, some more useful than
others, to be sure. But if every method of observation used because the
observer had a particular notion of the real world gave results that were
automatically invalid in any other view of the real world, we would have
no possibility whatever of deeming one view more reasonable than another.
I prefer to look at methods and see whether they seem to provide sensible
(i.e. interpretable) results, whatever the viewpoint of the user of the
method, because I believe there is only one "real world" out there.

These experiments were cast in a
specific way consistent with their notions of how we tick, right? That the
implicit basis of their methods was wrong does not mean they were studying what
we focus to. That would be like calling Descartes a neurophysiologist.

Who cares whether Descartes was or called himself a neurophysiologist
(I think Goethe would have been a better example)? What we should care
about is whether what he said or observed is or might be valid in other
contexts.

<>In a psychophysical experiment (other than one using
<>the method of adjustment) the subject is unable to influence what the
<>experimenter thinks of as the "stimulus," but is able to control an
<>important perception--the perception of the experimenter's satisfaction
<>with the subject's performance.
<
<Who knows how important?

The subject does. I'm afraid that was self-evident in the original, so I'm
sure it doesn't answer your comment, but I can't guess what would.

Well, since that was introduced I assumed that it might stand that "importance"
would be something the tester would want independent verification of; thereby
the tester might find _independent_ reasons for discarding or keeping the data
other than confirmation with his ideas.

Huh? Now I am confused as to what you mean. Are you suggesting that the
honest experimenter would discard data because it didn't conform to his
theory? Or are you saying that the subject would be controlling for some
perception that was not "important" to her? I define "important" in this
context as "being worth controlling," which I would have thought clear.

"Experimenter satisfaction" in my book comes with consistency of
subject performance, helping the experimenter to discover how the world
works. Anyway, I used "experimenter satisfaction" as an example of what
the subject might be controlling for in order that he might act in the
experiment. There could be, and probably are, other higher-level perceptions
using the same mechanism.

We could assume the "importance" might
function such that the gain would be tighter in such cases, and since we would
want that very much, we should confirm or disconfirm the state of "importance".

Why would that matter, so long as the subject tries to do the experiment
as best she can?

But this is again obviated by the Test. Its requirements of "instructional"
protocol are of a far different sort, ones that side-step the
"interpretational". We simply ask the person, for example, to keep something
the same size. This is their size, not the tester's foisting of "size
protocals" which are known to be extremely flexible.

Again you confuse me. If you ask the subject "did you see it," is it not the
subject's decision as to whether he did see it? It's clearly not the
experimenter's, for if it were, why bother with the subject at all?

I know for a fact that
this situation pervades vowel studies. Make the protocol with two choices or
self-generated, or embedded in a common word, or a set of orthographic
symbols....and what we find is unacceptable variations to the _same_
presentation. Taken literally one might suspect that persons really heard
something different _because_ of the scenario.

And can you say that they don't? You are into tricky stuff here:
"unacceptable" implies a purpose for which the vowel might be acceptable;
change the scenario and you change the dead zone for the controlled
perception(s).

i believe this [that they heard something different] is false.

On what ground?

You and Bill appear to be parting company here. Bill asserts equally
baldly that people _do_ perceive the same thing differently at different
hierarchic levels, and perhaps even discriminate differently at different
levels. I go along with him on this.

i believe instead that the individual acted differently to coincide with the
different instructions.

Yes...but. Did they also perceive differently, and is this why they acted
differently?

I once did a study that might be relevant to this (with G.B. Henning,
reported in the Canadian Journal of Psychology around 1963-65 sometime).
We asked people to listen to repeated short tape loops of English words.
When people listen to such stuff, they start hearing different words after
a while. We found that we could affect how many different words and
what kind of words they heard by altering the instructions. We told
some people that the words would change and that all the changes would
be into other English words, whereas other people were told that the
changes might be into nonsense syllables as well as into English words.
Lo and behold, the subjects told they would hear only English reported
only English, and the subjects told that there might be nonsense reported
both English and nonsense. Surprise, surprise:-)

Now, the question. Were the "English-only" subjects hearing the nonsense
words but suppressing the report, or were they not perceiving the
nonsense words that they might otherwise have perceived if they had had
the other instructions? How could anyone tell? The natural presumption is
that they were just following instructions in what they reported, and
the groups all heard the same kinds of things.

Luckily, there is a way to distinguish the "perceived differently" and
"acted differently" hypotheses.

We asked each subject to report what they heard every time it changed.

From these reports we could plot the number of changes against the
number of different things they reported. In every case, the plot formed
a perfect fit (within the width of the plotting line, almost) to the
function N_changes = k*N_forms*(N_forms-1). This is the function that
would occur if the subject from time to time generated a new form to
perceive and thereafter changed randomly among the forms so far generated.
We had found the same function for the analogous task in many different
perceptual domains.
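The quadratic relation can be checked with a small Monte Carlo sketch. Everything below is my own hypothetical model, not the original analysis: I assume that on each perceptual change the listener generates a brand-new form with probability c/n (n = forms generated so far) and otherwise switches among existing forms. Under that assumption the expected cumulative change count is n*(n-1)/(2c), i.e. N_changes = k*N_forms*(N_forms-1) with k = 1/(2c).

```python
import random

def changes_until(n_target, c=1.0, rng=None):
    """Count simulated perceptual changes until n_target distinct
    forms have appeared.  Hypothetical model (not from the paper):
    on each change a brand-new form appears with probability c/n,
    where n is the number of forms so far; otherwise the listener
    just switches among the forms already generated."""
    rng = rng or random.Random()
    n, changes = 1, 0
    while n < n_target:
        changes += 1
        if rng.random() < c / n:
            n += 1
    return changes

rng = random.Random(42)
n_forms, runs = 30, 500
mean_changes = sum(changes_until(n_forms, rng=rng) for _ in range(runs)) / runs
# Under this model E[changes] = n*(n-1)/(2c), i.e. Taylor's
# N_changes = k * N_forms * (N_forms - 1) with k = 1/(2c) = 0.5 here.
ratio = mean_changes / (n_forms * (n_forms - 1))
print(ratio)  # close to 0.5
```

The particular c/n generation rate is an assumption chosen because it reproduces the quadratic form; the data alone do not pin down the mechanism.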

Now suppose that the subject had actually perceived a nonsense form, but
had suppressed its report because of the instruction that all the changes
would be to English words. Let's say A and B were English word forms, and
X was a nonsense form. Then there should have been transitions A-X-B,
reported as such by a "nonsense-permitted" subject (2 transitions),
but as A-B by the "English-only" group. The form of the function would
have been quite different, especially as the numbers of forms increased
over time. It wasn't. The function for the "English-only" people tracked
as exactly as can be plotted with the function for the "nonsense-permitted"
people. (For each subject individually, in case that needs emphasizing.)
The conclusion is that it is highly unlikely that the instructions were
affecting only the reports, and highly probable that they were affecting
the perceptions.
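The force of this argument can be illustrated with a toy simulation, again an invented model rather than Taylor's analysis: the c/n new-form rate, the English/nonsense split `p_english`, and the suppression rule are all my assumptions. A subject who perceives nonsense forms but reports only English ones accumulates too many changes relative to the (smaller) number of forms reported, so the fitted constant k in N_changes = k*N_forms*(N_forms-1) shifts visibly upward, which is the kind of divergence the data did not show.

```python
import random

def run_trial(n_target=30, c=1.0, p_english=0.5, rng=None):
    """Hypothetical model of the verbal-transformation task: on each
    change the listener either generates a new form (English with
    probability p_english, nonsense otherwise) or switches to a
    random *different* existing form.  Returns the true change
    count, the change count a suppressing subject would report
    (only English forms differing from the last reported one, so
    A-X-A yields no report and A-X-B yields one), and the number
    of English forms generated."""
    rng = rng or random.Random()
    forms = ["E"]            # kind of each form generated so far
    current, last_reported = 0, 0
    true_changes, reported = 0, 0
    while len(forms) < n_target:
        true_changes += 1
        if len(forms) == 1 or rng.random() < c / len(forms):
            forms.append("E" if rng.random() < p_english else "X")
            current = len(forms) - 1
        else:
            current = rng.choice([i for i in range(len(forms)) if i != current])
        if forms[current] == "E" and current != last_reported:
            reported += 1
            last_reported = current
    return true_changes, reported, forms.count("E")

rng = random.Random(1)
honest, suppressed = [], []
for _ in range(500):
    t, r, ne = run_trial(rng=rng)
    honest.append(t / (30 * 29))               # full report: all 30 forms
    if ne > 2:
        suppressed.append(r / (ne * (ne - 1)))  # English forms only
k_honest = sum(honest) / len(honest)
k_supp = sum(suppressed) / len(suppressed)
print(k_honest, k_supp)  # the suppressed-report k comes out markedly larger
```

So under this (assumed) mechanism the two groups' plots could not coincide if suppression were happening, which is the point of the A-X-B argument.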

I think this is true much more generally in life. We perceive what we have
been "instructed" (by experience) to perceive. We have, in other words,
reorganized our perceptual input functions so that controlling our
various perceptions doesn't lead to too much conflict--and those of us for
whom this is false may seek therapy or jail. In other words, instructions
may well change what we perceive in any particular situation.

did this help?

Oh, yes, thank you. Did my response?

Martin

[Martin Taylor 980104 12:20]

[Rick Marken (980103.2350)]

What got you into trouble with isaac (and me -- and Bill for that
matter) was your claim that:

Psychophysical experiments _are_ control system experiments,
though not in the sense you intend.

I think your current position, however, is that psychophysical
experiments _are_ control system experiments in the sense
that they are experiments carried out on control systems.

That was always my position. The statement that seems to have caused you
a problem was made because, even on CSGnet, it is too often claimed that
they are not, which to me is ridiculous. When this claim is restated
often enough in a short enough time, my integrating output function, for
a perception that we should deal in common sense, produces action in
the form of a message. People act in a psychophysical experiment because
they are controlling some perception, just as they do in any other
situation (including other experiments).

However, I can see how the three of you misconstrued my statement as
coming from the other side and therefore misread the whole of the
rest of what I had to say, which was a pity. Perhaps if you were to
take such statements in the context of all my other writings, you would
be able to understand better what I was getting at.

Yes, if PCT is correct then there is no such thing as an open-
loop experiment in the sense that there is no experiment in
which the subjects can behave "open loop". But there are
experiments (like psychophysical experiments and all other
conventional psychology experiments) that _ignore_ the
possibility that the subjects are closed-loop control systems.

I agree with this.

So even if subjects are closed-loop systems (as per PCT)
psychophysical and other conventional experiments tell us
virtually nothing about the behavior of these systems

And it is this that I dispute.

because
these experiments tell us nothing about the perceptual variables
that these systems are controlling.

What they tell us about is the limits on the ability of people to control
those perceptual variables they choose to control.

When you are trying to examine a piece of machinery, you can do it in
at least two ways. One is to treat it as a black box and push and pull
it in various ways. The other is to tease apart its components and see
what each does, and how that component relates to the behaviour you
see when you push and pull. You can't do without the push-pull examination,
but its power is much enhanced if you can also see the function of the
components individually. If you look only at the components, you will
find it hard to discover what the machine is doing, but once you know
that, knowledge of the components is very helpful.

That's the position I think psychophysics has. It matters not a whit that
most psychophysicists haven't a clue about living control systems. They
measure a gear ratio in a machine whose function is obscure. But it is
still a valid gear ratio, useful to those who understand what the machine
is doing.

Martin

i.kurtzer (980104.1400)

[From Bill Powers (980103.1136 MST)]

Martin Taylor 9801013 11:55--

[From Rick Marken (980102.2310)]

[Also replying to i. kurtzer]

I think that all you guys might be missing the point that if PCT is
correct, there is no such thing as an open-loop experiment because there is
no such thing as open-loop behavior.

i agree, but only partly. There is a difference between methods appropriate to
one theory vs another theory. The arrival of another theory does not render
the previous methodologies "poor realizations" of the newly inspired
methodology, no more than the previous theories are poor realizations of the
new theory. For theories this is a tricky issue, which i'll leave for now. But
for methods it is straightforward. Methods are different as they are carried
out differently. "Overt" method is method. To re-interpret is one thing. But to
re-write is another. I agree with your re-interpreting and also with Martin if
that is what he is positing. But not with the latter.

i.