Schouten's paper

I'm coming to this very late, but I looked back in the thread to see what Schouten's original experiment was, and there doesn't seem to be a direct reference. With some googling I turned up this:

Acta Psychologica 27 (1967) 143-153;
REACTION TIME AND ACCURACY
J. F. SCHOUTEN and J. A. M. BEKKER
Institute for Perception Research, Eindhoven, The Netherlands

which looks like the journal write-up of the paper Martin described seeing presented at a conference. Access requires a subscription, which my university has, and I doubt if anyone will object if I put a forty-year-old paper online:

http://www.cmp.uea.ac.uk/~jrk/temp/Schouten1967.pdf

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.


[From David Goldstein (2009.06.06.05:48 EDT)]

Thanks Richard. This helps me understand the experimental situation better and why the experiment was done in the first place.

I now understand that the two lights were vertical, and not horizontal, and that the purpose of the sounds was an attempt to control when the person would make a decision.

The study was done to evaluate the strategy a person uses when making a simple decision--which light came on. Sometimes people have to make decisions before the situation can be completely understood. Schouten's discussion identifies a conflict between accuracy and speed. So, this situation puts a person in conflict.

Because a person has to listen for the third sound while looking at the two lights, the task involves dividing attention between the two senses.

···


[From Bill Powers (2009.06.06.0658 MDT)]

I’m coming to this very late,
but I looked back in the thread to see what Schouten’s original
experiment was, and there doesn’t seem to be a direct reference.
With some googling I turned up this:

Acta Psychologica 27 (1967) 143-153;
REACTION TIME AND ACCURACY
J. F. SCHOUTEN and J. A. M. BEKKER
Institute for Perception Research, Eindhoven, The Netherlands

Thanks for that, Richard. Very useful.
The structure of the forced-reaction-time experiment is clear, and
reading Schouten’s discussion helped me see a little better what bothers
me about the interpretations of results in experiments like these. There
seems to be an unwillingness to deal with the data close to the level at
which it’s observed, and an urge to jump up several levels to give a more
interesting interpretation. I’ll explain what I mean later.
First, the arrangement of lights in a vertical line (rather than the
horizontal line assumed by Martin and the rest of us) means that the
choice is not between “left” and “right” but between
“upper” and “lower.” If the two buttons were
arranged the same way (not described in the paper), that would seem to
rule out having one hand on each button ready to press. If one hand only
were used, its initial position would affect the time between deciding
which button to press and actually getting a finger to the button and
increasing the force to the level necessary to close the contact. I think
we can take it that in the free-reaction mode, the actual mean transport
lag of the path from the light to the muscle is between one and two
tenths of a second. When I did reaction-time experiments at the VA
research hospital, I used an electromyograph on pronator teres to
detect the delay from a positional jump of a bright spot on an
oscilloscope display to the first signal entering the muscle. When the
overall minimum reaction time to the time of contact closure was about
0.15 second, the delay to the first signal reaching the muscle was about
50 milliseconds. The torsional response had the least possible mass of
the forearm to move, so we can guess that if the whole arm had to be
raised or lowered, the lag after the first signal reaches the muscle and
before contact closure would be even greater than the tenth of a second I
measured.
Fortunately, the data are presented as plots before the interpretation is
imposed on them. What is observed is that when there is a reaction prior
to the normal free reaction time of around 300 msec, the number of errors
increases. This is true whether the reaction time is “free” or
forced. When subject B is free to react, but is encouraged to react as
quickly as possible, there is a 15% error rate when the reaction is about
90 milliseconds earlier than the mean time.
Schouten and Bekker say:

“This leads to the following speculation. The normal instruction in a
free reaction experiment is well known to be conflicting. The subject
is instructed to be both as fast and as accurate as possible. Let us
assume that his results are determined by an inherent relation between
fraction of errors and actual reaction time. Then his one and only
freedom in a series of reactions is gradually to adjust his mean
reaction time in order to settle for a personal compromise between
slowness and inaccuracy.” (p. 146)

This gives us a perfect beginning for a PCT model of what is going on.
Imagine two control systems, one perceiving the delay and having a
reference level of zero for it (fastest possible reaction time), and the
other perceiving the percentage of mistakes and having a reference level
for zero mistakes.

The outputs of both control systems enter a single lower system; the
mean of the two output signals sets the reference level for the
actual delay before feeling the button reach its lower limit.

We can use the data in Fig. 1 to show how the mistake rate is
affected by reaction time. I’m calling it the “mistake” rate,
m, to avoid confusing it with the error signal, e, in a control system.
Decreasing the reaction time below about 350 msec increases the mistake
rate.

If the net reference signal entering the lower system specifies a
reaction time of about 350 milliseconds, the mistake rate will be zero,
satisfying one of the higher control systems, but the reaction time will
not be zero, leading to an error in the other higher control system. The
delay error will result in the delay control system reducing the intended
reaction time. That will cause the mistake rate to start increasing, so
the mistake-control system will start raising its contribution to the
delay reference signal, trying to increase the delay, the actual reaction
time. At the same time, the delay control system will experience less
error and it will reduce its attempt to lower the reaction time reference
signal. This will continue until the two control systems come into
equilibrium.
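The equilibrium described above can be checked numerically. Here is a minimal sketch, not from the paper: the mistake-rate curve is a made-up stand-in for Schouten's Fig. 1 (zero above about 350 msec, rising at roughly 25 msec per percent below it), and the two higher systems are modeled as leaky integrators whose mean output sets the reaction-time reference. All gains and the leak rate are invented for illustration.

```python
# Numerical sketch of the two-system conflict (all numbers invented;
# mistake_rate() is a stand-in for Schouten's Fig. 1: zero above ~350 msec,
# rising at roughly 25 msec per percent below it).

def mistake_rate(rt_ms):
    """Assumed mistake rate (percent) as a function of reaction time (msec)."""
    return max(0.0, (350.0 - rt_ms) / 25.0)

def simulate(steps=5000, dt=0.05, g_delay=0.1, g_mistake=30.0, leak=0.05):
    out_delay = 0.0    # leaky-integrator output, delay-control system
    out_mistake = 0.0  # leaky-integrator output, mistake-control system
    for _ in range(steps):
        rt = (out_delay + out_mistake) / 2.0  # mean output -> rt reference
        m = mistake_rate(rt)
        # delay system: reference 0 for rt, so its error pulls rt downward
        out_delay += dt * (g_delay * (0.0 - rt) - leak * out_delay)
        # mistake system: any mistakes push the rt reference back upward
        out_mistake += dt * (g_mistake * m - leak * out_mistake)
    return rt, m

rt, m = simulate()
# with these invented gains the equilibrium works out to rt = 300 msec, m = 2%
```

Neither system gets what it wants: the delay system settles with a nonzero reaction time and the mistake system with a nonzero mistake rate, just as the conflict account predicts.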

At that point, the outputs of the two higher control systems will be
higher (mistake control) and lower (delay control) than their mean value
by the same amounts and the reaction time will stabilize with some
non-zero mistake rate. Looking closely at the lower part of Fig. 1, we
see that the mistake rate going with the mean peak number of responses
for Subject B appears to be around 1.5 to 2 percent of the trials. If the
reaction time were any less than that, the error in the mistake-control
system would become larger and bring the reaction time back up again; if
the reaction time were greater, the delay-control system would dominate
and bring the reaction time back down.

We can estimate the closed-loop gain of the two conflicting control
systems taken together, using Fig 1. It is approximately 50 milliseconds
of reaction time per 2 per cent mistake rate, or 25 msec per %.

So without going any further into the experiment, we now have a working
PCT model of a subject in the free reaction time part of the
experiment.

Clearly, we could now introduce another control system which controls the
difference between perceived contact closure time and the third pip (as
Schouten calls the beeps; in German, a pip is a peep). This control system
would adjust the reference signal for the desired delay time until the
third beep and the contact closure occur at the same time.

This change in the reference level for the delay control system would
affect the balance point of the conflict with the mistake-control system.
The reference values for all forced delays (set by the third beep) were
greater than zero, ranging from 150 milliseconds to 700 milliseconds, so
we would expect the equilibrium reaction time to be longer in the
forced-delay case.

Schouten and Bekker say:

“The error curves obtained from forced reaction (figs. 2 and 3) lie
higher than the partly obtained curves from free reaction (fig. 1). The
conjecture seems warranted that in forced reaction the unaccustomed
instruction degrades the overall performance.” (p 146)

Of course there is no need for the guess that it is the unaccustomedness
of the instruction that “degrades” the performance. The
performance is not degraded; it is what it should be with the new
settings of the reference signal for the desired reaction time.

It would appear that the PCT model will give a reasonably good fit with
the data from this experiment. I leave that as an exercise for the
student. Now, what should we make of the proposals that the subject is
detecting “degrees of certainty” (p. 143)? It is entirely
unnecessary, and in fact is an “intervening variable” of the
type criticised by generations of behaviorists and others. To say that
there is uncertainty adds nothing to the observation that the mistake
rate increases below a certain reaction time. We still don’t know why it
increases, though of course we can add guesses about system noise to the
model. Given that the mistake rate increases as it does, we can fit the
control-system model to the data unambiguously. We could ask subjects in
this experiment how uncertain they felt under different conditions, and
we might even get them to state a number, but since we don’t know how
they perceive the magnitude of a number, we still won’t be able to
measure the actual uncertainty. And we don’t need to, because we have a
working model.

Best,

Bill P.

PS: Fig. 4 is a real monstrosity – Tom Bourbon would be delighted. The
authors have plotted 4000 mistake rates of 20 subjects as a single curve:
“… the fraction of errors [mistakes], averaged over all subjects,
is given; 4000 reactions per enforced reaction time.” This
guarantees that the curve tells us nothing useful about any
subject.

···

At 05:05 PM 6/5/2009 +0100, Richard Kennaway wrote:

[From Rick Marken (2009.06.06.1830)]

Bill Powers (2009.06.06.0658 MDT) --

This gives us a perfect beginning for a PCT model of what is going on.
Imagine two control systems, one perceiving the delay and having a reference
level of zero for it (fastest possible reaction time), and the other
perceiving the percentage of mistakes and having a reference level for zero
mistakes.

While this model can certainly be made to work (though I haven't been
able to yet) I think it is unlikely that people actually control a
perception of percentage of mistakes. But there is a way to test this.
If people really are adjusting their reaction time (RT) relative to
their perception of % mistakes, then we should see more variability in
RT in the early trials: % mistakes, which influences the setting for
RT, is N(mistakes)/N(trials), and while the denominator is small (at
the beginning of the experiment) there will be large changes in the
estimate of % mistakes through those early trials.
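This test rests on a simple statistical fact: the spread of the estimate N(mistakes)/N(trials) shrinks roughly as one over the square root of N(trials). A quick simulation illustrates it; the 2% true mistake probability is an assumed example value, loosely based on the Fig. 1 floor discussed earlier in the thread.

```python
# The running estimate p_hat = N(mistakes)/N(trials) is much noisier
# early in a run. The 2% true mistake probability is an invented example.
import random
import statistics

def spread_of_estimate(n_trials, p_true=0.02, runs=2000, seed=0):
    """Std. dev. of p_hat across many simulated runs of n_trials each."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(runs):
        mistakes = sum(rng.random() < p_true for _ in range(n_trials))
        estimates.append(mistakes / n_trials)
    return statistics.stdev(estimates)

early = spread_of_estimate(10)   # roughly sqrt(p*(1-p)/10)
late = spread_of_estimate(1000)  # roughly sqrt(p*(1-p)/1000), ~10x smaller
```

If the RT reference really tracked this estimate, early-trial RT settings would inherit that tenfold-larger spread, which is exactly the signature the data could show.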

Of course, this model tells us nothing about _what_ perception is
controlled (or _not_ controlled when a mistake occurs) by the button
press, though it seems likely that is a perception of the relationship
between light location (top/bottom) and button press (left/right).

It would appear that the PCT model will give a reasonably good fit with the
data from this experiment. I leave that as an exercise for the student.

I hope we don't have to hand in our homework;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.06.07.0521 MDT)]

Rick Marken (2009.06.06.1830) --

While this model can certainly be made to work (though I haven't been
able to yet) I think it is unlikely that people actually control a
perception of percentage of mistakes.

I think you're right. I'd design the system so that when a mistake is made, the reference level for the reaction time is increased by a small amount (in an integrator). The integrator could slowly leak so that when there are no mistakes for a while the reaction time output signal (from that control system) declines toward zero. This would go on over many trials. The magnitude of the perceptual signal would simply indicate "wrong", not a percentage or an uncertainty.
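A sketch of that design, with invented numbers throughout (the 25 msec bump and the leak rate are illustrative assumptions, not fitted values):

```python
# Each detected mistake bumps the mistake system's output integrator,
# which contributes to the reaction-time reference; between mistakes
# the integrator slowly leaks back toward zero.

def update_rt_contribution(contribution, mistake, bump=25.0, leak=0.01):
    """One trial's update of the mistake system's contribution (msec)
    to the reaction-time reference."""
    if mistake:
        contribution += bump       # "wrong" signal: raise the intended rt
    return contribution * (1.0 - leak)  # slow leak toward zero

c = 0.0
for wrong in [True, False, False, True] + [False] * 200:
    c = update_rt_contribution(c, wrong)
# after ~200 mistake-free trials the contribution has largely leaked away
```

Note that the perceptual signal driving this is just the binary "wrong" event, not a percentage or an uncertainty.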

I might be willing to concede that if a person is perceiving a lot of mistakes, there might be a higher-level feeling of uneasiness about that, which some people would describe as uncertainty. I wouldn't -- I'd probably just say "Well, what do you expect when you make me respond so soon after the light comes on?" I'm quite certain I wouldn't be trying to gauge the degree of uncertainty in the period after the light goes on and (as Schouten proposes) react when it gets down to 2%. On what basis could anyone estimate the amount of uncertainty in a single trial, in as little as 150 milliseconds, with an accuracy of 2% of the maximum uncertainty? I think the model we're looking at here can accomplish the same result without considering uncertainty.

You might try a leaky integrator in the perceptual input function instead of the output function, which would yield a perception of mistake rate. The reference level would still be zero but the output function would then have to be proportional (you want only one integrator in a control loop). The error signal could be said to indicate a feeling of uncertainty or uneasiness. It doesn't really matter what you call it; it's not a perception of what the formal definition of uncertainty means.
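That alternative might look like the following sketch (leak and gain values invented for illustration): the leaky integrator sits in the perceptual input function, so the system perceives a smoothed mistake rate, and the output function is proportional, keeping a single integrator in the loop.

```python
# Leaky integrator in the perceptual input function: the perception is a
# smoothed mistake rate; the output function is proportional.
# The leak and gain values are invented for illustration.

def trial(perceived_rate, mistake, leak=0.02, gain=500.0):
    """Update the smoothed mistake-rate perception; return the new
    perception and the proportional output contribution (msec) to the
    reaction-time reference."""
    signal = 1.0 if mistake else 0.0
    perceived_rate += leak * (signal - perceived_rate)  # leaky integration
    error = 0.0 - perceived_rate      # reference for mistake rate is zero
    output = -gain * error            # proportional output raises the rt
    return perceived_rate, output

p, out = 0.0, 0.0
for wrong in [True] + [False] * 50:
    p, out = trial(p, wrong)
# the error signal decays between mistakes; call it "uneasiness" if you like
```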

Of course, this model tells us nothing about _what_ perception is
controlled (or _not_ controlled when a mistake occurs) by the button
press, though it seems likely that is a perception of the relationship
between light location (top/bottom) and button press (left/right).

Are the buttons placed side by side? I haven't seen the 1963 article that Schouten and Bekker reference. I just assume that mistakes can be detected (the light stays on for one second so there's enough time to see that the wrong button was pressed). I take the button press as simply a means of controlling the mistake rate and the reaction time, both relative to reference levels of zero. It doesn't matter if we propose that a relationship between button and light is being controlled -- or maybe that's what gives the perception of a mistake when one happens.

As we've been imagining the system to work (or at least I have), that system itself would not perceive any mistakes, because it only needs to report correctly what appears to be the state of the relationship at a given moment. Noise in the relationship signal can provide false signals, but those signals lead to correct responses, if you see what I mean. There is no malfunction. Only a higher-order system could realize that a few hundred milliseconds later it is obvious that the relationship is now wrong. "I thought at first it was the upper one, when the beep sounded, but now I see it wasn't." The perceived relationship would change with time.

> It would appear that the PCT model will give a reasonably good fit with the
> data from this experiment. I leave that as an exercise for the student.

I hope we don't have to hand in our homework;-)

With references, please.

I hope it's obvious to all that while I, too, am imagining things, what I am imagining is far more closely related to specific perceptions and actions than is the idea of uncertainty control. When we start asking "how" at the lower levels, some higher-order possibilities can be weeded out as not being feasible.

I'm sure that Martin can still calculate information rates in this proposed model, if he wants to.

Best,

Bill P.