Timing experiment

[From Bill Powers (2009.02.23.0010 MST)]

Martin Taylor 2009.02.21.13.49 --

I'd like to know a little more about the timing plot. d'^2 runs from
0 to 8, so d' runs from 0 to about 2.8. That doesn't seem to be
related directly to "probability of a correct press." Could you
translate the numbers on the y axis into the fraction of incorrect
presses? That is the form of the raw data, isn't it? From that I can
use my Chemical Rubber table to estimate the magnitude of the
hypothetical perceptual signal as a number of standard deviations
above the noise for each point on the plot, assuming a Gaussian noise
distribution.

Best,

Bill


[Martin Taylor 2009.23.10.21]

[From Bill Powers (2009.02.23.0010 MST)]

Martin Taylor 2009.02.21.13.49 –

I'd like to know a little more about the timing plot. d'^2 runs from 0
to 8, so d' runs from 0 to about 2.8. That doesn’t seem to be related
directly to “probability of a correct press.” Could you translate the
numbers on the y axis into the fraction of incorrect presses? That is
the form of the raw data, isn’t it? From that I can use my Chemical
Rubber table to estimate the magnitude of the hypothetical perceptual
signal as a number of standard deviations above the noise for each
point on the plot, assuming a Gaussian noise distribution.

d’ is just the number of standard deviations, so no need to look up
tables. It’s not signal power compared to noise power, but total signal
energy compared to noise power per unit bandwidth (at least in
acoustics, where those quantities are measurable or computable). Maybe
one could use the same concepts when considering the signals in the
neural pathways, but I think it’s not a real issue. If you think only
of the magnitude of the hypothetical perceptual signal when there is a
real signal as compared to its magnitude when there is no real signal,
that’s close to what signal detection psychophysicists consider.

I think you would like the logic of how to get between this and the
probability of a correct response in a forced-choice experiment, and
it’s pretty simple, so here it is in excruciating detail :-) .

We don’t have to assume Gaussian noise, since the real conditions are
not so stringent, but Gaussian noise works, so let’s assume it. First
let’s consider the case of detecting whether a signal exists or not
(whether a light turned on, for example). The hypotheses predict a
Gaussian distribution of likelihoods (that a signal was present) for
the “signal here” case, and another with the same standard deviation
for the “signal not here” case. We usually call these “signal plus
noise” (S+N) and “noise” (N) likelihood distributions. Any observation
of data can be placed somewhere along the likelihood axis.

I should emphasize that I’m not saying people actually perceive or
compute likelihoods. A mathematically ideal observer would, but that
says nothing about people. In practice, what we are talking about is
modelling – people’s responses do match what a model of this kind
does, but with a sensitivity (d’) less than a mathematically ideal
observer would achieve. The same argument can be made for any way of
creating the perceptual magnitude, provided that it is continuous (at
least on the scale of the observations) and subject to noise
fluctuations. I use likelihoods because that is what the mathematically
ideal observer would use as a perceptual signal.

[Figure: d-prime.jpg — the N and S+N likelihood distributions, with means separated by d’]

Looking at the diagram (forgive my freehand Gaussians), an observation
that gave rise to a likelihood as marked would be extremely improbable
if there had been no signal, but quite probable if there had been a
signal. The ratio of likelihoods for these two possibilities gives the
posterior probability of the signal having been present. If that is
high enough, the observer will report that a signal was presented. d’
is the separation of the means of the two distributions, measured in
standard deviations.
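For the equal-variance Gaussian model just described (N centered at 0, S+N centered at d’, both with unit SD), the log likelihood ratio at an observation x has the closed form d’·(x − d’/2). The function names below are illustrative; this is a minimal sketch under those assumptions, not anything from the original experiment:

```python
from math import exp, sqrt, pi, log

def normal_pdf(x, mu=0.0, sd=1.0):
    """Gaussian density with mean mu and standard deviation sd."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

def log_likelihood_ratio(x, d_prime):
    """log[ p(x | S+N) / p(x | N) ] for unit-SD Gaussians with
    means d_prime (signal plus noise) and 0 (noise alone)."""
    return log(normal_pdf(x, mu=d_prime) / normal_pdf(x, mu=0.0))

# With d' = 2, an observation at x = 2.5 is far more likely under
# S+N than under N alone; the closed form gives d'*(x - d'/2) = 3.
llr = log_likelihood_ratio(2.5, 2.0)
assert abs(llr - 2.0 * (2.5 - 1.0)) < 1e-9
```

A positive log ratio favors “signal present,” a negative one favors “no signal”; an observation left of the midpoint between the two means comes out negative.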

The foregoing is just for a single presentation, in which either there
was or there was not a signal. In the Schouten experiment, we are
talking about a “forced-choice” situation. Rather than the question
being whether a signal exists or not, the question is in which of
exactly one of two possible slots there is a signal, knowing that there
is a signal in one of them (in psychoacoustics, the slots are usually
one of two time intervals, but in the Schouten experiment it’s whether
the light is to the left or right). The diagram above still applies in
each of the two slots, but now the choice is in which of the two
intervals the likelihood was higher (that a signal was presented). This
question leads to a new Gaussian distribution, the distribution of the
difference (S+N) − N. That’s a distribution with twice the variance, or
sqrt(2) times the SD of the originals. Its mean is d’, and the
probability of a correct response is the probability that (S+N) − N > 0.

[Figure: d-prime_fc.jpg — the distribution of the difference (S+N) − N, with mean d’ and SD sqrt(2)]

So to get the probability of a correct response, you just see how much
of a normal distribution that has SD sqrt(2) and mean d’ lies above
zero. Conversely, to get from forced-choice probability of a correct
response to d’, find the location of the mean of a normal distribution
of SD sqrt(2) that has that much of it above zero. Examples, from the
Wikipedia page on Gaussian distribution, at d’^2 = 2, p correct = 0.84,
and at d’^2 = 8, p correct = 0.978 (approximately).
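The conversion just described reduces to P(correct) = Φ(d’/√2), where Φ is the standard normal CDF, and going the other way is just inverting that. A minimal sketch using only the Python standard library (function names are mine, chosen for illustration):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_correct_2afc(d_prime):
    """P(correct) in two-alternative forced choice: the (S+N) - N
    difference has mean d' and SD sqrt(2), so P(difference > 0)
    = phi(d' / sqrt(2))."""
    return phi(d_prime / sqrt(2.0))

def d_prime_from_p(p, lo=0.0, hi=10.0):
    """Invert p_correct_2afc by bisection (valid for 0.5 <= p < 1)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_correct_2afc(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(p_correct_2afc(sqrt(2.0)), 3))  # d'^2 = 2 -> 0.841
print(round(p_correct_2afc(sqrt(8.0)), 3))  # d'^2 = 8 -> 0.977
```

This reproduces the two worked examples above to within rounding.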

I hope this is more useful than confusing. And I hope my calculations
don’t include some stupid error like inverting a fraction!

Martin

[From Bill Powers (2009.02.23.1503 MST)]

Martin Taylor 2009.23.10.21 –

d’ is just the number of
standard deviations, so no need to look up tables.

OK, I see. We want the probability that a chance fluctuation could occur
with a value as large as the one observed or larger. To the left of the
intercept in your Fig. 1, the person will guess wrong as often as right.

So if the largest d’^2 is 8 on that plot, then the largest d’ is 2.83 and
the probability of a deviation that large or larger is 0.51%. I won’t
quibble about that small a difference between our numbers. I’m using a
single SD instead of combining two in quadrature.
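Bill’s 0.51% is consistent with the two-tailed tail probability of a single unit-SD Gaussian at about 2.8 standard deviations. A quick stdlib check under that reading (the function names are illustrative):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_tailed(z):
    """Probability of a chance deviation of z SDs or more, either sign."""
    return 2.0 * (1.0 - phi(z))

print(round(100 * two_tailed(2.8), 2))  # prints 0.51 (percent)
```

At 2.83 SDs exactly the two-tailed figure is closer to 0.47%, so the 0.51% evidently comes from reading the plot value as 2.8.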

For the plot of subject B, with the intercept at 230 milliseconds, I get
the following approximate data off the graph:

 T (msec)   T - T0 (msec)   d’^2   d’    Pr(correct)   Decision at T (msec)
   310           80          4.5   2.1     0.964            210
   295           65          4.0   2.0     0.955            195
   255           25          3.0   1.7     0.911            155
   245           15          1.0   1.0     0.683            145

(I didn’t do this quite right for the lowest point so I skip it)

   230            0          0.0   0.0     0.5 (?)          130
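The Pr(correct) values in the table appear to follow the central probability P(|z| < d’) = erf(d’/√2), i.e. the fraction of a single unit-SD Gaussian lying within d’ SDs of its mean — consistent with Bill’s remark about using a single SD rather than combining two in quadrature. A quick check, assuming that reading:

```python
from math import erf, sqrt

def central_prob(d):
    """P(|z| < d) for a standard normal: the fraction of the
    distribution within d SDs of the mean, i.e. erf(d / sqrt(2))."""
    return erf(d / sqrt(2.0))

# Compare against the table's (d', Pr(correct)) pairs.
for d, p_table in [(2.1, 0.964), (2.0, 0.955), (1.7, 0.911), (1.0, 0.683)]:
    assert abs(central_prob(d) - p_table) < 0.001
```

All four rows match to three decimal places, which supports that interpretation of the column.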

For the last column I’m just guessing that it takes 0.1 second after the
correct light has been identified for the reference position to be set,
for the hand to accelerate and decelerate to the correct button, and for
the button to be pressed enough for the contact to be closed. That seems
awfully short, but if as you suggest the person has one finger over each
button, a tenth of a second might be a reasonable time to set up and
execute the button press once the binary decision has been made.

The implied transport lag for this perception thus comes out to be 130
milliseconds, or 7.8 frames equivalent in our tracking experiment: right
in the middle of the range we measure in that experiment. Not bad for a
blind guess.

This seems to say that the perceptual signal rises enough above the noise
(or combined noise on each side, as you say) to give 91% correct guesses
in 25 milliseconds, about what I had guessed. Since we don’t know the
actual signal magnitudes, we don’t know the final value of the signal or
what the actual noise level is. A rise time of the optical signal (after
a step change of intensity) of 0.1 second would be a reasonable guess
(judging from flicker fusion data), which would imply that the 25
milliseconds is 1/4 of the rise time, which fits perfectly, sort of. The
curve would be a pretty straight line over the first 25% of the rise of
the signal.

I’m not using quite the right numbers for the data – maybe you can do it
better. Perhaps I should have used the Probable Error table.

Best,

Bill P.