what's the difference?, psych research

[From Rick Marken (920624.1320)]

Obviously, this is a lot more fun than working.

Martin Taylor (920624 13:00) says:

Oh, wow...disinformation piled on misinformation!

Oops. Hit a controlled variable.

I know that many people
think of aftereffects as the consequence of fatigue, but they can't be, at
least in most cases and perhaps in all.

Just meant to describe 'em, not explain 'em. Let me guess: have you
published in this field?

It is NOT true that if you adapt to upward movement and
then show upward movement, it still looks like it is moving up.

This was not intentional disinformation on my part. It was just based on
my own experience. After staring at a waterfall it still looks like it's
going down; after staring at green it still looks green. There may be
adaptation effects but they are not phenomenally obvious to me. But I
believe you if you say it happens.

A naive view based on
"adaptation=fatigue" cannot work because it predicts a lot of phenomena
wrongly. "Adaptation=improved perceptual precision" accounts for quite a
few of the phenomena that "adaptation=fatigue" does not, and predicts
numerically as well as qualitatively.

OK. If you say so.

In the red-green case, it is possible
(even likely) that fatigue plays a part.

Well, so my "fatigue" story wasn't all THAT bad, after all.

In the movement case, it is less
probable, and when we come to the more shape-based aftereffects, it is not
likely at all.

I didn't mean to step on any theoretical toes. I don't really have any
preferred explanation of perceptual adaptation. It's just fun to
experience it.

I don't follow Rick's analysis of the VOT detector system at all:

I was just thinking on my seat -- in terms of what Cooper/Nagy might have
had in their head when they did this adaptation procedure. I was not
proposing any explanation of my own. And I typically leave the
"disinforming" to my friends in the CIA.

If there really is a VOT detector, it asserts whatever VOT is appropriate
for the phoneme in question as a reference.

No comprendo. How do detectors "assert" anything?

Bruce is right. There are several different effects of adaptation.
Adaptation is a procedure, not a "perceptual effect".

I was just describing the procedure and the phenomenology. I used the
fatigue story to describe what seems to happen. Now I wish I'd never
said anything about "fatigue". Geez. Mea culpa. Mea culpa.

Rick often proclaims that all psychological studies done outside the
control paradigm are worthless.

No. I said that analyzing random noise is worthless -- no matter what
paradigm you use. The chances of getting random noise in the study of
behavior, however, are almost guaranteed if you use the S-R paradigm.

This shouldn't give him the right to assert
his own view of the world that they study, in contradiction to the results
they obtain. You can't claim better truth by throwing away data than by
looking at what data you have, however wrongheaded the data gatherers might
have been.

I did that? I was just saying that you will see a red square on white paper
after you stare at a green square. Any endorsement of a theoretical model of
perceptual adaptation, either implied or stated, was purely coincidental.

Sorry to be so harsh, Rick, but that message really seemed wrongheaded, if
not bull (headed).

It's OK. I hope your perception of what constitutes an appropriate way
to explain perceptual adaptation is back under control.

Now, to create another disturbance:

Martin Taylor (920624 14:10) says:

On the basis of studies of control that can be reduced to tracking, studies
that give correlations of 0.99+, Rick asserts that the near unity correlation
can and should be obtained for all controlled percepts.

Nope. I said (or meant to say) that the criterion for what constitutes a
scientific fact in psychology should be far stricter than it is. I think
a reasonable goal is correlations of .99+. This can be done when you are
studying control -- at least when you are studying variables that can be
quantified relatively easily. It should even be possible with higher
order variables that are harder to quantify (David Goldstein and Dick
Robertson did a study of control of "self esteem" where they got .99+
correlations). It can be done. It must be done if the study of living things
is ever going to be a science instead of a dice game (not that there's
anything wrong with dice games).

The three reasons you give for why one can't expect to get perfect
correlations even when studying control are ok. But they have nothing to
do with current research in psychology. The goal of research should
be high quality data -- always. Nearly all research in psychology provides
low quality data. The fact that this data is collected in research that
is done from the wrong perspective is irrelevant to my point -- which is
that there is precious little to be learned from looking at noise. The
JASA VOT study illustrates this point. Now, you can come up with all kinds
of reasons why they couldn't have gotten better data -- or you can just go
out there and get good data. I say "go with the second option"; the first
option does nothing but justify trying to make sense of worthless
garbage -- salt or no.

We know that control can be exercised only to the extent that information is
available to the perceptual function of an ECS. And THAT is inherently a
statistical process,

What is "THAT"? Information? The perceptual function? What is it about
control that is "inherently statistical"?

so we know that control IS a statistical process.

When a non-statistical model accounts for 99.87% of the variance in
the variables involved in control I think it's fair to say that the
"inherently statistical" part of the process of control is not worth
losing much sleep over.
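A quick arithmetic sketch (my illustration, not part of the original exchange) of what "accounts for 99.87% of the variance" means for the model-data correlation, since variance accounted for is the square of the correlation coefficient:

```python
# Hypothetical sketch: variance accounted for = r**2, so the quoted
# 99.87% figure implies a model-data correlation just under 1.
import math

variance_accounted = 0.9987            # figure quoted in the post
r = math.sqrt(variance_accounted)      # model-data correlation, about 0.9993
residual = 1.0 - variance_accounted    # variance left to "statistics": 0.0013
```

In other words, the "inherently statistical" residue in such a tracking model amounts to roughly a tenth of one percent of the variance.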

Go for the QUALITY data.

Regards

Phaedrus

···

**************************************************************

Richard S. Marken            USMail: 10459 Holman Ave
The Aerospace Corporation    Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)

[Martin Taylor 920626 19:40]
(Rick Marken 920624.1320)

We know that control can be exercised only to the extent that information is
available to the perceptual function of an ECS. And THAT is inherently a
statistical process,

What is "THAT"? Information? The perceptual function? What is it about
control that is "inherently statistical"?

THAT is the input to any ECS. Perception is largely a matter of extracting
useful consistencies out of a very noisy sensory system sensing a highly
variable world.

so we know that control IS a statistical process.

When a non-statistical model accounts for 99.87% of the variance in
the variables involved in control I think it's fair to say that the
"inherently statistical" part of the process of control is not worth
losing much sleep over.

True, in such an experiment. But remember that the amount of variance you
account for depends on the ratio between the range over which the variable
moves and the size of the unaccounted variation. Even in a tracking study,
if the target moved only over a range of 1 mm on a screen viewed at a normal
distance, I doubt you would find 99% correlations anywhere in your analysis.
In that kind of study, you can make the range of variation much larger than
the statistical variability, and if you can, more power to you. But I doubt
you can do it so readily at higher levels or under noisier conditions.
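Martin's range-versus-noise point can be sketched numerically. This is my illustration, not from the thread; the "tracking" signal, the 0.5-unit noise level, and the function names are all assumptions chosen only to show the effect:

```python
# Sketch: with the SAME amount of unaccounted "perceptual" noise, a model
# correlates almost perfectly with data when the tracked variable sweeps a
# wide range, and much worse when the range is comparable to the noise.
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def model_vs_data_correlation(signal_range, noise_sd, n=5000, seed=0):
    """Correlate a noiseless 'model' trajectory with data = signal + noise."""
    rng = random.Random(seed)
    signal = [i * signal_range / (n - 1) for i in range(n)]       # model sweep
    data = [s + rng.gauss(0.0, noise_sd) for s in signal]         # observed behavior
    return pearson(signal, data)

r_wide = model_vs_data_correlation(100.0, 0.5)   # wide sweep: r near 1
r_narrow = model_vs_data_correlation(1.0, 0.5)   # ~1-unit sweep: r far below 0.99
```

The same 0.5 units of noise that are invisible against a 100-unit sweep dominate a 1-unit sweep, which is exactly why the 1 mm tracking target would not yield 99% correlations.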

In the psychoacoustic experiments you so often exclaim against, the whole
problem is the determination of the perceptual variability. There can be no
PCT-based study in which control will do better than a perfect ECS whose
perceptual function is a mathematically ideal observer. Humans, well trained,
can come within 3 or 4 dB of that, under a wide variety of conditions.
Perceptual statistical variability has to be a limiting condition for control,
and hence control is inherently a statistical process.

Choose your experiment so that perceptual noise is swamped by big disturbances
in the controlled variable, and if none of the other factors I mentioned in
[920624 14:10] is important, then you may get your high correlations.

Go for the QUALITY data.

Yes, the best that suits the problem at hand. And I grant that PCT experiments
are likely to do better than non-PCT experiments, for good reason.

More later, on the other "statistical" postings.

Martin