Purpose in Research

[From Rick Marken (2007.08.18.1100)]

Here's some thoughts I've had recently about conventional
psychological research.

A central feature of virtually every psychology experiment done using
human participants (the new term for "subjects") is instructing the
subject to adopt some purpose, such as "push this button when a noun
appears" or "try to remember as many words as you can from this list",
etc. Indeed, if participants were not asked to adopt the purpose
described by the experimenter, there would be no apparent effect of
the independent variable (IV) on the dependent variable (DV). For
example, suppose an experimenter wants to study the effect of tone
intensity (IV) on reaction time (DV). Ordinarily, tones don't lead
people to press buttons. So, in order to do such an experiment, the
experimenter asks the subject to adopt the purpose of pressing a
button as quickly as possible when a tone is played. In control theory
terms, the subject is asked to control for a relationship between the
occurrence of a tone and the occurrence of a button press. The
experiment is not about what the subject is actually controlling for
or how the subject exerts this control; the experiment is about how
the IV (typically a disturbance to the variable that the subject was
asked to control) relates to the DV (the output that "corrects" for
the disturbance) when this purpose is carried out.
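
To make the control-theory reading concrete, here is a minimal sketch in Python of a participant controlling the relationship "tone on -> button pressed". The perceptual function, the reaction lag, and the trial length are all assumptions made for illustration, not anything taken from an actual experiment.

```python
# Illustrative sketch only: a tiny control loop in which the "participant" controls the
# relationship "tone on -> button pressed". All parameters below are assumptions.

def relationship_perception(tone_on: bool, button_down: bool) -> int:
    """1 if the instructed relationship currently holds, 0 if it is violated."""
    return 1 if (button_down or not tone_on) else 0

def run_trial(tone_onset: int, n_steps: int = 15, reaction_lag: int = 3):
    """Simulate one trial: the tone onset (IV) creates error, the press (DV) corrects it."""
    reference = 1                  # "keep the relationship satisfied"
    button_down = False
    error_steps = 0                # crude stand-in for a slowed output function
    history = []
    for t in range(n_steps):
        tone_on = t >= tone_onset
        p = relationship_perception(tone_on, button_down)
        error = reference - p
        if error:                  # error persists until the output (the press) removes it
            error_steps += 1
            if error_steps >= reaction_lag:
                button_down = True
        history.append((t, tone_on, button_down))
    return history

if __name__ == "__main__":
    for t, tone, press in run_trial(tone_onset=5):
        print(t, "tone" if tone else "----", "press" if press else "")
```

The press appears a few steps after the tone onset, which is the "reaction time" the conventional experiment measures; the loop that produces it is what the experiment ignores.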

What I have been focused on for years is the fact that the
relationship between IV and DV in such experiments depends on the
nature of the environmental feedback connection between DV and
controlled variable rather than on properties of the organism itself.
But what I have been ignoring is that in nearly all these
psychological experiments on people, the participants are asked to
carry out a purpose (they are asked to control something) and the
experimenter is not interested at all in _how_ the participants do
this (act purposefully) but is focused on the relationship between IV
and DV that exists only because a purpose is being carried out. The
focus is on the IV - DV relationship because this is supposed to
reveal something about the nature of the "processing" that goes on in
the participants' minds. The "behavioral illusion" shows that such
relationships will not reveal such processing. But I think what is
even more interesting is the fact that the controlling done by the
participants is almost completely ignored in such experiments.
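
A rough simulation can illustrate the first point; everything in it (the feedback constant, the organism gains, the disturbance) is assumed for illustration only. Two simulated "organisms" with very different internal gains, acting through the same environmental feedback function, produce essentially the same disturbance-to-output (IV-DV) slope, and that slope is set by the environment (about -1/G), not by the organism:

```python
# Illustrative sketch of the "behavioral illusion"; parameters are assumptions, not data.
import random

G = 2.0  # assumed environmental feedback function: p = G*o + d

def simulate(organism_gain, steps=5000, slowing=0.05):
    random.seed(0)                                 # same disturbance pattern for both runs
    o, d = 0.0, 0.0
    ds, outs = [], []
    for _ in range(steps):
        d += random.gauss(0, 0.02)                 # slowly drifting disturbance (the IV)
        p = G * o + d                              # the variable the system controls
        o += slowing * organism_gain * (0.0 - p)   # output (the DV); reference = 0
        ds.append(d)
        outs.append(o)
    return ds, outs

def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

if __name__ == "__main__":
    for gain in (2.0, 8.0):                        # two very different "organisms"
        d, o = simulate(gain)
        print(f"organism gain {gain}: observed IV-DV slope {slope(d, o):+.3f}  (-1/G = {-1/G:+.3f})")
```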

So what am I going to do to bring this to the attention of research
psychologists? One thing I want to do is go through descriptions of
experiments and make a catalog of the purposes participants are asked
to carry out. I've already looked through one journal and found that
it is usually very easy to see what purpose the participants are
asked to carry out. It's often a pretty general description but it
can be found very easily. These purposes are often described in the
instructions given to participants. I remember that Chuck Tucker was
always very interested in instructions; if Chuck is still on the net I
would like to see if he has any ideas about this. But what might be interesting is to see what kinds of purposes people are asked to carry out in psychological experiments and, possibly, categorize them.
Then I would pick one or two and try to show what kind of mechanism
(model) would be needed to carry out the instructed purpose. Such an
exercise might help researchers understand that people's ability to
carry out these purposes is something worth trying to understand in
itself. It also might help researchers understand the role of what
they call the IV and DV in these purposeful behaviors.

A second thing I would like to try is to replicate a standard
experiment, keeping the IV and DV the same but changing only the
participant's purpose (I would do it one person at a time, of course).
So the change in purpose would itself be an IV. I have a student who
might work with me on this. For example, I was thinking of doing
something like the Stroop experiment, with the purpose being either
"say the word" or "say the color" or "say whether the word is color
word". The effect of the IV on the DV should differ depending on the
person's purpose, showing (I think) that purpose has to be taken into
account when trying to understand the "causes of behavior". I would
imagine that there are studies where purpose has been manipulated --
in the form of different instructions to the participants -- while
keeping the IV and DV the same. If anyone knows of such studies I'd
appreciate hearing about them.
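
As a purely abstract sketch of the logic (not a model of the Stroop task; the perceptual functions and parameters are invented for illustration), the same disturbance (IV) and the same output (DV) can show opposite relationships depending on which perception the simulated participant is asked to control:

```python
# Toy illustration only: the "instructions" select which perception is controlled,
# and that selection determines the observed IV-DV relationship. All values assumed.
import random

def run(purpose, steps=2000, gain=0.5):
    """purpose = 'sum' controls o + d toward 0; purpose = 'diff' controls o - d toward 0."""
    random.seed(42)                                     # same disturbance pattern for both purposes
    o, d = 0.0, 0.0
    pairs = []
    for _ in range(steps):
        d += random.gauss(0, 0.05)                      # the IV: a drifting disturbance
        cv = (o + d) if purpose == "sum" else (o - d)   # the perception the instructions select
        o += gain * (0.0 - cv)                          # keep the controlled perception near 0
        pairs.append((d, o))
    return pairs

def correlation(pairs):
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

if __name__ == "__main__":
    for purpose in ("sum", "diff"):
        print(f"purpose '{purpose}': same IV, same DV, correlation {correlation(run(purpose)):+.2f}")
```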

While the purposes asked of the participants in most psychological research are ignored, this is not always the case. What distinguishes
the "baseball catching" studies is that they are about trying to
understand how the participants carry out their assigned purpose,
which is to catch a ball. But by and large, what seems to be true of
most psychological research is that 1) participants must have a purpose or they won't "respond" to the test "stimuli" at all, 2) these purposes are given in the instructions to the participants, and 3)
these purposes are then ignored while it is imagined that the
participants react mechanically to the stimuli presented to them as
they carry out their purposes.

Comments and suggestions will be gratefully accepted.

Best regards

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Gary Cziko 2007.08.18 13:42 CDT, responding to Rick Marken (2007.08.18.1100)]

Rick:

I think this is an interesting approach–looking for the purposes that subjects, er, participants, adopt in psychological experiments.

> A central feature of virtually every psychology experiment done using human participants (the new term for “subjects”) is instructing the subject to adopt some purpose, such as “push this button when a noun appears” or “try to remember as many words as you can from this list”, etc. Indeed, if participants were not asked to adopt the purpose described by the experimenter, there would be no apparent effect of the independent variable (IV) on the dependent variable (DV).

But how is it that the spoken or written words of the experimenter result in the participant having the “desired” purpose? Aren’t these words just disturbances themselves?

Usually, the participant wants to please the researcher (and, indeed, may even accept money for being in the experiment). But participants do become tired and bored and then there is conflict. Some participants have even refused to follow the experimenter’s instructions when they thought that they might be physically harming someone (the famous Milgram experiments).

–Gary

[From Rick Marken (2007.08.18.1315)]

Gary Cziko (2007.08.18 13:42 CDT) --

But how is it that the spoken or written words of the experimenter result in
the participant having the "desired" purpose? Aren't these words just
disturbances themselves?

Usually, the participant wants to please the researcher (and, indeed, may
even accept money for being in the experiment). But participants do become
tired and bored and then there is conflict. Some participants have even
refused to follow the experimenter's instructions when they thought that
they might be physically harming someone (the famous Milgram experiments).

Yes, these are good points. I think Chuck Tucker has thought about
such things and, if he is on the net, may have some comments about it.
But what is interesting to me is simply that people _do_ adopt the
purposes that are requested by the experimenter (as best as they can
figure out what those desired purposes are, anyway). And they _must_
adopt them or the experiment won't work at all.

In the experiment on reaction time (DV) to tones of different
intensity (IV), for example, there is no experiment unless the
participant adopts the purpose of pushing the appropriate button when
the tone comes on. People do not typically react to tones by pressing
buttons; a tone will not lead a person to press the button unless the
person is instructed to do this. So any relationship between IV and DV observed in such an experiment hinges on the participants' adopting a purpose where the controlled variable is something like "press button when the tone comes on".

I think what I have noticed (belatedly, after 25+ years looking at the
relationship between CT and conventional research; I'm not that swift,
I guess) is that purposeful behavior (control) is a sine qua non of
every psychological experiment, that purposes are provided to
participants by the experimenter (or the experiment won't happen) and
then these purposes are completely taken for granted while attention
is paid to the relationship between IV and DV. There are some
exceptions; the baseball study is one, and the Milgram compliance study, too. There the question is how long the participants will continue to control for what the experimenter has asked them to control for.

So I guess the point of my project is to show that control (purpose)
is the centerpiece of every experiment in psychology; and what is most
important about control -- the participants' ability to keep the
controlled variable at a particular reference -- is what is taken for
granted by the experimenter. What I am going to recommend is that
psychological researchers pay more attention to the controlling that
the participants are instructed to do. I want to show that this
controlling is fascinating in and of itself and that you need a model
like PCT to understand how it's done. PCT not only explains the IV-DV
relationships observed in conventional experiments, it can also
explain how people carry out the purposes that make it possible to
observe these relationships.

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Bill Powers (2007.08.18,1417 MDT)]

Rick Marken (2007.08.18.1100) –

> A central feature of virtually every psychology experiment done using human participants (the new term for "subjects") is instructing the subject to adopt some purpose, such as "push this button when a noun appears" or "try to remember as many words as you can from this list", etc. Indeed, if participants were not asked to adopt the purpose described by the experimenter, there would be no apparent effect of the independent variable (IV) on the dependent variable (DV).

I think you’ve been messing around with going up a level. What a
brilliant idea! Of course you’re right. Without the participant’s
adopting the task of controlling the variable as defined in the
instructions, the IV would have no effect on the DV. And the effect it
does have is to create an error that can be corrected only by making the
right response. If you can define the variable being disturbed and
controlled, you will be able to show that the effect of the instructions
is explained by PCT. I’m sure the role of the instructions has been noted
before, but not in the context of control theory.
I wonder if this view will reveal ambiguities in some instructions.
The participants are given a condition to maintain or bring about, but
that may leave alternatives for the way to do it. Something to keep an
eye on as you examine the instructions.
In the four-lights-four-buttons experiment that Dick Robertson wrote
about, the task was to stop the machine from winning, which it could do
by reaching 1000 points. Unknown to the participant, the way to do this
was to learn which button turned off which light, then learn the sequence
in which the lights came on, and then figure out that by pressing the
right button before the light came on, one could make the
machine’s score count downward. The ambiguity meant that the
participant had to learn several different control processes to bring the
variable “machine wins” to a reference level of zero. I’m also
thinking of Ted Cloak’s experiment with nailing the boards together –
the end-result desired is clear, but not how to achieve it.

That suggests a format for your analysis. The idea is to define a
controlled variable and a reference level for each experiment, plus the
event that disturbs the CV and the action that corrects the
error.
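
One possible way to record such an analysis, sketched here with assumed field names and with Rick's tone reaction-time example filled in purely for illustration:

```python
# Sketch of a catalog entry for the suggested analysis format; field names and the
# worked example are assumptions, not taken from any actual study description.
from dataclasses import dataclass

@dataclass
class ControlAnalysis:
    experiment: str
    controlled_variable: str   # what the instructions ask the participant to control
    reference_level: str       # the intended state of that variable
    disturbance: str           # the event (usually the IV) that creates error
    corrective_action: str     # the output (usually the DV) that removes the error

tone_rt = ControlAnalysis(
    experiment="tone intensity (IV) vs. reaction time (DV)",
    controlled_variable="relationship 'tone on -> button pressed'",
    reference_level="relationship satisfied on every trial",
    disturbance="onset of the tone",
    corrective_action="pressing the button as quickly as possible",
)

print(tone_rt)
```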

This is going to be interesting.

Best,

Bill P.

[Martin Taylor 2007.08.18.16.53]

[From Rick Marken (2007.08.18.1100)]

Here's some thoughts I've had recently about conventional
psychological research.

A central feature of virtually every psychology experiment done using
human participants (the new term for "subjects") is instructing the
subject to adopt some purpose, such as "push this button when a noun
appears" or "try to remember as many words as you can from this list",
etc.

I have a distinct sense of deja vu about all this, from a long discussion on this topic early in my participation on CSGnet. Maybe Dag's archives can be searched to recover it.

The argument, as I remember, revolved around the fact(?) that the cooperative subject has a purpose (a reference state for a perception) that the experimenter should be pleased with S's actions, and the mechanism whereby S controls this purpose is to discover what the experimenter wants and to do it. Usually this is easy because the experimenter explicitly says what is wanted.

Some time in the mid-60's I had a colleague who told me an anecdote that nicely illustrates how a subject may try to do what the experimenter wants but, having misinterpreted the instructions, do what the experimenter does NOT want, thereby failing to control the perception of seeing the experimenter as being pleased.

I'll call the colleague "Bruce" (after the faculty members of the Monty Python Australian School of Philosophy). One experiment that used to be quite typical was a tracking control study in which the subject tried to keep a pen-like instrument in contact with a small metal disk mounted peripherally on a rotating plate (like a vinyl record). Sometimes the subject saw the target in a mirror, sometimes in direct view. The measure of performance was usually the proportion of time for which the pen-device was kept in contact with the target disk.

When he was an undergraduate, Bruce was a subject in this experiment. Being a psychology student, he tried to figure out the nature of the experiment. The experimenter asked him to keep the pen on the target disk, so Bruce considered why the experimenter might want this, and decided that it must be an intelligence test. So he dismounted the rotating plate and laid the pen on the target disk, thereby scoring 100%, well out of the range of all other scores. Bruce expected the experimenter to congratulate him for his success in following instructions perfectly, which would not have been possible if he had allowed the plate to keep rotating. He was disappointed!

I'm sure I told this story the last time we had this thread, but there are many people on the list who were not there then, so I tell it again. I think it illustrates Rick's thesis very well. It's an inadvertent instance of:

A second thing I would like to try is to replicate a standard
experiment, keeping the IV and DV the same but changing only the
participant's purpose (I would do it one person at a time, of course).
So the change in purpose would itself be an IV. I have a student who
might work with me on this. For example, I was thinking of doing
something like the Stroop experiment, with the purpose being either
"say the word" or "say the color" or "say whether the word is color
word". The effect of the IV on the DV should differ depending on the
person's purpose, showing (I think) that purpose has to be taken into
account when trying to understand the "causes of behavior". I would
imagine that there are studies where purpose has been manipulated --
in the form of different instructions to the participants -- while
keeping the IV and DV the same. If anyone knows of such studies I'd
appreciate hearing about them.

Martin

[From Rick Marken (2007.08.18.1440)]

Bill Powers (2007.08.18,1417 MDT)--

Rick Marken (2007.08.18.1100) --

>A central feature of virtually every psychology experiment done using
> human participants (the new term for "subjects") is instructing the
>subject to adopt some purpose

I think you've been messing around with going up a level. What a brilliant
idea!

Well, now we're on the same page! ;-) Thanks!

Without the participant's adopting the task of
controlling the variable as defined in the instructions, the IV would have
no effect on the DV. And the effect it does have is to create an error that
can be corrected only by making the right response. If you can define the
variable being disturbed and controlled, you will be able to show that the
effect of the instructions is explained by PCT. I'm sure the role of the
instructions has been noted before, but not in the context of control
theory.

Yes. That's a good way to go. I wonder if I should just stick to
experiments of a particular type or go through the journals
willy-nilly. In some experiments the purpose is pretty clear but the
DV is some kind of measure of brain activity (this is the new
direction of scientific psychology, apparently) which obviously has no influence on the state of the controlled variable.

I wonder if this view will reveal ambiguities in some instructions.

Yes. I'll keep an eye on that. I think there are experiments where
there may be ambiguities that could account for some of the
differences in the ways people react.

That suggests a format for your analysis. The idea is to define a
controlled variable and a reference level for each experiment, plus the
event that disturbs the CV and the action that corrects the error.

Yes. Great idea (I would say brilliant but that's taken;-) But I do
wonder if I should limit the scope of the experiments I review and, if
so, what might be a reasonable basis for such limitations. Any
suggestions on this would be welcome. For now, I think I'll just
start with recent "cognitive" experiments, limiting it to those where
the participants are clearly asked to take actions that restore a
variable to an intended state (as in the tone reaction experiment).

This is going to be interesting.

I hope so. I'll keep you posted.

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Chuck Tucker (2007.8.18.1821)]

I have forever been stressing the importance of the instructions which are given by the "E" to the "S." The results (data) of any investigation or inquiry are totally dependent on the interaction between the investigator and the investigatee. The problem is that there are very few studies which provide the reader with a video transcript of that interaction (I am recalling Bruce's videos of the rats!). I have not found any study which tells the reader what the investigator said and did and what the investigatee said and did (many experiments have as their "data" answers to questionnaires or, as our friend Carl Couch used to say, "chicken scratches" on pieces of paper; his data were transcripts of verbal and nonverbal action which took place between participants in each study).

What we do know from studies of studies which were done in the 1950's and 60's (called the "social psychology of psychological research," studies of survey "instruments" [done at U of M but not published until the 1980's], and "ethnomethodological studies" done by one of Rick's professors, Harold Garfinkel) is that instructions make a tremendous difference in the outcome or data of a study, and unless you know what those instructions were (for each interaction between "E" and "S") you cannot assess the results. One simply does not know what the data are in any study unless the action of the participants is known in detail.

I recall that it was Bill's Father who pointed out (to Bill, I believe) that people were just following the instructions given to them in any study involving control theory. I don't recall whether that was meant as a critique or a simple factual statement. I take it as a factual statement.

Clark and I talked about instructions many times at the meetings of the CSG. I specifically recall that we discussed the Milgram studies (all 23 of them) at the meeting that took place in the nunnery in Wisconsin in late September, 1989. I have a very clear picture in my head now of that presentation and some of the events of the meeting, since Hugo hit SC during that time and my family was in AZ awaiting the death of my Father-in-Law (I spent some time calling AZ to inquire about both events).

We tried to show how the instructions in each of the studies differed and how the results were related to those instructions (BTW, you can find a dramatization of one of the studies on YouTube, but I doubt that it is accurate!). The percentage (out of 40) of individuals (called "Teacher" in every study!) who pulled the lever noted as "XXXDanger" ranged from a high of 97% (one person stopped before going all the way!) to a low of 3% (one went all the way, but he did not know that others had dropped out). The instructions made the difference, NOT obedience to authority or the desire to shock another person. The difference between those two studies (we arranged the studies by the percentage of Teachers who went all the way) is that in the 97% study the Teacher didn't pull the lever at all; that was delegated to another person who was part of a "group" (very similar to having someone else do your killing for you so you can avoid any direct responsibility). In the 3% study, all of the members of the "group" dropped out at certain points in the study, the last one being the "Experimenter," who told the Teacher to stop pulling levers (the one person who went all the way "explained" his action by the fact that he agreed to complete the study regardless of the others who "wimped out"; I think we all know someone like that, don't we!).

BTW, I have never read anyone else but Milgram write about the studies that involved a "group."

Regards,

Chuck


[From Rick Marken (2007.08.19.1050)]

Hi Chuck. Nice to see that you're still with us.

Chuck Tucker (2007.8.18.1821)--

I recall that it was Bill's Father who pointed out (to Bill, I believe)
that people were just following the instructions given to them in any
study involving control theory. I don't recall whether that was meant
as a critique or a simple factual statement. I take it as a factual statement.

I don't know what he might have meant either but my guess is that it
was meant in the sense of "big deal; the person is just doing what you
asked him to do". I think this is the attitude of all psychologists
regarding a person's purposes in the experiment. From an
experimenter's perspective, what is interesting about behavior is not its purpose but, rather, what the person does to achieve that
purpose. So when you ask someone to name the color in which words are
written in a Stroop test, the fact that the person can do this is less
interesting than the fact that it is difficult not to also name the
word when it is the name of a color. So the focus in psychology is on
disturbance-output relationships (or relationships between
disturbances and side effects of outputs, like reaction times) rather
than on the fact that people can carry out the requested purpose;
purpose is taken for granted and, therefore, ignored.

The same happens in animal research, by the way. Operant researchers,
for example, find it far more interesting to look at how changes in the feedback path linking action to the appearance of food affect what an animal does to feed itself than the fact that the animal feeds itself. The "establishing operations", which involve making an
animal hungry as heck, are like the instructions in an experiment with
people: establishing the purpose that will be used as the basis of
research on how various operations influence the organism's ability to
carry out that purpose. Without the purpose -- without the animal
being hungry -- the experimental operations would have no effect.
Schedules of reinforcement, for example, don't have much effect on the
behavior of animals who are not interested in the reinforcement (food,
usually).

What I will be trying to show in my research is that the ability to
carry out a purpose -- to "just do what one is told to do" -- is
interesting in itself. Indeed, understanding how one does this (what
they are asked to do) is what the science of purpose (PCT) is all
about.

What I would also like to show is that what one finds in conventional
psychological experiments depends strongly on the purpose that a
person actually adopts (what perceptual variable they actually
control). One way to show this, I think, would be to test a person
several times in the same experiment using different instructions
regarding what their purpose should be each time. That is, I would
like to show how instructions regarding what to control -- what
purpose to have -- can affect what one discovers about relationships
between experimental and behavioral variables that are found in
conventional experiments.

Given your interest in the role of instructions in research, Chuck, I
wonder if you know of any such experiments that have already been
done. Has anyone done an experiment where the independent variable is
a difference in the purpose that the participant is instructed to
have? The best example would be one where the same participant does
the entire experiment having one purpose (such as, in the Stroop
experiment, the purpose of saying only the names of the colors) and
then does the experiment again with a different purpose (such as
saying whether the word is a color word) and seeing whether the
relationship between IV and DV is different in each case. If it is, it
shows that purpose plays an important role in the nature of the
influence the world appears to have on behavior (a fact expressed in the second law of PCT: o = 1/g(r - d), the first law, of course, being p = r).

Best regards

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Chuck Tucker (2007.08.19.2133)]

Theoretically every experiment that is done has the potential of finding out how instructions are related to purpose. Experimenters try to standardize their instructions, but unless the instructions are read from a computer screen, without any additions from another person, the instructions for each participant can be different. As I pointed out in my previous note, unless we have a transcript of the action between the participants we don't know what transpired in the experiment.

The other problem is that most experiments are done trying to deceive the participants. Experimenters don't want the participants to know the "purpose" (as you know, they call it the "hypothesis being tested"), so the participants "make up" a purpose and try to cooperate with the purpose they made up. Experiments have to be done where the purpose is explicit.

The research done on survey research at the U of M in the '50s and '60s showed that interviewers actually change the questions they ask respondents over time. The interviews were tape recorded and the recordings were compared with the questions and answers on the interview schedules. One of the most counterintuitive findings of the study was that the more familiar interviewers became with the interview schedule, and the further they got from the initial instructions on how to do the interview, the less likely their recorded answers were to be accurate. What they did was either change the question over time or record an answer which was most familiar to them from previous interviews. This finding led to a change in training procedures: reinstructing interviewers after about 10 interviews.

The best way to assure that the experiments are done properly is to do them yourself and keep careful records.

Chuck


[From Bill Powers (2007.08.20.0757 MDT)]

Chuck Tucker (2007.08.19.2133) –

> Theoretically every experiment that is done has the potential of finding out how instructions are related to purpose.

This is especially true in control tasks using a computer, because it’s
possible to include the reference level in the parameters that are used
to fit the model to the data. We can then see what the person was trying
to do, which can be different from what the instructions were supposed to
mean (even if the instructions are read off the screen – words
themselves are often ambiguous).

> The other problem is that most experiments are done trying to deceive the participants. Experimenters don’t want the participants to know the “purpose” (as you know, they call it the “hypothesis being tested”), so the participants “make up” a purpose and try to cooperate with the purpose they made up. Experiments have to be done where the purpose is explicit.

That’s an excellent observation. I’ve noticed that a lot of psychology with human beings seems to be trying to show "I know more about you than you know. In trying to show that I do know, I have to conceal from you just what it is that I think I know, to keep you from either helping me prove it (if you like me) or deliberately not showing whatever it is (if you don’t)."

> The research done on survey research at the U of M in the ’50s and ’60s showed that interviewers actually change the questions they ask respondents over time. The interviews were tape recorded and the recordings were compared with the questions and answers on the interview schedules. One of the most counterintuitive findings of the study was that the more familiar interviewers became with the interview schedule, and the further they got from the initial instructions on how to do the interview, the less likely their recorded answers were to be accurate. What they did was either change the question over time or record an answer which was most familiar to them from previous interviews. This finding led to a change in training procedures: reinstructing interviewers after about 10 interviews.

This reminds me of an experiment that Heinz von Foerster told me about.
He recorded a word (I think it was “banana”) on a tape loop,
and played it to people who were told to report any deviations from the
same word, and what word it was. Of course after listening a while, most
people started hearing other words. They couldn’t believe that the word
was always the same. A lovely demonstration of the imagination connection
– and how it replaces the present-time perceptual signal.

> The best way to assure that the experiments are done properly is to do them yourself and keep careful records.

That may not be enough, as your own examples above showed. Who can claim
to be immune from these phenomena? I think all you can do is assume that
you will do the same, and try to design your procedures so they don’t
depend only on your own perceptions or your own memory (i.e., multiple
observers, devices recording data and showing instructions).

One of these days I’m going to design that experiment I’ve described before. Set up a two-person task in which one person sees a display of movable objects, and the other sees the same display, but the objects are not movable. The “instructor” tries to tell the other person how to move the objects so as to match a configuration the instructor sees printed out on a sheet of paper.

Example: “Put the red circle between the blue square and the yellow triangle.”

Of course this could be done without a computer, using two sets of
objects on tables with a screen between them. Maybe someone else will
beat me to this.

Best.

Bill P.

[Martin Taylor 2007.08.20.10.45]

[From Bill Powers (2007.08.20.0757 MDT)]

This reminds me of an experiment that Heinz von Foerster told me about. He recorded a word (I think it was "banana") on a tape loop, and played it to people who were told to report any deviations from the same word, and what word it was. Of course after listening a while, most people started hearing other words. They couldn't believe that the word was always the same. A lovely demonstration of the imagination connection -- and how it replaces the present-time perceptual signal.

I did some experiments with this effect in 1962-3, but with a twist, reported in: Taylor, M.M. and G.B. Henning, Verbal transformations and an effect of instructional bias on perception. Canad. J. Psychol., 1963, 17, 210-223

We used tape loops of different lengths, and asked people to report what they heard each time it changed. They expected to hear changes, because we gave them a training trial with what seemed to be repetitions but had slight changes: "We ain't so mad Anna today" became "We ate some banana today", or something like that, plus a few more variants. In the experimental trials there were no physical changes, but the subjects did hear a variety of changes.

In experiments in other perceptual domains, we had found consistently that subjects reported a variety of forms, and that the cumulative number of changes, whether to a new form or back to an old one, was very tightly proportional to N*(N-1), where N was the total number of different forms reported to that point. This relation held equally closely for the verbal transformation effect.

The point of the experiment was not to replicate for the verbal transformation effect what we already knew from other domains (such as the Necker Cube, dynamic rotating spikes on a disk, non-verbal auditory patterns -- I forget what all we had tried). The point was to try to see whether what was changing was truly perceptual, using the N*(N-1) relationship and getting the subjects to expect different things to appear in the changes.

We had two groups of subjects, and we gave them different training tapes and different instructions, but the same test tapes. One group was told that all the changes they heard would contain only English words, though they might be nonsensical, and that was what the changes on the training tape contained. The other group was told they might hear nonsense, and their training tape had a few nonsense changes like "We batisababatoday".

On the test tapes, both groups (and I mean the individual subjects -- we did analyze by individual before combining the results!) reported some nonsense, but the "nonsense-primed" group reported many more forms and much more nonsense than did the "English only" group.

Now this result could happen if the subjects were actually perceiving the same regardless of the instructions and priming, or if the perceptions were different depending on the instructions. However, if they actually perceived the same but suppressed reports of changes to a nonsense form, then the N*(N-1) relation would not hold for the "English only" group. To see this, imagine that two different people both had perceived English forms A and B, and a nonsense form X. That's three forms, and N*(N-1) = 6. But if one of the people reported only A and B, N*(N-1) = 2.

Some of the transitions reported by the "anything allowed" person would be of the form A -> X -> A. If X were suppressed by the English group, no transition would be reported. But there are also transitions A->X->B, which the English only group would report as A->B in addition to A->B reports occasioned by a direct A->B transition. They should report too many changes for the number of forms heard. And they don't. The curves for the two groups were essentially identical.
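
A crude numerical illustration of this argument (the transition statistics below are invented; only the counting logic matters): if a listener who actually perceived a nonsense form X reported only the English forms, the cumulative changes would come out large relative to N*(N-1) for the forms reported.

```python
# Toy demonstration of the suppression argument; the perceived sequence is made up.
import random

def perceived_stream(length=300, forms=("A", "B", "C", "X"), switch_prob=0.3, seed=1):
    """A made-up sequence of perceived forms; X stands for a nonsense form."""
    random.seed(seed)
    stream = [random.choice(forms)]
    for _ in range(length - 1):
        if random.random() < switch_prob:
            stream.append(random.choice([f for f in forms if f != stream[-1]]))
        else:
            stream.append(stream[-1])
    return stream

def report_stats(stream, suppress_nonsense=False):
    """Count reported forms and reported changes, optionally dropping the nonsense form X."""
    reports = [f for f in stream if not (suppress_nonsense and f == "X")]
    changes = sum(1 for a, b in zip(reports, reports[1:]) if a != b)
    n = len(set(reports))
    return n, changes, changes / (n * (n - 1))

if __name__ == "__main__":
    stream = perceived_stream()
    for suppress in (False, True):
        n, c, ratio = report_stats(stream, suppress_nonsense=suppress)
        label = "suppressing X" if suppress else "reporting all"
        print(f"{label}: forms={n}  changes={c}  changes/(N*(N-1)) = {ratio:.1f}")
```

The suppressed report shows noticeably more changes per N*(N-1) than the full report, which is the excess that was not found in the actual data.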

We argued, therefore, that the instructions and training tape had truly changed what the subjects perceived, and not what they reported of what they perceived.

Martin

[From Bill Powers (2007.08.20.0925MDT)]

Martin Taylor 2007.08.20.10.45 –

Pretty neat experiment, Martin. Finding the invariant distribution and
then using it to deduce what was perceived is pretty clever.

I wonder if something could be done with requiring people to correct
errors that they heard. I’m not sure that would distinguish between false
reports and false perceptions.

In von Foerster’s experiment, there was no warning or suggestion that
changes would be heard.

Best,

Bill P.


[From Rick Marken (2007.08.20.1030)]

Bill Powers (2007.08.20.0757 MDT)--

> Chuck Tucker (2007.08.19.2133) --

Theoretically every experiment that is done has the potential of finding out
how instructions are related to purpose.

This is especially true in control tasks using a computer, because it's
possible to include the reference level in the parameters that are used to
fit the model to the data.

Right now it will suit my purposes just to get a reasonable idea of the purpose the participants are instructed to adopt and to see (to the extent possible) whether this purpose was carried out. For example, in
one study I just read the method section says: "Subjects were to make
a keypress response indicating the orientation of the first stimulus
...". To the extent that participants made differential keypresses to
stimuli of different orientations, they carried out the instructed
purpose. That's the part of the experiment that is typically ignored;
the participants' ability to carry out such purposes.
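
A minimal sketch of that check, with an assumed instruction mapping and made-up trial data: the measure is simply the proportion of trials on which the keypress matches the instructed orientation-to-key mapping.

```python
# Minimal sketch; the mapping and the trial data are assumptions, not from the study cited.
instructed_mapping = {"left": "z", "right": "m"}   # assumed instructed purpose

# (orientation shown, key pressed) for a handful of made-up trials
trials = [("left", "z"), ("right", "m"), ("left", "z"), ("right", "z"), ("left", "z")]

carried_out = sum(1 for orientation, key in trials if instructed_mapping[orientation] == key)
print(f"instructed purpose carried out on {carried_out}/{len(trials)} trials")
```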

One of these days I'm going to design that experiment I've described
before. Set up a two-person task in which one person sees a display of
movable objects, and the other sees the same display, but the objects are
not movable. The "instructor" tries to tell the other person how to move the
objects so as to match a configuration the instructor sees printed out on a
sheet of paper.

I presume the goal of this study is to show that instructions can be
ambiguous specifiers of purpose (reference states for controlled
variables). In some cases (as in Martin's experiment on "verbal transformations", Martin Taylor [2007.08.20.10.45]) ambiguity is the
point. But I think this is fairly rare and, besides, the ambiguity of
the specification of purpose is not the problem I am addressing.

The problem that I see is not that researchers are unaware of the fact
that their instructions might be somewhat ambiguous; the problem is
that researchers fail to see (or ignore) the fact that, when the
participants in their research try to follow their instructions (as
they understand them) they are carrying out a purpose. This is a
problem because it keeps researchers from appreciating the importance
of understanding the phenomenon of purpose (control). Instead,
researchers take purpose for granted (participants usually carry out
the purpose that the researcher instructs them to carry out) and
focus only on the side effects of how this purpose is carried out.

I think this is related to your observation that psychologists often
seem most interested in showing to participants in their research (and
the audience for their research in general) "I know more about you
than you know". A person who is instructed to press different keys
depending on an object's orientation knows as much as the experimenter
knows about what he is doing (ie. what his purpose is; what he is
controlling for). But he doesn't know that, say, his speed of response
to some orientations differs from that to others. So when the
psychologist finds this in an experiment, he knows more about the
participant than the participant knows. But, of course, neither knows how the participant was able to carry out the purpose of pressing a different key to objects in different orientations.

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Rick Marken (2007.08.20.1100)]

Martin Taylor (2007.08.20.10.45) --

We used tape loops of different lengths, and asked people to report what they heard each time it changed... We had two groups of subjects, and we gave them different training tapes and different instructions, but the same test tapes.

On the test tapes, both groups (and I mean the individual subjects -- we did analyze by individual before combining the results!) reported some nonsense, but the "nonsense-primed" group reported many more forms and much more nonsense than did the "English only" group.

This is exactly the kind of experiment I was hoping to find. Thanks.

The main difference between the two groups was in their purpose (as
given in the instructions): one group had the purpose of reporting
nonsense words and the other group didn't. What could be called the
IV (the type of words on the tape) were reported differently depending
on the purpose of the participants. Thanks, I will use this study as
one example of how a "mere" change in a person's purpose can change
the apparent relationship between an IV and a DV.

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[Martin Taylor 2007.08.20.14.16]

[From Rick Marken (2007.08.20.1100)]

Martin Taylor (2007.08.20.10.45) --

We used tape loops of different lengths, and asked people to report what they heard each time it changed... We had two groups of subjects, and we gave them different training tapes and different instructions, but the same test tapes.

On the test tapes, both groups (and I mean the individual subjects -- we did analyze by individual before combining the results!) reported some nonsense, but the "nonsense-primed" group reported many more forms and much more nonsense than did the "English only" group.

This is exactly the kind of experiment I was hoping to find. Thanks.

The main difference between the two groups was in their purpose (as
given in the instructions): one group had the purpose of reporting
nonsense words and the other group didn't.

I doubt that. Each group had the purpose of reporting what they heard.

What could be called the
IV (the type of words on the tape) were reported differently depending
on the purpose of the participants.

No. The evidence of the experimental analysis is that they reported similarly but perceived differently.

Thanks, I will use this study as
one example of how a "mere" change in a person's purpose can change
the apparent relationship between an IV and a DV.

You can use it as an example of how a change in a person's expectations alters the relation between environmental input and conscious perception. (I assume that telling them they might hear nonsense and giving them a tape in which the variations did include nonsense gave them the expectation that they would continue to hear some nonsense, whereas telling them they wouldn't and giving them a training tape without nonsense led them not to expect to hear nonsense -- though some did, all the same).

Usually you can't distinguish between what a person perceives and what they report they perceive. This experiment was set up so that we could make that distinction.

Martin

[From Rick Marken (2007.08.20.1210)]

Martin Taylor (2007.08.20.14.16)

>Rick Marken (2007.08.20.1100)
>
>This is exactly the kind of experiment I was hoping to find. Thanks.
>
>The main difference between the two groups was in their purpose (as
>given in the instructions): one group had the purpose of reporting
>nonsense words and the other group didn't.

I doubt that. Each group had the purpose of reporting what they heard.

Yes, of course. But one group was also told that there would be only
English words while the other was told that they might hear nonsense.
I take this as implicitly giving one group the purpose of reporting
nonsense if they heard it and the other not.

No. The evidence of the experimental analysis is that they reported
similarly but perceived differently.

I would say that that's a conclusion based on your assumptions. The
evidence (what was observed) was that the group told that there might be nonsense words reported more nonsense words than the group that was told that there would be only English words. This is evidence that
the group told that there would be nonsense words adopted the purpose
of reporting nonsense words. Reporting whether words are nonsense or
not is a purposeful activity, I think, and the fact that one produced
this result (reported nonsense words) far more than the other suggests
that most of the people in one group had the purpose of doing this and
few of those in the other group did.

You can use it as an example of how a change in a person's
expectations alters the relation between environmental input and
conscious perception.

OK. But I see expectations as being basically the same as purposes in this situation. One group expected that they would be reporting
nonsense words so they had the purpose of reporting nonsense words if
they heard them -- or thought they heard them. The fact that the
"expect nonsense words" group reported nonsense words more often than
the other group suggests that people in the "expect nonsense words"
group had, as one of their purposes, to report what they heard as
nonsense words; people who did not expect nonsense words apparently
did not always adopt reporting such words as their purpose.

Usually you can't distinguish between what a person perceives and what
they report they perceive. This experiment was set up so that we
could make that distinction.

Yes. And it was very clever. But the question of whether the "expect
nonsense words" group reported nonsense words more because they
perceived them or simply said they perceived them is irrelevant to the
question of whether they adopted the purpose of reporting nonsense
words. The simple fact that the "expect nonsense words" group reported nonsense words far more than the other group suggests that this group
did have the purpose of reporting nonsense words, whether this was
because they perceived them or simply wanted to say that they heard
such words.

I think the interesting finding of the study is that by simply
implying that people should have different purposes you can influence
the apparent effect of "stimuli" (the sounds on the tape) on
responses (what kind of word they say they hear).

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Bill Powers (2007.08.20.1425 MDT)]

Martin Taylor 2007.08.20.14.16 –

> I doubt that. Each group had the purpose of reporting what they heard.

Rick’s comment about expectations and purposes is probably relevant here.
Having an expectation may be like turning on a particular perceptual
input function (by turning on an associated control system). If one is
employing a PIF that detects nonsense syllables, then there will be
reports of nonsense syllables (including false reports owing to system
noise). If no such PIF is active, there will be no reports of nonsense
syllables.

This follows from the PCT model of perception in which one kind of
perception goes with one physical PIF. The other model, the
“coding” model, assumes that there is just one complex PIF and
its output is coded to say what the output means – real word, nonsense
syllable, or any other class of meaning like “rowboat.” The
coding model seems to go best with category perception, rather than
continuously-variable perceptions like speed or distance. The code says
what category the perception belongs to, but not how much of it there
is.

With the PCT approach, many PIFs can be active at the same time, each
responding to the degree that the inputs reaching it resemble the
“canonical” perception it is organized to detect. So a word
like “gorp” or “surd” does get some response from a
nonsense-detector, even though there are other detectors that clearly
recognize those words. A momentary loss of sensitivity, some noise, or a
slight flaw in the recording could lower the response of the
familiar-word detector enough to let the “nonsense” perception
dominate, leading to a false report of “nonsense”.

This also could explain how a person can get a quite clear impression of
a meaningful word that is different from the word actually being
replayed. The word we recognize is simply the response of the PIF that
responds the most to the pattern of sound intensities reaching
it.
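
Here is a minimal numerical sketch of that mechanism (a toy
construction, not anything from the study; the feature vectors, noise
level, and detector patterns are made-up assumptions). Each detector
scores an incoming sound by how closely it resembles that detector's
canonical pattern, noise is added, and the report is simply whichever
detector responds most strongly. The only difference between the two
groups is whether a nonsense detector is switched on at all:

import random

def report(sound_features, detectors, noise_sd=0.15):
    # Score the input against each detector by a simple dot-product
    # similarity, add independent noise, and report the label of the
    # detector that responds most strongly.
    best_label, best_response = None, float("-inf")
    for label, canonical in detectors.items():
        similarity = sum(s * c for s, c in zip(sound_features, canonical))
        response = similarity + random.gauss(0.0, noise_sd)
        if response > best_response:
            best_label, best_response = label, response
    return best_label

# Made-up feature vectors; only the qualitative pattern matters.
familiar_word = [1.0, 0.2, 0.1]
english_only_group = {"word": [1.0, 0.2, 0.1]}
expect_nonsense_group = {"word": [1.0, 0.2, 0.1],
                         "nonsense": [0.6, 0.6, 0.6]}

trials = 1000
for name, detectors in [("expect nonsense", expect_nonsense_group),
                        ("English only", english_only_group)]:
    n = sum(report(familiar_word, detectors) == "nonsense" for _ in range(trials))
    print(name, "group: nonsense reports on", n, "of", trials, "trials")

With these particular numbers the "English only" group can never
report nonsense, while the added noise makes the "expect nonsense"
group falsely report nonsense on a modest fraction of trials, which is
the qualitative pattern described above.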

That is somewhat off the track here, but it does relate to how
expectations, understood as active PIFs, can lead to differences in
perception (as opposed to reports of perceptions).

I suspect that we haven’t quite got the right story yet. It’s a little
hard to swallow the idea that for every possible different perception
there is a perceptual input function designed just to detect that one
perceptual variable. I don’t see how it would work yet, but I have vague
stirrings of ideas about constructing control systems “on the
fly” using different combinations of input and output functions.
That, of course, would rule out the idea that minority perceptions coexist
with the majority, though a compromise would be major subsystems
operating in parallel, each of which can construct control systems of
different kinds. We need data, of course, that will force us to choose a
new model.

Best,

Bill P.

[Martin Taylor 2007.08.21.16.15]

[From Bill Powers (2007.08.20.1425 MDT)]

Martin Taylor 2007.08.20.14.16 --

I doubt that. Each group had the purpose of reporting what they heard.

Rick's comment about expectations and purposes is probably relevant here. Having an expectation may be like turning on a particular perceptual input function (by turning on an associated control system). If one is employing a PIF that detects nonsense syllables, then there will be reports of nonsense syllables (including false reports owing to system noise). If no such PIF is active, there will be no reports of nonsense syllables.

My objection to Rick's approach was that he took the purpose to be the generation of _responses_ that included nonsense syllables by the group that was told that nonsense might appear in the repetitive stream. The experimental data strongly suggest that it was the perceptions that were different between the two groups, and that the responses to the perceptions just reported pretty accurately what was perceived.

Your proposal that the purpose was to achieve different perceptions because of the different instructions does agree with the experimental data analysis, and moreover, it is in the spirit of PCT that behaviour is the control of perception, not of response. I don't really believe it, nevertheless. My impression is that the subjects in the "anything goes" group had no particular goal to report nonsense -- most of their responses were not nonsense, but more of them were nonsense than were the responses of those "primed" to hear only English.

I don't have a reprint of the study here at home, but if I remember to get one the next time I am at DRDC, I'll check the instructions. My memory is that there was not so much an invitation to hear nonsense for that group as there was an inhibition against doing so for the "English-only" group. But that doesn't really matter; either way, reporting exactly what they heard was supposed to be the purpose they used in support of their purpose of pleasing the experimenter. We can't tell whether that was in fact the purpose they used, but the analysis suggests that it probably was.

This follows from the PCT model of perception in which one kind of perception goes with one physical PIF. The other model, the "coding" model, assumes that there is just one complex PIF and its output is coded to say what the output means -- real word, nonsense syllable, or any other class of meaning like "rowboat." The coding model seems to go best with category perception, rather than continuously-variable perceptions like speed or distance. The code says what category the perception belongs to, but not how much of it there is.

With the PCT approach, many PIFs can be active at the same time, each responding to the degree that the inputs reaching it resemble the "canonical" perception it is organized to detect. So a word like "gorp" or "surd" does get some response from a nonsense-detector, even though there are other detectors that clearly recognize those words. A momentary loss of sensitivity, some noise, or a slight flaw in the recording could lower the response of the familiar-word detector enough to let the "nonsense" perception dominate, leading to a false report of "nonsense".

My subjects were not asked to report a binary "sense" or "nonsense", but to speak what they heard every time it changed. The experimenter transcribed that, and afterwards checked with the subject that the transcription was a fair representation. The timings were based on the subject's click of a microswitch at the moment they heard the change.

This also could explain how a person can get a quite clear impression of a meaningful word that is different from the word actually being replayed. The word we recognize is simply the response of the PIF that responds the most to the pattern of sound intensities reaching it.

That's the usual interpretation, yes. It doesn't seem to have much relation to control, though.

That is somewhat off the track here, but it does relate to how expectations, understood as active PIFs, can lead to differences in perception (as opposed to reports of perceptions).

I'm not clear what you would think to be an "active" PIF. In my own conceptual model, all PIFs are active all the time, whether the perception in question is at the moment being controlled, or even whether it is ever controlled. Their sensitivities, however, may change according to circumstances (context, purpose, expectation, association). The strongest of these is probably associative context, but "you see what you want to see" (control) seems also to be important when the data are ambiguous (as they usually are in politics).

If my poly-flip-flop concept of category perception and association is half-way correct, it would be relatively easy to see how an output from some control system could supply a signal that affected the gain for one conceptual area while reducing it for another. Whether this would apply to the experiment under discussion is a different matter.
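
As a rough sketch of that alternative (gain modulation of detectors
that are always running, not an implementation of the poly-flip-flop
idea itself; the numbers and the form of the gain trade-off are made-up
assumptions), every detector keeps computing all the time, and a single
context signal raises the sensitivity of one conceptual area while
lowering the other:

import random

def perceive(sound_features, detectors, gains, noise_sd=0.15):
    # Every detector is always computing; the context only scales its
    # sensitivity (gain). The percept is whichever gained output is
    # largest on this occasion.
    responses = {}
    for label, canonical in detectors.items():
        similarity = sum(s * c for s, c in zip(sound_features, canonical))
        responses[label] = gains[label] * similarity + random.gauss(0.0, noise_sd)
    return max(responses, key=responses.get)

detectors = {"word": [1.0, 0.2, 0.1], "nonsense": [0.6, 0.6, 0.6]}
familiar_word = [1.0, 0.2, 0.1]
trials = 1000

# A single context signal (0 = primed for English only, 1 = primed for
# nonsense) raises one gain while lowering the other.
for context in (0.0, 0.3, 1.0):
    gains = {"word": 1.0 - 0.4 * context, "nonsense": 0.6 + 0.8 * context}
    n = sum(perceive(familiar_word, detectors, gains) == "nonsense"
            for _ in range(trials))
    print("context", context, ":", n, "nonsense percepts out of", trials)

With these numbers the proportion of nonsense percepts rises smoothly
with the context signal rather than switching abruptly between none
and all, which is the main contrast with the on/off picture sketched
earlier.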

I suspect that we haven't quite got the right story yet. It's a little hard to swallow the idea that for every possible different perception there is a perceptual input function designed just to detect that one perceptual variable.

Yes, perception psychologists also used to find that hard to swallow, but I seem to remember reading some years ago that neurophysiological research might suggest that it could be a correct idea, nevertheless. One might perhaps look up "grandmother cell" to see whether my vague memory is correct, and what the prevailing understanding is at the moment.

Martin

[From Rick Marken (2007.08.21.1600)]

Martin Taylor (2007.08.21.16.15)

My objection to Rick's approach was that he took the purpose to be
the generation of _responses_ that included nonsense syllables by the
group that was told that nonsense might appear in the repetitive
stream.

I probably was not clear. What I meant to say is that the two groups
had slightly different purposes, as evidenced by their behavior.
Exactly what those purposes were, I don't know, but it looks like one
group's purpose was to report both words and nonsense, while the other's
purpose was to report words, so this group would have to deal with
what sounded like nonsense as they saw fit. What the subjects' actual
purposes were, in the controlled-variable sense, I don't know, but,
given my control theory bias, I would imagine that all purposes
involved control of a perception that included the perception of the
words played on the tape loop. I don't imagine any person had the
purpose of generating responses; it's possible that some had, as one
of their purposes, controlling their perception of how often they said
that a sound was nonsense.

I don't have a reprint of the study here at home, but if I remember
to get one the next time I am at DRDC, I'll check the instructions.

I would appreciate it if you could scan it and send it to me. Thanks!

But that doesn't really matter; either
way, reporting exactly what they heard was supposed to be the purpose
they used in support of their purpose of pleasing the experimenter.
We can't tell whether that was in fact the purpose they used, but the
analysis suggests that it probably was.

Yes, of course, one of their purposes was to report what they heard.
Otherwise nothing would have happened in the experiment; the subjects
would have just sat there and done who knows what. You asked them to
adopt the purpose of reporting what they heard. Both groups heard
exactly the same thing but reported hearing different things. The only
difference between the groups was in what you said to them about
nonsense words. So it was probably this information that led the
groups to act differently. The only reason they would act differently
is because they adopted somewhat different goals. Different goals
means controlling different perceptions. So your conclusion from the
study, that the groups behaved differently because they perceived
differently is basically correct. But what is clear is that the groups
did carry out different purposes because they produced different
results under exactly the same circumstances.

Best

Rick


--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[Martin Taylor 2007.08.21.23.11]

[From Rick Marken (2007.08.21.1600)]

> Martin Taylor (2007.08.21.16.15)

> I don't have a reprint of the study here at home, but if I remember
> to get one the next time I am at DRDC, I'll check the instructions.

I would appreciate it if you could scan it and send it to me. Thanks!

If I remember to get it, I'll do that. (The caveat is non-trivial.)

But that doesn't really matter; either
way, reporting exactly what they heard was supposed to be the purpose
they used in support of their purpose of pleasing the experimenter.
We can't tell whether that was in fact the purpose they used, but the
analysis suggests that it probably was.

Yes, of course, one of their purposes was to report what they heard.
Otherwise nothing would have happened in the experiment; the subjects
would have just sat there and done who knows what. You asked them to
adopt the purpose of reporting what they heard. Both groups heard
exactly the same thing but reported hearing different things. The only
difference between the groups was in what you said to them about
nonsense words. So it was probably this information that led the
groups to act differently. The only reason they would act differently
is because they adopted somewhat different goals.

I'm not convinced of that, though it is a viable hypothesis. Since error in a control system is what leads to output, an alternative hypothesis is that the priming changed the perceptual functions and thereby the perceptual signals, and not the reference inputs. To alter the perceptual functions would not necessarily involve control at all. How would one distinguish the two possibilities?
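
One way to make the ambiguity concrete is a toy calculation (entirely
illustrative; it collapses the whole loop to a single noisy signal and
a report criterion, treats a change in the reference as a change in
the criterion for reporting, and uses made-up numbers). A raised gain
in a nonsense-detecting perceptual function and a lowered criterion
for saying "nonsense" can produce almost exactly the same observed
rate of nonsense reports:

import random

def nonsense_report_rate(pif_gain, report_criterion, trials=100000, noise_sd=0.15):
    # pif_gain stands in for "the priming changed the perceptual
    # function"; report_criterion stands in for "the priming changed
    # the reference/criterion for reporting". The observable is just
    # the fraction of trials on which "nonsense" would be reported.
    base_signal = 0.78  # made-up response of a nonsense PIF to a real word
    count = 0
    for _ in range(trials):
        signal = pif_gain * base_signal + random.gauss(0.0, noise_sd)
        if signal > report_criterion:
            count += 1
    return count / trials

print("raised PIF gain:        ", nonsense_report_rate(pif_gain=1.2, report_criterion=1.0))
print("lowered report criterion:", nonsense_report_rate(pif_gain=1.0, report_criterion=0.84))

Both manipulations yield roughly the same proportion of nonsense
reports (about a third with these numbers), which is why the output
data by themselves do not decide between the two accounts.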

Different goals
means controlling different perceptions.

You have to distinguish between changing which perceptions are controlled, and changing the reference levels at which the same perceptions are controlled. "Different goals" could mean either.

So your conclusion from the
study, that the groups behaved differently because they perceived
differently is basically correct. But what is clear is that the groups
did carry out different purposes because they produced different
results under exactly the same circumstances.

The circumstances were only the same insofar as the physical tapes they heard were the same. But "the circumstances" also include the context, and the two groups had been exposed to different things just before hearing the same tape loops. Therefore they were producing different results under different circumstances, if the context is included, as it normally should be.

Again, I see no obvious way of distinguishing the possibilities. I'm not saying you are wrong, just that you aren't demonstrably right.

Martin