Fw: media and violence

[From Rick Marken (2008.12.08.2240)]

Hi Paul

I think you sent this just to me, but I hope you don’t mind my copying my reply to CSGNet.

> This research is fine as sociology or policy research; but I think it’s worthless as an approach to understanding individual human behavior.

This is a strong statement, as I’m sure you are aware, and one that really undercuts a huge discipline.

I know. But this is not a uniquely PCT point of view. Several psychologists have pointed out what they thought were problems with studying individuals using statistical analysis of group data (David Bakan and B. F. Skinner come to mind). The fact of the matter is that group level statistical studies can tell us something about individuals only if 1) the causal (general linear) model of behavior is correct (IVs cause DVs) and 2) the effect of the IV on the DV is the same for all individuals. I think the poor performance of the general linear model (measured by r square or eta square values that are typically in the .3 range) suggests that 1) is not true. And I think there is certainly good anecdotal evidence that suggests that 2) is not true either (for example, showing an aggressive model to some kids leads them to be less rather than more aggressive).
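Condition 2 can be illustrated with a toy simulation (all numbers are invented for illustration, not taken from any actual study): give every individual a perfectly lawful, deterministic response to the IV, but let the sign and size of that response vary across individuals. The group-level r² comes out low even though there is no measurement noise at all.

```python
import random

random.seed(1)

# Toy population: each individual responds to the IV ("exposure")
# deterministically, but the sign and size of the response differ
# across individuals -- condition 2 above is violated.
n = 1000
slopes = [random.choice([1.0, 1.0, 1.0, -1.0]) * random.uniform(0.2, 1.0)
          for _ in range(n)]                       # ~25% respond in reverse

iv = [random.uniform(0.0, 1.0) for _ in range(n)]  # exposure level
dv = [b * x for b, x in zip(slopes, iv)]           # perfectly lawful, zero noise

def pearson_r(xs, ys):
    """Plain Pearson correlation."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(iv, dv)
print(f"group-level r^2 = {r * r:.3f}")  # small, even with no noise at all
```

Each simulated person here is completely predictable from their own slope; the low group-level r² reflects only the heterogeneity across people.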

I will assert that the media violence studies (and those linked to such studies – let’s go ahead and say the bulk of contemporary psychological research into the development of aggressive and violent behavior) are NOT “worthless as an approach to understanding human behavior.”

I agree. As I said they are not worthless as an approach to understanding the behavior of groups of humans; they are just worthless (well, “misleading” is probably a better term) as an approach to understanding individual human behavior.

There is a substantial literature in clinical psychology on “best practice” or “evidence-based” or “empirically validated” treatments… The best of these programs have had striking, significant and long-lasting intervention effects – take for example Scott Henggeler’s Multisystemic Therapy or Patty Chamberlain’s Multidimensional Treatment Foster Care. These are top shelf interventions built on strong foundations of psychological theory and research – and the findings from these programs have returned the favor by providing new insights into dynamic risk factors amenable to treatment.

Yes, at the group level there may be a benefit from using one treatment rather than another. But at the individual level there are probably many people who do worse with the treatment than they would without it.

Violent media won’t go away - we know this. The entertainment industry makes too much money from violent programming, and in any case people seem to want it. But, as I said in the interview, parents can and should control their children’s access to it.

I think you must be saying this because you think that violent programming causes aggressive behavior and you don’t want to see such behavior? I just don’t think violent programming can possibly cause violent behavior because behavior doesn’t work that way. People control what happens to themselves; what happens to them doesn’t control them. The fact that this is true is demonstrated by the many instances where exposure to violence leads to precisely the opposite of violent behavior.

> This is an individual level theory. And it’s an open loop causal theory, which is consistent with the general linear model of statistics that is used to test it. If the results of these statistical tests don’t produce consistently high r squared (goodness of fit) values then the model should be rejected. We have a better alternative: closed loop control theory.

That’s a fine assertion; as I’ve mentioned to David previously, I’m waiting for the empirical research to support this model with respect to children’s aggression.

I think there is plenty of evidence that the causal model of aggression is wrong. I think it would be pretty easy to get empirical evidence for the control theory model of aggression. According to PCT, whether or not exposing a child to an aggressive model produces aggressive behavior depends on the purposes of the child. To demonstrate that this is the case, I would do an experiment where I would ask a child to view a violent video with the purpose of either learning how to act or how not to act when another person comes in the room. Tell the child that he or she will get a prize if the correct action is taken when the person comes in. My prediction is that every kid who has the purpose of learning how to act from the video will act violently; every kid who has the purpose of learning how not to act from the video will not act violently. It should be 100%. This little piece of empirical research will show that it’s not violent media that causes violence; it’s the relationship of that violent media to a person’s purposes.

In the experimental research, exposure to violent media causes aggression.

It appears to cause it (statistically). PCT shows why it looks that way, and why the apparent causal connection is only statistical.

> model) is typically about .3 and rarely greater than .5. I think it’s about time that the basic model underlying research on media violence – indeed, the basic model underlying all research in psychology, which is the general linear model of statistics – is what is wrong and should be rejected. It’s time to try PCT.

This last point is where David and I have most of our discussion – if the PCT model cannot square with the general empirical model underlying social and behavioral science, there really is nothing I nor any of my colleagues could say regarding media violence research that would be convincing.

Which leaves us where we started!

I don’t understand this. But it doesn’t matter, really. I see you are an Assistant Professor. I think you’re best off staying the course with conventional psychology, at least until you get tenure. That’s basically what I did, though I left academia after getting tenure anyway. But thanks for the discussion; it has rekindled my interest in trying to write a book about doing research from a PCT perspective.

Oh, and I do have a paper coming out on this topic – in Review of General Psychology – in June 2009. I can send you a pre-publication copy if you’re interested.

Best regards

Rick

···

On Sun, Dec 7, 2008 at 4:44 PM, Paul Boxer pboxer@psychology.rutgers.edu wrote:

Thanks for the interesting thoughts on this…


Paul Boxer, PhD

Assistant Professor of Psychology,

Rutgers University

Adjunct Research Scientist,

Institute for Social Research,

University of Michigan

Mail: 101 Warren Street, Newark, NJ 07102

Phone: 973-353-3943

Fax: 973-353-1171

Email: pboxer@newark.rutgers.edu



Richard S. Marken PhD
rsmarken@gmail.com

[Goldstein (2008.12.09.0628 EST)]

Your research proposal was interesting, but …

The child could be given or not given a specific purpose. If the child is not given a purpose, will the actions of a child to another child walking in the room be the same after the different kinds of media? Could the effect of the media be to give a child a purpose? That looks interesting, let me try it?

Also, I doubt that your research proposal as it is stated will get past a human research committee.

Interviewing the child exposed to different kinds of media may be an acceptable alternative.

David

···

----- Original Message -----

From:
Richard Marken

To: CSGNET@LISTSERV.ILLINOIS.EDU

Sent: Tuesday, December 09, 2008 1:41 AM

Subject: Re: Fw: media and violence


[From Rick Marken (2008.12.09.1010)]

[Goldstein (2008.12.09.0628 EST)]

Your research proposal was interesting, but …

The child could be given or not given a specific purpose. If the child is not given a purpose, will the actions of a child to another child walking in the room be the same after the different kinds of media? Could the effect of the media be to give a child a purpose? That looks interesting, let me try it?

The thing about purposes is that they can’t be given to a person from the outside. People always walk into experiments with some kind of purpose; they just aren’t always the ones the experimenter wants them to have. That’s why people are given instructions in experiments – so that they will adopt something like the purposes the experimenter wants them to have. (This works under the assumption that people have the higher level purpose of wanting to please the experimenter; I know from experience that this is not always true;-) My description of my proposed experiment included operations – like offering a reward for following instructions – that were aimed at making sure that the kids adopt the purpose the experimenter wants them to adopt. But people – even kids – are autonomous systems in the sense that the set-points (references or purposes) of their control systems cannot be set from outside (as one can with a thermostat). My experiment is designed only to show that the effect of seeing violence depends on a person’s purposes; seen violence doesn’t cause violent behavior in the same way that, say, hitting a ball with a bat causes the ball to accelerate. What a kid (or grown-up) does after being exposed to violence depends on their purposes: what perceptual variables they are controlling that are in some way disturbed by the violence to which they are exposed.

If a child is not given a specific purpose regarding the violence they will be shown, then how they react to it depends on what purposes they have that are relevant to the violence; and we don’t know what those purposes are (unless we test for them) so we have no idea why the child reacts (or doesn’t react) in a particular way to the violence. My experiment is designed so that we have a pretty good idea what the kids’ purposes are – either to learn how to or how not to react to a person coming into the room.

I think it would be great if you would try this experiment; I’m sure you’ll run into all kinds of real world problems if you actually try to do it. That’s why I prefer to do the simple tracking experiments, which demonstrate the point without all the messy problems of ethics and whatnot. The tracking analog is to show that behavior (in this case, “corrective” mouse movements) in response to exposure to a deviation of a cursor from the target depends on the person’s purposes. If the person’s purpose is to keep the cursor aligned with the target then exposure to the cursor 1/2 inch to the left of the target will “cause” the person to “react” – moving the mouse to the right in order to get the cursor back on target. If, however, the person’s purpose is to keep the cursor 1/2 inch to the left of the target then exposure to the cursor 1/2 inch to the left of the target will “cause” no reaction at all. This is exactly analogous to the situation where exposure to violence sometimes does and sometimes doesn’t “cause” a violent reaction; it all depends on a person’s purpose. It’s easier to demonstrate this in a tracking situation because it doesn’t involve ethical questions, the relevant variables are more easily isolated and measured, and it’s easy for the subject to adopt the purpose suggested by the experimenter.
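The tracking analogy can be sketched as a toy closed-loop simulation (the gain, step count, and units are made up for illustration; this is not a model of any particular experiment). The same disturbance produces a corrective “reaction” or no reaction at all, depending only on the reference (the purpose):

```python
def track(reference, disturbance, steps=200, gain=5.0, dt=0.01):
    """One-dimensional proportional controller: the 'subject' moves the
    mouse to keep the perceived cursor-target distance at `reference`."""
    mouse = 0.0
    cursor = mouse + disturbance
    for _ in range(steps):
        cursor = mouse + disturbance   # cursor = mouse position + disturbance
        error = reference - cursor     # difference between purpose and perception
        mouse += gain * error * dt     # output opposes the error
    return cursor

# Purpose: cursor ON target. A -0.5 disturbance provokes a "reaction".
print(track(reference=0.0, disturbance=-0.5))    # cursor ends near 0.0

# Purpose: cursor 0.5 LEFT of target. Same disturbance, no reaction at all.
print(track(reference=-0.5, disturbance=-0.5))   # cursor stays at -0.5
```

In the second run the error is zero from the first step, so the mouse never moves: the disturbance “causes” nothing, exactly as described above.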

Also, I doubt that your research proposal as it is stated will get past a human research committee.

That’s for sure. Another reason why tracking studies are preferred. They illustrate the principles without all that messy complication.

Interviewing the child exposed to different kinds of media may be an acceptable alternative.

Yes, for sure.

Best

Rick

···

David


Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2008.12.09.0729 MST)]
[Note: the embedded figures cause funny jumps when scrolling. Use the up-down arrow keys to be sure of seeing all the text, or use the cursor in the scroll bar.]
Rick Marken (2008.12.08.2240) –
Rather than just generalizing by saying that group statistics are
“worthless” for understanding individual behavior, I would
prefer to see some sort of explanation of this conclusion. The binary
scale including only scores of 0 (worthless) and 1 (perfect) is
insufficient to give the true picture and makes counterexamples too easy
to find. On a continuous scale of 0 to 1, a score of 0.3 does not mean
“worthless.” A score of 0.0 is “worthless.”
The issue here is not whether we should try to see if exposure to violent
media results in an increase of violent behavior. Of course we should
try, and if there is an effect we should try to limit such exposures even
if (as is the case) we have no idea how this effect works. As David
Goldstein pointed out to me yesterday, the link between smoking and
cancer is well-established at the population level, and there is strong
justification for banning smoking in public places and making every
effort to discourage smoking by the young. Yes, this ban will
inconvenience people who would never have got cancer from smoking, but
since we don’t understand the mechanism and the effects are deadly, there
is justification for asking people to endure the inconvenience –
especially since they don’t know whether they’re at risk any better than
their doctors do.
A similar argument holds for banning violent media. Yes, this will
inconvenience people who want to see violent shows or sports and whose
own level of violent behavior will not be increased by doing so. But the
results of violent behavior are so bad for many individuals and for
society that the loss of one non-essential form of entertainment is
relatively unimportant. People who do not themselves tend toward violence
are harmed by people who do, so it’s in everyone’s self-interest to try
to reduce the population level of violence, rather than waiting
until we can predict which people are prone to it and using treatments
that affect only them.

I hope I have stated the case fairly, David G. and Paul B. I have not
said that statistical studies are worthless, and I have defended their
use in circumstances where we have no way of narrowing the effects of
countermeasures to selected individuals. I think that’s basically how
psychology, medicine, and other similar disciplines see themselves –
they employ statistics because it is the best way they know of to find
suggestions of relationships, and to find at least broadly-targeted means
of reducing the severity of problems within a population. There are
powerful statistical methods for teasing out relationships that would
otherwise pass unnoticed, and of course the most powerful methods are the
ones that should be used.

So much for Runkel’s “Casting Nets.”

What is hard for psychologists, doctors, sociologists, and so on to come
to grips with is that the exclusive use of this empirical-statistical
approach means that their discipline is still in a pre-scientific stage
of development. Everyone wants to feel that one’s chosen field is
admirable and highly advanced, state-of-the-art, a vast improvement over
what went before. There is, to cite Glasser, a strong sense of
belongingness that comes from seeing oneself as part of a large noble
enterprise. A criticism of one’s field of expertise is a criticism of
one’s Self, so one defends the field against critics just as one defends
oneself – even when the critics are right.

Unfortunately, this is an excellent way to put an end to progress in any
discipline. In this case it’s a good way of preventing psychology from
developing into a science as successful as physics and
chemistry.

Improbably or not, suppose we found that in some individuals, there is a
gene that causes young children to be highly susceptible to visual
displays of violence – say a gene that is responsible for the ability to
learn vicariously. People without this gene can watch any degree of
violence and be unchanged by it, while those who have it are certain to
learn new violent behaviors from the exposure.

With the ability to detect this gene in our toolkits, would it make any
sense to go on studying this phenomenon statistically? Would it make any
sense to act as if everyone in the population were susceptible, and ban
everyone from exposure to violent media? Since we could identify the
individuals who will become more violent, all we would need to do is keep
them from being exposed to violent media, while the rest can be exposed
to anything they please to experience.

This fanciful case shows that the power of the statistical method is
relative. Good statistical analysis is very powerful compared to bad
statistical analysis or none at all. But even the best statistical
analysis of population effects is a very limited and imprecise way of
predicting individual behavior in comparison with a method that depends
on measuring individual characteristics one person at a time. That’s what
Phil Runkel called “Testing Specimens.”

Suppose there were a way of determining whether any individual, child or
adult, was susceptible to bad effects from exposure to violent media. I
proposed finding a gene (very unlikely), but there are many other
possibilities. One simple way would be to watch an individual
continuously and simply look at the correlation between exposures to
violent media and subsequent violent behavior by that individual.
Individuals who show a very high correlation could then easily be
distinguished from those who show a very low correlation, and a cutoff
point could be set to determine if that person was permitted to see
violent media.

This is still a statistical approach; the difference is that the
statistics are applied within individuals instead of across individuals.
If the correlations within observations of one individual were high
enough – say 0.95 or higher – we would probably stop referring to the
tests as “statistical analysis” and start speaking of them as
measurements of properties, with some small measurement error. We would
say that we measure each person’s degree of susceptibility to violent
media, and protect all those with more than some particular degree of
it.
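The within-individual approach described above can be sketched numerically (the “susceptibility” slope, number of occasions, and noise level are all hypothetical, purely for illustration): observe one person across many occasions and correlate exposure with subsequent behavior within that person.

```python
import random

random.seed(2)

def pearson_r(xs, ys):
    """Plain Pearson correlation."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def within_person_r(susceptible, occasions=200, noise_sd=0.1):
    """Correlate exposure with subsequent behavior WITHIN one observed person."""
    exposure = [random.random() for _ in range(occasions)]
    slope = 1.0 if susceptible else 0.0   # hypothetical susceptibility
    behavior = [slope * e + random.gauss(0.0, noise_sd) for e in exposure]
    return pearson_r(exposure, behavior)

r_susceptible = within_person_r(True)   # high: behaves like a measured property
r_immune = within_person_r(False)       # near zero: no within-person relationship
print(f"susceptible: r = {r_susceptible:.2f}, immune: r = {r_immune:.2f}")
```

With a high within-person correlation the test starts to look like a measurement with small error, which is the shift in status the next paragraph describes; a cutoff on r then separates the two simulated individuals cleanly.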

Why don’t all life-scientists use such individual measurements?
Measurement of individuals would clearly be immensely more powerful for
predicting individual behavior than even the best statistical studies of
populations.

There is a very clear reason why they don’t use individual measurements:
they don’t have any measures that are good enough. And this being CSGnet,
I will say why: because they don’t have the right theory of behavioral
organization. This leads to another reason which is purely economic. We
can’t assign one researcher exclusively to watch each subject in an
experiment with a large population. The exhaustive empirical approach is
extremely wasteful of resources and time. We can’t screen all 300 million
people in this country, or all 6 or 7 billion on Earth, for
susceptibility to violent media. We can’t do this even for a decent
fraction of them, say 10%, or even 1%.

James Clerk Maxwell, among others, said “there is nothing so
practical as a good theory.” What makes a theory practical, when
it’s a good one, is that it improves our ability to predict phenomena by
orders of magnitude over the population-statistics method. It improves
prediction so much that on the basis of experiments with just a few
subjects, we can determine properties that are common to the vast
majority of people, with facts having p < 0.0000000001. The models we
have of tracking behavior achieve that level of confidence or better. All
of the facts we have so far about human control behavior come from data
with probable errors of prediction less than 5 or 10 percent. See this
table (from the Handbook of Chemistry and Physics):

[image: table of probabilities versus standard deviations, from the Handbook of Chemistry and Physics; not preserved in this archive]

Here is the lower right portion magnified a bit:

[image: the lower-right portion of the same table, magnified; not preserved in this archive]

If the standard deviation is slightly less than 50% of the average value
of the predicted variable (standard deviation = 2.0 in magnified table),
the odds against this agreement being due to chance are 20:1, the usual
standard for accepting a relationship as significant (p < 0.05). If we
could somehow reduce the standard deviation from the currently-accepted
50% down to 20% of the value of the predicted variable (see entries in
right-hand table for s.d. = 5.0), the odds against the fit being due to
chance rise to 1.7 million to one, and the confidence level is p <
0.00000057. Therefore, if we could make only a 60% reduction in the size
of the acceptable standard deviation relative to the measurement, we
would change the status of a measurement from a probability to a
certainty by anyone’s standards.

This would still leave us far short of the standards of physics, where
prediction errors in beginning physics laboratory courses are expected to
be around 3%, or roughly 1/30 of the size of the measurement. The mere
existence of a systematic relationship, at that level of error, is not
even in dispute: imagine what the entries in the table would look like
for 30.0 standard deviations.
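The probabilities cited from the table can be recomputed directly from the normal distribution, without the table itself. This sketch uses the complementary error function to get the two-tailed probability that a deviation of z standard deviations arises by chance; the z = 2.0 and z = 5.0 values reproduce the roughly 20:1 and 1.7-million-to-one odds given above.

```python
import math

def two_tailed_p(z):
    """P(|X| >= z standard deviations) for a normally distributed X."""
    return math.erfc(z / math.sqrt(2))

for z in (2.0, 5.0, 30.0):
    print(f"z = {z:4.1f}   two-tailed p = {two_tailed_p(z):.2e}")

# z = 2.0 gives p ~ 4.6e-02, roughly the usual 20:1 odds (p < 0.05);
# z = 5.0 gives p ~ 5.7e-07, about 1.7 million to one;
# z = 30.0 gives a probability too small to be worth disputing.
```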

In PCT tracking experiments we regularly reach errors of fit between
model and data of 5% of the range of the predicted variable, which
corresponds roughly to 20 standard deviations in the table. Clearly there
can be no doubt that the model’s behavior is systematically related to
the real behavior. Using statistics at all in that situation would be
ludicrous.

This is what I mean by bringing psychology up to the standards of the
physical sciences. It means reducing standard deviations of predictions
relative to measurements by about 60%. That’s all. I should think the
resulting qualitative change in the science of psychology would be well
worth the effort.

I don’t have access to the papers on violent media, so someone else will
have to look up the relevant data to see where it fits into this
discussion and the table above. Basically, my question is “what is
the probability that a given person exposed to violent media will show an
increase in violent behavior?” I leave it to the experts to
translate that into the appropriate terms for the studies in
question.

Best,

Bill P.

[From Rick Marken (2008.12.09.1500)]

Bill Powers (2008.12.09.0729 MST)–

Rick Marken (2008.12.08.2240) –

Rather than just generalizing by saying that group statistics are
“worthless” for understanding individual behavior, I would
prefer to see some sort of explanation of this conclusion.

I did explain it; same explanation as yours. It’s “worthless” (I revised it to “misleading”) for studying individuals but it’s just fine for studying groups.

The issue here is not whether we should try to see if exposure to violent
media results in an increase of violent behavior. Of course we should
try, and if there is an effect we should try to limit such exposures even
if (as is the case) we have no idea how this effect works.

Exactly what I said, though I would limit it only if the costs of not limiting it outweigh the costs of limiting it. Based on the evidence I am in favor of limiting distribution of violent material using warning labels, not allowing sales to minors, and limiting broadcast to pay-per-view cable.

I hope I have stated the case fairly, David G. and Paul B. I have not
said that statistical studies are worthless

Nor did I. I said that group studies are worthless (revised to “misleading”) as an approach to studying individuals. Statistics really has nothing to do with it.

This fanciful case shows that the power of the statistical method is
relative. Good statistical analysis is very powerful compared to bad
statistical analysis or none at all. But even the best statistical
analysis of population effects is a very limited and imprecise way of
predicting individual behavior in comparison with a method that depends on measuring individual characteristics one person at a time.

Yes, that’s what I said.

There is a very clear reason why they don’t use individual measurements:
they don’t have any measures that are good enough. And this being CSGnet,
I will say why: because they don’t have the right theory of behavioral
organization.

Gosh, I think I said that too.

I don’t have access to the papers on violent media, so someone else will
have to look up the relevant data to see where it fits into this
discussion and the table above. Basically, my question is “what is
the probability that a given person exposed to violent media will show an
increase in violent behavior?” I leave it to the experts to
translate that into the appropriate terms for the studies in
question.

Same point as I made except that I suggested evaluating the results in terms of r2, the proportion of variance in the DV explained by the IV. I would like to see the results of the studies on aggression myself but based on my survey of psychological research results, the r2 values in this research are not likely to be much higher than .3 – not more than 1/3 of the variance in aggression scores is accounted for by the variance in the aggression level viewed.

Paul is not on CSGNet so I think it would be nice to copy these posts to him. I have one from him that I will reply to in a bit and post to the net.

Best

Rick

···



Richard S. Marken PhD

rsmarken@gmail.com

[From Bill Powers (2008.12.09.1833 MST)]

Rick Marken (2008.12.09.1500) --

I did explain it; same explanation as yours.

Exactly what I said, though I would limit it only if the costs of not limiting it outweigh the costs of limiting it.

Nor did I. I said that group studies are worthless (revised to "misleading") as an approach to studying individuals. Statistics really has nothing to do with it.

Yes, that's what I said.

Gosh, I think I said that too.

Same point as I made except that I suggested evaluating the results in terms of r2,

I know, it gets annoying to have someone else come up with exactly the same analysis in PCT terms. On the other hand, it's kind of reassuring, too.

Do you want to make the point about needing a model to test explanations of those economic games Ted Cloak is describing, or should I do it? Go ahead if yours is ready. I'm sure it will be the same as mine.

Best,

Bill P.

[From Rick Marken (2008.12.09.1850)]

Bill Powers (2008.12.09.1833 MST)–

Rick Marken (2008.12.09.1500) –

I did explain it; same explanation as yours.

I know, it gets annoying to have someone else come up with exactly the same analysis in PCT terms. On the other hand, it’s kind of reassuring, too.

Very. I only responded because I thought you were chiding me about my reply. I’m very sensitive (high gain) about this topic (methodology), especially because I am all juiced again about writing a book on the topic. I didn’t want to get my parade rained on – especially by you;-)

Do you want to make the point about needing a model to test explanations of those economic games Ted Cloak is describing, or should I do it? Go ahead if yours is ready. I’m sure it will be the same as mine.

No, it’s all yours!!

Thanks

Rick



Richard S. Marken PhD
rsmarken@gmail.com

[From Rick Marken (2008.12.09.2150)]

Hi Paul

I’m copying to CSGNet again. I think it would be worth it to join because there is at least one other person involved in the discussion (Bill Powers). You can join by going to

https://listserv.illinois.edu/wa.cgi?SUBED1=csgnet&A=1

It’s easy to leave, too, if it starts to interfere with your life (which it can do;-)).

It seems to me like a big part of the problem that PCT has with mainstream approaches to studying psychological phenomena is linked to the statistical and experimental methodology applied. Specifically, the practice of inferring meaningful phenomena through the examination of group-level scores on measured constructs.

Yes. Although I have nothing against statistics per se (heck, I teach the undergraduate statistics course at UCLA). It’s just that the statistical tests psychologists use are based on the general linear model, which I believe is clearly the wrong model of behavior (since it accounts for so little of the variance in the behavior observed in experiments).
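
A quick arithmetic sketch of what an r2 around .3 means for predicting any one person (my own illustration, not a computation from any specific study): after conditioning on the IV, the standard error of predicting an individual’s score shrinks only by the factor sqrt(1 − r2).

```python
import math

r_squared = 0.3  # a typical value for this literature, assumed here for illustration

# Prediction error for an individual, expressed as a fraction of the DV's
# standard deviation, after using the IV as a predictor:
see_fraction = math.sqrt(1 - r_squared)
print(round(see_fraction, 2))  # 0.84
```

So even a “strong” result by the field’s standards leaves individual-level prediction with about 84% of the error you would have by ignoring the IV and just guessing the mean.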

So, an experiment showing that a group of children exposed to a violent tv show behaves more aggressively than does a group of children exposed to a nonviolent tv show is inadequate because the statistical test is based on a summary of each group’s performance (i.e., the group means). Therefore we can say nothing about individual behavior as a function of violent tv exposure because there could be great variability in individual responses that is blurred by the computation of group means.

Yes, that’s the problem with trying to use group data to infer the psychology of the individuals in the group. And it’s not just the variability that creates a problem. As Bill Powers has shown (in an article called “Control Theory and Statistical Generalizations”, American Behavioral Scientist, v. 34, Sept/Oct 1990, pp 24-31) relationships between variables at the group level (such as a positive relationship between reward and effort) can be the opposite of the relationship at the individual level (such as a negative relationship between reward and effort). It’s one of Powers’ shortest but sweetest efforts.
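
The kind of reversal Powers describes can be sketched in a few lines. This is a toy dataset of my own construction (not the one from the article): each person’s effort goes down as their reward goes up, yet the pooled group data show a positive reward–effort relationship because high-effort people also happen to receive more reward.

```python
import random

random.seed(1)

# Build 50 simulated people. Within each person, effort FALLS as reward
# rises (slope -0.5); across people, higher-baseline people get more reward.
people = []
for _ in range(50):
    baseline = random.uniform(0, 10)
    rewards = [baseline + t for t in range(5)]
    efforts = [2 * baseline - 0.5 * r for r in rewards]
    people.append(list(zip(rewards, efforts)))

def slope(pairs):
    """Ordinary least-squares slope of effort on reward."""
    n = len(pairs)
    mx = sum(r for r, _ in pairs) / n
    my = sum(e for _, e in pairs) / n
    num = sum((r - mx) * (e - my) for r, e in pairs)
    den = sum((r - mx) ** 2 for r, _ in pairs)
    return num / den

pooled = [pt for person in people for pt in person]
print(round(slope(pooled), 2))     # positive: at the group level, reward appears to increase effort
print(round(slope(people[0]), 2))  # -0.5: for any one individual, reward decreases effort
```

The group-level relationship is not just imprecise about individuals; it has the opposite sign.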

Well, I agree with that. Group means are hardly sufficient for inferring meaningful differences among the members of the groups. So we do a few other things to take this issue into account: examine variance heterogeneity across groups; estimate confidence intervals for the means generated by the statistical model; examine outliers; and – probably of greatest interest to this group – examine plots of actual scores produced by the individuals.

None of this would help you determine the nature of the individuals that make up the group. None of these things tells you what you need to know to determine whether the group data is a valid reflection of individual characteristics. What you need to know is whether or not the effect of the IV on the DV is the same for each individual. Indeed, the statistics that are used to analyze group data assume that this is true: that the effect of the IV on the DV is the same for all subjects. You can’t use statistics to determine whether that assumption is true. What you have to do is test each individual in a single-subject experiment to determine the effect of the IV on the DV for each person. This, of course, is rarely, if ever, done.
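
A made-up numerical example of why this matters (the scores are invented, purely for illustration): a clear group mean effect can coexist with a substantial fraction of individuals for whom the effect runs the other way, and the group statistic alone cannot distinguish the two cases.

```python
# Invented pre/post change-in-aggression scores for ten "exposed" subjects.
exposed_change = [3, 3, 3, 3, 3, 3, -2, -2, -2, -2]

mean_change = sum(exposed_change) / len(exposed_change)
print(mean_change)  # 1.0 -> the group-level "effect" of exposure is positive

increased = sum(1 for c in exposed_change if c > 0)
decreased = sum(1 for c in exposed_change if c < 0)
print(increased, decreased)  # 6 4 -> the effect is anything but uniform
```

The same mean of +1.0 would also result from every subject changing by exactly +1; only per-individual testing tells these situations apart.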

If all – ALL – the kids in the violent media exposure group have higher aggression scores (regardless of the specific aggression measure, and they are legion, and this is a separate issue) than do the kids in the nonviolent media group, would you conclude that violent tv exposure causes, or is linked to, or enhances risk for, aggressive behavior?

No, causation is another matter. The general linear model assumes that, in a properly controlled experiment, a statistically significant relationship between IV and DV means that variations in the IV are the cause of variations in the DV. But PCT shows that, if the system under study is closed loop (rather than open loop, as assumed by the general linear model) an apparent causal relationship between IV and DV is actually an illusion (the behavioral illusion). This is really the killer for conventional psychology. I describe it briefly in my paper, which I will send to you in a separate e-mail.

One could extend this example to treatment – i.e., if everyone in the target treatment condition improved more than everyone in the control condition, would the treatment be seen as effective?

Yes. Which is all we need to know for policy purposes. We don’t have to know why the treatment has an effect.

My point is that the general complaint about group-targeted methods can easily be addressed via the inspection of individual scores.

Only if those scores are inspected in a single subject experiment aimed at determining the effect of the IV on the DV for that individual. Looking at individual scores in a completely randomized or even a within subjects design won’t tell you anything about possible differential effects of the IV on the DV for different individuals.

Yet I would argue emphatically that when a problem has been tackled in a number of different ways (experimental lab studies; field experiments; correlational and quasi-experimental designs; prospective cohort studies) by a number of different researchers working within different disciplines (developmental and social psychology; sociology; economics; public health) and yet the results all converge on the same conclusion, one really can and should put some serious credence in the findings.

Sure. I think we just differ in our creeds. For example, I put credence in studies that show a strong beneficial effect of reduced classroom size on educational outcomes. So I’m for paying for more teachers so that we have more and smaller classes. But I don’t put credence in the idea that such studies tell me anything about how children learn.

This is what has been done with the study of violent media effects. There are extensive meta-analytic reviews that confirm the impact of violent media on aggressive behavior. The observation has been stable over time, across labs, and across disciplines. Thousands of studies with disconfirming evidence would have to be done to dampen the meta-analytic effect.

Right, that’s good group level data. Good for deciding on policies that affect groups. They just don’t tell you much about why exposure to violent media leads some people to act violently, leads others to act non-violently and has no influence at all on still others.

So I can accept in the main the notion that PCT might offer a different, better view on some internal cognitive mechanisms linking contextual inputs to behavioral outputs, but the extant literature on media violence is, in my view and the views of my senior colleagues, unimpeachable.

I’m not impeaching; I’ve done policy research and I think it’s just fine. It’s just not the kind of research that helps us understand human nature, which is what I want to understand.

Now onto specifics.

what happens to them doesn’t control them. The fact that this is true is demonstrated by the many instances where exposure to violence leads to precisely the opposite of violent behavior.

By “the opposite of violent behavior,” do you mean prosocial behavior? I assume then you are referring, for example, to the widespread drive to volunteerism that was prompted by 9/11? Or similar events?

Sure, that’s an example. But I know that when I was first exposed to violent media (at about age 3 or so; TV was new and I remember seeing people shooting each other in a Western) my first thought was “gosh, aren’t people embarrassed to be behaving that way.” For me, violent media is a model for how not to behave (same was true of my kids, by the way).

These are extreme examples (case studies, really) that I won’t try to argue. All that can really be said here is that in the lab, where it is ethical only to use violent media and not “real” violence exposure, this does not happen.

How do you know? I can’t believe that there has never been a kid like me (or my kids) as a subject in that research.

My prediction is that every kid who has the purpose of learning how to act from the video will act violently; every kid who has the purpose of learning how not to act from the video will not act violently. It should be 100%. This little piece of empirical research will show that it’s not violent media that causes violence; it’s the relationship of that violent media to a person’s purposes.

Well, it’s the mediation of the violent media exposure on actual behavior by cognitive processes. The theory is pretty clear that a number of internal cognitive structures can moderate or mediate the impact of contextual violence on behavior. No one has ever advocated a straight S-R model of violent media effects. What I would propose though is that the kids in the “how not to behave” condition would still be aroused by the violent media and have to suppress the cognitive activation associated with that arousal in order to obtain their goal. I also would propose that your experimental results would vary greatly as a function of the age of the child given that younger children would have a harder time sustaining focus on a longer-term goal. Either way, the results would demonstrate more that kids are able to behave in accordance with a desired reward even in the presence of stimuli that could push them away from that reward than they would anything meaningful about media violence per se. A nice test of operant theory, by the way.

As I said to your father-in-law, you don’t really even need to do this experiment to show that the apparent effect of environmental “stimuli” (like violent media) on behavior (like aggression) depends on purposes (a specific kind of cognitive variable that makes it possible to predict exactly what the apparent effect of stimulus on response will be). It can be demonstrated in a simple tracking task. But if this “mediation” and “operant theory” explanation satisfies you then I’m kind of at a loss. Could you explain how the mediation works? How do you know what the effect of the mediation will be? Does the nature of the mediation differ across individuals? Can you predict the effect of the mediation on the behavior of each individual?

Oh, and I do have a paper coming out on this topic – in Review of General Psychology – in June 2009. I can send you a pre-publication copy if you’re interested.

Sure thing – send it along.

It will be coming along shortly.

Best regards

Rick


Paul


Paul Boxer, PhD

Assistant Professor of Psychology,

Rutgers University

Adjunct Research Scientist,

Institute for Social Research,

University of Michigan

Mail: 101 Warren Street, Newark, NJ 07102

Phone: 973-353-3943

Fax: 973-353-1171

Email: pboxer@newark.rutgers.edu



Richard S. Marken PhD
rsmarken@gmail.com