statistical predictions

[From Bill Powers (2007.07.29.1422 MDT)]

Actually I'm still in Minneapolis waiting to come home (tomorrow) from the CSG meeting. Rick and I have talked several times about the misclassification problem, if that's the right thing to call it, and are agreed that we want to be sure we get it right before reaching any conclusions.

I'll take the "preventing fatal heart attacks" theme as the basis for constructing a thought-experiment. Here there is no question of getting multiple determinations for each individual (as Martin T. mentioned), since we die only once.

Given: a population with a certain incidence of fatal heart attacks per year, say K per capita per year. In a sample of N individuals, K*N are expected to die of a heart attack each year.

Now suppose there is a treatment that is hypothesized to have either of two effects:

1. It actually reduces the chance of a fatal heart attack by 10% for every individual in the population. That is, if the whole population is given the treatment, the incidence of heart attacks will become 0.9*K*N per year.

2. It actually reduces the incidence of heart attacks by 100% for 10% of the individuals, and has no effect on the incidence of heart attacks in the other 90% of the population. In this case, too, the incidence of heart attacks will become 0.9*K*N per year.

I claim that if the treatment is given to the whole population, these two cases are statistically indistinguishable. All we can say is that K*N of the population will die without the treatment, and that 0.9*K*N will die with it. We can't tell if this equation should be written (0.9*K)*N, 90% of the previous risk for 100% of the population, or K*(0.9*N), 100 percent of the previous risk for 90% of the population.

If my conjecture is right, it opens the door to a continuum of statistically indistinguishable cases in which a treatment appears to provide a B% reduction of risk to individuals in a population where N% of the members are at risk. Is this the result of a B% reduction in risk for the whole population, or of a 100% reduction in risk for B% of the population?
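
To make the claim concrete, here is a minimal simulation sketch of the two cases (a sketch only; K and N are illustrative values, not data from any study). Note that in case 2, before we know which subgroup an individual belongs to, he dies with probability 0.9*K, exactly as in case 1, so the yearly death counts have the same distribution:

    # Minimal sketch: simulate one year under each hypothesized effect.
    # K, N, and the 10% figures are the illustrative values from the text.
    import random

    N = 100_000   # sample size
    K = 0.01      # baseline incidence of fatal heart attacks per capita per year

    def deaths_case1():
        """Case 1: every individual's risk is reduced by 10%."""
        return sum(random.random() < 0.9 * K for _ in range(N))

    def deaths_case2():
        """Case 2: 10% of individuals fully protected, 90% unaffected."""
        return sum(random.random() < (0.0 if random.random() < 0.1 else K)
                   for _ in range(N))

    # Both counts scatter around 0.9*K*N = 900, with identical distributions,
    # so a single yearly death count cannot separate the cases.
    print(deaths_case1(), deaths_case2(), 0.9 * K * N)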

It is also my contention that the most likely case is that a treatment will benefit some proportion of the population for physical/physiological reasons, and will not benefit the rest at all for similar reasons. Problems are not caused by the chances of things happening, but by specific physical effects that stem from regular relationships among variables. The appearance of stochastic effects arises primarily from errors of measurement, from the use of incorrect models, and from lack of knowledge.

Best.

Bill P.

···


[Martin Taylor 2007.07.30.01.01]

[From Bill Powers (2007.07.29.1422 MDT)]

Now suppose there is a treatment that is hypothesized to have either of two effects:

1. It actually reduces the chance of a fatal heart attack by 10% for every individual in the population. That is, if the whole population is given the treatment, the incidence of heart attacks will become 0.9*K*N per year.

2. It actually reduces the incidence of heart attacks by 100% for 10% of the individuals, and has no effect on the incidence of heart attacks in the other 90% of the population. In this case, too, the incidence of heart attacks will become 0.9*K*N per year.

I claim that if the treatment is given to the whole population, these two cases are statistically indistinguishable.

They must be indistinguishable, and not just statistically, since you have only one degree of freedom for measurement. You are asking a two degree-of-freedom question. You need at least two numbers to distinguish the cases, not one.

In real life studies, usually more than one number is obtained, and if appropriate numbers are found, then you might be able to distinguish the two cases. But from first principles, you can't when you have only one number to work with.

Suppose you gave different levels of the treatment and you had another measure besides the binary choice of dead or alive (counting mild heart attacks at different levels of severity, for example). Then you have the chance of getting scattergrams like the ones I posted for your simulation question on this topic. Such scattergram shapes allow you to distinguish the two cases. There are probably better ways, but whether there are or not, you can't do it with a one-degree-of-freedom measure.
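
As a rough sketch of how such a design could separate the cases (the severity model, slopes, and noise level here are invented for illustration, not taken from the simulations I posted):

    # Sketch: graded severity measured at varying treatment levels.
    import random

    def sample(case, n=2000):
        """Return (dose, severity) points under hypothesized case 1 or 2."""
        points = []
        for _ in range(n):
            dose = random.random()                 # treatment level in [0, 1]
            if case == 1:
                slope = 0.1                        # weak effect in everyone
            else:
                slope = 1.0 if random.random() < 0.1 else 0.0  # strong in 10%
            points.append((dose, 1.0 - slope * dose + random.gauss(0, 0.05)))
        return points

    # A scattergram of sample(1) is one cloud with a shallow downward slope;
    # sample(2) is a flat majority band plus a steep minority band. The
    # shapes differ, which is what the extra degree of freedom buys you.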

···

---------------------------------

There's a separate issue implicit in your statement of the question, which is that if nobody -- patients or doctors -- has any other information about the value of the treatment for a given patient, then the two cases are indistinguishable in practice.

Look at it the way you look at a control system: there are different points of view. An omniscient observer would be able to see the difference between the cases, just as an omniscient observer can see the disturbance signal separately from the output signal and separately from the perceptual signal. Nothing in the control system can do that. Likewise, in the situation you propose, nobody in the system can know any more than that if a particular patient takes the treatment, that specific patient is 10% less likely to have a fatal heart attack in the next year.

The only time the two cases differ in practice is if some second measure is available to the doctor or the patient that allows some distinction to be made between the two-population and the one-population cases, and allows them to assess to which population the patient is likely to belong in the two-population case.

Martin

[From Rick Marken (2007.07.30.0920)]

Bill Powers (2007.07.29.1422 MDT)

Rick and I have talked several times about the
misclassification problem, if that's the right thing to call it, and
are agreed that we want to be sure we get it right before reaching
any conclusions

Good thing you're doing this because what you describe here is not
what I've been working on;-)

I'll take the "preventing fatal heart attacks" theme as the basis for
constructing a thought-experiment. Here there is no question of
getting multiple determinations for each individual (as Martin T.
mentioned), since we die only once.

Unless you are a coward, in which case it can apparently go up to 1000;-)

Given: a population with a certain incidence of fatal heart attacks
per year, say K per capita per year. In a sample of N individuals,
K*N are expected to die of a heart attack each year.

Now suppose there is a treatment that is hypothesized to have either
of two effects:

1. It actually reduces the chance of a fatal heart attack by 10% for
every individual in the population. That is, if the whole population
is given the treatment, the incidence of heart attacks will become
0.9*K*N per year.

2. It actually reduces the incidence of heart attacks by 100% for 10%
of the individuals, and has no effect on the incidence of heart
attacks in the other 90% of the population. In this case, too, the
incidence of heart attacks will become 0.9*K*N per year.

I claim that if the treatment is given to the whole population, these
two cases are statistically indistinguishable.

Agreed. No simulation necessary for that.

All we can say is
that K*N of the population will die without the treatment, and that
0.9*K*N will die with it. We can't tell if this equation should be
written (0.9*K)*N, 90% of the previous risk for 100% of the
population, or K*(0.9*N), 100 percent of the previous risk for
90% of the population.

If my conjecture is right, it opens the door to a continuum of
statistically indistinguishable cases in which a treatment appears to
provide a B% reduction of risk to individuals in a population where
N% of the members are at risk. Is this the result of a B% reduction
in risk for the whole population, or of a 100% reduction in risk for
B% of the population?

I think your conjecture is right. All we know is that the treatment
changes the group level statistics; the effect on the individual is
(from my perspective) completely unknown. The only reason for an
individual to take the drug in this case is for the sake of society.
If everyone takes the drug, the cost to society, in terms of fatal
heart attacks, goes down. I would say that the treatment should only
be given to individuals in the group if it is also known that the risk
of adverse side effects from taking the drug is small. How small is a
matter of social judgment and agreement. This is the same situation
that occurs with immunization. We give all children polio vaccine even
though it is not going to prevent many individuals from getting polio
(only those exposed to it, which is virtually no one anymore). But the
cost (financial and health) of taking the vaccine is considered so
small that we go ahead and vaccinate everyone.

It is also my contention that the most likely case is that a
treatment will benefit some proportion of the population for
physical/physiological reasons, and will not benefit the rest at all
for similar reasons. Problems are not caused by the chances of things
happening, but by specific physical effects that stem from regular
relationships among variables. The appearance of stochastic effects
arises primarily from errors of measurement, from the use of
incorrect models, and from lack of knowlege.

If this is what we're looking into then I think there is no argument.
What I think you are arguing against is the use of group level data to
make recommendations to individuals. The drug company that says, based
on group level data, that their drug will reduce your chances of
having a heart attack, is lying, just like psychologists who say,
based on group level data, that watching aggression on TV will
increase the aggressiveness of your child, are lying. This is not a
statistical problem; it is simply a problem (and it is a very big and
very pervasive problem) of using group level data as the basis for
making recommendations for individuals.

This discussion started in the context of what would be the best
approach to financing health care in the US. I think the group data
are overwhelmingly in favor of a single payer system, which will cost
less (due to lower administrative costs) and provide better outcomes
(due to greater access). But there are going to be individuals for
whom this may not be the best system. Whether single payer is adopted
or not depends on what will work out best at the group level with an
acceptably low cost at the individual level. I think that is where the
political argument will focus now. The group data are overwhelmingly in
favor of single payer, in terms of group level cost and outcomes. But
adopting a single payer system will create some pain for a number of
individuals, particularly individuals who run and/or own stock in
private health insurance providers.

What I have been looking at is (to me) a much more interesting
question; that is the question of how individual prediction based on
regression analysis might be screwed up by differences between
individuals in terms of the "behavioral law" relating criterion to
predictor. Using your fatal heart attack example, the problem would be
to find an indicator (a predictor) of who in the population might
benefit from the drug. The goal is to classify people as those who
would benefit vs those who would not. I want to know how the
effectiveness of a predictor is affected by differences in the way
individuals actually respond to the treatment depending on their
standing on the predictor.
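
Here is a toy sketch of the kind of setup I have in mind; the "behavioral laws", the predictor cutoff, and the population mixtures are all invented purely for illustration:

    # Sketch: classification accuracy of a predictor when individuals
    # follow different hypothetical "behavioral laws".
    import random

    def benefit(x, law):
        """Hypothetical benefit as a function of predictor score x in [0, 1]."""
        if law == "linear":
            return 0.5 * x
        if law == "threshold":
            return 1.0 if x > 0.7 else 0.0
        return 0.0                               # "none": no benefit at any x

    def accuracy(mix, n=10_000, cutoff=0.5):
        """Classify 'will benefit' as x > cutoff; score against actual benefit."""
        laws, weights = zip(*mix.items())
        correct = 0
        for _ in range(n):
            x = random.random()
            law = random.choices(laws, weights=weights)[0]
            correct += (x > cutoff) == (benefit(x, law) > 0)
        return correct / n

    print(accuracy({"threshold": 1.0}))                              # ~0.80
    print(accuracy({"threshold": 0.4, "linear": 0.3, "none": 0.3}))  # ~0.62

The same predictor that classifies a homogeneous population reasonably well degrades once the laws are mixed, even though group-level statistics would still show an average benefit.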

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Kenny Kitzke (2007.07.29)]

<Bill Powers (2007.07.29.1422 MDT)>

<Actually I’m still in Minneapolis waiting to come home (tomorrow)
from the CSG meeting. Rick and I have talked several times about the
misclassification problem, if that’s the right thing to call it, and
are agreed that we want to be sure we get it right before reaching
any conclusions.

I’ll take the “preventing fatal heart attacks” theme as the basis for
constructing a thought-experiment. Here there is no question of
getting multiple determinations for each individual (as Martin T.
mentioned), since we die only once.>

I made it back from Minneapolis without a hitch and hope you did also. I am going to make some comments on your thought-experiment. They may help, but only you can assess that. 8-)

<Given: a population with a certain incidence of fatal heart attacks
per year, say K per capita per year. In a sample of N individuals,
K*N are expected to die of a heart attack each year.>

First, K is an historic average of a population (usually you show a population average with a capital letter such as X with a bar on top). What K will be next year is acknowledged to be a probability prediction, usually using a confidence interval. So, assuming the heart attack phenomenon is not fundamentally altered next year by an epidemic, you would say something like, "It is 90% sure that the actual deaths next year will be K + or - say 0.06." What the confidence level and the interval are is rigorously determined from the historic distribution of the annual K's.

Statistical laws are powerful like mathematical laws but they are different in nature. In mathematics, the equation y = x + 4 makes the true statement that for every x there is one unique y. But in statistics, for every x there is a unique distribution of y’s.

There are a couple of misleading messages in your statement <In a sample of N individuals,
K*N are expected to die of a heart attack each year>. First, it would be more accurate to say that with 90% confidence K (+ or - 0.06) of the population will die next year. Alternatively, you could rearrange the prediction to relate to the chance of any one person in the population having a fatal heart attack. This would most likely be such a small number that no one would pay much attention to it.

Second, the idea of applying population data to a sample N is essentially as meaningless as applying it to any one individual. Statistical predictions are ALWAYS a distribution, not an individual outcome. Most use of statistical data analysis is to try to estimate population statistics from measured samples.
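
For illustration only, a minimal sketch of such an interval using the normal approximation to the binomial (the death count and sample size are made-up numbers, not real data):

    # Sketch: approximate confidence interval for a per-capita incidence rate.
    import math

    def incidence_ci(deaths, n, z=1.645):    # z = 1.645 for ~90% confidence
        k = deaths / n
        half = z * math.sqrt(k * (1 - k) / n)
        return k - half, k + half

    lo, hi = incidence_ci(deaths=900, n=100_000)
    print(f"K estimated as 0.00900, 90% CI about ({lo:.5f}, {hi:.5f})")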

<Now suppose there is a treatment that is hypothesized to have either
of two effects:

  1. It actually reduces the chance of a fatal heart attack by 10% for
    every individual in the population. That is, if the whole population
    is given the treatment, the incidence of heart attacks will become
    0.9*K*N per year.

  2. It actually reduces the incidence of heart attacks by 100% for 10%
    of the individuals, and has no effect on the incidence of heart
    attacks in the other 90% of the population. In this case, too, the
    incidence of heart attacks will become 0.9*K*N per year.>

Your 1. has a strange twist in it. You would not give the treatment to every individual if you did not expect it to reduce each person's chance of a fatal heart attack, would you? The hypothesis is simply that across the entire population there will be enough of a reduction in risk to actually record 10% fewer deaths during the next 12 months. On your 2., while you have posited an attribute variable, die or not die, it does not mean that there was no effect of the treatment for those who did not die.

<I claim that if the treatment is given to the whole population, these
two cases are statistically indistinguishable.>

I do not see why this matters.

<All we can say is that K*N of the population will die without the treatment, and that 0.9*K*N will die with it.>

No, but you can say with some confidence that the treatment reduced fatal heart attacks by between X and Y%. That should help one decide if the treatment is worthwhile.

<We can’t tell if this equation should be written (0.9*K)*N, 90% of the previous risk for 100% of the population, or K*(0.9*N), 100 percent of the previous risk for 90% of the population.>

Again, why does how it came about really matter?

<If my conjecture is right, it opens the door to a continuum of
statistically indistinguishable cases in which a treatment appears to
provide a B% reduction of risk to individuals in a population where
N% of the members are at risk. Is this the result of a B% reduction
in risk for the whole population, or of a 100% reduction in risk for
B% of the population?>

This depends on who is included in the population. People with healthy hearts would not be expected to show a significant reduction, so would they be included? If only at-risk people receive the treatment and the fatalities fall 10%, that would seem relevant. Even then it is not conclusive; the drop could have happened by chance. Chance must be eliminated by statistical probability laws.

<It is also my contention that the most likely case is that a
treatment will benefit some proportion of the population for
physical/physiological reasons, and will not benefit the rest at all
for similar reasons. Problems are not caused by the chances of things
happening, but by specific physical effects that stem from regular
relationships among variables. The appearance of stochastic effects
arises primarily from errors of measurement, from the use of
incorrect models, and from lack of knowledge.>

Well, even if that contention is true, when one has a possible treatment, there is a way to use statistical laws to help verify its efficacy. My guess is there would be monthly data on fatal heart attacks. If you gave the treatment to the entire at-risk population from last year, it may take only a few months to confirm that the treatment is effective. You could then study the particulars of who still died to get a better picture of the characteristics of those who benefited. However, it is more likely that random samples would be used which would probably take longer to have the same confidence but cost far less.

Though statistical laws can add value, they will not tell you which person in the population will live rather than die that year. It will still be a population statistical method that improves the odds that more will not die from heart attacks. Such improvement is still valuable. The bottom line is that statistical analysis is more valuable for separating chance from certainty by actual experiment than for trying to project the future, be it for groups or individuals, where changes in the system that generates the data can give errant predictions.

Best,

Kenny

Still EATing!


···


[Martin Taylor 2007.07.30.15.50]

[From Rick Marken (2007.07.30.0920)]

This discussion started in the context of what would be the best
approach to financing health care in the US. I think the group data
are overwhelmingly in favor of a single payer system, which will cost
less (due to lower administrative costs) and provide better outcomes
(due to greater access).

Until I made my big spreadsheet of CIA and WHO data, I would have expected the same. But I have to say that I no longer believe what once seemed to be a no-brainer truism.

Check the scattergram in the tab "govt health share", which shows the deviation of log infant mortality from the regression line of log infant mortality vs health cost per capita, as a function of the proportion of health cost paid by government. If a single payer system is better, then one would expect there to be a correlation between this deviation and the government share. There isn't. Countries with much worse than expected infant mortality given the money devoted to health care include ones with government shares from the teens (percentage) to near 90% (many of them in Southern Africa, but including the USA at 44.7%), and countries with much better than expected infant mortality also range from the teens to the high 90s (Singapore is best, at 34%, but the other good ones are Cuba and European ex-Soviet countries).
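
For anyone who wants to check the shape of that computation, here is a sketch of the residual analysis; the stand-in arrays are NOT the spreadsheet values, just placeholders (statistics.correlation needs Python 3.10 or later):

    # Sketch: regress log infant mortality on health cost per capita,
    # then correlate the residuals with the government share of costs.
    import statistics

    def fit_line(xs, ys):
        """Ordinary least squares for a single predictor."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sxy / sxx
        return slope, my - slope * mx

    def residual_vs_share(cost, log_mortality, govt_share):
        """Correlation between regression residuals and government share."""
        slope, intercept = fit_line(cost, log_mortality)
        resid = [y - (slope * x + intercept) for x, y in zip(cost, log_mortality)]
        return statistics.correlation(resid, govt_share)

    # Placeholder data, purely to show the calculation runs:
    cost = [100, 500, 1000, 2000, 4000, 6000]
    log_mortality = [1.9, 1.5, 1.2, 1.0, 0.8, 0.7]
    govt_share = [0.2, 0.5, 0.3, 0.8, 0.45, 0.9]
    print(residual_vs_share(cost, log_mortality, govt_share))

With the real spreadsheet columns, the analogous correlation comes out near zero, which is why I doubt the truism.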

What I have been looking at is (to me) a much more interesting
question; that is the question of how individual prediction based on
regression analysis might be screwed up by differences between
individuals in terms of the "behavioral law" relating criterion to
predictor. Using you fatal heart attack example, the problem would be
to find an indicator (a predictor) of who in the population might
benefit from the drug. The goal is to classify people as those who
would benefit vs those who would not. I want to know how the
effectiveness of a predictor is affected by differences in the way
individuals actually respond to the treatment depending on their
standing on the predictor.

Exactly the right question, and the one I believe is normally asked in such studies.

Martin

[From Rick Marken (2007.07.30.1430)]

Martin Taylor (2007.07.30.15.50)

> Rick Marken (2007.07.30.0920)--
>

>I think the group data

>are overwhelmingly in favor of a single payer system, which will cost
>less (due to lower administrative costs) and provide better outcomes
>(due to greater access).

Until I made my big spreadsheet of CIA and WHO data, I would have
expected the same. But I have to say that I no longer believe what
once seemed to be a no-brainer truism.

Check the scattergram in the tab "govt health share", which shows the
deviation of log infant mortality from the regression line of log
infant mortality vs health cost per capita, as a function of the
proportion of health cost paid by government. If a single payer
system is better, then one would expect there to be a correlation
between this deviation and the government share.

This kind of evidence is not that convincing to me. It's not evident,
for example, that the proportion of health costs paid by the
government would have anything to do with single payer.

The evidence that convinces me is stuff like this from the Commonwealth Fund:

http://www.commonwealthfund.org/publications/publications_show.htm?doc_id=482678

Compared to other Western nations that have single payer, the US
(which does not) does worst on nearly every variable that is relevant
to evaluating the quality and cost of health care at the national
level.

And there is really no downside to adopting single payer. The "free
market" system just required that you buy the insurance from private
insurers. Well, you can still do that in a single payer system if you
want that extra special coverage for rich people.

I'm sure some ways of implementing single payer are better than
others. But I can see no downside to giving it a try (except to
insurance companies, who will lose a ton of business). If worst comes
to worst it will be no worse than the existing system. And no matter
what it will help American business in terms of international
competition since the cost of health care won't have to be included in
the price of their product. This will create jobs for all those people
who used to work for the health insurance companies;-)

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Bill Powers (2007.07.30.1620 MDT)]

This is interesting. The American Heart Association’s recommendation is
for people who have already had a heart attack! Where did that other one
come from? You will see a lot of additional data about individuals in
this discussion.


http://www.americanheart.org/presenter.jhtml?identifier=4456

Best,

Bill P.

I think your conjecture is right. All we know is that the treatment
changes the group level statistics; the effect on the individual is
(from my perspective) completely unknown. The only reason for an
individual to take the drug in this case is for the sake of society.
If everyone takes the drug, the cost to society, in terms of fatal
heart attacks, goes down. I would say that the treatment should only
be given to individuals in the group if it is also known that the risk
of adverse side effects from taking the drug is small. How small is a
matter of social judgment and agreement. This is the same situation
that occurs with immunization. We give all children polio vaccine even
though it is not going to prevent many individuals from getting polio
(only those exposed to it, which is virtually no one anymore). But the
cost (financial and health) of taking the vaccine is considered so
small that we go ahead and vaccinate everyone.

[From Bill Powers (2007.07.30.1625 MDT)]

Rick Marken (2007.07.30.0920) –

Very well said. “For the sake of society” is a group-level
consideration. And it’s not at all obvious where the tradeoffs should be
set. The polio vaccine has some very tiny but nonzero history of giving
polio to people. At what point is prophylactic use of that vaccine, or
any other, too dangerous?

If this is what we’re looking into then I think there is no argument.
What I think you are arguing against is the use of group level data to
make recommendations to individuals. The drug company that says, based
on group level data, that their drug will reduce your chances of
having a heart attack, is lying, just like psychologists who say,
based on group level data, that watching aggression on TV will
increase the aggressiveness of your child, are lying. This is not a
statistical problem; it is simply a problem (and it is a very big and
very pervasive problem) of using group level data as the basis for
making recommendations for individuals.

I seem to have been communicating very poorly, because this is exactly
what I have been trying to say all along. I am arguing against using
group level data to make recommendations to individuals. Group statistics
can be used to predict group phenomena. Individual analysis is needed to
make predictions about individuals. Yet people try all the time to mix
the two.

Relative to your comments on the single-payer system, the
last I heard, the administrative overhead of the Social Security
Administration was about 2.5% of the money it handles. I think of that
every time someone says that the private sector could take over that
system more efficiently.

Best,

Bill P.

[Martin Taylor 2007.07.30.22.58]

[From Rick Marken (2007.07.30.1430)]

Martin Taylor (2007.07.30.15.50)

> Rick Marken (2007.07.30.0920)--
>

>I think the group data

>are overwhelmingly in favor of a single payer system, which will cost
>less (due to lower administrative costs) and provide better outcomes
>(due to greater access).

Until I made my big spreadsheet of CIA and WHO data, I would have
expected the same. But I have to say that I no longer believe what
once seemed to be a no-brainer truism.

Check the scattergram in the tab "govt health share", which shows the
deviation of log infant mortality from the regression line of log
infant mortality vs health cost per capita, as a function of the
proportion of health cost paid by government. If a single payer
system is better, then one would expect there to be a correlation
between this deviation and the government share.

This kind of evidence is not that convincing to me. It's not evident,
for example, that the proportion of health costs paid by the
government would have anything to do with single payer.

Who else could be the single payer?

The evidence that convinces me is stuff like this from the Commonwealth Fund:

http://www.commonwealthfund.org/publications/publications_show.htm?doc_id=482678

Compared to other Western nations that have single payer, the US
(which does not) does worst on nearly every variable that is relevant
to evaluating the quality and cost of health care at the national
level.

I wouldn't call that report convincing evidence of your thesis. The US is the worst on most measures, but Canada is the second worst, and Canada is the closest country of the group to having a single-payer system. The others are rather more mixed.

If you look at 220 countries rather than 6, the situation becomes even less clear.

And there is really no downside to adopting single payer. The "free
market" system just required that you buy the insurance from private
insurers. Well, you can still do that in a single payer system if you
want that extra special coverage for rich people.

That would really remove it from being a single-payer system, to my understanding of the concept. The idea of a single payer system is that being wealthy shouldn't mean you get better medical service.

I'm temperamentally on your side. I believe that single-payer systems are better, but I don't have evidence to suggest they provide better public health outcomes. It makes simple good sense that siphoning off a percentage of the costs to shareholders of a profit-making medical company puts private health care at a disadvantage that it must overcome in ways that would also be available to publicly funded medicine. More compelling to me, though, is the immorality of letting medical accessibility be used as another tool to keep the poor in their place.

--------------Aside, even further from PCT------

(By the way, I see the same kind of immorality in allowing lawyers to be privately funded, only it's worse in the case of lawyers because there is direct competition between the rich person's lawyers and those of the poor opponent, at least in a civil suit; and in criminal cases, rich people can hire better lawyers than can poor people, so poor people are more likely than rich to be convicted on the same evidence. I live in dread of having to use a lawyer for anything more serious than drawing up a will or a house deed.

Law, like medicine, should be single payer, with no private payment of lawyers permitted, other, perhaps, than for providing advice on the legality or otherwise of possible courses of action. But that's just my opinion, and I don't know what kind of evidence might be used to support it other than simple morality arguments).

-------------sorry about that---------------

Martin

[From Rick Marken (2007.07.30.2225)]

Martin Taylor (2007.07.30.22.58) --

I wouldn't call that report convincing evidence of your thesis. The
US is the worst on most measures, but Canada is the second worst, and
Canada is the closest country of the group to having a single-payer
system. The others are rather more mixed.

Note that the per capita cost of coverage in Canada is about 1/2 of
what it is in the US. So Canada may be nearly as bad as the US in
terms of outcomes but they are paying only half as much for it. A
better way to say it is that Canada gets care that is slightly better
than that in the US for half the price. Canada may not be perfect but
it looks like it's on the right track.

>And there is really no downside to adopting single payer. The "free
>market" system just required that you buy the insurance from private
>insurers. Well, you can still do that in a single payer system if you
>want that extra special coverage for rich people.

That would really remove it from being a single-payer system, to my
understanding of the concept. The idea of a single payer system is
that being wealthy shouldn't mean you get better medical service.

I think the idea of single payer is that risk is spread over the
entire population and there is no profit so the administrative costs
go down substantially (as per Bill's note on Medicare). I don't see
how letting people buy extra insurance (once they have paid their
single payer tax to the government) would hurt anything.

I'm temperamentally on your side. I believe that single-payer systems
are better, but I don't have evidence to suggest they provide better
public health outcomes.

It looks like they provide outcomes that are at least as good as those
that come with a "free market" approach and some provide outcomes that
are much better. There is no "free market" system that is among the
top rated health care systems. The fact that the country with the
highest rated health care system in the world -- France -- is single
payer is also evidence that single payer is the way to go. As I said,
some ways of implementing a single payer system are probably better
than others. But a free market, for profit system is unquestionably
the worst. Another piece of evidence for this is that the US has the
resources that should make it the top health care provider in the
world. The fact that the US is last among industrialized nations in
outcomes/cost shows pretty clearly that the US "for profit" system
is the wrong way to go.

Even E. coli does a better job of finding its way to a solution than
does a free market ideologue;-)

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Bill Powers (2007.07.31.1120 MDT)]

Martin Taylor 2007.07.30.22.58 –

Below is a link to a discussion of the use of aspirin as a preventive for
heart attacks both for those at known risk and for the general
population. It is estimated that 36% of the US population is taking
aspirin for this purpose. I see no sign that any of the issues we have
been talking about are considered. The primary contraindications are
gastrointestinal bleeding and the danger of hemorrhagic strokes. All
these effects are detected after they happen, of course – there is no
way to predict what will happen to an individual because nobody knows why
one person is protected and another isn’t, or why one person bleeds and
another doesn’t.


http://www.medscape.com/viewarticle/556309

It is terribly tempting to say that the individual differences in
response are just a matter of chance, but of course they’re not. They are
due to specific differences in the individuals. If we knew more about the
human system, we could tell each person whether taking aspirin would
prevent a heart attack, and how much aspirin would be needed – although
we would probably have some other treatment with no side effects to use
instead. And we would not have a hundred million people or so eating
aspirins each day.

As we keep discussing this, it seems clearer all the time to me that the
issue is very simple. If we understood the human system, we would do what
is necessary to fix a problem without causing other problems, and we
would not apply that same treatment to those who do not have that
problem. As matters stand now, we apply treatments to many people because
in the past those treatments have helped some of them, but we can’t
predict which ones. From the standpoint of a clinic or a hospital or a
private practice that deals with many people, this approach does result
in a net improvement for the population; that is what the statistical
analyses are for, to assure that there is a population benefit (as Kenny
Kitzke has pointed out). But because group statistics are inevitably
applied to individuals, large numbers of people are treated with no
benefit at all, because that is the only known means of reaching those
who will benefit. This is true whether or not “fine slicing” is
used to identify subpopulations in which a greater proportion of benefits
will be obtained. No treatment benefits everyone to whom it is
applied.

All the talk about “risk factors” and “improving one’s
chances” is really about populations, not about individuals. Unless
you’re speaking of phenomena on the scale of subatomic particles,
chance is not a factor in determining the actual effects of treatments on
individuals. The uncertainty, dear Brutus, is not in phenomena but in our
understanding of them. That’s probably true of subatomic particles,
too.

Best,

Bill P.

[Martin Taylor 2007.07.31.10.12]

[From Bill Powers (2007.07.31.1120 MDT)]

Martin Taylor 2007.07.30.22.58 --

Bringing this thread back toward PCT...

Below is a link to a discussion of the use of aspirin as a preventive for heart attacks both for those at known risk and for the general population. It is estimated that 36% of the US population is taking aspirin for this purpose.

I'm not sure why this discussion is going in the direction it is. All I can see is that you wish to demonstrate that you are proposing something original, rather than agreeing with people, and especially medical researchers. I don't see any contradiction between researchers wanting to know why things happen and practitioners doing the best they can with the knowledge currently available.

You do seem to see some contradiction, the underlying tenor seeming to be that one shouldn't act to influence a controlled perception (I extend from medical practice to the more general) if one knows only statistically the probable effect of the action on the controlled perception without knowing all the branches of the environmental feedback path. We NEVER can be certain that our actions will affect a particular perception in a given direction; we only know that it has in the past. When we get to higher level perceptions, and particularly social ones, we very often get different effects from the same actions (a mantra of yours).

Sometimes the effects of our actions influence our perceptions away from their reference values, in ways that are unrecoverable -- a friendship broken by an ill-advised comment, a death due to stepping on a rock that seemed solid while on a mountain slope -- sometimes the perception may still be controlled. But we act on what we NOW know (know being implicit in the reorganized structure of control elements, and in conscious logical representation). We do what usually works best and with fewest side-effects (wasted energy that may well influence other perceptions in random directions).

The medical situation is no different. The doctors and the patients well know that this patient may be one who will get no benefit from aspirin and may instead suffer from internal bleeding. But from what they both know, death by heart attack is unrecoverable, whereas internal bleeding can usually be stopped; statistically the chances of dying from heart attack are lowered appreciably, the chances of suffering internal bleeding are increased a bit.

I don't think there's any disagreement that it would be better if the doctor and the patient knew whether this patient is one that would or would not have a reduced heart attack risk, and whether this patient is one that would or would not have an increased risk of internal bleeding (the patient could get both effects). That's what researchers would like to be able to determine, preferably by understanding mechanisms, but if not that, by using other
measures that correlate with the different risks and benefits.

If you want to protect your perception of yourself as being uniquely percipient on this problem, that's OK. It's a quirk that doesn't really matter to me, and I doubt I will pursue the thread any more unless it has some PCT implications. I've posted the spreadsheet of stuff from the CIA World Factbook and from the World Health Organization, for those that are interested, with a few scattergrams relevant to the prior discussion, but really all this stuff belongs on some other mailing list.

The PCT-relevant aspect, so far as I can see, is the control of one perception through varying environmental feedback paths, when a separate perception can provide some imperfect information about the current state of the feedback path.

Martin

I’m not sure why this discussion is going in the direction it is. All
I can see is that you wish to demonstrate that you are proposing
something original, rather than agreeing with people, and especially
medical researchers. I don’t see any contradiction between researchers
wanting to know why things happen and practitioners doing the best
they can with the knowledge currently available.

[From Bill Powers (2007.07.31.0845 MDT)]

Martin Taylor 2007.07.31.10.12 –

That, of course, sounds like a perfectly reasonable point of view. How
could anyone object to people “doing the best they can?” I
certainly don’t.

What I object to is trying to make “the best we can do” sound
far better than it is. Statistical facts are really pretty inadequate
when they’re based on low correlations. My point is that if we exaggerate
the usefulness of such facts enough – and this is commonly done – we
can end up asserting truths that are false in more cases than they are
true – such as the truth that taking an aspirin every day will protect
against heart attacks. For most people that is false, even though it’s
true for the population, and even more true for the subpopulation with
prior heart attacks.

You do seem to see
some contradiction, the underlying tenor seeming to be that one shouldn’t
act to influence a controlled perception (I extend from medical practice
to the more general) if one knows only statistically the probable effect
of the action on the controlled perception without knowing all the
branches of the environmental feedback path.

That is not accurately stated. It’s not the idea that we don’t know
“all the branches” that bothers me. It’s when we don’t know any
of them with sufficient accuracy to control something that I pull back
from enthusiastic endorsement. “Acting to influence a controlled
variable” assumes that there must be at least a little influence in
an individual case if the statistics indicate a population trend. I’ve
been arguing for the last week or so (and actually for much longer than
that) that if you’re talking about controlling a specific variable in an
individual case, that is very likely to be untrue.

We NEVER can be
certain that our actions will affect a particular perception in a given
direction; we only know that it has in the past. When we get to higher
level perceptions, and particularly social ones, we very often get
different effects from the same actions (a mantra of
yours).

In many cases that is true. In other cases it’s near enough to false as
to make no difference. Most of the control processes we carry out have a
probability of working the way we expect that is so close to a certainty
that there’s no point in doubting that they will happen correctly. The
probabilities that apply to most real behavior are in a range that simply
doesn’t overlap with the probabilities we find in psychological
experiments. If our control processes were as uncertain as you seem to
believe they are, we could not drive a car, make a phone call, type a
letter, stay out of jail, or avoid poisoning ourselves for a week, much
less a lifetime. I agree that there is more uncertainty at the higher
levels, but I don’t agree that it’s as extreme as you seem to believe it
is – especially for the variables most important to control
accurately.

Sometimes the
effects of our actions influence our perceptions away from their
reference values, in ways that are unrecoverable – a friendship broken
by an ill-advised comment, a death due to stepping on a rock that seemed
solid while on a mountain slope – sometimes the perception may still be
controlled. But we act on what we NOW know (know being implicit in the
reorganized structure of control elements, and in conscious logical
representation). We do what usually works best and with fewest
side-effects (wasted energy that may well influence other perceptions in
random directions).

I’m sorry that you think a friendship can be broken by a single
ill-advised comment. I assure you that it would take more than that for
you to lose my friendship. And if there were a significant chance that
stepping on a loose rock will result in the death of a mountain climber,
the only mountain climbers would be those attempting to commit suicide.
There is a huge difference between something that could just barely
conceivably happen, and something that has a dangerous probability of
happening. Most of what we act on is as solid as almost all of those
rocks on mountainsides, and as predictable as the support of almost all
good friends.

I think you are still confusing predictions about group phenomena and
predictions about individual instances. Nothing you say about uses of
population statistics for purposes related to population variables seems
objectionable to me.

The medical
situation is no different. The doctors and the patients well know that
this patient may be one who will get no benefit from aspirin and may
instead suffer from internal bleeding. But from what they both know,
death by heart attack is unrecoverable, whereas internal bleeding can
usually be stopped; statistically the chances of dying from heart attack
are lowered appreciably, the chances of suffering internal bleeding are
increased a bit.

That’s a good example of what I just said. “This patient” may
get no benefit from aspirin and may suffer from internal bleeding.
Factors inside the patient have, in fact, already determined what those
outcomes would be for that person to a very high degree of certainty, so
no important probabilities are involved except that we don’t know if
aspirin will be given or not, since the doctor and the patient are
uncertain about what to do.

After talking about “this patient,” however, you go on to say
that “death by heart attack is unrecoverable” and
“internal bleeding can be stopped”, so the chance of dying from
heart attack is lowered while the chance of internal bleeding is
increased a bit. Those are statements about the population, not about any
individual (“unrecoverable” implies that you have knowledge
from many cases). There will be fewer deaths and slightly more bleeding,
statements with which I have to agree. The only problem is that for most
of the patients, there will be no effect from the aspirin except a slight
increase in coagulation time. Some of them, but still a minority, will
suffer some bleeding. Most of the population will have heart attacks at
the same rate as before.

If you want to
protect your perception of yourself as being uniquely percipient on this
problem, that’s OK. It’s a quirk that doesn’t really matter to me, and I
doubt I will pursue the thread any more unless it has some PCT
implications.

My, we’re in a nasty mood this morning, aren’t we? Could it be that I’m
casting doubt on something you value?

The PCT-relevant
aspect, so far as I can see, is the control of one perception through
varying environmental feedback paths, when a separate perception can
provide some imperfect information about the current state of the
feedback path.

“Imperfect information” is a nicely ambiguous term, in that the
information can be almost totally false, or it can be 99.9% correct, and
still be classified as “imperfect.” If the information about
the controlled variable is as imperfect as, say, a “high”
correlation of 0.8 would imply, the RMS error would be something like 50%
of the magnitude of the reference signal, and the amount of control would
be rather hard to see.

[Figure: scatterplot of Y against X at a correlation of 0.8, taken from Richard Kennaway’s article]

This wouldn’t matter if the controlled variable were unimportant;
otherwise it would matter a lot. It would matter a lot if the above plot
showed the actual effect (Y) of turning the steering wheel by a given
amount (X).
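
As a quick check on that figure: by the standard regression identity, the residual RMS is sqrt(1 - r^2) of the signal’s standard deviation, about 60% of the SD for r = 0.8, which is the range I mean by “something like 50%”:

    # Residual RMS as a fraction of the signal SD, given correlation r.
    import math

    r = 0.8
    print(math.sqrt(1 - r ** 2))   # 0.6: 60% of the SD remains as error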

It still seems to me that the really basic mistake is to assume that a
population trend implies at least some effect of the same kind on each
individual in the population. Richard Kennaway has shown, in his analysis
of probability ellipses, that there is no necessary relation between
population effects and individual effects. Yet the assumption of such a
relationship appears in essentially every statistical treatment I have
ever seen in the psychological literature.

Best,

Bill P.

[Martin Taylor 2007.08.01.17.52]

[From Bill Powers (2007.07.31.1120 MDT)]

Martin Taylor 2007.07.30.22.58 --

Below is a link to a discussion of the use of aspirin as a preventive for heart attacks both for those at known risk and for the general population. It is estimated that 36% of the US population is taking aspirin for this purpose. I see no sign that any of the issues we have been talking about are considered.

The link didn't work, but I don't think it's very relevant to the discussion.

The question to me is why you think something aimed at practitioners, as such articles mostly are, should be concerned with the fact that ...

It is terribly tempting to say that the individual differences in response are just a matter of chance, but of course they're not. They are due to specific differences in the individuals.

Even practitioners presumably know that, and researchers look for what those differences may be, when they can.

If we knew more about the human system, we could tell each person whether taking aspirin would prevent a heart attack, and how much aspirin would be needed -- although we would probably have some other treatment with no side effects to use instead. And we would not have a hundred million people or so eating aspirins each day.

You are an idealist, aren't you! But what's surprising is not your idealism, it's that you make these motherhood statements as though they were novel, rather than being the ideal of every medical researcher interested in the issue, and self-evident into the bargain.

As we keep discussing this, it seems clearer all the time to me that the issue is very simple. If we understood the human system, we would do what is necessary to fix a problem without causing other problems, and we would not apply that same treatment to those who do not have that problem.

Wow! Isn't that a simple solution! Understanding all of the PCT structure in the individual is really what you are talking about, since all the cellular processes that can be varied are really perceptual control systems. The only thing you need to know is all the perceptual functions (chemical at the cellular level, I guess, though it looks more and more as though mechanical shapes are important at the molecular level), all the linkages, which presumably differ among individuals, and all the available actions for every control system, from molecular through cellular, through psychological. Very simple. Now we know what we need to know, well, it must be perversity that leads us not to know it.

As matters stand now, we apply treatments to many people because in the past those treatments have helped some of them, but we can't predict which ones. From the standpoint of a clinic or a hospital or a private practice that deals with many people, this approach does result in a net improvement for the population;

Which actually means that for EVERY individual, knowing what is currently known about the individuals and the mechanisms, the probability is that the treatment will help that individual more than it hurts. I emphasise EVERY individual, because without correlative knowledge, individuals are strictly interchangeable in respect of the effect of the treatment.

that is what the statistical analyses are for, to assure that there is a population benefit (as Kenny Kitzke has pointed out). But because group statistics are inevitably applied to individuals, large numbers of people are treated with no benefit at all, because that is the only known means of reaching those who will benefit.

Yep. That is the only way of getting at those who will benefit when you don't know how to discriminate them from those who won't. Very true.

This is true whether or not "fine slicing" is used to identify subpopulations in which a greater proportion of benefits will be obtained. No treatment benefits everyone to whom it is applied.

All the talk about "risk factors" and "improving one's chances" is really about populations, not about individuals. Unless you're speaking of phenomena on the scale of subatomic particles, chance is not a factor in determining the actual effects of treatments on individuals. The uncertainty, dear Brutus, is not in phenomena but in our understanding of them. That's probably true of subatomic particles, too.

No argument there. From what quarter would you expect an argument?

I really don't know what you are arguing for, unless it's more funding for medical research. Week after week there are reports dealing with mechanism of this and that effect. And sometimes those research results do allow the practitioners to say "You should/should not expect to benefit from taking this drug" or "You would be at some risk whereas she would be likely to benefit with very little risk". Sometimes they are correlative results based on genetic analysis, sometimes they are mechanistic.

From the practitioner's point of view it doesn't matter whether the reason they don't recommend a treatment to Joe Blow is because red-heads with a big left foot tend to have bad side effects or because they know that Joe Blow's production of some hormone is higher than optimum and this drug makes it even higher when the symptoms can come equally from excess hormone or low levels of it (more common in the population). What they know is that Joe Blow shouldn't take this drug, whereas Mary Mulligan probably would benefit.

You said a couple of messages ago that I was missing your point. Maybe I am, but if so, it's devilish hard to find.

Martin

[From Rick Marken (2007.08.01.2100)]

Martin Taylor (2007.08.01.17.52) --

The question to me is why you think something aimed at practitioners,
as such articles mostly are, should be concerned with the fact that
...
>It is terribly tempting to say that the individual differences in
>response are just a matter of chance, but of course they're not.
>They are due to specific differences in the individuals.

Even practitioners presumably know that, and researchers look for
what those differences may be, when they can.

That's just fine slicing (to use Phil Runkel's lovely term for studies
that look for other variables that may be responsible for the
difference in response of individuals). I completely agree with Bill on this
one. Group data cannot be used to understand individual processes, no
matter how fine the slicing. Group data is relevant only to policy
makers; people who deal with groups. Practitioners, who deal with
individuals, should be taught what is known of how the system actually
works (precious little in the case of psychology) and they should base
their practice on that. If they want to prescribe drugs based on group
level data they should be instructed to tell the patient something
like: "This may not help you, and it may actually make things worse
for you, but if all doctors prescribe this for cases like yours, there
is more success in the group of patients who take the drug. I hope it
helps you individually but I have absolutely no idea whether it will
or not. Do you still want to take it?"

The only psychological research of any value in psychology is the kind
based on models of one person at a time, i.e., mine ;-)

Best

Rick

···

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[Martin Taylor 2007.08.02.0.33]

[From Rick Marken (2007.08.01.2100)]

Martin Taylor (2007.08.01.17.52) --

The question to me is why you think something aimed at practitioners,
as such articles mostly are, should be concerned with the fact that
...
>It is terribly tempting to say that the individual differences in
>response are just a matter of chance, but of course they're not.
>They are due to specific differences in the individuals.

Even practitioners presumably know that, and researchers look for
what those differences may be, when they can.

That's just fine slicing (to use Phil Runkel's lovely term for studies
that look for other variables that may be responsible for the
difference in response of individuals). I completely agree with Bill on this
one. Group data cannot be used to understand individual processes, no
matter how fine the slicing.

Quite so. Who said they could? However, fine slicing can lead to questions that ask why this group has one effect and that group has another. Meanwhile, for the practitioner, the fact that this group shows one effect and that group shows another can help guide the prescription of differing therapies for each group.

Group data is relevant only to policy
makers; people who deal with groups. Practitioners, who deal with
individuals, should be taught what is known of how the system actually
works (precious little in the case of psychology) and they should base
their practice on that.

They should base their practice on anything that is known about the effects of proposed treatment, using ALL the information at hand. That includes, of course, whatever is known about mechanism (though in that area a little knowledge can be a dangerous thing). From the viewpoint of the individual patient, I would change my doctor if he refused to use data that showed many people "like" me had benefited from using the treatment, simply on the grounds that he didn't know why it worked.

If they want to prescribe drugs based on group
level data they should be instructed to tell the patient something
like: "This may not help you, and it may actually make things worse
for you, but if all doctors prescribe this for cases like yours, there
is more success in the group of patients who take the drug. I hope it
helps you individually but I have absolutely no idea whether it will
or not. Do you still want to take it?"

"Absolutely no idea" is too strong, and "all doctors prescribe" is unnecessary, but what you suggest is otherwise pretty close to the wording that I said should be used, some messages back. My own doctor tends to say things like "Well, it could help, if you want to try, but there's no guarantees, and I don't think it's likely to hurt, but if you do get a headache, stop." He doesn't usually give statistical probabilities.

The only research of any value in psychology is the kind based on
models of one person at a time, i.e., mine ;-)

I don't know that this happens in psychology, but I think one of the most useful features of large-scale medical studies is the way they suggest avenues of research into mechanism.

Martin

[From Jeff Vancouver (2007.08.02.0930)]

I am not sure if Martin will get this (at least directly), but I wanted to
express my support for his positions. For example, his [Martin Taylor
2007.08.01.16.53] post, arguing that modeling and research are required to
understand the questions of social organization, counters speculation about
how these systems ought to behave (it is probably too hard to work this
through in our heads without modeling and research to support our thinking).
On the issue of statistical prediction, Martin's post [Martin Taylor
2007.08.01.17.52] captures my reaction to Bill's post, which is, mostly:
what is the argument/point?

But let me expand on Martin's point by asking this hypothetical. What if a
group study were done such that all the members of the experimental group
benefited and none of the members of the control group benefited. That is,
the experimental treatment affected all the individuals exposed to it. Then
to confirm the finding, the study was repeated (using large numbers, random
assignment, placebo controls, etc.) and the findings repeated. True, this
rarely happens, but if it did, would you advise the doctor to hedge then?
Was nothing learned? Sure, we might not understand the process, but it would
also motivate a search for process. Now as we back off from that extreme
case, is there a point where the effects apply to so small a portion of the
sample (and are so small in magnitude) that one should claim nothing
meaningful is happening for anyone, such that one should never recommend the
possible treatment? That is a more difficult question, which statistical
analysis helps us answer (though I and most educated researchers would argue
that there is some arbitrariness in that analysis). Most of the time, we are
in between these extremes, learning something along the way.
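
To make the middle of that continuum concrete, here is a minimal
simulation sketch (my own; the sample size, baseline risk, and
responder fractions are invented assumptions, not data from any real
trial). It shows how a two-arm trial slides from an unambiguous result
toward noise as the fraction of true responders shrinks:

import math, random

random.seed(42)

def simulate_arm(n, risk):
    """Count adverse outcomes among n patients who share one risk."""
    return sum(random.random() < risk for _ in range(n))

def two_arm_trial(n, base_risk, responder_frac):
    """Hypothetical treatment: abolishes the risk for responder_frac
    of the treated arm, does nothing for the rest; the control arm
    keeps the base risk. Responders contribute zero events."""
    control = simulate_arm(n, base_risk)
    treated = simulate_arm(int(n * (1 - responder_frac)), base_risk)
    return control, treated

def z_stat(c_events, t_events, n):
    """Two-proportion z statistic for equal-sized arms."""
    p1, p2 = c_events / n, t_events / n
    p = (c_events + t_events) / (2 * n)
    se = math.sqrt(2 * p * (1 - p) / n)
    return (p1 - p2) / se if se else 0.0

n, k = 5000, 0.05
for frac in (1.0, 0.5, 0.1, 0.02):
    c, t = two_arm_trial(n, k, frac)
    print(f"responders={frac:4.0%}  control={c:4d}  treated={t:4d}  "
          f"z={z_stat(c, t, n):5.2f}")

With 100% responders the z statistic is enormous; at 2% it sits down in
the region where the arbitrariness of a significance cutoff decides the
matter.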

Now also consider the individual analysis (within person) approach. Suppose
I give the treatment, withdraw the treatment, etc., measuring the results
along the way. I find that this treatment does help the individual. Should
I say with complete confidence that the treatment will work for that
individual? No, it might have been that during the study the individual
never did something that interacts with the treatment (e.g., drank alcohol)
which would completely undermine the treatment. That is, the design is not
the problem. Not understanding the limitations of designs is the problem.
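
A minimal sketch of that trap (again my own; the response model and
the numbers are invented assumptions, not real pharmacology): an
A-B-A-B withdrawal design that looks conclusive for one individual,
even though a hidden interaction would wipe out the effect -- it just
never occurred during the study.

import random

random.seed(7)

def symptom(on_treatment, drank_alcohol):
    """Hypothetical response model: the drug halves symptom severity,
    but alcohol completely blocks it."""
    base = 10 + random.gauss(0, 1)              # untreated severity
    effective = on_treatment and not drank_alcohol
    return base * (0.5 if effective else 1.0)

for phase in ["A", "B", "A", "B"]:              # A = withdrawal, B = treatment
    scores = [symptom(phase == "B", drank_alcohol=False) for _ in range(10)]
    print(phase, round(sum(scores) / len(scores), 1))

The design cleanly shows the effect, but only because drank_alcohol
stayed False throughout. The same person drinking nightly would show no
B-phase improvement, and nothing in this design could have revealed
that.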

Bill P. is right that if we knew everything about the processes we would be
in a much better position. Of course, but I for one do not want to take
group difference studies (e.g., randomized clinical trials) out of the
scientists' toolbox. Sure, sometimes it might misdirect efforts, but if one
is to critically incorporate findings from these types of studies, the field
is more likely to make progress. In addition, these types of studies can be
used to test process models (I have done such studies). They are not
mutually exclusive. I think Runkel and McGrath had it right when they
argued for trianglization of research methods.

Jeff V.

[Martin Taylor 2007.08.02.11.59]

[From Jeff Vancouver (2007.08.02.0930)]

I am not sure if Martin will get this (at least directly), but I wanted to
express my support for his positions.

Got it, loud and clear! Thanks for the support.

Bill P. is right that if we knew everything about the processes we would be
in a much better position. Of course, but I for one do not want to take
group difference studies (e.g., randomized clinical trials) out of the
scientists' toolbox. Sure, sometimes it might misdirect efforts, but if one
is to critically incorporate findings from these types of studies, the field
is more likely to make progress. In addition, these types of studies can be
used to test process models (I have done such studies). They are not
mutually exclusive. I think Runkel and McGrath had it right when they
argued for trianglization of research methods.

"Triangulation" is what my supervisor tried to drum into me in graduate school. If you can find two different ways of looking at what ought to be the same question, you are much more likely to find a pointer to the mechanism than you are by refining the precision of a single view.

In war movies, you often see a plane caught in two searchlight beams from different directions. The intersection of the beams gives the anti-aircraft gunners much more information than they would get from doubling the intensity of one beam.

One of the issues about Bill P's insistence on finding mechanisms is that, like a hierarchic perceptual control system, the mechanisms in question are at different levels, each of which can either be treated within its own domain, or can be seen as dependent on a foundation of mechanisms at a supporting level. In medical practice, there are psychological mechanisms (which presumably a perfect knowledge of PCT would elucidate), and much therapy could probably be done for many diseases by using purely psychological methods (as attested by the success of witch-doctors, or more mundanely, of placebos sold by high-powered advertising). Psychological phenomena often have physiological mechanisms -- consider the changes in personality that often accompany brain damage or stroke. Physiological phenomena often have chemical mechanisms... and so forth, down to quantum-level mechanisms.

When one asks for the mechanism of a particular medical treatment, where in the hierarchy do you stop and say "it's a black box below this, that just works"? Must the doctor choosing a suitable drug for THIS patient understand the quantum interactions that affect the protein folding that influences the ability of the drug to permeate the cell membrane,... Or is it enough for him to know that 80% of the people LIKE this patient do well on the drug, especially if, with conviction, he tells the patient that this is so? Or is it somewhere between those extremes, and if so, where?

My answer is that any information is better than none, and more is better than some. And if the doctor has information of different kinds, such as "the way this treatment seems to work makes it look suitable for THIS patient" and at the same time "statistical studies show that people like THIS patient usually benefit from this treatment", then he's better off than if he had only one of those two kinds of support for his decision to use or not to use the medicine in question, no matter how precise the one kind of information seems to be.

Martin

[From Rick Marken (2007.08.02.1600)]

Jeff Vancouver (2007.08.02.0930)--

I am not sure if Martin will get this (at least directly), but I wanted to
express my support for his positions.

And I want to re-express my lack of support for them.

What if a
group study were done such that all the members of the experimental group
benefited and none of the members of the control group benefited. That is,
the experimental treatment affected all the individuals exposed to it. Then
to confirm the finding, the study was repeated (using large numbers, random
assignment, placebo controls, etc.) and the findings repeated. True, this
rarely happens, but if it did, would you advise the doctor to hedge then?

Saying that this "rarely happens" is a wild overstatement. It never
happens. But if it did, then the group level data would, in this case,
also apply to the individuals in the group.

Now as we back off from that extreme case, is there a point where the
effects apply to so small a portion of the sample (and are so small in
magnitude) that one should claim nothing meaningful is happening for
anyone, such that one should never recommend the possible treatment?

That's not the point. I would be willing to recommend the treatment to
improve group level results even if there were a fairly low
correlation between treatment and outcome, as long as there was also
no evidence of ill effects from taking the treatment. My main
complaint is that group level data -- even when there is a high
correlation between treatment and outcome -- cannot tell you anything
about the individual processes that are responsible for any observed
group level results.

Now also consider the individual analysis (within person) approach. Suppose
I give the treatment, withdraw the treatment, etc., measuring the results
along the way. I find that this treatment does help the individual. Should
I say with complete confidence that the treatment will work for that
individual? No, it might have been that during the study the individual
never did something that interacts with the treatment (e.g., drank alcohol)
which would completely undermine the treatment. That is, the design is not
the problem. Not understanding the limitations of designs is the problem.

Right. This is not a study designed to understand the individual
processes that are responsible for the treatment's effects.

Bill P. is right that if we knew everything about the processes we would be
in a much better position. Of course, but I for one do not want to take
group difference studies (e.g., randomized clinical trials) out of the
scientists' toolbox.

Neither do I. I just want the scientist (and, more importantly, the
practitioner) to know that the results of these randomized trials
tell you only about groups, not individuals, so they are relevant only
to people who are in policy making positions.

Sure, sometimes it might misdirect efforts, but if one
is to critically incorporate findings from these types of studies, the field
is more likely to make progress.

But they never do anything _but_ these kinds of studies. I have
_never_ seen properly done individual studies that were motivated by
group level findings. The few cases of individual studies with which I
am familiar -- like the baseball catching studies -- were based on an
analysis of how the individual might work.

In addition, these types of studies can be
used to test process models (I have done such studies).

You shouldn't really use group level data to test process models of
individual behavior (although I did do it to analyze my error model,
but that model was really more of a framework for understanding group
level behavior; I did the work as a policy study so it really was a
group level model, like my model economy).

The problem of using group data to test a model of individual behavior
is nicely illustrated in Powers' paper in the Perceptual Control
Theory issue of the _American Behavioral Scientist_. I highly
recommend it to anyone doing conventional research!!

They are not
mutually exclusive. I think Runkel and McGrath had it right when they
argued for trianglization of research methods.

I can't believe that Phil would endorse the kind of
"triangularization" you suggest. But again, I don't believe that it
really happens much anyway. As I said, I have never seen a study of
individuals carried out as a result of "triangularization" based on
the results of group level experiments. I think one of the reasons
conventional psychology has gotten nowhere in terms of understanding
individual behavioral processes (which we know to involve the control
of perceptual variables) is precisely because of the persistent use of
group designs to study individual behavior. Group experiments are
always based on an open loop model (the general linear model of
statistics) so it is _guaranteed_ that such research, even if treated
as a source of ideas for research on individuals (for which, as I
said, it is never really used), will be misleading regarding
individual psychological processes.
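
To illustrate what I mean by _guaranteed_ to mislead, here is a
minimal sketch of a closed-loop simulation (my own construction; the
loop gains and the environment constant are arbitrary assumptions).
Regress "responses" on "stimuli" the way the general linear model
does, and the slope you recover is a property of the environment, not
of the organism:

import random

random.seed(3)

def steady_output(d, Ke=2.0, Ko=50.0, dt=0.01, steps=2000, r=0.0):
    """One trial of a simple integrating control loop:
    input p = Ke*o + d, error e = r - p, do/dt = Ko*e."""
    o = 0.0
    for _ in range(steps):
        e = r - (Ke * o + d)
        o += Ko * e * dt
    return o

# Treat disturbances as "stimuli" and steady-state outputs as "responses".
ds = [random.uniform(-5, 5) for _ in range(50)]
outs = [steady_output(d) for d in ds]

# Ordinary least squares slope of "response" on "stimulus":
n = len(ds)
mx, my = sum(ds) / n, sum(outs) / n
slope = sum((x - mx) * (y - my) for x, y in zip(ds, outs)) / \
        sum((x - mx) ** 2 for x in ds)
print(round(slope, 3))    # ~ -0.5, i.e., -1/Ke

The regression recovers -1/Ke, the inverse of the environmental
feedback function; change the organism parameter Ko from 50 to 5 and
you get the same slope. An open-loop analysis of closed-loop data
tells you about the environment while seeming to tell you about the
organism.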

Best

Rick

--
Richard S. Marken PhD
Lecturer in Psychology
UCLA
rsmarken@gmail.com

[From Jeff Vancouver (2007.08.03.0915)]

[From Rick Marken (2007.08.02.1600)]

> Jeff Vancouver (2007.08.02.0930)--

> What if a group study were done such that all the members of the
> experimental group benefited and none of the members of the control
> group benefited. That is, the experimental treatment affected all the
> individuals exposed to it. Then to confirm the finding, the study was
> repeated (using large numbers, random assignment, placebo controls,
> etc.) and the findings repeated. True, this rarely happens, but if it
> did, would you advise the doctor to hedge then?

Saying that this "rarely happens" is a wild overstatement. It never
happens. But if it did, then the group level data would, in this case,
also apply to the individuals in the group.

I NEVER make wild overstatements. Seriously though, you acknowledged my
conceptual point (I am not interested in defending the literal point).

> Now as we back off from that extreme case, is there a point where
> the effects apply to so small a portion of the sample (and are so
> small in magnitude) that one should claim nothing meaningful is
> happening for anyone, such that one should never recommend the
> possible treatment?

That's not the point. I would be willing to recommend the treatment to
improve group level results even if there were a fairly low
correlation between treatment and outcome, as long as there was also
no evidence of ill effects from taking the treatment. My main
complaint is that group level data -- even when there is a high
correlation between treatment and outcome -- cannot tell you anything
about the individual processes that are responsible for any observed
group level results.

As I see it, you are missing the point. Above you acknowledge that in the
extreme case (perfect correlation), the group data is no different than the
individual data. Yet here you seem to suggest that once that effect loses
perfection, the group and individual levels are completely separate.

> Now also consider the individual analysis (within person) approach.
> Suppose I give the treatment, withdraw the treatment, etc., measuring
> the results along the way. I find that this treatment does help the
> individual. Should I say with complete confidence that the treatment
> will work for that individual? No, it might have been that during the
> study the individual never did something that interacts with the
> treatment (e.g., drank alcohol) which would completely undermine the
> treatment. That is, the design is not the problem. Not understanding
> the limitations of designs is the problem.

Right. This is not a study designed to understand the individual
processes that are responsible for the treatment's effects.

Even the test for the controlled variable is subject to the limitations I am
describing.

> Bill P. is right that if we knew everything about the processes we
> would be in a much better position. Of course, but I for one do not
> want to take group difference studies (e.g., randomized clinical
> trials) out of the scientists' toolbox.

Neither do I. I just want the scientist (and, more importantly, the
practitioner) to know that the results of these randomized trials
tell you only about groups, not individuals, so they are relevant only
to people who are in policy making positions.

A group that is given chicken soup recovers from a cold more quickly (or
with fewer symptoms) than a group not given chicken soup. In a follow-up
study, a group given hot liquids recovers better than a group given chicken
bouillon. We have learned that the mechanism by which chicken soup works
appears to have more to do with the soup than the chicken.

> Sure, sometimes it might misdirect efforts, but if one is to
> critically incorporate findings from these types of studies, the
> field is more likely to make progress.

But they never do anything _but_ these kinds of studies. I have
_never_ seen properly done individual studies that were motivated by
group level findings. The few cases of individual studies with which I
am familiar -- like the baseball catching studies -- were based on an
analysis of how the individual might work.

I doubt anyone can win this argument. If the study was "done properly" it
was based on analysis of how the individual might work; if not, it was group
level. Tautology alert.

> In addition, these types of studies can be
> used to test process models (I have done such studies).

You shouldn't really use group level data to test process models of
individual behavior (although I did do it to analyze my error model,
but that model was really more of a framework for understanding group
level behavior; I did the work as a policy study so it really was a
group level model, like my model economy).

I created two computational process models, found a condition in which they
predicted different things, and exposed individuals to the conditions. If
exposure to one condition interfered with the other condition (e.g., I
cannot erase trained knowledge), I had better use a between-subjects
(groups) design; if not, then one can use a within-subjects design (more
efficient as well). I have done both. One wants to use the design that
eliminates the most (and most likely) alternative explanations.
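
A minimal sketch of that design-choice rule (my own; the effect size
and the carryover term are invented assumptions): when training cannot
be erased, the within-subjects comparison is biased by carryover, while
the between-subjects comparison is not.

import random

random.seed(11)

TRUE_EFFECT = 5.0    # condition B beats condition A by this much
CARRYOVER = 4.0      # having already done B inflates a later A score

def score(condition, carried_over):
    """Hypothetical performance score under one condition."""
    base = random.gauss(50, 2)
    return base + (TRUE_EFFECT if condition == "B" else 0.0) \
                + (CARRYOVER if carried_over else 0.0)

# Within-subjects, B then A: the later A scores carry the B training.
within = [score("B", False) - score("A", True) for _ in range(500)]
# Between-subjects: separate people per condition, no contamination.
b_group = [score("B", False) for _ in range(500)]
a_group = [score("A", False) for _ in range(500)]

print(round(sum(within) / 500, 2))                        # ~1.0, biased
print(round(sum(b_group) / 500 - sum(a_group) / 500, 2))  # ~5.0, unbiased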

The problem of using group data to test a model of individual behavior
is nicely illustrated in Powers' paper in the Perceptual Control
Theory issue of the _American Behavioral Scientist_. I highly
recommend it to anyone doing conventional research!!

Yes, and these are very useful illustrations. But there are also
illustrations of cases where individual data is misinterpreted. Indeed, you
had persuaded me to use the test for the controlled variable in my
research. I find that in practice it is very difficult to justify. For
instance, if one is studying a case where the reference level is changing,
or the cues that might be used by a hypothesized input function (i.e., the
hypothesized controlled variable) are not directly measurable, the TCV
loses much of its value.
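
For concreteness, here is a minimal sketch of the logic of the TCV in
the easy case (my own toy construction; the environment function and
the gain are arbitrary assumptions): apply a disturbance to the
hypothesized controlled quantity and check whether the system's
actions cancel it.

import random

random.seed(5)

def run_test(controls_q, steps=500, dt=0.01, gain=20.0):
    """Return the correlation between the disturbance and the
    hypothesized controlled quantity q. If q is controlled, actions
    oppose the disturbance and the correlation collapses."""
    q_hist, d_hist = [], []
    action, d = 0.0, 0.0
    for _ in range(steps):
        d += random.gauss(0, 0.5)             # slowly wandering disturbance
        q = action + d                        # environment: q = action + d
        if controls_q:
            action += gain * (0.0 - q) * dt   # oppose deviation from reference 0
        q_hist.append(q)
        d_hist.append(d)
    n = len(q_hist)
    mq, md = sum(q_hist) / n, sum(d_hist) / n
    cov = sum((a - mq) * (b - md) for a, b in zip(q_hist, d_hist))
    vq = sum((a - mq) ** 2 for a in q_hist)
    vd = sum((b - md) ** 2 for b in d_hist)
    return cov / (vq * vd) ** 0.5

print(round(run_test(controls_q=False), 2))   # ~1.0: q simply follows d
print(round(run_test(controls_q=True), 2))    # much lower: q is defended

My difficulty is exactly that this sketch assumes you can measure q
directly and hold the reference still; when you cannot, the clean
correlation contrast above is not available.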

> They are not
> mutually exclusive. In think Runkel and McGrath had it right when
they
> argued for trianglization of research methods.

I can't believe that Phil would endorse the kind of
"triangularization" you suggest.

Well, certainly not the kind that was spelled wrong (my bad, thanks Rick
and Martin for correcting that -- another form of triangularization).

But again, I don't believe that it
really happens much anyway. As I said, I have never seen a study of
individuals carried out as a result of "triangularization" based on
the results of group level experiments. I think one of the reasons
conventional psychology has gotten nowhere in terms of understanding
individual behavioral processes (which we know to involve the control
of perceptual variables) is precisely because of the persistent use of
group designs to study individual behavior. Group experiments are
always based on an open loop model (the general linear model of
statistics) so it is _guaranteed_ that such research, even if treated
as a source of ideas for research on individuals (for which, as I
said, it is never really used), will be misleading regarding
individual psychological processes.

I am interested in developing studies to test whether individuals really
control perceptions of uncertainty, justice (or injustice), self-worth,
self-consistency, respect, affectance/competence, etc. This desire is
inspired by group level studies, but I am stymied by the difficulty of
measuring the perceptions/input quantities. I (and mostly others) have
figured out ways to influence the likely cues and to see whether the
individual responds to the changes in a way that would suggest they are
trying to dampen the disturbance. Sometimes I can do this in a TCV-like
design, though I often cannot make those changes within an individual
without raising questions of demand characteristics and other
alternative-explanation problems. So I do it between subjects and use many
of them, because the influence and the measure of the responses are noisy.
I don't like it any better than you (okay, a little better than you), but
the alternatives (not trying, or hopelessly uninterpretable results) do not
seem good either.

Jeff V.