Re: Traditional Statistics (was PCT Specific Methodology)

[Martin Taylor 2006.12.16.15.53]

[From Bill Powers (2006.12.16.0555 MST)]

In the traditional approach, the null hypothesis is simply that there is NO RELATIONSHIP between the manipulated environmental variable (IV) and the action of the organism (DV). So rejecting the null hypothesis does not support any particular relationship -- it simply says there is some kind of relationship, without saying what kind it is.

I don't intend here to quarrel with the IV-DV-CV set of issues under discussion, but some of the comments on "traditional statistics" are, I think, a bit unfair.

I hold no brief for "traditional" statistics, meaning statistics based on "significant" deviations from "null hypotheses", but I do think some of your criticism is unwarranted. The above is an example. The "null hypothesis" can be anything at all, and very typically asserts that "if there is a relationship between X and Y, it is non-negative". Another null hypothesis possibility is "The effect of X on Y is exactly Z (or 'at least Z')".

Note that the sign of the correlation is not considered in the traditional analysis.

The sign is as likely as not to be considered. Whether it is or not will depend on the experimenter's theoretical predilection. If the theory being tested says that the effect must be in one direction, and absence of effect or an effect in the other direction would be counter-evidence, then the analysis will use one-sided tests.

Because of changes of units between input and output, there is no "natural" interpretation of the signs: is a bar-pressing response in the same direction as the sound of food rattling in the dish? Is jerking the arm away opposed to the direction of the pain caused by a needle jab?

If the experimenter theorizes that the rat will press the bar less when it hears the food rattle, then that imposes a directionality on the interpretation, and on the statistical analysis. The "null hypothesis" will be that the rat does not increase the bar pressing when the food rattles, and the analysis will use one-sided significance tests.
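The practical difference between the one-sided and two-sided analyses is easy to show numerically. Here is a minimal sketch in Python; the z statistic of 1.8 and the 0.05 threshold are illustrative values of my own, not from any experiment discussed here:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def p_values(z):
    """Return (one_sided, two_sided) p-values for an observed z statistic,
    where the one-sided alternative is 'the effect is positive'."""
    one_sided = 1.0 - normal_cdf(z)
    two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))
    return one_sided, two_sided

# The same observation can be "significant" one-sided but not two-sided:
one, two = p_values(1.8)
print(round(one, 4), round(two, 4))  # 0.0359 0.0719
```

So a theory that fixes the direction of the effect in advance buys a more sensitive test, at the price of treating an effect in the "wrong" direction as mere noise.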

It's the same when you use a PCT-theoretic basis for analyzing the observations of an experiment. To quote Bill: "the null hypothesis is that there is NO CONTROL SYSTEM." Exactly. There's no difference in principle. The difference is only in how you clump observations -- which ones are deemed to support and which are deemed to contradict the null hypothesis.

The real argument against "traditional" statistics is one I have been pressing since my graduate school days almost 50 years ago: the use of "significance levels" to assert the existence or non-existence of an effect. For one thing, no matter how small an effect may be, a sufficiently large experiment can show it to be significant. Conversely, too small an experiment may dismiss as "non-significant" a real effect that is large enough to matter in the real world.
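Both failure modes can be demonstrated with a simple z-test on made-up numbers (the effect sizes and sample sizes below are my own illustrative assumptions, not data from any study mentioned here):

```python
import math

def two_sided_p(z):
    """Two-sided p-value for an observed z statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))

def z_for_mean(effect, sd, n):
    """z statistic for a sample mean of `effect` against a null mean of 0,
    with known standard deviation `sd` and sample size `n`."""
    return effect / (sd / math.sqrt(n))

# A trivially small effect (0.01 sd) becomes "significant" with enough data...
print(two_sided_p(z_for_mean(0.01, 1.0, 100_000)))  # ~0.0016: "significant"
# ...while a large effect (0.5 sd) in a tiny study is "non-significant".
print(two_sided_p(z_for_mean(0.5, 1.0, 10)))        # ~0.11: "non-significant"
```

The verdict "significant or not" thus tracks sample size as much as it tracks anything about the world.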

While I was in graduate school, one of my contemporaries re-analysed a group of experiments relating to children learning to read. All six of the experiments had shown "non-significant" effects, and the theoretical and practical literature had come to the conclusion that a plausible brain interaction did not exist, and therefore a reasonable intervention that might help some children learn better was not worth testing.

My colleague, instead, estimated the most likely size of the effect as measured in each of the six experiments, together with the statistical spread of likelihood. All six gave similar values for the magnitude of the effect, but in each individual experiment there was a reasonable spread of probability density across zero. When he properly combined the results, it became clear that the effect not only existed, but was of a magnitude that suggested it could be worthwhile to test the "reasonable intervention" in field trials. I have no idea whether it went any further, but it confirmed my belief that significance tests are potentially very dangerous, and should never be used. Estimates of effect magnitude, or Bayesian analyses of likelihood, are theoretically defensible; significance tests are not.
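One standard way of putting such results together is fixed-effect inverse-variance pooling. The sketch below uses six invented estimates chosen only to mimic the pattern described -- each experiment individually "non-significant" (|z| < 1.96), yet the pooled estimate unambiguous. The numbers are hypothetical, not the actual reading-study data:

```python
import math

def pool(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of independent effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Six hypothetical experiments, each estimating roughly the same effect
# (~0.30) with a standard error (0.18) too large for individual
# "significance": every per-study |z| is below 1.96.
estimates = [0.25, 0.33, 0.28, 0.35, 0.27, 0.31]
ses = [0.18] * 6
effect, se = pool(estimates, ses)
print(round(effect, 3), round(se, 3), round(effect / se, 2))  # 0.298 0.073 4.06
```

Six consistent "failures to reach significance" combine into a pooled z of about 4 -- exactly the situation in which significance-by-significance reasoning throws away a real effect.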

In that context, the quote "the null hypothesis is that there is NO CONTROL SYSTEM" has to be changed into something along the lines of "the measure is the quality of control".
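One way to make "the measure is the quality of control" concrete is to compare the variance of the input quantity with and without a control loop opposing a disturbance. The simulation below is my own minimal sketch, not a method from this discussion; the gain, slowing factor, and disturbance model are all assumptions made for illustration:

```python
import random

def run(disturbance, gain, slowing=0.1):
    """Simulate an input quantity qi = disturbance + system output.
    gain > 0: a leaky-integrator control loop opposes the disturbance;
    gain = 0: no control system (the output stays at zero)."""
    output, trace = 0.0, []
    for d in disturbance:
        qi = d + output
        output += slowing * (-gain * qi - output)
        trace.append(qi)
    return trace

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# A slowly varying disturbance (smoothed noise), slow enough for a
# modest-gain loop to oppose it.
random.seed(1)
dist, d = [], 0.0
for _ in range(5000):
    d = 0.99 * d + 0.1 * random.gauss(0, 1)
    dist.append(d)

uncontrolled = run(dist, gain=0.0)
controlled = run(dist, gain=10.0)
# Quality of control: the fraction of input-quantity variance removed by
# the presence of the loop (near 1.0 = good control, near 0 = none).
quality = 1.0 - variance(controlled) / variance(uncontrolled)
print(round(quality, 3))
```

On such a measure the question is no longer "is there a control system, yes or no?" but "how well is the variable being stabilized?" -- a magnitude estimate rather than a verdict.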

So, I'm not supporting "tests of the null hypothesis" in any way. I just think that they should not be criticised on the wrong grounds.

Martin