Statistics

[From Bill Powers (2009.01.21.2218 MST)]

It seems to me that statistics (Bayesian or otherwise) is not relevant to
the kinds of models PCTers have been working on, for the simple reason
that the models are expressed in terms of systems made of differential
equations, not propositional logic. That statement may merely reveal my
lack of mathematical sophistication, but if that’s the case, someone else
is going to have to demonstrate the relevance of statistics to PCT,
because I can’t do it.

The only reason I can see for using ideas like correlation and
probability in our modeling is to try to suggest, in a semi-meaningful
way, how accurately the control-system models represent behavior, when
we’re communicating with people who use statistics as their main means of
evaluating theories. It’s highly unlikely that real behavioral data
conform to the assumptions that underlie concepts like correlation or
probable error, such as having a distribution that is normal, Poisson, or
something else with known properties. Much simpler kinds of analysis are
sufficient to show when one control model predicts better than another in
the realms we explore; if we had only ourselves to satisfy, why would we
ever bother to calculate a correlation or a probability? You can see that
the correlations are going to be almost perfect just by looking at the
data plots. The probability of such fits by chance is too close to zero
to measure.

In the tracking experiment, for example, the “damping”
coefficient can be set to zero, effectively leaving it out of the model,
and we can then assess how the fit is improved by adjusting it for the
least RMS difference between model behavior and real behavior. The same
can be done for the delay. The differences in RMS error we see are in the
third decimal place for the damping, and the second for the delay, with
total RMS error being 1% to 5% of the range of the real data. The
correlations we would compute here would be from 0.99 on up, with no
justification at all for using correlation since the differences we’re
talking about are not even random. If a difference can be reduced by
changing a single parameter, we’re looking at systematic effects, not
random effects. Even by eye one can see that the residuals are not
distributed unsystematically.
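
A rough sketch of the fitting idea described here: simulate a simple delayed-integrator tracking model, compute the RMS difference from a recorded mouse trace, and compare the fit with the damping term fixed at zero versus nonzero. The model form, the parameter values, and the synthetic "recorded" run are assumptions for illustration only; this is not the TrackAnalyze code.

```
import numpy as np

def run_model(disturbance, dt, gain, delay_steps, damping):
    """Simulate mouse output for a compensatory tracking task.

    Assumed model: the cursor is mouse + disturbance, the reference is zero,
    perception is delayed by delay_steps samples, and output is a leaky
    integration of the error (the leak rate is the "damping" parameter).
    """
    n = len(disturbance)
    mouse = np.zeros(n)
    for i in range(1, n):
        j = max(0, i - delay_steps)                # transport delay in perception
        error = 0.0 - (mouse[j] + disturbance[j])  # reference minus perceived cursor
        mouse[i] = mouse[i - 1] + dt * (gain * error - damping * mouse[i - 1])
    return mouse

def rms_percent(model_mouse, real_mouse):
    """RMS difference between model and real traces, as a percent of the range."""
    rms = np.sqrt(np.mean((model_mouse - real_mouse) ** 2))
    return 100.0 * rms / (real_mouse.max() - real_mouse.min())

# Synthetic stand-in for a one-minute recorded run at 60 samples per second.
rng = np.random.default_rng(0)
dt, n = 1.0 / 60.0, 3600
disturbance = np.convolve(np.cumsum(rng.normal(0.0, 0.3, n)),
                          np.ones(60) / 60.0, mode="same")
real_mouse = run_model(disturbance, dt, gain=7.5, delay_steps=8, damping=0.05)
real_mouse = real_mouse + rng.normal(0.0, 0.2, n)   # observer-level "noise"

for damping in (0.0, 0.05):   # damping left out vs. damping included
    model_mouse = run_model(disturbance, dt, gain=7.5, delay_steps=8, damping=damping)
    print(f"damping={damping:4.2f}  fit error = "
          f"{rms_percent(model_mouse, real_mouse):.2f}% of range")
```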

Our problem, I think, is that normal psychological experiments produce
data that are barely visible as a bias in the predominantly random
fluctuations. In a control-system experiment, it’s just the opposite:
often, you have to look closely to see any irregularities. Here is the
fit of model mouse position to real mouse position over a one minute run
(done just now) with medium difficulty factor:

[Attached image: 20ea17b6.jpg]

The darker trace is the model prediction; the light green trace is the
real mouse position. I would guess that p is much less than 1E-10. The
RMS error is a little over one per cent of the total range. The black
trace shows the actual residual difference between the two mouse traces.
It is nothing like normally distributed.

Obviously, the uses for advanced statistical treatments of this kind of
data are minimal. As long as we continue to do the right kinds of
experiments, that will continue to be the case – and why do any other
kind? It’s not as though we have explained such a large proportion of
known phenomena that we have to start searching with a magnifying glass
for something to study. See my Essay on the Obvious.

Best,

Bill P.

[From Rick Marken (2009.01.22.1030)]

Bill Powers (2009.01.21.2218 MST)

It seems to me that statistics (Bayesian or otherwise) is not relevant to
the kinds of models PCTers have been working on

I completely agree, of course. I think it's just hard for people to
let go of past loves. Look at the lengths to which people will go to
make their beloved sow's ear look like a silk purse.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2009.01.22.12.59]

[From Bill Powers (2009.01.21.2218 MST)]

It seems to me that statistics (Bayesian or otherwise) is not relevant to
the kinds of models PCTers have been working on, for the simple reason
that the models are expressed in terms of systems made of differential
equations, not propositional logic.

That’s really quite irrelevant. The models could be based on the I
Ching, for all that it matters.

The only reason I can see for using ideas like correlation and
probability in our modeling is to try to suggest, in a semi-meaningful
way, how accurately the control-system models represent behavior, when
we’re communicating with people who use statistics as their main means of
evaluating theories.

That’s certainly one reason. I can think of others, much more relevant.
One, for example is “Which of these N control model structures more
probably represents what is going on in the person’s mind”. Another is
“Does the particular model fit the data so well because it does
represent what is going on in the subject’s mind, or because both the
model and the subject control well?”

It’s highly unlikely that real behavioral data
conform to the assumptions that underlie concepts like correlation or
probable error, such as having a distribution that is normal, Poisson, or
something else with known properties.

True, and I did not use any of them in the demo analysis I did for Rick
(either last night or this morning). One can use them as a fallback
position, if one has nothing better to work with, but if you can avoid
using them, do.

Much simpler kinds of analysis are
sufficient to show when one control model predicts better than another in
the realms we explore; if we had only ourselves to satisfy, why would we
ever bother to calculate a correlation or a probability? You can see that
the correlations are going to be almost perfect just by looking at the
data plots. The probability of such fits by chance is too close to zero
to measure.

“Probability of such fits by chance” is the language of significance
tests.

I should ask “correlations of what with what”? Anyone, whether they
understand PCT or not, would expect that when a person controls well,
their track will look like the inverse of the disturbance, and if a
model controls well, its track will look like the inverse of the
disturbance. Those correlations will be high, and necessarily the
correlation between model and person must also be high. It’s redundant
to show that it’s true in any specific case. Of course, if the model
track matches the person’s track when neither controls very well, that
does tell you that the model may well mimic what the person is doing.

One can do a bit better, perhaps, by comparing three correlations:
person-disturbance, person-model, and model-disturbance. If the
person-model correlation is appreciably better than the other two, then
you would have a reason to argue that the model structure fits the
person’s machinery “better than chance”. This comparison can be done
whether the person and model control well or poorly.
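
A minimal sketch of that three-way comparison, assuming sample-by-sample arrays for the disturbance, the person's trace, and the model's trace are already in hand (the array names are placeholders):

```
import numpy as np

def compare_correlations(disturbance, person, model):
    """The three pairwise correlations described above."""
    return {
        "person-disturbance": np.corrcoef(person, disturbance)[0, 1],
        "model-disturbance":  np.corrcoef(model, disturbance)[0, 1],
        "person-model":       np.corrcoef(person, model)[0, 1],
    }

# If "person-model" is appreciably higher than the other two, that is a reason
# to argue the model structure fits the person better than chance, whether or
# not control is good.
```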

If a difference can be reduced by
changing a single parameter, we’re looking at systematic effects, not
random effects. Even by eye one can see that the residuals are not
distributed unsystematically.

I’m afraid my eye does not see that in the track you displayed, other
than possibly that the error seems to increase when the second
derivative of the track has a large absolute value.

On the other hand, if you mean you can modify P(D|H) by varying the
model parameters, that’s the point of optimizing, isn’t it? I did a
discrete version of that with the lag in Rick’s demo [Martin Taylor
2009.01.21.17.45 and Martin Taylor 2009.01.22.10.03]. But what has it
to do with random variation or the systematic distribution of residuals
(which I used in judging that it might be useful to try varying the lag
in Rick’s demo)?

Here is the
fit of model mouse position to real mouse position over a one minute run
(done just now) with medium difficulty factor:

The darker trace is the model prediction; the light green trace is the
real mouse position. I would guess that p is much less than 1E-10. The
RMS error is a little over one per cent of the total range.

“Significance Test” language again! “p” of what? Probability that the
subject is not controlling? Not a very interesting finding. Probability
that the model is a precise mimic of what the subject’s mind and body
is doing? I don’t think so. So “p” of what?

It really depends on what question you want to ask. The Ward Edwards
paper that really put me solidly on the Bayesian trail so long ago made
the point that the only really reliable test was the “InterOcular
Traumatic” test. In other words, he made the point that you started
with: that a good result needs no statistics. If you want your trace as
an argument that the person is controlling, and that the model is
controlling, the trace satisfies the IOT.

The situation, if the point is to show that both model and person are
controlling, is rather similar to a situation I once found myself in
with a paper I submitted to a journal. I had six male and six female
subjects. All the males scored in the region of 5% on a task, all the
females scored in the region of 95%. I made the claim that there seemed
to be a sex difference. The editor wanted me to perform a significance
test, and I refused on the grounds not only that significance tests can
be used to prove ANY relation to be significant if you get enough data,
but more importantly, that the results satisfied the IOT. He controlled
the journal, and I didn’t publish.

Back to your trace. Do you also have the residuals between the model
and the (inverse) disturbance, and between the person and the
disturbance? When you plot the three together, how do they correlate?
Is there in fact any relation between their mutual relations and the
second derivative of the disturbance trace, as a by-eye check of your
trace suggests there might be, or is that apparent relation just a
visual illusion? If you make a 3D scatter plot of the sample-by-sample
values of the three residuals, does it look uniform (i.e. elliptical
with the main diameters parallel to the axes) or does it have a more
interesting shape?
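
One way to look at those residuals, as a sketch (array names are placeholders; for compensatory tracking, the residual from the inverse disturbance is the trace plus the disturbance):

```
import numpy as np
import matplotlib.pyplot as plt

def residual_scatter(disturbance, person, model):
    """3D scatter of the three sample-by-sample residual series."""
    r_person = person + disturbance      # person vs. inverse disturbance
    r_model = model + disturbance        # model vs. inverse disturbance
    r_fit = person - model               # person vs. model
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(r_person, r_model, r_fit, s=2)
    ax.set_xlabel("person + disturbance")
    ax.set_ylabel("model + disturbance")
    ax.set_zlabel("person - model")
    plt.show()
```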

The black
trace shows the actual residual difference between the two mouse traces.
It is nothing like normally distributed.

Obviously. But that’s really not an issue, as I explained in connection
with Rick’s example. In fact, the deviations from normality are often a
cue to further enquiry as to what is going on. When and why do they
occur? It’s an aspect of the advancement of science.

Obviously, the uses for advanced statistical treatments of this kind of
data are minimal.

Obviously, I disagree.

As long as we continue to do the right kinds of
experiments, that will continue to be the case – and why do any other
kind? It’s not as though we have explained such a large proportion of
known phenomena that we have to start searching with a magnifying glass
for something to study. See my Essay on the Obvious.

Well, I’ll just ask a question about the trace you presented. Are those
apparent “blips” in regions of high values for the second derivative of
the trace reliable? Is it a trick of the eye that they are easily seen
in those places, or do similar excursions happen equally probably in
places with any value of the second derivative? If it’s not a trick of
the eye, what might that tell you about possible improvements in the
model?

Some things that seem Obvious are not necessarily what they seem.

Martin

(Gavin Ritz 2009.01.22.9.43NZT)

[From Bill Powers (2009.01.21.2218 MST)]

It seems to me that statistics (Bayesian or otherwise) is not relevant to the
kinds of models PCTers have been working on, for the simple reason that the
models are expressed in terms of systems made of differential equations, not
propositional logic.

PCT has everything to do with propositional logic. PCT wouldn’t exist
otherwise. In fact, differential equations are in some way a means to
quantify, and a reflection of, propositional logic. Propositional logic
has been shown by Elliot Jaques to be the very basis of human thought.

Best

Gavin

[From Bill Powers (2009.01.22.1401 MST)]

Gavin Ritz 2009.01.22.9.43NZT –

In my suggested hierarchical levels, propositional logic operates at the
ninth level of organization, what I call the “Program level.”
We can think (and act) at 10 other levels as well.

Best,

Bill P.

[From Bill Powers (2009.01.22.1207 MST)]

Martin Taylor 2009.01.22.12.59 –

[From Bill Powers (2009.01.21.2218 MST)]

It seems to me that statistics (Bayesian or otherwise) is not relevant to
the kinds of models PCTers have been working on, for the simple reason
that the models are expressed in terms of systems made of differential
equations, not propositional logic.

That’s really quite irrelevant. The models could be based on the I Ching,
for all that it matters.

The problem is that of defining the universe of possible values that has
to be known to calculate a probability. Models that are based on
differential equations have no defined set of possible values: it’s an
infinite set, no matter what the range of values is. The probability that
the result of any finite calculation equals pi (or any other real number)
is zero. A throw of the I Ching yarrow stalks has 64 possible values;
that one’s easy – but interpreting those values is a lot less
objective.

The only reason I can see for
using ideas like correlation and probability in our modeling is to try to
suggest, in a semi-meaningful way, how accurately the control-system
models represent behavior, when we’re communicating with people who use
statistics as their main means of evaluating theories.

That’s certainly one reason. I can think of others, much more relevant.
One, for example is “Which of these N control model structures more
probably represents what is going on in the person’s mind”. Another
is “Does the particular model fit the data so well because it does
represent what is going on in the subject’s mind, or because both the
model and the subject control well?”

Yes, the latter is definitely a valid and answerable question. When
very-low-difficulty disturbances are used, the subject can control so
close to perfectly that the model’s parameters become useless
determinants of the fit. However, as the difficulty is increased and
tracking errors increase, we find that the model begins to match the
behavior better than the behavior matches the target movements: Here are
the results of three runs that I just did with TrackAnalyze:

Difficulty   delay     gain    damping   reference   model-real   target-real
             (60ths)                     (pixels)    (% of max)   (% of max)

     1          4      14.6     0.000       0.0        0.718%       0.827%
     3          8       7.8     0.026       0.0        1.147%       1.920%
     5          8       6.4     0.086       0.8        2.285%       4.983%

The critical numbers are in the last two columns. “Model-real”
means the difference (as a percent of the target’s range) between the
model’s mouse positions and those of the subject – the fit error.
“Target-real” means the difference similarly calculated between
the real person’s mouse positions and the target positions – the
tracking error. You can see that as difficulty increases, the fit error
becomes a smaller fraction of the tracking error. In other words, the
model’s errors begin to be more like the subject’s and less like those of
a perfect tracker (which would be 0% in the last column). These trends,
by the way, are the same for any order of presentation of the
difficulties.
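
For concreteness, a sketch of the two error measures in the table (the exact measure TrackAnalyze uses is not stated here; an RMS difference, as in the earlier discussion, is assumed):

```
import numpy as np

def percent_rms(a, b, target):
    """RMS(a - b) as a percent of the target's range."""
    rms = np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return 100.0 * rms / (np.max(target) - np.min(target))

# model_real  = percent_rms(model_mouse, subject_mouse, target)   # fit error
# target_real = percent_rms(subject_mouse, target, target)        # tracking error
```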

In fact, as the errors get larger, the effect of background noise such as
my own “essential tremor” contributes less to the total error.
If we could add a parameter to the model that recreated this more or less
systematic tremor, the fit would get even better in all cases.

The matching procedure does not try to give the model the best possible
tracking performance. The parameters are adjusted to make the model’s
movements as close as possible to the real person’s. This match gets
better as the disturbances get faster, causing larger tracking errors but
also making the model’s performance more sensitive to the parameter
values. Notice what happens to the optimal delay adjustment at the lowest
difficulty level. The rates of change of the variables are the lowest
here, the slopes of the plots the shallowest, and the best-fit delay the
most difficult to determine. Notice too how the damping factor increases
with the difficulty; this is a robust finding that is always seen, and
may well reflect an actual change (or nonlinearity) in the real system,
not just an improving match of performances.

I don’t really see how estimating probabilities and guessing at
subjective states about which we can know nothing directly, however it is
done, can add much to this approach. But maybe they can – if so, show
me.

Best,

Bill P.

[Martin Taylor 2009.01.22.17.40]

[From Bill Powers (2009.01.22.1207 MST)]

Martin Taylor 2009.01.22.12.59 --

[From Bill Powers (2009.01.21.2218 MST)]

It seems to me that statistics (Bayesian or otherwise) is not relevant to the kinds of models PCTers have been working on, for the simple reason that the models are expressed in terms of systems made of differential equations, not propositional logic.

That's really quite irrelevant. The models could be based on the I Ching, for all that it matters.

The problem is that of defining the universe of possible values that has to be known to calculate a probability. Models that are based on differential equations have no defined set of possible values: it's an infinite set, no matter what the range of values is.

I'm not sure why you think that's an issue. The question asked by the Bayesian analysis is always "what is the probability of these data given this hypothesis". If the possible data are from a continuum, the answer is in the form of a probability density. If the possible data are in a discrete space, the answer is in the form of a probability. There really isn't an issue that I can see.

If the model is defined by differential equations and the parameters are fixed, then the model's behaviour has no variation. You don't need a probabilistic analysis to know what it will do. Run it once, and you have your answer. However, the question we are interested in is not what the model will do, but how well it matches what a person does, and the person's actions are, from an observer's point of view, noisy. My biggest problem in analyzing Rick's demo was to figure out how this noisiness could be determined and represented.

I'm sorry that Rick, having asked me for a detailed description of the procedure and having been given that description, calls it "hand-waving". Maybe I should have just given him the result, so he could justify such a comment. That might have pleased him better.

The probability that the result of any finite calculation equals pi (or any other real number) is zero. A throw of the I Ching yarrow stalks has 64 possible values; that one's easy -- but interpreting those values is a lot less objective.

The only reason I can see for using ideas like correlation and probability in our modeling is to try to suggest, in a semi-meaningful way, how accurately the control-system models represent behavior, when we're communicating with people who use statistics as their main means of evaluating theories.

That's certainly one reason. I can think of others, much more relevant. One, for example is "Which of these N control model structures more probably represents what is going on in the person's mind". Another is "Does the particular model fit the data so well because it does represent what is going on in the subject's mind, or because both the model and the subject control well?"

Yes, the latter is definitely a valid and answerable question. When very-low-difficulty disturbances are used, the subject can control so close to perfectly that the model's parameters become useless determinants of the fit. However, as the difficulty is increased and tracking errors increase, we find that the model begins to match the behavior better than the behavior matches the target movements: Here are the results of three runs that I just did with TrackAnalyze:

Difficulty   delay     gain    damping   reference   model-real   target-real
             (60ths)                     (pixels)    (% of max)   (% of max)
     1          4      14.6     0.000       0.0        0.718%       0.827%
     3          8       7.8     0.026       0.0        1.147%       1.920%
     5          8       6.4     0.086       0.8        2.285%       4.983%

The critical numbers are in the last two columns. "Model-real" means the difference (as a percent of the target's range) between the model's mouse positions and those of the subject -- the fit error. "Target-real" means the difference similarly calculated between the real person's mouse positions and the target positions -- the tracking error.

We need one more column: model-target % of max.

It's quite possible that these numbers, or numbers like them, could be used to estimate the subject's variability. They wouldn't be much use in determining the distribution of the subject's variations or their relationship to phases of the track, but they would be a good start in doing what I tried for a while to do in fitting the sleep study data.

You can see that as difficulty increases, the fit error becomes a smaller fraction of the tracking error. In other words, the model's errors begin to be more like the subject's and less like those of a perfect tracker (which would be 0% in the last column). These trends, by the way, are the same for any order of presentation of the difficulties.

In fact, as the errors get larger, the effect of background noise such as my own "essential tremor" contributes less to the total error. If we could add a parameter to the model that recreated this more or less systematic tremor, the fit would get even better in all cases.

Yes, that's what I was suggesting.

... Notice too how the damping factor increases with the difficulty; this is a robust finding that is always seen, and may well reflect an actual change (or nonlinearity) in the real system, not just an improving match of performances.

That's an interesting finding.

I don't really see how estimating probabilities and guessing at subjective states about which we can know nothing directly, however it is done, can add much to this approach. But maybe they can -- if so, show me.

I'm trying!

Martin

(Gavin Ritz 2009.01.22.12.00NZT)

[From Bill Powers (2009.01.22.1401 MST)]

Gavin Ritz 2009.01.22.9.43NZT –

PCT has everything to do with propositional logic. PCT wouldn’t exist
otherwise. In fact, differential equations are in some way a means to
quantify, and a reflection of, propositional logic. Propositional logic
has been shown by Elliot Jaques to be the very basis of human thought.

In my suggested hierarchical levels, propositional logic operates at the ninth
level of organization, what I call the “Program level.” We can think
(and act) at 10 other levels as well.

I’m actually talking about the type of propositional logic at, say, your
11th level of organization, “Systems Concept.” I’m not talking about your
level 9, which is “a structure of tests and choice points connecting
sequences.”

Jaques has shown, with high correlations (0.8 and higher), that
propositional logic (4 types that repeat again and again) is the structure
(vessel) whilst all knowledge is the content. So one uses the knowledge one
has within certain personal levels of abstraction and propositional logic.

Neither does your level 9 show how this
propositional logic actually works.

Best

Gavin R

[From Bill Powers (2009.01.22.1724 MST)]

Martin Taylor 2009.01.22.17.40 --

I'm not sure why you think that's an issue. The question asked by the Bayesian analysis is always "what is the probability of these data given this hypothesis". If the possible data are from a continuum, the answer is in the form of a probability density. If the possible data are in a discrete space, the answer is in the form of a probability. There really isn't an issue that I can see.

OK, you know this stuff better than I do. Is a probability density convertible to a probability? I don't really understand what the answer to the question "what is the probability of these data given this hypothesis" might mean. A probability, as I understand it, is a number between 0 and 1. How can a set of recorded values have "a probability of 0.9?" A probability of what? Existing? Being within some specified distance of an ideal array of numbers? I don't think you're really saying what you mean.

... the question we are interested in is not what the model will do, but how well it matches what a person does, and the person's actions are, from an observer's point of view, noisy. My biggest problem in analyzing Rick's demo was to figure out how this noisiness could be determined and represented.

I'm asking whether "How well does this model match what a person does?" can be answered with "the probability of the data is 0.9." The answer doesn't seem to have anything to do with the question. Can you expand this into what you really mean?

Best,

Bill P.

[Martin Taylor 2009.01.23.21.26]

[From Bill Powers (2009.01.22.1724 MST)]

Martin Taylor 2009.01.22.17.40 --

Is a probability density convertible to a probability?

One way to understand probability density versus probability is to think of the real number line compared to the integers.

Suppose we think of the range 0 < x <= 10. In this range there are ten integers, and aleph-one real numbers. Now, considering only the set of integers, let's say (the conditional) that one of these ten integers is chosen, and each is chosen with equal probability. So, for example, the probability that 3 is chosen is 0.1. That's what's supposed to happen in choosing the winning numbers for a lottery. Among all the ten integers, the probability that one of them is chosen totals 1.0. That is the sum of the individual probabilities, which are mutually exclusive and include all possibilities.

If we are choosing not among the integers, but among the real numbers, the probability of choosing 3 is infinitesimal, as is the probability of choosing pi, or e, or 5.1322376, but the probability that some number along the line is chosen is still 1.0. Among all the real numbers between zero and ten, the probability that one of them is chosen totals 1.0. That is the sum of the individual probabilities, which are mutually exclusive and include all possibilities. But we can't really talk about the "sum" of an infinite number of infinitesimals. We talk about the integral instead.

The numbers along the real line in this example don't really have a probability of being chosen. They have a probability density. Probability density integrated over a range of possibilities yields probability. So, if the probability density is uniform over the real numbers 0 < x <=10, the probability that a number between 3 and 3.1 is chosen is 0.01.

Probability density shows up a lot, but it is used most often after it has been integrated over some range. Continuing the real-line example, the probability that the chosen number is less than 4.3 is 0.43. It is often useful to ask how probable a deviation of more than x from the most likely datum is, given the hypothesis and conditionals (confidence intervals are of this form, and Bayesian methods can be used to produce similar kinds of intervals).
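
As a quick numerical check of the uniform example (using scipy's uniform distribution on the interval from 0 to 10):

```
from scipy.stats import uniform

u = uniform(loc=0, scale=10)     # uniform distribution on [0, 10]
print(u.pdf(3.0))                # 0.1  -- a density, not a probability
print(u.cdf(3.1) - u.cdf(3.0))   # 0.01 -- P(3 < x <= 3.1): density integrated over the interval
print(u.cdf(4.3))                # 0.43 -- P(x < 4.3)
```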

I don't really understand what the answer to the question "what is the probability of these data given this hypothesis" might mean. A probability, as I understand it, is a number between 0 and 1. How can a set of recorded values have "a probability of 0.9?" A probability of what?

Of being the exact set of numbers recorded (the data) among all the sets that might have been recorded given this hypothesis and the relevant conditionals. In the case of a control model, such as you and Rick provided, the hypothesis is that "the model represents the person's tracking behaviour and the person has some particular pattern of variability". The conditional includes that the target is whatever sequence of values it had. With that target and that model, the model provides a repeatable track, but the person doesn't, so the sequence of differences between model track and person track varies from run to run using the same target sequence. That variation is what allows the computation of the probability of getting that exact data sequence of values.

Rick thought that "D" in P(D|H) was a random variable. It isn't. It is a precise datum, such as the sequence of model-minus-person sample values obtained in an actual run, or a made up sequence that you have some reason to consider. The randomness comes in the process of determining what might or might not happen given the hypothesis and the conditionals, and in control modelling, the randomness is in the person. Remember again "random" simply means "variability of which the precise cause is unknown to you".

... the question we are interested in is not what the model will do, but how well it matches what a person does, and the person's actions are, from an observer's point of view, noisy. My biggest problem in analyzing Rick's demo was to figure out how this noisiness could be determined and represented.

I'm asking whether "How well does this model match what a person does?" can be answered with "the probability of the data is 0.9." The answer doesn't seem to have anything to do with the question. Can you expand this into what you really mean?

If you ever did have data D for which P(D|H) was 0.9, getting any other data at all would be very surprising! It wouldn't say much about the datum, because you expected it before you ever ran the study.

As I think I mentioned to Rick, P(D|H) is a measure of the data fit to the hypothesis, but its absolute value doesn't in itself tell you very much. In most cases its absolute value is extremely small, and if D varies along a continuum, it is a probability density rather than a probability. To make it mean something, you can ask one of two questions: (1) what is the total integrated probability of other data that would be more probable than D given this hypothesis?, or (2) what related hypotheses (e.g. by parameter variation) would make this particular D more probable? I used the latter for Rick's demo data. To use the former requires a method of describing the possible data sets and their probability given the hypothesis and conditionals. I've used a bit of this in a message in the thread "Statistics, tracking data", which I hope to post before I go to bed tonight.
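
A sketch of question (2), comparing P(D|H) across values of one parameter (here a lag). The Gaussian model of the person's variability is only a placeholder so that P(D|H) is computable; as noted above, Martin's own analysis avoided normality assumptions. The function run_model_at_lag is a stand-in for whatever re-runs the model with a given lag.

```
import numpy as np

def log_p_data_given_lag(person, run_model_at_lag, lag, sigma):
    """log P(D|H): D is the person-minus-model residual sequence; H fixes the
    lag and assumes the person's sample-to-sample variability has spread sigma."""
    residuals = np.asarray(person) - np.asarray(run_model_at_lag(lag))
    n = residuals.size
    return (-0.5 * np.sum(residuals ** 2) / sigma ** 2
            - n * np.log(sigma * np.sqrt(2.0 * np.pi)))

# best_lag = max(candidate_lags,
#                key=lambda lag: log_p_data_given_lag(person, run_model_at_lag, lag, sigma))
```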

Martin

[FROM: Dennis Delprato (930106)]

Those who work with psychological and social data and
take a strict anti-statistical stance might find of
interest an article entitled "How Hard Is Hard Science,
How Soft Is Soft Science?" authored by L. V. Hedges
(American Psychologist, 1987, v. 42, 443-455).

Hedges notes striking parallels between quantitative
methods used to synthesize research in the physical and
in the social sciences. He found that the most common
method used in the physical sciences makes use of weighted
least squares, and notes that this procedure is now
used in the social sciences (and I would add bio-medical
sciences) as well. Hedges takes as examples the Particle
Data Group reviews that examine "stable-particles" and
focuses on mass and lifetime estimates. He argues that
the Birge ratio (the accepted index for determining "how
well the data from [a] set of studies agree (except for
sampling error)") is comparable to that obtained with
socio-behavioral research.

Hedges recognizes that he did not examine research
concerned with testing point predictions.

My point here is that anti-statistical positions such as those
taken by operant psychologists and PCT experts might
appear naive from the perspective of science as a whole
unless they are qualified.

Dennis Delprato
Dept. of Psychology
Eastern Mich. Univ.
Ypsilanti, MI 48197

[From Rick Marken (930106.1800)]

Dennis Delprato (930106) --

Those who work with psychological and social data and
take a strict anti-statistical stance might find of
interest an article entitled "How Hard Is Hard Science,
How Soft Is Soft Science?" authored by L. V. Hedges
(American Psychologist, 1987, v. 42, 443-455).

What's the point of the article? That physics is
based on statistical analysis, just like psychology?
I never saw a t test or an ANOVA in the Principia.

My point here is that anti-statistical positions such as those
taken by operant psychologists and PCT experts might
appear naive from the perspective of science as a whole
unless they are qualified.

But we do qualify it -- nobody in PCT is against statistics;
we are against using statistical studies of group behavior as
a basis for understanding individual behavior. Even when
statistics are applied to individual behavior (they can be), we
argue that statistics based on a causal model of behavior
(like the general linear model) are still inappropriate --
not, in this case, because they are statistics but because
they are based on the wrong model of behavior.

For the record -- I am NOT ANTI-STATISTICS; some of my best
friends are statistics. I am anti the inappropriate use of
statistics -- as when group data is applied to individuals
or when a linear regression model is applied to closed loop
control.

Best Regards

Rick (friend of statistics) Marken