Catching Up

[Dan Miller (980210)]

Bill Powers:

Once again my hair-trigger delete was clicked instead of reply.
So, I will reply to two issues you raised in your post.

You attacked the example I used of an interesting finding - one that
might sensitize a PCT researcher to a real world process. This might
lead to PCT research on the issue. I argued that such findings were
useful, and that they were needed - insofar as there are people out
there in the world doing some observing and measuring. I never
suggested that it was particularly good research.

However, you then opened your file drawer and did the old piece on
why correlations of only .62 can't tell us anything. The problem
with this standard rap is that you are incorrect in your analysis. I
haven't stated anything before because I didn't think it really
mattered. I'm beginning to think that it does. For those who have
forgotten (lucky them), the finding I reported was this:

     Amount of reading (IV) is statistically related to Political
     Progressivism (DV) with a correlation of r = 0.62.

You say that nothing can be said about this because for nearly half
of the subjects the relationship does not hold. Actually, this is
not how population statistics work. A correlation of .62 tells us
that the relationship is quite strong for the sample studied. The
correlation is a proportionate reduction in error statistic - using
errors (or distance) from the optimal line drawn through the graph of
all points if we were to plot them. A correlation of this magnitude
would look like a pretty tight array of points. There would not be
half (or nearly so) of the individual points that would stray so far
from the plotted line that we would say that the opposite holds true.
That is, some with only moderate amounts of reading would be very
progressive, and some with large amounts of reading would be
unprogressive.

But to say even this is an ecological fallacy. We aren't talking
about an average person, or any particular person, but rather about
the sample of individuals (and, perhaps, the population of
individuals). These are population statistics, and not individual
statistics. Using such population statistics to describe any
specific individual would be incorrect usage. This is why I wonder
why you use correlations to measure control as done by a single
individual. What is the population? What is the sample? No half
bright sociologist would reduce population statistics to discuss an
"average" individual. To use statistics to do so is to use them
incorrectly.

In response to my suggestion that the control model is not the thing
itself, you made the following analogy. You said that to call the
heart a pump is not a metaphor, because the heart is a pump. Yes,
but what a pump. My point was that if all you do is treat the
heart like any other pump, then you are missing exactly how it
works. Also, I would not want you to be my cardiologist. To use a
simplistic model of a pump to understand how the heart works is, in
my humble opinion, a very limited and unwise plan. I would suggest
direct observation, description, and the development of more complex
schematics and process models.

As to the claim that humans are control systems, and that we can use
the control-system model to study precisely how human behavior
works: that, in my opinion, is a very limiting and unwise plan. You
really think that all we are is the control system you describe?
You really believe this? Well, I think we are different from your
charming, but quickly aging, schematic. We are, like the heart,
quite a bit more complex than that.
I think that the electrical wiring schematic is misleading and
disingenuous, and most certainly a metaphor. Again, I would suggest
lots of observation, descriptions, more complex (and more accurate)
schematics and well, you get my idea.

These suggestions are completely gratuitous. Why should you do
anything other than what you do? You have established a significant
definition of behavior and a distinct approach to research. The
people involved in this line of research continually demonstrate the
superiority of controlled perceptions as the scientifically valid
description of behavior. Why do anything more? It's all done -
nothing left but the details.

So long,
Dan

Dan Miller
miller@riker.stjoe.udayton.edu

[From Richard Kennaway (980219.1717 GMT)]

Dan Miller (980210), responding to what appears to be an off-list message
from Bill Powers (at least, I didn't see it yet on CSGNET):

However, you then opened your file drawer and did the old piece on
why correlations of only .62 can't tell us anything.

If you would be more convinced by a new piece from someone else's file
drawer, I can email you a paper of mine I completed a couple of months ago
on the use and practical meaning of correlations, in PostScript format, or
snailmail you a paper copy.

    Amount of reading (IV) is statistically related to Political
    Progressivism (DV) with a correlation of r = 0.62.

You say that nothing can be said about this because for nearly half
of the subjects the relationship does not hold. Actually, this is
not how population statistics work. A correlation of .62 tells us
that the relationship is quite strong for the sample studied.

I'm sure Bill Powers will make (or has already made) the following points
much more clearly than I can, but I can't resist.

Firstly, what do you mean by "quite strong"? For what purpose is it
"strong"? What can you do with it?

Secondly, what do you mean by "the relationship"? The observation of the
population correlation does not imply anything at all about, say, what will
happen to an individual's political views if they read more. It doesn't
even make a probabilistic statement about that. So what is this
"relationship", other than a word to suggest that the experimental result
is an observation of a real entity?

In short, what can you do with such a result? Well, this is what you do:

Dan Miller (980204.1645)

What is it about the act of reading (and reading a lot) that creates
a context within which progressive political ideas can generate and
thrive?

What you do is presuppose a mechanism to connect the two variables: the act
of reading creates a context where certain ideas can thrive. You don't
even put it forward as a hypothesis or speculation, but slide it in as a
presupposition. Why?

You later avow that of course you don't believe the IV causes the DV, but
all you put in place of that looks to me like a vague fog of words which
comes down to the same thing. For example, "Maybe it is a heightened
awareness of, say, reference signals". I can't find any grammatical
antecedent for that "it". It seems to mean, "the mechanism whereby reading
causes progressivism", but you claim not to believe there is such a thing.

Are you claiming that this assumed something about the act of reading is
present in individuals? If you are, you are applying population statistics
to individuals. If you are not, where is it? And whatever it is, how do
you account for the 29% of people in whom this mechanism appears to be
inoperative?

So, how do we make sense of this intriguing finding?

[various speculations]

My question is, why do we tend to deflate people who may be
interested in doing this?

Because it isn't science, it's coffee-table chat.

Back to Dan Miller (980210):

The
correlation is a proportionate reduction in error statistic - using
errors (or distance) from the optimal line drawn through the graph of
all points if we were to plot them. A correlation of this magnitude
would look like a pretty tight array of points.

"Pretty tight" is another vague phrase. What does it mean? Have you
looked at scatterplots of bivariate normal distributions of given
correlation? I have, and I wouldn't call the cloud you get with c=0.62
"pretty tight". You can see scatterplots for various values of the
correlation from 0 to 0.99995 in the paper I mentioned.

The paper also computes some formulas concerning the probability of making
a successful prediction about an individual's value of one variable given
the other. These calculations assume a bivariate normal distribution, but
I doubt the results would be substantially different in other situations.

(1) The proportion of individuals which have "Amount of reading" above
average and "Political Progressivism" below average (i.e. the opposite of
the overall direction of the correlation) is arccos(0.62)/pi = 29%.

(2) The mutual information between the two variables is (1/2) log (1/(1-c*c))
(using binary logs). For c=0.62, this is 0.35, or about one-third of a
bit. For an individual, there is not enough information in the value of
one variable to even make a single yes-no prediction about the value of the
other.

(3) Knowing the value of one variable reduces the standard deviation of the
other by a factor of 1.27.

(4) The proportion of variance in one variable which is "explained" by the
other (in the technical sense of "explain" used in analysis of variance, a
sense which bears no relation to its everyday sense), is 38%.

(5) No prediction whatsoever, not even a probabilistic one, can be made
about how an individual's views, or a whole population's views, will change
if they are induced to read more. (You don't describe the experimental
designs, but I'm assuming that the experiments you cite simply survey a
large number of people, measuring their "amount of reading" and "political
progressivism".)
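Under the bivariate-normal assumption, all four figures above follow from
c = 0.62 alone. A quick sketch (mine, not from the paper) checking the
arithmetic in Python:

```python
import math

c = 0.62  # the reported correlation

# (1) proportion of "discordant" individuals: above average on one
#     variable, below average on the other
discordant = math.acos(c) / math.pi

# (2) mutual information in bits: (1/2) * log2(1 / (1 - c^2))
mutual_info_bits = 0.5 * math.log2(1.0 / (1.0 - c * c))

# (3) factor by which knowing one variable shrinks the other's
#     standard deviation: 1 / sqrt(1 - c^2)
sd_reduction = 1.0 / math.sqrt(1.0 - c * c)

# (4) proportion of variance "explained": c^2
variance_explained = c * c

print(f"discordant:         {discordant:.2%}")            # ~29%
print(f"mutual information: {mutual_info_bits:.2f} bits") # ~0.35
print(f"sd reduction:       {sd_reduction:.2f}x")         # ~1.27
print(f"variance explained: {variance_explained:.0%}")    # ~38%
```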

There would not be
half (or nearly so) of the individual points that would stray so far
from the plotted line that we would say that the opposite holds true.

Considering that in the case of zero correlation, there would be exactly
half, this is not a strong statement. As indicated above, the proportion
is 29%. And you don't know which 29%. For a bivariate normal
distribution, only 3.7% of the population are so far from the mean of one
variable that they are 95% likely to be on the same side of the mean for
the other variable. (This figure is sensitive to the shape of the extreme
tails of the distribution, and it would require an extremely large sample
to have any reason to believe that the distribution is normal out to that
range.)
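Both figures in that paragraph - exactly half at zero correlation, and
roughly 29% at c=0.62 - are easy to check by simulation. A rough sketch of
my own, using the standard construction y = c*x + sqrt(1-c^2)*z (x, z
independent standard normals) to generate pairs with correlation c:

```python
import math
import random

def discordant_fraction(c, n=200_000, seed=42):
    """Fraction of (x, y) pairs with correlation c that fall on
    opposite sides of their (zero) means."""
    rng = random.Random(seed)
    k = math.sqrt(1.0 - c * c)
    count = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = c * x + k * rng.gauss(0.0, 1.0)  # corr(x, y) = c
        if (x > 0) != (y > 0):
            count += 1
    return count / n

print(discordant_fraction(0.0))   # ~0.50: exactly half at zero correlation
print(discordant_fraction(0.62))  # ~0.29: matches arccos(0.62)/pi
```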

That is, some with only moderate amounts of reading would be very
progressive, and some with large amounts of reading would be
unprogressive.

But to say even this is an ecological fallacy. We aren't talking
about an average person, or any particular person, but rather about
the sample of individuals (and, perhaps, the population of
individuals). These are population statistics, and not individual
statistics. Using such population statistics to describe any
specific individual would be incorrect usage.

We agree on that, at least. But then, what would be a correct use of the
population statistic you cite?

This is why I wonder
why you use correlations to measure control as done by a single
individual. What is the population? What is the sample?

Have you run, and understood, Rick Marken's demos? Then you will know what
the variables are whose correlations are computed, and why this is a useful
thing to do.

No half
bright sociologist would reduce population statistics to discuss an
"average" individual. To use statistics to do so is to use them
incorrectly.

There must be a lot of less than half bright sociologists around, or
non-sociologists of various brightnesses. Population statistics are used
to make statements about individuals all over the place. Look at
psychometric testing -- a field whose whole purpose is to use population
statistics to make statements about individuals.

You have established a significant
definition of behavior and a distinct approach to research. The
people involved in this line of research continually demonstrate the
superiority of controlled perceptions as the scientifically valid
description of behavior. Why do anything more? It's all done -
nothing left but the details.

Rick, is "PCT is a useful strand of experimental psychology" on your list
of standard reasons for rejecting PCT?

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[Dan Miller (980210)]

From Richard Kennaway (980219.1717 GMT):

My goodness, but I suppose I should have expected it. Flames beget
flames (sorry for the probable/causal allusion). I can only answer a
few of your queries and address some of your concerns, but the
readers will get enough of it, I'm sure.

If you would be more convinced by a new piece from someone else's file
drawer, I can email you a paper of mine I completed a couple of months ago
on the use and practical meaning of correlations, in PostScript format, or
snailmail you a paper copy.

When you first came onto CSGNet a few months ago, I read your papers,
including the one you note. If you want to send it to me, then go
ahead.

In my reply to Bill Powers I wrote:

> Amount of reading (IV) is statistically related to Political
> Progressivism (DV) with a correlation of r = 0.62.
>
>You say that nothing can be said about this because for nearly half
>of the subjects the relationship does not hold. Actually, this is
>not how population statistics work. A correlation of .62 tells us
>that the relationship is quite strong for the sample studied.

Richard Kennaway speaking for Bill writes:

Firstly, what do you mean by "quite strong"? For what purpose is it
"strong"? What can you do with it?

In the social sciences this is a finding that would raise eyebrows.
So, my measure of strong is level of eye-raising. The stronger the
statistical relationship the higher the eye-raising. BTW, I would
only make such a statement if I had the measures of at least fifty
sociologists. Only then would I make the kind of statement I do.
The fifty has a meaning, right, Richard?

Richard identifies himself as a psychologist by writing the
following:

Secondly, what do you mean by "the relationship"? The observation of the
population correlation does not imply anything at all about, say, what will
happen to an individual's political views if they read more. It doesn't
even make a probabilistic statement about that. So what is this
"relationship", other than a word to suggest that the experimental result
is an observation of a real entity?

It is a statistical relationship - one derived by using a statistic
on variables in a sample. I am talking about the sample
specifically, and perhaps the population generally. I am not now
using, nor would I ever use, the relationship to discuss an individual.

Richard Kennaway then asserts and asks:

What you do is presuppose a mechanism to connect the two variables: the act
of reading creates a context where certain ideas can thrive. You don't
even put it forward as a hypothesis or speculation, but slide it in as a
presupposition. Why?

PCTers have already established the control mechanism (are these the
right words?). This is really all they have established, which I
think is great. It is an outstanding demonstration of how behavior
works. So, I presuppose control. Do we have to establish it in each
case?

Richard, read my posts carefully. This will help you in school. You
ask what I claim.

Are you claiming that this assumed something about the act of reading is
present in individuals? If you are, you are applying population statistics
to individuals. If you are not, where is it? And whatever it is, how do
you account for the 29% of people in whom this mechanism appears to be
inoperative?

I used the finding as an example of possibly interesting IV-DV
research - research that might sensitize people, raise an eyebrow,
get them off their computer. I could have used any of several other
findings that I find interesting. I never for a moment wondered if
you would find it interesting. BTW, how can you claim that the
mechanism is not operative for 29% of the people if you have not seen
the data?

> Because it isn't science, it's coffee-table chat.

Richard, we are lucky to have you to tell us the difference.
Einstein first thought of the special theory of relativity while
in a street car, riding away from a town clock. I don't mean to
equate myself with Einstein, but would you say that such a silly idea
expressed to a friend would be coffee table chat or science? It was,
at the time, only idle speculation.

Have you
looked at scatterplots of bivariate normal distributions of given
correlation? I have, and I wouldn't call the cloud you get with c=0.62
"pretty tight". You can see scatterplots for various values of the
correlation from 0 to 0.99995 in the paper I mentioned.

Yes, the r=.62 is a cloud, but one with a definite shape, wouldn't
you say?

The paper also computes some formulas concerning the probability of making
a successful prediction about an individual's value of one variable given
the other. These calculations assume a bivariate normal distribution, but
I doubt the results would be substantially different in other situations.

Richard, I'm sorry, but why would you want to do the ecological
fallacy? That is, why would you think you could predict "an
individual's value of [sic] one variable given the other"? These are
population statistics. We can only say something about the
population (or sample) we have observed or measured.

> (1) The proportion of individuals which have "Amount of reading" above
> average and "Political Progressivism" below average (i.e. the opposite of
> the overall direction of the correlation) is arccos(0.62)/pi = 29%.

Accepting your formula, this is well short of one-half.

(2) The mutual information between the two variables is (1/2) log (1/(1-c*c))
(using binary logs). For c=0.62, this is 0.35, or about one-third of a
bit. For an individual, there is not enough information in the value of
one variable to even make a single yes-no prediction about the value of the
other.

There you go trying to talk about individuals, again. This is
reductionism at its most flagrant, and an ecological fallacy. Surely
you have the scientific proof for establishing an ecological fallacy?
If so, then use it.

(3) Knowing the value of one variable reduces the standard deviation of the
other by a factor of 1.27.

(4) The proportion of variance in one variable which is "explained" by the
other (in the technical sense of "explain" used in analysis of variance, a
sense which bears no relation to its everyday sense), is 38%.

Analysis of variance is an inferential statistic. We've been talking
about population/descriptive statistics, haven't we? Even
in statistical packages for computers, analysis of variance is not
computed like I did it by hand in graduate school. Is this what you
are getting at? I don't follow.

(5) No prediction whatsoever, not even a probabilistic one, can be made
about how an individual's views, or a whole population's views, will change
if they are induced to read more. (You don't describe the experimental
designs, but I'm assuming that the experiments you cite simply survey a
large number of people, measuring their "amount of reading" and "political
progressivism".)

Who was talking about prediction? The only thing I would do,
supposing the finding had any merit, would be to teach kids how to
read (and, if possible, how to love reading).

Again, defending Bill Powers you write:

Considering that in the case of zero correlation, there would be exactly
half, this is not a strong statement. As indicated above, the proportion
is 29%. And you don't know which 29%. For a bivariate normal
distribution, only 3.7% of the population are so far from the mean of one
variable that they are 95% likely to be on the same side of the mean for
the other variable. (This figure is sensitive to the shape of the extreme
tails of the distribution, and it would require an extremely large sample
to have any reason to believe that the distribution is normal out to that
range.)

And your point is what? That the statistical association doesn't
really exist? Have you ever heard of Kaplan's Law of the Instrument?

> We aren't talking
>about an average person, or any particular person, but rather about
>the sample of individuals (and, perhaps, the population of
>individuals). These are population statistics, and not individual
>statistics. Using such population statistics to describe any
>specific individual would be incorrect usage.

We agree on that, at least. But then, what would be a correct use of the
population statistic you cite?

We then can agree that we can only make descriptive statements
about the sample, or perhaps the larger population from which the
sample was derived.

Then, speaking of Perceptual Control Theorists' use of population
statistics to tell us that they are scientists, I wrote:

>This is why I wonder
>why you use correlations to measure control as done by a single
>individual. What is the population? What is the sample?

Richard responds:

Have you run, and understood, Rick Marken's demos? Then you will know what
the variables are whose correlations are computed, and why this is a useful
thing to do.

Sorry, but correlations are point-in-time measures. Were
point-biserial correlations used? Are these measures of errors
over a period of time? I don't know exactly which demonstrations you
talk about, but I consider them just that - demonstrations. I've
always been impressed with them as demonstrations. I am convinced
about control. You do understand this. My dispute is with the cant
and rhetoric of this forum.

Are we so needy that we have to have statistics to secure our faith
that we are scientists, or to convince others of that status? Darwin
didn't have them. Is he a scientist? Oh, I forgot - Darwin was
proven wrong by Bill.

>No half
>bright sociologist would reduce population statistics to discuss an
>"average" individual. To use statistics to do so is to use them
>incorrectly.

Richard's snappy reply:

There must be a lot of less than half bright sociologists around, or
non-sociologists of various brightnesses. Population statistics are used
to make statements about individuals all over the place. Look at
psychometric testing -- a field whose whole purpose is to use population
statistics to make statements about individuals.

I've been in sociology for a long time - going to meetings, reading
the journals, talking to folks - and one thing I can say is this.
Maybe only a handful of all the sociologists I've had contact with in
these ways have stooped to using psychometrics. The only folks I know
who use psychometric testing in association with population
statistics are, you guessed it, psychologists.

Right about now, my late friend and teacher Carl Couch would tell me
that I've given these people enough of my time. He would remind me
of Thomas Pynchon's great quote in Gravity's Rainbow. He calls it
Proverbs for Paranoids and it goes like this: "You don't have to
worry about the answers if you get them asking the wrong questions."
Psychologists who use psychometrics (measuring psychos?) with
population statistics to talk about an average, above average,
exceptional individual are asking the wrong questions.

You conclude:

Rick, is "PCT is a useful strand of experimental psychology" on your list
of standard reasons for rejecting PCT?

My god, Richard, I'm sorry. Couch was right. I am giving this too
much time. You were talking to Rick Marken and not to me. The
points you were making were with him and not with me. Silly me. I
rechecked and you did address me by name, go ad hominem, the whole
flame, but you were really trying to get some brownie points with
Rick the Bruiser. I didn't even have to do the test to see what
variable you were controlling for.

I think you've found a home.

Adios,
Dan

Dan Miller
miller@riker.stjoe.udayton.edu

[From Richard Kennaway (980211.1000 GMT)]

Dan Miller (980210):

When you first came onto CSGNet a few months ago, I read your papers,
including the one you note.

Which of the others did you read? "Infinitary lambda calculus"?
"Transfinite reductions in orthogonal term rewriting systems"? Or maybe
that best-seller, "Graph rewriting in some categories of partial
morphisms"? I wouldn't have thought any of my papers but the one on
correlations would be of interest to a sociologist. BTW, I've been reading
CSGNET for several years. I say this only to correct a misstatement, not
to claim any sort of old-timer's authority.

If you want to send it to me, then go
ahead.

It's not whether I want to send it to you, but whether you want to receive
it. I have no interest in foisting it on you. If you specifically want to
receive it, let me know, specifying either that you can handle a uuencoded
gzipped PostScript file, or giving me a snailmail address to send a paper
copy to.

Richard Kennaway speaking for Bill writes:

I am not speaking for Bill. I am speaking for myself. Bill speaks for
himself. Yours is a strange way -- canting and rhetorical, one might say
-- to characterise comments which I prefaced with the words

I'm sure Bill Powers will make (or has already made) the following points
much more clearly than I can, but I can't resist.

You continue:

So, my measure of strong is level of eye-raising. The stronger the
statistical relationship the higher the eye-raising. BTW, I would
only make such a statement if I had the measures of at least fifty
sociologists. Only then would I make the kind of statement I do.
The fifty has a meaning, right, Richard?

Fifty just means fifty to me. As you say, argument from authority cuts no
ice these days, so presumably you don't regard the agreement of fifty other
sociologists as evidence for anything more than that fifty-one sociologists
agree. What meaning does the fifty have for you?

What you do is presuppose a mechanism to connect the two variables: the act
of reading creates a context where certain ideas can thrive. You don't
even put it forward as a hypothesis or speculation, but slide it in as a
presupposition. Why?

PCTers have already established the control mechanism (are these the
right words?). This is really all they have established, which I
think is great. It is an outstanding demonstration of how behavior
works. So, I presuppose control. Do we have to establish it in each
case?

I have some difficulty understanding the connection between these remarks
and the paragraph of mine they appear to be in response to. But to answer
your final question, in each case, if one wants to know what is being
controlled, one does need to establish that, by applying the test for
controlled variables. PCT says that there are controlled variables around;
speculation about what they might be may be useful; but only an actual
performance of the test will tell one what they are.

Richard, read my posts carefully. This will help you in school.

Condescension, yet. And you claim to dislike "cant and rhetoric"?

I used the finding as an example of possibly interesting IV-DV
research - research that might sensitize people, raise an eyebrow,
get them off their computer. I could have used any of several other
findings that I find interesting. I never for a moment wondered if
you would find it interesting. BTW, how can you claim that the
mechanism is not operative for 29% of the people if you have not seen
the data?

I'm just guessing that the proportion is 29%, on the basis of an assumed
distribution, because all the information I have is the figure of 0.62. I
plug 0.62 into the relevant formula from my paper, and out comes 29%. (My
paper includes derivations of all the formulas it tabulates.) So maybe the
actual proportion is 35%, or 20%. If the exact proportion mattered I'd
want to know what it actually was instead of guessing, but the point I am
making, and the point you are making, only require being in the right
ballpark.

So you find 0.62 "interesting". So fifty other sociologists might find it
"interesting", categorise it and several other findings as being "strong"
correlations. So what?

Yes, the r=.62 is a cloud, but one with a definite shape, wouldn't
you say?

There it is again. "Definite shape." "Strong correlation." "Eyebrow
raising." "Interesting." Is all you claim for these experiments, that
they may suggest to researchers where to look to start doing real science?
And what does that real science look like? More surveys and population
statistics? I am not a sociologist. I don't know what goes on in your
journals and conferences when you're not producing the 0.62 findings that
according to you, only intrigue, only point the way to what might be worth
studying. You do. Tell me.

Richard, I'm sorry, but why would you want to do the ecological
fallacy? That is, why would you think you could predict "an
individual's value of [sic] one variable given the other"?

The whole thrust of that paper of mine, and my posting, is to stone the
ecological fallacy to death and nuke the remains from orbit, and you say
I'm committing it? I am baffled.

Accepting your formula [for the 29% figure], this is well short of one-half.

"Well short of." Another vague phrase which refers only to your
perceptions, and perhaps those of fifty sociologists, that this number is
to be categorised as "well short of one-half".

[My demonstration of less than one bit of mutual information]

There you go trying to talk about individuals, again.

There *you* go, reading me saying why one can't do that, and accusing me of
doing that.

Surely
you have the scientific proof for establishing an ecological fallacy?

Your grammar is obscure to me here. I don't know what you mean by that
sentence.

Analysis of variance is an inferential statistic. We've been talking
about population/descriptive statistics, haven't we? Even
in statistical packages for computers, analysis of variance is not
computed like I did it by hand in graduate school. Is this what you
are getting at? I don't follow.

Perhaps it would be clearer expressed in mathematics. My figure of 38% is
c-squared, c being 0.62. c is the product-moment correlation over a set of
pairs (x,y), defined as Sigma xy / sqrt( Sigma xx * Sigma yy ) (assuming
means of zero to keep the formula simple). That calculation can be carried
out with any set of pairs; what it means depends on what those pairs mean.
This is the calculation I assume Rick's demos perform (I've not seen his
code); this is the calculation I am guessing is performed in the studies
that your 0.62 figure comes from.
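The centered version of that calculation - Sigma xy / sqrt(Sigma xx * Sigma yy)
with the means subtracted out rather than assumed zero - is only a few
lines. A sketch of my own (not Rick's code, and not whatever package
produced the 0.62):

```python
import math

def pearson_r(pairs):
    """Product-moment correlation: sigma_xy / sqrt(sigma_xx * sigma_yy),
    after centering each variable on its sample mean."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

print(pearson_r([(1, 2), (2, 4), (3, 6)]))          # 1.0: perfectly linear
print(pearson_r([(1, 3), (2, 1), (3, 2), (4, 4)]))  # 0.4: a loose relation
```

As Richard says, the same arithmetic can be run over any set of pairs;
what the number means depends entirely on what the pairs mean.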

Who was talking about prediction? The only thing I would do,
supposing the finding had any merit, would be to teach kids how to
read (and, if possible, how to love reading).

Why does the cited statistic move you to do that? Of course, you very
likely already have other reasons for getting kids to read, and so would I.
I'm not asking about those reasons, but about the relevance that you see
of the statistic to that task.

Again, defending Bill Powers you write:

I am not defending Bill Powers, nor does he need any defence. I am seeing
you write things which I disagree with, and want to take issue with.

And your point is what? That the statistical association doesn't
really exist? Have you ever heard of Kaplan's Law of the Instrument?

No.

[web search]

Ah. Are you referring to the proverb that "to someone with a hammer,
everything looks like a nail"? In some versions, "to a small boy with a
hammer etc." If so, the relevance escapes me. If not, the allusion
escapes me. If I were minded to engage in cant and rhetoric, I might
allude here to Lacombe's Rule of Percentages and Life's Law of the Highway,
or perhaps the Fourth Law of Understanding. Oh, don't miss the second
corollary to Maier's Law, it's a real hoot in this context. I'll save you
the trouble of looking that one up:

"The experiment may be considered a success if no more than 50% of the
observed measurements must be discarded to obtain a correspondence with the
theory."

And 29% is well short of 50%, isn't it?

But enough of this digression into cant and rhetoric.

Sorry, but correlations are point-in-time measures. Were
point-biserial correlations used? Are these measures of errors
over a period of time?

These are the correlations between disturbance and output, and between
disturbance and perception, measured over a period of time during the
performance of a control task, e.g. cursor tracking. Several of Rick's
demos do that, showing that successful control typically yields a value for
the former of below -0.95, and a value for the latter in the mush region.
Have you tried the "Mind reading" demo?
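The kind of result described here can be reproduced with a toy control loop. The loop structure, gain, and disturbance below are illustrative assumptions of mine, not Rick Marken's actual demo code:

```python
import math, random

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sxy = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sxx = sum((x - ma) ** 2 for x in a)
    syy = sum((y - mb) ** 2 for y in b)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
o, d = 0.0, 0.0                  # output and disturbance
ds, os_, ps = [], [], []
for _ in range(2000):
    d += random.gauss(0, 0.05)   # slowly drifting disturbance
    p = o + d                    # perception of the controlled variable
    e = 0.0 - p                  # error against a zero reference
    o += 0.5 * e                 # output acts to cancel the error
    ds.append(d); os_.append(o); ps.append(p)

print(corr(ds, os_))   # strongly negative: output mirrors the disturbance
print(corr(ds, ps))    # weak: the perception is held near the reference
```

The point the simulation makes is the one in the paragraph above: when control is good, disturbance and output are almost perfectly (negatively) correlated, while disturbance and perception are nearly uncorrelated.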

My dispute is with the cant
and rhetoric of this forum.

Are we so needy that we have to have statistics to secure our faith
that we are scientists, or to convince others of that status? Darwin
didn't have them. Is he a scientist? Oh, I forgot - Darwin was
proven wrong by Bill.

Do you see anything incongruous about the juxtaposition of the above two
paragraphs? They appear to me to flatly contradict each other.

I didn't even have to do the test to see what
variable you were controlling for.

Dream on -- for if you don't do the test, that's exactly what you're doing.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[From Bill Powers (980211.0316 MST)]

Dan Miller (980210) --

You attacked the example I used of an interesting finding - one that
might sensitize a PCT researcher to a real world process. This might
lead to PCT research on the issue. I argued that such findings were
useful, and that they were needed - insofar as there are people out
there in the world doing some observing and measuring. I never
suggested that it was particularly good research.

You, in turn, attacked the idea of using simulations, apparently under the
impression that we use simulations _instead_ of studying real-world
behavior. Let's see if we can't reach some sort of mutual understanding
about all this.

Let's consider the Little Man. What kind of behavior is being simulated
here? It's the behavior that consists of reaching out to touch something
that can be seen. This is a very common kind of behavior that's part of any
manipulation of the environment. So if we understood how this kind of
behavior works, we would know something important about a large fraction of
all behaviors.

There are two main classes of explanation of this behavior other than PCT.
One of them says that the visual stimulus-object, the target, acts on the
nervous system and thence the muscles to cause the hand to move toward the
target. The other says that the brain uses information about the target
position to compute the neural signals needed to move the hand to the
target. Of course the PCT model says that neither of these explanations is
correct; instead, the brain perceives the distance between the hand and the
target, and acts to bring this distance to zero (or whatever relation is
desired, such as hand circling target, which the Little Man can do).

Those competing explanations can't all be right. So how do we choose among
them? One way is to look more deeply into each of them, to see what they
entail. If the stimulus of the target causes movements of the hand toward
the target, we have to ask how this could be brought about. Can we imagine
any realistic way to hook up the eyes, brain, and muscles that would
achieve the observed result in this way? If the brain computes the signals
needed to cause the muscles to drive the hand to the target, can we imagine
what kinds of computations are needed to accomplish this end? And for PCT,
can we imagine the control systems that would have to exist to guarantee
that the hand would always be moved toward the target?

Nobody has ever tried to build an S-R model to back up that explanation.
Some attempts have been made to produce a computed-output model that would
reproduce the observations, and of course I have produced the Little Man
model to back up the PCT explanation. What do such models consist of?

Basically, the models represent an attempt to include all that is known
about the physics of vision and the dynamics and kinematics of moving
masses, as well as data and reasonable guesses about properties of the
muscles and nervous system. The arm, for example, is driven by muscle
forces that apply torques at each joint in each degree of freedom. Using
physical equations, we can calculate quite accurately how the masses of the
arm segments will move when any torques are applied at the joints. We also
have information about how muscle tensions are generated out of neural
signals, and how the shortening of the muscles and the forces on the
attachments are sensed and fed back to spinal motor neurons. Any model must
take into account these known features of the situation.

Just as important, any model has to _behave_ correctly. This means not just
doing the calculations on paper or generalizing about what will happen, but
as nearly as possible _simulating_ the behavior by actually constructing a
working model and showing that out of its own properties it generates the
same kind of behavior that is observed in the real system. The simulation
is a detailed embodiment of the claims that underlie the explanation. In the
Little Man simulation, these detailed claims include the physics of
movement of the arm segments, the contractile and elastic properties of
muscles, the stretch and tendon reflexes, and the geometric optics of
vision including binocular depth perception. Of course we have to guess at
some things and bypass problems we haven't yet solved. For instance, nobody
knows how two retinal images give rise to perceptions of object position in
three dimensions, so in the Little Man we simply skip over the how and
provide three signals calculated from the retinal images: x and y retinal
positions for the objects seen by each eye, and z (depth) from the
disparity in the two images. We don't know how the target and fingertip are
picked out of the scene for positional representation, so we just pick them
out and skip over that problem. We also have to guess at the details of the
neural circuits.

But as much as humanly possible here and now, this model is a feasible
representation of how the brain, muscles, and physical masses behave, and
the complete model takes all these things into account, quantitatively.
What you see on the screen is the result -- the hand reaches out to touch
the target, and if the target moves, the hand tracks it.

I hope you can see that this simulation constitutes a far more complete
explanation of the hand-eye behavior than the verbal statements I gave in
the beginning. It uses much more detailed observations of the phenomena
than those verbal statements represent. And it explains far more than just
reaching out to touch a target: it explains the details of this process,
including accelerations and decelerations, and pointing errors, and the
forces that are actually generated by the muscles. It explains the roles of
the stretch and tendon reflexes in achieving dynamic stability.
Constructing the simulation has actually forced us to examine the real
system and its behavior in far more detail than a simple naturalistic
description of behavior would require.

Now let's consider a different aspect of simulations. When we know very
little about how a behavior works, we can offer all sorts of explanations
of the observations. But one important question is, what is the _simplest_
explanation we could think of that would cover all the observations? The
Crowd demo is an example of this use of simulations.

In the Crowd demo, each active person has only a few control systems, all
based on controlling proximity to other objects in different ways. There is
a collision-avoidance system, a system for approaching a fixed goal
position some distance away, and a system for maintaining a fixed proximity
and direction relative to one other person. These control systems act by
varying the speed and direction of movement. The means of locomotion isn't
even modeled -- whether it involves walking, rolling, crawling, or
slithering is irrelevant.

What we learn from the Crowd demo is that when independent control systems
with simple properties like these interact, the outcome looks a great deal
like what we see in real gatherings. We see some obvious social patterns,
like marching in columns or forming arcs and rings about a focus or vying
for position in the manner of a King Of The Hill game. But because this is
a simulation, we know that there is nothing in the program concerned with
any social patterns; there are no "social control systems" or "social laws"
in the program. The patterns we see are strictly emergent from the
properties and goals of the interacting individuals, whose knowledge of
each other is confined to two measures: spatial proximity, and direction
relative to the direction of travel. Well, there is a third: identity. Each
one can selectively control the proximity and direction relative to one
other object, so it can distinguish one object from all the others.
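A drastically simplified sketch of two such interacting systems (goal approach and collision avoidance) is below. The geometry, gains, and update rule are my own illustrative assumptions, not the Crowd program's actual parameters:

```python
import math

def unit(vx, vy):
    n = math.hypot(vx, vy)
    return vx / n, vy / n

goal, obstacle = (10.0, 0.0), (5.0, 0.3)   # assumed layout
x, y = 0.0, 0.0
min_obs_dist = float("inf")

for _ in range(3000):
    gx, gy = goal[0] - x, goal[1] - y
    if math.hypot(gx, gy) < 0.1:          # close enough to the goal
        break
    vx, vy = unit(gx, gy)                 # system 1: approach the goal
    ox, oy = x - obstacle[0], y - obstacle[1]
    dist = math.hypot(ox, oy)
    min_obs_dist = min(min_obs_dist, dist)
    if dist < 1.5:                        # system 2: proximity too small
        push = 2.0 * (1.5 - dist)         # output grows with the error
        vx += push * ox / dist
        vy += push * oy / dist
    x += 0.05 * vx                        # outputs vary speed and direction
    y += 0.05 * vy

# the agent ends near the goal, having skirted the obstacle
```

Nothing in the code plans a route or compares paths; the detour around the obstacle is strictly emergent from the two error-driven systems, which is the point of the paragraphs that follow.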

When you observe the behavior of this Crowd simulation naturalistically,
you see many phenomena that you would see in a real crowd observed in the
same way. But you would also see a lot of things that don't exist. In one
case, the simulation is set up to have a single active person threading
among multiple stationary obstacles to reach a goal position. As you observe
the movements of this individual, you see what appears to be a selection of
one path over another; you see exploration of some paths that prove to be
blocked, and retracing of steps to try different paths. You see abandonment
of one path in favor of a better one.

All of these apparent and easily observable features of the behavior are
illusions, when we are speaking of observing the simulation. We can say
that with confidence because we know what is in the program, and there is
nothing there to correspond to any of the above interpretations. All that
the individual is trying to do is get to the distant goal position while
avoiding too much proximity to other nearby objects. There is no selection
of paths. There is no instruction to back out of cul-de-sacs. There is no
planning ahead, no optimizing, no comparing of different routes, no
selection of a "best" route. Yet people watching this utterly simple
simulation will say "Oh, he missed that short-cut," assuming that there is
some criterion for selecting the best path.

If such attributions reflect illusions when observing the simulation, the
chances are good that they would also be illusions when observing the real
gathering of people. This is a case of naturalistic observation revealing
far more about the observer than the observed. What we see happening
reflects what we assume to be going on beneath the surface. Naturalistic
observations, when uninformed by any conscious explicit model, are
hopelessly contaminated by the observer's expectations and unconscious
models. The observer sees things happening which are probably not happening
at all. Atheoretical naturalistic observation is simply a form of
Rorschach test.

I have to close, not because the subject is exhausted but because I am
(it's now 5 in the morning and time to finish my beddie-bye). However,
there's one last point:

You then opened your file drawer and did the old piece on
why correlations of only .62 can't tell us anything. The problem
with this standard rap is that you are incorrect in your analysis.

Yes, and thanks to Richard Kennaway for giving us the correct numbers. With
a correlation of 0.62, the chances that your explanation will be wrong are
29 percent, as opposed to 50 percent if you were guessing randomly.

The problem with your explanation of this phenomenon is that it can't
possibly be "correct" in any sense I can think of. The generalization you
offer doesn't say that 71 percent of people who read more will have
progressive opinions; it says that ALL of them will. Of course you don't
want to say that 71% of them will have progressive opinions, because the
next obvious question will be "All right, _which_ 71%?" In other words,
when do you expect this generalization to be right, and when wrong? It's
much easier to say that "people" who read more will have progressive
opinions, and let the listener jump to the conclusion that this "fact" is
true of everyone. That, of course, lends credibility to your logical
explanation, which also doesn't say that 71 percent of people who read more
are subject to influences that nurture progressive thought; it says that
ALL people who read more are led toward progressive thought. (An aside --
doesn't it matter WHAT they read? Ever been in a John Birch bookstore?)

Well, enough of that for now.

Best,

Bill P.

[Dan Miller (980211)]

Bill Powers:

Bill, a very nice post. I agree with nearly all you write. My
intention all along in this string is to suggest that we get off the
obsession with behaviorism. If that is the point - to do a better
job than behaviorists, or even better than those who give glossy
descriptive explanations, then I am in total agreement.

I never wanted to defend the reading correlation study. It was
merely an example, not my idea of good research. If a student of
mine were to come to me full of excitement with this example, I would
try to get him/her to unpack it and figure out how we could study the
possible relationship in a more valid manner (some form of PCT
research I would hope). A different tactic.

One final point about the correlations. I worked with statistics
most of my undergraduate, graduate, and professional career. I
helped write some programs for the SPSS Manual. In my own research I
never thought that the use of point in time measures was particularly
helpful, and time-series or chain analysis measures also did not
capture the at once processual and then structural nature of my data.
I've always thought that people should be critical users of
statistics - as you are and Richard Kennaway (and many others). So,
when I and others see correlations of r=.95+ we get very wary. Are
they measuring an identity? Or as my late teacher asked, "Are they
telling someone to walk a straight line then measuring their
deviation from straightness?" I know it is more complex than that,
but he has a point - that this (or similar such) finding is not going
to shake the world of behavioral science. Also, he would note that I
have understood the idea of control since I was a junior in college.

The need to be scientistic - to use (and maybe misuse) statistics to
gain favor with those you really don't like is silly, in my opinion.
There must be other, more appropriate, ways to write the findings.
Anyhow, I (and others) don't find the correlations very convincing.

Are we all messed-up? Of course we are.

Dan

Dan Miller
miller@riker.stjoe.udayton.edu

[Dan Miller (980211)]

Richard Kennaway:

My goodness, I was wrong on every point (from your perspective).
I must be a complete idiot. But, I'll respond generally instead of
trading barbs. When you first posted to CSGNet (or when I first
read you) I did check you out and read some of your pieces to get a
sense of who you were and what you did. So, yes I suppose I did read
some of those papers. If they were available, I'm sure I did. You
seem like a bright guy. Does this make you feel superior to me?

If some of my statements and paragraphs seem contradictory, then,
perhaps I was using irony or sarcasm. Or, maybe I am confused. I
reread the statements and paragraphs you said you had trouble
understanding, or that you said were contradictory. I didn't have
any trouble understanding what I said, and I didn't see any
contradiction. Then, again, I don't know what you are controlling
for - and I can't perceive your perceptions.

About the sloppy use of language, e.g., interesting, strong, shape -
how about 29%? What does that mean? What is its significance? How
does it relate to other such percentages? Just where does your
sloppiness end and mine begin? I was talking about what was an
interesting (to me) finding - one that I might think could lead to
some PCT research. Please, King Richard, turn this sincere statement
into a percentage so I can see how much smarter you are than I.

Finally, the TEST. When I perform the test I do this with the
intention of revealing the CV of another, right? But, what am I
doing? I am acting on the environment purposively. The only thing
that will change is my perception (input), right? I mean I can't
actually make a claim about objective and obdurate reality can I?
So, with the test, am I then testing my own CV? I'm really having
trouble with the scientific claim for facts given that we are control
systems. I can handle it in terms of a collective agreement -
symbolically based (language, math) - that we are perceiving the same
(or functionally so) thing, process, event. But to claim objective
fact from, say delusion, based on the control of perception keeps me
amused.

Richard, I may be dreaming, but I'm not stupid.

Dan

Dan Miller
miller@riker.stjoe.udayton.edu

[From Richard Kennaway (980211.1548 GMT)]

I'll try to keep this PCT-relevant, and not go at length into the stuff
which isn't.

Dan Miller (980211) writes:

I'll respond generally instead of
trading barbs.

and two paragraphs later:

Please, King Richard, turn this sincere statement
into a percentage so I can see how much smarter you are than I.

No comment.

If some of my statements and paragraphs seem contradictory, then,
perhaps I was using irony or sarcasm. Or, maybe I am confused. I
reread the statements and paragraphs you said you had trouble
understanding, or that you said were contradictory. I didn't have
any trouble understanding what I said, and I didn't see any
contradiction. Then, again, I don't know what you are controlling
for - and I can't perceive your perceptions.

I am attempting, in this discussion, to control for understanding what you
are saying; where I disagree with it, to explain how and why; and to do
this while avoiding canting and rhetorical forms of expression, which in my
view hinder communication.

About the sloppy use of language, e.g., interesting, strong, shape -
how about 29%? What does that mean?

I thought I had explained that. Here is another explanation.

In a bivariate normal probability distribution with a product-moment
correlation of 0.62, the measure of that part in which one of the variables
is above its mean and the other is below is 29% (to two significant
figures). This is a purely mathematical statement.
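The 29% figure follows from the standard orthant probability of the bivariate normal: the chance that the two variables fall on opposite sides of their means is 1/2 - arcsin(r)/pi. A few lines reproduce it:

```python
import math

def discordant_fraction(r):
    """P(one variable above its mean, the other below) for a
    bivariate normal with product-moment correlation r."""
    return 0.5 - math.asin(r) / math.pi

print(round(discordant_fraction(0.62), 2))  # → 0.29
```

At r = 1 the fraction is 0, and at r = 0 it is 1/2, matching the "guessing randomly" baseline mentioned earlier in the thread.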

The meaning of this 29% in terms of what you can do with the cited study of
reading and politics has been amply described by Bill Powers and myself,
and more generally by others (e.g. Philip Runkel's "Casting Nets" book): no
reliable individual predictions, no predictions at all about what the same
individual will do in varying circumstances, no ascription of any
"tendency" to individuals to behave in any way resembling the population
distribution, no attribution of any general influence of one variable on
the other. It appears you agree with the first two of these; your attitude
to the last two I am not sure of.

I might describe it as mush, but only as a shorthand way of alluding to the
above. What, if anything, are you alluding to when you call it "strong",
other than the conjectured opinions of other sociologists that they would
call it the same, or that it is unusually high for sociological research?

I was talking about what was an
interesting (to me) finding - one that I might think could lead to
some PCT research.

Fine. Describe the PCT research you envisage: the variables you might
study to see if they are controlled, the disturbances you might apply, etc.

Finally, the TEST. When I perform the test I do this with the
intention of revealing the CV of another, right? But, what am I
doing? I am acting on the environment purposively. The only thing
that will change is my perception (input), right? I mean I can't
actually make a claim about objective and obdurate reality can I?

I cannot tell whether you are being ironic or serious in your last question.

Taking your question seriously, my answer is, yes, you can. The claims you
make may be true or false, but you can certainly make them. Not only can
you make them, you can go a long way to determining whether they are true.

So, with the test, am I then testing my own CV?

No, you are testing the CV which you are observing while applying
disturbances to it.

I'm really having
trouble with the scientific claim for facts given that we are control
systems.

This sounds like you are arguing from PCT to solipsism. That all of our
experience is perception, a process within our own brains, does not to me
imply any such conclusion. If you don't think there's a world outside your
perceptions, I don't see how to continue the discussion.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[From Bill Powers (980211.1533 MST)]

Dan Miller (980210)--

In my reply to Bill Powers I wrote:

> Amount of reading (IV) is statistically related to Political
> Progressivism (DV) with a correlation of, r = 0.62.
>
>You say that nothing can be said about this because for nearly half
>of the subjects the relationship does not hold. Actually, this is
>not how population statistics work. A correlation of .62 tells us
>that the relationship is quite strong for the sample studied.

Here is a scatter plot of a Gaussian random variable with a correlation of
0.62:


*
                       *
                                    *
    *
                 *
                                   *
                      *
        *
             *
                            *
           *
                                           *
                                               *
                                                          *
                          *
                                          *
                                         *
                             *
                         *
                               *
                          *
                                                    *
                                                                   *
                                                 *
r = 0.622

With a correlation of 1.0, the plot would be a straight line running from
upper left to lower right. As you can see, this "strong" relationship
involves many points that lie far from any fitted line. Where I grew up, a data
plot like this would be interpreted as "back to the drawing board."
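A scatter like the one above can be generated with the standard construction y = r*x + sqrt(1 - r^2)*noise, which yields pairs whose population correlation is exactly r. This sketch is my own illustration, not the source of the plot:

```python
import math, random

def correlated_pairs(r, n, seed=0):
    """Draw n Gaussian pairs whose population correlation is exactly r."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        out.append((x, y))
    return out

pairs = correlated_pairs(0.62, 5000)
xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
r_hat = sxy / math.sqrt(sxx * syy)
print(r_hat)   # sample correlation, close to 0.62
```

Plotting any such sample makes Bill's point visually: at r = 0.62 the cloud is wide, with many points well off any fitted line.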

PCTers have already established the control mechanism (are these the
right words?). This is really all they have established, which I
think is great. It is an outstanding demonstration of how behavior
works. So, I presuppose control. Do we have to establish it in each
case?

Yes. In every experiment.

Best,

Bill P.

[From Bill Powers (980212.0307 MST)]

Dan Miller (980211)--

So,
when I and others see correlations of r=.95+ we get very wary. Are
they measuring an identity? Or as my late teacher asked, "Are they
telling someone to walk a straight line then measuring their
deviation from straightness?" I know it is more complex than that,
but he has a point - that this (or similar such) finding is not going
to shake the world of behavioral science. Also, he would note that I
have understood the idea of control since I was a junior in college.

Actually, one of the interesting things we do in PCT, in effect, is to tell
people to walk in straight lines and then measure deviations from
straightness. One of the basic questions is "How can a person walk in a
straight line?" -- especially if there's a gusty wind blowing and the
terrain is uneven. Merely instructing someone to walk in a straight line is
far from enough to explain the result -- all it could possibly explain is
the goal of walking in a straight line. How such things are achieved is the
whole business of PCT.

When people actually try to walk in a straight line, we observe that they
don't actually do so, especially not when obvious disturbances are acting
but even when we can't see any disturbances. What we observe are repeated
little deviations from walking in a straight line, which are immediately
corrected -- or at least they're not allowed to build up and become large
deviations. To explain such observations, we try to guess at the way the
person is organized inside, and of course our best guess so far is that
it's a control system in there. To test this idea, we design a control
system and simulate the same behavior as realistically as we can. We adjust
the (few) parameters of the model until the model, given known
disturbances, deviates from the ideal behavior just as much as the real
person, and in the same manner through time.

We can't actually simulate human walking behavior, although we're gradually
working up to it with our bug simulation. But we can simulate simpler
behaviors like those in the Demo 1 experiments and in Marken's many
experiments and simulations. And our models, with at most 3 adjustable
parameters, turn out to behave like real people given the same tasks, with
correlations of 0.95+ between the model's output fluctuations and those of
the person, sampled 30 times per second for one minute. A large part of
that correlation comes from the model's showing the same deviations from
ideal behavior that the real person shows, in detail, through time.
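The fitting procedure described above can be sketched in miniature. Here a control loop with a hidden gain stands in for the "real person's" tracking data, and a grid search over the model's one free parameter maximizes the model-person output correlation; the loop, noise level, and gain values are my own illustrative assumptions, not the actual experiment code:

```python
import math, random

def run_model(gain, disturbance):
    """Proportional control loop: output opposes perception p = o + d."""
    o, outs = 0.0, []
    for d in disturbance:
        o += gain * (0.0 - (o + d))
        outs.append(o)
    return outs

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sxy = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sxx = sum((x - ma) ** 2 for x in a)
    syy = sum((y - mb) ** 2 for y in b)
    return sxy / math.sqrt(sxx * syy)

random.seed(2)
dist, d = [], 0.0
for _ in range(1800):                # one minute at 30 samples/second
    d += random.gauss(0, 0.05)
    dist.append(d)

# "person": the same loop with a hidden gain of 0.7, plus a little noise
person = [o + random.gauss(0, 0.02) for o in run_model(0.7, dist)]

# fit the single free parameter by maximizing model-person correlation
best_corr, best_gain = max((corr(run_model(g / 100, dist), person), g / 100)
                           for g in range(10, 100))
print(best_corr, best_gain)   # correlation near 1, gain near 0.7
```

The model is never told what trajectory to produce; its output emerges from the loop, yet with the right parameter it reproduces the "person's" moment-by-moment deviations, which is where the 0.95+ correlations come from.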

What has to be understood, and what many behavioral scientists appear not
to understand, is that these models are not "instructed to behave." Their
behavior is not specified in advance. Instead, the behavior grows out of
the relationships specified in the model: the connection between error and
action, the effect of action and disturbance on perception, and the process
of comparing the perception with the goal-perception to calculate the error.
This is what gives us the high correlations: the fact that we can predict
very accurately how the person will _fail_ to follow the instructions.

The need to be scientistic - to use (and maybe misuse) statistics to
gain favor with those you really don't like is silly, in my opinion.
There must be other, more appropriate, ways to write the findings.
Anyhow, I (and others) don't find the correlations very convincing.

I agree that this strategy backfired. What I had hoped to do by expressing
the tracking results in terms of correlations was to use terms with which
psychologists were familiar, so they could see how much better we could
predict by using control theory (I would never use that measure for any
other reason, as it's too insensitive to errors of prediction).
Unfortunately, we can predict SO much better that many psychologists (and
others like you) simply can't believe that the results are legitimate. A
referee of my 1971 rat paper actually accused me of fudging the data to fit
the theory, even though it was Verhave's results I was using.

All this supports my claim that the behavioral sciences have pretty much
given up on understanding behavior. In other sciences, r = 0.95+ is
commonplace -- in fact, if the data are much worse than that, one tends to
start looking for methodological or procedural errors, or even for basic
errors in understanding.

Imagine a bridge design in which the length of the supported roadway
consistently was 3% longer or shorter than the distance to be spanned.
Unlikely? That is because bridges are designed and built by organisms, and
we expect them to be built very precisely indeed. The 3% prediction error
we get when modeling human tracking behavior is not caused by randomness in
the behavior, but by shortcomings in our models. The actual noise component
is probably under 1 per cent.

Human behavior is precise and precisely repeatable in its effects. If any
measure of behavior shows variability of more than one or two percent, the
measure is too crude or the wrong variables are being measured.

It's time to stop blaming the inadequacies of our theories on the inherent
randomness of behavior. There is essentially no randomness in normal adult
behavior. If we see a relationship in which r = 0.62, we're simply looking
at the wrong thing, like looking at the relationship between birth rate and
the cost of electricity. If we accept an r of much less than 0.95, we're
barking up the wrong tree.

The other side of this judgment, of course, is that we CAN do much better
in understanding behavior. Much, much better. If behavioral scientists
could only start believing that, maybe we'd start getting somewhere.

Best,

Bill P.

[From Tim Carey (980213.0655)]

[From Bill Powers (980212.0307 MST)]

Great post Bill ... thanks,

Tim