Phone conversation about locus of control

[From Rick Marken (2009.06.24.0740)]

Bill Powers (2009.06.24.0118 MDT)–

It’s great to see everyone pulling together here as if PCT might even be
the preferred idea.

Must be one of those mostly imagination perceptions I hear tell about;-)

RM to MT: The goal of
experimental research is to find the variables that account for the
variance in behavior (the DV in experiments). In most psychological
experiments, several variables (IVs) are manipulated
simultaneously.

BP: I think the big difference here is whether the relationship being
studied is based on a model or is purely empirical.

I was describing how it is conceived from the conventional point of view. And the experimental relationships being studied in conventional experiments are based on a model: the general linear model of statistics: DV = a + b1*IV1 + b2*IV2 + b3*IV1*IV2 (for a two-way factorial experiment, for example). The idea is that, after studying the relationship between a DV and many different IVs, it should eventually be possible to learn which IVs contribute to the variance in the DV, so that we should be able to account for nearly all the variance in the DV in an experiment where all those IVs are included.
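
[A minimal sketch of the general linear model Rick describes, for readers who want to see it run. The coefficients, noise level, and standard-normal IVs below are all invented for illustration, and numpy is assumed to be available; this is not data or code from any actual study.]

    # Illustrative sketch only: simulate the two-way factorial general linear model
    # DV = a + b1*IV1 + b2*IV2 + b3*IV1*IV2 + error, then fit it by least squares.
    # Coefficients, noise level, and standard-normal IVs are all invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    iv1 = rng.standard_normal(n)            # first manipulated variable
    iv2 = rng.standard_normal(n)            # second manipulated variable
    error = rng.standard_normal(n)          # everything the IVs do not explain

    a, b1, b2, b3 = 1.0, 0.5, 0.5, 0.25     # hypothetical "true" coefficients
    dv = a + b1 * iv1 + b2 * iv2 + b3 * iv1 * iv2 + error

    # Least-squares fit of the same model to the simulated data.
    X = np.column_stack([np.ones(n), iv1, iv2, iv1 * iv2])
    coef, _, _, _ = np.linalg.lstsq(X, dv, rcond=None)
    residual = dv - X @ coef
    r_squared = 1 - residual.var() / dv.var()
    print("estimated coefficients:", np.round(coef, 2))
    print("variance accounted for (R^2):", round(float(r_squared), 2))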

RM: Have a nice dip in de
Nile;-)

BP: Do you want us all to be together in this, or are you trying to drive
Martin out?

Who were you trying to drive out when you said: "The response to
my comments and Richard Kennaway’s analyses has been “Oh, it can’t
possible be that bad, I don’t believe it, no way, how could so many
people have been that wrong?” A severe case of denial if I ever saw
one."

See a shrink.

I will if you do;)

You’re sounding like Rush Limbaugh.

If only. He makes millions being nasty (and stupid). I just scrape by being funny (and bright).

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.06.24.0945 MDT)]

Rick Marken (2009.06.24.0740) –

Who were you trying to drive out
when you said: "The response to my comments and Richard Kennaway’s
analyses has been “Oh, it can’t possible be that bad, I don’t
believe it, no way, how could so many people have been that wrong?”
A severe case of denial if I ever saw one."

I wasn’t picking on anyone in particular, and that’s the difference. If
someone wants to try that shoe on and wear it, that’s their business. The
guilty flee when no one pursueth. Or “Methinks he doth protest too
much.” I’m not about to tell anyone that this description fits
them. If it does, they will know and react.

See a shrink.

I will if you do;)

Already did, long ago.

You’re sounding like Rush Limbaugh.

If only. He makes millions being nasty (and stupid). I just scrape
by being funny (and bright).

When you’re the only one laughing, you have to ask what
“bright” means.

Best,

Bill P.

[From Rick Marken (2009.06.24.0920)]

Bill Powers (2009.06.24.0945 MDT)–

When you’re the only one laughing, you have to ask what
“bright” means.

It means keeping oneself gently amused in this crazy world;-)

Love

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[From Rick Marken (2009.06.24.0925)]

Richard Kennaway (2009.06.24.0705 BST)–

Bottom line: When more than one independent variable can substantially influence a dependent variable, high correlations are impossible in principle. Nevertheless, the existence of even a low correlation between x and y shows either that x influences y, y influences x, or something else influences them both.

This is true if “influences” is taken to include indirect effects, i.e. A would be said to influence B in the situation where A has a direct causal effect on some variable C and C has a direct causal effect on B. But this actually tells you very little about where the direct causal links are…

Super post, Richard!! Thanks!

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[From Rick Marken (2009.06.24.0930)]

Bill Powers (2009.06.24.0945 MDT)–

Rick Marken (2009.06.24.0740) –

Who were you trying to drive out
when you said: "The response to my comments and Richard Kennaway’s
analyses has been “Oh, it can’t possible be that bad, I don’t
believe it, no way, how could so many people have been that wrong?”
A severe case of denial if I ever saw one."

I wasn’t picking on anyone in particular, and that’s the difference. If
someone wants to try that shoe on and wear it, that’s their business. The
guilty flee when no one pursueth.

So it’s better to drive people out passively rather than actively? I guess we’re both driver-outers, you’re just a higher class one;-)

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.06.24.1127 MDT)]

Rick Marken (2009.06.24.0930) --

So it's better to drive people out passively rather than actively? I guess we're both driver-outers, you're just a higher class one;-)

It's a matter of leaving room to change. But yes, definitely a higher-class one. Thanks for noticing.

Best,

Bill P.

[From Rick Marken (2009.06.24.1140)]

Bill Powers (2009.06.24.1127 MDT)–

Rick Marken (2009.06.24.0930) –

So it’s better to drive people out passively rather than actively? I guess we’re both driver-outers, you’re just a higher class one;-)

It’s a matter of leaving room to change.

Hey, I leave room to change. My approach is sort of like water-boarding: people are free to change their minds and tell me what I want to hear;-)

But yes, definitely a higher-class one. Thanks for noticing.

Hard to miss;-)

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[From Bruce Abbott (2009.06.24.1820 EST)]

Richard Kennaway (2009.06.24.1403 BST) --

Bruce Abbott (2009.06.24.0835 EST)

Richard Kennaway (2009.06.23.1507 BST)
RK: One might compare this to the effect of genuine measurements. My
weight varies over a range of about 5 pounds -- suppose a standard
deviation of 2 pounds. When I weigh myself, my scales have a
resolution of 0.2 pounds. If I assume they're accurate, then that's an
improvement ratio of roughly 2/0.2 = 10. Equivalent correlation: way off the end of Table 1.

Genuine measurements? I don't understand the distinction you are making
between "genuine measurements" (weighing yourself) and predictions
based on regression. The measurements entered into regression analysis
can be just as "genuine" as any.

RK: But the act of predicting the other variable isn't a real measurement of
that variable. You can't measure someone's weight by measuring their
height, however accurately you do the latter. You really measure their
weight by standing them on a set of scales.

O.K., now I understand the distinction you are drawing. Using the predictor
variable(s) to estimate Y, rather than measuring Y directly, introduces
additional error variance that is an inverse function of the correlation
between the predictor(s) and Y. Not surprisingly, it's better to measure a
variable directly (using a reliable measure) than to attempt to infer its
value from a rather poorly correlated predictor.

RK: A strict Bayesian might say here that it's all evidence, it all adds up,
and it's a sin to discard it, but with low correlations that's like saying
that unplugging your mobile phone charger helps to economise on energy. If
the scales are accurate to 1%, or 6.6 bits of information, their height
won't significantly shift that measurement.

I am reminded of a study done by colleagues to investigate possible factors
contributing to performance in Elementary Psychology, as reflected in the
course grade-point average (GPA). The best individual predictor among those
investigated was the Nelson-Denny Reading Test, which had around a 0.6
correlation with GPA, if memory serves. To the extent that reading-test
scores measure reading ability under the conditions tested, about 36% of the
variation in GPA could be accounted for by differences in reading ability
among these college students. That left about 64% of the variation in GPA
unexplained. According to your analysis, this correlation provides a poor
basis for predicting the performance of any given student in the course.
However, if other, independent factors could be identified that correlate at
least moderately with GPA and were included in the regression, the
correlation might be raised high enough to make reasonably accurate
predictions of individual performance in the course. So yes, it's all
evidence, and (sometimes) it may add up to something useful. (In this case,
however, none of the factors investigated raised the multiple correlation
much above the simple correlation with reading test scores alone.)
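
[A sketch of the arithmetic behind the multiple-correlation remark: when a second predictor is uncorrelated with the first, the squared multiple correlation is just the sum of the squared simple correlations, so a second, weakly correlated predictor buys surprisingly little. Only the 0.6 reading-test correlation comes from Bruce's account; the 0.3 figure for a second predictor is invented.]

    # Squared multiple correlation with independent predictors.
    import numpy as np

    r_reading = 0.6     # reported simple correlation of reading score with course GPA
    r_other = 0.3       # hypothetical second predictor, assumed independent of reading score

    R2 = r_reading ** 2 + r_other ** 2
    print("R^2 with reading alone:", round(r_reading ** 2, 2))        # 0.36
    print("R^2 with both predictors:", round(R2, 2))                  # 0.45
    print("multiple correlation R:", round(float(np.sqrt(R2)), 3))    # about 0.67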

My colleagues hoped that their study would identify variables that could be
taken advantage of to improve student performance in the course. For
example, if reading ability influences how well students do in the course,
then perhaps students who score low in reading ability might be given a
remedial course in reading to improve reading speed and comprehension. At
that time students were routinely given a reading comprehension test upon
admission to the university and made to take a remedial course if they
scored below a certain cutoff. The Department decided to prevent students
who fell below the cutoff from registering for Elementary Psychology until
they had satisfactorily completed the remedial course. The practical effect
of this was that most of these students, being good little control systems,
simply registered for another course that did not impose this requirement.
In light of your results, it would appear that low reading comprehension
scores may have screened out many individual students who would have passed
Elementary Psychology. And in terms of raising the course average GPA, there
were too few supposedly poor readers being removed, and too many who did poorly despite good reading scores, to have a noticeable impact.

Bruce

[Martin Taylor 2009.06.24.23.57]
I posted a comment 24 hours ago almost exactly [Martin Taylor
2009.06.23.23.03] about the impossibility of getting high correlations
between x and z or between y and z when x and y are independent and
have a similar degree of influence on z. In that 24 hours, we have had
substantive comments from Rick, Bill, and Richard, none of whom
addressed the point I made. Rick just continued his usual mantra about
control being the only thing that can or should be studied, Bill more
or less repeated at greater length what I said about the problem with
“conventional” psych being the inability to test by modelling, and
Richard pointed out in one message that individual samples are even
less useful when dealing with a secondary influence than when dealing
with a first, and in another he repeated his recent point, which I
covered in detail in my earlier (1998, repeated in a recent thread)
analysis, about the lack of correlation where the influence is totally
causal, showing that you can’t use degree of correlation to judge
causal relationships. That is a point I have made many times over the
years, and I’m glad to see Richard pushing it.
It’s interesting that none of the comments addressed my main point,
which I can restate by quoting my original message: "Bottom line:
When more than one independent variable can substantially
influence a dependent variable, high correlations are impossible in
principle. Nevertheless, the existence of even a low correlation
between x and y shows either that x influences y, y influences x, or
something else influences them both.
Do not discard evidence of mutual influence just because the proportion
of variance accounted for is less than 0.5 (correlation 0.707).
Consider instead whether the influence indicated by the “low”
correlation is one that might be interesting to investigate further,
using different methods."

By “different methods”, I hope you understood “within a PCT framework”.

There’s another underlying point, which is that if there is a
consistent correlation between two variables of interest, it really
doesn’t matter whether the study was conducted in a way we might
consider proper. Either one variable affects the other or some
unspecified other variable affects them both. That is a fact. It is my
considered belief that properly understood, PCT will explain that fact
one day if at least one of the variables is a property of a living
organism. The fact itself should never be discarded, though any
proposed explanation of the fact may be, and probably will be, some day.

On the whole, I agree with most of the substance of the comments made
by Bill and Richard. I suppose I must, since, where they are at all
relevant to what I said, their gist more or less expands on what I
said. There are quite a few details with which I might take issue, but
I don’t think those details matter to my point. They might matter in a
different discussion, however, and it’s possible I might bring them up
some day in some other context.

Martin

[From Richard Kennaway (2009.06.25.1248 BST)]

[Martin Taylor 2009.06.24.23.57]
It's interesting that none of the comments addressed my main point, which I can restate by quoting my original message: "Bottom line: When more than one independent variable can substantially influence a dependent variable, high correlations are impossible in principle. Nevertheless, the existence of even a low correlation between x and y shows either that x influences y, y influences x, or something else influences them both.

Besides the response I posted three hours ago (but which has not shown up for me on CSGNET), another problem occurs to me with that principle.

Suppose that X and Y are causally unconnected, but both influence a third variable Z. We might have Z = X + Y + noise.

If you inadvertently select a sample, the members of which are drawn from a narrow range of Z, then you will observe a correlation between X and Y, in this case a negative one. In the whole population, X and Y have zero correlation, but their correlation conditional on Z is negative.

Therefore, if you observe that X and Y are correlated, you cannot conclude a causal influence of either on the other, or both from something else, unless you know that your method of sample selection has not introduced a spurious correlation.
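
[A minimal simulation of the selection effect Richard describes, assuming roughly normal variables and numpy; the noise level and band width are arbitrary choices made only for illustration.]

    # X and Y independent, Z = X + Y + noise; sampling only a narrow band of Z
    # induces a spurious negative X-Y correlation in the selected sample.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    z = x + y + 0.5 * rng.standard_normal(n)

    print("corr(x, y), whole population:", round(float(np.corrcoef(x, y)[0, 1]), 3))

    band = np.abs(z) < 0.2      # the inadvertent selection: keep only a narrow range of Z
    print("corr(x, y), narrow band of z:", round(float(np.corrcoef(x[band], y[band])[0, 1]), 3))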

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Richard Kennaway (2009.06.25.0946 BST)]
(actually a repost at 12:56, as the original message appears to have got lost)

[Martin Taylor 2009.06.24.23.57]
It's interesting that none of the comments addressed my main point, which I can restate by quoting my original message: "Bottom line: When more than one independent variable can substantially influence a dependent variable, high correlations are impossible in principle. Nevertheless, the existence of even a low correlation between x and y shows either that x influences y, y influences x, or something else influences them both.

I quoted that very paragraph at the top of my post of (2009.06.24.0705 BST), and went on to list all the causal networks consistent with the disturbance/perception/output correlations.

Do not discard evidence of mutual influence just because the proportion of variance accounted for is less than 0.5 (correlation 0.707). Consider instead whether the influence indicated by the "low" correlation is one that might be interesting to investigate further, using different methods. "

I am less optimistic about the usefulness of such a procedure. A correlation only tells you that something is going on somewhere. It doesn't narrow down the possibilities very much, especially if it's a weak one. If I can have a weak correlation between D and O, it would be easier to list the few causal graphs not consistent with the D/P/O correlations than the many that are consistent with it.

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bill Powers (2009.06.25.0813 MDT)]

Martin Taylor 2009.06.24.23.57 –

MT: I posted a comment 24 hours
ago almost exactly [Martin Taylor 2009.06.23.23.03] about the
impossibility of getting high correlations between x and z or between y
and z when x and y are independent and have a similar degree of influence
on z. In that 24 hours, we have had substantive comments from Rick, Bill,
and Richard, none of whom addressed the point I made.

BP: I don’t have the same level of competence concerning statistics that
others in this group have. However, for me the point has never been
whether correlations are high or low, except as a vehicle for
communicating with people who use correlations all the time. I don’t draw
any conclusions from the correlations I calculate – I use control theory
instead.

MT: It’s interesting that none
of the comments addressed my main point, which I can restate by quoting
my original message: "Bottom line: When more than one independent
variable can substantially influence a dependent variable, high
correlations are impossible in principle. Nevertheless, the existence of
even a low correlation between x and y shows either that x influences y,
y influences x, or something else influences them both.

BP: As I say, I’m not interested in correlations per se, so if I see
a low correlation, that will not lead me to investigate further because
there are so many more reliable relationships to investigate. See my
“Essay on the Obvious.” I assume that if correlation is low and
I still don’t understand the system, it’s because I have the wrong idea
of what is going on, or because it’s just low. When I look at
experimental results, all I care about is how well the model fits them,
and if it doesn’t fit very well (one possible low correlation situation),
how the model (or the theory) can be modified until it fits better.
Finding a correlation doesn’t show I’m on the right track – in fact,
that seems to be what you’re saying, too: a low correlation can be all
you’re ever going to get because a higher one is impossible. So that
makes the following seem contradictory to what you said before:

MT:" Do not discard
evidence of mutual influence just because the proportion of variance
accounted for is less than 0.5 (correlation 0.707). Consider instead
whether the influence indicated by the “low” correlation is one
that might be interesting to investigate further, using different
methods."

By “different methods”, I hope you understood “within a
PCT framework”.

BP: If a low correlation is all that is possible to obtain, what is there
to “investigate further?” Why not just skip the correlation
part and go on investigating? I didn’t use statistics to arrive at the
PCT model: I used control theory and experience with designing and
building control systems.
The methods that depend on statistics assume that relationships among
variables are hard to find and that the best we can usually hope for is
just a hint, extracted with much labor, from data that consist
mostly of noise. With no other choice than doing the labor, a scientist
will be grateful to fate for granting him a little hint, and will work as
hard as necessary to pursue it. On the other hand, if you’re constructing
models, any failure to predict properly is taken as an error in the
model. You don’t just keep turning the crank and hoping to get steak
instead of hot dogs out of the machine. You fix the machine.
You say, "When more than one independent variable can substantially influence a dependent variable, high correlations are impossible in principle."
The control system equations are a good example of a case
like this: the disturbance and the reference signal, both independent
variables relative to the control system, substantially influence all the
variables in the control loop, which are all dependent variables (you can
solve for any one of them). What this means is that if you know both the
disturbance and the reference condition, you can calculate the exact
values of all the other system variables from the model. If there’s an
integrating output function you would calculate low correlations between
input and output variables, but since they are all functions of the two
independent variables, that’s irrrelevant. You can already calculate the
system variables exactly, without any uncertainty, so the correlations
are of no interest. Neither does a low correlation imply that you can
improve your knowledge by looking for a higher correlation.

Calculating the exact values of system variables in a model does not, of
course, guarantee that they will match measurements of the real variables
exactly. That’s a different matter. When there are errors in the fit, the
first thing we have to do is look for systematic, not random,
errors. It’s not even a statistical problem. It’s a matter of examining
the system more closely to get a better idea of the actual relationships
among the variables. Only when you find that all the residuals are random
do you give up and attribute the errors to unpredictable system noise.
And then, of course, since the errors are truly random, you’ve done all
you can.

I think the point Richard K. may be leading up to, or about to discover,
is that correlations are what they are and do not increase our
understanding at all. They provide an illusion of understanding. The
whole point of statistics is that it purportedly can reveal the presence
of a relationship without giving any information about what it really is.
As far as model-building is concerned, “degree of relatedness”
is an empty phrase: once you see that there is some degree of
relatedness, you still have to do exactly the same things you have to do
in modeling to find out what the relationship is. I don’t think that
modeling depends very often on tracking down relationships through
statistical methods. Mostly you just examine the real system and try to
represent its components as a system of equations which you can then
solve analytically or by simulation. And that usually works, in my
experience.

MT: There’s another underlying
point, which is that if there is a consistent correlation between two
variables of interest, it really doesn’t matter whether the study was
conducted in a way we might consider proper. Either one variable affects
the other or some unspecified other variable affects them both. That is a
fact. It is my considered belief that properly understood, PCT will
explain that fact one day if at least one of the variables is a property
of a living organism. The fact itself should never be discarded, though
any proposed explanation of the fact may be, and probably will be, some
day.

BP: My view is that models will entirely supplant the statistical
approach, because models propose systematic relationships, not simple
straight-line approximations that only scratch the surface of a
phenomenon. I think we can freely discard low correlations because if
there is really any relationship there (there is probably not), we will
discover it again through modeling and get a far better understanding of
it.

MT: On the whole, I agree with
most of the substance of the comments made by Bill and Richard. I suppose
I must, since, where they are at all relevant to what I said, their gist
more or less expands on what I said. There are quite a few details with
which I might take issue, but I don’t think those details matter to my
point. They might matter in a different discussion, however, and it’s
possible I might bring them up some day in some other
context.

BP: It’s always frustrating to think that you have made a new point only
to have it be ignored. I often have that feeling, since I’ve been making
the same points I'm making now for 50-some years. On the other hand, you
can be encouraged by seeing others making the same points, which can
indicate that your ideas have independent support, or perhaps that they
have made some inroads even when others don’t remember that they learned
them from you. In the end, what matters is that new ideas get preserved
and spread while the rest sink back into the primordial ooze where they
came from. We are all better off for that.

Best,

Bill P.

[From Richard Kennaway (2009.06.25.1705)]

[From Bill Powers (2009.06.25.0813 MDT)]
I think the point Richard K. may be leading up to, or about to discover, is that correlations are what they are and do not increase our understanding at all. They provide an illusion of understanding.

Heh. Here's something I posted this morning to the lesswrong.com blog:

"Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can't possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don't even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog."

I've hooked one person there, although he's now at the stage of saying "PCT is great, but it's even better if you combine it with all these other psychological theories!"

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bill Powers (2009.06.25.1051 MDT)]

Richard Kennaway (2009.06.25.1705) --

JRK: "Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can't possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don't even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog."

I've hooked one person there, although he's now at the stage of saying "PCT is great, but it's even better if you combine it with all these other psychological theories!"

I've been having some interesting conversations with David Goldstein, who also sees things of value in other approaches even though he's one of the oldest insiders in PCT. I hope David will post some of his arguments, because while I disagree with him, he is articulate and organized about his ideas and he demands some heavy thinking from us. I often find it hard to answer him.

The basic problem, I think, is with "other psychological theories." Is PCT just another psychological theory, or is it different in some basic way from what psychologists think of as a theory? The "locus of control" idea is an example. What is the locus of control? As near as I can see, it is a person's belief about what determines a person's experiences. If a person believes, or knows, that forces external to him determine what will happen, the "locus of control" is external and the person will try less hard to control, believing it to be impossible. So why isn't it useful to do tests about the locus of control in order to predict when a person will and will not try to exert control?

My answers to that question are multiple:

1. Such predictions will be wrong a lot of the time when applied to individuals. If it doesn't bother you to be flat wrong one time in three, then go ahead and predict. But it bothers me.

2. The general prediction is, if a person believes X to be impossible, the person will not try to do X. But that is almost tautological: the person believes that trying to control X will not work; the person does not try to control X. That's like saying the same thing twice in a row. Why do you not try to control X? Because it's impossible. How do you know it's impossible to control X? Because when I try to, it doesn't work. That comes down to "I can't control X because I can't control X." This is a theory?

3. Even though the word "control" is used in "locus of control", the subject is not control in the PCT meaning. Instead, "to control" means "to influence or determine." "External control" in PCT would mean that if X is externally controlled, anything you do to change X will result in some external agency's acting on X to restore it to its former condition. So if you say the wind controls the steering of a car, you're saying that any attempt you make to change the car's direction will result in the wind's changing its own direction and speed in just the way needed to cancel your attempt to change the car's direction, which is very unlikely. Control always is aimed at achieving or maintaining some goal condition. The wind does not try to maintain a goal condition of the car's motion. Therefore it does not control the car's motion.

So if all people mean is that they can control some experiences and not others, so what? Where is the theory in that? Why would we want to know that?

4. What a person believes is an accident of upbringing and experience, and has no significance as a theory of how people are organized. We could interact with people and find out what they believe, but that would only let us predict some of their actions some of the time and would not tell us anything about how they are basically organized. PCT is a theory about how people are basically organized. Most other theories are about behaviors people will carry out under certain circumstances, which is why they predict so poorly. There are too many variables to allow recreating any particular "circumstance." And anyway, it's not circumstances that determine behavior.

I hope we can get organized here and figure out the relationship between PCT and those "other theories."

Best,

Bill P.

[From Rick Marken (2009.06.25.1250)]

Martin Taylor (2009.06.24.23.57) –

Well, I can see that my attempts to drive you away have been manifestly unsuccessful. I think I’ll just throw in the towel and assume you’re going to stick with it no matter how inconsistent it is with conventional psychology. I knew you could hang in there. Bravo!

I posted a comment 24 hours ago almost exactly [Martin Taylor
2009.06.23.23.03] about the impossibility of getting high correlations
between x and z or between y and z when x and y are independent and
have a similar degree of influence on z. In that 24 hours, we have had
substantive comments from Rick, Bill, and Richard, none of whom
addressed the point I made. Rick just continued his usual mantra about
control being the only thing that can or should be studied

I see that Bill and Richard have made their own wonderful replies to this. Let me just say that I did address your point about the impossibility of getting “high correlations between x and z or between y and z when x and y are independent and
have a similar degree of influence on z”. But perhaps I didn’t make it clearly enough. I answered from the point of view of conventional psychology, where z is a behavioral (dependent) variable and x and y are two environmental (independent) variables. If the two IVs are the only influences on the DV and the degree of influence of each is equal, then (ignoring error) each IV accounts for half the variance in the DV. So the r2 for each IV is .5 so r will be .707 (as you say). If there were three IVs that had equal and independent influences on the DV then each IV accounts for 33% of the variance in the DV and the correlation of each IV with the DV will be .57.
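
[A quick numerical check of the arithmetic in the preceding paragraph, with equal, independent influences and no error term: each IV's correlation with the DV is sqrt(1/k), i.e. about .707 for two IVs and about .577 for three (which Rick rounds to .57). The sample size is arbitrary and numpy is assumed available.]

    # Verify corr(IV, DV) = sqrt(1/k) for k equal, independent influences and no error.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    for k in (2, 3):
        ivs = rng.standard_normal((n, k))
        dv = ivs.sum(axis=1)                 # equal, independent influences, no error
        r = np.corrcoef(ivs[:, 0], dv)[0, 1]
        print(f"{k} IVs: corr(IV1, DV) = {r:.3f}   (theory: {np.sqrt(1 / k):.3f})")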

So when a conventional psychologist does an experiment and finds that the IV(s) account for, say, 30% of the variance in the DV, he or she assumes that some other variables can account for a large portion of the remaining 70% of the variance. That’s what the enterprise of conventional research is about: finding the variables that together account for all the variance in a DV (or, at least, all of the non-noise portion of the variance in the DV). The success of this enterprise depends, to some extent, on the open-loop “general linear” model of behavior being correct. My point was simply that after over 100 years of pursuing this research program, psychologists are rarely able to account for more than about 30% of the variance in any DV (behavior) using any combination of IVs. To me, this is a profound failure of behavioral science and strongly suggests that the open-loop model of behavior on which this research is based is wrong.

Bill and Richard make much stronger points in response to your post. But I do think my point above is another nail in the coffin of conventional psychology. But, apparently, no matter how many nails we put in, conventional psychology seems to always rise, Dracula-like, for those who have been bitten. I guess that, like Dr Van Helsing, I just have a strong vill;-)

···


Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2009.06.25.16.42]

[From Richard Kennaway (2009.06.25.1248 BST)]

[Martin Taylor 2009.06.24.23.57]
It's interesting that none of the comments addressed my main point, which I can restate by quoting my original message: "Bottom line: When more than one independent variable can substantially influence a dependent variable, high correlations are impossible in principle. Nevertheless, the existence of even a low correlation between x and y shows either that x influences y, y influences x, or something else influences them both.

Besides the response I posted three hours ago (but which has not shown up for me on CSGNET), another problem occurs to me with that principle.

Suppose that X and Y are causally unconnected, but both influence a third variable Z. We might have Z = X + Y + noise.

If you inadvertently select a sample, the members of which are drawn from a narrow range of Z, then you will observe a correlation between X and Y, in this case a negative one. In the whole population, X and Y have zero correlation, but their correlation conditional on Z is negative.

Therefore, if you observe that X and Y are correlated, you cannot conclude a causal influence of either on the other, or both from something else, unless you know that your method of sample selection has not introduced a spurious correlation.

You are quite right. I hadn't thought of this possibility -- I was thinking of the underlying correlations rather than the estimates from sampled data -- but it is an important possibility. You bring up a problem that has always dogged studies of sampled populations: "How can you be sure that your sampling has not generated the result in question?" Typically, one would be looking at X and Y, knowing nothing of Z. You don't know that something is clamping Z, and therefore see a relation between X and Y that vanishes when the clamp is released. Nasty! But if you do know about Z, not nasty, but potentially interesting.

In your other message [From Richard Kennaway (2009.06.25.0946 BST)] in which you pointed out that you had indeed quoted my comment right up front, I understood the point of your argument to hinge on the fact that non-correlation does not imply lack of causal influence, and that therefore in a system of several variables, you can't use the level of correlation to judge the relative strength or directness of causal influences. You used a control system as an example. In a control system, every variable is causally related to the disturbance and reference variables, but none is causally related solely to either of those. The direct causal links include some for which the correlation is zero or near zero (e.g. error -> output, disturbance -> perception). It's a nice illustration of the point.

My point, however (which is somewhat diluted by your sampling argument when we consider real-world studies) is that the existence of correlation does imply a causal link. You point out that if the correlation is between x and y the causal network could be (1) x->y, (2) y->x, (3) w->x and w->y, or (4) x->z and y->z. Any one of these could indicate a fact that might be important to examine more closely. Case 4 is often of interest when z is deliberately fixed as a conditional.

Can you think of other cases?

Martin

[Martin Taylor 2009.06.25.17.04]

[From Rick Marken (2009.06.25.1250)]

Martin Taylor
(2009.06.24.23.57) –

Well, I can see that my attempts to drive you away have been manifestly
unsuccessful. I think I’ll just throw in the towel and assume you’re
going to stick with it no matter how inconsistent it is with
conventional psychology. I knew you could hang in there. Bravo!

I must admit that your attempts to drive me away sometimes do succeed
– for a while. But then I think to myself: “PCT is REALLY important,
and I should not allow myself to be discouraged by polemics that have
the effect of trying to separate PCT from the body of Science (and I
don’t mean 'from conventional Psychology”)". So I return, and try once
more to treat PCT as a science rather than as the private preserve of a
religious elite.

What I really fail to understand is why you have such a strong gain
controlling your twin perceptions (1) that PCT is not a real science,
and (2) that my objective is to subvert PCT rather than to develop it.

I posted a comment 24 hours
ago almost exactly [Martin Taylor
2009.06.23.23.03] about the impossibility of getting high correlations
between x and z or between y and z when x and y are independent and
have a similar degree of influence on z. In that 24 hours, we have had
substantive comments from Rick, Bill, and Richard, none of whom
addressed the point I made. Rick just continued his usual mantra about
control being the only thing that can or should be studied

I see that Bill and Richard have made their own wonderful replies to
this. Let me just say that I did address your point about the
impossibility of getting “high correlations between x and z or between
y and z when x and y are independent and
have a similar degree of influence on z”. But perhaps I didn’t make it
clearly enough. I answered from the point of view of conventional
psychology, where z is a behavioral (dependent) variable and x and y
are two environmental (independent) variables. If the two IVs are the
only influences on the DV and the degree of influence of each is equal,
then (ignoring error) each IV accounts for half the variance in the DV.
So the r2 for each IV is .5 so r will be .707 (as you say). If there
were three IVs that had equal and independent influences on the DV then
each IV accounts for 33% of the variance in the DV and the correlation
of each IV with the DV will be .57.

Yes, you did say this, but I could not see that it said anything
relevant to my point. It only illustrated my statement about the fact
that when you have truly causal effects of several variables on
another, you can’t get high correlations. You gave a hypothetical
example in which there are indeed hard links between several IVs and
one DV, and showed the way a conventional psychologist would use
relatively low correlations to argue for the existence of these causal
connections. Scientifically, the fact that these causal relations exist
is what matters. The fact that conventional methods don’t tease them
out is a knock on conventional methods, not a reason for dismissing the
fact that those conventional methods highlighted a fact that needs
explaining.

So when a conventional psychologist does an experiment and finds
that the IV(s) account for, say, 30% of the variance in the DV, he or
she assumes that some other variables can account for a large portion
of the remaining 70% of the variance. That’s what the enterprise of
conventional research is about; finding the variables that together
account for all the variance in a DV (or, at least, all of the
non-noise portion of the variance in the DV).

So far, I suppose I’m with you. I would call that the “Exploratory
stage” in which the researcher finds suggestions of some effects that
are going to need explanation, following which a real science requires
a modelling stage that uses the facts suggested in the exploration to
ascertain what really might be going on. You can’t really do useful
modelling unless you have a reasonable idea of the territory being
modelled, any more than a 17th century explorer could navigate freely
through the barrier reef near a newly discovered continent. We build
understanding on top of probability, not on guesswork. Within PCT, we
do usually control quite a few variables at any one moment, and quite
often one of these control systems will influence another. You don’t
find those by conducting a single “Test for the Controlled Variable” or
by modelling the control of one continuous variable using linear
operators. To study such situations is well within PCT, but to find
what may be conflicting with what, correlation might sometimes be
useful – might it not?

Richard’s example today [From Richard Kennaway (2009.06.25.1248 BST)],
of x and y both influencing z is a case in point; if z is a controlled
variable disturbed by x and y, then x and y will be correlated despite
being causally unconnected. What the correlation does is to show that
there is something in the environment that connects them, and to seek
out that something (guessing that there might be a controlled variable
that they both disturb, for example) could be a valuable exercise.

The success of this enterprise depends, to some extent, on the
open-loop “general linear” model of behavior being correct.

Here I disagree. Unless by “to some extent” you mean “sometimes, for
some researchers, investigating some problems”.

Much more important, I think, is whether the researchers have taken to
heart William James’s comment about the shape of the bottle not being
important so long as one gets drunk. Whether the researcher knows how
to analyze general linear control systems (the kind usually modelled in
PCT studies) is much less important than whether they consider the
effects of feedback. Few do, but that’s irrelevant in some studies
(here you and I have a strong disagreement, which I take to depend on
our different attitudes toward whether PCT follows the rules of normal
science such as the laws of thermodynamics, or rules of consistency in
applying its own internal laws).

My point was simply that after over 100 years of pursuing this
research program, psychologists are rarely able to account for more than
about 30% of the variance in any DV (behavior) using any combination of
IVs.

Have you ever really looked at the modelling of auditory detection and
discrimination data? Truly?

Bill and Richard make much stronger points in response to
your post.

I don’t really think they made any point relevant to mine in their
messages of yesterday, though Richard has made a very relevant point
today. I have some disagreements with what Bill posted both yesterday
and today, but since those disagreements are off-thread, I’m not
concerned with exposing them now, nor am I in a position to take the
time to deal with them properly in the thread that would probably
follow if I did go into them.

Martin

[From Dick Robertson, 2009.06.26.1100CDT]

[From Bill Powers (2009.06.25.1051 MDT)]

I hope we can get organized here and figure out the relationship
between PCT and those “other theories.”

How coy can you get? Don’t we all already know that the relationship between PCT and those “other theories” is like the relationship between Ptolemy’s cosmology and Copernicus’s? However if I’m not mistaken, sailors continued to use the tables Ptolemy had created about the positions of stars at different dates of the year to help decide where they were and in which direction to go, even after they were convinced that Copernicus had the right idea about the relationship between earth and sun.

I wonder if that comparison has any similarity to what Martin has in mind in arguing for taking “findings” from conventional psychology studies and looking at them from a PCT point of view? Just a thought.

The discussions about what inferences one can draw from the math in the correlation examples – I wonder whether Warren might have some graduate students who would illustrate them with concrete examples of different research studies, for IMP2? Like along the lines of one of the illustrations that Bruce Abbott gave recently.

You guys who are conversant with the math don’t need to see it illustrated with concrete examples, but it could help those of us who are interested but have trouble drawing meanings from looking at the different equations.

Best,

Dick R

···

[From Bill Powers (2009.06.26.1044 MDT)]

Dick Robertson, 2009.06.26.1100CDT --

How coy can you get? Don't we all already know that the relationship between PCT and those "other theories" is like the relationship between Ptolemy's cosmology and Copernicus's? However if I'm not mistaken, sailors continued to use the tables Ptolemy had created about the positions of stars at different dates of the year to help decide where they were and in which direction to go, even after they were convinced that Copernicus had the right idea about the relationship between earth and sun.

Sure, we know it, but can we make that obvious to anyone else?

The point I'm hoping we can make is that the standard psychological categories don't explain anything. Depression, anxiety, and so on through the DSM are categories of visible behavior patterns and feeling states, not diagnoses of underlying causes. They call to mind a constellation of symptoms without ever saying what they are symptoms of.

Warren Mansell has been writing about the "transdiagnostic" approach, which seems to be gaining support. This approach takes us a step closer to abandoning the disease categories and beginning to look for answers to the question, "What is wrong with a person diagnosed as X?" The usual answer in the past has been "What's wrong is that this person has X disorder." But with control theory we can look for a completely different sort of answer. We can ask about the person's hierarchy of perceptions and goals, and the degree of success the person has in controlling the relevant variables. We can investigate conflicts and other causes of chronic error; we can see whether perceptions have been defined so they are controllable, and whether skills are missing.

In other words, rather than dealing with invented categories, we can use control theory to investigate directly how well a given hierarchy of control is working. Doing this means asking questions never asked by psychology, and it also means ceasing to have much interest in the questions that are normally asked. This causes trouble: a psychologist asks, "What does PCT have to say about the Locus of Control?" But I'm not interested in the locus of control and PCT isn't concerned with that sort of thing at all. I don't really have anything to say about it, and shouldn't even try.

I'm still trying to find a base of ideas from which to work, here. Don't hold me to my mental wanderings. I must be reorganizing, because I have a sense of disorganization. With luck, something may condense out of the fog.

Best,

Bill P.

[From Rick Marken (2009.06.26.1610)]

Martin Taylor (2009.06.25.17.04)–

What I really fail to understand is why you have such a strong gain
controlling your twin perceptions (1) that PCT is not a real science,
and (2) that my objective is to subvert PCT rather than to develop it.

I can see why you might think this. If your idea of real science is the open-loop causal approach that is considered to be the scientific method in psychology, then the closed-loop approach I am arguing for may look like I’m trying to say PCT is not real science. Actually, PCT does suggest a pretty new way of doing science, since it is based on the possibility that the systems under study are closed-loop rather than open-loop systems. Up to this point, all “real” science has been based on the assumption that the system under study is open loop. This assumption seems to work fine in the physical sciences, but in the life sciences, not so much. And I certainly don’t think your goal is to subvert PCT. I just don’t think you (or anyone) can make any real scientific contribution to PCT until you understand the importance of using closed-loop methods to understand closed-loop systems…

So the r2 for each IV is .5 so r will be .707 (as you say). If there
were three IVs that had equal and independent influences on the DV then
each IV accounts for 33% of the variance in the DV and the correlation
of each IV with the DV will be .57.

Yes, you did say this, but I could not see that it said anything
relevant to my point. It only illustrated my statement about the fact
that when you have truly causal effects of several variables on
another, you can’t get high correlations. You gave a hypothetical
example in which there are indeed hard links between several IVs and
one DV, and showed the way a conventional psychologist would use
relatively low correlations to argue for the existence of these causal
connections.

That was not my point. My point was that the goal of scientific research is to explain the variance in some behavior (DV). One goal of research is to find the variables that explain this variance. Eventually, research should lead to identification of the variables that account for nearly all the variance in a behavior. In fact, after 100 years of research, this has never happened with any behavior.

Scientifically, the fact that these causal relations exist
is what matters. The fact that conventional methods don’t tease them
out is a knock on conventional methods, not a reason for dismissing the
fact that those conventional methods highlighted a fact that needs
explaining.

Have you ever read Bill’s 1978 Psychological Review paper? If not, I wish you would please read it and post a critique. What you say here – that conventional methods highlight the existence of causal relationships – contradicts the main point of that paper. What I got from Bill’s paper is the idea that where conventional psychology sees an IV having a significant (causal) effect on a DV, control theory sees a behavioral illusion.

Within PCT, we
do usually control quite a few variables at any one moment, and quite
often one of these control systems will influence another. You don’t
find those by conducting a single “Test for the Controlled Variable” or
by modelling the control of one continuous variable using linear
operators. To study such situations is well within PCT, but to find
what may be conflicting with what, correlation might sometimes be
useful – might it not?

I have no idea what you are talking about here so I can’t answer your question.

Richard’s example today [From Richard Kennaway (2009.06.25.1248 BST)],
of x and y both influencing z is a case in point; if z is a controlled
variable disturbed by x and y, then x and y will be correlated despite
being causally unconnected.

This is simply not true. If x and y are disturbances to a controlled variable, z, they can be completely uncorrelated.

My point was simply that after over 100 years of pursuing this
research program, psychologists are rarely able to account for more than
about 30% of the variance in any DV (behavior) using any combination of
IVs.

Have you ever really looked at the modelling of auditory detection and
discrimination data? Truly?

Certainly, and I’ve even done some of it. I said “rarely” able to account for more than 30% of the variance because that is the mean (and modal) value based on my review of research. But you do sometimes find very high r2 vaues in experimental research, usually using just a single IV as the predictor. All this means is that the research involved a situation where the person was able to perfectly resist the disturbance (IV) to the controlled variable with their behavior (DV).

Rather than carry on with this argument, I would really like to hear what you think of Bill’s 1978 Psychological Review article. That’s the one that really got me into PCT. It seems like you disagree with the basic conclusion of that paper. I’d really like to know what you think of it.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com