[From Kenny Kitzke (2009.11.20)]
Richard, your post is sunshine that makes me “high” today, as some needed rain in western Pennsylvania is ending and the sun is peeking out. I think that “high” describes an emotion! It’s also in a favorite song by John Denver, whose music almost always makes me “high.”
Correlation is a useful statistical tool. I have some knowledge of it from a college course on business statistics. That knowledge was expanded when at the age of 40, I entered the field of (statistical) quality and process control typically referred to as SPC (statistical process control). SPC was touted as one of the valuable tools in Japan’s ascent to product, business and economic parity and even global dominance in some industries.
As you, and some of our listmates, may well know, another more popular tool for problem solving is the Cause and Effect Diagram (CED), more properly called the Ishikawa Diagram after its Japanese inventor, and often called the Fishbone Diagram.
Both tools are popular in the business world but the CED is used far more frequently, especially in service rather than manufacturing businesses. These tools are often taught to “experts” known as “black belts” under such company performance improvement systems as Six Sigma developed and made famous by USA companies like Motorola, General Electric, etc. But, these systems have also taken root in some service businesses. Just yesterday, at a monthly Westinghouse retiree breakfast, one man told me his daughter just received her “black belt” and she works for the second largest health insurer in our region, a company named Highmark Blue Cross and Blue Shield.
The reason for this posting is that the improper use of CEDs and Correlation in problem solving is a hot button (a big disturbance, in PCT speak) for me. I have seen so much misuse by the blind leading the blind that I am appalled. The misuse of such analysis tools in fields like economics and health care is like pouring gasoline on a fire.
So, I want to thank you and encourage you to pursue this topic. Better applications of these tools can have real impact in the decisions made about business and social policy. One of the original attractions for me to PCT was the idea of “control” as applied in human behavior rather than in inanimate processes.
My only business contract in semi-retirement is ending in December. If I fully retire, I will have more time to do work relevant to these tools as they relate to PCT. So, I urge you to continue to post what you discover in this book. Your comment about how PCT analyses can produce seemingly paradoxical results, where highly correlated variables can have no causal connection while a weakly correlated variable can actually reveal what is called a “root cause,” is particularly profound. Of all the black belt practitioners I have met, I would be shocked if anyone understood this. Yet, I believe a knowledge of PCT and testing for controlled variables would bring a paradigm revolution in causation and in how to solve problems more effectively.
I think I will post some opening thoughts about Root Causes, buzz words in business and politics that often ignore reality and misuse statistical tools. I perceive that CSG Net is blessed with folks like you, Bill, Rick, Martin, Dag, David, Dick, Bruce A., Bruce N., etc., whose knowledge of PCT combined with individual vs. group (population) statistics (specimens and nets) can really add value to the ability of people to understand and solve problems for people and society.
Again, thanks for the sunshine, which I now see out my window.
In a message dated 11/20/2009 9:25:34 A.M. Eastern Standard Time, R.Kennaway@UEA.AC.UK writes:
“Any researcher in the behavioral or social sciences can sympathize
with the urge to make causal inferences from correlational data. We
persistently find these moderate .40-.80 correlations between
variables. It is frustrating not to know the causes of these
correlations. We can speculate about causality, and we can elaborate
the speculations with complex statistical analyses. Such speculation,
whether based on simple or complex statistics, does not demonstrate
causality, and to assume that it does is frustration-reduction, not
scientific rigor. What is needed in order to discover causal
relations is (a) better data that yield closer relations, (b)
creative but self-critical attempts to validate the implications of
the models that are developed, and (c) insight and ingenuity in
creating appropriate models rather than reliance on mechanical
creating appropriate models rather than reliance on mechanical
statistical processes. Until our disciplines can take such steps,
they will flounder in the limbo that several sardonic observers have
dubbed “casual muddling” not causal modeling. One can feel that it is
not “causality” that is in crisis. Rather, it is those disciplines
that accept weak evidence for causality as proving that relation that
are themselves in crisis.”
Norman Cliff, ending a review of V.R. McKim and S.P. Turner (Eds.).
“Causality in Crisis? Statistical Methods in the Search for Causal
Knowledge in the Social Sciences.” (Notre Dame, 1997). Full review
(probably behind a paywall) in Psychometrika, V. 64, N. 2, 253-257.
I have the volume on order through interlibrary loan. The context
for this is that I’ve recently been looking at books and papers on
causal analysis of statistical data, and comparing the techniques
with the reality that control systems often produce the paradoxical
effect of low correlations between causally connected variables and
high correlations where there are only indirect causal connections.
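That paradox is easy to demonstrate in simulation. Below is a minimal sketch (my own illustrative setup, not from the book or from any specific PCT model) of a simple negative-feedback loop: a disturbance d directly causes changes in the controlled variable q, yet the controller keeps q near its reference, so the sample correlation between d and q comes out near zero, while d and the output o, connected only indirectly through the loop, correlate almost perfectly. All the gains and noise levels here are arbitrary assumptions chosen for stability.

```python
import random, math

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(1)
d_hist, o_hist, q_hist = [], [], []
o, d = 0.0, 0.0
gain, dt = 1.0, 0.1          # integrating controller, reference = 0
for _ in range(5000):
    d += random.gauss(0, 0.2)    # slowly drifting disturbance
    q = o + d                    # controlled variable: d causes q directly
    o += gain * (0.0 - q) * dt   # output integrates the error, opposing d
    d_hist.append(d)
    o_hist.append(o)
    q_hist.append(q)

print("corr(d, q):", corr(d_hist, q_hist))  # small, though d directly causes q
print("corr(d, o):", corr(d_hist, o_hist))  # near -1, though the link is only via the loop
```

An observer who saw only the correlations would conclude that d drives o and has little to do with q — exactly backwards from the causal structure of the loop.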
Richard Kennaway, firstname.lastname@example.org, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.