From Tom Bourbon [940802.1418]
[from Jeff Vancouver 940728]
Tom Bourbon [940725.1200]
Briefly, 1) organizations are going to select (discriminate) regardless of
whether psychologists provide them with tests (they will do something because they must)
Fine, so let them do it. Just don't expect me to participate in the
misapplication of poor psychological data in a manner that unjustifiably
harms the people who are tested and discriminated against. People who do
such things do so because they intend to do so, not because they must. If
psychologists are satisfied to earn a fat fee by helping employers in that
discriminative task, power to them. For me, I've taken the PCT poverty vow.
2) prior to psychologists providing those tests, organizations tended to discriminate
unfairly (the popular notion of discriminating) and poorly; that is,
organizations used methods that predicted performance very poorly.
And now they discriminate fairly? Using "instruments" that are wrong in
from 80% to 95% of the cases? Sorry, but I don't buy into that. The
tests harm many more innocent people than they help.
4) now _some_ organizations use methods that predict performance much better
(particularly when used together) and thus save the organizations large
amounts of money
Much better? In your original post on this thread, you said the correlation
between interviews and job performance was about .1 (any data on that?) and
that the correlation between psychological tests and job performance ranges
from .3 to .6 -- correlations that yield the percentages of incorrect
predictions I mentioned above. Even I can see that the proportion of failed
predictions went from .995 (99.5% of them were wrong) when only the
interview was used, to .954 or .80 with the tests. I can see the
difference, but I can also say, as a psychologist expressing a personal
opinion, that the difference isn't something I would be proud of.
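For anyone who wants to check the arithmetic, here is a quick sketch in
Python. The assumption is that the figures above are the coefficient of
alienation, sqrt(1 - r^2), read as the proportion of failed predictions:

  import math

  # Assumption: the percentages quoted above are the coefficient of
  # alienation, sqrt(1 - r^2), for each validity coefficient r.
  for r in (0.1, 0.3, 0.6):
      print(r, round(math.sqrt(1.0 - r * r), 3))

  # 0.1 0.995   <- interviews
  # 0.3 0.954   <- low end of the test range
  # 0.6 0.8     <- high end of the test range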
Much better? Could you describe your criteria for making that statement?
As for saving the organizations money, if they say so, I accept it. After
all, we are talking about _their_ bottom lines.
You say that when tests are used in combination, the results are even
more impressive. But if the tests are independent, as the tester would
want them to be, then the results are multiplicative. If you use two
independent tests, each of which correlates .3 with job performance, then
each of them "explains" .09 of the variance -- when used alone. When you
use them together, they do not explain .09 + .09 = .18; but .09 * .09 =
.008. Use them together and you are worse off than with either used alone,
and neither was very good when used alone.
5) individuals who are not selected by these tests are often better off
because they would have been fired eventually or would not have done well,
which is usually frustrating and debilitating.
Hmm. That's _very_ interesting. Let me try to get this straight, because
you seem to be alluding to a breakthrough in predictions that is of major
proportions. At a correlation of .3, a test would misclassify 95.4% of the
takers with regard to their performance on the job for which they applied.
Yet you are saying that many of the people were in fact _correctly_ identified,
and further that those who were rejected would indeed have done sufficiently
poorly that they would have been fired. Can you tell us how someone would
decide whether any particular person who was rejected would have been one
of the sure-fire fired failures? We could probably make a fortune by
applying your technique.
6) the general public is often better off (we do not want airplane pilots
who cannot fly very well, which we might not be able to tell except under
adverse -- or, in this case, simulated adverse -- conditions).
Agreed, and I'm damned pleased the pilots who took me to the meeting and
back were good at their profession. But you were talking about something
else -- tests that lead to wrong conclusions in from 80% to 95.4% of their
administrations. Pilots aren't selected that way.
7) individuals can use the results of tests to clue them in to deficiencies
and competences -- and often do.
Sorry, I don't follow you here. Can you help me?
bottom line: tests give us more information than no tests. We must use
that information responsibly (and we have associations that attempt to see
that we do).
Once more, I do not deny that there is a difference between 99.5% errors
and 95.4% errors. Speaking for myself, I think the only way to use such
information responsibly is to warn the public and do all we can to eliminate
the present abuse of innocent test takers.
But psychologists help develop the tests and the methods for using the
information gained from them responsibly. (e.g., we have always advised
against using the MMPI for selection purposes -- it was designed to aid
in diagnosis).
Yes, psychologists often do try to prevent applications of their tests
outside the settings for which they were designed. I respect (some of)
those efforts. However, I'm afraid my concerns also extend to applications
in the original, intended settings. Poor correlations are poor correlations,
no matter where they occur; abuses that arise from the application of poor
correlations are abuses, no matter where they occur.
I just picked up Runkel's "Casting Nets" book. He acknowledges the uses of
the method of frequencies. This is what I have described above.
I'm glad to hear you got Phil's book. I would recommend it to _everyone._
However, I don't think the tests you described illustrate the method
described by Phil. In fact, I believe Phil would identify most uses of
psychological assessment as _inappropriate_ applications of the method of
relative frequencies. When it is used properly, the method of relative
frequencies tells you that certain proportions of people are found in
certain categories. As important as that result can sometimes be, that
is all the method of relative frequencies tells you. It leaves you in a situation
where you can make _absolutely_ _no_ statement about any particular
individual. Any application of group data (even of properly collected
group data) to specific individuals is unjustified.
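To put that in the plainest terms, here is a little sketch with
hypothetical numbers (the 300-out-of-1000 figure is invented purely for
illustration):

  # Hypothetical numbers: what the method of relative frequencies does,
  # and does not, tell you.
  succeeded = 300               # say 300 of 1000 hires later succeed
  total = 1000
  print(succeeded / total)      # 0.3 -- a statement about the group

  # For any one applicant, that figure licenses nothing stronger than
  # "about 3 chances in 10"; it cannot say whether *this* person will
  # succeed or fail.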
The method of specimens (and PCT) is what our profession needs to better
understand humans and thus construct better tests (instruments is the
better word, but too long).
Yes! On this, I believe Phil Runkel would agree, as well. And the _only_
way to design better tests for predicting what a given individual will do is
to study people one at a time and, paradoxically, thereby learn something
about how all of them "tick." Phil called that kind of research the method
of specimens. PCT is an example of a science that studies individuals, as
specimens of the species, or more generally as specimens of life.
Let us know what else you think of Runkel's book.
Tom Bourbon [940725.1633]
See my address above for sending the models paper. Appreciate it.
Great. A copy of "Models and their worlds" will be in the mail tomorrow.
If you read it, that will make a total of five or six people in all the
world. 
I am still waiting to hear your reply to the rest of my post. I do not,
nor does Bandura or Locke, interpret the S-O-R symbol as requiring lineal
causality (although I see why it is easily interpreted that way).
But it _is_ lineal, Jeff. It includes two assumed end points, with
causality moving from the beginning to the conclusion. It doesn't matter a
whit that they put something between the beginning and the conclusion --
causality still works in one direction with two end points. The same can be
said of _every_ information processing "model" that speaks of
Input-Processing-Output. Every such model is a variation on the same lineal
theme -- and that theme is inadequate as an explanation of the behavior of
living things.
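The contrast is visible even in a toy simulation. Here is a minimal
sketch in Python -- toy parameters of my own choosing, nothing like the
models in "Models and their worlds" -- of a one-way chain next to a
closed loop:

  # Toy sketch, not a real model: a one-way S-O-R chain versus a loop.
  def sor(stimulus):
      # Stimulus -> Organism -> Response: causation runs one way, once.
      return 2.0 * stimulus

  def control(reference, disturbance, steps=200,
              gain=100.0, slowing=0.01, output=0.0):
      # Closed loop: on every step the output feeds back into the input,
      # so input and output determine each other continuously.
      for _ in range(steps):
          perception = output + disturbance            # input depends on output
          error = reference - perception
          output += slowing * (gain * error - output)  # leaky integrator
      return perception, output

  print(sor(3.0))                    # 6.0 -- and the "model" is finished

  p1, o1 = control(10.0, disturbance=0.0)
  p2, o2 = control(10.0, disturbance=5.0, output=o1)
  print(round(p1, 2), round(p2, 2))  # about 9.9 and 9.95: the perception
                                     # stays near the reference of 10.0
  print(round(o1, 2), round(o2, 2))  # about 9.9 and 4.95: the output is
                                     # what changes when the world changes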
Bandura
devotes much time in his recent work to the reciprocal determinism idea
(cyclical causality), which I use frequently (but I have a problem with the
looseness of the words, given that I know PCT)
Ah, but the fact that you know PCT should make Bandura's cute little word games
all the more unacceptable to you -- well, I can't defend that kind of
prescription for you, but it certainly applies to me. The phenomenon of
control is not an example of "cyclical causality," as Bandura defines that
term, but it is an example of a continuous, simultaneous relationship
between an organism and some particular part(s) of its environment. If
Bandura knows the difference, then he would serve science better were he to
speak clearly and draw the distinctions crisply. But I believe there is
ample evidence he does not know the difference; what he believes, he says.
It is one thing to believe there is something "reciprocal" about the
relationship between person and environment; it is quite another to
understand how such a relationship can come about and persist. Up to now,
from all I have seen, Bandura hasn't a clue about how it can happen. In
fact, Bandura has made a special point of rejecting, out of hand, both
(1) descriptions of the phenomenon of control and (2) the PCT model. He is
clueless.
But Bandura and Locke's models are flow charts (Powers 940507.1420), not
system diagrams. That is why they cannot model their theories (and why
PCT is fundamentally better than their theories).
Agreed, on the first and final points, but not quite agreed on one in the
middle. They use flow charts by design -- by intention -- not out of
necessity. They have no intention of modeling their ideas. They verify
their ideas by assertion, by citations of data that are lousy but are
statistically significant, and by appeal to authority, not by demonstrating
that the ideas work. It is by their own design that they do not model their
theories.
However, there are
practical applications of their flow charts that PCT is not capable of
making. For example: if performance is low, check self-efficacy; if it is
low, try to increase it; performance often improves (which makes EVERYONE
happier).
Again, Jeff, I believe I understand why someone, a psychologist for example,
would want to know about or talk about such things, but the constructs are
just too inexact for me. They define "self-efficacy" operationally -- in
terms of test scores that correlate with -- with what? And why do they
accept correlations that, while statistically significant, suffer error
rates as high as those for the pre-employment screenings you described
earlier? I have seen no evidence from them that "self-efficacy" exists, as
they define it, much less that it can act to cause behavior.
They use poor data and untested theories as justifications for their
statements about "big" topics. The fact that PCT modelers often refrain
from speaking about many of those topics should not be taken as evidence
that those who do speak, speak from a base of scientific knowledge.
What I want to know is how does a belief like self-efficacy
plug into PCT? (My previous post began to talk about that).
I think Rick gave a good answer to this question.
One final question (for now).
Wow! You have really fired off a salvo of questions! I have been at this
longer than I should have been and must run to the lab for a while. I
promise to come back to the final questions. Note the plural -- you
didn't stop with just one! :-))
Later,
Tom