[From Dag Forssell (951128 1420)]
[Bruce Abbott (9527.1650 EST)]
And you were beginning to think I'd never reply to this one! (;->
And you probably have concluded I am not going to respond to you
either.
Since I have not posted for a month, a short recap of my assignment
may be in order:
[Bruce Abbott (951030.1715 EST)]
At this point, Dag, I will stop to ask you what you think
about what I've said so far in this little exposition of
reinforcement theory. Do you think that Thorndike's approach
was a sensible one? Was it, in your opinion, science or mush?
Explain.
[Dag Forssell (951031 0945)]
I am glad you have joined my challenge to distinguish science
from mush. I have come to think that this is a very important
issue, one to which the overwhelming majority of people never give
any thought, and one that is at the root of many fruitless
discussions on CSG-L.
Bill P., Rick M., Francisco A., and others joined the open-book exam.
Significant to me, you then stated:
[Bruce Abbott 95-10-31 17:27:25 EST]
I take Thorndike seriously, but perhaps not in the way you
imagine. I offer Thorndike's experiment and his analysis in
order to bring into focus what sorts of activity you (and I)
believe qualify as legitimate science. The fact that
Thorndike's analysis may have "glaring errors" (from our
perspective nearly 100 years later) is another issue. I take
Thorndike seriously, not because I think his analysis was
correct, but because I think his experiment and its analysis
point to unresolved difficulties for HPCT. But this, again, is
another issue.
I felt that you pulled the rug out from under my assignment even
before I began to address it. Both Bill and Rick provided detailed
critiques, however.
I am very pleased to see that an extensive thread has unfolded from
my challenge, while I have attended not one but, on successive
weekends, a total of three weddings and otherwise kept very busy in
my new, regular life (not spending full time on PCT, but developing
a paying career before I get back to PCT full time in the future).
I sent you my book on PCT. You may have noted that I recognize
that the words theory and science have such vastly different
meanings to different people that we might as well consider any
infant to be a scientist. After all, the infant is very busy
creating an understanding of the world in its developing mind. We
are all scientists. A review of my book would be welcome.
You have apparently felt that I, and other PCTers with me, slight
scientists of centuries past by labeling their theories mush. I
will readily grant you that Aristotle was the greatest scientist
who ever lived (if you count longevity of acceptance and
reverence), but does that mean that we consider his ideas science
today? Misleading mush is more like it.
By focusing your assignment on whether Thorndike's approach was
scientific in the context of his time, you pull the whole
discussion in a direction I did not expect. We have since granted
you that Thorndike did his best. Is this the essence of your
argument that we should learn from existing EAB research? They
have done their best! Why not learn from Aristotle and the
alchemists with the same sincere respect? I am not respectful of
Aristotle--he never tested anything. I *am* respectful of the
alchemists. They were able to make things! But their explanations
were mush, and we
have thrown out their theories as essentially useless for
contemporary science.
I am also respectful of wise clinical psychologists, but
psychological theories are mush. They continue to be so, long
after Thorndike. There is no excuse for this, yet we must respect
these as serious theories anyway? I will grant you that clinical
psychologists are scientists, but not in the modern, physical sense
of the word that I have advocated (I have repeatedly suggested that
PCT is a physical science). Conventional psychologists are no more
and no less scientific than the wise men who gave us the Old
Testament, the Torah, the book of Tao, Buddha, Confucius, etc. The
discussion of what is science or a scientific approach came down to
the observation that it is a matter of private opinion. These
concepts are obviously carefully controlled, subjectively held
systems concepts. Every individual develops and holds their own
systems concepts.
Without a concept of a mechanism that holds up when tested, you
cannot build a science that "works" to a high degree, whether in
chemistry, astronomy, physics or psychology. This has nothing to
do with respect for the individual.
In your response to Bill P.'s critique of Thorndike you say:
[Bruce Abbott (951101.1535 EST)]
This misstates the case. Probability is inferred from
relative frequency (the observations), but the probability of
a behavior is assumed to be a function of some as-yet-unknown
mechanism in the brain, operating in conjunction with current
perceptual inputs. The probability itself does not cause
anything.
Your appeal to and willingness to take seriously "some as-yet-
unknown mechanism" is a major reason for the mushiness of
reinforcement theory. This kind of reasoning is not falsifiable.
This is why discussions around it become fruitless.
Shannon Williams (951101) (2 posts) made lucid observations,
further illustrating the mushiness of reinforcement theory:
S-R describes a correlation between environment and behavior.
But correlation is not cause. If you cannot visualize a
causal mechanism, then you delude yourself if you think you
hypothesize about cause. What is worse is: a hypothesis that
does not have a causal mechanism is not subject to error. It
is infallible. And progress stops because if you cannot see
where a hypothesis needs improvement, what would cause you to
improve it? And what would cause you to turn away from it?
Debate continued. You explained Thorndike's observations:
[Bruce Abbott (951102.1115 EST)]
QUESTION 2
During this ongoing activity, the door suddenly opened,
presenting the cat an obvious escape route. This immediately
satisfied the reference for the "find-a-way-out" system and
thus reestablished control by the "minimize distance to food"
system.
To talk about a "find-a-way-out" system and a "minimize distance to
food" system strikes me as a version of conventional talk about
behavior. I read: "find-a-way-out" behavior and "minimize distance
to food" behavior. This is a mushy way to discuss PCT, and this
kind of talk is what makes conventional psychological theory mush.
With PCT and especially HPCT we recognize that there is no such
thing as a "find-a-way-out" system or "minimize distance to food"
system. You are guessing about reference signals in a hierarchy
and doing a poor job of it. I don't think I have ever had a
reference for "minimize distance to food". Have you? (I did not
smear Christine's face with wedding cake).
The classic answer is that the cat is FORMING AN ASSOCIATION.
That is, it is beginning to relate the perceptions that were
active at the time the door opened to the opening of the door.
Why to the opening of the door? Because that event was at the
time very important to the cat; it eliminated error in one of
the cat's perceptual control systems at a time when its usual
actions were proving incapable of doing so.
However, there is a problem confronting the cat, and that is
to identify which if any of its perceptions correlates with
the opening of the door--the "assignment of credit" problem.
It could have been a coincidence. It might have been produced
by something the cat DID (i.e., cause-effect). If the latter,
what was the critical element? What perception arising from
the cat's ongoing behavior must be recreated? The perception
of moving from left to right across the box? The perception
of the pole pressing against the skin of the neck? Something
else? If the cat lacks the capacity to reason it out, to
develop and test hypotheses, what other mechanism might lead
to a workable solution?
My guess would be some kind of "coincidence detector." Back
in the box for Trial 2, the cat's coincidence detector does
not yet have enough input to narrow down the possibilities.
So the cat goes back to its usual activities, under the
control of those systems previously described, and probably
others. With time, the cat's activities carry it across the
pole for the second time, and again the door opens. The
coincidence detector is starting to develop an association
(correlation) between the door's opening and the perception of
doing _this_ as opposed to _that_--that this particular goal was
being pursued (e.g., rubbing against the pole) and not some other.
So the cat attempts to repeat what it was doing at the time
the door opened. It rubs against the pole, but not hard enough
to spring the latch. No coincidence. A little later, the cat
tries again, and the door opens. What was different? More
pressure against the pole? A different angle of approach?
The coincidence detector adjusts its parameters. Ultimately,
what is being adjusted is a set of reference levels having to
do with approaching the pole from a given direction and making
contact with it, with at least a certain amount of pressure,
along a particular surface of the body. Approach from another
direction, contact with a different part of the body, would
also work, but the cat's mechanism does not "know" that. So
the now efficiently-escaping cat is observed to repeat a
highly stereotyped movement against the pole on each trial.
Hans Blom (951101) has already proposed a mechanism similar to
the one I have just described:
Bruce, I cannot see that you have proposed any kind of mechanism in
your description above. At best, you have painted an implausible
flow chart of words and images. It cannot be tested. It is
subject to different interpretation by every reader. The
"mechanism" you describe above is mush all the way through. And it
is mush coming from Bruce Abbott TODAY, not a century ago.
The next day Bill P. commented on the same post:
[Bill Powers (951102.1420 MST)]
I think your discussion, EXPLAINING THORNDIKE'S OBSERVATIONS,
is reasonable and well-ordered. I wonder, however, if you
recognize just what an enormous mouthful you are biting off.
. . .
. . I think this is getting me close to my real objections. I
think that in constructing his "simple" experiment Thorndike
actually set up an enormously complex situation which seemed
simple only because Thorndike chose to look at it in a simple
way.
My personal view of this is that psychologists study enormously
complex phenomena. They appear not to be interested in a simple
thing like how a person can bend a single finger at will. That is
uninteresting "finger-bending behavior", taken for granted as
magic, explanation not expected. Hardly the stuff grants are made
of. Yet a PCTer need only stretch and bend a single finger to
reassure himself or herself that PCT offers a sound explanation
for life and our experience.
By theorizing about apparently complex phenomena, psychologists have
constructed systems of explanations that I visualize as the upper
stories of high rise houses of cards -- without any foundation,
first or second story. They hang in thin air. Many different
versions exist, competing with each other, none connected to terra
firma.
Psychologists say that this is the best that can be done, and are
firmly convinced that when a physical foundation is eventually
found, it must necessarily connect with their research (since it is
scientifically sound) and validate it retroactively. Psychology
will then take a leap forward, joining the physical sciences, and
all the efforts of pioneering EAB psychologists will fit like
pieces in the then-completed puzzle.
PCT lays a physical foundation for a new psychology, but it turns
out that this foundation is located in the next county, and does
not connect with or support existing ideas. PCT demonstrates
clearly that the phenomena that are observed and explained by
contemporary ideas are illusions.
All year, I have not seen you agree that reinforcement is an
illusion. (The most recent comment: [Bill Powers (951123.0700)].
You keep talking about alternative explanations, as if they somehow
have equal validity.)
The houses of cards that have been built so elaborately, without
foundation and hanging in thin air, are bound to collapse. They are
mush.
Bill P. ended his post:
The points you have brought up are points that Thorndike never
considered, yet they are things that obviously have to be
established before we can say ANYTHING scientific.
You were terribly upset by Bill's post. Was it because the obvious
implication of his statement was that Thorndike's EAB approach was
mush, and by implication that EAB is mush? That's what I suspect.
But Bill has told you dozens of times in the past year that
reinforcement theory is mush, based as it is on the study of an
illusion.
Discussing explanations with Shannon Williams you say:
[Bruce Abbott (951106.1055 EST)]
Now I know what you are calling an explanation, although I do
not agree that there is only one type of explanation. You are
looking for a mechanistic explanation, as opposed to, say, a
functional one.
I find this mushy, too. In a physical universe, functions never
exist without a mechanism that makes them appear. A mechanistic
explanation and a functional explanation are necessarily the same
thing. The mechanism is what physical science explains and what
PCT explains. You have indulged several times in painting a wordy
flow chart of "functions", none of them supported by any plausible
mechanism. This does not build a science that can stand the test
of time.
Discussions of Newton and Copernicus have been most interesting.
The analogy falls down, however. Tycho Brahe made observations of
angles, times, and other physical quantities. He did not interpret
his findings, thus distorting them with his preconceived ideas.
Johannes Kepler studied the data and observed that if the heavenly
bodies moved in ellipses (a mechanism), the model would fit the
observations. Modern physical science recognizes the validity of
Tycho's observations and Kepler's analysis. Newton built on this.
By comparison, Thorndike made a large number of wild, unspoken,
subjective assumptions flavoring his verbal descriptions of his
"data". Analysis that follows from this is way off base.
Scientists have built on Thorndike and his guesswork analysis,
creating an ever more elaborate structure of guesses. Contemporary
EAB apparently suffers the same disease. Remember your admiration
for the EAB analysis of the goose rolling eggs, contrasted with
Bill P.'s posts from 1991? We cannot build on the ANALYSIS of EAB
research as you have claimed, because most of it is guesswork of
poor quality. Not all of it is poor, however, as we have already
conceded to you. The continuing discussion has recently turned to
IV-DV. I shall pull a post on IV-DV from the archive and post it
immediately below.
The thread goes on and on. I shall end this post soon.
Debating Rick you lament:
[Bruce Abbott (951110.1250 EST)]
Could you describe this moderate position, please?
Well, I've been trying to for about a year now. As your
question so amply illustrates, it's been a waste of my time.
I don't think your participation on CSG-L has been a waste of time.
Seems to me that you have reconsidered a large number of your own
convictions already, sharply limiting your claims of the validity
of reinforcement theory and other accepted truths. You have drawn
out the best in Bill P. over and over again. PCTers and lurkers
have learned more about reinforcement theory, PCT and the arguments
on both sides. Seems to me that you have reorganized your firmly
held principles and systems concepts a great deal in the past year,
but that there is more to go before you become a genuine PCT
scientist, free from beliefs in and co-dependence on the mush of
reinforcement theory :-).
I'll end by going back to the beginning:
[Bruce Abbott 95-10-31 17:27:25 EST]
What I have wanted to challenge are some of your systems
concepts, so carefully constructed by your mind over a long
time, still resisting disturbances.
Fair enough. That is precisely what I have been doing with
regard to certain systems concepts of yours.
Now, what systems concepts of mine did you mean to challenge?
Best, Dag
-------------------------------
Here is the PCTDOCS archive post on IV-DV I promised above.
-------------------------------
STUDY_IV.DV
Independent Variable - Dependent Variable
Unedited posts from archives of CSG-L (see INTROCSG.NET):
Date: Wed Apr 28, 1993 6:20 am PST
[From Bill Powers (930428.0700)]
General, on IV-DV:
IV = Independent Variable; DV = Dependent Variable
The term "IV-DV" threatens to degenerate on this net into a
stereotype of an approach to human behavior. All that this phrase
means is that one variable is taken to depend on another and the
degree and form of the dependence is investigated experimentally.
This is a perfectly respectable scientific procedure. Want to know
how the concentration of salt affects the boiling point of water?
Keep the atmospheric pressure constant, carefully vary the salt
content, and carefully measure the boiling point. You can find
relationships like this throughout the Handbook of Chemistry and
Physics, and so far nobody has suggested anything methodologically
wrong with these tables and formulae.
If we're going to object to a procedure for investigating behavior,
let's not indulge in synecdoche, but say exactly what it is about
the method to which we object. There can be no valid objection to
the IV-DV approach itself.
The basic problem with the IV-DV approach as used in the bulk of
the behavioral sciences is that it is badly used; that bad or
inconclusive measures of IV-DV relationships are not discarded, but
are published. The basic valid approach has been turned into a
cookbook procedure that substitutes crank-turning for analysis,
thought, and modeling. The standards for acceptance of an apparent
IV-DV relationship have been lowered to the point where practically
anything that affects anything else, by however indirect and
unreliable a path, for however small a proportion of the
population, under however ill-defined a set of circumstances, is
taken as a real measure of something important, and is thenceforth
spoken of as if it were just as reliable a relationship as the
dependence of the boiling point of water on the amount of dissolved
salt.
While I was in Boulder, I spent some time in the library looking
through a few journals. By chance, I looked first through two
issues of the 1993 volume (29) of the Journal of Experimental
Social Psychology. With few exceptions, the articles were of the
form "the effect of A on B." One article went further: the title
was "Directional questions direct self-conceptions."
All of the articles rested on some kind of ANOVA, primarily
F-tests, and the justification for the conclusions was cited, for
example, as "F(1,82) = 7.88, p < 0.01." No individual data were
given; it was impossible to tell how many subjects behaved contrary
to the hypothesis or showed no effect. There was no indication,
ever, that the conclusion was not true of all Homo sapiens.
I suppose that a person who understood F-tests (how about some
help, Gary) might be able to deduce the number of people in such
studies who didn't show the effect cited as universal. Even I could
see, in some cases, that there had to be numerous exceptions. For
example, paraphrasing,
Subjects covertly primed rated John less positively (M = 21.32)
than subjects not primed (M=22.78). Ratings were significantly
correlated with the independent "priming" variable: r(118) = 0.35,
p < 0.001.
[Skowronski, J.J. "Explicit vs. implicit impression formation: The
differing effects of overt labeling and covert priming on memory
and impressions." J. Exp. Soc. Psychol. _29_, 17-41 (1993)]
When means differ by only 1.46 parts out of 22, it's clear that
many of the 120 students must have violated the generalization;
the conclusion would hold for only something close to half of the
students. The coefficient of uselessness is 0.94, showing the same
thing. The authors are teasing a small effect out of an almost
equal number of examples and counterexamples. In another study,
"When warning succeeds ... " a rating scale ran from -5 to 5, and
the mean self-ratings for one case were 0.89 and in the other
-0.92. A large number of the subjects must have given ratings in
the opposite order from the one finally reported.
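The arithmetic behind these complaints can be checked directly from
the numbers quoted above. "Coefficient of uselessness" here means
sqrt(1 - r^2), the fraction of the dependent variable's standard
deviation that the independent variable leaves unexplained; for the
reported r = 0.35 it comes out to the 0.94 cited.

```python
# Check the figures quoted above for r(118) = 0.35: proportion of
# variance explained (r squared) and the "coefficient of
# uselessness", sqrt(1 - r^2) -- the fraction of the DV's standard
# deviation left unexplained by the IV.
import math

r = 0.35
variance_explained = r ** 2
uselessness = math.sqrt(1.0 - r ** 2)

print(f"variance explained:         {variance_explained:.2f}")   # ~0.12
print(f"coefficient of uselessness: {uselessness:.2f}")          # 0.94
```

A "highly significant" correlation of 0.35 still leaves about 88
percent of the variance unaccounted for.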
So what we're talking about here is not a bad methodology, but bad
science based on equivocal findings.
The IV-DV approach is not incompatible with a model-based approach
or with obtaining highly reliable data. In the Journal of
Experimental Psychology - General, I found a gem by Mary Kay
Stevenson, "Decision-making with long-term consequences: temporal
discounting for single and multiple outcomes in the future"
(JEP-General, _122_ #1, 3-22 (1993)). Mary Kay Stevenson, 1364
Dept. of Psychology, Psychological Sciences Building, Purdue
University, W. Lafayette, Indiana 47907-1364.
This paper used old stand-bys like questionnaires and rating
scales, but it had some rationale in the observation that during
conditioning, delaying a consequence of a behavior lowers the
strength of the conditioning. It also freely postulated a thinking
organism making judgements -- this was actually an experiment with
high-level perceptions. Moreover, there was a systematic model
behind the analysis, and an attempt to fit an analytical form to
the data rather than just do a standard ANOVA.
Furthermore -- oh, unheard-of procedure -- Ms. Stevenson actually
replicated the experiment with 5 randomly-selected individuals,
fitting the model to each individual's data and verifying that the
curve for each one was concave in the right direction.
The mathematical model predicted between 97 and 99 percent of the
variance in the data.
I didn't have time to read the article carefully, but it certainly
seemed to show that high standards were applied and that an IV-DV
approach can yield data that anyone would call scientific. All
that's required is that one think like a scientist. A LOT of work
went into this paper. If only papers in psychology done to this
standard were published, all the different JEPs would fit into a
single issue.
In JEP-Human Perception and Performance, there was a good
control-theory experiment:
Viviani, P. and Stucchi, N. Biological movements look uniform:
evidence of motor-perceptual interactions (JEP-HPP _18_ #3,
603-623 (August 1992)).
Here the authors presented subjects with spots of light moving in
ellipses and "scribbles" on a PC screen, and had them press the ">"
or "<" key to make the motion look uniform (as many trials as
needed). The key altered an exponent in a theoretical expression
used to relate tangential velocity to radius of curvature in the
model. The correlation between the formula with an exponent of 2/3
(used as a generative model) and the subjects' adjustments of the
exponent was 0.896, slope = 0.336, intercept = 0.090.
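For readers unfamiliar with the model: the standard "two-thirds
power law" of movement writes angular velocity as A = K * C**(2/3);
with curvature C = 1/R and tangential velocity v = A * R, this
implies v = K * R**(1/3). The sketch below is my own illustration,
not the authors' code, and assumes that form of the law. It
computes the power-law velocity profile around an ellipse, showing
that such motion is physically far from uniform even though, per
the experiment, it is what looks uniform to observers.

```python
# Illustration (not the authors' code) of the velocity-curvature
# power law behind the generative model: with angular velocity
# A = K * C**(2/3), curvature C = 1/R, and v = A * R, tangential
# velocity becomes v = K * R**(1/3).
import math

def radius_of_curvature(a, b, t):
    """Radius of curvature of the ellipse (a*cos t, b*sin t) at t."""
    return (a**2 * math.sin(t)**2 + b**2 * math.cos(t)**2) ** 1.5 / (a * b)

K, a, b = 1.0, 2.0, 1.0
speeds = [K * radius_of_curvature(a, b, 2 * math.pi * i / 100) ** (1 / 3)
          for i in range(100)]

# Power-law motion is not physically uniform: on this 2:1 ellipse
# the speed doubles between the sharpest and flattest points.
print(f"min speed {min(speeds):.3f}, max speed {max(speeds):.3f}")
```

Adjusting the exponent, as the subjects did with the ">" and "<"
keys, reshapes this speed profile until the motion is perceived as
uniform.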
This is just the kind of experiment a PCTer would do to explore
hypotheses about what a subject is perceiving. By giving the
subject control over the perception in a specified dimension, the
experiment allows the subject to bring the perception to a
specified state -- here, uniformity of motion -- and thus reveals
a possible controlled variable (at the "transition" level?). The
authors didn't explain what they were doing in that way, but this
is clearly a good PCT experiment. Even the correlation was
respectable, if not outstanding (the formula was rather arbitrary,
so it should be possible to improve the correlation considerably by
looking carefully at the way the formula misrepresented the data).
There is a world of difference between the kinds of experiments
reported in J Exp. Soc. Psych and the two described above (and
between the two described above and most of the others in JEP).
From good experiments, even if one doesn't buy the interpretation,
one can go on to better experiments. From bad experiments there is
no place to go: you say "Oh" and go on to something completely
different.
Best, Bill P.
-------------------------------------
End archive file