[From Bill Powers (930507.1130 MDT)]
Thanks, Gary.
Ken Hacker (930506) --
I have been reading messages on this net for about two years
now. Everything so far is true by definition in PCT. The
theory is deductive and not yet retroductive. I do not see
data in these machinations leading to theory, but theory being
used to replicate itself and reject any inklings of
confirmation or disconfirmation.
This is so totally unfair, as well as so completely off the mark,
that it leaves me groping for a place to start answering it. You
simply don't understand the methodology of PCT, which is that of
all the hard sciences (the ones that work by far the best as ways
of understanding natural phenomena).
Yes, the theory is deductive. But the deductions are made from a
model of processes hypothesized to underlie observations. The
model itself consists of statements about relationships (in the
environment) between functional blocks connected to each other by
physical laws, and between processing functions (inside the
behaving system) connected by signal pathways. The basic types of
functions and interconnections in the system model were patterned
after what is known about real nervous systems, with external
relationships being conventional physical models of the
environment.
When all these variables and the relationships connecting them
are given specific definitions, the ensemble behaves according to
its construction. One of the requirements for this sort of
modeling is to make sure that the whole model is defined
completely enough to produce some sort of behavior without
outside help when set in motion (usually in a computer,
nowadays).
There's absolutely no guarantee that what the model does will in
any way resemble what the real system actually does in a real
environment. The actual behavior of the model, which is often a
nasty surprise, follows from the properties given to each part
and from the way the parts are connected to one another.
However, because the model can be made to produce behavior all by
itself according to the properties we've given it, its behavior
can be compared with the observed behavior of the real system.
The differences can serve as clues about what is wrong with the
model. Through a long series of experiments with real behavior
and modifications of the model, we can arrive eventually at a
model that behaves very like the real system under all sorts of
conditions, including new conditions not considered in
constructing the model.
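
To make this cycle concrete, here is a rough sketch of what one of
these model-versus-data comparisons looks like in program form. It is
written in Python purely for illustration; the one-level tracking
loop, the gain and slowing values, and the random disturbance are all
placeholders of my own choosing, standing in for a model that would
actually be fitted to a subject's recorded run.

# A one-level control-system model for a compensatory tracking task,
# offered only to illustrate the modeling cycle described above. The
# gain and slowing values are placeholders, not parameters fitted to
# any real subject.

import math
import random

DT = 1.0 / 60.0   # time step: one display frame at 60 frames per second

def run_model(disturbance, gain=400.0, slowing=0.1, reference=0.0):
    """Return the handle positions the model produces, frame by frame."""
    output = 0.0
    handle = []
    for d in disturbance:
        cursor = output + d                # environment: handle plus disturbance
        perception = cursor                # input function (identity here)
        error = reference - perception     # comparator
        output += (gain * error - output) * slowing * DT   # leaky-integrator output
        handle.append(output)
    return handle

def rms_difference(a, b):
    """Root-mean-square difference between two handle traces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A slowly drifting random disturbance standing in for the one used in a real run.
disturbance, d = [], 0.0
for _ in range(3600):                      # one minute of simulated tracking
    d += random.gauss(0.0, 0.05)
    disturbance.append(d)

# In practice real_handle is the recorded behavior of a human subject;
# here a second model with a different gain merely stands in for it.
real_handle = run_model(disturbance, gain=250.0)
model_handle = run_model(disturbance, gain=400.0)
print("RMS difference, model vs. 'real':", rms_difference(model_handle, real_handle))

The point of the exercise is the last line: the model commits itself
to a specific trace of handle positions, and the mismatch between that
trace and the recorded one is what tells you where the model is wrong.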
Behind the PCT you see today, and the models that are used to
explain simple behaviors, there is a long history of wrong
guesses, models that didn't work right, experiments to find out
what was wrong, modifications of the models to make them work
better. None of that shows now, of course, except where we try to
go further and begin making more wrong guesses.
I don't see anything "pseudo-scientific" about this approach. I
think it's just a form of science with which few behavioral
scientists are familiar. You're used to an approach in which you
make a lot of observations, and search for some generalization
that will cover as many of them as possible. You then test this
generalization by applying it to a new population and seeing
whether the new data still fit the generalization. If they do,
you count the generalization as confirmed, or at least not
disconfirmed.
The very meaning of "theory" is different in these two
approaches. In the one you appear to use, a theory is a statement
to the effect that some relationship holds between observed
events or situations and observed behaviors. Red-haired men tend
to marry cross-eyed women and vice versa. Or communication is
optimized for groups with no more than _n_ participants, or
people who learn by interacting with PCs perform better on tests
and like the learning process more than people who learn from
lectures (or the opposite).
What you would call a theory (if I am guessing right) is what I
would call a statement of fact. It's either true that red-haired
men tend to marry cross-eyed women or it's false. The theory is
simply a statement that a certain relationship can be observed
with some reliability.
For me, however (and for the physical sciences in general), such
confirmed statements of observable facts are simply the starting
point from which theories are built. This is one reason that the
modeling approach puts such high demands on reliability of
observation: there is no point in trying to model a phenomenon
that only occurs some of the time, under some conditions, for
some people. In the physical sciences, that sort of observation
doesn't constitute a phenomenon -- or perhaps better, it
constitutes a mixture of reports on a variety of phenomena, some
apparently contradictory (the subjects who behave contrary to
hypothesis), that haven't yet been sorted out enough to provide
grounds for building a model.
Given a highly reliable observation that red-haired men tend to
marry cross-eyed women, a modeler could then think it worthwhile
to try to guess what there is about being male and redheaded, and
what there is about being cross-eyed and female, that would lead
to the observed result of matrimony. In this particular case, the
modeler would probably be baffled.
But take a more realistic case: tall parents tend to have tall
children. Strictly on the observational level, this statement has
no more a priori likelihood of being true than the other one.
It's either a fact or it's not a fact. But by calling on other
knowledge, what we know about genetics, we can guess at an
underlying model in which tallness is a characteristic carried in
the genes, and which is more likely to be expressed if both
parents carry the same gene. Now we're attempting to explain the
observed phenomenon not by appealing to the generalization, but
by looking for underlying order from which the observed
generalization would necessarily follow, deductively.
Before genes had actually been isolated and observed as real
physical systems, they were only plausible models. Some very
clever people imagined that there might be some underlying
mechanism that determined people's eye colors, the markings on
plants, the susceptibility to diseases. Looking at how these
traits and characteristics appeared in successive generations,
they distilled certain rules of combination that they put into
the model. When further research revealed the chromosomes and
then the structure of DNA, these rules of combination were
explained as the actual physical redistribution of components of
DNA during the process of sexual reproduction. There was no
actual interaction at the level of eye color or markings or
occurrences of illnesses. These were only the visible
consequences, much later in development, of processes occurring at
a deeper level of organization. The apparently direct causal
relationships seen at the global level were not causal at all;
they were outcomes of underlying processes.
This is essentially the relationship of PCT and HPCT to
conventional sciences of behavior. In the conventional approach,
the emphasis is on finding observations that repeat and fall
under generalized statements of relationship. The PCT researcher
has no bone to pick with such observations, except the
complaint that they tend not to be very reliable. All sciences
begin with a collection of unexplained facts of variable quality.
The role of theory -- theory as I define it -- is to bring order
into such collections of fact by showing that they follow
necessarily from underlying, and simpler, mechanisms.
Once theories of this kind begin to take shape, an interaction
can commence. The theory/model suggests different ways of
generalizing from the same observations. If the model proves to
be useful, the kinds of generalizations it suggests prove to
bring more order into the data. The generalizations become more
accurate; they fit more of the circumstances; they begin to
permit making predictions under new circumstances; the data get
better. And those consequences reveal new discrepancies that
require changes in the underlying model, which improves the
generalizations, which improves the model ...
This interactive process among modeling, observation, and
generalization got started in the physical sciences some three
centuries ago, because the early physical models happened to work
very well. But it has not really
started at all in most of the behavioral sciences (although in
the past decade or two modeling has been on the rise again). In
most conventional approaches to behavior, the name of the game is
still observing, classifying, and generalizing. That process by
itself can take you only so far, and then you come up against the
limits of random guessing. You end up with vast collections of
facts of low quality which stand alone, unrelated to most other
facts and only marginally reproducible. You end up with the old
cycle of fads, in which new generalizations burst upon the scene,
attract everyone's attention and raise everyone's hopes for a few
years or in some cases a few decades, and then inevitably peter
out when they prove to lead nowhere.
The behavioral sciences as a whole have not followed even the
course of "normal" science, in which each new generation refines
and builds upon what the previous generations have found.
Instead, we have almost a random walk, a splintering into
contradictory schools, a lurching from one idea to another
without direction, like a blind man in unfamiliar territory. To
me this means that the behavioral sciences have yet to discover
the secret of successful science, which I think is modeling.
Your reactions to PCT and HPCT often seem competitive, as if you
have to choose between what you know about communication and what
PCT has to say about it, and you wish to keep what you know. This
is a misconception of the role of PCT. If you find some of the
pronouncements and conjectures of PCT advocates uninformed, that
is because they ARE uninformed. I sit in my first-floor computer
room 12 miles outside a small Western town; Rick sits at his
computer at work where he is supposed to be doing something else;
Tom makes what comments he can while in the midst of fashioning a
new career; very few PCTers have any facilities for gathering
data and very little free time (those who aren't retired) for
working out new applications of PCT. For most of us PCT is a
basement, back room, evening, and weekend occupation. Our
resources are meager.
This is exactly why we want people like you to take up PCT and
start developing it for use in your own fields. We don't want PCT
to _replace_ what you're doing, because then you'd just be one of
us, working on theory without the experimentation and observation
that's required. What we want -- what I want -- is for you to
begin slipping PCT ideas underneath the kind of work you're
doing, asking the kinds of questions that will challenge the
model and then trying to reformulate the model -- very carefully
-- so it will meet the challenges without losing the predictive
abilities it already has in other applications. When you do that
you'll see your data differently and you'll see alternative ways
of characterizing them.
Consider your studies of interaction, your proposed social-
science experiment. You've put it in such a way that either the
conventional explanation about interactive behavior will be
correct, or the PCT explanation will be correct. But this puts
PCT at the wrong level. What you should do is conduct the study
exactly as you normally would do it, and arrive at the best
generalizations about it that you can find, following the normal
procedures. THEN ask: why am I observing that this generalization
is true? What does PCT have to say that would suggest something
like the phenomenon I am observing? What PCT-type characteristics
can I see in the individuals here which, interacting, might
produce something like this phenomenon?
Who do you think is in a better position to do this: you, in a
university department with graduate students and subjects milling
around underfoot and even a penny or two of funding, or me,
sitting by myself in front of my computer? All I can do is sit
here and guess at what might be. You can collect facts, find out
what PCT might have to say about them, reinterpret the facts, put
challenges to PCT, and start the whole process of real science
going where it belongs: in a university.
I think that the behavioral sciences in general are struggling to
get past the stage of alchemy. The alchemists tried to explain
the behavior of matter and energy at the level of the phenomena
themselves, and through generalizations induced from the
phenomena. They did acquire a certain body of knowledge, but the
understanding of these matters didn't really take off on the old
exponential curve until some underlying mechanisms were proposed.
Even the idea of phlogiston was an improvement; it explained
combustion not in terms of Principles but in terms of underlying
elementary types of matter that had properties and from which
phenomena of combustion could be deduced. The phlogiston model
lasted 150 years, and then committed suicide (Priestley died
believing in it) and was replaced by a better one. But the
progress was always in terms of the underlying model, not just or
even mainly the observations of phenomena. If you don't think of
modern chemistry as a science built on models, just look in the
pages of the journal of biochemistry that is called Science.
Models, Ken, are totally falsifiable. They commit the modeler to
a specific quantitative prediction of a specific behavior, and if
they fail they fail. This is quite unlike a statistical
generalization, in which failure of prediction is an extremely
fuzzy concept, with lots of room for counterexamples that don't
even ripple the calm pond of assurance. Somehow you've got a very
odd idea of what PCT is about. I hope we can get all this
straightened out.
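
To be concrete about what "if they fail they fail" means, a
falsification test can be as blunt as the little sketch below, again
in Python, again with made-up numbers and an arbitrary 5 percent
tolerance of my own choosing; with real data the predicted trace comes
from the model run and the observed trace from the subject's record.

# The model commits to a specific trace; it fails if its prediction
# misses the recorded behavior by more than a stated tolerance.
# The tolerance and the sample numbers below are placeholders.

def model_passes(predicted, observed, tolerance=0.05):
    """Fail the model if its RMS error exceeds `tolerance` of the observed range."""
    n = len(observed)
    rms = (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5
    spread = max(observed) - min(observed)
    return rms <= tolerance * spread

print(model_passes([0.0, 1.0, 2.1, 2.9], [0.0, 1.0, 2.0, 3.0]))

There is no p-value to hide behind: either the predicted trace stays
within the stated tolerance of the observed one, or the model is
rejected and has to be revised.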
------------------------------------------------------------
Best,
Bill P.