Presenting PCT

[Avery.Andrews 921216.09109]
(Rick Marken (921214.2200))

> I think both of these comments represent the same misconception about
> PCT --

I disagree. If someone accepts (a) that they don't understand
feedback as well as they thought they did, and (b) that feedback
mechanisms are much more central in the organization of behavior than
is generally supposed, the rest follows. E.g. that it's a significant
achievement to predict tracking behavior for one whole minute, that we
ought to try to find out what Aplysia is controlling for, etc. Even if
this assessment is overly optimistic, surely people who have begun to
suspect the truth of (a & b) will be more sympathetic reviewers of PCT
papers, & more likely to recommend funding of this kind of research.

[From Rick Marken (921215.2000)]

Avery.Andrews (921216.09109) --

> If someone accepts (a) that they don't understand feedback as well as
> they thought they did, and (b) that feedback mechanisms are much more
> central in the organization of behavior than is generally supposed,
> the rest follows. E.g. that it's a significant achievement to predict
> tracking behavior for one whole minute, that we ought to try to find
> out what Aplysia is controlling for, etc.

Yes. This is a good idea. And your earlier suggestion that we develop
models (like the "Little Man") that show the capabilities of the
control architecture is also good. I didn't mean to jump on it
so negatively. But your comments (even though I misinterpreted them)
did jog a good thought; one of our problems in PCT is that we can
rarely go to the research results in the literature and say "we
have a model that can account for your results better than any other
model". This is because most results in the literature are statistical;
yet they are quoted as fact. Just after my "Blind men" paper was
rejected there was a paper posted through Psycoloquy -- I guess it's
the one that was accepted. Unfortunately I did not save it (bad sport
that I am) but it was on sentence comprehension or something. Well the
article sounded really interesting, deeply cognitive -- about how
inferences were drawn from sentences and all. Well, it turns out
that they actually described an experiment to test their little theory
(it was not really a working model). Guess what. Two groups of subjects
read sentences -- and there was some independent variable manipulated
(the target of the inference or something) at two levels -- conditions
A & B. I can't remember the details -- all I remember are the results, which
were something like "as expected, condition A did significantly better
than B (t=2.36, df=23, p<.05)." Well, we know that means that a slight
majority of people in A did as expected -- but a lot of people in A
did what people in B did and vice versa. But the conclusion that
will live on (and be the target of theory) will be that A leads to
significantly more of the behavior (whatever it is) than B. There is
just nothing for PCT to do with this kind of data. PCT is a working
model of "one organism at a time" and we expect matches between model
and data that are extremely accurate -- or we go back to the drawing
board. We'd be at the drawing board night after night if we used
PCT to try to predict this kind of noise.
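
To make that concrete, a quoted result like t = 2.36 with df = 23 can be turned
into an estimate of how much the two conditions actually overlap. The little
calculation below is only an illustration -- the group sizes are guesses chosen
just to give df = 23, since the real details of that paper are gone:

  # Illustration only: how much individual overlap hides behind a group
  # result like t = 2.36, df = 23, p < .05?  The group sizes (13 and 12)
  # are assumptions picked just to give df = 23.
  from math import sqrt
  from scipy.stats import norm, t as t_dist

  t_val, n1, n2 = 2.36, 13, 12
  df = n1 + n2 - 2                        # 23, as in the quoted result
  p = 2 * t_dist.sf(t_val, df)            # two-tailed p for that t

  d = t_val * sqrt(1 / n1 + 1 / n2)       # effect size (Cohen's d) implied by the t
  p_superior = norm.cdf(d / sqrt(2))      # P(a random A score beats a random B score)

  print(f"p = {p:.3f}, implied d = {d:.2f}")
  print(f"P(A beats B on a random pairing) = {p_superior:.2f}")
  # comes out around 0.75 -- roughly a quarter of A-vs-B comparisons go the
  # "wrong" way, which is the point about individuals made above.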

The more one understands feedback control, the less one sees in
the psychological research literature that tests that understanding.
Ultimately, you realize that PCT is designed to explain control --
and you can't explain control until you know that it is happening.
None of the existing research in psychology provides any evidence
that control is happening in a particular situation -- or of what
is being controlled. The inescapable conclusion is that we must start
the whole enterprise of psychology all over again -- with the
understanding that organisms are locked in a negative feedback
situation with respect to their environment -- i.e. they control.

But we can build snazzy models in the meantime too.
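
Just to show the shape of the thing (this is only a toy sketch -- one control
loop with arbitrary gains, nothing like the Little Man):

  # A minimal PCT-style loop: keep a perceived quantity at its reference
  # despite a disturbance.  All parameters here are arbitrary illustrations.
  import math

  reference = 0.0      # intended value of the perception
  output = 0.0         # the system's action on the environment
  gain = 50.0          # loop gain
  slowing = 0.02       # output smoothing (leaky integration)

  for step in range(300):
      disturbance = 10.0 * math.sin(step / 20.0)   # something pushing on the world
      perception = output + disturbance            # input = effect of action + disturbance
      error = reference - perception               # compare perception to reference
      output += slowing * (gain * error - output)  # act so as to reduce the error

  print(f"perception stays near {reference}: last value {perception:.2f}")
  print(f"output opposes the disturbance: last value {output:.2f}")

No psychology in there yet, but run it and the perception hugs the reference
while the output mirrors (and cancels) whatever the disturbance does -- which
is the picture of an organism locked in a negative feedback situation with
its environment.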

Best

Rick

[Avery Andrews 921216.1551]
Rick Marken (921215.2000)

I agree that statistics without models is not nice if there is a reasonable
prospect of doing better. But I also think that just complaining about
it doesn't accomplish much. Like, it hasn't, has it?

Perhaps, naively, I think that if people see concrete prospects for doing
better in areas (*not* methodologies, but big questions, like `how do
people manage to get their hands to their coffee cups', or `how do people
manage to give and get directions for getting to places') that they are
interested in, they might go along. But then the powers of obscurantism
are deep and potent, so maybe they won't.

Back to my Xwindows fun & games (I've managed to animate a vertical bar,
moving horizontally -- a *long* way from a working arm, but here's hoping).

[From Rick Marken (921216.0800)]

Avery Andrews (921216.1551) --

Well, it's tomorrow for me now too.

> I agree that statistics without models is not nice if there is a reasonable
> prospect of doing better.

With or without models, the data of psychology (by and large) is useless.
I think this is a very important point and one worth some discussion.
Except for some operant conditioning and perceptual (single subject)
data, I can't think of any research results in psychology that would
qualify as anything other than statistical accidents -- saying absolutely
NOTHING about how or why an individual organism does ANYTHING.

I propose an exercise for anyone with easy access to the psychological
literature. Just open a journal randomly to any research article and see
what kind of data are collected and whether it could help us understand
the behavior of an organism. I really think it would help us see what all
the high falutin' theorizing in psychology is based on -- NOISE.

> But I also think that just complaining about
> it doesn't accomplish much. Like, it hasn't, has it?

I am not complaining about this at all. In fact, I have done quite a bit
of PCT research; I even have a collection of papers describing it --
the "Mind Readings" book. I have been able to fool several journals into
publishing this work by explaining how it addresses psychologists' concerns
(like how coordinated behavior works) even though the studies themselves
have seemed rather strange to the editors. The only complaint I have is not
having enough time to do PCT research -- I have to make a living after all.
I guess another complaint is that nowhere near enough people have bought
"Mind Readings" -- if they did, more people would see what the beginning
of a PCT research program looks like and I'd be able to remodel my
backyard.

The fact of the matter is, I don't care if psychologists (and other life
scientists) get it or not. I bring the point up (about the fact that PCT
requires RESTARTING psychology from scratch) only to let those who are
interested in PCT know why it might be frustrating (and counter-productive)
to try to apply PCT to psychological data collected in the "old fashioned"
way.
In fact, the result of this effort (if not abandoned) is what I will call
Carver/Scheier PCT or C/S PCT (actually, it's a very appropriate name --
Chicken S**t PCT). This is the kind of PCT where people use some of the
terminology of the model -- but not the essence (control of perceptual input
variables). C/S PCT misses the whole point of PCT -- which you have to do if
you are going to use PCT to account for the statistical results of
conventionally conducted research. I don't mind if people do this -- it just
makes things confusing (to the audience, not to real PCTers) so it's annoying.

I know that complaining will get us nowhere. I've been uncomplainingly doing
PCT research and modelling for twelve years (which does get us somewhere, I
hope) and I'm still doing it. That's what I want people to help out with --
but I also know that in order to do good PCT research they MUST ignore most
of the existing psychological research. Not doing the latter is the path to
C/S PCT -- which is the path to hell.

So consider my cautions about the value of existing psychological research
a WARNING, not a COMPLAINT.

Best

Rick