[From Bill Powers (930304.2000)]
Greg Williams (930304) --
RE: the journal
I think that something like Gary Cziko's suggestion would work --
sending abstracts to CSGnet with a request for reviewers. Come to
think of it, the job of a potential coordinator would be eased
even more if those who wanted to submit articles to the journal
simply posted abstracts to the net with a request for reviews.
Anyone who wanted to do a review could then ask for a direct
email copy, review it, and send the review to the coordinator.
Elaborating along the same lines, reviewers could send their
reviews directly back to the person writing the article, raising
objections, offering suggestions, etc. In this way the reviewers
would have to take responsibility for their opinions, and would
interact directly with the author(s) at least once and possibly
several times without a coordinator standing in the way or
needing to relay things back and forth.
To keep this process from dragging out interminably, I suggest
that when an abstract is posted, it be marked with a date.
Reviewers would then be required to notify the author that they
are taking on the review within a week of that date, and all
interactions would cease (1,2,3?) months from the submission
date. At that point authors could submit a final version to the
coordinator and the reviewers, and the reviewers could send their
final commentary to the coordinator. In this way the coordinator
would receive a paper that has already been criticized and
revised to the extent that the author is willing, and the
reviewers would have thrashed out minor points with the author
that the coordinator should not have to bother with. Really bad
articles would have been strongly discouraged by the reviewers,
so perhaps the authors would have the sense to withdraw them.
The coordinator, of course, would then edit the reviews for
brevity and they would be published along with the article,
signed.
There remains the question of how to choose what goes into the
journal, even after this process. Perhaps we can create
guidelines that will save us from committing the same crimes of
which many of us have been victims. Some of them:
Crime 1: rejecting a paper because it disagrees with the official
point of view or the reviewer's own opinions.
Crime 2: rejecting a paper because the reviewer did not
understand it. Of course if NO reviewers understand it, one must
ask why.
Crime 3: rejecting a paper because it challenges the findings of
other papers that have been published in the same journal (the
in-group syndrome).
Crime 4: rejecting a paper without reference to its substance, on
grounds that have nothing to do with its scientific soundness.
Crime 4, of course, sort of wraps up the other three.
···
-------------------------------------
My own view is that we should not accept papers that are not
principally about PCT but simply use PCT terminology as a way of
talking about other subjects. All articles should teach something
or demonstrate something about PCT. I'm not even sure that the
"worlds" article by Tom Bourbon and me would qualify under this
heading, or that Rick Marken's article would fit (although both
articles did develop some relationships between PCT and specific
aspects of conventional psychology -- we'll have to think about
that).
What I'm trying to say is that this journal should be the basic
research journal for the study of living control systems, and not
attempt also to evangelize. The articles should enhance our
understanding of living control systems or improve on the theory
we use for that understanding. The journal should be the place
for people who are already committed to the exploration of PCT to
publish their ideas and results, and to find out what others have
thought and found. It is not the place to convince psychologists
or anyone else that they ought to pay attention to PCT -- that, I
fear, we must continue to try to do through submissions to
conventional journals.
I feel that all propositions about PCT presented in the journal
should be accompanied by replicable demonstrations, and should
adhere to the statistical concepts we have worked out in several
years on this net. No use of group statistics to derive laws of
individual behavior. All propositions to be tied to a specific
testable model. Replicability of data to be some negotiable
figure corresponding to correlations in the 90s (and confidence
levels of p < 0.00001 or so). Exceptions to be very carefully
scrutinized.
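As a minimal sketch of what "correlations in the 90s" means as a replicability criterion, the following computes a Pearson correlation between a model's output and observed data. The numbers are invented purely for illustration; they stand in for something like model-predicted versus observed handle positions in a tracking task.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: model predictions vs. observed values.
model = [0.0, 1.2, 2.1, 2.9, 4.2, 5.0, 5.8, 7.1]
data  = [0.1, 1.1, 2.3, 3.0, 4.0, 5.2, 5.9, 7.0]

r = pearson_r(model, data)
print(r)  # a correlation "in the 90s" means r above roughly 0.9
```

A model meeting the standard described above would produce r values like this run after run, for each individual subject, rather than a modest group-level correlation.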
Suppose that someone wants to publish an article showing that
behavior is NOT the control of perception. As I interpret my own
words above, it would qualify for publication if it were soundly
argued and accompanied by clear demonstrations of high
reliability. We must never lose sight of the fact that the basic
thing we want to know about living systems is whether they are in
fact control systems; this question must remain forever open if
we want to claim the name of science.
But suppose that this article is based on a philosophical or
theoretical argument, or a mathematical theorem, or some
fundamental principle of physics. I would vote against publishing
it. Without a clear demonstration of the applicability of the
argument to some specific set of real observations, such an
article would be an exercise in pure reason, and that is the
opposite of what I would like to see in THIS journal.
I will argue strongly against excluding articles that are written
at a low level of sophistication. There are many facets to PCT,
not all of them mathematical or amenable to computer
demonstrations. There are formal uses of PCT and informal ones.
People from many fields of interest and with quite different
levels of technical education are contributing to PCT. If you say
"Every time I contradict a person's statement about his own
personality, that person makes an objection intended to
counteract my comment," that is data about PCT and the nature of
a living control system. It is replicable with high reliability.
Simply by being tried, that experiment tests the generality of
the claims of PCT. I would recommend publishing it even if the
study is very simple and very simply reported.
I think that the review process I described above would encourage
articles from people with good ideas but little experience with
rigorous theorizing. Those who submit articles would benefit from
submitting abstracts for reviews while they are still designing
their studies; reviewers with more experience can help them see
how to convert a vague idea into something actually testable, and
can help them select relevant (if narrower) tests and write a
good article on the results. There's no reason for reviewers
simply to sit back and snipe. A person with some experience with
PCT can pick out of a too-general concept many specific concepts
that could be tested; if the author is willing to do the work, an
interesting and useful contribution to PCT could result.
Reviewers can be mentors as well as critics.
Finally, one of my intentions (which I invite others on the net
to help support) is to make sure that there is no bread-and-
butter way of getting published just by turning a crank. The
conventional journals are choked with articles that are simply
another study of the effect of A on B, the author having got
lucky and found a significant, but trivial, effect. If we want
high quality results to be reported, we must set the standards so
that only high quality results are accepted. This means that the
first run-through of an experiment will probably not yield
results publishable here. Only by testing and refining
hypotheses, then testing and refining again, is it possible to
get results with very small errors, very high replicability. We
may have a slim journal if we enforce such standards, but by God
every word in it will be worth reading -- when published, and ten
years later.
Best to all,
Bill P.