Journal proposals

[From Bill Powers (930304.2000)]

Greg Williams (930304) --

RE: the journal

I think that something like Gary Cziko's suggestion would work --
sending abstracts to CSGnet with a request for reviewers. Come to
think of it, the job of a potential coordinator would be eased
even more if those who wanted to submit articles to the journal
simply posted abstracts to the net with a request for reviews.
Anyone who wanted to do a review could then ask for a direct
email copy, review it, and send the review to the coordinator.

Elaborating along the same lines, reviewers could send their
reviews directly back to the person writing the article, raising
objections, offering suggestions, etc. In this way the reviewers
would have to take responsibility for their opinions, and would
interact directly with the author(s) at least once and possibly
several times without a coordinator standing in the way or
needing to relay things back and forth.

To keep this process from dragging out interminably, I suggest
that when an abstract is posted, it be marked with a date.
Reviewers would then be required to notify the author that they
are taking on the review within a week of that date, and all
interactions would cease (1,2,3?) months from the submission
date. At that point authors could submit a final version to the
coordinator and the reviewers, and the reviewers could send their
final commentary to the coordinator. In this way the coordinator
would receive a paper that has already been criticized and
revised to the extent that the author is willing, and the
reviewers would have thrashed out minor points with the author
that the coordinator should not have to bother with. Really bad
articles would have been strongly discouraged by the reviewers,
so perhaps the authors would have the sense to withdraw them.

The coordinator, of course, would then edit the reviews for
brevity and they would be published along with the article,
signed.

There remains the question of how to choose what goes into the
journal, even after this process. Perhaps we can create
guidelines that will save us from committing the same crimes of
which many of us have been victims. Some of them:

Crime 1: rejecting a paper because it disagrees with the official
point of view or the reviewer's own opinions.

Crime 2: rejecting a paper because the reviewer did not
understand it. Of course if NO reviewers understand it, one must
ask why.

Crime 3: rejecting a paper because it challenges the findings of
other papers that have been published in the same journal (the
in-group syndrome).

Crime 4: rejecting a paper without reference to its substance, on
grounds that have nothing to do with its scientific soundness.

Crime 4, of course, sort of wraps up the other three.

···

-------------------------------------
My own view is that we should not accept papers that are not
principally about PCT but simply use PCT terminology as a way of
talking about other subjects. All articles should teach something
or demonstrate something about PCT. I'm not even sure that the
"worlds" article by Tom Bourbon and me would qualify under this
heading, or that Rick Marken's article would fit (although both
articles did develop some relationships between PCT and specific
aspects of conventional psychology -- we'll have to think about
that).

What I'm trying to say is that this journal should be the basic
research journal for the study of living control systems, and not
attempt also to evangelize. The articles should enhance our
understanding of living control systems or improve on the theory
we use for that understanding. The journal should be the place
for people who are already committed to the exploration of PCT to
publish their ideas and results, and to find out what others have
thought and found. It is not the place to convince psychologists
or anyone else that they ought to pay attention to PCT -- that, I
fear, we must continue to try to do through submissions to
conventional journals.

I feel that all propositions about PCT presented in the journal
should be accompanied by replicable demonstrations, and should
adhere to the statistical concepts we have worked out in several
years on this net. No use of group statistics to derive laws of
individual behavior. All propositions to be tied to a specific
testable model. Replicability of data to be some negotiable
figure corresponding to correlations in the 90s (and confidence
levels of p < 0.00001 or so). Exceptions to be very carefully
scrutinized.

Suppose that someone wants to publish an article showing that
behavior is NOT the control of perception. As I interpret my own
words above, it would qualify for publication if it were soundly
argued and accompanied by clear demonstrations of high
reliability. We must never lose sight of the fact that the basic
thing we want to know about living systems is whether they are in
fact control systems; this question must remain forever open if
we want to claim the name of science.

But suppose that this article is based on a philosophical or
theoretical argument, or a mathematical theorem, or some
fundamental principle of physics. I would vote against publishing
it. Without a clear demonstration of the applicability of the
argument to some specific set of real observations, such an
article would be an exercise in pure reason, and that is the
opposite of what I would like to see in THIS journal.

I will argue strongly against excluding articles that are written
at a low level of sophistication. There are many facets to PCT,
not all of them mathematical or amenable to computer
demonstrations. There are formal uses of PCT and informal ones.
People from many fields of interest and with quite different
levels of technical education are contributing to PCT. If you say
"Every time I contradict a person's statement about his own
personality, that person makes an objection intended to
counteract my comment," that is data about PCT and the nature of
a living control system. It is replicable with high reliability.
Simply by being tried, that experiment tests the generality of
the claims of PCT. I would recommend publishing it even if the
study is very simple and very simply reported.

I think that the review process I described above would encourage
articles from people with good ideas but little experience with
rigorous theorizing. Those who submit articles would benefit from
submitting abstracts for reviews while they are still designing
their studies; reviewers with more experience can help them see
how to convert a vague idea into something actually testable, and
can help them select relevant (if narrower) tests and write a
good article on the results. There's no reason for reviewers
simply to sit back and snipe. A person with some experience with
PCT can pick out of a too-general concept many specific concepts
that could be tested; if the author is willing to do the work, an
interesting and useful contribution to PCT could result.
Reviewers can be mentors as well as critics.

Finally, one of my intentions (which I invite others on the net
to help support) is to make sure that there is no bread-and-
butter way of getting published just by turning a crank. The
conventional journals are choked with articles that are simply
another study of the effect of A on B, the author having got
lucky and found a significant, but trivial, effect. If we want
high quality results to be reported, we must set the standards so
that only high quality results are accepted. This means that the
first run-through of an experiment will probably not yield
results publishable here. Only by testing and refining
hypotheses, then testing and refining again, is it possible to
get results with very small errors, very high replicability. We
may have a slim journal if we enforce such standards, but by God
every word in it will be worth reading -- when published, and ten
years later.

Best to all,

Bill P.

[Martin Taylor 930305 12:15]
(Bill Powers 930304.2000)

I suppose I shouldn't argue with Bill on the topic of his baby growing up,
but there are a few features of his commentary that I find disturbing.

If there were a Journal of Living Systems, based around PCT as the
foundational idea, then it should incorporate all aspects of development
of PCT. That includes not only experiments, but theory and explanation.

I really don't like Bill's proposal:

I feel that all propositions about PCT presented in the journal
should be accompanied by replicable demonstrations, and should
adhere to the statistical concepts we have worked out in several
years on this net. No use of group statistics to derive laws of
individual behavior. All propositions to be tied to a specific
testable model. Replicability of data to be some negotiable
figure corresponding to correlations in the 90s (and confidence
levels of p < 0.00001 or so). Exceptions to be very carefully
scrutinized.

How do you make a "reliable demonstration" of how a proposed Decision
Making Entity interacts with one or a set of ECSs? Or of whether
a program-type ECS can be segregated from the sequence-type ECSs that
presumably support it? How do you make a reliable demonstration that
PCT-based methods of teaching a second language work? As Joel Judd
once remarked, you can't even replicate such things, even if you can
experiment with them.

The theoretical discussion of what could be, must be, and cannot be, is
as important to PCT as demonstrations of one or two level models that
fit real data. Those are just demonstrations. They even mislead
many people as to where the power of PCT lies.

And I think Bill violates his own principles (committing Crime 2 or
Crime 1) by his dogmatic rejection of group statistics. If maxim 1
of PCT is that you can't do anything about information you can't get,
maxim 2 must be that you can use information you can get. It IS valuable
to the educator to have 70% of the children rather than 30% understand,
say, Pythagoras' Theorem. A change in teaching method that achieves this
is a good thing. Can you achieve (have you achieved) 99% accuracy in
modelling in a situation where a 4-level hierarchy is necessary? Do you
expect to?

What I'm trying to say is that this journal should be the basic
research journal for the study of living control systems, and not
attempt also to evangelize. The articles should enhance our
understanding of living control systems or improve on the theory
we use for that understanding.

If the theory is any good, all the reports will evangelize. Review articles
serve to bring together results, and again, if they are good, the article
evangelizes. Tutorial articles do the same. The articles by you and Rick
enhance people's understanding. I would object to dishonest attempts to
evangelize, but "the truth shall set you free."

The proposal that articles might improve on the theory seems to contradict
the suggestion that all propositions should be accompanied by reliable
demonstrations. As does:

But suppose that this article is based on a philosophical or
theoretical argument, or a mathematical theorem, or some
fundamental principle of physics. I would vote against publishing
it. Without a clear demonstration of the applicability of the
argument to some specific set of real observations, such an
article would be an exercise in pure reason, and that is the
opposite of what I would like to see in THIS journal.

So you don't want theoretical articles. I do. But it's your baby, so
you get the casting vote. Fundamental principles of mathematics or
physics won't go away, and need no demonstration. The demonstration
they need is that they apply to the situation at hand.

On reviews: in principle I like the electronic review idea, but diagrams
are particularly important in PCT. Many of the ongoing arguments could
be resolved if one high-quality picture were available for both discussants
to use. You couldn't visualize my 2-level 2-ECS structure for talking
about conflict, and that was a very trivial one.

As I just posted to Greg, I see no need for volunteer reviewers. I receive
a stream of articles for review, sometimes from journals I have never heard
of. Never have I volunteered as a reviewer, but if time permits, I usually
accept the job. I think most scientists are the same. Why ask for volunteers?

Martin

[From Dick Robertson] (930308)
Bill Powers' suggestions about the Journal sound really good to me. I would
suggest, though, that there be a section of "Brief Reports" for those first-
round articles that he said would usually not be good enough to publish.
I think it would be good to report them briefly so that others working on
similar studies could know what's in the works.
Best, Dick.