
clinmed/2000010010v1 (March 21, 2000)

Peer review in a post-eprints world: a research proposal

James E. Till

Joint Centre for Bioethics and Department of Medical Biophysics, University of Toronto,
and Division of Epidemiology, Statistics and Behavioural Research, Ontario Cancer Institute,
University Health Network, 610 University Avenue, Toronto, Ontario M5G 2M9, Canada

Correspondence to: till{at}oci.utoronto.ca

Introduction

The establishment of the Clinical Medicine NetPrints1 website marks the extension to clinical medicine of a novel experiment in scientific publishing: the publication of electronic preprints (eprints) without prior peer review. The xxx archives2, which now cover eprints in physics, mathematics, nonlinear sciences and computer science, are probably the best known, but other archives are also participating, for example in the Open Archives Initiative3.

The stated purposes of the ClinMed NetPrints website include providing access to electronic preprints of articles, and to facilities for direct reader feedback, prior to eventual publication in a paper journal4. In the editorial announcing the new website4, the editors state: "we have always regarded publication in the paper journal as not the end but rather only part of the peer review process. Every editor has seen published studies destroyed in the correspondence columns".

It is increasingly widely accepted that conventional peer review of manuscripts is "expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud", and can yield published papers that "are often grossly deficient"5. A publication process in which correspondence columns are used to "sort out the good from the bad and point out the strengths and weaknesses of studies"5 has not been compared with conventional peer review. Moreover, "most studies have compared one method of peer review with another and used the quality of the review as an outcome measure rather than the quality of the paper"5.

The remainder of this article is divided into four sections. The first considers a problem (variable rejection rates) that conventional peer review has but eprints avoid. The second reviews a case study of a 'gold standard' for electronic journals, combining online peer review with a second appraisal process (online comments from readers). The third outlines a proposal for ClinMed NetPrints: a sequential process that first gives readers an opportunity to comment, then invites selected NetPrints to be submitted for conventional peer review. The concluding section briefly raises some issues about evaluative studies of eprints.

Not a problem for preprints: rejection rates in conventional peer review

In 1971, Zuckerman and Merton6 published an article about variation in rejection rates across journals in different disciplines. They reported substantial variation, with rejection rates of 20 to 40 percent in the physical sciences, and 70 to 90 percent in the social sciences and humanities. Cole, Simon and Cole7 subsequently suggested that: "Some fields, such as physics, have a norm that submitted articles should be published unless they are wrong. They prefer to make 'Type I' errors of accepting unimportant work rather than 'Type II' errors of rejecting potentially important work". This suggestion might also account for the popularity of the xxx eprint archives2.

Hargens8 reviewed previous explanations of the variation in rejection rates, which he found to be focused on two possible sources: space shortages and variation in consensus. He regarded variation in consensus as the more important determinant of rejection rates. Interdisciplinary variation in scholarly consensus involves the extent to which scholars share conceptions of appropriate research problems, theoretical approaches, or research techniques. When scholars do not share such conceptions, "they tend to view each other's work as deficient and unworthy of publication"8.

Scholarly consensus seems likely to remain an important variable in the evaluation of eprints. Cole9 has pointed out that: "Even at the research frontier minimal levels of consensus are a necessary condition for the accumulation of knowledge". Hargens10 suggested that: "Perhaps a future study should examine the probability that a published paper will provoke a critical comment as a possible measure of scholarly consensus". From this perspective, rapid online responses to an eprint provide a very convenient basis for assessing the extent of scholarly consensus about the topics it addresses.
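
Purely as an illustration (the data format, the coding of comments as 'critical', and all identifiers below are assumptions for this sketch, not part of Hargens's proposal), such a measure might be estimated from archive comment records along the following lines:

    # Sketch: estimating the consensus measure suggested by Hargens from
    # reader comments on eprints. The data format and the "critical"
    # coding are assumed (e.g. coded by hand or by a simple classifier).
    from dataclasses import dataclass

    @dataclass
    class Comment:
        eprint_id: str
        is_critical: bool

    def consensus_proxy(eprint_ids, comments):
        """Fraction of eprints provoking at least one critical comment;
        a higher value would suggest lower scholarly consensus."""
        if not eprint_ids:
            return 0.0
        criticized = {c.eprint_id for c in comments if c.is_critical}
        return len(criticized & set(eprint_ids)) / len(eprint_ids)

    # Hypothetical usage:
    eprints = ["2000010010v1", "2000010011v1", "2000010012v1"]
    comments = [Comment("2000010010v1", True), Comment("2000010011v1", False)]
    print(consensus_proxy(eprints, comments))  # 0.33...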

One proposed reform: online peer review

The current consensus seems to be that, although there are problems with peer review, it is unlikely to be abandoned11, but may be opened up5. Ideally, peer review should be reformed in ways that encourage innovation without a sacrifice of quality control12. One way to reform peer review is to develop new ways to undertake it online.

A case study of a journal that appears only in electronic form, and uses only online review, is provided by the Journal of Interactive Media in Education (JIME)13. JIME uses a three-stage review process. In the first stage, an article submitted (electronically) by its author(s) is assigned to three reviewers selected by the editor. The reviewers' comments, and the authors' responses, are posted on a private website, accessible only to the editors, reviewers and authors for each submission.

In the second stage, revised articles that have been approved by the editors are posted, identified as preprints, on the publicly accessible JIME website13. Reviewers, readers and editors (all of whom are publicly identified) may post comments; editors may, for example, post summaries when the comments on a particular article become numerous.

In the third stage, the authors prepare a final version, which takes into account the comments that have been received, and submit it for final publication in the archives of the journal.
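
To make the sequence concrete, the three stages can be summarized as a simple state machine (a schematic sketch only: the stage names follow the description above, but the approval-driven transition rule is an assumption):

    # Schematic sketch of the three-stage JIME review workflow described
    # above; the transition logic is an assumption for this example.
    from enum import Enum, auto

    class Stage(Enum):
        PRIVATE_REVIEW = auto()   # stage 1: closed review by three referees
        PUBLIC_PREPRINT = auto()  # stage 2: open commentary on the preprint
        ARCHIVED = auto()         # stage 3: final version in the archives

    def advance(stage, editors_approve):
        """Move a submission to the next stage once the editors approve."""
        if not editors_approve:
            return stage  # pending revision; stays in the current stage
        if stage is Stage.PRIVATE_REVIEW:
            return Stage.PUBLIC_PREPRINT
        if stage is Stage.PUBLIC_PREPRINT:
            return Stage.ARCHIVED
        return stage  # already archived; no further transitions

    stage = Stage.PRIVATE_REVIEW
    stage = advance(stage, editors_approve=True)  # -> PUBLIC_PREPRINT
    stage = advance(stage, editors_approve=True)  # -> ARCHIVED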

This process might be regarded as a 'gold standard' for online peer review. However, it takes time, and requires a lot of effort by all of those who are involved. It seems unlikely to be practical unless the number of articles is quite small (12 articles were published in 1998 by JIME13).

Might comments from self-selected readers substitute for comments from referees selected by the editors? Bingham and colleagues14 addressed this question, and concluded that: "Postpublication review by readers on the internet is no substitute for commissioned prepublication review, but can provide editors with valuable input from individuals who would not otherwise be consulted". The proposal about ClinMed NetPrints in the next section is based on this conclusion.

A proposal about ClinMed NetPrints

The editorial announcing the launch of ClinMed NetPrints4 did not clearly state to what extent the editors of the BMJ plan to take proactive steps to solicit the revision of NetPrints and their submission for conventional peer review. Unless otherwise negotiated, authors of preprints posted on the ClinMed NetPrints website retain copyright, and could submit revised versions to any journal willing to accept them for conventional peer review.

The editors of BMJ (and of other journals) might be well advised to consider the NetPrints posted at the ClinMed NetPrints website as equivalent to articles that have been submitted directly to their journal. After screening the NetPrints using their usual editorial screening criteria, they could decide to invite selected authors to submit their NetPrints (or revised versions of them) for conventional peer review.

Thus, a posted NetPrint would, in effect, have been submitted simultaneously to a potentially large number of relevant journals. Editors of different journals might soon discover that they are competing with each other to solicit the NetPrints they find interesting. Authors might then have to choose among journals, and decide to which one to submit first for conventional peer review.
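
As a hedged illustration of this sequential process (all journal names, topics and screening criteria below are invented for the example, not drawn from any real journal), the pool-and-invitation step might look like this:

    # Sketch of the proposed process: journals screen a shared pool of
    # NetPrints and issue invitations; each author then picks one journal
    # for conventional peer review. All data here are hypothetical.
    def invitations(pool, journals):
        """Map each NetPrint to the journals whose screening it passes."""
        return {p["id"]: [j["name"] for j in journals if j["screen"](p)]
                for p in pool}

    pool = [{"id": "np-001", "topic": "oncology"},
            {"id": "np-002", "topic": "cardiology"}]
    journals = [{"name": "Journal A",
                 "screen": lambda p: p["topic"] == "oncology"},
                {"name": "Journal B",
                 "screen": lambda p: True}]  # broad-scope journal

    invites = invitations(pool, journals)
    print(invites)  # {'np-001': ['Journal A', 'Journal B'], 'np-002': ['Journal B']}
    # The author of np-001 now faces the choice described above: which of
    # the two inviting journals to approach first.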

Such a process should have advantages for both authors and editors. Editors would increasingly be able to draw on a large existing pool of preprints from which to solicit submissions, especially preprints that fit their journal's particular 'niche' well. Editors of journals that refuse to participate in such a sequential publication process might suffer a loss of reputation. Authors of articles deemed to be of interest could quickly find an appropriate publisher. Competition among journals (and among authors) might be expected to enhance both the quality of manuscripts and the efficiency of the publication process.

It should be noted that, no matter which journal publishes an article, it is likely that the article will at some point become publicly accessible in a major electronic archive, such as JSTOR15 or PubMed Central16.

Conclusion: more evaluative studies are needed

Evaluative studies of eprints are needed. For example, might articles published initially as preprints, and subsequently revised on the basis of comments from readers, be of higher quality than articles submitted directly to a journal?

When making such a comparison, what criteria should be used to evaluate the quality of articles? As noted above, most studies have "used the quality of the review as an outcome measure rather than the quality of the paper"5. This important issue will not be addressed further here, except to make two points. The first is that it would help researchers interested in the evaluation of eprints if every preprint archive included a (preferably standardized and publicly accessible) set of usage statistics. Such statistics might include data on the relative popularity of individual eprints, using measures such as the number of times a particular preprint is visited, the number of times it is downloaded, and the median duration of visits to it. For example, a collection of electronic theses and dissertations (ETDs) currently provides statistics on the ten most accessed ETDs17.
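
For instance (a sketch only: the log format and field names are assumptions, not a description of any existing archive's records), the suggested statistics might be computed from raw access logs as follows:

    # Sketch: computing the usage statistics suggested above from a
    # hypothetical archive access log of (eprint_id, action, seconds).
    from collections import Counter
    from statistics import median

    log = [("2000010010v1", "visit", 45),
           ("2000010010v1", "download", 0),
           ("2000010010v1", "visit", 210),
           ("2000010011v1", "visit", 30)]

    visits = Counter(e for e, action, _ in log if action == "visit")
    downloads = Counter(e for e, action, _ in log if action == "download")

    def median_visit_duration(eprint_id):
        durations = [secs for e, action, secs in log
                     if e == eprint_id and action == "visit"]
        return median(durations) if durations else None

    print(visits.most_common(10))  # basis for a "ten most accessed" list
    print(downloads["2000010010v1"])              # 1
    print(median_visit_duration("2000010010v1"))  # 127.5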

The second point about measures is to reiterate Tukey's warning: "when the right thing can only be measured poorly, it tends to cause the wrong thing to be measured, only because it can be measured well"18.

Competing interests: None

References

1. Clinical Medicine NetPrints. http://clinmed.netprints.org/home.dtl

2. The xxx archives. http://xxx.lanl.gov/ and http://arXiv.org/

3. The Open Archives Initiative. http://www.openarchives.org

4. Smith R, Keller MA, Sack J, Witscher B. Netprints: the next phase in the evolution of biomedical publishing. BMJ 1999; 319: 1515-1516

5. Smith R. Peer review: reform or revolution? BMJ 1997; 315: 759-760

6. Zuckerman HA, Merton RK. Patterns of evaluation in science: Institutionalization, structure and functions of the referee system. Minerva 1971; 9: 66-100

7. Cole S, Simon G, Cole JR. Do journal rejection rates index consensus? Am Sociol Rev 1988; 53(1): 152-156

8. Hargens LL. Scholarly consensus and journal rejection rates. Am Sociol Rev 1988; 53(1): 139-151

9. Cole S. The hierarchy of the sciences? Am J Sociol 1983; 89(1): 111-139

10. Hargens LL. Further evidence on field differences in consensus from the NSF peer review studies. Am Sociol Rev 1988; 53(1): 157-160

11. Bottiger LE. Printed medical journals - will they survive? J Intern Med 1999; 246(2): 127-131

12. Horrobin DF. The philosophical basis of peer review and the suppression of innovation. JAMA 1990; 263(10): 1438-1441

13. Journal of Interactive Media in Education (JIME). http://www-jime.open.ac.uk/

14. Bingham CM, Higgins G, Coleman R, Van Der Weyden MB. The Medical Journal of Australia internet peer-review study. Lancet 1998; 352(9126): 441-445

15. JSTOR. http://www.jstor.org/

16. PubMed Central. http://www.pubmedcentral.nih.gov/

17. Statistics on the usage of the Virginia Tech collection. [10 most-accessed VT ETDs in 1998]

18. Tukey JW. Methodology, and the statistician's responsibility for BOTH accuracy AND relevance. J Am Stat Assoc 1979; 74(368):786-793

Note: the webpage addresses referred to above were visited on January 17, 2000.



