Abstract

Peer review lies at the core of science and academic life. In one of its most pervasive forms, peer review for the scientific literature is the main mechanism that research journals use to assess quality. Editors rely on their review systems to inform the choices they must make from among the many manuscripts competing for the few places available for published papers. In the past 50 years, the use of peer review has become the “gold standard” by which journals are judged, just as journals use it to judge papers. And whereas journals in all branches of science share the core ethos and values of peer review, it has evolved in diverse ways to best fit the environments and circumstances of the various sciences and disciplines. However, our purpose in this task force report, “Review Criteria for Research Manuscripts,” is not to discuss the general nature and permutations of peer review, as important as those topics are. Others have already done this thoughtfully and well.1–4 Their work has focused on the tensions inherent in the peer review process, the state of peer review and major changes in it, particularly over the last 20 years, the development of data derived from research on peer review, and specific areas of contention and ethics raised by the conduct of peer review. Our intention, in contrast, is to contribute to the practice of review and develop a scholarly resource for reviewers to use as they review manuscripts. Both review and reviewers are often misunderstood by authors and the reviewers themselves. Authors often feel that decisions about their manuscripts are based on mysterious criteria and standards, in a largely secretive process run by editors and unknown reviewers. 
Their concerns about the opacity of review processes are well founded: Colaianni found that fewer than half of the journals in her sample of journals from four subject fields actually included clear statements about their peer review practices.5 Reviewers, too, are handcuffed by a lack of information; they are usually told little, if anything, about their role in how decisions are made about journal articles or about what is expected of them.6 “Review Criteria for Research Manuscripts” grew from an effort to address the need for more information about review systems and reviewing in the medical education research community. By forming a task force to concentrate on the needs of reviewers, we hoped to develop, sort, and present information that would, in turn, help to increase the quality of peer review that members of this community provide to journals and to one another. To meet this need, the task force focused on the core questions: Who needs information most, and what information do they most need? The trajectories of our answers to these questions (developed through a normative group process) crossed at reviewers and criteria, and what we have produced is a reference tool for reviewers to use when they receive research manuscripts that they have been asked to review.

BACKGROUND

When grappling with what information was needed and who needed it, we could not ignore how perceptions of and attitudes toward peer review have changed over recent decades and among different research communities. Further, these changes have varied from discipline to discipline, field to field, and science to science. Peer review was originally conceived to provide advice for the editor, the equivalent of asking the knowledgeable colleague down the hall for an opinion. By the 1960s and 1970s, however, it had come to be the measure of quality for journals: high-quality journals use strong peer-review systems.
When the National Library of Medicine created Index Medicus in the 1960s, peer review was not a requirement for a journal's inclusion, but it was a highly weighted factor, as remains the case today. As scholarly publication flourished, particularly in the sciences, and hundreds of new journals emerged, the expectation was that these journals would be founded on the practice of peer review, and the practice was solidified. The spread of peer review and its adoption as the standard of quality brought with them, however, ethical and other problems that challenge the norms of conduct in peer review systems. A few widely publicized instances of fraud and misconduct (particularly the Darsee7 and Slutsky8 cases) that came to light in the 1980s illustrated starkly the problems with authorship, duplicate publication, and other publication misconduct that many editors had been concerned and frustrated about for years.

In 1978 the editors of ten internationally prominent medical journals formed a group to begin cooperative work on common problems that affected journals. Originally called the Vancouver Group (after the site of their first meeting), the group soon took more formal shape and status, becoming the International Committee of Medical Journal Editors (ICMJE). The group has become increasingly important over the past 20 years, meeting each year and periodically issuing consensus statements, which hundreds of other journals voluntarily sign on to. Several of the statements deal indirectly, and some directly, with peer review.9 In seeking to understand and improve peer review, however, editors in biomedicine had more questions than answers. Stephen Lock's pivotal book, A Delicate Balance: Editorial Peer Review in Medicine,2 presented a systematic look at peer review, bringing together the whole body of relevant research across the sciences.
Then in 1989, the American Medical Association sponsored the First International Congress on Peer Review in Biomedical Publication, and JAMA published the proceedings in a special issue with the evocative title “Guarding the Guardians: Research on Editorial Peer Review.”10 Two other congresses followed, in Chicago in 1993 and in Prague in 1997, each with proceedings published in JAMA,11,12 and a fourth congress is scheduled for Barcelona in September 2001. The emphasis of these meetings is research on peer review and other issues important in bioscience journals; the importance of creating a community and forum for the presentation of research on peer review cannot be overstated.

STATUS OF RESEARCH ON REVIEWING AND REVIEWERS

The research by the bioscience editors is not the only research on peer review, although it has come to dominate in the past decade. Parallel work has been done in psychology, sociology, economics, and other fields. Taken together, this knowledge illuminates many aspects of review and provides the increasing body of evidence that editors need to support their review systems or changes to them. In particular, two decades of research have deepened the understanding of reviewing and reviewers. Three overviews, written at different times and from different perspectives, have summarized what is known from research. The first, based in the social sciences, was Armstrong's 1982 article,13 which reviewed research on science journals and the editorial policies of leading journals and then presented the implications and his recommendations for improvements. The second was Lock's 1985 book,2 already mentioned. The third, which is very recent, is the systematic review by Overbeke14; it is the best current summary source and introduction to studies in the area. Research into the kinds of reviewers who do better reviews for editors, that is, the types of reviews that editors value, has produced contradictory results so far.
A 1993 study15 found fairly strong evidence that good peer reviewers tended to be under age 40, were from top-ranked academic institutions, were well known to the editor, and were blinded to the identity of the paper's author. A 1998 study,16 on the other hand, was not able to identify the characteristics of good reviewers. Its closest findings, which were very weak, were that reviewers between ages 40 and 60 did better reviews than did those over age 60, and that reviewers educated in North America and trained in epidemiology or statistics did better reviews. These two studies were done at medical journals; there is no parallel body of research for the social science journals.

Studies in the biomedical sciences and social sciences over the past decade have produced mixed findings about using a masked review system, also called a double-blinded system. In a masked system, the reviewer does not know the identity of the author or institution; this is in addition to the customary practice of concealing the identity of the reviewer from the author. Studies in economics journals produced strong support for using masked review.17–19 But similar studies of review in the bioscience journals have been more mixed, although evidence is firming up on some issues. Although earlier research had indicated otherwise, two randomized controlled trials in the 1990s found that masking made no difference in the quality of the reviews of papers at prestigious biomedical journals.20,21 Likewise, open peer review, where the identities of the author and reviewer are known to each other, apparently did not affect the quality of reviews.22 Nonetheless, this issue is debated strongly among editors, and more research is needed at the few journals that have open peer review.
Reviewers have to respond to the widely varied expectations and procedures of the journals that ask them for reviews, because the journals have different ways of obtaining information from them.6 Journals have been able to develop validated assessment instruments to evaluate reviewers' performances.22 Men and women may behave somewhat differently as reviewers. For example, a 1990 study reported that women reviewers accepted three times more articles by women authors than by men authors, whereas male reviewers accepted equal proportions.23 And a 1994 study found differences between men and women in several review activities.24,25 Reviewers may also react to papers differently, depending on the content. Again, the findings are mixed. On the one hand, a study found that reviewers seemed to favor results that support the

Publication Info

Year: 2001
Type: article
Volume: 76
Issue: 9
Pages: 904-908


Cite This

Georges Bordage, Addeane S. Caelleigh (2001). A Tool for Reviewers. Academic Medicine, 76(9), 904-908. https://doi.org/10.1097/00001888-200109000-00013
