Tuesday, September 22, 2009

Nature attacks peer review

In the latest issue of Nature, the journal has published a rather unfair attack on peer review. Peer review is the process most journals use to assess the merit of individual papers: submissions are first screened by editorial staff, then sent to scientists working in the field for review, and the reviewers' reports are then weighed by the editors to decide whether the paper warrants publication. While it is the standard today, there was considerable resistance to peer review in the past, as the editorial staff of journals were loath to give up their power of selection. Notably, Nature, founded in 1869, only moved towards peer review a century later, under the direction of John Maddox. Other journals, such as PNAS, are only now scrapping their peer-review bypasses.

There are certainly problems with the journal submission process, but typically they involve too little peer review rather than too much. A journal such as Nature rejects the majority of papers without review, and those papers that are reviewed see only two or three reviewers each. Scientists put a lot of effort into reviewing, but as an unpaid and unrequited favour it is never their highest priority. Even after review, the editorial staff have enormous power to accept or decline the advice of the reviewers; Nature once famously published a paper falsely purporting to show effects of homeopathy. These editorial decisions tend to combine the anticipated news splash (Nature and Science compete for citations in the big newspapers), the "boys club" effect (no longer all male, but the big names certainly have an easier path to acceptance) and editorial "gut feeling".

To justify this editorial over-ride, defects in peer review are commonly cited. In its latest editorial, Nature presents the results of an unpublished study, presented at a conference, as showing a bias of peer review towards positive results. That may be so, but does the cited study actually show it? The study submitted two papers, one with positive results and one with negative results, to two journals and analysed the peer-review reports. Reviewers at one journal (Journal of Bone and Joint Surgery) ranked the negative-results paper slightly lower, while the second journal (Clinical Orthopaedics and Related Research) showed no significant difference. Hardly a damning indictment of peer review.

What methodological flaws could account for the minor difference observed at one of the two journals?

* Different reviewers. Each paper was assessed by a different set of reviewers, and even picking 100 reviewers per paper does not cancel out reviewer-to-reviewer variability unless the reviewers were carefully stratified to ensure comparable panels. A toy simulation after this list illustrates how large this chance effect can be.

* The quality of the two papers may have differed. The study's author tried to make them as identical as possible, but different results need to be presented differently, and as the study is unpublished we have only the author's opinion that the two papers were of equal quality.

* Positive and negative results can have very different "impacts". Most journals explicitly ask reviewers to judge both scientific validity and scientific impact. Negative results generally have lower impact and hence attract lower review scores, exactly as the journals request. To remove this effect, the papers should have been submitted to a journal such as PLOS ONE, which asks reviewers to judge scientific quality alone.

* Positive and negative results require different statistical standards. A positive result needs only simple statistics to show that the two groups differ. A negative result requires more careful statistics and can only state that the groups did not differ above a certain level; it can never exclude a true effect smaller than the one the study was designed to detect. The sketch below makes this asymmetry concrete.
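
As a minimal sketch of this last point (my own illustration in Python, not from the post or the cited study; the group sizes, means and spreads are assumed purely for demonstration):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)

# "Positive" result: a simple two-sample t-test asks whether the groups differ.
treated = rng.normal(loc=1.0, scale=1.5, size=50)  # assumed true effect
control = rng.normal(loc=0.0, scale=1.5, size=50)
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"positive result: t = {t_stat:.2f}, p = {p_value:.4f}")

# "Negative" result: failing to reject the null does not exclude a small
# effect. Solving for the smallest standardised effect detectable at 80%
# power and alpha = 0.05 shows the floor below which this design is blind.
mde = TTestIndPower().solve_power(nobs1=50, alpha=0.05, power=0.8)
print(f"minimum detectable effect at 80% power: d = {mde:.2f}")
# Any true effect smaller than d is fully consistent with a "negative"
# finding from this design.
```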

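To put a rough number on the first point above, here is a toy Monte Carlo simulation (my own, with an assumed spread in reviewer harshness, not data from the study). Two papers of identical quality, each scored by its own 100 randomly drawn reviewers, still end up with mean scores that differ by chance:

```python
import numpy as np

rng = np.random.default_rng(42)
true_quality = 7.0   # same underlying quality for both papers (assumed 10-point scale)
reviewer_sd = 1.5    # assumed reviewer-to-reviewer spread in harshness
n_reviewers = 100    # reviewers drawn independently for each paper

gaps = []
for _ in range(10_000):
    scores_a = true_quality + rng.normal(0, reviewer_sd, n_reviewers)
    scores_b = true_quality + rng.normal(0, reviewer_sd, n_reviewers)
    gaps.append(abs(scores_a.mean() - scores_b.mean()))

gaps = np.array(gaps)
print(f"median chance gap between mean scores: {np.median(gaps):.2f}")
print(f"95th percentile of the chance gap:     {np.quantile(gaps, 0.95):.2f}")
# Without stratifying the two reviewer panels, a residual gap of a few
# tenths of a point remains that has nothing to do with the papers.
```
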
Certainly the most obvious sign of "positive bias" evidenced by this episode is the decision by Nature to write an editorial and broadcast a podcast on a minor unpublished study that denigrates peer reviewers and thereby elevates editorial staff. Would they have written a similar editorial on an unpublished presentation showing no sign of bias among peer reviewers? The minor effect observed at one of the two journals tested (with all the caveats above) did not warrant Nature filling its editorial with phrases such as "dirty", "biased", "more negative and critical" and "biased, subjective people". The worst bias of all is the accusation that the reviewers in the second study showed no statistical bias only because "these reviewers guessed they were part of an experiment". Surely Nature should be able to spot that subjective reporting, dismissing negative results and elevating positive ones, is the very definition of positive-result bias!
