On the angenmap mailing list, I wrote:
Overall, I see little justification for believing that our current system [of peer review] is particularly good. It's just comfortable, especially for the people who have been molded by it.
Chris Moran responded:
I frequently hear these assertions about weak correlations, false negative rates and false positives for impact, but no one ever seems to provide any data to support the claims. I am speaking from experience as an editor and can say with confidence that I have never seen a paper yet that hasn't been improved by reviewer and editorial input, always with respect to presentation and very frequently with respect to science. I can also say with confidence that there is considerable incentive for people to submit fraudulent manuscripts given the pressure to publish and I can't see how this would not get very much worse in an unmoderated publication venue.
I would never say that our current system of publication is perfect, but I remain confident that it is better than the alternatives.
I just posted a long response with some citations, and I'd like to see if I can crowdsource some additional citations. Please add 'em at the end!
Hi Chris,
In my previous e-mail, I gave this single citation for concerns about peer review:
http://breast-cancer-research.com/content/12/S4/S13
I couldn't tell whether you had missed that or disagreed. In any case, here are a few more citations, taken from that article and elsewhere, with annotations.
---
[1] Why most published research findings are false
Ioannidis JP.
http://www.ncbi.nlm.nih.gov/pubmed/16060722
This is one of an eye-opening set of articles by Ioannidis about the way statistics are used in publications, and how standard statistical practice virtually guarantees that many published research findings are false. I would appreciate a link to a better summary; I've mostly read the pop-science press accounts.
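To make the core of that argument concrete: the paper's central calculation is the positive predictive value of a "significant" finding, given the pre-study odds R that a tested relationship is true, the type I error rate alpha, and the statistical power. Here's a minimal sketch in Python (the numbers are illustrative, not taken from the paper):

```python
# Positive predictive value of a "significant" finding (Ioannidis 2005):
# per (R + 1) hypotheses tested, power * R are true positives and
# alpha * 1 are false positives, so
#   PPV = power * R / (power * R + alpha)

def ppv(R, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding is true."""
    return (power * R) / (power * R + alpha)

# If only 1 in 10 tested hypotheses is actually true:
print(ppv(R=0.1))   # ~0.62, i.e. nearly 4 in 10 "findings" are false
# If only 1 in 100 is true (e.g. in exploratory scans):
print(ppv(R=0.01))  # ~0.14, i.e. most "findings" are false
```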
---
[2] High Impact = High Statistical Standards? Not Necessarily So
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0056180
A worrying discussion of statistical practices in some top journals. Quote: "We interpreted these differences as consequences of the editorial policies adopted by the journal editors, ..."
---
[3] Reproducibility of peer review in clinical neuroscience
http://www.ncbi.nlm.nih.gov/pubmed/10960059
"Agreement between the reviewers as to whether manuscripts should be accepted, revised or rejected was not significantly greater than that expected by chance"
---
[4] What errors do peer reviewers detect?
http://www.ncbi.nlm.nih.gov/pubmed/18840867
"Conclusions: Editors should not assume that reviewers will detect most major errors."
---
[5] On plummeting manuscript acceptance rates by the main ecological journals and the progress of ecology
http://library.queensu.ca/ojs/index.php/IEE/article/view/4351
A really nice article showing that journal selectivity in ecology does not correlate well with citation counts, based on an analysis of recent papers in PLoS One compared with other journals.
This is a direct example of false negatives: the "selective" journals like Oikos would presumably have rejected at least 50% of the papers that appeared in PLoS One, despite those papers' eventual competitiveness in citations.
---
[6] Arsenic life.
The "arsenic life" debacle was an example of a false positive editorial and reviewer decision. See http://www.usatoday.com/story/tech/columnist/vergano/2013/02/01/arseniclife-peer-reviews-nasa/1883327/ for the actual review text, and http://storify.com/carmendrahl/arsenic-based-life-paper-peer-review-process-comes for the online discussion. Here the post-publication and online peer review was considerably more effective than the actual peer review and editorial decision.
My sense is that everyone is aware of papers in Nature and Science that are generally regarded as either wrong or boring, but I don't have any citations supporting this at hand. I'd love to get some.
---
[7] Google Scholar "top publications" ranking based on h5-index:
http://scholar.google.com/citations?view_op=top_venues
Note that astro-ph arXiv, an unmoderated forum for posting preprints, is #12 in citations overall. For bioinformatics specifically, which is cursed with few high-impact venues, arXiv appears at #6 and #9:
http://scholar.google.com/citations?view_op=top_venues&hl=en&vq=eng_bioinformatics
which, if you regard citations as a good metric of impact, demonstrates that a largely unmoderated publication venue is not automatically a bad thing.
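(For reference, the h5-index behind those rankings is straightforward to compute: it's the largest h such that the venue published h papers in the past five years with at least h citations each. A quick sketch, with made-up citation counts:)

```python
# h5-index: the largest h such that h of a venue's papers from the
# last five years each have at least h citations.

def h_index(citation_counts):
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c < i:
            break
        h = i
    return h

# hypothetical per-paper citation counts for one venue:
print(h_index([50, 30, 22, 10, 8, 6, 5, 4, 1]))  # -> 6
```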
---
From the Breast Cancer Research article above: "Ironically, a faith based rather than an evidence based process lies at the heart of science." I'm not sure I've seen an unambiguously better approach to editorial and peer review than the current system, but asserting that "it is better than the alternatives" should rest on evidence rather than opinion -- just like the rest of science. So far I don't think we have that evidence.
Note, I didn't argue for an unmoderated publication venue in my previous e-mail. My strongest opinion in this area is that papers should be published based on whether or not they're correct, not on whether an editor and a set of reviewers think they're likely to be important.
For myself, I've had some great experiences with peer review, as well as some not-so-great ones. I'm working on a blog post about my changing opinions (towards becoming a bit more conservative, believe it or not :) that I'd be happy to pass on to interested people when I eventually post it.
Have a good weekend, folks!
--titus
Comments!