The likely challenges of (post) publication peer review

CORRECTION: I mistakenly linked to Geoff Bilder, Jennifer Lin, and Cameron's piece on infrastructure in the first posted version, rather than Cameron's post on culture in data sharing. Both are worth reading, but the latter is more relevant to this post, and I also wanted to make sure I correctly attributed the former to all three of Geoff, Jennifer, and Cameron.


I am a huge fan of the idea of changing up scientific peer review.

Anyone paying attention has seen a nigh-daily drumbeat of blog posts, tweets, and op-eds pointing out the shortcomings of the current approach, which is dominated by anonymous pre-publication peer review.

For those who don't know how the current system works, it goes something like this:

  1. Author submits paper to journal.
  2. Journal finds editor for paper.
  3. Editor decides whether paper is important enough to review, finds reviewers for paper.
  4. Reviewers review paper, send reviews to editor.
  5. Editor collates reviews, anonymizes them, and sends them to authors along with a decision (reject, major revisions, minor revisions, accept).
  6. Go to #1 (if it's a reject), or go to #4 (if revisions are required), or go to... (this loop is sketched in toy code below the list).
  7. Publish!
  8. Garner academic fame, reap grant money, write another paper, go to #1.
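If it helps to see the loop written out, here's a toy sketch in Python. It's not anyone's actual editorial system - the decision function, outcomes, and round limit are all made up - it just makes the control flow of steps 1-8 explicit.

    import random

    def editor_decision():
        # Stand-in for steps 3-5: triage, review, and collation into a decision.
        return random.choice(["reject", "major revisions", "minor revisions", "accept"])

    def submit(paper, max_rounds=10):
        for round_num in range(1, max_rounds + 1):
            decision = editor_decision()
            print(f"round {round_num}: {decision}")
            if decision == "accept":
                return "published"            # step 7: publish!
            elif decision == "reject":
                paper += " (reformatted)"     # step 6: back to step 1, at another journal
            else:
                paper += " (revised)"         # step 6: back to step 4, for re-review
        return "still in review"

    submit("our latest paper")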

This is how much of scientific communication works, and peer-reviewed pubs are how reputation is garnered and credit is assigned.

There are many concerns with this approach in the modern-day world, with all the possibilities enabled by the Intertubes - papers are often not available prior to publication, which delays scientific progress; journals are often unnecessarily expensive, with high profit margins; papers have to find big effects to be considered significant enough to even review; reviewers are often inexpert, usually uncompensated, rarely timely, and sometimes quite obnoxious; the reviews are rarely published, which means the reviewer perspective is rarely heard; reviews are not re-used, so when a paper transitions from journal to journal, the reviews must be redone; editors make incomprehensible decisions; the process is slow; the process is opaque; there's little accountability; and probably many other things that I am missing.

A precis might be that, so far, publishing and peer review have really failed to take deep advantage of the opportunities provided by quick, easy, instantaneous worldwide communication.

In recognition of this idea that there may be better ways to communicate science, I've indulged in lots of experiments in alt-publishing since becoming an Assistant Professor - I blog and tweet a lot, I sign my reviews, I (used to) post many of my reviews on my blog, I've been trying out F1000Research's open review system, most of our papers are written openly & blogged & preprinted, I've indulged in authorship experiments, and we've tried pre- and post-publication peer review in journal clubs and on my blog.

As a result of all of these experiments, I've learned a lot about how the system works and reacts to attempts to change it. It's mostly been positive (and I certainly have no complaints about my career trajectory). But the main conclusion I've reached is a tentative and disappointing one - the whole system is complicated, and is deeply rooted in the surprisingly conservative culture of science. And, because of this, I don't think there's a simple way to cut the Gordian Knot and move quickly to a new publication system.

In recent weeks, Michael Eisen has been very vocal about changing to a system of post-publication peer review (PPPR). (Also see this more detailed blog post.) This morning, I objected to some snark directed at Michael and other PPPR advocates, and this led to a decent amount of back and forth on Twitter (see this tweet and descendants). But, of course, Twitter isn't great for conveying complicated positions, so I thought I'd take to the blogs.

Below, I've tried to distill my opinions down to three main points. Let's see how it goes.

1. Practice in this area is still pretty shallow in science.

I don't think we have enough experience to make decisions yet. We need more experiments.

There's (IMO) legitimate confusion over questions like anonymity and platform. The question of metrics (qualitative or quantitative) rears its ugly head - when is something adequately peer reviewed, and when does something "count" in whatever way it needs to count? And how are we supposed to evaluate a deeply technical paper that's outside of our field? What if it has a mixture of good and bad reviews? How do we focus our attention on papers outside of our immediate sphere? Anyone who thinks hard about these things and reaches a simple conclusion is kidding themselves.

(Of course, the fun bit is that if you think hard about these things you quickly reach the conclusion that our current practice is horrible, too. But that doesn't mean new practices are automatically better.)

2. The Internet is not a friendly place, and it takes a lot of work to create well-behaved communities.

As my corner of science has struggled to embrace "online", I've watched scientists recapitulate the same mistakes that many open source and social media communities made over the last two decades.

The following is a good generalization, in my experience:

Any worthwhile community these days has active (if light) moderation, a code of conduct laying out the rules, and a number of active participants who set the tone and invite new people into the community. These communities take time to grow and cannot be created by fiat.

I think any proposal for PPPR needs to explicitly address the questions of methods of moderation, selection of moderators, expected conduct, and what to do with bad actors. It also needs to address community growth and sustainability. The latter questions are where most PPPR commenters focus, but the former questions are critical as well.

3. Scholarship in this area is still pretty young.

There are a lot of knee-jerk reactions (PPPR good! PPPR bad! I like blue!), but there isn't a lot of scholarship and discussion in this area, and most of what exists happens on blogs (which are largely invisible to 99% of scientists). There are only a few people thinking hard about the whole picture; I really like some of Cameron Neylon's posts, for example, but the fact that his work stands out so much is (to me) a bad sign. (I think that's a compliment to you and a diss on the field as a whole, Cameron ;)

Worse, I've been entirely spoiled by reading Gabriella Coleman's book on Anonymous, "Hacker, Hoaxer, Whistleblower, Spy". This kind of deep, immersive research and reporting on how technological communities operate is hard to find, and most of what I have found is ignorant of the new uses and opportunities of technology. I've found nothing equivalent on science. (Pointers VERY welcome!)

Concluding thoughts

At the end of the day, it's not clear to me that we will ever have an answer to Goodhart's Law -- "when a measure becomes a target, it ceases to be a good measure." There are tremendous pressures (livelihoods, reputation, funding) that will be placed on any system of peer review and publishing. I worry that any system we come up with will be even more easily perverted than the current system has been, and science (and scientists, and scientific progress) will suffer as a result.

Me? I'm going to continue experimenting, and talking with people, and seeing if I can identify and promulgate good practice from the bottom up. 'cause that's how I roll.

--titus

p.s. The reason I'm not posting reviews on my blog anymore has to do with time and energy - I've been overwhelmed for the last year or two. I think I need a better workflow for posting them that takes less of my focus.
