Tuesday, July 20, 2010

Do we still need pre-publication peer review?

A bit over a month ago Glyn Moody wrote a blog post arguing that the abundance of scientific publishing outlets removes the need for our current system of pre-publication peer review. The post sparked an interesting discussion here on FriendFeed.

Glyn Moody tells us that we now have:
"yet another case of a system that was originally founded to cope with scarcity - in this case of outlets for academic papers. Peer review was worth the cost of people's time because opportunities to publish were rare and valuable and needed husbanding carefully"

Since we have an endless capacity to publish information online, Moody argues that there is no longer a need to pre-select before publication. We can leave all that behind us and do post-publication peer review, distributed across all readers, using the sorts of article-level metrics that PLoS has been promoting.

More recently Duncan wrote another blog post with some information that I think is important for this discussion. He was trying to estimate how many articles have ever been published, and in the process he noted an interesting number: the rate at which articles are currently published per minute. PubMed keeps a table with the number of articles it has information on per year. I don't think the last couple of years are fully annotated, nor are the first decades that reliable, so I just plotted the totals between 1966 and 2007.

It is not surprising to see that the number of articles published per year is increasing; it probably matches our expectations. I personally feel like I never have enough time to keep up with the literature. We are currently over 700,000 papers per year. A search on PubMed for articles published in 2009 returns 848,856 papers: something like 1.6 papers per minute!
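That back-of-the-envelope rate is easy to check; a minimal Python sketch, using the 848,856 PubMed count quoted above:

```python
# Rough papers-per-minute estimate from the 2009 PubMed count quoted above.
papers_2009 = 848_856            # PubMed articles dated 2009 (figure from the post)
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

rate = papers_2009 / minutes_per_year
print(f"{rate:.1f} papers per minute")  # → 1.6 papers per minute
```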

So, although we have no scarcity of publishing outlets, we have a huge scarcity of attention. It is quite literally impossible to keep up with the current literature without some sophisticated filtering system. With all the imperfections of our current System (TM) of editorial control, subjective peer review, subjective impact evaluations, impact factors and so on, we must agree that we need a lot of help filtering through these many articles.

I have read some people arguing that we should be capable of reading papers ourselves and recognizing whether they are interesting or innovative. That is fine for the very narrow range of topics close to our own area of interest. I have PubMed queries for my topics of interest and I filter through these myself without relying (too much) on the journal a paper was published in. The problem is everything outside this extremely narrow range of topics, and the many papers that escape my queries. I want to be made aware of important new methods and new discoveries outside my narrow focus.
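Saved queries like these can also be scripted against NCBI's public E-utilities interface; a minimal sketch (the search term here is just an illustration, not one of my actual queries):

```python
from urllib.parse import urlencode

# NCBI E-utilities "esearch" endpoint, which returns matching PubMed IDs as XML.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, retmax=20):
    """Build an esearch URL for a PubMed query.

    Fetching the resulting URL (e.g. with urllib.request) returns XML
    listing up to `retmax` matching PubMed IDs.
    """
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical example query; a reader would substitute their own topics.
url = pubmed_search_url("protein kinase evolution")
print(url)
```

The returned IDs could then be checked against a local list of already-seen papers, which is essentially what a saved PubMed query does for you by hand.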

Moody and many others argue that we can do the filtering after publication through the aggregated actions of all readers. I totally agree: it should be possible to do the filtering after publication. It should be possible, but it is not in place yet. So, if we want to do away with the System, build a better system alongside it. Show that it works. I would pay for tools that recommend papers for me to read. In my mind, this is where today's publishers should be making their money: in tools that connect readers to what they want to read, not in content that should be free for anyone to read and re-use (open access).