What constitutes a minimal publishable unit in scientific publishing? The transition to online publishing and the proliferation of journals are creating a setting where almost anything can be published. Every week, spam emails beg us to submit our next research to some journal. Yes, I am looking at you, Bentham and Hindawi. At the same time, the idea of a post-publication peer review system also promotes an increase in the number of publications. With the success of PLoS ONE and its many clones, we are in for another large increase in the rate of scientific publishing. Publish-then-sort, as they say.
With all these outlets for publication and the pressure to build up your CV, it is normal that researchers try to slice their work into as many publishable units as possible. One very common trend in high-throughput research is to see two or three publications arising from the same work: the main paper for the dataset and biological findings, plus one or two offshoots that might include a database paper and/or a data-analysis methods paper. Besides these quasi-duplicated papers there are the real small bites, especially in bioinformatics research. You know, the ones you read and think must have taken no more than a few days to get done. So what is an acceptable publishable unit?
I mapped phosphorylation sites to ModBase models of S. cerevisiae proteins and just sent this tweet with a small fact about protein phosphosites and surface accessibility:
Should I add that tweet to my CV? The relationship is expected and probably already published with a smaller dataset, but I would bet that it would not take much more work to get a paper out of it. What is stopping us from adding trivial papers to the flood of publications? I don't have an actual answer to these questions. There are many interesting and insightful "small-bite" research papers that start from a very creative question that can be quickly addressed. It is also obvious that the amount of time and work spent on a problem is not proportional to the interest and merit of a piece of research. At the same time, it is very clear that the incentives in academia and publishing are currently aligned to increase the rate of publication. This increase is only a problem if we can't cope with it, so maybe instead of fighting against these aligned incentives we should be investing heavily in filtering tools.
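The kind of quick analysis behind that tweet, comparing the surface accessibility of phosphosites against the background of all residues, really is only a few lines of code. Here is a toy sketch with made-up relative solvent accessibility (RSA) values and hypothetical phosphosite positions; a real version would compute RSA from the structures themselves (e.g. running DSSP on the ModBase models) and use the mapped phosphosite data.

```python
# Toy sketch: do phosphosites sit on the protein surface more often than
# average residues? The RSA values and phosphosite positions below are
# invented for illustration; a real analysis would derive RSA from
# structures (e.g. DSSP on ModBase homology models).

rsa = {  # residue position -> relative solvent accessibility (0 to 1)
    10: 0.05, 23: 0.62, 45: 0.71, 58: 0.12,
    77: 0.55, 90: 0.80, 101: 0.30, 115: 0.68,
}
phosphosites = {23, 90, 115}  # hypothetical phosphorylated positions

def mean_rsa(positions, rsa):
    """Mean RSA over the positions that have a structural model."""
    vals = [rsa[p] for p in positions if p in rsa]
    return sum(vals) / len(vals)

site_rsa = mean_rsa(phosphosites, rsa)
background_rsa = mean_rsa(rsa.keys(), rsa)
print(f"phosphosites: {site_rsa:.2f}, background: {background_rsa:.2f}")
```

On a real dataset you would of course add a statistical test rather than eyeball two means, but the point stands: the distance from "dataset in hand" to "tweetable fact" can be a single afternoon.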