Wednesday, June 21, 2006

Journal policies on preprint servers

I mentioned in a previous post that it would be interesting to separate registration, which allows claims of precedence for a scholarly finding (the submission of a manuscript), from certification, which establishes the validity of a registered scholarly claim (the peer review process).

This can only happen if journals accept that a manuscript submitted to a preprint server is different from a peer-reviewed article and therefore should not be considered prior publication. So what do the journals currently say about preprint servers? I looked around the different policies, sent some emails and compiled this list:

Nature: yes but ...
Nature allows prior publication on recognised community preprint servers for review by other scientists in the field before formal submission to a journal. The details of the preprint server concerned and any accession numbers should be included in the cover letter accompanying submission of the manuscript to Nature. This policy does not extend to preprints available to the media or that are otherwise publicised before or during the submission and consideration process at Nature.


I enquired about this last part of their policy on the peer review forum and this was the response:
"We are aware that preprint servers such as ArXiv are available to the media, but as things stand we consider for publication, and publish, many papers that have been posted on it, and on other community preprint servers. As long as the authors have not actively sought out media coverage before submission and publication in Nature, we are happy to consider their work."


Nature Genetics/Nature Biotechnology: yes
(...)the presentation of results at scientific meetings (including the publication of abstracts) is acceptable, as is the deposition of unrefereed preprints in electronic archives.


PNAS: Yes!
"Preprints have a long and notable history in science, and it has been PNAS policy that they do not constitute prior publication. This is true whether an author hands copies of a manuscript to a few trusted colleagues or puts it on a publicly accessible web site for everyone to read, as is common now in parts of the physics community. The medium of distribution is not germane. A preprint is not considered a publication because it has not yet been formally reviewed and it is often not the final form of the paper. Indeed, a benefit of preprints is that feedback usually leads to an improved published paper or to no publication because of a revealed flaw. "


BMC Bioinformatics/BMC Biology/BMC Evolutionary Biology/BMC Genomics/BMC Genetics/Genome Biology: Yes
"Any manuscript or substantial parts of it submitted to the journal must not be under consideration by any other journal although it may have been deposited on a preprint server."


Molecular Systems Biology: Do you feel lucky?
"Molecular Systems Biology reserves the right not to publish material that has already been pre-published (either in electronic or other media)."


Genome Research: No
"Submitted manuscripts must not be posted on any web site and are subject to press embargo."


Science: Do you feel lucky?
"We will not consider any paper or component of a paper that has been published or is under consideration for publication elsewhere. Distribution on the Internet may be considered prior publication and may compromise the originality of the paper or submission. Please contact the editors with questions regarding allowable postings under this policy."


Cell: No?
"Manuscripts are considered with the understanding that no part of the work has been published previously in print or electronic format and the paper is not under consideration by another publication or electronic medium."


PLoS - No clear policy information on the site about this, but according to an email I got from PLoS they do consider for publication papers that have been submitted to preprint servers. I hope they will make this clear in the policies they have available online.

Bioinformatics, Molecular Biology and Evolution - ??
"Authors wishing to deposit their paper in public or institutional repositories may deposit a link that provides free access to the paper, but must stipulate that public availability be delayed until 12 months after first online publication in the journal"

I sent emails to both journals but only received an answer from MBE, directing me to this policy common to the journals of Oxford University Press.

In summary, most journals I checked will consider papers that have previously been submitted to preprint servers, so in the future I might consider submitting my own work to preprint servers before looking for a journal. Very few journals clearly refuse manuscripts that might be available in electronic form, but a good number either have no clear policy or reserve the right to reject papers that are available online.

Monday, June 19, 2006

Mendel's Garden and Science Online Seminars

For those interested in evolution and genetics this is a good day. The first issue of Mendel's Garden is out with lots of interesting links. I particularly liked RPM's post on evolution of regulatory regions. I still think that evo-devo should focus a bit more on changes in protein interaction networks but more about that one of these days (hopefully :).

On a related note, Science started a series of online seminars with a primer on "Examining Natural Selection in Humans". This is a flash presentation with voice-overs from the authors of a recent Science review on the same subject. I like this idea much more than the podcasts. I am not a big fan of podcasts because it is much faster to scan a text than to hear someone read it to you. At least with images there is more information and more appeal to spend some minutes listening to a presentation. The only thing I have against this Science Online Seminars initiative is that there is no RSS feed (I hope it is just a matter of time).

Friday, June 16, 2006

Bio::Blogs announcement

Bio::Blogs is a blog carnival covering all bioinformatics and computational biology subjects. It is scheduled to be monthly, with each edition coming out on the first day of the month; the deadline for submissions is the end of the month. Submissions for the next edition of Bio::Blogs and offers to host future editions can be sent to:

I will be hosting the first issue of Bio::Blogs here and there will be a homepage to keep track of all of the editions.

For discussions related to Bio::Blogs, visit the Nodalpoint forum entry.

Wednesday, June 14, 2006

SB2.0 webcast and other links

If you missed the Synthetic Biology 2.0 conference you can now watch the webcast here (via MainlyMartian).

The Nature tech team over at Nascent continues its productive stream of new products, including the release of Nature Network Boston and a new release of the Open Text Mining Interface. They even set up a webpage for us to keep up with all the activity here. They really look like a research group by now :) I wonder what would happen if they tried to publish some of this research... Open Text Mining Interface, published by Nature in journal X.


Monday, June 12, 2006

PLoS blogs

Liz Allen and Chris Surridge just kicked off the new PLoS blogs. According to Liz the blogs will be used to discuss their "vision for scientific communication, with all of its potentials and obstacles". I thank both of them for the nice links to this blog :) and for engaging in conversation.

Chris Surridge details in his first post how news of PLoS One has been spreading through the blogs. I think this only happened because the ideas behind ONE do strike a chord with bloggers and I really hope their efforts are met with success and more people engage in scientific discussion and collaboration.
Science blog carnivals

What is a blog carnival? In my opinion a blog carnival is just a meta-blog, a link aggregation supervised by an editor. Carnivals have been around for some time and there are already some conventions about what to expect from one. You can read this nice post on Science and Politics to get a better understanding of blog carnivals.

Here is a short summary I found on this FAQ:
Blog Carnivals typically collect together links pointing to blog articles on a particular topic. A Blog Carnival is like a magazine. It has a title, a topic, editors, contributors, and an audience. Editions of the carnival typically come out on a regular basis (e.g. every monday, or on the first of the month). Each edition is a special blog article that consists of links to all the contributions that have been submitted, often with the editors opinions or remarks.


There are of course science carnivals, and I would say that their numbers are increasing as more people join the science blogosphere. To my knowledge (please correct me :) the first scientific blog carnival was the Tangled Bank, which I think started on the 21st of April 2004 and is still up and running.

These carnivals could also be seen as a path of certification (as discussed in the previous post). The rotating editor reviews submissions and bundles some of them together. This should guarantee that the carnival has the best of what has been posted on the subject in the recent past. The authors gain the attention of anyone interested in the carnival, and the readers get supposedly good-quality posts on the subject. With time, and if there are more blog posts than carnivals, we will likely see some carnivals gaining reputation.

Maybe one day having one of your discovery posts appear in one of the carnivals will be the equivalent of today having a paper published in a top journal.

With that said, why don't we start a computational biology/bioinformatics carnival? :) There might not be enough people for it, but we can make it monthly or something like that. Any suggestions for a name?

Thursday, June 08, 2006

The peer review trial

The day after finding out about PLoS One I saw the announcement for the Nature peer review trial. For the next couple of months any author submitting to Nature can opt to go through a parallel process of open peer review. Nature is also promoting discussion of the issue online in a forum where anyone can comment. You can also track the discussion going on around the web through Connotea, under the tag "peer review trial", or under the "peer review" tag in Postgenomic.

I really enjoyed reading this opinion on "Rethinking Scholarly Communication", summarized in one of the Nature articles. Briefly, the authors first describe (following Roosendaal and Geurts) the required functions of any system of scholarly communication:
* Registration, which allows claims of precedence for a scholarly finding.
* Certification, which establishes the validity of a registered scholarly claim.
* Awareness, which allows actors in the scholarly system to remain aware of new claims and findings.
* Archiving, which preserves the scholarly record over time.
* Rewarding, which rewards actors for their performance in the communication system based on metrics derived from that system.

The authors then try to show that it is possible to build a science communication system where all these functions are not centered in the journal, but are separated in different entities.

This would speed up science communication. There is currently a significant delay between submitting a communication and having it accessible to others, because all the functions are centered in the journals and the work is only made available after certification (peer review).

Separating registration from certification also has the potential benefit of allowing parallel certifications to be explored. Manuscripts deposited in preprint servers can be evaluated by the traditional peer-review process in journals, but on top of this there is also the possibility of exploring other ways of certifying the work. The authors give the example of Citebase, but blog aggregation sites like Postgenomic could also provide independent measures of the interest in a communication.

More generally, and maybe going a bit off-topic, this reminded me of the correlation between modularity and complexity in biology. By dividing a process into separate and independent modules you allow for exploration of novelty without compromising the system. The process is still free to go from start to end in the traditional way, but new subsystems can be created to compete with some of the modules.

For me this discussion is relevant for the whole scientific process, not just communication. New web technologies lower the costs of establishing collaborations and should therefore ease the recruitment of the resources required to tackle a problem. Because people are better at different tasks, it does make sense to increase the modularity of the scientific process.


Monday, June 05, 2006

PLoS One

There is an article in Wired about open access in scientific publishing. It focuses on the efforts of the Public Library of Science (PLoS) to make content freely available by transferring the costs of publication to the authors. What actually caught my attention was this little paragraph:

The success of the top two PLoS journals has led to the birth of four more modest ones aimed at specific fields: clinical trials, computational biology, genetics, and pathogens. And this summer, Varmus and his colleagues will launch PLoS One, a paperless journal that will publish online any paper that evaluators deem “scientifically legitimate.” Each article will generate a thread for comment and review. Great papers will be recognized by the discussion they generate, and bad ones will fade away.

The emphasis is mine. I went snooping around for the upcoming PLoS One and found a page to subscribe to a mailing list. It has a curious banner with the subtitle "open access 2.0".



I found some links in the source code that got me to the prototype webpage. It sounds exactly like what a lot of people have been pushing for: rapid scientific communication, community peer review, continuous revision of the paper (they call it interactive papers) and open access. This will be hard to implement, but if successful it will do much to bring more transparency to the scientific process and increase cooperation between scientists.

There is also something about the name PLoS ONE. They are really betting a lot on this launch if they are calling it ONE. It implicitly states that ONE will be the flagship of PLoS, where any paper (not just Biology) can be published.




Wednesday, May 31, 2006

Bringing democracy to the net

Democracy is most often thought of as the opposite of totalitarianism; in this case I mean democracy as opposed to anarchy. Some people are raising their voices against the trend of collective intelligence/wisdom of the crowds that has been the hype of the net for the past few years. Wikipedia is the crown jewel of this trend of empowering people for the common good, and probably as a result of the project's visibility it has been the one to take the heat from the backlash.

Is Wikipedia dead, as Nicholas Carr suggests in his blog? His provocative title was flame bait, but he does call attention to some interesting things happening at Wikipedia. Wikipedia is not dead, it is just changing. It has to change to cope with the increase in visibility and vandalism, and to deal with situations where no real consensus is possible.
The system is evolving by restricting anonymous posting and allowing editors to apply temporary editing restrictions to some pages. It is becoming more bureaucratic in nature, with mechanisms to deal with disputes and discord. What Nicholas Carr says is dead is the ideal that anyone can edit anything in Wikipedia, and I would say this is actually good news.

Following his post on the death of Wikipedia, Carr points to an essay by Jaron Lanier entitled Digital Maoism. It is a bit long but I highly recommend it.
Some quotes from the text:
"Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing projects. "


Sites like Wikipedia are important online experiments. They are trying to develop the tools that allow useful work to emerge from millions of very small contributions. I think this will have to go through some form of representative democracy. We still have to work out ways to establish the governing body in these internet systems: essentially, to whom we decide to entrust a particular task or realm of knowledge. For this we will need better ways to define identity online and to establish trust relationships.

Further reading:
Wiki-truth

Friday, May 26, 2006

The Human Puppet (2)

In November I rambled about a possible sci-fi scenario: a person giving away their will to be directed by the masses on the internet. A vessel for the "collective intelligence". A voluntary and extreme reality show.

Well, so much for sci-fi: you can participate in it in about 19 days. Via TechCrunch I found this site:

Kieran Vogel will make Internet television history when he becomes the first person to give total control of his life to the Internet.
(...)
Through an interactive media platform Kieran will live by the decisions the internet decides such as:

# What time he wakes up
# What he wears
# What he eats
# Who he dates
# What he watches


I get a visceral negative response to this. Although this is just a reality show and it is all going to happen inside a house, I think it will be important to keep in mind. In the future, technology will make the web even more pervasive than it is today, and there are scenarios along the lines of this human puppet idea that could have negative consequences.
I guess what I am thinking is that the same technologies that help us collaborate can also be used to control (sounds a bit obvious). In the end the only difference is in how much the people involved want to (or can) exercise their will power.

Thursday, May 25, 2006

Using viral memes to request computer time

Every time we direct the browser somewhere, dedicating our attention, some computer processing time is used to display the page. This includes a lot of client-side processing, like all the JavaScript in that nice-looking AJAX stuff. What if we could harvest some of this processing power to solve very small tasks, something like grid computing?
How would this work? There could be a video server that would allow me to put a video on my blog (like Google Video), or a simple game, or anything else that people would enjoy and spend a little time on. During this time a package would be downloaded from the same server, some processing done on the client side, and a result sent back. If people enjoy the video/game/whatever and it goes viral, then it spreads all over the blogs, and every person dedicating their attention to it is contributing computer power to solve some task. Maybe this could work as an alternative to advertising? Good content would be traded for computer power. To compare, Sun is selling computer power in the US for 1 dollar an hour. Of course, this type of very small-scale grid processing would be worth much less.
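To make the idea concrete, here is a toy sketch of the work-unit cycle described above. A real version would run as JavaScript in the visitor's browser; this Python version, with made-up function names and a sum-of-squares workload, just illustrates the split/process/combine protocol:

```python
# Toy sketch of the "content for compute" idea: the server splits a job
# into small work units, each visitor's browser processes one unit while
# they watch the video, and the server combines the returned results.

def split_task(numbers, chunk_size):
    """Server side: split a big job into small work units."""
    return [numbers[i:i + chunk_size]
            for i in range(0, len(numbers), chunk_size)]

def process_unit(unit):
    """Client side: the small computation done during the visit."""
    return sum(x * x for x in unit)  # toy workload: sum of squares

def combine(partial_results):
    """Server side: merge the partial results sent back by visitors."""
    return sum(partial_results)

units = split_task(list(range(10)), chunk_size=3)
answer = combine(process_unit(u) for u in units)
print(answer)  # → 285, the sum of squares of 0..9
```

The interesting design question is fault tolerance: visitors close the page mid-computation, so a real server would have to hand out the same unit to several visitors and accept the first (or majority) answer.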


Wednesday, May 24, 2006

Conference blogging and SB2.0

In case you missed the Synthetic Biology 2.0 meeting and want a quick summary of what happened there, you can take a look at some blogs. There were at least four bloggers at the conference. Oliver Morton (chief news and features editor of Nature) has a series of posts in his old blog. Rob Carlson described how he and Drew Endy were calling the field intentional biology. Alex Mallet from Drew Endy's lab has a quick summary of the meeting, and finally Mackenzie's cis-action has by far the best coverage, with lots more to read.

I hope they put the recorded talks up on the site, since I missed a lot of interesting things during the live webcast.

On the third day of the meeting (which was not available in the live webcast) there was a discussion about possible self-regulation of the field (as at the 1975 Asilomar meeting). According to an article in NewScientist, the attending researchers decided against self-regulation measures.


Saturday, May 20, 2006

Synthetic Biology & best practices

There is a Synthetic Biology conference going on in Berkeley (webcast here) and they are going to discuss best practices on one of the days. There is a document online with an outline of some of the subjects up for discussion. In reaction to this, a group of organizations published an open letter to the people attending the meeting.
From the text:
We are writing to express our deep concerns about the rapidly developing field of Synthetic Biology that is attempting to create novel life forms and artificial living systems. We believe that this potentially powerful technology is being developed without proper societal debate concerning socio-economic, security, health, environmental and human rights implications. We are alarmed that synthetic biologists meeting this weekend intend to vote on a scheme of voluntary self-regulation without consulting or involving broader social groups. We urge you to withdraw these self-governance proposals and participate in a process of open and inclusive oversight of this technology.

Forms of self-regulation are not incompatible with open discussion with the broader society, nor with state regulation. Do we even need regulation at this point?


The internet and the study of human intelligence

I started reading a book on machine learning methods last night and my mind floated away to thinking about the internet and artificial intelligence (yes the book is a bit boring :).
Anyway, one thing that I thought about was how the internet might become (or is already) a very good place to study (human) intelligence. Some people are very transparent on the net and if anything the trend is for people to start sharing their lives or at least their view of the world earlier. So it is possible to get an idea of what someone is exposed to, what people read, films they see, some of their life experiences, etc. In some sense you can access someone's input in life.
On the other hand, you can also read this person's opinions when presented with some content: person X, with known past experiences Y, was exposed to Z and reacted in this way. With this information we could probably learn a lot about human thought processes.


A little bit of this a little bit of that ...

What do you get when you mix humans/sex/religion/evolution? A big media hype.
Also, given that a big portion of the scientists currently blogging work on evolution, you also get a lot of buzzing in the science blogosphere. No wonder, then, that this paper reached the top spot in Postgenomic.

This one is a very good example of the usefulness of blogs and of why we should really promote more science communication online. The paper was released as an advance online publication, and a few days later you could already read a lot of opinions about it. It is not just the blog entries but also all the comments on those blog posts. As a result, we get not only the results and discussion from the paper but also the opinions of whoever decided to participate in the discussion.

Wednesday, May 17, 2006

Postgenomic greasemonkey script (2)

I have posted the Postgenomic script I mentioned in the previous post on the Nodalpoint wiki page. There are some instructions there on how to get it running. If you have problems or suggestions, leave some comments here or in the forum at Nodalpoint. Right now it is only set up to work with the Nature journals, but it should be possible to make it work with more.


Saturday, May 13, 2006

Postgenomics script for Firefox

I am playing around with Greasemonkey to try to add links to Postgenomic on journal websites. The basic idea is to search the webpage you are viewing (a Nature website, for example) for papers that have been talked about in blogs and are tracked by Postgenomic. When one is found, a little picture is added with a link to the Postgenomic page discussing the paper.
The result is something like this (in the case of a table of contents):


Or like this when viewing the paper itself:


In another journal:


I am more comfortable with Perl, but anyway I think it works as a proof of principle. If Stew agrees I'll probably post the script on Nodalpoint for people to improve or just try out.
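For the curious, the matching step could be outlined like this. This is Python rather than the Greasemonkey script's JavaScript, and the DOI, the URL scheme and the function name are placeholders of my own, not what the actual script does:

```python
import re

# Illustrative outline of the matching step: find DOI-like strings in a
# journal page and build a link for every one that Postgenomic tracks.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

def postgenomic_links(page_html, tracked_dois):
    """Map each tracked DOI found in the page to a (hypothetical) link."""
    links = {}
    for doi in DOI_PATTERN.findall(page_html):
        if doi in tracked_dois:
            # URL scheme is purely illustrative
            links[doi] = "http://postgenomic.com/paper.php?doi=" + doi
    return links

page = '<a href="http://dx.doi.org/10.1038/nbt0509-123">paper</a>'
print(postgenomic_links(page, {"10.1038/nbt0509-123"}))
```

In the real script the final step would be DOM manipulation: inserting the little linked image next to each matched paper instead of returning a dictionary.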

Thursday, May 11, 2006

Google Trends and Co-Op

There are some new Google services up and running and buzzing around the blogs today. I have only briefly taken a look at them.
Google Trends is like Google Finance for any search trend that you want to analyze. Very useful for someone wanting to waste time instead of doing productive work ;). You can compare the search and news volume for different terms like:

It gets its data from all Google searches, so it does not really reflect trends within the scientific community.

The other new tool out yesterday is Google Co-op, the start of social search for Google. It looks as obscure as Google Base, so I can again try to make some weird connection to how researchers might use it :). Google Co-op looks like a way for users to further personalize their search: users can subscribe to providers that offer their knowledge/guidance to shape some of the results you see. If you search, for example, for alzheimer's, you should see at the top of the results some refinements you can make, such as looking only at treatment-related results. This is possible because a list of contributors have labeled a lot of content according to some rules.

Anyone can create a directory and start labeling content following an XML schema that describes the "context". So anyone, or (more likely) any group of people, can add metadata to content and have it available in Google. The obvious application for science would be to have metadata on scientific publications available. Maybe getting Connotea and CiteULike data into a Google directory would be useful, for example. These sites can still go on developing their niche-specific tools, but we could benefit from having a lot of the tagging metadata available in Google.


Wednesday, May 10, 2006

Nature Protocols

Nature clearly continues to be the most innovative of the publishing houses, in my view. A new website, called Nature Protocols, is up in beta phase:

Nature Protocols is a new online web resource for laboratory protocols. The site, currently in beta phase, will contain high quality, peer-reviewed protocols commissioned by the Nature Protocols Editorial team and will also publish content posted onto the site by the community

They accept different types of content:
* Peer-reviewed protocols
* Protocols related to primary research papers in Nature journals
* Company Protocols and Application notes
* Non peer-reviewed (Community) protocols

There are already several protocol websites out there, so what is the point? For Nature I guess it is obvious. Just like most portal websites, they are creating a very good place to put ads. I am sure that all these protocols will have links to related Nature products and a lot of ads. The second advantage for Nature is the stickiness of the service: more people will come back to the website to look for protocols and stumble onto Nature content, increasing the visibility of the journals and their impact.

A little detail is that, as they say above, the protocols from papers published in the Nature journals will be made available on the website. On one hand this sounds great, because the methods sections in papers are usually so small (due to publication restrictions) that they are most of the time incredibly hard to decipher (and usually pushed into supplementary materials). On the other hand, this will further increase the tendency to hide away from the paper the really important parts of the research, the results and how they were obtained (the methods), and to show only the subjective interpretations of the authors.
This reminds me of a recent editorial by Gregory A Petsko in Genome Biology (sub only). Here is how it states the problem :) - "The tendency to marginalize the methods is threatening to turn papers in journals like Nature and Science into glorified press releases."

For scientists this will be a very useful resource. Nature has a lot of appeal and will be able to quickly create a lot of really good content by inviting experienced scientists to write up their protocols, full of tips and tricks accumulated over years of experience. This is the easy part for science portals: the content comes free. If somebody went to Yahoo and told them that scientists actually pay scientific journals to please, please show our created content, they would probably laugh :). Yahoo/MSN and other web portals have to pay people to create the content on their sites.

web2.0@EMBL

The EMBL Centre for Computational Biology has announced a series of talks on novel concepts and easy-to-use web tools for biologists. So far the following talks are scheduled:

Session 1 - Using new web concepts for more efficient research - an introduction for the less-techy crowd
Time/place: Tue, May 16th, 2006; 14:30; Small Operon

This one, I think, will introduce the concepts around what is called web2.0 and the potential impact these might have for researchers. I am really curious to see how big the "less-techy crowd" will really be :).

The following sessions are a bit more specific, dealing with particular problems we might have in our activities and how some of the recent web technologies can help us deal with them.

Session 2 - Information overflow? Stay tuned with a click (May 23rd, 2006; 14:30;)
Session 3 - Tags: simply organize and share links and references with keywords (May 30th, 2006; 14:30)
Session 4 - Stop emailing huge files: How to jointly edit manuscripts and share data (June 6th, 2006; 14:30;)
All in the Small Operon, here at EMBL Heidelberg

I commend the efforts of the EMBL CCB and I hope that a lot of people turn up. Let's see if the open collaborative ideas come up in the discussions. If you are in the neighborhood and interested, come by and join the discussion (map).
