Friday, July 28, 2006

Binding specificity and complexity

There is a paper out in PNAS about the distribution of free energies of binding for the yeast two-hybrid datasets. Although I still have to dig into the model they used, I found the result quite interesting: they observe that the average binding energy decreases with cellular complexity.
They have some sentences in there that made my hair stand on end, like: "more evolved organisms have weaker binary protein-protein binding". What does "more evolved" mean? Also, in figure 4 of the paper they plot mu (a parameter related to the average binding energy) against divergence times without saying which species they are comparing.

This result fits well with another paper published a while ago in PLoS Comp Bio about protein family expansions and complexity. Christine Vogel and Cyrus Chothia show (among other things) which protein domain expansions best correlate with complexity. They used cell numbers as a proxy for species complexity. If you look at the top of the list (in table 2) you can find several of the peptide-binding domains, known to be of low specificity, given that their ligands do not require a folded structure to interact.

What I would like to know is the correlation between binding affinity and binding specificity. For example, SH2 domains bind much more tightly than SH3 domains, although neither is a very specific binding domain. Maybe, in general, lower average binding affinity corresponds to lower average binding specificity.

Why would complexity correlate with binding specificity? I think one important factor is cellular size. An increase in size has allowed for the exploration of spatial factors in determining cellular response. Specificity of binding in the real cell (not in binary assays) is also determined by localization at subcellular structures.

One practical reminder coming out of this is that even if we had the perfect method to determine biophysical binding specificity, we would still get poor results if we could not predict all the other components that determine whether two proteins bind (e.g. localization, expression).

TOPAZ and PLoS ONE

According to the PLoS blog the new PLoS ONE will be accepting submissions soon. I guess they will at the same time release the TOPAZ system that will likely be available here.

"TOPAZ will serve the rapidly growing demand for sophisticated tools and resources to read and use the scientific and medical literature, allowing scholarly publishers, societies, universities, and research communities to publish open access journals economically and efficiently."


Sunday, July 23, 2006

Opening up the scientific process

During my stay at the EMBL over the past couple of years, it has already happened more than once that people I know have been scooped. This simply means that all the hard work they had been doing was already done by someone else who managed to publish it a bit sooner, which severely limited the usefulness of their discoveries. Very few journals are interested in publishing research that merely confirms other published results.

From talking to other people, I have come to accept that scooping is a part of science. There is no other possible conclusion from this but to accept that the scientific process is very flawed. We should not be wasting resources literally racing each other to be the first person to discover something. When you try to explain to non-scientists that it is very common to have 3 or 4 labs doing exactly the same thing, they usually have a hard time reconciling this with their perception of science as the pursuit of knowledge through collaboration.
I am probably naïve, given that I have only been doing this for a couple of years, but I do not mean to say that we do not need competition in science. We need to keep each other in check precisely because a lack of competition leads to a waste of resources. I would argue, however, that right now the scientific process is creating competition at the wrong levels, decreasing potential productivity.

So how do we work and what do we aim to produce? We are in the business of producing manuscripts accepted in peer-reviewed journals. For there to be competition there must be a scarce element; in our case the limited element is the attention of fellow scientists. Given that scientists' attention is scarce, we all compete for the limited amount of time that researchers have to read papers every week. So the good news is that the system tends to give credit to high-quality manuscripts. The consequence is that research projects and ongoing results should be kept absolutely confidential and everything should be focused on getting that Science or Nature paper.
I found a beautiful drawing of an iceberg (used here with permission from the author, David Fierstein) that I think illustrates the problem we have today by focusing the competition on the manuscripts. Only a small fraction of the research process is in view.


Wouldn’t it be great if we could find a way to make most of the scientific process public while at the same time guaranteeing some level of competition? What I think we could do is define steps in the process that are independent and can work as modules. Here I mean module in the sense of a black box with inputs and outputs, which we wire together without caring too much about how the internals of the boxes work. I have been thinking about these modules these days and here is a first draft of what this could look like:


The data streams would be, as the name suggests, a public view of the data being produced by a group or individual researcher. Blogs are a simple way this could be achieved today (see for example this blog). The manuscripts could be built in wikis by selecting relevant data bits from the streams that fit together to answer an interesting question. This is where I propose the competition would come in: only those bits of data that best answer the question would be used. The authors of the manuscript would be all those who contributed data bits or otherwise contributed to the manuscript's creation. In this way all the data would be public and a healthy level of competition would still be maintained.
The rest of the process could go on in public view. Versions of the manuscript deemed stable could be deposited in a preprint server, and comments and peer review would commence. Later there could still be another step of competition to get the paper formally accepted in a journal.

One advantage of this is that it is not a revolution of the scientific process. People could still work in their normal research environment closed within their research groups. This is just a model of how we could extend the system to make it mostly open and public. The technologies are all here: structured blogging for the data streams, wikis for the manuscripts and online communities to drive the research agendas.

I think it is important to view the scientific process as a group of modules also because it later allows us to think of different ways to wire the modules together. Increasing the modularity should permit us to innovate; for example, we can later think of ways that the data streams are brought together to answer questions, etc.


Friday, July 21, 2006

Bio::Blogs #2 - call for submissions

(via Nodalpoint) This is just a quick reminder that we have 10 days to submit links to the second edition of Bio::Blogs. You can send your suggestions to bioblogs {at} gmail.com. Also if you wish to host future editions send in a quick email with your name and link to your blog to the same email address.

Monday, July 17, 2006

Conference on Systems Biology of Mammalian Cells

There was a Systems Biology conference here in Heidelberg last week. For those interested, the recorded talks are now available on their site. There are a lot of interesting things about the behavior of network motifs and about network modeling.

Sunday, July 16, 2006

Blog changes

Notes from the Biomass is back again on a new website. I was cleaning up the links on the blog to better reflect what I am actually reading, and while I was at it I changed the template. It looks better in IE than in Firefox, but I really don't have the time or the ability to work on a good design.

Tuesday, July 11, 2006

Defrag my life

I am taking the week to visit my former lab in Aveiro, Portugal where I spent one year trying to understand how a codon reassignment occurred in the evolutionary past of C. albicans. This was where I first got into Perl and the wonders of comparative genomics.

It brings back a lot of memories every time I come back to one of the cities I lived in before (6 cities and counting) and I sometimes wonder if it is really necessary for scientists to live such fragmented lives.

reboot, restart, new program.

The regular programming will return soon :).

Tuesday, July 04, 2006

Re: The ninth wave

I usually read Gregory A. Petsko's comments and editorials in Genome Biology, which are unfortunately only available with a subscription. In the latest edition of the journal he wrote a comment entitled "The ninth wave". I have lived most of my life 10 min away from the Atlantic Ocean and, at least to my recollection, we used to talk about the 7th wave, not the ninth, as the biggest wave in a set of waves, but that is not the point :).
Petsko argues that the increase of free access to information on the web and of computer savvy investigators presents a clear danger of a flood of useless correlations hinting at potential discoveries never followed by careful experimental work:
Computational analysis of someone else's data, on the other hand, always produces results, and all too often no one but the cognoscenti can tell if these results mean anything.

This reminded me of a review I read recently from Andy Clark (via Evolgen). Andy Clark talks about the huge increase of researchers in comparative genomics:
...one of its worst disasters is that it has created a hoard of genomics investigators who think that evolutionary biology is just fun, speculative story telling. Sadly, much of the scientific publication industry seems to respond to the herd as much as it does to scientific rigor, and so we have a bit of a mess on our hands.

I have a feeling that this is the opinion of a lot of researchers. There is a generalized perception that people working in computational biology have it easy: sitting at the computer all day, inventing correlations with other people's data.
Maybe some people feel this way because it is relatively fast to go from idea to result using computers, if you have clearly in mind what you want to test, while experimental work certainly takes longer.
Why should I redo the experimental work if I can answer a question that I think is interesting using available information? I should be criticized if I overinterpret the results, if the methods used are not appropriate, or if the question is not relevant, but I should not be criticized for looking for an answer the fastest way I can.

Monday, July 03, 2006

Journal policies on preprint servers (2)

Recently I did a survey on the different journal policies regarding preprint servers. I am interested in this because I feel it is important to separate the peer review process from the time-stamping (submission) of a scientific communication. Establishing this separation allows for exploration of alternative and parallel ways of determining the value of a scientific communication. This is only possible if journals accept manuscripts previously deposited in pre-print servers.
Today I received the answer from Bioinformatics:
"The Executive Editors have advised that we will allow authors to submit manuscripts to a preprint archive."


If you also think that this model, already well established in physics and maths, is useful, you can send some emails to your journals of interest to enquire about their policies. If enough authors voice their interest, more journals will accept manuscripts from preprint servers.
I think we are now lacking a biomedical preprint server. The Genome Biology journal served until early this year also as a preprint server, but they discontinued this practice. Maybe arXiv could expand to include biomedical manuscripts (they already accept quantitative biology manuscripts).

Saturday, July 01, 2006

Bio::Blogs # 1

An editorial of sorts

Welcome to the first edition of Bio::Blogs, a blog carnival covering all subjects related to bioinformatics and computational biology. The main objectives of Bio::Blogs are, in my opinion, to help knit together the bioinformatics blogging community and to showcase interesting posts on these subjects to other communities. Hopefully it will serve as an incentive for other people in the area to start their own blogs and join in the conversation.

I get to host this edition and I decided to format it more or less like a journal, with three sections: 1) Conference reports; 2) Primers and reviews; 3) Blog articles. I think this also reflects my opinion on what could be a future role of these carnivals: to serve as a path of certification of scientific content parallel to the current scientific journals.

Given that there were so few submissions I added some links myself. Hopefully in the next editions we can get some more publicity and participation :). Talking about future editions, the second edition of Bio::Blogs will be hosted by Neil and we have now a full month to make something up in our blogs and submit the link to bioblogs{at}gmail{dot}com.


Conference Reports
I selected a blog post from Alf describing what was discussed in a recent conference dedicated to Data Webs. There is a lot of information about potential ways to deal with the increase of data submitted all over the web in many different formats. I remember seeing the advert for this conference and being intrigued to see Philip Bourne, the editor-in-chief of PLoS Computational Biology, among the speakers. I see now that he is involved in the publishing tools under development at PLoS.

Primers & Reviews
Stew from Flags and Lollipops sent in this link to a review on the use of bioinformatics to hunt for disease related genes. He highlights a series of tools and methods that can be used to prioritize candidate genes for experimental validation.

Neil, the next host of Bio::Blogs spent some time with the BioPerl package called Bio::Graphics. He dedicated a blog entry to explain how to create graphics for your results with this package. He gives examples on how to make graphic representations of sequences mapped with blast hits and phosphorylation sites.

Chris, a regular around Nodalpoint, nominated a post on Evolgen:
Evolgen has an interesting post about the relative importance (and interest in) cis and trans acting genetic variation in evo-devo. A lot of (computational) energy has thus far been expended in finding regulatory motifs close to genes (ie, within promoter regions), and conserved elements in non-coding sequences. Rather predictably, cis-acting variants have received the lion's share of attention, probably because they present a more tractable problem. The post deals with work from the evo-devo and comparative genomics fields, but these problems have also been attacked from within-species variation perspectives, particularly the genetics of gene expression. But that's next month's post...

Blog articles
I get to link to my last post. I present some very preliminary results on the influence of protein age on the likelihood of protein-protein interactions. Have fun pointing out all the likely flaws in reasoning and hopefully useful ways to build on it.

To wrap things up here is an announcement by Pierre of a possibly useful applet implementing a Genetic Programming Algorithm. If you ever wanted to play around with genetic programming you can have a go with his applet.


That is it for this month. It is a short Bio::Blogs but I hope you find some of these things useful. Don’t forget to submit links for the next edition before the end of July. Neil will take up the editorial role for #2 on his blog. If you know of a nice symbol that we might use for Bio::Blogs, send it in as well.

The likelihood that two proteins interact might depend on the proteins' age

Abstract
It has been previously shown[1] that S. cerevisiae proteins preferentially interact with proteins of the same estimated likely time of origin. Using a similar approach but focusing on a less broad evolutionary time span I observed that the likelihood for protein interactions depends on the proteins’ age.

Methods and Results
Protein-protein interactions for S. cerevisiae were obtained from BIND, excluding any interactions derived from protein complexes. I considered only proteins that were represented in this interactome (i.e. with one or more interactions).
In order to create groups of S. cerevisiae proteins with different average ages, I used the reciprocal best BLAST hit method to determine the most likely ortholog in eleven other yeast species (see figure 1 for species names).

S. cerevisiae proteins with orthologs in all species were considered to be ancestral proteins and were grouped into group A. To obtain groups of proteins with decreasing average age of origin, S. cerevisiae proteins were selected according to the absence of identifiable orthologs in other species (see figure 1). It is important to note that these groups of decreasing average protein age are overlapping: group F is contained in E, both are contained in D, and so forth. I could have selected non-overlapping groups of proteins with decreasing time of origin, but the lower numbers obtained might at a later stage make statistical analysis more difficult.
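The ortholog calls and the nested grouping could be sketched roughly as follows. This is a minimal sketch in Python, not the actual pipeline: the best-hit dictionaries and the `has_ortholog` predicate are illustrative stand-ins for parsed BLAST results.

```python
# Sketch of the reciprocal best BLAST hit (RBH) ortholog call and the
# nested age groups. All names and data structures are illustrative.

def reciprocal_best_hits(best_hit_ab, best_hit_ba):
    """Pairs (a, b) where a's best hit in species B is b and, reciprocally,
    b's best hit in species A is a."""
    return {a: b for a, b in best_hit_ab.items() if best_hit_ba.get(b) == a}

def age_groups(proteins, has_ortholog, species_by_distance):
    """Nested groups of decreasing average age.

    species_by_distance lists the comparison species from most to least
    distant. Group k holds the proteins with no detectable ortholog in the
    k most distant species, so each group contains the next.
    """
    groups = []
    for k in range(1, len(species_by_distance) + 1):
        absent_in = species_by_distance[:k]
        groups.append({p for p in proteins
                       if not any(has_ortholog(p, s) for s in absent_in)})
    return groups
```

Because group k+1 requires absence in a superset of the species required for group k, the nesting described in the text (F contained in E, both in D) falls out of the construction.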
The phylogenetic tree in figure 1 (obtained with MEGA 3.1) is a neighbour-joining tree obtained by concatenating 10 proteins from the ancestral group A. I did it mostly to avoid copyrighted images and to have a graphical representation of the species divergence.
To determine the effect of protein age on the likelihood of interaction with ancestral proteins I counted the number of interactions between group A and the other groups of proteins (see table 1).

From the data it is possible to observe that protein interactions within groups (within group A) are more likely than protein interactions between groups. This is in agreement with the results of Qin et al. [1]. Also, the likelihood for a protein to interact with an ancestral protein depends on the age of that protein: this simple analysis suggests that the younger a protein is, the less likely it is to interact with an ancestral protein.
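The counting behind table 1 can be sketched as the fraction of possible cross-group pairs that are observed to interact. This is a sketch with hypothetical names; the real interaction set would come from the BIND download described in the methods.

```python
from itertools import product

def interaction_fraction(group_a, group_x, interactions):
    """Fraction of possible A-X protein pairs observed to interact.

    `interactions` is a set of frozensets {p, q} (undirected pairs).
    For A vs. A the same unordered pair is counted twice in both the
    numerator and the denominator, so the fraction is unaffected.
    """
    pairs = [(a, x) for a, x in product(group_a, group_x) if a != x]
    hits = sum(frozenset((a, x)) in interactions for a, x in pairs)
    return hits / len(pairs) if pairs else 0.0
```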
One possible use of this observation, if it holds up to further scrutiny, would be to include the proteins' likely time of origin as information in protein-protein interaction prediction algorithms.

Caveats and possible continuations
The protein-protein interactions used here also contain the high-throughput studies and therefore the interactome used should be considered with caution. I might redo this analysis with a recent set of interactions compiled from the literature[2] but this will also introduce some bias into the interactome.
I should do some statistical analysis to determine if the differences observed are at all significant. If the differences are significant I should try to correlate the likelihood of interactions with a quantitative measure like average protein identity.
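One simple option for that significance test would be a 2x2 chi-square on interacting versus non-interacting pair counts for two groups (a sketch only, not what was actually done; for small counts Fisher's exact test would be the safer choice).

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]] -- e.g. a = interacting A-E pairs, b = non-interacting
    A-E pairs, and c/d the same counts for A-F. Compare the statistic
    against the chi-square distribution with one degree of freedom."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0
```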

References
[1]Qin H, Lu HH, Wu WB, Li WH. Evolution of the yeast protein interaction network. Proc Natl Acad Sci U S A. 2003 Oct 28;100(22):12820-4. Epub 2003 Oct 13
[2]Reguly T, Breitkreutz A, Boucher L, et al. Comprehensive curation and analysis of global interaction networks in Saccharomyces cerevisiae. J Biol. 2006 Jun 8;5(4):11 [Epub ahead of print]

Sunday, June 25, 2006

Quick links

I stumbled upon a new computational biology blog called Nature's Numbers, looks interesting.
From the Science Blogs universe, here is a list compiled by Coturnix of upcoming blog carnivals for the next few days. I also remind anyone reading that the deadline for submissions for Bio::Blogs is coming very soon, so send in your links :).
Still in Science Blogs, here is an introduction to information theory. I am getting interested in this as a tool for computational biology but I have a lot to learn on the subject. Here are two papers I fished out that use information theory in biology.
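As a tiny, self-contained example of the kind of quantity involved, here is the Shannon entropy of a symbol sequence, the ingredient behind, for example, the conservation scores in sequence logos (a sketch of my own, not taken from either paper):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i)),
    where p_i is the frequency of symbol i in the sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform four-letter sequence gives the maximum 2 bits per symbol; a fully conserved one gives 0.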
Also, if you want to donate some money, go check out the donors choose challenge of several Science Bloggers. Seed will match the donations up to $10,000 making each donation potentially more useful.

Wednesday, June 21, 2006

Journal policies on preprint servers

I mentioned in a previous post that it would be interesting to separate the registration, which allows claims of precedence for a scholarly finding (the submission of a manuscript) from the certification, which establishes the validity of a registered scholarly claim (the peer review process).

This can only happen if journals accept that a manuscript submitted to a preprint server is different from a peer-reviewed article and therefore should not be considered prior publication. So what do the journals currently say about preprint servers? I looked around the different policies, sent some emails, and compiled this list:

Nature: yes but ...
Nature allows prior publication on recognised community preprint servers for review by other scientists in the field before formal submission to a journal. The details of the preprint server concerned and any accession numbers should be included in the cover letter accompanying submission of the manuscript to Nature. This policy does not extend to preprints available to the media or that are otherwise publicised before or during the submission and consideration process at Nature.


I enquired about this last part of their policy on the peer review forum and this was the response:
"We are aware that preprint servers such as ArXiv are available to the media, but as things stand we consider for publication, and publish, many papers that have been posted on it, and on other community preprint servers. As long as the authors have not actively sought out media coverage before submission and publication in Nature, we are happy to consider their work."


Nature Genetics/Nature Biotechnology: yes
(...)the presentation of results at scientific meetings (including the publication of abstracts) is acceptable, as is the deposition of unrefereed preprints in electronic archives.


PNAS: Yes!
"Preprints have a long and notable history in science, and it has been PNAS policy that they do not constitute prior publication. This is true whether an author hands copies of a manuscript to a few trusted colleagues or puts it on a publicly accessible web site for everyone to read, as is common now in parts of the physics community. The medium of distribution is not germane. A preprint is not considered a publication because it has not yet been formally reviewed and it is often not the final form of the paper. Indeed, a benefit of preprints is that feedback usually leads to an improved published paper or to no publication because of a revealed flaw. "


BMC Bioinformatics/BMC Biology/BMC Evolutionary Biology/BMC Genomics/BMC Genetics/Genome Biology: Yes
"Any manuscript or substantial parts of it submitted to the journal must not be under consideration by any other journal although it may have been deposited on a preprint server."


Molecular Systems Biology: Do you feel lucky ?
"Molecular Systems Biology reserves the right not to publish material that has already been pre-published (either in electronic or other media)."


Genome Research: No
"Submitted manuscripts must not be posted on any web site and are subject to press embargo."


Science: Do you feel lucky ?
"We will not consider any paper or component of a paper that has been published or is under consideration for publication elsewhere. Distribution on the Internet may be considered prior publication and may compromise the originality of the paper or submission. Please contact the editors with questions regarding allowable postings under this policy."


Cell: No ?
"Manuscripts are considered with the understanding that no part of the work has been published previously in print or electronic format and the paper is not under consideration by another publication or electronic medium."


PLoS - No clear policy information on the site about this, but according to an email I got from PLoS they do consider for publication papers that have been submitted to preprint servers. I hope they make this clear in the policies they have available online.

Bioinformatics,Molecular Biology and Evolution - ??
"Authors wishing to deposit their paper in public or institutional repositories may deposit a link that provides free access to the paper, but must stipulate that public availability be delayed until 12 months after first online publication in the journal"

I sent emails to both journals but I only had an answer from MBE directing me to this policy common to the journals of the Oxford University Press.

In summary, most journals I checked will consider papers that have previously been submitted to preprint servers, so in the future I might submit my own work to preprint servers before looking for a journal. Very few journals clearly refuse manuscripts that might be available in electronic form, but a good number either have no clear policy or reserve the right to reject papers that are available online.

Monday, June 19, 2006

Mendel's Garden and Science Online Seminars

For those interested in evolution and genetics this is a good day. The first issue of Mendel's Garden is out with lots of interesting links. I particularly liked RPM's post on evolution of regulatory regions. I still think that evo-devo should focus a bit more on changes in protein interaction networks but more about that one of these days (hopefully :).

On a related note, Science started a series of online seminars with a primer on "Examining Natural Selection in Humans". This is a flash presentation with voice overs from the authors of a recent Science review on the same subject. I like this idea much more than the podcasts. I am not a big fan of podcasts because it is much faster to scan a text than it is to hear someone read it for you. At least with images there is more information and more appeal to spend some minutes listening to a presentation. The only thing I have against this Science Online Seminars initiative is that there is no RSS feed (I hope it is just a matter of time).

Friday, June 16, 2006

Bio::Blogs announcement

Bio::Blogs is a blog carnival covering all bioinformatics and computational biology subjects. Bio::Blogs is scheduled as a monthly edition coming out on the first day of every month; the deadline for submissions is the end of each month. Submissions for the next release of Bio::Blogs and offers to host future editions can be sent to:

I will be hosting the first issue of Bio::Blogs here and there will be a homepage to keep track of all of the editions.

For discussions relating to Bio::Blogs, visit the Nodalpoint forum entry.

Wednesday, June 14, 2006

SB2.0 webcast and other links

If you missed the Synthetic Biology 2.0 conference, you can now watch the webcast here (via MainlyMartian).

The Nature tech team over at Nascent continues its productive stream of new products, including the release of Nature Network Boston and a new release of the Open Text Mining Interface. They have even set up a webpage for us to keep up with all the activity here. They really look like a research group by now :) I wonder what would happen if they tried to publish some of this research... Open Text Mining Interface, published by Nature, in journal X.


Monday, June 12, 2006

PLoS blogs

Liz Allen and Chris Surridge just kicked off the new PLoS blogs. According to Liz the blogs will be used to discuss their "vision for scientific communication, with all of its potentials and obstacles". I thank both of them for the nice links to this blog :) and for engaging in conversation.

Chris Surridge details in his first post how news of PLoS One has been spreading through the blogs. I think this only happened because the ideas behind ONE do strike a chord with bloggers and I really hope their efforts are met with success and more people engage in scientific discussion and collaboration.
Science blog carnivals

What is a blog carnival? In my opinion, a blog carnival is just a meta-blog: a link aggregation supervised by an editor. Carnivals have been around for some time and there are already some conventions about what to expect from one. You can read this nice post on Science and Politics to get a better understanding of blog carnivals.

Here is a short summary I found on this FAQ:
Blog Carnivals typically collect together links pointing to blog articles on a particular topic. A Blog Carnival is like a magazine. It has a title, a topic, editors, contributors, and an audience. Editions of the carnival typically come out on a regular basis (e.g. every monday, or on the first of the month). Each edition is a special blog article that consists of links to all the contributions that have been submitted, often with the editors opinions or remarks.


There are of course science carnivals and I would say that their numbers are increasing with more people joining the science blogosphere. To my knowledge (please correct me :) the first scientific blog carnival was the Tangled Bank that I think started on the 21st of April 2004 and is still up and running.

These carnivals could also be seen as a path of certification (as discussed in the previous post). The rotating editor reviews submissions and bundles some of them together. This should guarantee that the carnival has the best of what has been posted on the subject in the recent past. The authors gain the attention of anyone interested in the carnival, and the readers get supposedly good-quality posts on the subject. With time, and if there are more blog posts than carnivals, we will likely see some carnivals gaining reputation.

Maybe one day having one of your discovery posts appear in one of the carnivals will be the equivalent of today having a paper published in a top journal.

With that said, why don't we start a computational biology/bioinformatics carnival? :) There might not be enough people for it, but we can make it monthly or something like that. Any suggestions for a name?

Thursday, June 08, 2006

The peer review trial

The day after finding out about PLoS ONE I saw the announcement of the Nature peer review trial. For the next couple of months any author submitting to Nature can opt to go through a parallel process of open peer review. Nature is also promoting discussion of the issue online in a forum where anyone can comment. You can also track the discussion on the web through Connotea under the tag "peer review trial", or under the "peer review" tag in Postgenomic.

I really enjoyed reading this opinion on "Rethinking Scholarly Communication", summarized in one of the Nature articles. Briefly, the authors first describe (following Roosendaal and Geurts) the required functions of any system of scholarly communication:
* Registration, which allows claims of precedence for a scholarly finding.
* Certification, which establishes the validity of a registered scholarly claim.
* Awareness, which allows actors in the scholarly system to remain aware of new claims and findings.
* Archiving, which preserves the scholarly record over time.
* Rewarding, which rewards actors for their performance in the communication system based on metrics derived from that system.

The authors then try to show that it is possible to build a science communication system where all these functions are not centered in the journal, but are separated in different entities.

This would speed up science communication. There is a significant delay between submitting a communication and having it accessible to others because all the functions are centered in the journals and only after the certification (peer reviewing) is the work made available.

Separating the registration from the certification also has the potential benefit of enabling parallel certifications. Manuscripts deposited in preprint servers can be evaluated by the traditional peer review process in journals, but on top of this there is also the possibility of exploring other ways of certifying the work presented. The authors give the example of Citebase, but blog aggregation sites like Postgenomic could also provide independent measures of the interest of a communication.

More generally, and maybe going a bit off-topic, this reminded me of the correlation between modularity and complexity in biology. By dividing a process into separate and independent modules you allow for the exploration of novelty without compromising the system. The process is still free to go from start to end in the traditional way, but new subsystems can be created to compete with some of the modules.

For me this discussion is relevant for the whole scientific process, not just communication. New web technologies lower the cost of establishing collaborations and should therefore ease the recruitment of the resources required to tackle a problem. Because people are better at different tasks, it makes sense to increase the modularity of the scientific process.


Monday, June 05, 2006

PLoS One

There is an article in Wired about open access in scientific publishing. It focuses on the efforts of the Public Library of Science (PLoS) to make content freely available by transferring the costs of publication to the authors. What actually caught my attention was this little paragraph:

The success of the top two PLoS journals has led to the birth of four more modest ones aimed at specific fields: clinical trials, computational biology, genetics, and pathogens. And this summer, Varmus and his colleagues will launch PLoS One, a paperless journal that will publish online any paper that evaluators deem “scientifically legitimate.” Each article will generate a thread for comment and review. Great papers will be recognized by the discussion they generate, and bad ones will fade away.

The emphasis is mine. I went snooping around for the upcoming PLoS ONE and found a page to subscribe to a mailing list. It has a curious banner with a subtitle of open access 2.0.



I found some links in the source code that got me to the prototype webpage. It sounds exactly like what a lot of people have been pushing for: rapid scientific communication, community peer review, continuous revision of the paper (they call it interactive papers) and open access. This will be hard to implement but, if successful, it will do much to bring more transparency to the scientific process and increase cooperation between scientists.

There is also something about the name PLoS ONE. They are really betting a lot on this launch if they are calling it ONE. It implicitly states that ONE will be the flagship of PLoS, where any paper (not just Biology) can be published.