Friday, June 16, 2006

Bio::Blogs announcement

Bio::Blogs is a blog carnival covering all bioinformatics and computational biology subjects. Bio::Blogs is scheduled to be a monthly edition, coming out on the first day of every month. The deadline for submissions is the end of each month. Submissions for the next release of Bio::Blogs and offers to host future editions can be sent to:

I will be hosting the first issue of Bio::Blogs here and there will be a homepage to keep track of all of the editions.

For discussions related to Bio::Blogs, visit the Nodalpoint forum entry.

Wednesday, June 14, 2006

SB2.0 webcast and other links

If you missed the Synthetic Biology 2.0 conference, you can now watch the webcast here (via MainlyMartian).

The Nature tech team over at Nascent continues its productive stream of new products, including the release of Nature Network Boston and a new release of the Open Text Mining Interface. They even set up a webpage for us to keep up with all the activity here. They really look like a research group by now :) I wonder what would happen if they tried to publish some of this research... Open Text Mining Interface, published by Nature in journal X.


Monday, June 12, 2006

PLoS blogs

Liz Allen and Chris Surridge just kicked off the new PLoS blogs. According to Liz the blogs will be used to discuss their "vision for scientific communication, with all of its potentials and obstacles". I thank both of them for the nice links to this blog :) and for engaging in conversation.

Chris Surridge details in his first post how news of PLoS One has been spreading through the blogs. I think this only happened because the ideas behind ONE strike a chord with bloggers. I really hope their efforts are met with success and that more people engage in scientific discussion and collaboration.
Science blog carnivals

What is a blog carnival? In my opinion, a blog carnival is just a meta-blog: a link aggregation supervised by an editor. They have been around for some time and there are already some conventions about what to expect from a blog carnival. You can read this nice post on Science and Politics to get a better understanding of blog carnivals.

Here is a short summary I found on this FAQ:
Blog Carnivals typically collect together links pointing to blog articles on a particular topic. A Blog Carnival is like a magazine. It has a title, a topic, editors, contributors, and an audience. Editions of the carnival typically come out on a regular basis (e.g. every monday, or on the first of the month). Each edition is a special blog article that consists of links to all the contributions that have been submitted, often with the editors opinions or remarks.


There are of course science carnivals, and I would say that their numbers are increasing as more people join the science blogosphere. To my knowledge (please correct me :) the first scientific blog carnival was the Tangled Bank, which I think started on the 21st of April 2004 and is still up and running.

These carnivals could also be seen as a path of certification (as discussed in the previous post). The rotating editor reviews submissions and bundles some of them together. This should guarantee that the carnival has the best of what has been posted on the subject in the recent past. The authors gain the attention of anyone interested in the carnival, and the readers get supposedly good-quality posts on the subject. With time, and if there are more blog posts than carnivals, we will likely see some carnivals gaining reputation.

Maybe one day having one of your discovery posts appear in one of the carnivals will be the equivalent of having a paper published in a top journal today.

With that said, why don't we start a computational biology/bioinformatics carnival? :) There might not be enough people for it, but we could make it monthly or something like that. Any suggestions for a name?

Thursday, June 08, 2006

The peer review trial

The day after finding out about PLoS One I saw the announcement of the Nature peer review trial. For the next couple of months, any author submitting to Nature can opt to go through a parallel process of open peer review. Nature is also promoting discussion of the issue online in a forum where anyone can comment. You can also track the discussion going on around the web through Connotea under the tag "peer review trial", or under the "peer review" tag in Postgenomic.

I really enjoyed reading this opinion on "Rethinking Scholarly Communication", summarized in one of the Nature articles. Briefly, the authors first describe (following Roosendaal and Geurts) the functions required of any system of scholarly communication:
* Registration, which allows claims of precedence for a scholarly finding.
* Certification, which establishes the validity of a registered scholarly claim.
* Awareness, which allows actors in the scholarly system to remain aware of new claims and findings.
* Archiving, which preserves the scholarly record over time.
* Rewarding, which rewards actors for their performance in the communication system based on metrics derived from that system.

The authors then try to show that it is possible to build a science communication system where all these functions are not centered in the journal, but are separated into different entities.

This would speed up science communication. There is a significant delay between submitting a communication and having it accessible to others, because all the functions are centered in the journals and the work is only made available after certification (peer review).

Separating registration from certification also has the potential benefit of allowing parallel certifications. Manuscripts deposited in pre-print servers can be evaluated by the traditional peer-review process in journals, but on top of this there is also the possibility of exploring other ways of certifying the work presented. The authors give the example of Citebase, but blog aggregation sites like Postgenomic could also provide independent measures of the interest in a communication.

More generally, and maybe going a bit off-topic, this reminded me of the correlation between modularity and complexity in biology. By dividing a process into separate and independent modules you allow for the exploration of novelty without compromising the system. The process is still free to go from start to end in the traditional way, but new subsystems can be created to compete with some of the modules.

For me this discussion is relevant for the whole scientific process, not just communication. New web technologies lower the costs of establishing collaborations and should therefore ease the recruitment of the resources required to tackle a problem. Because people are better at different tasks, it does make some sense to increase the modularity of the scientific process.


Monday, June 05, 2006

PLoS One

There is an article in Wired about open access in scientific publishing. It focuses on the efforts of the Public Library of Science (PLoS) to make content freely available by transferring the costs of publication to the authors. What actually caught my attention was this little paragraph:

The success of the top two PLoS journals has led to the birth of four more modest ones aimed at specific fields: clinical trials, computational biology, genetics, and pathogens. And this summer, Varmus and his colleagues will launch PLoS One, a paperless journal that will publish online any paper that evaluators deem “scientifically legitimate.” Each article will generate a thread for comment and review. Great papers will be recognized by the discussion they generate, and bad ones will fade away.

The emphasis is mine. I went snooping around for the upcoming PLoS One and found a page to subscribe to a mailing list. It has a curious banner with the subtitle "open access 2.0".



I found some links in the source code that got me to the prototype webpage. It sounds exactly like what a lot of people have been pushing for: rapid scientific communication, community peer review, continuous revision of the paper (they call them interactive papers), and open access. This will be hard to implement, but if successful it will do much to bring more transparency to the scientific process and increase cooperation between scientists.

There is also something about the name PLoS ONE. They are really betting a lot on this launch if they are calling it ONE. It implicitly states that ONE will be the flagship of PLoS, where any paper (not just Biology) can be published.




Wednesday, May 31, 2006

Bringing democracy to the net

Democracy is most often thought of in opposition to totalitarianism. In this case I mean democracy in opposition to anarchy. Some people are raising their voices against the trend of collective intelligence/wisdom of the crowds that has been the hype of the net for the past few years. Wikipedia is the crown jewel of this trend of empowering people for the common good, and probably as a result of the project's visibility it has been the one to take the heat from the backlash.

Is Wikipedia dead, as Nicholas Carr suggests in his blog? His provocative title was flame bait, but he does call attention to some interesting things happening at Wikipedia. Wikipedia is not dead, it is just changing. It has to change to cope with the increase in visibility and vandalism, and to deal with situations where no real consensus is possible.
The system is evolving by restricting anonymous posting and allowing editors to apply temporary editing restrictions to some pages. It is becoming more bureaucratic in nature, with mechanisms to deal with disputes and discord. What Nicholas Carr says is dead is the ideal that anyone can edit anything in Wikipedia, and I would say this is actually good news.

Following his post on the death of Wikipedia, Carr points to an essay by Jaron Lanier entitled Digital Maoism. It is a bit long, but I highly recommend it.
Some quotes from the text:
"Every authentic example of collective intelligence that I am aware of also shows how that collective was guided or inspired by well-meaning individuals. These people focused the collective and in some cases also corrected for some of the common hive mind failure modes. The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing projects. "


Sites like Wikipedia are important online experiments. They are trying to develop the tools that allow useful work to come out of millions of very small contributions. I think this will have to go through something like representative democracy. We still have to work out ways to establish the governing body in these internet systems, essentially deciding in whom we deposit trust for a particular task or realm of knowledge. For this we will need better ways to define identity online and to establish trust relationships.

Further reading:
Wiki-truth

Friday, May 26, 2006

The Human Puppet (2)

In November I rambled about a possible sci-fi scenario. It was about a person giving away their will to be directed by the masses on the internet. A vessel for the "collective intelligence". A voluntary and extreme reality show.

Well, there goes the sci-fi: you can participate in it in about 19 days. Via TechCrunch I found this site:

Kieran Vogel will make Internet television history when he becomes the first person to give total control of his life to the Internet.
(...)
Through an interactive media platform Kieran will live by the decisions the internet decides such as:

# What time he wakes up
# What he wears
# What he eats
# Who he dates
# What he watches


I get a visceral negative response to this. Although this is just a reality show and it is all going to happen inside a house, I think it will be important to keep this in mind. In the future, technology will make the web even more pervasive than it is today, and there are scenarios along the lines of this human puppet idea that could have negative consequences.
I guess what I am thinking is that the same technologies that help us to collaborate can also be used to control (sounds a bit obvious). In the end the only difference is how much the people involved want to (or can) exercise their will power.

Thursday, May 25, 2006

Using viral memes to request computer time

Every time we direct the browser somewhere, dedicating our attention to it, some computer processing time is used to display the page. This includes a lot of client-side processing, like all the JavaScript in all that nice-looking AJAX stuff. What if we could harvest some of this processing power to solve very small tasks, something like grid computing?
How would this work? There could be a video server that would allow me to put a video on my blog (like Google Video), or a simple game, or whatever people would enjoy and spend a little time doing. During this time a small work package would be downloaded from the same server, some processing done on the client side, and a result sent back. If people enjoy the video/game/whatever and it goes viral, then it spreads all over the blogs and any person dedicating their attention to it is contributing computer power to solve some task. Maybe this could work as an alternative to advertising? Good content would be traded for computer power. For comparison, Sun is selling computer power in the US for 1 dollar an hour. Of course this type of very small scale grid processing would be worth much less.
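Just to make the idea concrete, here is a minimal sketch (in TypeScript) of the kind of loop an embedded video or game widget could run in the background. The server endpoints, the task format, and the toy computation are all hypothetical, and a real system would need throttling, sandboxing, and verification of the returned results:

```typescript
// Minimal sketch of an embedded "compute while you watch" widget.
// The server endpoints and the task format are hypothetical.

interface Task {
  id: string;
  numbers: number[]; // toy payload: the "work" is just summing numbers
}

async function fetchTask(server: string): Promise<Task> {
  const response = await fetch(`${server}/task`);
  return response.json();
}

function compute(task: Task): number {
  // Placeholder for the real client-side computation.
  return task.numbers.reduce((sum, n) => sum + n, 0);
}

async function submitResult(server: string, id: string, result: number): Promise<void> {
  await fetch(`${server}/result`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id, result }),
  });
}

// Run a few small work units while the visitor's attention is on the video.
async function workWhileWatching(server: string, units = 5): Promise<void> {
  for (let i = 0; i < units; i++) {
    const task = await fetchTask(server);
    const result = compute(task);
    await submitResult(server, task.id, result);
  }
}

workWhileWatching("https://example-video-server.net").catch(console.error);
```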


Wednesday, May 24, 2006

Conference blogging and SB2.0

In case you missed the Synthetic Biology 2.0 meeting and want a quick summary of what happened there, you can take a look at some blogs. There were at least four bloggers at the conference. Oliver Morton (chief news and features editor of Nature) has a series of posts on his old blog. Rob Carlson described how he and Drew Endy used to call the field intentional biology. Alex Mallet from Drew Endy's lab has a quick summary of the meeting, and finally Mackenzie has by far the best coverage on his blog cis-action, with lots more to read.

I hope they put the recorded talks up on the site, since I missed a lot of interesting things during the live webcast.

On the third day of the meeting (which was not available in the live webcast) there was a discussion about possible self-regulation in the field (as at the 1975 Asilomar meeting). According to an article in NewScientist, the attending researchers decided against self-regulation measures.


Saturday, May 20, 2006

Synthetic Biology & best practices

There is a Synthetic Biology conference going on in Berkeley (webcast here), and on one of the days they are going to talk about best practices. There is a document online with an outline of some of the subjects up for discussion. In reaction to this, a group of organizations published an open letter to the people attending the meeting.
From the text:
We are writing to express our deep concerns about the rapidly developing field of Synthetic Biology that is attempting to create novel life forms and artificial living systems. We believe that this potentially powerful technology is being developed without proper societal debate concerning socio-economic, security, health, environmental and human rights implications. We are alarmed that synthetic biologists meeting this weekend intend to vote on a scheme of voluntary self-regulation without consulting or involving broader social groups. We urge you to withdraw these self-governance proposals and participate in a process of open and inclusive oversight of this technology.

Forms of self-regulation are not incompatible with open discussion with the broader society, nor with state regulation. Do we even need regulation at this point?


The internet and the study of human intelligence

I started reading a book on machine learning methods last night and my mind floated away to thinking about the internet and artificial intelligence (yes, the book is a bit boring :).
Anyway, one thing I thought about was how the internet might become (or already is) a very good place to study (human) intelligence. Some people are very transparent on the net, and if anything the trend is for people to start sharing their lives, or at least their view of the world, earlier. So it is possible to get an idea of what someone is exposed to: what they read, the films they see, some of their life experiences, etc. In some sense you can access someone's inputs in life.
On the other hand, you can also read this person's opinions when they are presented with some content. Person X, with known past experiences Y, was exposed to Z and reacted in this way. With this information we could probably learn a lot about human thought processes.


A little bit of this a little bit of that ...

What do you get when you mix humans/sex/religion/evolution? A big media hype.
Also, given that a big portion of the scientists currently blogging are working on evolution, you also get a lot of buzz in the science blogosphere. No wonder, then, that this paper reached the top spot in Postgenomic.

This one is a very good example of the usefulness of blogs and why we should really promote more science communication online. The paper was released as an advance online publication, and a few days later you can already read a lot of opinions about it. It is not just the blog entries but also all the comments on these blog posts. As a result, we get not only the results and discussion from the paper but also the opinions of whoever decided to participate in the discussion.

Wednesday, May 17, 2006

Postgenomic greasemonkey script (2)

I have posted the Postgenomic script I mentioned in the previous post on the Nodalpoint wiki page. There are some instructions there on how to get it running. If you have problems or suggestions, leave a comment here or in the Nodalpoint forum. Right now it is only set to work with the Nature journals, but it should work with more.


Saturday, May 13, 2006

Postgenomics script for Firefox

I am playing around with Greasemonkey to try to add links to Postgenomic on journal websites. The basic idea is to search the webpage you are viewing (a Nature website, for example) for papers that have been talked about in blogs and are tracked by Postgenomic. When one is found, a little picture is added with a link to the Postgenomic page discussing the paper.
The result is something like this (in the case of the table of contents):


Or like this when viewing the paper itself:


In another journal:


I am more comfortable with Perl, but anyway I think it works as a proof of principle. If Stew agrees I'll probably post the script on Nodalpoint for people to improve or just try out.
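For anyone curious about the mechanics, the general approach could look something like the sketch below (written in TypeScript here, although the actual userscript is plain JavaScript run by Greasemonkey). It scans the page for DOI-like strings and inserts a small link next to each one; the Postgenomic URL scheme and the "[blogged]" marker are assumptions for illustration, not the real script:

```typescript
// Sketch of the general approach (hypothetical URLs; not the actual script).
// Scan the page for DOI-like strings and add a link to a Postgenomic page
// for each one that the aggregator is assumed to know about.

const DOI_PATTERN = /\b10\.\d{4,9}\/[^\s"<>]+/g;

function addPostgenomicLink(textNode: Text, doi: string): void {
  const icon = document.createElement("a");
  // Hypothetical URL scheme for a paper page on the aggregator.
  icon.href = `http://postgenomic.com/paper?doi=${encodeURIComponent(doi)}`;
  icon.textContent = " [blogged]";
  icon.title = "See blog posts about this paper on Postgenomic";
  textNode.parentNode?.insertBefore(icon, textNode.nextSibling);
}

function scanPage(): void {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  const matches: Array<{ node: Text; doi: string }> = [];
  while (walker.nextNode()) {
    const node = walker.currentNode as Text;
    for (const doi of node.textContent?.match(DOI_PATTERN) ?? []) {
      matches.push({ node, doi });
    }
  }
  // Insert links only after collecting matches, so we do not walk nodes we created.
  matches.forEach(({ node, doi }) => addPostgenomicLink(node, doi));
}

scanPage();
```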

Thursday, May 11, 2006

Google Trends and Co-Op

There are some new Google services up and running and buzzing around the blogs today. I have only taken a brief look at them.
Google Trends is like Google Finance for any search trend that you want to analyze. Very useful for someone wanting to waste time instead of doing productive work ;). You can compare the search and news volume for different terms like:

It gets its data from all Google searches, so it does not really reflect trends within the scientific community.

The other new tool, out yesterday, is Google Co-Op, the start of social search for Google. It looks as obscure as Google Base, so I can again try to make some weird connection to how researchers might use it :). It looks like Google Co-Op is a way for users to further personalize their search. Users can subscribe to providers that offer their knowledge/guidance to shape some of the results you see in your search. If you search, for example, for alzheimer's, you should see at the top of the results some refinements you can apply, such as looking only at treatment-related results. This is possible because a list of contributors has labeled a lot of content according to some rules.

Anyone can create a directory and start labeling content following an XML schema that describes the "context". So anyone, or (more likely) any group of people, can add metadata to content and have it available in Google. The obvious application for science would be to have metadata on scientific publications available. Getting Connotea and CiteULike data into a Google directory, for example, might be useful. These sites can still go on developing their niche-specific tools, but we could benefit from having a lot of the tagging metadata available in Google.
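As a rough illustration of the kind of export this would involve, here is a small TypeScript sketch that turns a list of tagged paper URLs (the sort of data Connotea or CiteULike hold) into an annotations-style XML document. The element and attribute names are assumptions for illustration, not necessarily the schema Google Co-Op actually expects:

```typescript
// Illustrative only: converts tagged bookmarks (Connotea/CiteULike style)
// into an annotations-like XML document. The element and attribute names
// are assumptions, not the actual Google Co-Op schema.

interface TaggedPaper {
  url: string;
  tags: string[];
}

function escapeXml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function toAnnotationsXml(papers: TaggedPaper[]): string {
  const entries = papers
    .map((paper) => {
      const labels = paper.tags
        .map((tag) => `    <Label name="${escapeXml(tag)}"/>`)
        .join("\n");
      return `  <Annotation about="${escapeXml(paper.url)}">\n${labels}\n  </Annotation>`;
    })
    .join("\n");
  return `<Annotations>\n${entries}\n</Annotations>`;
}

// Example: two bookmarked papers with user-assigned tags.
const sample: TaggedPaper[] = [
  { url: "http://dx.doi.org/10.1000/example.1", tags: ["alzheimers", "treatment"] },
  { url: "http://dx.doi.org/10.1000/example.2", tags: ["bioinformatics"] },
];

console.log(toAnnotationsXml(sample));
```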


Wednesday, May 10, 2006

Nature Protocols

Nature is clearly still the most innovative of the publishing houses, in my view. A new website is up in beta called Nature Protocols:

Nature Protocols is a new online web resource for laboratory protocols. The site, currently in beta phase, will contain high quality, peer-reviewed protocols commissioned by the Nature Protocols Editorial team and will also publish content posted onto the site by the community

They accept different types of content:
* Peer-reviewed protocols
* Protocols related to primary research papers in Nature journals
* Company Protocols and Application notes
* Non peer-reviewed (Community) protocols

There are already several protocol websites out there, so what is the point? For Nature I guess it is obvious. Just like most portal websites, they are creating a very good place to put ads. I am sure that all these protocols will have links to Nature products and a lot of ads. The second advantage for Nature is the stickiness of the service. More people will come back to the website to look for protocols and stumble onto Nature content, increasing visibility for the journals and their impact.

A little detail is that, as they say above, the protocols from papers published in the Nature journals will be made available on the website. On one hand this sounds great, because the methods sections in papers are usually so small (due to publication restrictions) that they are most of the time incredibly hard to decipher (and usually pushed into the supplementary materials). On the other hand, this will increase even further the tendency to hide away from the paper the really important parts of the research, the results and how they were obtained (methods), and to show only the subjective interpretations of the authors.
This reminds me of a recent editorial by Gregory A Petsko in Genome Biology (subscription only). Here is how he states the problem :) - "The tendency to marginalize the methods is threatening to turn papers in journals like Nature and Science into glorified press releases."

For scientists this will be a very useful resource. Nature has a lot of appeal and will be able to quickly create a lot of really good content by inviting experienced scientists to write up their protocols, full of the tips and tricks accumulated over years of experience. This is the easy part for science portals: the content comes free. If somebody went to Yahoo and told them that scientists actually pay scientific journals to please, please show our created content, they would probably laugh :). Yahoo/MSN and other web portals have to pay people to create the content they have on their sites.

web2.0@EMBL

The EMBL Centre for Computational Biology has announced a series of talks on novel concepts and easy-to-use web tools for biologists. So far the following sessions are scheduled:

Session 1 - Using new web concepts for more efficient research - an introduction for the less-techy crowd
Time/place: Tue, May 16th, 2006; 14:30; Small Operon

This one, I think, will introduce the concepts around what is called web2.0 and the potential impact they might have for researchers. I am really curious to see how big the "less-techy crowd" will really be :).

The following sessions are a bit more specific, dealing with particular problems we might face in our activities and how some of the recent web technologies can help us deal with them.

Session 2 - Information overflow? Stay tuned with a click (May 23rd, 2006; 14:30)
Session 3 - Tags: simply organize and share links and references with keywords (May 30th, 2006; 14:30)
Session 4 - Stop emailing huge files: How to jointly edit manuscripts and share data (June 6th, 2006; 14:30)
All in the Small Operon, here at EMBL Heidelberg.

I commend the efforts of the EMBL CCB and I hope that a lot of people turn up. Let's see if open, collaborative ideas come up in the discussions. If you are in the neighborhood and are interested, come by and help with the discussion (map).


Tuesday, April 25, 2006

Engineering a scientific culture

In a commentary in Cell, Gerald Rubin describes Janelia Farm, the new research campus of the Howard Hughes Medical Institute. If you cannot access the commentary, there is a lot of information available on the website such as this flash presentation (oozing with PR talk).

In summary (as I understood it), the objective is to create a collaborative working environment where scientists can explore risky, long-term projects without having to worry about applying for grants and publishing on a very regular basis.
Group leaders at Janelia Farm will:
- have small groups (two to six people)
- not be able to apply for outside funding
- still work at the bench

Unless you are really interested in managing resources and all the hassle of applying for grants, this sounds very appealing.

Also, there is no limit on how long a group leader can stay at Janelia Farm, as long as they pass a review every five years. This is unlike, for example, here at EMBL, where most people are forced to move on after nine years (with a review after five years).

Since the main objective of Janelia Farm is to work on long-term projects that can have significant impact, the review process will not focus on publications but on more subjective criteria like:
"(1) the ability to define and the willingness to tackle difficult and important problems; (2) originality, creativity, and diligence in the pursuit of solutions to those problems; and (3) contributions to the overall intellectual life of the campus by offering constructive criticism, mentoring, technical advice, and in some cases, collaborations with their colleagues and visiting scientists"

Sounds like a researcher's paradise :) - do the science, we will do the rest for you.
It will be interesting to see in a few years if they manage to create such an environment. The lack of very objective criteria and the absence of a limit on the stay at the campus might lead to some corruption.

Friday, April 21, 2006

Posting data on your blog

Via Postgenomic I read this blog post on Science and Politics about science blogs. Bora Zivkovic describes in his post the different types of science blogging, with several examples. The most interesting part for me was his discussion of posting hypotheses and unpublished data. I was very happy to see that he already has some posts with his own unpublished data and that the discussion about science communication online is coming up in different communities.

His answer to the scoop problem:
But, putting data on a blog is a fast way of getting the data out with a date/time stamp on it. It is a way to scoop the competition. Once the data are published in a real Journal, you can refer back to your blog post and, by doing that, establish your primacy.

There are some problems with this. For example, people hosting their own blogs could try to forge the dates, so it would be best to have a third party time-stamp the data. Postgenomic would be great for this; there could be another section in the aggregator to track posts with data. Some journals will probably complain about prior publication and decline to publish something already seen on a blog.
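A minimal sketch of what third-party time-stamping could look like, assuming the aggregator simply records a content hash together with the time it crawled the post (the record format and the verification flow here are hypothetical, in TypeScript):

```typescript
// Sketch of how a third-party aggregator could time-stamp blog-posted data.
// The record store and verification flow are hypothetical; the point is that
// the aggregator never needs to trust the blog's own dates.

import { createHash } from "crypto";

interface TimestampRecord {
  url: string;
  sha256: string;    // fingerprint of the post content as fetched
  fetchedAt: string; // time recorded by the aggregator, not by the blog
}

function fingerprint(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// The aggregator stores this when it first crawls the post.
function recordPost(url: string, content: string): TimestampRecord {
  return { url, sha256: fingerprint(content), fetchedAt: new Date().toISOString() };
}

// Later, anyone can check that the data cited in a paper matches what was posted.
function verify(record: TimestampRecord, claimedContent: string): boolean {
  return record.sha256 === fingerprint(claimedContent);
}

const record = recordPost("http://example-blog.net/my-data-post", "raw data: 1,2,3");
console.log(record, verify(record, "raw data: 1,2,3")); // true if the content is unchanged
```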

The problems with current publishing systems, and the agonizing feeling of seeing your hard work published by other people, will probably help drive some change in science communication. Blogging data would make science communication more real-time and transparent, hopefully reducing the wasted resources and frustrations caused by overlapping research.

This is a topic I come back to once in a while, so I have mentioned it here before. The stream-like format of a blog makes it hard to keep posting all the relevant links on a topic, so I think from now on I will just link to the last post on the topic, to at least form a connected chain.