
Tuesday, April 02, 2013

Benchmark the experimental data not just the integration

There was a paper out today in Molecular Systems Biology with a resource of kinase-substrate interactions obtained from in-vitro kinase assays using protein microarrays. There is clearly a significant difference between what a kinase regulates inside a cell and what it could phosphorylate in vitro given appropriate conditions. In fact, reviewer number 1, in the attached comments (PDF), explains at length why these protein-array based kinase interactions may be problematic. The authors are aware of this and integrate the protein-array data with additional data sources to derive a higher-confidence dataset of kinase interactions. They then provide computational and experimental benchmarks of the integrated dataset. What I have an issue with is that the original protein-array data itself is not clearly benchmarked in the paper. How are we to know what that feature, and all of the hard experimental work behind it, contributes to the final integrated predictor?

A very similar procedure was used in a recent Cell paper where co-complex membership was predicted from the elution profiles of proteins detected by mass spectrometry. Here again, the authors do not present benchmarks of the interactions predicted solely from the co-elution data. Instead, they integrate it with around 15 other features before evaluating and studying the final result. In this case, the supplementary material gives some indirect indication of the value of the experimental data on its own, by listing the rank each feature has in the predictor.

I don't think the papers are incorrect. In both cases the authors provide an interesting final result, with the integrated set of interactions benchmarked and analysed. However, in both cases we are left unsure of the value of the experimental data being presented. I don't think this is an unreasonable request. There are many reasons why this information should be clearly presented before additional data-integration steps are applied. At the very least, it is important for other groups thinking about setting up similar experimental approaches.



Thursday, July 17, 2008

ISMB 2008


I am leaving soon for Toronto to attend ISMB 2008. I usually stay away from big conferences, since at small ones it is easier to find time to talk to everyone. The nice thing about attending a big conference is that it looks like everyone is there. There is no shortage of science bloggers attending, and it is going to be nice to meet the people behind some of the blogs for the first time.

There is a room in FriendFeed where several of the people attending have gathered, and for those not going it will probably be a good place to check for coverage of the conference. Alternatively, here is a list of bloggers attending ISMB or some of the conferences before/after it:

Saturday, June 28, 2008

Capturing biology one model at a time

Mathematical and computational modeling is (I hope) a well-accepted requirement in biology. These tools allow us to formalize and study systems of a complexity that is hard to grasp by intuition alone. There have been great advances in our capacity to model different biological systems, from single components to cellular functions and tissues. Many of these efforts have progressed separately, each dealing with a particular layer of abstraction (atoms, interactions, cells, etc), and some of them are now reaching a level of accuracy that rivals experimental methods. I will try to summarize, in a series of blog posts, the main advances behind some of these models and examples of integration between them, with particular emphasis on proteins and cellular networks. I invite others to post about models in their areas of interest, to be collected for a review.

From sequence to fold
Once produced, RNAs and proteins adopt structures that play different functional roles. In principle, all the information required to determine the structure is in the DNA sequence that encodes the RNA/protein. Although there has been some success in predicting RNA structure from sequence, ab-initio protein folding remains a difficult challenge (see the review by R. Das and D. Baker). A more pragmatic approach has been to use the increasing structural and sequence data available in public databases to develop sequence-based models for protein domains. In this way, for well-studied protein folds it is possible to ask the reverse question: what sequences are likely to fold this way?
(To be expanded in a future post, volunteers welcome)

Protein binding models

I am particularly interested in how proteins interact with other components (mainly other proteins and DNA) and in trying to model these interactions from sequence to function. I will leave protein-compound interactions and metabolic networks to more knowledgeable people.
As mentioned above, even without a complete ab-initio folding model it is possible to predict the structure of some sequences, or to determine from comparative genomics analysis which protein/domain family a sequence belongs to. This by itself might not be very informative from a cellular perspective. We need to know how cellular components interact and how these interconnected components create useful functions in a cell.

Docking
Trying to understand and predict how two proteins interact in a complex has been a challenge of structural computational biology for more than two decades. The first attempt to understand protein interactions from computational analysis of structural data (what is known today as docking) was published by Wodak and Janin in 1978. In this seminal study, the authors established a computational procedure to reconstitute a protein complex from simplified models of the two interacting proteins. In the decades that have followed, the complexity and accuracy of docking methods have steadily increased, but the field still faces difficult hurdles (see reviews by Bonvin et al. 2006 and Gray 2006). Docking methods start from the knowledge that two proteins interact and aim to predict the most likely binding interfaces and conformations of these proteins in a 3D model of the complex. Ultimately, docking approaches might one day also predict new interactions for a protein by exhaustively docking all other proteins in the proteome of the species, but at the moment this is still not feasible.

Interaction types
It should still be possible to use the 3D structures of protein complexes to understand at least particular interaction types. Aloy and Russell have shown that it is possible to transfer structural information on protein-protein interactions by homology to other proteins with similar sequences (Aloy and Russell 2002). In this approach the homologous sequences are aligned to the sequences of the proteins in the 3D complex structure, and substitutions in the homologous sequences are evaluated with an empirical potential to determine the likelihood of binding. A similar approach was described soon after by Lu and colleagues, and both have been applied in large-scale genomic studies (Aloy and Russell 2003; Lu et al. 2003). Like any other functional annotation by homology, this method is limited by how much the target proteins have diverged from the templates; Aloy and Russell estimated that interaction modeling is reliable above 30% sequence identity (Aloy et al. 2003). Substitutions can also be evaluated with more sophisticated energy potentials once a homology model of the interface under study is created. Examples of tools that can be used to evaluate the impact of mutations on binding propensity include Rosetta and FoldX.
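As a toy illustration of the idea (not the published InterPreTS potentials or code), scoring a candidate protein pair against a template complex could look roughly like the Perl sketch below: align each candidate to its template chain, then sum an empirical contact propensity over the template's interface contacts. All data structures here are invented placeholders.

use strict;
use warnings;

# Toy sketch of interaction modeling by homology: score two candidate
# proteins against a template complex by summing an empirical contact
# propensity over the template's interface contacts.
my %potential;              # $potential{aa1}{aa2} = contact propensity
my @contacts;               # template interface contacts: [pos_A, pos_B]
my (%align_A, %align_B);    # template position => residue in candidate

sub interface_score {
    my $score = 0;
    for my $c (@contacts) {
        my ($ra, $rb) = ($align_A{ $c->[0] }, $align_B{ $c->[1] });
        next unless defined $ra && defined $rb;  # skip gapped positions
        $score += $potential{$ra}{$rb} || 0;
    }
    return $score;  # higher = substitutions more compatible with binding
}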
Although the methods described above were mostly developed for domain-domain protein interactions, similar approaches have been developed for protein-peptide interactions (see for example McLaughlin et al. 2006) and protein-DNA interactions (see for example Kaplan et al. 2005).

In summary, the accumulation of protein-protein and protein-DNA interaction information, along with structures of complexes and the ever-increasing coverage of sequence space, allows us to develop models that describe binding for some domain families. In a future blog post I will try to review the different domain families that are well covered by these binding models.

Previous mini-reviews
Protein sequence evolution

Wednesday, May 14, 2008

Prediction of phospho-proteins from sequence

I want to be able to predict what proteins in a proteome are more likely to be regulated by phosphorylation and hopefully use mostly sequence information. This post is a quick note to show what I have tried and maybe get some feedback from people that might have tried this before.

The most straightforward way to predict phospho-proteins is to use existing phospho-site predictors in some way. I have used the GPS 2.0 predictor on the S. cerevisiae proteome with the medium cutoff, including only serine/threonine kinases. The fraction of tyrosine phosphosites in S. cerevisiae is very low, so I decided not to try to predict tyrosine phosphorylation for now.

This produces a ranked list of 4E6 putative phosphosites for the roughly 6000 proteins, scored according to the predictor (each site is scored for multiple kinases). My question is how to best make use of these predictions if I mostly want to know which proteins are phosphorylated, not the exact sites. Using a set of known phosphorylated proteins in S. cerevisiae (mostly taken from Expasy), I computed different final scores as a function of all the phospho-site scores:
1) the sum
2) the highest value
3) the average
4) the sum of putative scores if they were above a threshold (4,6,10)
5) the sum of putative phosphosite scores if they were outside ordered protein segments as defined by a secondary structure predictor and above a score threshold

The results are summarized with the area under the ROC curve (known phosphoproteins were considered positives and all others negatives):
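For anyone wanting to replicate this, the aggregation schemes and the AROC calculation might look something like the Perl sketch below. The data structures (%site_scores mapping each protein to its list of predicted site scores, %known flagging the known phosphoproteins) and all names are placeholders I made up, not actual GPS output parsing:

use strict;
use warnings;
use List::Util qw(sum max);

# One aggregate score per protein from its list of site scores.
sub protein_score {
    my ($scores, $scheme, $cutoff) = @_;
    return sum(@$scores)            if $scheme eq 'sum';
    return max(@$scores)            if $scheme eq 'max';
    return sum(@$scores) / @$scores if $scheme eq 'mean';
    # thresholded sum: only sites scoring above the cutoff contribute
    return sum(0, grep { $_ > $cutoff } @$scores)
        if $scheme eq 'sum_cutoff';
    die "unknown scheme: $scheme";
}

# AROC as the Mann-Whitney statistic: the fraction of positive/negative
# pairs in which the positive protein outscores the negative one.
sub aroc {
    my ($score_of, $known) = @_;   # hashrefs: id => score, id => 1
    my (@pos, @neg);
    for my $id (keys %$score_of) {
        push @{ $known->{$id} ? \@pos : \@neg }, $score_of->{$id};
    }
    my $wins = 0;
    for my $p (@pos) {
        $wins += $p > $_ ? 1 : $p == $_ ? 0.5 : 0 for @neg;
    }
    return $wins / (@pos * @neg);
}

# e.g.  my %final = map { $_ => protein_score($site_scores{$_}, 'sum') }
#                   keys %site_scores;
#       printf "AROC = %.3f\n", aroc(\%final, \%known);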


In summary, the sum of all phospho-site scores is the best way I have found so far to predict which proteins are phospho-regulated. My interpretation is that phospho-regulated proteins tend to be multiply phosphorylated and/or regulated by multiple kinases, so the maximum site score does not work as well as the sum. As a side note, although there are abundance biases in mass-spec data (the source of most of the phospho-data), protein abundance is a very poor predictor of phospho-regulation (AROC=0.55).

Keeping only the putative sites outside ordered protein segments (as defined by a secondary structure predictor) did not improve the predictions as I expected it would, but I should try a few dedicated disorder predictors.

Ideas for improvements are welcome, in particular sequence-based methods. I would also like to avoid comparative genomics for now.

Monday, April 14, 2008

Life Sciences Virtual Conference and Expo

IBM Deep Computing will hold a 2-day virtual conference on Innovations in Drug Discovery and Development (16th and 17th of April 2008). The talks will be recorded and available for playback to those who register. The focus of the talks will be on the impact of high-performance computing on life-science research. The current list of talks:
  • Dr. Paul Matsudaira, Director Whitehead Institute Professor of Biology and Bioengineering, MIT : Advanced Imaging and Informatics Methods for Complex Life Sciences Problems
  • Professor Jan-Eric Litton, Director of Informatics, Karolinska Institute - Biobanking : The Challenge of Infrastructure for Large Scale Population Studies
  • Dr. Joel Saltz, Professor and Chair, Department of BioMedical Informatics, Ohio State University : The Cancer Biomedical Informatics Grid (caBIG™)
  • Professor Peter J. Hunter, University of Auckland, Bioengineering Institute : Innovation in biological system simulations
  • Dr. Ajay Royyuru, IBM Research, Computational Biology at IBM : Update on the IBM Genealogy Project co-sponsored with National Geographic
  • Dr. Michael Hehenberger, Solutions Executive, Global Life Sciences : IT Architectures and Solutions for Imaging Biomarkers

Tuesday, April 08, 2008

Structure based prediction of SH2 targets

One of the last few things I worked on during my PhD is now available in PLoS Comp Bio. It is about the structure-based prediction of binding of SH2 domains to phospho-peptide targets.

The SH2 domain (Src homology domain 2) is a small domain of around 100 amino acids with a strong preference for binding peptides that have phosphorylated tyrosines. The selectivity of each domain is typically further restricted by variable surfaces near the phospho-tyrosine binding pocket. See figure below:

The binding preference of each domain can be experimentally determined using, for example, peptide library screening, phage display or protein arrays. Alternatively, we should be able to analyze the increasing amount of structural information and predict the binding specificity of peptide-binding domains.
We tried to show here that, given a structure of an SH2 domain in complex with a peptide, it is possible to predict the binding specificity of the domain. It is also possible, to some extent, to predict how mutations in these domains might affect their binding preferences. Finally, combining the specificity predictions with known human phospho-sites allows for very reasonable predictions of in vivo SH2-target interactions.
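Schematically, that last step might look like the Perl sketch below: score each known phosphosite window with a position-specific matrix derived from the structure-based energies, then rank the candidates. The matrix and site list are invented placeholders, not the paper's actual pipeline:

use strict;
use warnings;

# Rank known phosphosites as candidate SH2 targets using a
# position-specific scoring matrix (PSSM).
my %pssm;          # $pssm{position}{residue} = score, positions -3..+3
my @phosphosites;  # [protein_id, 7-residue window centred on the pTyr]

sub peptide_score {
    my ($pep) = @_;
    my @aa = split //, $pep;           # one letter per window position
    my $s  = 0;
    $s += $pssm{ $_ - 3 }{ $aa[$_] } || 0 for 0 .. $#aa;
    return $s;
}

# rank all known sites by predicted affinity for this SH2 domain
my @ranked = sort { peptide_score($b->[1]) <=> peptide_score($a->[1]) }
             @phosphosites;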

The obvious limitation here is that we need to start with a structure of the domain. We know from some unpublished work that, for families with good structural coverage, homology models can produce specificity predictions that are as accurate as those from X-ray structures. The other limitation is that, given the lack of dynamics, a single conformation of the interaction is modeled, while flexibility should in part help determine the binding specificity. One possible solution to this problem, which we have used with some success, is to model different peptide conformations for each binding domain.

I should make clear that, although I think there is an improvement over previous work, there is already a considerable amount of research on this topic that we tried to cite in the introduction and discussion. I would say that some of the best previous work on structure-based predictions of domain-peptide interactions has come from Wei Wang's lab (see for example McLaughlin et al. or Hou et al.).

This manuscript was the first (and only, so far) that I collaborated on with Google Docs. It worked well and I recommend it to anyone who needs to co-write a manuscript with other people. It saves a lot of emails and annotations on top of annotations.

Friday, February 22, 2008

Call for Bio::Blogs#19

Duncan Hull has volunteered to host the next issue of Bio::Blogs (a bioinformatics-related monthly blog journal). It will be out at the beginning of March on the O'Really? blog. The suggested theme for this month is the relationship between biology and engineering, inspired by the interview published on Edge.org, "Engineering and Biology": A Talk with Drew Endy. Anyone can send links for this issue on this topic, as well as other interesting bioinformatics posts, to bioblogs at gmail.com.
We could also try to format it automatically using FeedJournal, as suggested by Neil.

Saturday, November 10, 2007

Predicting functional association using mRNA localization

About a month ago, Lécuyer and colleagues published a paper in Cell describing an extensive study of mRNA localization in Drosophila embryos during development. The main conclusion was that a very large fraction (71%) of the genes they analyzed (2314) showed localization patterns during some stage of embryonic development. This includes both embryonic and sub-cellular localizations.

A lot of information was gathered in this analysis and it should serve as a resource for further studies. There is information for different developmental stages, so it should also be possible to look at the dynamics of mRNA localization. Another application of this data would be as an information source to predict functional associations between genes.

Protein localization information has been used in the past to predict protein-protein interactions (both physical and genetic). Typically this is done by integrating localization with other data sources in a probabilistic analysis [Jansen R et al. 2003, Rhodes DR et al. 2005, Zhong W & Sternberg PW, 2006].

To test whether mRNA localization could be used in the same way, I took from this website the localization information gathered in the Cell paper, plus the available genetic and protein interaction information for D. melanogaster genes/proteins (obtainable, for example, from BioGRID among others). For this analysis I grouped physical and genetic interactions together to have a larger number of interactions to test. The underlying assumption is that both should imply some functional association of the gene pair.

The very first simple test is to look at all pairs of genes (with available localization information) and ask how the likelihood that they interact depends on the number of cases in which they were found to co-localize (see figure below). I discarded any gene for which no interaction was known.
As seen in the figure, there is a significant correlation (r=0.63, N=21, p<0.01) between the likelihood of interaction and the number of co-localizations observed for the pair. At this point I did not exclude any localization term, but since the images were annotated using a hierarchical structure, these terms are in some cases very broad.
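For concreteness, the counting behind this test could be sketched in Perl roughly as follows; %loc_terms and %interacts are invented placeholders for the parsed localization annotations and the BioGRID interaction pairs:

use strict;
use warnings;

# For every gene pair, count shared localization terms and record
# whether the pair is a known (genetic or physical) interaction.
# %loc_terms: gene => {term => 1}; %interacts: "geneA geneB" => 1
# (sorted IDs). To mimic "localizations 30" below, first drop terms
# annotated to more than 30% of genes.
my (%loc_terms, %interacts);   # assumed populated elsewhere

my (%pairs_in_bin, %hits_in_bin);
my @genes = sort keys %loc_terms;
for my $i (0 .. $#genes - 1) {
    for my $j ($i + 1 .. $#genes) {
        my ($g1, $g2) = @genes[$i, $j];
        my $shared = grep { $loc_terms{$g2}{$_} } keys %{ $loc_terms{$g1} };
        $pairs_in_bin{$shared}++;
        $hits_in_bin{$shared}++ if $interacts{"$g1 $g2"};
    }
}

# fraction of interacting pairs as a function of co-localization count
for my $n (sort { $a <=> $b } keys %pairs_in_bin) {
    printf "%2d co-localizations: %6d pairs, %.4f interact\n",
        $n, $pairs_in_bin{$n}, ($hits_in_bin{$n} || 0) / $pairs_in_bin{$n};
}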

More specific patterns should be more informative, so I removed very broad terms by checking the fraction of genes annotated with each term. I created two groups of narrower scope: one excluding all terms annotated to more than 50% of genes (denominated "localizations 50") and a second excluding all terms annotated to more than 30% of genes ("localizations 30"). In the figure below I binned gene pairs according to the number of co-localizations observed in the three groups of localization terms and, for each bin, calculated the fraction that interact.

As expected, the more specific mRNA localization terms (localizations 30) are more informative for predicting functional association, since fewer terms are required to obtain the same or a higher likelihood of interaction. The increased likelihood does not come at the cost of fewer annotated pairs. For example, there is a similar number of gene pairs in bin "10-14" of the more specific localization terms (localizations 30) as in bin ">20" for all localization terms (see figure below).
It is important to keep in mind that mRNA localization alone is a very poor predictor of genetic or physical interaction. I took the number of co-localizations of each pair (using the terms in "localizations 30") and plotted a ROC curve to determine the area under it (AROC or AUC). The AROC value was 0.54, with a 95% confidence lower bound of 0.52 and a p-value of 6E-7 for the true area being 0.5. So it is not random (that would be 0.5), but by itself it is a very poor predictor.

In summary:
1) the degree of mRNA co-localization correlates significantly with the likelihood of genetic or physical interaction;
2) less ubiquitous mRNA localization patterns are more informative for interaction prediction;
3) the degree of mRNA co-localization is by itself a poor predictor of interaction, but it should be possible to use this information to improve statistical methods that predict genetic/physical interactions.

This was a quick analysis, not thoroughly tested, and just meant to confirm that mRNA localization should be useful for genetic/physical interaction prediction. I am not going to pursue this, but for anyone interested I suggest it could be worth seeing which terms have more predictive power, with the idea of integrating this information with other data sources or possibly directing future localization studies. Perhaps there is little point in tracking different developmental stages, or maybe embryonic localization patterns are not as informative as sub-cellular localizations for predicting functional association.


Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, Chung S, Emili A, Snyder M, Greenblatt JF, Gerstein M. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003 Oct 17;302(5644):449-53.
Rhodes DR, Tomlins SA, Varambally S, Mahavisno V, Barrette T, Kalyana-Sundaram S, Ghosh D, Pandey A, Chinnaiyan AM. Probabilistic model of the human protein-protein interaction network. Nat Biotechnol. 2005 Aug;23(8):951-9.
Zhong W, Sternberg PW. Genome-wide prediction of C. elegans genetic interactions. Science. 2006 Mar 10;311(5766):1481-4.

Thursday, November 01, 2007

Bio::Blogs #16, The one with a Halloween theme

The #16 edition of Bio::Blogs is now available at Freelancing Science. Jump over there for a summary of what has been going on this month in the bioinformatics-related blogs, if not for anything else then just to have a look at the pumpkin. Thanks again to everyone who participated.
Paulo Nuin from Blind.Scientist has volunteered to host the #17 edition, scheduled to appear as usual on the 1st of December.

Saturday, September 08, 2007

The Biology of Modular Protein Domains

From tomorrow I will be in Austria for a small conference on the biology of protein domains. I might post some short notes about the meeting in the next few days. I'll get a chance to present some of the things I have been working on regarding the prediction of domain-peptide interactions from structural data.

Here is one of these modular protein domains, an SH3 domain, in complex with a peptide:
The very short summary is that it is possible to take the structure of one of these domains in complex with a peptide (e.g. SH3, phospho-binding domains, kinases, etc) and predict its binding specificity. To some extent it is also possible to take a sequence, obtain a model (depending on structural coverage) and determine its specificity. I'll talk more about the details (hopefully) soon.

Saturday, August 11, 2007

Quotes

Another interesting SciView interview is available at Blind.Scientist. Here is one quote from Alexei Drummond (Chief Scientist of Biomatters) that I liked:

"I think that bioinformatics has to become a field where people without programming skills can contribute substantially. I would argue that all of the programmers in bioinformatics should be working very hard to program themselves out of their jobs (and into more satisfying jobs)."


Science advances quickly and so do the computational needs. Can we ever do away with these one-off scripts if there are always new data types and innovative ways of analyzing them? I guess the ideas around workflows and the like could lead to very visually oriented programming that anyone can do.

Tuesday, August 07, 2007

Two Three new bioinformatics-related blogs

A quick post to link out to two new bioinformatics-related blogs:

Freelancing science (by Paweł Szczęsny)
Open.nfo (by Keith)

I will be happy the day there are too many to track :).

Updated: It could be the official month of "start your own bioinformatics blog". The bio.struct blog is the third one so far.

Friday, August 03, 2007

Bio::Blogs#13

A great edition of the monthly Bio::Blogs is up at Neil's blog. This month there are plenty of tutorials and a round-up of blog coverage of the ISMB/ECCB 2007 conference.

A PDF version of the editorial and highlighted posts, for offline reading, is here and here (Box.net copy).

If someone wants to have a go at editing future editions of Bio::Blogs, let me know.

Speaking of community projects, the list of web servers published in the last NAR web-server issue is on this Nodalpoint wiki page. If you try one of these services, spend a minute noting down whether it was available, whether it worked well, etc.

Friday, July 27, 2007

Google code for educators

(via the Google Blog) Google has started a website gathering teaching materials for CS educators, covering some of the most recent technologies. Right now it has material on AJAX programming, distributed systems and web security, including some video lectures and presentations. There is already some material on parallel programming (mostly related to their MapReduce) that should be of use in bioinformatics.

On a related topic, Tiago has started a multipart series on his blog about "Bioinformatics, multi-core CPUs and grid computing". The first and second parts are already available.

Friday, June 01, 2007

Bio::Blogs #11

The 11th edition of Bio::Blogs is online at Nodalpoint. We tried to do something different this time: Michael Barton volunteered to host a special section dedicated to tips and tricks for bioinformatics, hosted separately at Bioinformatics Zen. Because there were so many posts this month about personalized medicine, there is also a special section on that.

There are three separate PDFs for this edition: 1) the main PDF can be found here; 2) the one on personalized medicine can be downloaded here; 3) the one for tips and tricks is available from Bioinformatics Zen. Michael did a great job with this special section, with a very cool design.

Friday, May 11, 2007

Science Foo Camp 2007 and other links

Nature is organizing another Science Foo Camp. A couple of bloggers have already been invited (Jean-Claude Bradley, Pierre, PZ Myers, Peter MR, Andrew Walkingshaw). There is a "Going to Camp" group on Nature Network, and the scifoo tag on Connotea to explore if you want to dig deeper.

I was there last year and I can only thank Timo again for inviting me, and encourage everyone who has been invited to go. It was a chance to get to know fascinating people and hear about new ideas. On the off chance that any of the organizers are reading this: please try to get people from Freebase (or a similar company) together with the people involved in biological standards (like Nicolas Le Novère).

A quick hello to two new bioinformatics-related blogs: Beta Science by Morgan Langille and Suicyte Notes.

(via Pierre, Neil and Nautilus) In a correspondence letter published in Nature, Mark Gerstein, Michael Seringhaus and Stanley Fields discuss the implementation of structured, machine-readable abstracts. As I mentioned in a comment on Neil's post, this is one of those ideas that has been around and that most people would agree with, but somehow it never gets implemented. In this case it would have to start on the publishers' side. As we have seen with other small technical implementations, like RSS feeds, once a major publisher sets this up others will follow.

Sunday, April 01, 2007

Bio::Blogs #9 - small update

Welcome to the ninth edition of the bioinformatics blog journal Bio::Blogs, posted online on the 1st of April 2007. The archive of previous months can be found at bioblogs.wordpress.com.


Today is an exciting day for bioinformatics and open science in general. I am happy to report on an ongoing project at Nature that has been under wraps for quite a long time. It is called Nature Sherlock and it promises to make the dream of a rich semantic web for scientists a reality. The service is still in closed beta, but you can have a look at http://sherlock.nature.com/ to see that it does exist, and you might get a sense from the name of what it might do. I have been allowed to use Sherlock for some time; according to the FAQ on the main website it has been co-developed by Google and Nature, and it is one of the results of meetings that took place during the 1st Science Foo Camp (also co-organized by Google and Nature). Access to the main site requires a beta-tester password, but I can say that Sherlock looks like a very promising tool. Sherlock is the code name for the main bot that is set to crawl text and databases from willing providers (current partners include Nature, EBI, NCBI and PubMed Central) to produce semantic-web objects that abide by well-established standards in biology. Some of the results, especially regarding the text mining, are of lower accuracy (details can be found on the help pages), but overall it looks like an amazing tool. I hope they get this out soon.

In this month's Bio::Blogs I have included many posts that were not submitted but that I thought were interesting and worth mentioning. This might make for a more biased selection, but it makes up for the current low number of submissions. As in the last edition, the blog posts mentioned have been converted to PDF for anyone interested in downloading and reading Bio::Blogs offline. There are many interesting comments on the blog posts that I did not include in the PDF, so if you read this offline and find something interesting, go online for the discussion.


News and Views
This month saw the announcement of the main findings from the Global Ocean Sampling Expedition. Several articles published in PLoS Biology detail the main conclusions of Craig Venter's effort to sequence microbial diversity. Both Konrad and Roland Krause blogged comments on this metagenomics initiative.

Articles
I will start this section by highlighting Stew's post on software availability. Testing around 111 resources taken from the Application Notes published in the March issues of Bioinformatics shows that between 11% and 17% (depending on the year) of these resources are no longer available. Even considering that bioinformatics research moves at a very fast pace and that some of these resources might be outdated by now, there is no reason why they should not still be available (as was required for publication).
RPG from Evolgen submitted a post entitled "I Got Your Distribution Right Here", where he analyzes the variation of genome sizes among birds. He concludes by noting that the variability of genome sizes in Aves is smaller than in Squamata (lizards and snakes) and Testudines (turtles, tortoises, and terrapins). An interesting question is why birds have a narrower distribution of genome sizes. Is there a selection pressure?
Barry Mahfood submitted a blog post asking the question: "Is Death Really Necessary?". Looking at human life expectancy in different periods of time, and thinking about what might determine the self, Barry argues that eternal life is achievable in the very near future.

Semantic web/Mash-up/web-services series
This month there were several blog posts about mash-ups, web services and the semantic web. All of these relate to the ease of accessing data online and of combining data and services to produce useful and interesting outcomes.
Freebase has great potential to bring some of the semantic-web concept closer to reality. Deepak sent in a link to his description of Freebase and the potential usefulness of the site for scientists. I had the fortune of receiving an invitation to test the service, but I have not yet had time to fully explore it.
I hope you saw through my April Fools' introduction to Nature Sherlock. Even though Nature Sherlock does not really exist (it is a service to look for similar articles), it is clear that the Nature Publishing Group is the most active science publisher on the web. Tony Hammond at Nascent gave a brief description of some of the tools Nature is working on in a recent blog post.
While we wait for web services and data to become easier to work with, we can speed up the process by using web-scraping technologies like openKapow (described by me) or Dapper (explained by Andrew Perry). These tools can help you create an interface to services that do not provide APIs.

Tips and Tricks

I will end this month's edition with a collection of tips for bioinformatics. Bosco wrote an interesting post, "Notes to a young computational biologist", where he collects a series of useful tips for anyone working in bioinformatics. There is a long thread of comments with other people's ideas, making it a useful resource. On a similar note, Keith Robison wrote about error messages and the typical traps that can take a long time to debug if we are not familiar with them. (Update) In reply to a recent editorial in PLoS Computational Biology, Chris sent in some tips for collaborative work.
From Neil Saunders's blog comes a tutorial on setting up a reference-management system using LaTeX. I work mostly on a Windows machine and I am happy with Word plus EndNote, but I will keep this in mind if I ever change to a Linux setup.
Finally, a submission from Suresh Kumar on "Designing primers through a computational approach". It is a nice summary of things to keep in mind for primer design, along with useful links to tools and websites that might come in handy.

Update - Just to be sure: Nature Sherlock is as real as the new Google TiSP wifi service.

Saturday, March 31, 2007

Bio::Blogs #9 call for submission

The 9th edition of Bio::Blogs will be posted here tomorrow. I will go around the usual blogs looking for interesting posts to make a round-up of what happened during the month. I will again try to make an offline version including the blog posts authorized by their authors. Feel free to submit links to bioinformatics-related blog posts you find interesting, from your blog or any other, during today and tomorrow. Submissions can be sent by email to bioblogs at gmail or in a comment to this post.

Tuesday, March 27, 2007

Google Base API

While we wait for Freebase to give us a chance to preview their service, we can go ahead and try something that is probably very similar in spirit. Google Base has been up for a long time, but only recently has it been opened up for programmatic access (see the Google Base API). There are some restrictions, but in the end we can think of it as a free online database that we can use remotely.

How easy is it to use? If you like Java, C# or PHP you are in luck, because there are client libraries to help you get started.

I also found this Google Base code on CPAN and decided to give it a try. After reading some of the information on the API website and having a look at the code, it comes down to 3 main tasks: 1) authentication; 2) insert/delete/update; 3) query.

Having installed the above-mentioned CPAN module, the authentication step is easy:

use WWW::Google::API::Base;

my $api_user = "username";   # Google user name
my $api_pass = "pass";       # Google password
my $api_key  = "API_KEY";    # any program using the API must get a key

my $gbase = WWW::Google::API::Base->new(
    { auth_type => 'ProgrammaticLogin',
      api_key   => $api_key,
      api_user  => $api_user,
      api_pass  => $api_pass },
    {},
);

That's it: $gbase is now authorized to use that Google account on GBase.

Inserting something useful into the database requires a bit more effort. The CPAN module comes with an example of how to insert recipes. I am not that great a cook, so I created a new function in the module's Base.pm. I called it insertSequence:

sub insertSequence {
    my $self       = shift;
    my $id         = shift;   # sequence identifier (entry title)
    my $seq_string = shift;   # the sequence itself (entry content)
    my $seq_type   = shift;   # e.g. "protein"
    my $spp        = shift;   # species
    my $l          = shift;   # sequence length

    $self->client->ua->default_header('content-type',
                                      'application/atom+xml');

    # Build the Atom entry; note that the XML declaration must be the
    # very first thing in the payload.
    my $xml = <<EOF;
<?xml version='1.0'?>
<entry xmlns='http://www.w3.org/2005/Atom'
       xmlns:g='http://base.google.com/ns/1.0'
       xmlns:c='http://base.google.com/cns/1.0'>
  <author>
    <name>API insert</name>
  </author>
  <category scheme='http://www.google.com/type' term='googlebase.item'/>
  <title type='text'>$id</title>
  <content type='text'>$seq_string</content>
  <g:item_type>sequence</g:item_type>
  <g:spp type='text'>$spp</g:spp>
  <g:length type='int'>$l</g:length>
  <g:sequence_type type='text'>$seq_type</g:sequence_type>
</entry>
EOF

    my $insert_request = HTTP::Request->new(
        POST => 'http://www.google.com/base/feeds/items',
        $self->client->ua->default_headers,
        $xml);

    my $response;
    eval { $response = $self->client->do($insert_request) };
    die $@ if $@;

    # parse the returned Atom entry (echoes the item as stored)
    my $atom  = $response->content;
    my $entry = XML::Atom::Entry->new(\$atom);
    return $entry;
}


The function takes in information about the sequence (the ID, the sequence string, type, species and length) and creates an Atom XML entry to submit to Google Base according to the specifications they provide on the website. In this case it will be an entry of type "sequence" (which is not a standard GBase type). The only snag was that I could not get the sequence string into an item attribute of type text, because there seems to be a size limit on these. This is why the sequence goes in the description (the content element).

OK, with this new function, adding a sequence to the database is easy. After the authentication code above, and once the variables have been populated from somewhere, we just need:

$gbase->insertSequence($id, $seq_str, "protein", "s.cerevisiae", $l);

According to the Google API FAQ there is a limit of 5 queries per second. In about 25 lines we can get a FASTA-to-GBase pipe. Here is an example of a protein sequence in GBase (it might get deleted in time).
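For concreteness, such a pipe might look something like the sketch below, continuing from the snippets above ($gbase already authenticated). The file name and the ID handling are invented for illustration:

use Time::HiRes qw(usleep);

local $/ = "\n>";                     # read the FASTA file record by record
open my $fh, '<', 'proteins.fasta' or die $!;
while (my $record = <$fh>) {
    chomp $record;                    # strip the "\n>" record separator
    $record =~ s/^>//;                # the first record keeps its '>'
    next unless $record =~ /\S/;
    my ($header, @lines) = split /\n/, $record;
    my ($id) = split /\s+/, $header;  # first word of the header line
    my $seq  = join '', @lines;
    $seq =~ s/\s+//g;
    $gbase->insertSequence($id, $seq, "protein", "s.cerevisiae",
                           length $seq);
    usleep 250_000;                   # stay under 5 queries per second
}
close $fh;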

Now, I guess one of the interesting parts is that we can use Google to filter results using the Google Base query language. The CPAN module above already has a query tool. It is still very simple, but it gets the results of a search into an Atom object. Here is a query that returns items from S. cerevisiae with length between 350 and 400:
my $query = 'http://www.google.com/base/feeds/items'
          . '?bq=[spp:s.cerevisiae][length(int) : 350..400]';

my $select_inserted_entry;
eval {
    $select_inserted_entry = $gbase->select($query);
    print $select_inserted_entry->as_xml;   # the output in XML format
};
if ($@) {
    my $e = $@;
    die $e->status_line;   # HTTP::Response
}
I am not sure yet whether these items are available for other users to query, nor what the code to do that would look like; I think this example only gets the items in my account. This was as far as I got. The last step would be to have an XML parser turn the returned Atom object into something more useful.
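For what it's worth, that last step might look like the sketch below, continuing from the query snippet above and assuming the result is an Atom feed (XML::Atom, which the module already uses for entries, also provides a feed class):

use XML::Atom::Feed;

# Turn the Atom XML returned by the query into id/sequence pairs;
# $select_inserted_entry holds the result from the snippet above.
my $atom_xml = $select_inserted_entry->as_xml;
my $feed     = XML::Atom::Feed->new(\$atom_xml);
for my $entry ($feed->entries) {
    # the title carries the sequence ID, the content the sequence itself
    print $entry->title, "\t", $entry->content->body, "\n";
}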

Friday, March 16, 2007

Bioinformatic web scraping/mash-ups made easy with kapow

In bioinformatics it is common to need to use a web service many times over. Ideally, whoever built the web service provided a way to query the site automatically via an API. Unfortunately, Lincoln Stein's dream of a bioinformatics nation is still not a reality. When there is no programmatic interface available and the underlying database is not downloadable, it is usually necessary to write some code to scrape the content from the web service.

In comes openKapow, a free tool to (easily) build and publish robots that turn any website into a real web service. To illustrate how easy it is to use, I built a Kapow robot that fetches, for any human gene ID, a list of orthologs (with species and IDs). I downloaded the robot maker and tried it on the Ensembl database. To be fair, Ensembl is probably one of the best bioinformatics resources, with an available API and easy data-mining tools like BioMart; this was just to give an example.

You start the robot by defining the initial webpage and the service's inputs and outputs. I decided to create a REST service that takes an Ensembl gene ID and outputs pairs of gene ID/species name. The robot-maker application is intuitive to use for anyone with moderate experience of HTML. The robot is created by setting up the steps that transform the input into the desired output. For example, we define where the input should be entered by clicking on the search box:
From here, a set of loops and conditional statements can be included to get the list of orthologs:

We can run through the robot's steps with a test input and debug it graphically. Once the robot is running, it is possible to host it on the openKapow web page, apparently also free of charge. Here is the link for this simple robot (the link might go down in the future). It is of course also possible to build new robots that use robots already published on openKapow. This example uses a single webpage, but it would be more interesting to use this to mash up different services together.
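Once hosted, the robot behaves like any other REST service, so calling it from Perl might look like this sketch (the robot URL and parameter name are hypothetical; use the ones shown on the robot's openKapow page):

use strict;
use warnings;
use LWP::UserAgent;

# Call a published openKapow REST robot like any other web service.
my $ua  = LWP::UserAgent->new;
my $url = 'http://service.openkapow.com/USER/orthologs.rest'
        . '?geneid=ENSG00000139618';
my $res = $ua->get($url);
die $res->status_line unless $res->is_success;
print $res->content;   # the robot's output: gene ID / species pairs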