Mathematical and computational modeling is (I hope) a well accepted requirement in biology. These tools allow us to formalize and study systems of a complexity that is hard to grasp by intuition alone. There have been great advances in our capacity to model different biological systems, from single components to cellular functions and tissues. Many of these efforts have been ongoing separately, each one dealing with a particular layer of abstraction (atoms, interactions, cells, etc), and some of them are now reaching a level of accuracy that rivals some experimental methods. I will try to summarize, in a series of blog posts, the main advances behind some of these models and examples of integration between them, with particular emphasis on proteins and cellular networks. I invite others to post about models in their areas of interest, to be collected for a review.
From sequence to fold
Once produced, RNA and proteins adopt structures that have different functional roles. In principle, all the information required to determine the structure is in the DNA sequence that encodes the RNA/protein. Although there has been some success in the prediction of RNA structure from sequence, ab-initio protein folding remains a difficult challenge (see the review by R. Das and D. Baker). A more pragmatic approach has been to use the increasing structural and sequence data made available in public databases to develop sequence-based models for protein domains. In this way, for well studied protein folds, it is possible to ask the reverse question: what sequences are likely to fold this way?
(To be expanded in a future post, volunteers welcome)
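Just to make the "reverse question" more concrete, here is a toy sketch of my own (not any particular published method): given an ungapped alignment of sequences known to adopt a fold, build a position-specific scoring matrix and score new sequences against it. Profile HMMs, as used by Pfam/HMMER, are the serious version of this idea; the alignment and query sequences below are made up.

```python
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"
BACKGROUND = 1.0 / len(AA)  # uniform background frequencies, for simplicity

def build_pssm(alignment, pseudocount=1.0):
    """Log-odds position-specific scoring matrix from an ungapped alignment."""
    length = len(alignment[0])
    pssm = []
    for i in range(length):
        counts = Counter(seq[i] for seq in alignment)
        total = len(alignment) + pseudocount * len(AA)
        pssm.append({aa: math.log(((counts[aa] + pseudocount) / total) / BACKGROUND)
                     for aa in AA})
    return pssm

def score(pssm, seq):
    """Sum of per-position log-odds; higher means the sequence looks more fold-like."""
    return sum(column[aa] for column, aa in zip(pssm, seq))

# Toy alignment of a hypothetical domain family
alignment = ["ACDKW", "ACEKW", "SCDKW", "ACDRW"]
pssm = build_pssm(alignment)
print(score(pssm, "ACDKW"), score(pssm, "WWWWW"))  # family-like vs unrelated sequence
```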
Protein binding models
I am particularly interested in how proteins interact with other components (mainly other proteins and DNA) and in trying to model these interactions from sequence to function. I will leave protein-compound interactions and metabolic networks for more knowledgeable people.
As mentioned above, even without a complete ab-initio folding model it is possible to predict the structure of some sequences, or to determine from comparative genomics analysis to what protein/domain family a sequence belongs. This by itself might not be very informative from a cellular perspective. We need to know how cellular components interact and how these interconnected components create useful functions in a cell.
Docking
Trying to understand and predict how two proteins interact in a complex has been a challenge of structural computational biology for more than two decades. The initial attempt to understand protein interactions from computational analysis of structural data (what is known today as docking) was published by Wodak and Janin in 1978. In the decades that have followed, the complexity and accuracy of docking methods have steadily increased, but the field still faces difficult hurdles (see reviews by Bonvin et al. 2006 and Gray 2006). Docking methods start from the knowledge that two proteins interact and aim at predicting the most likely binding interfaces and the conformation of these proteins in a 3D model of the complex. Ultimately, docking approaches might one day also predict new interactions for a protein by exhaustively docking all other proteins in the proteome of the species, but at the moment this is still not feasible.
Interaction types
It should still be possible to use the 3D structures of protein complexes to understand at least particular interaction types. In a recent study, Russell and Aloy have shown that it is possible to transfer structural information on protein-protein interactions by homology to other proteins with similar sequences (Aloy and Russell 2002). In this approach the homologous proteins are aligned to the sequences of the proteins in the 3D complex structure. Mutations in the homologous sequences are evaluated with an empirical potential to determine the likelihood of binding. A similar approach was described soon after by Lu and colleagues, and both have been applied in large-scale genomic studies (Aloy and Russell 2003; Lu et al. 2003). As with any other functional annotation by homology, this method is limited by how much the target proteins have diverged from the templates. Aloy and Russell estimated that interaction modeling is reliable above 30% sequence identity (Aloy et al. 2003). Substitutions can also be evaluated with more sophisticated energy potentials after a homology model of the interface under study is created. Examples of tools that can be used to evaluate the impact of mutations on binding propensity include Rosetta and FoldX.
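To make the homology-transfer idea concrete, here is a tiny sketch in the spirit of the approach described above (it is my own toy, not the actual implementation or the empirical potential used by Aloy and Russell): interface contacts taken from a template complex are re-scored with a pairwise contact potential after mapping the aligned residues of the homologous pair onto them. The contact list and potential values are invented.

```python
# Hypothetical contact potential: favourable (negative) scores for a few residue pairs
CONTACT_POTENTIAL = {
    ("D", "K"): -1.5,  # salt bridge-like
    ("L", "L"): -0.8,  # hydrophobic packing
    ("E", "R"): -1.2,
}

def interface_score(contacts, seq_a, seq_b, penalty=0.2):
    """Sum a pairwise contact potential over template interface positions.

    contacts: list of (i, j) index pairs from the template complex, i in protein A
    and j in protein B. seq_a/seq_b: homologous sequences already aligned to the
    template (same numbering, no gaps, for simplicity).
    """
    total = 0.0
    for i, j in contacts:
        pair = tuple(sorted((seq_a[i], seq_b[j])))
        total += CONTACT_POTENTIAL.get(pair, penalty)  # mild penalty for unseen pairs
    return total  # more negative = more likely to preserve the interaction

template_contacts = [(0, 2), (3, 1)]
print(interface_score(template_contacts, "DALLK", "QLDER"))
```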
Although the methods described above were mostly developed for domain-domain protein interactions, similar approaches have been developed for protein-peptide interactions (see for example McLaughlin et al. 2006) and protein-DNA interactions (see for example Kaplan et al. 2005).
In summary, the accumulation of protein-protein and protein-DNA interaction information, along with structures of complexes and the ever-increasing coverage of sequence space, allows us to develop models that describe binding for some domain families. In a future blog post I will try to review the different domain families that are well covered by these binding models.
Previous mini-reviews
Protein sequence evolution
Saturday, June 28, 2008
Thursday, June 12, 2008
@World
(caution, fiction ahead)
I wake up in the middle of the night startled by some noise. Pulse racing, I try to focus my attention outwards. Something breaking, glass shattering? Is someone out there? I reach out with my senses but an awkward feeling nags at me, bubbling up to my consciousness. I try hard to focus; it is coming from outside the room, someone is inside my house. I close my eyes but vertigo takes over and weightlessness empowers me. I am in the living room cleaning the floor, picking up a broken glass. The nagging feeling finally assaults me fully. I am moving but I am not in control. Panic rises quickly as I watch, helpless, the simple and quiet actions of someone else. I stop picking up glass and I feel curious, only it is not exactly me, the feeling is there beside me.
- Hi, who are you ?
The voice catches me by surprise and my fear goes beyond rational control. All I can think of is to escape, to go away from here. For a second time I find myself floating as if searching for a way out. When I open my eyes again I am by the beach and I breathe a sigh of relief. The constant sound of the waves calms me down for a few seconds until my eyes start drifting to the side. No, stay there, I am in control! I look into the eyes of a total stranger that smiles back at me in recognition. Two voices ask me if I am enjoying the view and I can only scream back in confusion.
I wake up in the middle of the night startled by some noise. I immediately flex my hands in front of my eyes to make sure it was nothing but a nightmare, trying hard to calm down. What a dream. I get up and check on the noise coming from the living room, realizing that it was just the storm outside. Feeling better, I fire up my laptop and grab a glass of water from the kitchen. I open twitter and type away:
- I had the strangest dream !(cursor blinking) Our senses were all connected(enter)
I get up to open the window drinking another sip of water. After a couple of steps I feel a jabbing headache forcing me to stop and bright spots of light blur my vision. I close my eyes in pain and the voices of some unseen crowd thunder in my ears:
- I had the same dream - they all say in unison
The sound of glass shattering on the floor is the last thing I remember before collapsing.
I wake up in the middle of the night startled by some noise (...)
(Twistori was the main motivation for this post)
Previous fiction:
The Fortune Cookie Genome
Tuesday, June 10, 2008
Why does FriendFeed work ?
I have been using FriendFeed for a while and I have to say that it works surprisingly well. It is hard to define what FriendFeed is so the only real way of understanding it is to try it for a while.
One common way to define FF would be as a life-stream aggregator. Each user defines a set of feeds (blog, Flickr, Twitter, bookmarks, comments, etc), providing all other users with a single view of all the online activities of that user. Anyone can select how much to share (even nothing at all) and subscribe to a number of other users. Each item (photo, blog post, bookmark) can then serve as a spark for discussions. Users can mark items as interesting or comment on them, and this propagates to all the other people that subscribe to you. In addition, you can hide sources if for some reason there is a particular part of a user's activities you don't enjoy. All of this creates a very personalized view of whoever you elect to interact with online.
I still find it striking that there are so many long threads of discussion around items that we share in FriendFeed, sometimes more than on the original site. A couple of examples:
Google code as a science repository (discussion in FF, blog post)
Into the Wonderful (discussion in FF, slideshare site)
Bursty work (discussion in FF, blog post)
Why does it work so well? One possible reason could be that a group of early-adopter scientists happened to get together around this website, creating the required critical mass to start the discussions. Still, most of those commenting were already participating on blogs, so that might not be it. There might be something about the interface: maybe the ease of adding comments, and the fact that comments can be edited, increases participation. Ongoing discussions get bumped higher in the view, so every new comment brings the item back to your attention. In this way you know who saw the item and who is thinking about it. A bit like talking about a movie you saw or a book you read with a bunch of friends.
Anyone interested in the science aspects of it should check out the Life Scientists room with currently around 85 subscribers. Here is an introduction to some of these people, in particular on what they work on. Connecting to other scientists in this way lets you see what are the articles they find interesting and discuss current scientific news. Even maybe start a couple of side-projects for the fun of it.
Monday, June 09, 2008
Evaluation metrics and Pubmed Faceoff
I have recently been reading a lot about evaluation metrics for papers and authors. It started with a blog post in Action Potential (Nature Neuroscience's blog) showing a correlation between the number of downloads of a paper and its citations. From the comments in that blog post I found out about a forum in Nature Network about Citation in Science and also the recently published group of perspectives on "The use and misuse of bibliometric indices in evaluating scholarly performance".
It could have been a coincidence, but Pierre sparked a long discussion in FriendFeed when he suggested it would be nice to be able to sort Pubmed queries by the impact factor of the journal. In reaction to this, Euan set up a very creative interface to Pubmed that he named Pubmed Faceoff. It takes several different factors into account (citations from Scopus, the eigenfactor of the journal, the time the paper was published) and, for each paper returned from a Pubmed query, creates a face that describes the paper. The idea for the visualization is based on Chernoff Faces. It is really a creative idea and I wish Pubmed could spend more resources on coming up with alternative interfaces like this, something like a "labs" section where they could play with ideas or allow others to create interfaces that they would host.
I won't go into the whole debate about evaluation metrics here since there is already a lot of discussion going on in some of those links I mentioned.

Wednesday, May 14, 2008
Prediction of phospho-proteins from sequence
I want to be able to predict what proteins in a proteome are more likely to be regulated by phosphorylation and hopefully use mostly sequence information. This post is a quick note to show what I have tried and maybe get some feedback from people that might have tried this before.
The most straightforward way to predict the phospho-proteins is to use existing phospho-site predictors in some way. I have used the GPS 2.0 predictor on the S. cerevisiae proteome with the medium cutoff and including only Serine/Threonine kinases. The fraction of tyrosine phosphosites in S. cerevisiae is very low, so I decided, for now, not to try to predict tyrosine phosphorylation.
This produces a ranked list of 4E6 putative phosphosites for the roughly 6000 proteins, scored according to the predictor (each site is scored for multiple kinases). My question is how to best make use of these predictions if I mostly want to know what proteins are phosphorylated and not the exact sites. Using a set of known phosphorylated proteins in S. cerevisiae (mostly taken from Expasy) I computed different final scores as a function of all the phospho-site scores (a small code sketch of these aggregations follows the list below):
1) the sum
2) the highest value
3) the average
4) the sum of putative scores if they were above a threshold (4,6,10)
5) the sum of putative phosphosite scores if they were outside ordered protein segments as defined by a secondary structure predictor and above a score threshold
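Here is the small code sketch mentioned above, showing how the per-protein aggregate scores and the AROC evaluation can be computed. The site scores and the "known" set are randomly generated stand-ins, so the printed numbers mean nothing beyond showing the mechanics.

```python
import random

def aggregate(site_scores, method="sum", threshold=None):
    """Collapse the per-site predictor scores of one protein into a single score."""
    if threshold is not None:
        site_scores = [s for s in site_scores if s >= threshold]
    if not site_scores:
        return 0.0
    if method == "sum":
        return sum(site_scores)
    if method == "max":
        return max(site_scores)
    if method == "mean":
        return sum(site_scores) / len(site_scores)
    raise ValueError(method)

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    ranked = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(rank for rank, (_, label) in enumerate(ranked, start=1) if label)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# Randomly generated stand-in data: per-protein site scores and a fake "known" set
random.seed(0)
site_scores = {f"P{i}": [random.expovariate(1.0) for _ in range(random.randint(1, 30))]
               for i in range(200)}
known = {name: int(len(scores) > 15) for name, scores in site_scores.items()}

for method in ("sum", "max", "mean"):
    per_protein = [aggregate(scores, method) for scores in site_scores.values()]
    labels = [known[name] for name in site_scores]
    print(method, round(auroc(per_protein, labels), 3))
```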
The results are summarized with the area under the ROC curve (known phosphoproteins were considered positives and all others negatives):
[Figure: AROC values for the different scoring schemes]
In summary, the sum of all phospho-site scores is the best way that I have found so far to predict what proteins are phospho-regulated. My interpretation is that phospho-regulated proteins tend to be multi-phosphorylated and/or regulated by multiple kinases, so the maximum site score will not work as well as the sum. As a side note, although there are abundance biases in mass-spec data (the source of most of the phospho-data), protein abundance is a very poor predictor of phospho-regulation (AROC=0.55).
Restricting the sum to putative sites outside predicted ordered protein segments (scheme 5) did not improve the predictions as I would have expected, but I should try a few dedicated disorder predictors.
Ideas for improvements are welcome, in particular sequence-based methods. I would also like to avoid comparative genomics for now.
Wednesday, May 07, 2008
Drug-drug interactions and network connectivity
How does the effect of drug-drug combinations relate to the cellular interactions of their targets? Last year, Joseph Lehár and colleagues published a paper in MSB looking into this question.
One way to study the effect of drug combinations on the growth of a bacterium, for example, is to measure the inhibition of growth for all possible combinations of serially diluted doses of two combined drugs, plotting dose-matrices like the ones shown in figure 1 of the paper (adapted here). In Fig 1A the authors show how the combined effect of increasing doses of two drugs inhibits the growth of a methicillin-resistant Staphylococcus aureus strain. Light colors correspond to strong growth inhibition. One observation from this figure is that the two drugs can inhibit the growth of this strain in an additive fashion. The question the authors tried to address in this paper is how much this sort of dose-matrix informs us about the possible interactions of the targets. The drugs could be interacting with the same target, different targets in the same pathway/complex, targets in different pathways both required for growth, etc.
In order to study this they first simulated an abstract metabolic network (using ODEs, see the model file in the Supplement) with two different pathways required for growth, with branched and linear blocks and one negative feedback (see Fig 3 in the paper). They simulated the effect of increasing drug doses in their models by decreasing the enzyme activities of the simulated targets. For each possible drug-drug combination they then calculated the predicted dose-matrix effect on growth (pathway output). They observed that by fitting the obtained dose-matrices to 4 types of classical dose-matrix models (described in Fig 2) they could predict where in this network the two targets would most likely be.
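As a rough illustration of the idea (my own toy, far simpler than the ODE models used in the paper), the sketch below builds a dose-matrix for two inhibitors acting on a serial two-step pathway and compares it with a Bliss independence expectation. The IC50s, Hill coefficients and the assumption that growth equals pathway flux are all invented.

```python
import numpy as np

def remaining_activity(dose, ic50=1.0, hill=1.0):
    """Fraction of a target's activity left at a given drug dose (Hill-type inhibition)."""
    return 1.0 / (1.0 + (dose / ic50) ** hill)

def pathway_output(dose_a, dose_b):
    """Toy serial two-enzyme pathway: output limited by the more inhibited step."""
    v1 = remaining_activity(dose_a)   # drug A hits enzyme 1
    v2 = remaining_activity(dose_b)   # drug B hits enzyme 2
    return min(v1, v2)

doses = [0.0] + [2.0 ** k for k in range(-3, 4)]            # serially diluted doses
matrix = np.array([[pathway_output(a, b) for b in doses] for a in doses])

# Bliss independence expectation: fractional outputs of the single drugs multiply
single_a = np.array([pathway_output(a, 0.0) for a in doses])
single_b = np.array([pathway_output(0.0, b) for b in doses])
bliss_expected = np.outer(single_a, single_b)
print(np.round(matrix - bliss_expected, 2))  # deviations hint at how the targets are wired
```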
As an example, two sequential targets in an unbranched section of the network, embedded in a negative feedback, produce a dose-matrix that best fits a potentiation model (shown here, adapted from Fig 3).
Having established by simulations that there is information in the dose-matrices that relates to the interaction of their targets, they then tested the effect of 10 known antifungal drugs on the sterol pathway (also well established) of Candida glabrata. For each drug-drug combination they tried to fit the experimental dose-matrices to the same 4 models and compared the best model fit to what would be expected from the position of the targets in the sterol pathway. In many cases (72%) the best model fit was the same as predicted from the sterol pathway model, but only 54% of the best-fit models were unambiguous. There were some cases where drug-with-itself dose matrices (positive controls) did not appear additive as expected. The authors mention that this is due to the "instability in the measured potency of a drug" but I am not sure why a drug-with-itself matrix would not be reproducible.
Finally the authors further tested this relation between drug combinations and target interactions by experimentally measuring drug dose-matrices for 94 drug/compounds in human(HCT116) tumor cells (see text for details).
In summary, even if the prediction accuracy is far from perfect, this work shows that it should be possible to either:
1 - use known pathway models plus drug dose-matrices to improve prediction of the most likely targets of the drugs
2 - use known drug-target relationships plus the drug dose-matrices to predict the network connectivity
One obvious complication is that multiple drug targets for the same compound would reduce the usefulness of the predictions. Some interesting extensions could be to test drug-drug interactions in KO strains or in combination with RNAi knock-downs or protein over-expressions.
Thursday, April 24, 2008
SciFoo and BioBarCamp
(Via Attila) The invitations for the 3rd SciFoo have apparently been sent. It will be held from the 8th to the 10th of August at the Googleplex. There is also an idea floating around to organize a BarCamp at the same time as SciFoo; check out the BioBarCamp wiki and discussion group. There are already several suggestions for venues to organize it and several people interested in attending.
On a side note it's fun to see something like this getting thought of and set up from Twitter/FriendFeed conversations. I have been trying out FriendFeed for a while now and although I am not a big fan of micro blogging (yet?) I really like the conversations around the feed streams.
Wednesday, April 16, 2008
The shuffle project

Most of my work in the last few years was computational, either looking at the evolution of protein-protein interactions or at the prediction of domain-peptide interactions. The nice thing about working in a lab where a lot of people were doing wet-lab experiments was that I had the opportunity to, once in a while, grab some pipettes and participate in some of the work that was going on. One project that worked out well was published today (not open access, sorry). My contribution to this project was small but it was a lot of fun and I am very interested in the topic that we worked on. We called it the shuffle project in the lab.
The main objective of this work was to study how the addition of gene regulatory interactions impacts a cell's fitness. We introduced different combinations of existing E. coli promoters and transcription/sigma factors, either on plasmids or integrated in the genome. In effect, each construct mimics a duplication of one of E. coli's sigma factors or transcription factors with a change in its promoter. We then tested the impact on fitness by measuring growth curves under different conditions or by performing competition assays.
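For readers less familiar with these measurements, here is a generic sketch of how such fitness effects can be quantified (not the exact analysis from the paper): a growth rate from a log-linear fit to exponential-phase OD readings, and a relative fitness from the change in strain ratios during a head-to-head competition. All numbers are made up.

```python
import math

def growth_rate(times_h, ods):
    """Exponential growth rate (per hour) from a log-linear least-squares fit
    to OD readings taken during exponential phase."""
    logs = [math.log(od) for od in ods]
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_y = sum(logs) / n
    return (sum((t - mean_t) * (y - mean_y) for t, y in zip(times_h, logs))
            / sum((t - mean_t) ** 2 for t in times_h))

def relative_fitness(mut_initial, mut_final, ref_initial, ref_final):
    """Relative fitness as the ratio of realized Malthusian growth rates of the
    construct-carrying strain versus the reference during a competition assay."""
    return math.log(mut_final / mut_initial) / math.log(ref_final / ref_initial)

# Made-up numbers, only to show the calculations
print(growth_rate([0, 1, 2, 3], [0.05, 0.1, 0.2, 0.4]))  # ~0.69 per hour (doubling ~1 h)
print(relative_fitness(1e5, 4e7, 1e5, 2e7))              # > 1: construct outcompetes the control
```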
There were a couple of interesting findings, but the two that I found most interesting were:
- The vast majority of the constructs had no measurable impact on growth even by testing different experimental conditions.
- A few constructs could out-compete the control in competition assays (stationary phase survival or passaging experiments in rich medium).
Both of these suggest that the gene regulatory network of E. coli is very tolerant to the addition of novel regulatory interactions. This is important because it tells us that regulatory networks are free to explore new interactions, given that there is a limited impact on fitness. From this we could also argue that, if there are many equivalent (nearly neutral) ways of regulating gene expression, we can't expect to see individual gene regulatory interactions conserved across different species. There are several recent studies, particularly in eukaryotic species, showing that there is in fact a fast divergence of transcription factor binding sites (see the recent review by Brian B. Tuch and colleagues) and many other examples showing that, although the selectable phenotype is conserved, the underlying interactions or regulations have diverged in different species (see Tsong et al. and Lars Juhl Jensen et al.).
There are a couple of questions that come from these and other related works. What fraction of cellular interactions is simply biologically irrelevant? Is it possible to predict to what degree purifying selection restricts changes at different levels of cellular organization? What is the extent of change in protein-protein interactions?
Having previously worked on the evolution of protein-protein interactions this is the direction that most interests me. This is why I am currently looking at the evolution of phospho-regulation and signaling in eukaryotic species.
Monday, April 14, 2008
Life Sciences Virtual Conference and Expo

- Dr. Paul Matsudaira, Director Whitehead Institute Professor of Biology and Bioengineering, MIT : Advanced Imaging and Informatics Methods for Complex Life Sciences Problems
- Professor Jan-Eric Litton, Director of Informatics, Karolinska Institute - Biobanking : The Challenge of Infrastructure for Large Scale Population Studies
- Dr. Joel Saltz, Professor and Chair, Department of BioMedical Informatics, Ohio State University : The Cancer Biomedical Informatics Grid (caBIG™)
- Professor Peter J. Hunter, University of Auckland, Bioengineering Institute : Innovation in biological system simulations
- Dr. Ajay Royyuru, IBM Research, Computation Biology at IBM : Update on the IBM Genealogy Project co-sponsored with National Geographic
- Dr. Michael Hehenberger, Solutions Executive, Global Life Sciences : IT Architectures and Solutions for Imaging Biomarkers
Tuesday, April 08, 2008
Structure based prediction of SH2 targets
One of the last few things I worked on during my PhD is now available in PLoS Comp Bio. It is about the structure-based prediction of binding of SH2 domains to phospho-peptide targets.
The SH2 domain (Src homology domain 2) is a small domain of around 100 amino acids that has a strong preference for binding peptides that have phosphorylated tyrosines. The selectivity of each domain is typically further restricted by variable surfaces near the phospho-tyrosine binding pocket. See the figure below:
[Figure: SH2 domain structure showing the phospho-tyrosine binding pocket and the nearby variable surfaces]
The binding preference of each domain can be experimentally determined using for example peptide library screening, phage display or protein arrays. Alternatively we should be able to analyze the increasing amount of structural information and predict the binding specificity of peptide binding domains.
We tried to show here that, given a structure of an SH2 domain in complex with a peptide, it is possible to predict the binding specificity of this domain. It is also possible, to some extent, to predict how mutations in these domains might affect their binding preferences. Finally, combining predictions of specificity with known human phospho-sites allows for very reasonable predictions of in vivo SH2-target interactions.
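That last step, combining a specificity model with known phospho-sites, can be illustrated with a toy example (my own sketch, not the scoring used in the paper): score each known phospho-tyrosine site against a position weight matrix of the domain's predicted preferences and rank the candidates. The matrix and peptides below are invented.

```python
# Toy position weight matrix of predicted preferences at positions -2..+3 around
# the phospho-tyrosine. All values are invented log-odds, NOT matrices from the paper.
PWM = {
    -2: {"E": 0.4, "D": 0.3},
    -1: {"E": 0.3},
     1: {"E": 0.5, "D": 0.4},
     2: {"I": 0.8, "L": 0.6, "V": 0.5},
     3: {"I": 0.9, "L": 0.7},
}

def score_site(sequence, py_index, pwm=PWM, default=-0.2):
    """Sum log-odds preferences over the residues flanking a phospho-tyrosine."""
    total = 0.0
    for offset, prefs in pwm.items():
        i = py_index + offset
        if 0 <= i < len(sequence):
            total += prefs.get(sequence[i], default)
    return total

# Hypothetical known phospho-tyrosine sites: (peptide, index of the pY)
known_sites = {"protA_Y5": ("MEEDYEEIAK", 4), "protB_Y5": ("MKKPYGGGGK", 4)}
ranked = sorted(known_sites.items(), key=lambda kv: score_site(*kv[1]), reverse=True)
for name, (seq, idx) in ranked:
    print(name, round(score_site(seq, idx), 2))
```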
The obvious limitation here is that we need to start with a structure of the domain. We know from some unpublished work that, for families with good structural coverage, homology models can produce specificity predictions that are as accurate as those from X-ray structures. The other limitation is that, given the lack of dynamics, a single conformation of the interaction is modeled, even though flexibility should in part help determine the binding specificity. One possible solution to this problem that we have used with some success is to model different peptide conformations for each binding domain.
I should make clear that, although I think there is an improvement over previous work, there is already a considerable amount of research on this topic that we tried to cite in the introduction and discussion. I would say that some of the best previous work on structure-based predictions of domain-peptide interactions has come from Wei Wang's lab (see for example McLaughlin et al. or Hou et al.).
This manuscript was the first (and so far the only one) I collaborated on using Google Docs. It worked well and I recommend it to anyone that needs to co-write a manuscript with other people. It saves a lot of emails and annotations on top of annotations.
Bio::Blogs#20 - the very late edition
I said I would organize the 20th edition of Bio::Blogs here on the 1st of April, but April Fools and my current workload did not allow me to get Bio::Blogs up on time.
There were a couple of interesting discussions and blog posts in March worth noting. For example, Neil mentioned a post by Jennifer Rohn that initiated what could be one of the longest threads in Nature Network: "In which I utterly fail to conceptualize". It started off as a small anti-Excel rant but turned, in the comments, into 1st) a discussion of which bioinformatic tools to use and 2nd) a discussion of wet versus dry mindsets and how much time one should devote to learning the other. Finally it ended up as an exchange about collaborations and how a social networking site like Nature Network could/should help scientists find collaborators. There was even a group started by Bob O'Hara to discuss this last issue further.
I commented on the thread already but can try to expand a bit on it here. Nature Network is positioned as a social networking site for scientists. So far the best that it has to offer has been the blog posts and forum discussions. This is not very different from a "typical" forum. It facilitates the exchange of ideas around scientific topics, but NN could try to look at all the typical needs of scientists (lab books, grant managing, lab managing, collaborations, protocols, paper recommendations, etc) and decide on a couple that they could work into the social networking site. Ways to search for collaborators and maybe paper recommendation engines that take advantage of your network (network+Connotea) are the most obvious and easiest to implement. Thinking long term, tools to help manage the lab could be an interesting addition.
Another interesting discussion started from a post by Cameron Neylon on a data model for electronic lab notebooks (part I, II, III). Read also Neil's post, and Gibson's reply to Cameron on FuGE.
How much of the day-to-day activities and results needs to be structured? How heavy should this structure be to capture enough useful computer-readable information? Although I find these questions and discussions interesting, I would guess that we are far from having this applied to any great extent. If most people are reluctant to try out new applications, they will be even less willing to convey their day-to-day practices via a structured data model. I mentioned recently the experiment under way at the FEBS Letters journal to create structured abstracts during the publishing process. As part of the announcement the editors commissioned reviews on the topic. It is worth reading the review by Florian Leitner and Alfonso Valencia on computational annotation methods. They argue for the creation of semi-automated tools that take advantage of both the automatic methods and the curators (authors or others). The problems and solutions for annotation of scientific papers are shared with digital lab notebooks. I hope that more interest in this problem will lead to easy-to-use tools that suggest annotations for users under some controlled vocabularies.
Several people blogged about the 15 year old bug found in the BLOSUM matrices and the uncertainty in multiple sequence alignments. See posts by Neil, Kay Lars and Mailund.
Both cases remind us of the importance of using tools critically. The flip side of this is that it is impossible to constantly question every single tool we use since this would slow our work down to a crawl.
On the topic of Open Science, the Open Science proposal drafted by Shirley Wu and Cameron Neylon for the Pacific Symposium on Biocomputing was accepted in March. It was accepted as a 3-hour workshop consisting of invited talks, demos and discussions. The call for participation is here, along with the important deadlines for submissions (talk proposals due June 1st and poster abstracts due the 12th of September).
On a related note, Michael Barton has set up a research stream (explained here). He is collecting updates on his work, tagged papers and graphs posted to Flickr into one feed that gives an immediate impression of what he is working on at the present time. This is really a great setup. Even for private use within a lab, or across labs for collaboration, this would give everyone involved the capacity to tap into the interesting feeds. I would probably not like to have everyone's feeds, and maybe a supervisor should have access to some filtered set of feeds or tags to get only the important updates, but this looks like a step in the right direction. In the same way, machines could also have research feeds that I could subscribe to, to get updates on some data source.
Also in March, Deepak suggested we need more LEAP (Lightly Engineered Application Products) in science. He suggests that it is better to have one tool that does a job very well than one that does many jobs somewhat well. I guess we have a few examples of this in science. Some of the most cited papers of all time are very well known cases of a tool that does one job well (ex: BLAST).
Finally, some meta-news on Bio::Blogs. I am currently way behind on many work commitments and I don't think I can keep up the (light) editorial work required for Bio::Blogs, so I am considering stopping Bio::Blogs altogether. It has been almost two years and it has been fun and hopefully useful. The initial goal of trying to knit together the bioinformatics-related blogs and offering some form of highlighting service is still needed, but I am not sure this is the best way going forward.
Still, if anyone wants to take over from here let me know by email (bioblogs at gmail.com).
Tuesday, April 01, 2008
(April fools update) Leveling the playing field – NIH to ban brain enhancing practices
Update - This post was part of an April 1st news but I am sure everyone got it :). Still the pressure in science is real and worth thinking about.
There has been quite a buildup of discussion surrounding the idea of brain enhancing drugs in the last couple of days. It started in early March with a New York Times piece, "Brain Enhancement Is Wrong, Right?", and it has culminated with the recent announcement of the World Anti Brain Doping Authority (WABDA), a joint effort from the NIH and EU to initiate studies on the reach of brain enhancing practices in science today.
There are many points of view already expressed on the web, see for example: ·Chris Patil
·Bora
·Anna Kushnir
·Genome Technology
·Egghead
·Eye on DNA
·Bob Ohara
·Martin Fenner
·Jennomics
My first reaction was of pure skepticism; this must be some kind of joke, I thought, so I tried to probe a little bit around the UCSF campus to see if anyone had ever heard of this as well. One of my supervisors mentioned that about a year ago he had to fill out an NIH survey addressing the current problem of very high rejection rates for NIH grants. It looks like within this survey there was a section regarding the problems of competition in science, and some of it brushed around the topic of brain enhancing practices. It could be that at the time the NIH was trying to measure how far people would go in an extremely competitive environment.
This really got me thinking about how we are engaged in an environment that is not that far removed from highly competitive sports. How many stories have we heard about data forgery and scandalous retractions in the last couple of years? To what extent will people go to secure their place in science? To be recognized?
So maybe NIH is right in being proactive. Even if the issue is not as serious in science as it is in sports, unless there is an amazing influx of money or a considerable decrease of working scientists this might become an important problem. If nothing else we will get to know the current extent of these practices and it highlights yet again how far we deviated from course. The money society puts into scientific research is being wasted on overlapping competitive projects. Research agendas should be open and free for anyone to participate in. Maybe NIH should regulate that as well.
Monday, March 31, 2008
call for Bio::Blogs #20
The 20th edition of Bio::Blogs will be posted here by the end of tomorrow. This is very short notice but if anyone would like to contribute please send a few links of the most interesting things of the past month and I will put everything together (email bioblogs at gmail).
Friday, March 21, 2008
The structured abstract experiment at FEBS letters
The journal "FEBS letters" is starting a publishing experiment on structured abstracts. As described in the editorial the experiment is aimed at:
"integrating each manuscript with a structured summary precisely reporting, with database identifiers and predefined controlled vocabularies, the protein interactions reported in the manuscript."
The experiment will be a collaboration between FEBS Letters and the interaction database MINT; it started at the beginning of this year and will last 6 months. It will try to evaluate the necessary tools and the authors' "degree of interest (and competence) to invest" in this annotation process.
It will be very interesting to see the results of this experiment: whether authors are willing to do this extra bit of work and how much it might facilitate the annotation efforts.
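To give an idea of what such a structured summary could look like in machine-readable form, here is a purely illustrative example; the field names and placeholder terms are hypothetical, not the actual MINT/FEBS Letters format.

```python
# Purely illustrative: a machine-readable version of "protein A interacts with
# protein B, shown by method M, in figure F", using database identifiers and
# controlled-vocabulary style terms. Field names and the MI placeholders are
# hypothetical, not the actual MINT/FEBS Letters schema.
structured_summary = [
    {
        "interactor_a": "uniprotkb:P04637",   # database identifier for protein A
        "interactor_b": "uniprotkb:Q00987",   # database identifier for protein B
        "interaction_type": "MI:xxxx",        # a PSI-MI controlled-vocabulary term would go here
        "detection_method": "MI:xxxx",        # e.g. a term for co-immunoprecipitation
        "reported_in": "Figure 2B",
    },
]
print(structured_summary[0]["interactor_a"])
```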
Saturday, March 08, 2008
Bio::Blogs #19 - Bioengineering
This month's edition of Bio::Blogs is now available at Duncan's blog and it is mostly focused on (bio)engineering. Click the link for a summary of interesting things that were blogged about in the past month.
I will be hosting issue number 20 here in the blog, without a clear topic, possibly with some emphasis on data integration. Email your top picks of the month by the end of March to bioblogs at gmail.com.
Sunday, March 02, 2008
Design, mutate and freeze
Drew Endy talked about engineering biology for Edge. Most of the emphasis is still on the standardization of biological parts and the importance of simplifying the process of creating a biological function. Still, it would be nice to hear from him some new ideas about establishing processes for engineering biology. His whole speech seems focused on creating a hacker culture in biology, transposing the concepts that allowed the explosive growth of tinkering and production we saw in electronics and computer programming into the biological sciences.
I agree with most of what he says, that we should: 1) focus on method development; 2) work on a registry of parts and 3) foster an "open source"/hacker culture in synthetic biology. In this text he did not mention, for example, the importance of modeling, but it is implicit in the standardization of parts. Once you have a computer simulation of the process you wish to engineer, you should be able to reach into the parts list to implement it. The problem with this concept of standardized parts is the complexity that Drew Endy dislikes so much. There is still no way around it. We can take a part that has been very well defined in E. coli, plug it into a yeast plasmid, and it might not work at all.
If we are still far away from the ideal of plug and play, maybe we could try to take advantage of what biology can do very well: evolve to a suitable solution. I would argue that we should develop engineering protocols that take advantage of the evolutionary process.
<insert rambling>
Let's say we want to implement a function and I know beforehand that I will not be able to get perfect parts to implement it. Can we design this function in a way that it will have a large funnel of attraction for the design properties that I am interested in? Are there biological parts that are more amenable to a directed evolutionary experiment to reach that design goal? How can I increase the mutation rate for a controlled period of time and only for the stretch of DNA that I want to evolve? Maybe it is possible to place the parts in a plasmid and have the replication of this plasmid be under a different polymerase that is more error prone?
</insert rambling>
If we could answer some of these questions (maybe we have already), we could design the function of interest (modeling), pull parts that would be close to the solution, mutate/select until the best design is achieved and then freeze it by reducing the generation of diversity in some way.
Further reading:
Synthetic biology: promises and challenges
Molecular Systems Biology 3 Article number: 158 doi:10.1038/msb4100202
Tuesday, February 26, 2008
Jonathan Eisen@PLoS
PLoS has a new Academic Editor in Chief who blogs, works on evolution and has been at SciFoo twice. Jonathan A. Eisen explains his reasons for accepting the job in an editorial available online. Among other things, he states:
"Second, I want to work with the professional staff at PLoS Biology, the Academic Editors, and anyone else in the community who shares my desire to build new initiatives that will keep PLoS Biology as a top-tier journal. These would include ideas like producing issues dedicated to particular themes, actively recruiting excellent papers in fields where OA is not yet common, producing more outreach and educational material, and engaging bloggers and fully embracing the Web 2.0 world."
I actually would like to get a bit more involved with what they are doing at PLoS, in particular with what they might be discussing for PLoS ONE and the hubs. Maybe I can pester them later on during the year. For some reactions to the news and more information, here is the related Postgenomic cluster.
I wonder if we will ever see the AEIC of Science/Nature/Cell blogging :). The editorials are the closest article format to a blog post but they insist on a somewhat exaggerated formality. Just as an example here is a link to the 2007 archives of the (great) editorials of Frank Gannon from EMBO reports.
Friday, February 22, 2008
Call for Bio::Blogs#19
Duncan Hull has volunteered to host the next issue of Bio::Blogs (a monthly bioinformatics blog journal). It will be out at the beginning of March on the O'Really? blog. The suggested theme for this month is the relationship between biology and engineering, inspired by the interview published on Edge.org, "Engineering and Biology": A Talk with Drew Endy. Anyone can send links on this topic, as well as other interesting bioinformatics posts, to bioblogs at gmail.com
We could also try to format it automatically using FeedJournal, as suggested by Neil.
Friday, February 08, 2008
Late Links: Bio::Blogs#18 + new blog
I have been away from the web for the last few weeks as I moved to San Francisco to start my first postdoc. I will be working at UCSF in the Lim Lab and the Krogan lab on the evolution of signaling in yeasts. I'll try to blog more about it later during the year. I am looking forward to getting to know the bay area and hopefully make the most of the great (and apparently relaxed) science & technology environment.
Early this month Michael Barton edited another great edition of Bio::Blogs mostly dedicated to open science. He also put together an essay on the subject that is worth reading and commenting on. The next edition of Bio::Blogs will probably come back here to Public Rambling on the 1st of March (unless there is another volunteer).
Also in these last few weeks, Lars Juhl Jensen started blogging at Buried Treasure. I met Lars at EMBL while I was doing my PhD and he always had time to help me out when I had a work-related question. As Roland Krause said, Lars is one of the most prolific researchers in computational biology I have ever met.
Saturday, January 26, 2008
Submissions for Bio::Blogs#18
I am slowly re-connecting to the online world, trying to pick through the thousands of blog posts and other RSS feed alerts piled up in GReader. Well before I manage to do that (unless I press the "read all" button), the next edition of Bio::Blogs will be up at Bioinformatics Zen. Michael Barton has kindly agreed to host the 18th edition of Bio::Blogs with a particular emphasis on Open Science and Open Notebook Science. It is scheduled for February 1st and anyone can participate by sending a link to their submissions to bioblogs at gmail.com.
To get in the spirit of the upcoming edition and to inspire some related blog posts, go check out his recent movie. What do you think? Will there be a significant increase in people sharing and collaborating online this year?
Sunday, December 23, 2007
Disconnecting for a while
I am disconnecting from blogging for longer than usual. There will not be a Bio::Blogs edition on the 1st of January, but there will be one dedicated to Open Science on the 1st of February. Before I go, congratulations to the chemoinformatics blogging group that got a paper out of their combined efforts. Also, have a look at the new blog from Jason Kelly called Free Genes, which will focus on synthetic biology and open science issues.
I'll be back sometime at the end of January. Happy celebrations to everyone and a good start to the new year.
Wednesday, December 05, 2007
Open Science project on domain family expansion
Some domain families of similar function have expanded more than others during evolution. Different domain families might have significantly different constraints imposed by their fold that could explain these differences. This project aims to understand what properties determine these differences, focusing in particular on peptide-binding domains. Examples of constraints to explore include the average cost of production and the capacity to generate binding diversity within the domain family.
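As a rough illustration of what "expansion" means here (a sketch over a made-up annotation table, not the project's actual pipeline), one could simply count how many proteins carry a domain of each family in each proteome and compare the counts across species:

```python
# Sketch: compare domain family sizes across species from a hypothetical,
# Pfam-style table of (species, protein, domain_family) annotations.
from collections import Counter

annotations = [
    ("S.cerevisiae", "P1", "SH3"), ("S.cerevisiae", "P2", "SH3"),
    ("S.cerevisiae", "P3", "PDZ"),
    ("H.sapiens", "Q1", "SH3"), ("H.sapiens", "Q2", "SH3"),
    ("H.sapiens", "Q3", "SH3"), ("H.sapiens", "Q4", "PDZ"),
    ("H.sapiens", "Q5", "PDZ"), ("H.sapiens", "Q6", "PDZ"),
]

# Number of proteins carrying each domain family, per species.
family_sizes = Counter((species, family) for species, _, family in annotations)
for (species, family), size in sorted(family_sizes.items()):
    print(species, family, size)

# Families whose counts grow much faster in some lineages than in others are
# the "expanded" families whose constraints the project would try to explain.
```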
This project is also a test for using Google Code as a research project management system for open science (see here for the project home). Wiki pages will be used to collect previous research and milestone discoveries during the project's development and to write the final manuscript towards the end of the project. The issue tracking system can be used to organize the required project tasks and assign them to participants. The file repository can hold the datasets and code used to derive any result.

I plan to use the blog as a notebook for the project (tag: domainevolution) and the project home at Google Code as the repository and organization center. The next few posts regarding the project will be dedicated to explaining better why I am interested in the question and developing some of my expectations further. Anyone interested in contributing is more than welcome to join in along the way. I should say that I am not in any hurry and that this is something for my 20% time ;).
Sunday, December 02, 2007
Merry Bio::Blogs everyone
Paulo Nuin hosted the 17th edition of Bio::Blogs. The number of submissions was very low so I suspect I am not the only one rushing to finish everything before going on holidays.
Should we skip the edition of the 1st of January or maybe postpone it for a few days? Anyone interested in hosting? I have been thinking of changing the format a little bit to try to increase the incentives for participating, but I'll leave this for another post.
Tuesday, November 27, 2007
Bio::Blogs #17 - call for submissions
The 17th edition of Bio::Blogs will be hosted by Paulo Nuin at Blind.Scientist. Submissions of interesting bioinformatics-related blog posts from this month can be sent, until the end of November, to the usual address (bioblogs at gmail dot com) or to nuin at genedrift dot org.
There is also still time to submit blog posts to the OpenLab 2007 compilation.
Monday, November 19, 2007
Linking Out - Open Science and a new blog
Cameron Neylon posted a request for collaboration in his blog:
...we are using the S. aureus Sortase enzyme to attach a range of molecules to proteins. We have found that this provides a clean, easy, and most importantly general method for attaching things to proteins.
(...)
We are confident that it is possible to get reasonable yields of these conjugates and that the method is robust and easy to apply. This is an exciting result with some potentially exciting applications. However to publish we need to generate some data on applications of these conjugates.
They are looking for collaborators interested in applying this method. Go check the blog posts if you are interested or know someone that works on something similar.
(via Open Access News) Liz Lyon, Associate Director of UK Digital Curation Centre posted an interesting presentation on Open Science: "Open Science and the Research Library: Roles, Challenges and Opportunities?".
(via Fungal Genomes) I found a new blog related to evolution called Thirst for Science with a lot of insightful posts.
Linking out - Personalized medicine
Personalized medicine continues to climb the hype cycle. I have been getting most of the best news coverage on the subject from blogs.
- Bertalan Meskó reviews companies focused on personalized medicine (see part I and II)
- Attila Csordas and Deepak Singh cover the social aspects of personal health and the tie-in to 23andMe
- Gareth Palidwor reads into the details to speculate that the business model of 23andMe might be to sell the aggregated user data.
- Gene Sherpas puts on the brakes, describing the hype as Genomic Voyeurism
I am concerned that all the attention on the genomics side of personalized medicine will distort the relative importance of nature versus nurture. Everyone craves a peek at their own destiny and at their roots. These services hope to provide both by looking at our DNA. I don't think they can really do this reliably, but nothing stops them from luring people in.
Tuesday, November 13, 2007
Last call for Open Laboratory 2007

Anyone interested in participating can send in links to their favorite blog posts of the year and also volunteer to be part of the reviewing process (see instructions here).
Monday, November 12, 2007
4th year blog anniversary
Having a glance at the blog posts, it is easy to find some very weird ones :)
Your Identity Aura (2005)
Our Collective Mind (2005)
The Human Puppet (2005)
Social Network Dynamics in a Conference Setting (2006)
The Fortune Cookie Genome (2007)
There are a lot of serious ones too, but I will leave that list for some other time.
Thanks to Nodalpoint and the Nodalpoint regulars (Greg, Neil, Alf and Chris) for introducing me to blogging some 6 years ago and to everyone else that joined in along the way with their blogs and/or comments. It sure makes blogging more enjoyable.
(Image Credit: Picture taken by mattnjuzz and published under CC by-nc-sa. Originally taken from Flickr)
Saturday, November 10, 2007
Predicting functional association using mRNA localization
About a month ago, Lécuyer and colleagues published a paper in Cell describing an extensive study of mRNA localization in Drosophila embryos during development. The main conclusion of this study was that a very large fraction (71%) of the genes they analyzed (2314) had localization patterns during some stage of embryonic development. This includes both embryonic and sub-cellular localization patterns.
A lot of information was gathered in this analysis and it should serve as a resource for further studies. There is information for different developmental stages, so it should also be possible to look at the dynamics of mRNA localization. Another application of this data would be to use it as an information source to predict functional associations between genes.
Protein localization information has been used in the past for the prediction of protein-protein interactions (both physical and genetic interactions). Typically this is done by integrating localization with other data sources in probabilistic analyses [Jansen R et al. 2003, Rhodes DR et al. 2005, Zhong W & Sternberg PW, 2006].
To test whether mRNA localization could be used in the same way, I took from this website the localization information gathered in the Cell paper, along with the available genetic and protein interaction information for D. melanogaster genes/proteins (which can be obtained, for example, from BioGRID among other sources). For this analysis I grouped physical and genetic interactions together to have a larger number of interactions to test. The underlying assumption is that both should imply some functional association of the gene pair.
The very first simple test is to look at all pairs of genes (with available localization information) and test how the likelihood that they interact depends on the number of cases in which they were found to co-localize (see figure below). I discarded any gene for which no interaction was known.
As seen in the figure, there is a significant correlation (r=0.63, N=21, p<0.01) between the likelihood of interaction and the number of co-localizations observed for the pair. At this point I did not exclude any localization term, but since images were annotated using a hierarchical structure, these terms are in some cases very broad.
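For anyone who wants to try something similar, here is a minimal sketch (in Python, with hypothetical toy inputs rather than the actual data files used for the figure) of how the co-localization counts and interaction fractions could be computed:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical toy inputs, not the real datasets used in this analysis:
# gene_terms: gene id -> set of localization terms annotated to that gene
# interactions: set of frozensets, each holding a pair of interacting genes
gene_terms = {
    "geneA": {"pole plasm", "perinuclear"},
    "geneB": {"pole plasm"},
    "geneC": {"apical"},
}
interactions = {frozenset(("geneA", "geneB"))}

# For each pair of genes with localization data, count shared terms and
# record whether the pair is a known (genetic or physical) interaction.
pairs_per_count = defaultdict(lambda: [0, 0])  # co-loc count -> [interacting, total]
for g1, g2 in combinations(sorted(gene_terms), 2):
    shared = len(gene_terms[g1] & gene_terms[g2])
    interacting = frozenset((g1, g2)) in interactions
    pairs_per_count[shared][0] += int(interacting)
    pairs_per_count[shared][1] += 1

# Fraction of interacting pairs as a function of the number of co-localizations.
for count in sorted(pairs_per_count):
    hits, total = pairs_per_count[count]
    print(count, hits / total, total)
```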
More specific patterns should be more informative, so I removed very broad terms by checking the fraction of genes annotated to each term. I created two groups of narrower scope, one excluding all terms annotated to more than 50% of genes (denoted "localizations 50") and a second excluding all terms annotated to more than 30% of genes ("localizations 30"). In the figure below I binned gene pairs according to the number of co-localizations observed in the three groups of localization terms and, for each bin, calculated the fraction that interact.

As expected, more specific mRNA localization terms (localizations 30) are more informative for the prediction of functional association, since fewer terms are required to obtain the same or a higher likelihood of interaction. The increased likelihood does not come at the cost of fewer annotated pairs. For example, there is a similar number of gene pairs in the "10-14" bin of the more specific localization terms (localizations 30) as in the ">20" bin for all localization terms (see figure below).
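Again as a rough sketch (the thresholds mirror the "localizations 50/30" groups above, while the data structures are the same toy placeholders as in the previous snippet), filtering out broad terms by annotation frequency could look like this:

```python
from collections import defaultdict

def filter_broad_terms(gene_terms, max_fraction):
    """Keep only localization terms annotated to at most max_fraction of the
    genes (e.g. 0.5 for "localizations 50", 0.3 for "localizations 30")."""
    n_genes = len(gene_terms)
    term_counts = defaultdict(int)
    for terms in gene_terms.values():
        for term in terms:
            term_counts[term] += 1
    kept = {t for t, c in term_counts.items() if c / n_genes <= max_fraction}
    return {gene: terms & kept for gene, terms in gene_terms.items()}

# Toy gene_terms dictionary, as in the previous sketch:
gene_terms = {
    "geneA": {"pole plasm", "perinuclear"},
    "geneB": {"pole plasm"},
    "geneC": {"apical"},
}
localizations_50 = filter_broad_terms(gene_terms, 0.5)
localizations_30 = filter_broad_terms(gene_terms, 0.3)
```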
It is important to keep in mind that mRNA localization alone is a very poor predictor of genetic or physical interaction. I took the number of co-localizations of each pair (using the terms in "localizations 30") and plotted a ROC curve to determine the area under the ROC curve (AROC or AUC). The calculated AROC was 0.54, with a 95% confidence lower bound of 0.52 and a p-value of 6E-7 against the null hypothesis that the true area is 0.5. So it is better than random (which would be 0.5), but by itself it is a very poor predictor.
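For completeness, here is a small sketch of the kind of check involved: treat the co-localization count of each pair as a score and known interactions as positive labels, then compute the area under the ROC curve via the rank-sum (Mann-Whitney) identity. The numbers are placeholders, not the data behind the reported AUC of 0.54.

```python
def auc(scores_pos, scores_neg):
    """Probability that a randomly chosen positive pair scores higher than a
    randomly chosen negative pair, counting ties as one half (equivalent to
    the area under the ROC curve)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Placeholder co-localization counts (using "localizations 30" terms):
colocs_interacting = [3, 2, 1, 1]     # pairs with a known interaction
colocs_noninteracting = [0, 1, 0, 2]  # pairs with no known interaction
print(auc(colocs_interacting, colocs_noninteracting))  # values near 0.5 mean little predictive power
```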
In summary:
1) the degree of mRNA co-localization significantly correlates with the likelihood of genetic or physical association.
2) less ubiquitous mRNA localization patterns should be more informative for interaction prediction
3) the degree of mRNA co-localization is by itself a poor predictor of interaction but it should be possible to use this information to improve statistical methods to predict genetic/physical interactions.
This was a quick analysis, not thoroughly tested and just meant to confirm that mRNA localization should be useful for genetic/physical interaction predictions. I am not going to pursue this, but for anyone interested it could be worth examining which terms have more predictive power, with the idea of integrating this information with other data sources or possibly directing future localization studies. Perhaps there is little point in tracking different developmental stages, or maybe embryonic localization patterns are not as informative as sub-cellular localizations for predicting functional association.
Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, Chung S, Emili A, Snyder M, Greenblatt JF, Gerstein M. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003 Oct 17;302(5644):449-53.
Rhodes DR, Tomlins SA, Varambally S, Mahavisno V, Barrette T, Kalyana-Sundaram S, Ghosh D, Pandey A, Chinnaiyan AM. Probabilistic model of the human protein-protein interaction network. Nat Biotechnol. 2005 Aug;23(8):951-9.
Zhong W, Sternberg PW. Genome-wide prediction of C. elegans genetic interactions. Science. 2006 Mar 10;311(5766):1481-4.
Thursday, November 08, 2007
What I don't like about BPR3
For those that have not heard about it before, BPR3 stands for Bloggers for Peer-Reviewed Research Reporting. From their website:
"Bloggers for Peer-Reviewed Research Reporting strives to identify serious academic blog posts about peer-reviewed research by offering an icon and an aggregation site where others can look to find the best academic blogging on the Net."
It is all great, except that this already existed long before BPR3. You can go to the papers section of Postgenomic and select papers by the date they were published or blogged about, by how many bloggers mentioned the paper, or limit the search to a particular journal. I even used this earlier this year to suggest that the number of citations increases with the number of blog posts mentioning a paper.
In this case I think that, unless they really aim to develop something better than what Postgenomic already offers, the added competition will only fragment an already poor market. The value of a tracking site like Postgenomic, Techmeme, or what BPR3 is proposing to create increases with the user base in a non-linear way. This is what people usually refer to as network effects in social web applications. Increasing numbers of users make the site more useful, reinforcing the importance of the social application. I suspect Postgenomic is not closed in any way to discussions; the code is even available here for re-use. So why can't BPR3 and Postgenomic work this out and have a single tracking database and presentation? BPR3 could, for example, be a mirror of the Postgenomic papers section (why re-invent the wheel?).
I am not in favor of any particular site (sorry Euan :). What I think would be useful would be:
1) common standards for everyone (publishers, bloggers, etc.) to carry information on published literature (number of times a paper was read, ratings, comments, blog posts, e-notebook data, etc.) attached to a single identifier (a DOI sounds fine) - see the sketch after this list for what such a shared record might look like
2) one independent tracking site with enough users to gain hub status such that everyone gains from high exposure to the science crowd.
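Purely as an illustration of the idea (there is no such standard or API that I know of; everything below is a made-up placeholder), a shared, DOI-keyed record could be as simple as:

```python
# Hypothetical, DOI-keyed record of the "conversation" around one paper.
# Field names and values are illustrative placeholders only.
paper_record = {
    "doi": "10.1000/example.0001",  # placeholder DOI
    "reads": 1523,
    "ratings": [4, 5, 3],
    "comments": ["..."],
    "blog_posts": ["http://example.org/a-post-about-the-paper"],
    "e_notebook_data": ["http://example.org/notebook/entry-42"],
}

# Any tracking site (Postgenomic, BPR3, a publisher) could then exchange
# records like this instead of each re-crawling and re-aggregating the web.
```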
"Bloggers for Peer-Reviewed Research Reporting strives to identify serious academic blog posts about peer-reviewed research by offering an icon and an aggregation site where others can look to find the best academic blogging on the Net."
It is all great except that it already exists and for a long time before BPR3. You can go to the papers section in Postgenomic and select papers by the date they were published, were blogged about, how many bloggers mentioned the paper or limit this search to a particular journal. I have even used this early this year to suggest that the number of citations increases with the number of blog posts mentioning the paper.
In this case I think that unless they really aim to develop something that is better that what Postgenomic already offers, the added competition will only fragment an already poor market. The value of a tracking site like Postgenomic, Techmeme or what BPR3 is proposing to create increases with user base in a non-linear way. This is what people usually refer to as network effects in social web applications. Increasing number of users make the sites more useful, reinforcing the importance of the social application. I suspect Postgenomic is not closed in any way to discussions. The code is even available here for re-use. So, why can't BPR3 and Postgenomic work this out and have a single tracking database and presentation. Let's say that BPR3 could be a mirror for the Postgenomics papers section (why re-invent the wheel).
I am not in favor of any particular site (sorry Euan :), what I think would be useful would be:
1 ) common standards for everyone (publishers, bloggers, etc) to carry information on published literature (number of times paper was read, ratings, comments, blog posts, e-notebook data, etc) attached to single identifier (DOI sounds fine)
2) one independent tracking site with enough users to gain hub status such that everyone gains from high exposure to the science crowd.
Thursday, November 01, 2007
The right to equivalent response
(disclaimer: I worked for Molecular Systems Biology)
The last issue of PLoS Biology carries an editorial about Open Access written by Catriona J. MacCallum. It addresses the definition of Open Access and what the author considers an "insidious" trend of obscuring "the true meaning of open access by confusing it with free access".
I agree with the main point of the editorial, that we should keep in mind the definition of open access and that the capacity to re-use a published work should have more value to the readers.
However, it is very unfortunate that the very first example MacCallum picks on is the Molecular Systems Biology journal, for the simple reason that they very recently changed their publishing policies to address exactly this issue. Authors can choose one of two CC licenses, deciding for themselves whether or not they want to allow derivatives of their work. See the post at the MSB blog. As explained in that blog post, the discussions about the licenses actually started several months ago and I think the final implementation is a very balanced decision on their part.
Thomas Lemberger, an editor at MSB, wrote a reply to the editorial that PLoS decided to publish as a response from the readers. These responses can only be seen if readers click the "Read Other Responses" link on the right side of the online version.
I am obviously biased, but for me this is not really giving them the right to an equivalent response. It would not have cost much to issue a correction or to publish the letter as correspondence, where it would have the same visibility as the editorial. This would signal that they are indeed committed to collaborating with other publishers and journals that support open access (as stated in PLoS's core principles).
Bio::Blogs #16, The one with a Halloween theme
The 16th edition of Bio::Blogs is now available at Freelancing Science. Jump over there for a summary of what has been going on this month in the bioinformatics-related blogs, if for nothing else than to have a look at the pumpkin. Thanks again to everyone that participated.
Paulo Nuin from Blind.Scientist has volunteered to host the 17th edition, which is scheduled to appear as usual on the 1st of December.
Thursday, October 25, 2007
Building an e-Science platform with Microsoft tools
(via Frank Gibson's Peanutbutter) Hugo Hiden, the technical director of the North-East Regional e-Science Centre (NEReSC) started a new blog where he will explore how to build an e-Science platform based on Microsoft technology. The initial post explains a little bit why he is doing this:
"The reason for this blog is, primarily, to document my experiences with writing a prototype e-Science research platform using Microsoft tools instead of the more traditional approach of fighting with Open Source. This way is easier, supposedly."
and also, what he aims to build:
"The task I have set myself is to recreate, at a basic level, the software being developed by the CARMEN project (http://www.carmen.org.uk). "
Let's see how it goes. Maybe they'll take suggestions later on :).
"The reason for this blog is, primarily, to document my experiences with writing a prototype e-Science research platform using Microsoft tools instead of the more traditional approach of fighting with Open Source. This way is easier, supposedly."
and also, what he aims to build:
"The task I have set myself is to recreate, at a basic level, the software being developed by the CARMEN project (http://www.carmen.org.uk). "
Let's see how it goes. Maybe they'll take suggestions later on :).
Sunday, October 21, 2007
Bio::Blogs #16 - call for submissions
The next edition of Bio::Blogs (the bioinformatics blog journal) will be hosted at Freelancing Science on the 1st of November. If you find anything this month that you think would be interesting to add to this edition, send an email to bioblogs at gmail.com by the end of the month. Anyone interested in hosting a future edition can also send an email to volunteer.
Friday, October 19, 2007
The Fortune Cookie Genome
*in an imaginary future*
Today is the day I get the sequencing results back. It is going to be interesting to finally have a glimpse of my very own genome. At the same time I am afraid of the potential disease associations they might find in there. In any case, I would rather know with time to do something about it. That's it... I exhale, open the main door to the building and walk up to the desk.
- Hi. I have an appointment with my genetic adviser.
- Oh yes, go up to the 3rd floor, they are expecting you.
I walk up a DNA-shaped stairway and into the office of one of the attending specialists. He was the one who convinced me of how useful it would be to purchase the GenomeSurvey(TM) package.
- I got your email. The results are in?
- Yes, we have your genome fully sequenced and uploaded into your service of choice. I see you have picked Google Health as your storage provider as part of the package.
- Is there any bad news? Will I have a serious disease soon?
- I understand your concern. There is really nothing too serious, but I will come to that in a moment. You may log in with your Google account here and I can guide you through some of the results.
I log in to my health page and I am confronted with the usual simple white-and-blue Google interface. I notice the addition of a genome tab and let my adviser tell me more about it.
- As you can see, your genome has been uploaded to your account. It has also been submitted as a John Doe genome to the NCBI personal genomics database. You may later choose to make your identity known and/or associate any of your personal history information with it.
- What about the disease associations?
- Yes. You can click here on the associations report to get a full listing of the phenotypic associations. You have a very healthy genome, no serious rare diseases. In your case the most important finding is that you have a 2% increased likelihood of developing a heart condition when you are above 60 and a 1% increased likelihood of having Alzheimer's disease after 65.
- That's it? 2%? 1%?
- Well, that is assuming no prior knowledge of your diet and other personal history, as established in the large HapMap version 10. From now on you may input all your diet and other personal information into the forms provided in Google Health on a daily basis, and as the information accumulates the service will automatically update the probabilities. As your adviser I should tell you that this information can be used by Google to provide you with better targeted advertisement in all other Google products.
- Right... is this it? Does the package include anything else?
- Of course! As I mentioned to you before, you can click here on the prescription tab to get informal advice on how best to deal with the associations that were found for you. You should always discuss these suggestions with your doctor before doing anything. By company policy I cannot read this information with you, since we are not liable for it. You can read it at home when you get there.
- Well, if there is nothing else, I will go.
- Thank you again for choosing our GenomeSurvey(TM) package I am happy to have served you and I hope that you feel more empowered about your own health. Be well.
I go home feeling a bit cheated but obviously happy to have no serious disorder on the horizon. I rush to my home computer to read the prescription that will help me prevent my heart condition and Alzheimer's. I click the GoogleDoctor(TM) button and a clip-like avatar jumps around on the screen. A computerized voice reads aloud the text appearing on the screen:
Dear Pedro. You can call me Clipy! I will be your assistant for any of your health needs. In order to decrease the likelihood of the negative phenotypes associated with your genome, please consider abiding by the following rules:
- Do a lot of exercise
- Eat a healthy diet
- Find balance in your life
*in an imaginary present*
- Snap out of it, what does yours say?
I look back to the small piece of paper in my hand and read:
- "You must find balance in your life", thats what it says.
- Well, these things are never wrong.
I drop the paper on my dish and finish eating the fortune cookie before leaving the Chinese restaurant with my friends.
- You won't believe what I thought of ...
Further reading
The Future of Personal Genomics (21 September 2007 Science)
How much information is there really in personal genomes and how much should patients know? Extra points for citing a post from Eye on DNA in a Science Policy Forum.
The Science and Business of Genetic Ancestry Testing (10th October 2007 Science)
A discussion surrounding results of genetic ancestry tests and the commercialization of these tests.
Google Says Its Health Platform Is Due In Early 2008 (17 October InformationWeek)
Google is still trying to build a platform to host the health related information. Microsoft already launched a service called HealthVault (read about it from Deepak).
BMC Medical Genomics (17 October BMC blog)
BMC will launch a journal dedicated to Medical Genomics, covering articles "on functional genomics, genome structure, genome-scale population genetics, epigenomics, proteomics, systems analysis and pharmacogenomics in relation to human health and disease."
Do-it-yourself science (17 October Nature)
This editorial links up several news pieces, opinions and articles in the last issue of Nature to ask the question: how much involvement can patient advocates have in genetics? The most impressive article is the story of Hugh Rienhoff, a trained geneticist and biotechnology entrepreneur who decided to personally research his daughter's disease (as in buying a PCR machine, etc.). (via Keith)
Common sense for our genomes (18 October Nature)
Steven E. Brenner explains the need for a Genome Commons. See discussion at bbgm.
Thursday, October 11, 2007
JournalFire

A new science related service called JournalFire has started. It was apparently created by a group of graduate students that are "frustrated with the current system of scientific discourse and publication". According to the initial blog post this service "provides a centralized location for you to share, discuss, and evaluate published journal articles. You, the scientists, are put in charge of determining what studies are significant and noteworthy."
I did not have a chance to test it since it is in private beta, but I have asked for an account. It looks like anyone with an .edu account should be able to access it already. It sounds promising, but as with many of these services, a lot depends on the capacity to attract a sufficiently large group of people to sustain interesting discussions. I will update the post if I get an account to test the service.
(I wonder if the people from OpenWetWare have anything to do with this)
Monday, October 01, 2007
Bio::Blogs #15
Welcome to the 15th edition of the bioinformatics blog journal Bio::Blogs.
I complained a while ago that there was very little expansion of the bioinformatics blogging community, but at least in the last couple of months it looks like this is changing. Although not necessarily started last month, here are three blogs that I only recently noticed: At the end of the day from Stephen Spiro (Spiro lab homepage), Paradoxus and Saaien Tist from Jan Aerts.
Not only are there more blogs, there are also many more examples of bloggers posting original ideas and research. Most people agree that being open about research should foster collaboration, but so far few people have really tried to do it. It is inspiring to read through these examples and try to imagine how we might be doing science in the next couple of years.
This month was also marked by the many conference reports that we had available to read and by the experiments of taking real life conferences into Second Life.
Keeping this short and to the point this edition of Bio::Blogs focuses on these conference reports and on the ongoing experiments of using blogs to post about original research. I hope this nudges more people to go ahead and give blogging and open science a try.
Conference Reports
Neil Saunders was at the ComBio2007 conference and posted his notes about it in a four part series (1,2,3,4).
Allyson from Systems Biology & Bioinformatics provided a very extensive coverage of Integrative Bioinformatics 2007. Read all about it in chronological order from parts 1 to 10 (1,2,3,4,5,6,7,8,9,10).
From my blog here are two blog posts on the FEBS workshop - "The Biology of Modular Protein Domains" (1,2). This was not really about bioinformatics but I hope it will be interesting from the perspective of what data is coming that requires good integration strategies.
I'll jump now from real life to virtual talks. Those creative people at Nature keep testing out the potential of the web to improve the interchange of knowledge. They kicked off a seminar series of digital talks on the Second Nature island within Second Life. The first talk by Philipp Holliger, entitled "New polymerases for old DNA", was about the engineering of new polymerases to amplify ancient DNA. Joanna Scott (working at Nature) has a very nice report on the talk in her blog.
Continuing with virtual talks, in the past month there were another 3 sessions of the SciFoo Lives On series, organized by Jean-Claude Bradley and also hosted in Second Nature. JC Bradley covered the sessions on his blog: Sept 4 - Definitions in Open Science, Sept 10 - Communicating Science with Video, Sept 24 - Open Notebook Science Case Studies. Additional coverage by other bloggers can be found via the wiki page.
Blog articles
What are some of the most frustrating bottlenecks in bioinformatics research ? Where do we really spend most of our time ? Given that we work with digitized information it should in principle be mostly about the ideas. Thinking about interesting questions, crossing information and interpreting the results. At least for me this is typically not the case. What usually takes time is gathering all the necessary information in a way that can be analyzed. Three blog posts this month discuss this problem. Hari Jayaram and Neil Saunders posted about the problems they faced when attempting to do conceptually simple tasks. In response Deepak wrote a thoughtful post on how science databases should focus also on making the information easily accessible via appropriate APIs.
From online discussions to great examples of open science: we start off with Jeremiah Faith's post where he describes an idea to determine the effect of sequence-level mutations on transcription, translation, and noise.
Michael Barton from Bioinformatics Zen created a new blog dedicated to posting about his research on gene expression in yeast. Jump over there to read the many blog posts that he has already there, to provide feedback and maybe find common ground for collaborations.
Also this month, RPM from Evolgen re-started his attempt to publish original research on the blog. He is trying to study the evolution of a duplicated gene in Drosophila. There are two posts covering the introduction to the problem (part 1, part 2).
The last post highlighted in this month's edition is from Benjamin M Good. He has been working on a tool called Entity Describer to add semantic controlled vocabularies to Connotea and he has posted the manuscript they will try to publish on his blog and in Nature Precedings (10101/npre.2007.945.2).
This is it for this month. As usual, if anyone is interested in serving as editor for a future edition, let me know by email.
I complained a while ago that there was very little expansion of the bioinformatics blogging community but at least in the last couple of months it looks like this is changing. Although not necessary started last month here are three blogs that I only recently noticed: At the end of the day from Stephen Spiro (Spiro lab homepage), Paradoxus and Saaien Tist from Jan Aerts.
Not only are there more blogs there are many more examples of bloggers posting original ideas and research. Most people agree that being open about research should foster collaboration but so far few people have really tried to do it. It is inspiring to read trough these examples and trying to imagine how we might be doing science in the next couple of years.
This month was also marked by the many conference reports that we had available to read and by the experiments of taking real life conferences into Second Life.
Keeping this short and to the point this edition of Bio::Blogs focuses on these conference reports and on the ongoing experiments of using blogs to post about original research. I hope this nudges more people to go ahead and give blogging and open science a try.
Conference Reports
Neil Saunders was at the ComBio2007 conference and posted his notes about it in a four part series (1,2,3,4).
Allyson from Systems Biology & Bioinformatics provided a very extensive coverage of Integrative Bioinformatics 2007. Read all about it in chronological order from parts 1 to 10 (1,2,3,4,5,6,7,8,9,10).
From my blog here are two blog posts on the FEBS workshop - "The Biology of Modular Protein Domains" (1,2). This was not really about bioinformatics but I hope it will be interesting from the perspective of what data is coming that requires good integration strategies.
I'll jump know from real life to virtual talks. Those creative people at Nature keep testing out the potential of the web to improve interchange of knowledge. They kicked-off a seminar series of digital talks in the Second Nature island withind Second Life. The first talk by Philipp Holliger, entitled "New polymerases for old DNA" was about the engineering of new polymerases to amplify ancient DNA. Joanna Scott (working at Nature) has a very nice report on the talk in her blog.
Continuing on with virtual talks, in the past month there were another 3 sessions of the series SciFoo Lives On, organized by Jean-Claude Bradley and hosted also in Second Nature. JC Bradley covered the sessions on his blog: Sept 4 - Definitions in Open Science,Sept 10 - Communicating Science with Video, Sept 24 - Open Notebook Science Case Studies. Additional coverage by other bloggers can be found via the wiki page.
Blog articles
What are some of the most frustrating bottlenecks in bioinformatics research ? Where do we really spend most of our time ? Given that we work with digitized information it should in principle be mostly about the ideas. Thinking about interesting questions, crossing information and interpreting the results. At least for me this is typically not the case. What usually takes time is gathering all the necessary information in a way that can be analyzed. Three blog posts this month discuss this problem. Hari Jayaram and Neil Saunders posted about the problems they faced when attempting to do conceptually simple tasks. In response Deepak wrote a thoughtful post on how science databases should focus also on making the information easily accessible via appropriate APIs.
Moving from online discussions to great examples of open science, we start off with Jeremiah Faith's post, where he describes an idea to determine the effect of sequence-level mutations on transcription, translation, and noise.
Michael Barton from Bioinformatics Zen created a new blog dedicated to his research on gene expression in yeast. Jump over there to read the many posts he has already written, provide feedback and maybe find common ground for collaborations.
Also this month, RPM from Evolgen re-started his attempt to publish original research on the blog. He is trying to study the evolution of a duplicated gene in Drosophila. There are two posts covering the introduction to the problem (part 1, part 2).
The last post highlighted in this month's edition is from Benjamin M Good. He has been working on a tool called Entity Describer that adds semantic controlled vocabularies to Connotea, and he has posted the manuscript they plan to publish both on his blog and in Nature Precedings (10101/npre.2007.945.2).
This is it for this month. As usual, if anyone is interested in serving as editor for a future edition, let me know by email.
ICSB 2007
I am attending the eighth International Conference on Systems Biology (ICSB 2007) in Long Beach. I typically prefer smaller conferences, but this one is probably the best to get an overview of recent progress in systems biology. As expected the program has a broad scope, and unlike last year's meeting there are no parallel sessions, so I will have a chance to hear more from other fields. Any other bloggers attending?
Saturday, September 29, 2007
Modular protein domains (an overdue wrap-up)
I did not even cover a third of the Modular Protein Domain workshop in my previous blog post, and I will not attempt to do it now after so much time. The organizers were clearly concerned about keeping the information within the circle of participants, so I will just post some of the general impressions that I took from the meeting.
Specificity profiling in high gear
There were several sessions dedicated to particular protein domains (SH3, SH2 and PDZ in particular), and for all of these there are several projects under way (or mostly completed) to determine the binding specificity of a large number of these domains (although in different species) using phage display, spotted peptides and other methods. We should project ahead and start planning what to do with this information: how to combine it to predict pathways and to build pathway models with dynamical information. The work of Rune Linding is a very good start in this direction (see NetworKIN).
Given that the methods are set up, I suspect that the emphasis might now shift to exploring the evolution of binding specificities and the impact of disease-causing mutations (i.e. profiling the binding specificities of domain variants).
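For readers less familiar with what these specificity datasets end up looking like, they are usually summarized as position-specific scoring matrices over short peptides. The toy Python below is my own generic sketch (the matrix values and the crude PxxP-like motif are invented for illustration, not taken from any of the projects mentioned above) of how candidate peptides get ranked against such a matrix:

    # Generic sketch: rank peptides against a toy position-specific scoring
    # matrix (PSSM) describing a peptide-binding domain's preference.
    # All numbers here are invented for illustration.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def score_peptide(peptide, pssm):
        """Sum per-position scores; higher means better predicted binding."""
        return sum(pssm[i][aa] for i, aa in enumerate(peptide))

    # Toy 4-position matrix with a crude SH3-like preference for P-x-x-P
    toy_pssm = [{aa: 0.0 for aa in AMINO_ACIDS} for _ in range(4)]
    toy_pssm[0]["P"] = 2.0
    toy_pssm[3]["P"] = 2.0

    for pep in ["PAAP", "PAAG", "GAAG"]:
        print(pep, score_peptide(pep, toy_pssm))

Scores of roughly this kind, derived from the phage display or peptide array data mentioned above, are what network-level prediction methods then build on.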
Good integration of different methods
Compared to the same meeting two years ago, I had the impression that there was better integration of different approaches (biochemical, structural, computational, etc). A particularly good example was the work of Michael B. Yaffe. There were plenty of structural talks (probably a bit too many), but I found particularly interesting the work of Ivan Dikic, who presented extensive novel work on ubiquitin-binding domains, and Charalampos Kalodimos, who presented his lab's work on potential functional roles of proline isomerization (PubMed).
The computational part was well represented too, and it was fun to see Gary Bader again and to get to know Philip Kim.
I hope to be there again in two years' time to see how the field has changed.
Bio::Blogs #15 - call for submission
Since there were no volunteers :) I will be hosting the 15th edition of Bio::Blogs here on the blog. I will be gathering some posts from around the web on bioinformatics and other science-related topics from the last month and will post about them on the 1st of October. Suggestions are more than welcome. Please email any links to interesting blog posts to bioblogs at gmail dot com.
On a personal note, I have defended my PhD :). This mostly explains the low volume of blogging lately.