Monday, November 18, 2013

IPRG 2012: What did we learn?

In general, the ABRF (the Association of Biomolecular Resource Facilities) has some awesome ideas, and the IPRG 2012 study is no exception.

In this study, synthetic peptides were produced carrying common modifications on their respective amino acids, including phosphorylation, acetylation, methylation, sulfation, and nitration.  The synthetic peptides were spiked into a yeast tryptic digest.  The anonymous participants then ran these samples and attempted to identify the PTMs using a variety of LC-MS/MS and data processing conditions.  While the number of identified spectra was one measured metric, the real focus of the study was how efficiently the modified peptides were identified and how accurately those modifications were localized.

The results are definitely interesting across the board.  One place of particular interest is a breakdown in the paper of the number of peptides, both consensus and unique, identified by each research group.  The clear winner was a group that used Byonic as the primary search engine.  Surprisingly, the one researcher who used Proteome Discoverer/Sequest had the lowest number of identified peptides in the study.  Having personally compared PD to every one of the search engines used in this study on at least a few, if not numerous, datasets, I have to think that this group had issues with either their instrumentation or their experimental design.  Nothing short of that would explain the discrepancy.  While it would be interesting to know for sure what happened, finding out would negate a good bit of the anonymity of the study.

Another place where Byonic really showed its power was in identifying the known modifications and placing them correctly.  Interestingly, nearly all of the instruments and methodologies had trouble with one modification in particular: tyrosine sulfation.
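
Part of the reason sulfation is so troublesome, as I understand it, is that its mass shift is nearly identical to phosphorylation (they differ by less than 0.01 Da), on top of the sulfo group being labile during fragmentation.  Here is a minimal Python sketch, my own illustration rather than anything from the paper, that computes the monoisotopic mass shifts for the modifications used in the study from their elemental compositions:

```python
# Monoisotopic mass shifts for the modifications in the IPRG 2012 study,
# computed from elemental compositions (standard monoisotopic masses in Da).

ELEMENT_MASS = {
    "H": 1.0078250319,
    "C": 12.0,
    "N": 14.0030740052,
    "O": 15.9949146221,
    "P": 30.97376151,
    "S": 31.97207069,
}

def delta_mass(composition):
    """Sum the monoisotopic mass of a composition like {'H': 1, 'P': 1, 'O': 3}."""
    return sum(ELEMENT_MASS[el] * count for el, count in composition.items())

mods = {
    "Phospho (HPO3)":  {"H": 1, "P": 1, "O": 3},
    "Sulfo (SO3)":     {"S": 1, "O": 3},
    "Acetyl (C2H2O)":  {"C": 2, "H": 2, "O": 1},
    "Methyl (CH2)":    {"C": 1, "H": 2},
    "Nitro (NO2 - H)": {"N": 1, "O": 2, "H": -1},
}

for name, comp in mods.items():
    print(f"{name:18s} {delta_mass(comp):+10.5f} Da")

# Phosphorylation and sulfation differ by only ~0.0095 Da, which is very
# hard to distinguish without tight precursor mass tolerances.
diff = delta_mass(mods["Phospho (HPO3)"]) - delta_mass(mods["Sulfo (SO3)"])
print(f"phospho - sulfo = {diff:.4f} Da")
```

Running this gives +79.96633 Da for phosphorylation versus +79.95681 Da for sulfation, a gap of about 9.5 mDa, which goes a long way toward explaining why so many of the workflows stumbled on that one modification.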

Now, I want to throw out my cautious opinion on this study.  I definitely see the value in comparing lab to lab, particularly when reproducibility is such an active criticism of our field.  But it is worth thinking about the small sample size and the huge array of variables this study is taking a swing at.  Different instruments, LC gradients, column packing materials, ionization sources and their relative efficiencies, processing schemes, and so on all contribute to these results.

Is it valuable to know where we stand in terms of our collective ability to accurately assess PTMs?  Absolutely, and this study is certainly a valuable snapshot of where we are.  But we should be slow to make judgments based on this small sample size and its intrinsic variability.

You can read the paper, currently in press, here.
