Wednesday, November 30, 2016

Thermo MSF Parser!



Wow! Am I missing more stuff all the time? Starting to feel like an NBA referee. (The Will Smith video above is hilarious, btw).

Proof? Check out this cool paper from like 5 years ago!


It is about this cool tool that you can use to do downstream analysis of PD 1.3 and 1.4 results. They hint that they are working on new versions as well.

So...all that stuff I've written on here about using the free SQLite tools...well...yeah...that still works, too....
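Side note, in case you've never poked at one of these yourself: an .MSF file is just a SQLite database, so anything that can speak SQLite can open it. Here's a minimal sketch in Python (the file name is hypothetical, and I'm deliberately only listing the tables rather than assuming what any particular PD version calls them):

```python
import sqlite3

# An .MSF result file from Proteome Discoverer is a plain SQLite database,
# so the standard library can open it directly -- no special parser needed.
conn = sqlite3.connect("my_results.msf")  # hypothetical file name

# List every table in the file; table names vary by PD version,
# so this makes no assumptions about what's inside.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
for (name,) in tables:
    print(name)

conn.close()
```

From there you can browse whatever tables your PD version writes and pull them straight into your downstream tool of choice.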

Tuesday, November 29, 2016

A new genomics based scored protein-protein resource!

Several groups are pounding away, systematically working out protein-protein interactions -- at the protein level. This new paper in Nature Methods...



thinks that if we mine the existing data and use all the proteins that have phylogenetic relationships with one another we can get to the answer of who interacts with whom.

The results are impressive. And it is worth noting that even though they didn't even cite the BioPlex resource, 57% of the data points they incorporated came from direct experiments with human proteins.

BTW, InWeb_IM is their resource in the Venn Diagram above, so even if they aren't right about all of them, it is a whole lot more protein-protein interactions for us to look through!

You can access InWeb_IM's nice web interface directly here.

You might not want to directly Google it, cause the site it took me to isn't the one in the paper -- and looks a little scammy!

Monday, November 28, 2016

CSF-PR 2.0 -- Now with more cerebrospinal fluid PTMs!


The CSF-PR resource has been around for a couple of years, but needed a revamp since...you know...something like 90% of the proteomic data EVER generated has shown up in the last 4 years or so. (Do I have that pie chart on here? I LOVE that pie chart!)

For details on the updates check out this new one from MCP here!


There are some other databases out there on CSF. This one stands out cause of the following criteria:

Has to be LC-MS/MS from living humans
Has to contain at least 20 patients per study
The data from the study has to be publicly accessible AND in some way be open to quantification between 2 disease groups or between 1 disease group and 1 control group with an n greater than 3 in both groups.

Unsurprisingly, considering how relatively few people happily line up for CSF withdrawals, they whittle a relatively large number of published studies down to a much smaller set of super high quality (and medically interesting!) studies.

You can directly access this resource here!


Sunday, November 27, 2016

FDR calculations applied to Orbitrap Metabolomics data!


Not more metabolomics...geez...

Yes, I know, this belongs somewhere else, but I promise it is really super cool. (Link to paper here!)


From our perspective, it probably seems pretty straight-forward, right? If you've got MS/MS data that you are saying is this small molecule, maybe you'd want to do some sort of a false discovery measurement, right?!?  And...maybe if you've jumped head-first into doing metabolomics cause it's super easy and interesting, you might be put off a little at first cause you don't have FDR measurements.

Turns out it isn't quite so easy with small molecules, thanks to how they don't fragment as predictably as peptides do, and we can't just move down the line to the next truly unique peptide sequence -- because there isn't a second peptide. You get one shot at identifying and quantifying it.

This paper introduces two ideas -- JumpM and MISSILE -- that are a little incongruous on their own, but together they assemble a full methodology for how they think metabolomics should be done: with heavy standards, Orbitrap data, and target-decoy based FDR. And...it is honestly way smarter than the way I do it....
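(JumpM and MISSILE are their thing, not mine -- but if the target-decoy concept itself is new to you, here's a bare-bones Python sketch, with completely made-up scores, of how decoy hits get turned into q-values.)

```python
def target_decoy_qvalues(hits):
    """hits: list of (score, is_decoy) tuples; higher score = better match.
    Returns a q-value for each hit, in the same order as the input."""
    order = sorted(range(len(hits)), key=lambda i: hits[i][0], reverse=True)
    decoys = targets = 0
    fdrs = []
    for i in order:
        if hits[i][1]:
            decoys += 1
        else:
            targets += 1
        # estimated FDR if we accepted everything down to this score
        fdrs.append(decoys / max(targets, 1))
    # q-value = best (lowest) FDR achievable at or below this score
    qvals = [0.0] * len(hits)
    running_min = float("inf")
    for rank in range(len(order) - 1, -1, -1):
        running_min = min(running_min, fdrs[rank])
        qvals[order[rank]] = running_min
    return qvals

# Toy example: three target matches and one decoy match
print(target_decoy_qvalues([(95, False), (60, True), (88, False), (40, False)]))
# -> [0.0, 0.333..., 0.0, 0.333...]
```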


Saturday, November 26, 2016

Glycan analysis of protists! And other cool new Springer Protocols


Honestly, just leaving this here to remind myself to grab this new book at the library! But, how awesome does this new protocols book look?!?


Okay...actually...another new Springer Protocols book just rolled out as well and it should be on my "to borrow" list so I can close this tab on my browser.  Check this out!

Want a taste of the sweet stuff that is in this one? Check this protocol out!

Three browser tabs closed in one post? You're...welcome...?


Tuesday, November 22, 2016

Polymorphic Peptide Variants and Propagation in Spectral Networks


Subtitle: Why everyone needs to take a whack at proteomics data!

Need a paper to mull on while avoiding discussing politics with your family this holiday weekend? Think on this one!




What is it? Wait, you can't tell from the title? Come on!

In all seriousness, it is a really unique (to me, at least) way of thinking about what that unmatched spectra might be in that organism you don't have a good database for. And it might just be brilliant. I can't tell.

I gave it a good read and then thought about it in my car while I enjoyed the combination of normal D.C. and possibly early holiday traffic(?), and this is what I think is going on. (And I might totally have this wrong.)

Imagine we're starting off with this organism that no one has sequenced before and we need to do proteomics on it. The mass spec side is the same as always (as long as it wasn't hard to lyse or whatever, of course), but then we've got no database for it. We could de novo it or use BICEPS, but these are both going to be super computationally expensive, full of false discoveries, or require that you spend 2 years studying Python to use them (this approach may fail in one of these regards as well, I'll have to check).

Spectral networks go sideways here. What if you could lower your bioinformatic load (what?!?) by running more samples? They go the easy route here and take 3 bacteria and do dd-MS2 on them. Then they take the spectra that are the most similar (by MS/MS fragments) and network them together. In this way you can 1) find the most important features and 2) start to limit what you're going to have to search.

I know this is wacky. Who has spare mass spec time?!? To this, I answer -- who can find a good bioinformatician for that salary that you can't seem to find a good mass spectrometrist for? Nobody, that's who!


Seriously -- what choice do we have when we're told to get some proteomics data on this organism? Wait and hope the genomics people are considering it a priority, will sequence it this year, and will annotate it by 2020?

Example set: They start with 3 species (or strains) of Cyanothece that biofuels people are seriously interested in that someone has done proteomics on. Serious proteomics:

Start with:
 >1e6 spectra/organism
Cluster the completely homologous peptides (identical ones from each run AND organism)
 = heck, if you search those conserved ones you're gonna have a massive reduction in search space (but you're going to miss what makes that organism different from the others)
Cluster the MS/MS spectra that only differ by one mass shift. For example, the y ions are awesome till you get to the high mass ones and then each one is off by 8 in species 2 and 14 in species 3. (Or whatever.) Then move on to the next pairing!

As a side effect here, btw, you're going to get a quick understanding of evolutionary relatedness -- without any genomic information on these guys! Most of these MS/MS spectra are the same and you didn't get the samples mixed up? These things are related for sure!

In this run through they break their spectra into something like 16,000 networks. So....this is just a little more complex than the example 2 paragraphs up, but it is for illustration purposes only.

But check this out -- you now have these networks, where spectrumA is equal to spectrumB (+8 Da at y7/y8/y9) and spectrumC is equal to spectrumB (minus something). Now that it is all linked, you dump in some matched spectral data -- some stuff that is ID'ed and perfect. The MS/MS spectra are linked to IDs and it falls together like dominoes.
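(To be clear -- this is not the authors' spectral networks code; their actual tools are linked at the bottom of this post. This is just a toy Python sketch of the propagation idea: once spectra are linked by pairwise mass offsets, an ID seeded on one node can walk along the edges. Every spectrum name, sequence, and offset here is made up.)

```python
from collections import deque

# Toy spectral network: edges link spectra whose MS/MS patterns match
# except for a single mass shift (in Da). All names are made up.
edges = {
    ("spectrumA", "spectrumB"): +8.0,   # e.g., shifted at y7/y8/y9
    ("spectrumB", "spectrumC"): -14.0,
    ("spectrumC", "spectrumD"): +0.0,   # essentially identical spectra
}

# Seed annotations from spectra that matched the database directly.
ids = {"spectrumA": "PEPTIDEK"}

# Build an undirected adjacency list carrying the signed mass offset.
graph = {}
for (a, b), delta in edges.items():
    graph.setdefault(a, []).append((b, +delta))
    graph.setdefault(b, []).append((a, -delta))

# Breadth-first propagation: every unidentified neighbor inherits the
# seed sequence plus the accumulated mass offset it needs to explain.
queue = deque((node, 0.0) for node in ids)
annotated = {node: (ids[node], 0.0) for node in ids}
while queue:
    node, offset = queue.popleft()
    seq = annotated[node][0]
    for neighbor, delta in graph.get(node, []):
        if neighbor not in annotated:
            annotated[neighbor] = (seq, offset + delta)
            queue.append((neighbor, offset + delta))

for spectrum, (seq, offset) in annotated.items():
    print(f"{spectrum}: {seq} with a net shift of {offset:+.1f} Da")
```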

Does it work? They probably wouldn't have sent it to MCP if it didn't, but it definitely looks like it works. I find it makes more sense to me the more I think about it....

The pipeline is more complex than I described.


...but all the tools are freely available here. 

Monday, November 21, 2016

Proteomic analysis of patients with cerebral and uncomplicated malaria


Malaria sucks. I'm reading this book now:


...and it put how bad malaria sucks into perspective. One expert she references throws out a figure that more than half the human beings who have ever lived may have died...of malaria... If you are into a super depressing read on how a gross parasite has shaped our history, mostly by killing us by the millions and billions, I couldn't come up with a better suggestion.

For a more uplifting tale, I suggest this nice recent paper from Bertin et al.  In this work, these authors take a proteomic run at some patient sera with uncomplicated (bad) and cerebral (really really really bad) Plasmodium falciparum (the species that generally kills you the first go 'round) malaria. They used label free quan, an Orbitrap Velos, some clever bioinformatic tricks (compound databases with lots of the Var sequences), and sweet downstream statistics to try and find some differences.

While there are tons of challenges with this monster of a disease, like crappy databases and poor annotation and mutations all over the place, they are still able to find some really interesting conclusions. Several of the differentially regulated proteins they find appear linked and may even work together in functional complexes.


Sunday, November 20, 2016

Ion mobility Orbitrap in development at PNNL?!?!


I'm just gonna put this here until PubMed lists the darned thing so I can access it through my library. 

After the reader responses to a previous post I constructed regarding ion mobility, it is clear that I don't understand this technology enough to really go on about it (though...that doesn't always stop me.)

Again, I don't know the difference, but I did get to see this ion mobility Q Exactive that ExcellIMS has...

...that they were using quite successfully to separate petroleum products. And I also got to see this monster in person (cause it is here in Baltimore!)


...so it isn't a particularly novel idea, but PNNL seems to know something about ion mobility and I'm gonna guess it is something special.

Edit: Changed the title. I thought the original came off more sarcastic than I meant it to.

Saturday, November 19, 2016

"Tag count" RNA-Seq methodology applied to spectral counting


I have some mixed feelings about this paper, but I seriously think it is worth sharing. It is from Branson and Freitas and in this issue of JPR here.

In this study, they take a huge historic dataset from PRIDE and borrow an R package from BioConductor that was designed for RNA-Seq based quan and apply it to spectral counting. It turns out that it is stinking fast on a normal desktop PC and it appears to work very well on this dataset. All good news!

I guess my hesitation comes from my normal issues with spectral counting -- that it only really works well on huge datasets with lots and lots of PSMs per protein. It would be nice to see this applied to another set that hasn't been fractionated and repeated so many times. It is a seriously interesting concept, though!

The authors take an interesting approach to the code as well. Unless I'm overlooking it, they didn't publish it anywhere. Instead, they produce a Supplemental text file with the example R Script.

Initially, this made me laugh -- partially cause I'm a little under half awake -- but then I realized how thorough the script is!


I'll be darned if it isn't complete enough to just copy, paste in, and run, because it does reference all the prerequisites. This probably sounds dumb, but...people aren't always this thorough with free resources that they put out there....and when they are, it is something that should be appreciated!
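(Their script is in R -- go grab the supplemental file. Purely to illustrate the tag-count idea, here's a toy Python sketch of the library-size normalization that these RNA-Seq style packages do before their real statistics kick in. Every number below is made up.)

```python
import math

# Toy spectral-count table: protein -> counts in each run.
# Two control runs and two treatment runs; numbers are invented.
counts = {
    "ProteinA": [120, 110, 240, 230],
    "ProteinB": [45, 50, 44, 48],
    "ProteinC": [8, 6, 30, 28],
}
groups = ["ctrl", "ctrl", "treat", "treat"]

# Library-size normalization (counts per million), roughly what the
# RNA-Seq style packages do before their fancier statistics.
totals = [sum(run[i] for run in counts.values()) for i in range(len(groups))]
cpm = {
    prot: [1e6 * c / t for c, t in zip(row, totals)]
    for prot, row in counts.items()
}

for prot, row in cpm.items():
    ctrl = [v for v, g in zip(row, groups) if g == "ctrl"]
    treat = [v for v, g in zip(row, groups) if g == "treat"]
    # +1 pseudo-count keeps the log defined for very low counters
    log2fc = math.log2((sum(treat) / len(treat) + 1) / (sum(ctrl) / len(ctrl) + 1))
    print(f"{prot}: log2 fold change = {log2fc:+.2f}")
```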


Friday, November 18, 2016

Why didn't I get any quan values in Proteome Discoverer for this thing?!?!?

Wait. Do you really have time for this before work?


(LOL! Any picture of this guy makes me happy, but this is a favorite)

When you're running PD you often run your nodes down two distinct pipelines -- one that is your peptide ID pipeline and one that is your quantification pipeline.


I'm no expert in what is going on behind the scenes here in the magic binary land, but I find it really useful, from a logical sense, to keep this in mind. Our end report is going to bring it all back together, but these nodes are functioning, for the most part, as distinct and separate executables. The results are all brought back together into the SQLite database (is it still SQLite in PD 2.2? I think so, but I'm not 100% sure) that is our .MSF and .PDRESULT file.

As such -- it is completely possible for our friends Sequest and Percolator to find something that the other side of the aisle did not. Honestly -- dig deep -- it happens quite a bit.

Check this out --


This is an iTRAQ run from a friend who studies possibly one of the hardest things (that isn't a plant) that you can do proteomics on.  (14,000 MS/MS events -- 90 PSMs in this file, seriously.)  But check this out -- in the processed data I can find thousands of spectra with iTRAQ quan -- but no IDs.

This can only occur when we've seriously separated out the 2 processing pathways.

This isn't the most common question I get about PD, though!  The question that comes in is -- wait, I ID'ed this! Where's the quan?

First off -- this is gonna be significantly less common in reporter ion quan. If you've got a good fragmentation spectrum, chances are you're going to have reporter ions down there. Even if you spike in a good heavy labeled standard -- like the PRTCs -- you'll probably see reporter ions. (Thought I had proof of this, but I can't find it right now.) This is isolation interference. We're never fragmenting a 100% pure population of just our ion of interest. Other stuff sneaks in. But it does happen.
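(In case you've never thought about what that isolation interference number actually is: PD's exact math is its own business, but the general idea is the fraction of signal inside the isolation window that doesn't belong to your precursor's isotope cluster. A toy Python sketch with made-up peaks -- definitely not PD's implementation:)

```python
def isolation_interference(window_peaks, precursor_peaks, ppm_tol=10.0):
    """window_peaks: list of (m/z, intensity) inside the isolation window.
    precursor_peaks: expected m/z values of the precursor isotope cluster.
    Returns percent of window intensity NOT attributable to the precursor.
    (Illustration of the concept only -- not PD's actual calculation.)"""
    total = sum(i for _, i in window_peaks)
    if total == 0:
        return 0.0
    mine = 0.0
    for mz, intensity in window_peaks:
        if any(abs(mz - p) / p * 1e6 <= ppm_tol for p in precursor_peaks):
            mine += intensity
    return 100.0 * (1.0 - mine / total)

# Toy 2 Da isolation window around a 2+ precursor at 650.32 m/z
window = [(650.32, 1.0e6), (650.82, 6.0e5), (651.32, 2.0e5), (650.05, 4.0e5)]
cluster = [650.32, 650.82, 651.32]  # expected isotopes, 0.5 m/z apart at 2+
print(f"{isolation_interference(window, cluster):.0f}% interference")
```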

If you see something like this, you'll want to look at the Peptide Group level for the "Quan Info" column. This will give you a vague statement regarding why you didn't get quantification.



It is significantly more common to find ID with no quan in the Event Detector MS1 quantification. (SILAC and PIAD). Example...

Check out this SILAC dataset and the stuff we find waaaaay down in the noise. We get some info on why there isn't quan when we look at the Peptide Group Level: "Not used" and "Excluded by Method".

To figure this out, you need to check out this troubleshooting chart from the manual.


This is what PD considers behind the scenes. In PD 2.x we've got control over some of these parameters (in the MSF and Quantification nodes). It might take some detective work to determine what you are looking for. But the Quan Info columns can help you chase it down.

It is a little more manual in the PIAD workflow. Example...


We've got a protein ID with 55% coverage and no quan? What? As the highlighter and misspelled word indicate, you see that this protein only has one Unique peptide. What we need to do is find out why that peptide didn't get quan.

If we check that protein and then Expand the Associated Tables.... (click to see full-size)


We can find that 1 PSM that is unique to just that protein...If we go one layer down...we find an absolute kick in the pants. Remember when you built that method and you said "No Quantification" (cause in PD 2.x the PIAD isn't considered a "real" quan method)?

PD 2.2 has "real" label free quan, as do the PeakJuggler and OpenMS community nodes. But PIAD doesn't get some of the troubleshooting benefits that SILAC does.

But we can figure out why this thing didn't get quan.


If we highlight this peptide and then show "the spectra used for quantification" and "show the XIC" we might get to the answer. Check out the XIC at the bottom.  Even with a 6ppm mass tolerance cutoff, this is an ugly peak. If we look at the precursor, we're seeing an awful lot of interference here. (It says 64% isolation interference...which...honestly, is a measurement of something else entirely, but is useful for illustration purposes here.)

The Event detector is seriously strict. Remember, the maximum cutoff you can put in is 4ppm.
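(If it helps to see what a ppm tolerance actually does to an XIC, here's a stripped-down Python sketch -- made-up scans, and obviously not the event detector's real code -- where a co-eluting peak about 14 ppm away gets excluded at a 4 ppm cutoff:)

```python
def extract_xic(scans, target_mz, ppm_tol=4.0):
    """scans: list of (retention_time, [(m/z, intensity), ...]) MS1 scans.
    Sums all intensity within ppm_tol of target_mz in each scan.
    A toy version of what an XIC trace with a ppm cutoff is doing."""
    tol = target_mz * ppm_tol / 1e6
    xic = []
    for rt, peaks in scans:
        signal = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
        xic.append((rt, signal))
    return xic

# Made-up MS1 scans: a clean peak at 650.3200 plus a co-eluting
# interference about 14 ppm higher that the tight tolerance rejects.
scans = [
    (10.1, [(650.3201, 2.0e5), (650.3290, 1.5e5)]),
    (10.2, [(650.3199, 8.0e5), (650.3288, 3.0e5)]),
    (10.3, [(650.3202, 3.0e5), (650.3291, 4.0e5)]),
]
for rt, intensity in extract_xic(scans, 650.3200, ppm_tol=4.0):
    print(f"RT {rt:.1f} min: {intensity:.2e}")
```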

Check out another peptide (the next one down the list) and what it does look like:


Again...isolation interference shouldn't be my metric, but it's only 22% for this one and it shows in the peak. The PIAD has no problem working with this one.

I guess the moral of the story is -- PD 2.1 quan has a logical pipeline and you can almost always figure out why you got an ID and no quan. Honestly, it is probably harder to figure out why you got quan but no ID.




Thursday, November 17, 2016

Excellent review on proteome complexity and balance!


Are we focusing just a little too much on the trees and maybe missing the forest? I dunno, but this review is really thought-provoking!



There is a lot of good stuff in here -- like arguments for why having an array for 30,000 transcripts isn't as good as looking at the proteome -- and clear descriptions of where all of our "proteoforms" are coming from.

I don't want to steal all the highlights from a review that is this good, but I seriously just printed out Figure 1 and hung it on my wall so I can just look at it....


...cause I need more time to think on this.

You should check this review out, for real, cause I think we've already got the capabilities to fully assess a lot of the mechanisms they describe here in terms of proteomic imbalances hiding in our RAW files -- and we're just not systematically looking for them...


Grammarly? Blog grammar upgrade?


I saw a commercial for this thing over the weekend -- and I'm gonna give it a go. Hopefully, it'll mean an upgrade in the clarity and quality of my ramblings here.

If you don't know about this and English grammar isn't the thing you're best at -- maybe you come from an area of the U.S. that is famous for the poor quality of its public schools -- you might want to check out this free Chrome extension!

(The pay-for version appears to be much more powerful, but at $12/month the free one is going to have to prove itself first.)

Wednesday, November 16, 2016

Proteomics of microdissected neurons!


I totally read this whole paper and found out it was from last year!  What?!? Geez... Fortunately, it is still awesome!


You can direct link to it here.

While the paper is definitely geared toward a specific nefarious disease, it is a beautiful exercise in optimization of how to extract the most proteins/peptides out of a tiny amount of fixed tissue. Cause...if you can take a block of wax that has something as tiny as neurons in it and detect the proteins that ought to be there, you're doing it right!

The procedure isn't trivial, and the authors are quick to warn you -- the laser microdissection needs to be done really really well. Sample preparation methods and instrument methods need to be employed that focus on minimizing peptide loss above all else.

And...they pull it off (of course!) and get almost 2,000 protein IDs from some areas they cut out with the laser. Others aren't nearly as high...but honestly, maybe that isn't a limitation of their methodology; it may really be biology. When you are looking at areas of anatomy this specialized, should we really expect expression of the whole proteome?

Some details -- the LC-MS was single shot (no fractionation, but technical replicates when possible) analysis on 50cm EasySpray columns onto a Q Exactive. The QE was geared up for sensitivity -- allowing up to 120ms for MS1 fill time and up to 250ms at the MS/MS level. They were willing to take a massive hit in cycle time, if necessary, to get good fragmentation spectra.

While it seems like I'm focusing on this as if it's just a methods paper, they did serious runs on tissues from Alzheimer's diseased and control brains and have all the differential data in the open access supplemental info!

Wait -- this deserves an extra sentence or two -- the supplemental tables are so freaking logical. I might be biased since I've been looking at PD output tables for...a long time...but they are so smart. For example, table S4 has the protein IDs charted against the surface area of the tissue that was dissected out!  S2 is the actual comparison between the normal and diseased tissues and whether the proteins were detected or not.

This is a killer little study showing how much we can get from those little waxy blocks of tissue those pathologists have been stockpiling!

Tuesday, November 15, 2016

Pulsed SILAC + Ribosome profiling = New Variation on proteogenomics!

(Image on Ribosome profiling (or Ribo-Seq) borrowed from this review article)

Ribo-Seq is a powerful technique the genomics world has now. It gets them even closer to the proteome by characterizing what messenger RNAs (mRNAs) are currently protected by ribosomes. They are protected by the ribosomes because they are, right that second, making some proteins!

I'm a little foggy still, but I think they pull out all the RNA like they would for RNA-Seq but then degrade everything that is free floating. Then they just have to destroy the ribosomes to release the protected mRNA.

In bacteria, the central dogma applies pretty clearly. The regulation systems are pretty simple...cause you've (normally) got one tiny chromosome. And it's been shown that RNA-Seq and proteomics line up pretty great in bacteria, but we're a little more complicated.


Check these brand new results out!

This team uses pulsed SILAC (pSILAC!) and this technology together to assess stress response in human cells treated with bortezomib. It's a proteasome inhibitor that is used for some cancers. Actually, on its own this drug is super fascinating. Some cancer cells protect themselves from immune response by making tons of proteasomes and just eating up the immune response. This drug drops a boron right in the catalytic site of one of the major proteasome proteins and shuts 'em down.  Now...proteasomes are pretty important to our cells' functioning, so this creates a lot of stress!  Hence this paper!



Here is the setup for the main experiment in the paper. On the proteomics side, they do something interesting with the pulsed SILAC. They use HCD and high/high mode on an Orbitrap Velos to develop their SILAC SRM transitions for their heavy and light peptides. Then they do the rest of the study with targeted SRMs. I've never tried this approach, but it does seem kind of smart. They pick 4 transitions for each peptide to monitor -- and since the heavy label should be on the y ions, they should be reasonably easy to get to. With 4 separate SRMs, it's hard to argue about interference effects, even on a device with quads that can only isolate 0.7/1.0 Da at Q1/Q3, respectively.
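(These aren't their peptides or transitions -- just a little Python sketch, with a made-up tryptic peptide, showing why a +8 Da label on the C-terminal lysine makes the heavy/light y-ion pairs so easy to pick:)

```python
# Monoisotopic residue masses (Da) -- only the residues used below.
RESIDUE = {"P": 97.05276, "E": 129.04259, "T": 101.04768,
           "I": 113.08406, "D": 115.02694, "K": 128.09496}
WATER, PROTON = 18.01056, 1.00728
K8 = 8.01420  # 13C6 15N2 lysine mass shift

def y_ion_mz(peptide, y_length, heavy=False, charge=1):
    """m/z of the y-ion of the given length; heavy adds the K+8 label
    to the C-terminal lysine (simple pSILAC-style illustration only)."""
    frag = peptide[-y_length:]
    mass = sum(RESIDUE[aa] for aa in frag) + WATER
    if heavy and frag.endswith("K"):
        mass += K8
    return (mass + charge * PROTON) / charge

peptide = "PEPTIDEK"  # made-up tryptic peptide ending in K
for n in (4, 5, 6):
    light = y_ion_mz(peptide, n)
    heavy = y_ion_mz(peptide, n, heavy=True)
    print(f"y{n}: light {light:.4f}, heavy {heavy:.4f}, delta {heavy - light:.4f}")
```

Every y ion carries the labeled lysine, so each light/heavy pair sits a constant ~8.014 Da apart -- easy to schedule and easy to spot.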

What did they find out?  Human biology is seriously complicated!  Even this fancy-pants new Ribo-Seq thing can't accurately tell you how much protein is there or how much is being made. However, using the two together can give you an understanding of the stress response in the cell. They propose the use of this methodology for understanding other chemotherapeutics -- get a deep mRNA-Seq, go get a deep Ribo-Seq, and then get accurate information on protein levels from the proteomics to make the rest of it make sense.

If you're wondering why you wouldn't just cut out the two expensive genomics techniques from the experiment and just do good proteomics on it -- well...so am I, but you don't want that sequencer just sitting there, do ya?......ummm...got one!  The pSILAC could tell you how much protein is there at each point in your time course, but the Ribo-Seq can give you an extremely rough estimate of what is being produced, so you could better infer whether the change in total protein concentration is because of a change in production (translation) or degradation!

Sunday, November 13, 2016

A post-processing approach to bacterial membrane proteomics!


This is a really nice new study that shows how some cool tools I didn't know about can be used to work out membrane proteomics in bacteria.

You'll have to check out the details on your own ;)

This starts out as a discovery run in bacterial spores from one of my favorite stinky organisms. They lyse the spores by boiling them and then do SDS-PAGE and gel extraction to get the peptides. They ID everything they can and then go to this cool thing!

You feed psortb your protein sequences -- and then it predicts where they came from in the cell!  A quick investigation into this tool shows that it uses biologically annotated data from bacteria, as well as the normal stuff (hydrophobicity and known conserved membrane-spanning motifs), to make these predictions.

And...it totally works. This team identifies over 100 membrane proteins from this understudied pathogen and finds a new protein that they can prove is involved in spore formation (by knocking it out!). Then they overproduce the protein in E. coli to prove what compounds it is involved in making.

Seriously cool -- and counterintuitive (to me) -- approach that obviously works, for you microbiologists out there (who probably knew about all this anyway)!

(Go Hokies!)

Saturday, November 12, 2016

How does spectral counting do for intact proteoform quan?


Spectral counting isn't my favorite thing for bottom-up proteomic quan. It totally works for highly differentially expressed things -- and there are probably over 1,000 studies out there with clear evidence of this -- but I'm an XIC kind of guy.

This new study from Lucia Geis-Asteggiante et al. does something I've certainly not seen before -- applying spectral counting to top-down proteoform quan.

They start with an exosome lysate and, it looks like, go for the proteoforms under about 30 kDa. They use an Orbitrap Fusion Lumos, which is tuned up nicely, and, unless I'm forgetting something, the paper clearly states every parameter you'd need to reproduce this on an identical instrument.

They spike in some proteins of known concentrations and evaluate the performance of spectral counting in determining quan...and it works surprisingly well above 2x!

In a very minor criticism of the methodology, they do the MS1 and MS/MS at 120k resolution, and I think this might be just a little excessive at the MS1 level (I think they did it so they could use baseline isotopic resolution for further analysis). It probably leads to sampling fewer proteoforms than what we normally see with "low-high" approaches.

This is definitely a proof-of-principle paper, though, and they definitely prove it works -- which I'll admit surprised me! I'm definitely interested to know how this compares to XIC-based proteoform quan.

As an aside, they process this all in the ProsightPD nodes in PD 2.1. Sounds like another set of 2nd party nodes are ready for primetime!

EDIT: Forgot the link to the paper! Here!

Friday, November 11, 2016

A nice TMT phospho drug mechanism study of genistein in breast cancer!


TMT phospho was on my mind all day yesterday thanks to hanging around with a TMT and phospho expert for 12 hours or so.

For a really elegant example of a study that worked for this methodology, check out this open access study from Yi Fang et al.  The figure above really kinda sums up this work. What they're interested in is the early effects of a chemotherapeutic in "triple negative" breast cancer cell lines.

What I like about this study is -- it is very straightforward, simple, and it totally works. It shows how far we've gotten technically with phospho-enrichment, isobaric tagging, and data processing. An Orbi Velos does the heavy lifting and MaxQuant is used to compile the data, with the whole peptide quan in some channels and the phospho in others.

You could argue that there are better ways of looking at the quantification output than just saying that anything above or below 1.5 fold is significant, but when you end up with ~240 nice phosphopeptides that fall into a downstream analysis like this....


....where these are known DNA damage pathway phosphorylation events!!!....I am not going to argue that you should have drawn your cutoff with some fancy-pants statistical cutoff thing that I don't know how to do myself anyway.  (Sorry about the resolution, the paper is open access and this isn't even the best figure in this nice paper! You should check it out yourself!)

Tuesday, November 8, 2016

Laaazy post morning! Skyline 3.6 updates!


I highlighted my favorites!

You get a super lazy blog post this morning, cause I gotta get to work early and make sure I have plenty of time tonight to get to my polling place. My state has a lot of electoral votes hanging in the balance!

Monday, November 7, 2016

Percolator 3.0 -- Super Percolator?



The University of Washington has been nice enough to provide us with some of the best software ever developed for mass spectrometry -- for free. I discovered two new ones that I'd never even heard of while looking for an image for this post. Luckily, I have expert <ctrl +> skills!

Percolator is still the one I use the most since it is integrated into commercial proteomics packages like PD and Mascot. As good as this algorithm is at distinguishing good peptide matches from bad ones (it is the gold standard -- for good reason!), it has one drawback -- it isn't the fastest part of your data processing pipeline, and looking at 1e6 or more spectra may be your bottleneck.

Enter Percolator 3.0 as described in the JASMS that was delivered yesterday!

What is it good for?
1) Big data sets
2) Fast and accurate protein inference
3) Do you need more than this?

How did they test it? They Percolated the entire JHU Human Proteome Draft Maps (and 2 other datasets, one huge and one small yeast proteome)

The JHU dataset is 2.7e7 MS/MS spectra alone. I honestly think I could do this on my Destroyer with the Percolator in PD 2.2, but it might take days.

In a search I did this week, 3e5 spectra took roughly 1 hour to Percolate -- so if we assume this would scale linearly, that's something like 90 hours for the whole JHU Draft Map dataset.

...and they could Percolate it in MINUTES. And through some bioinformatic magic, they were able to do this with only 30GB of RAM!

It seriously gets better than this. Percolator is a machine learning algorithm and what it trains on has a lot to do with how effectively it works. They improve the training sets and methods and the darned thing even works better!  Remember the small yeast proteome dataset I mentioned above? They show that even though it has the capability to tear through world class datasets in terms of size -- it still works right for small sets!
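(For the curious, the core trick is semi-supervised learning: treat the decoys as known-bad, pick a confident subset of targets as known-good, train a linear classifier on the PSM features, rescore everything, and repeat. A cartoon Python version -- scikit-learn's LinearSVC standing in for Percolator's actual SVM, a crude score threshold standing in for its proper q-value cutoff, and all the data made up:)

```python
import numpy as np
from sklearn.svm import LinearSVC

def rescore(features, is_decoy, initial_score, iterations=3):
    """Cartoon of Percolator-style semi-supervised rescoring.
    features: (n_psms, n_features); is_decoy: boolean array;
    initial_score: e.g., the search engine score used to seed positives."""
    scores = np.asarray(initial_score, dtype=float)
    decoy = np.asarray(is_decoy)
    for _ in range(iterations):
        # Positives: targets scoring better than 95% of decoys -- a crude
        # stand-in for Percolator's q-value-based seed set.
        threshold = np.quantile(scores[decoy], 0.95)
        positives = (~decoy) & (scores >= threshold)
        X = np.vstack([features[positives], features[decoy]])
        y = np.concatenate([np.ones(int(positives.sum())),
                            np.zeros(int(decoy.sum()))])
        model = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        # Re-score every PSM with the learned weights and iterate.
        scores = model.decision_function(features)
    return scores

# Toy demo: 200 PSMs, 2 features, alternating target/decoy,
# with targets shifted to score a little higher on both features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 2))
decoys = np.arange(200) % 2 == 1
feats[~decoys] += 0.8
print(rescore(feats, decoys, feats[:, 0])[:5])
```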

In terms of protein inference, the paper walks you through the reasoning behind their inference approach, and they end up settling on one that not only works up a good score for protein inference, but also adjusts for the length of the protein (something that not all inference algorithms take into account -- or...more commonly...do in a really dumb way, but let's talk about that some other time!)  The observations they make in the different inference adjustments alone are worth analyzing at length!

In a random aside -- I'd like to mention the data processing method that led up to what they Percolated -- semi-tryptic digestion with 2 missed cleavages -- on 2.7e7 spectra? Wow....

To wrap up -- this is another awesome advance in proteomics software created and distributed for free from those guys in Seattle. The code is all available to download now and make our lives easier and our data much much better.