Sunday, January 31, 2016

proteOMZ -- using proteomics to understand oxygen depleted zones in the ocean!


What happens when you take the high-end proteomics firepower of the Woods Hole Oceanographic Institution (WHOI) and pair it with some Google resources like one of their big ol' boats?

You get this sweet project that aims to improve our understanding of the huge areas of the deep ocean where there are ridiculously low levels of oxygen. What is living down there? How is it metabolizing whatever it's finding in terms of carbon and nitrogen sources?  How will the change in Earth's climate affect these huge regions?

I'd never think to ask these questions in the first place, but I'm glad someone did and I'm excited to see what they come up with after their month at sea (and...you know...months or years of mass spec and data processing time)!!!

Wednesday, January 27, 2016

Dysregulated metabolism in cancer


I promise I'm not changing the title of this blog to "Metabolomics Research", but I'm kind of obsessed with the metabolome right now and I have limited reading time...so....

Today's breakfast paper is this review from M.D. Hirschey et al. (open access); it has 21 authors, every one of whom appears to be at a different institution. They're all members of the Halifax Project. Does this sound like an X-Men story arc to you? It might to this group as well, 'cause you can find them at www.gettingtoknowcancer.org, which seems a lot less ominous.

The goal of the HALIFAX PROJECT is to address the problems intrinsic to cancer heterogeneity (most cancer cells are cancerous for completely different reasons). One way to address it is to find common denominators. The first appears to be that their metabolomes are all screwed up.

This isn't new information. According to the review, excessive lactate formation was first detected in tumors almost 100 years ago, and lots of work since has indicated that glycolysis, amino acid synthesis and degradation, lipid metabolism, and one-carbon metabolism (among others) are messed up in tumors.

Interesting facts that pop out to me after going through this:

Tumors consume oxygen faster than it can be supplied, so there is often fermentation occurring in them. Even though fermentation produces only about 5% as much ATP as oxidative phosphorylation, ATP is not found to be limiting in tumors(!?!?)

Cancers resort to a bunch of different nutrient sources to maintain growth. For one, they break down a lot of available lipids, so limiting fatty acid availability might be a cool cancer therapy.

There are clinical trials underway where calorie restriction is being combined with chemotherapy, both to limit the side effects of the therapeutics and to increase the efficacy of the drugs.

Since a ton of metabolism in humans is dependent on the mitochondria, treatments focused on those little buggers might be a strategy for moving forward.

There are serious links between having a messed-up metabolism and having messed-up epigenetics. On top of cells using up too many nutrients and producing too many toxic byproducts, now you've got proteins with weird PTMs and messed-up histones? They mention in the paper that real targeted metabolic therapies for cancer are a ways off, but it sure sounds like something I'm glad people are spending time on!

Tuesday, January 26, 2016

Rapid absolute quan of Tau protein variants in CSF with PRM

This is a Tau protein. They hang out in the brain and if they are all working correctly, everything is awesome. When the Tau proteins aren't working correctly? It's real bad. Alzheimer's, Parkinson's and other neurological disorders are tightly linked to malfunctioning Tau proteins. (Honestly, I don't know how the cause-effect thing works here, or if it is even 100% understood. What everyone seems to concur on is: bad Tau = bad news)

As with every other clinically relevant assay, we've got to start somewhere. Currently it looks like most of the analysis of Tau proteins is performed with ELISA assays, though more sophisticated institutions may be moving over to antibody pulldowns of specific Tau forms followed by SRMs.

In this new study, Barthelemy et al. show how we can improve on these assays. ELISA is a great technique for detection and quantification, but it becomes a whole lot less fun when you've got multiple isoforms to go after. Likewise, immuno-SRMs may recognize one variant of the protein but skip others.

What is the solution they tried out? Building a full-length, stable isotope labeled Tau protein and spiking it directly into their synthetic cerebrospinal fluid (CSF) [they used serum; people don't exactly line up to volunteer CSF], digesting, and then doing parallel reaction monitoring (PRM, FTW!) on a Q Exactive.

Now, they did enrich the proteins to some degree, and I'm a little unclear as to how it was done. It looks more like a precipitation to me. If this is your field, you should pay more attention to those paragraphs than I did.

What I did pay attention to: They used a 10 Da isolation window in their PRMs. This way they could pick up both their heavy labeled peptides and their light counterparts in the same windows. They also scheduled the PRMs. It might seem risky to use a window that wide in a biological fluid, but the PRMs were read at 70,000 resolution and the fragments read out with a 3 ppm maximum window in Pinpoint. So they had:
1) Retention time
2) Natural fragments at high resolution and high mass accuracy, AND
3) Standard heavy isotope fragments

Makes a 10 Da window seem a whole lot less risky.
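If you want to see why that combination works, here's a tiny R sketch (all masses below are hypothetical, not taken from the paper) checking that a light/heavy precursor pair fits inside one 10 Da isolation window and that an observed fragment matches theory within 3 ppm:

# Toy numbers, not from the paper: a light precursor and its heavy-labeled counterpart
light_mz  <- 756.42   # hypothetical light precursor m/z (2+)
heavy_mz  <- 761.43   # hypothetical heavy precursor m/z (2+)
window_da <- 10       # isolation window width used in the paper
center    <- (light_mz + heavy_mz) / 2
abs(light_mz - center) <= window_da / 2 && abs(heavy_mz - center) <= window_da / 2  # TRUE: one PRM scan covers both

# Fragment-level check: does an observed fragment match theory within 3 ppm?
ppm_error <- function(observed, theoretical) (observed - theoretical) / theoretical * 1e6
abs(ppm_error(930.4586, 930.4559)) <= 3   # ~2.9 ppm, so TRUE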

What did they get out of this? Crazy sensitive, crazy precise, absolute quantification not of one Tau protein, but of 22 different peptides that showed the presence/absence of specific regions and/or isoforms of the Tau proteins. In one 13-minute assay.

Of course, they go on to show that they can do this assay in people's CSF. I love this paper. Not only because it has been a science-free day, but because this shows how we can take our cool toys and apply them to solve medical problems. This goes into the clinic and it's cheaper, faster, more certain, and returns way more useful data than anything we currently have.

P.S. They did ALL of this with analytical flow (100 µL/min!)


(Shoutout to Dr. DuFresne. It is a good movie, but I had that song stuck in my head for 4 months...)

Sunday, January 24, 2016

Metabolomics of growing grapes!


This new paper from Alvaro Cuadros-Inostroza et al. appears to be an effort to combine two of my favorite things. In the study they look at the metabolomes of growing Merlot and Cabernet Sauvignon grapes in Chile over two seasons. Grapes are harvested at multiple time points and from multiple vineyards so that they have enough values to do tons of statistics.

In the study they focus on 115 metabolites that are detectable in all of the samples. I'm only just starting to get a feel for modern metabolomics, but I am impressed with the downstream stats and analyses. They mostly do their processing in R after extracting the data from the vendor software, but they work up nice graphical outputs like this one (click on it and it should expand):


I think this is a remarkably clear chart. Curious about how glutamate production differs between cab sauv and merlot? Find its branch off the TCA cycle, read the heat map across, and you'll see that glutamate levels start out much higher in cab sauv but drop rapidly at the end of the season. A quick Google search on glutamate production in wine grapes will lead you to this cool book you can read free online (how did we research anything before Google...?) that references a study describing this characteristic of ripening in cab sauv. Why does merlot taste different? Maybe it has to do with glutamate-asparagine ratios?!?  The book doesn't appear to reference a study looking at merlot, so maybe this is a completely new data point!
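If you want to play with this kind of display yourself, here's a minimal R sketch of a metabolite-by-timepoint heat map drawn with base R. The intensities are simulated, not the paper's data:

# Simulated data standing in for normalized metabolite intensities across ripening
set.seed(1)
metabolites <- c("glutamate", "asparagine", "malate", "citrate", "glucose")
timepoints  <- paste0("T", 1:8)
intensity   <- matrix(rnorm(length(metabolites) * length(timepoints)),
                      nrow = length(metabolites),
                      dimnames = list(metabolites, timepoints))

# Row-scale so each metabolite shows its own seasonal trend, then plot
heatmap(intensity, Rowv = NA, Colv = NA, scale = "row",
        col = colorRampPalette(c("blue", "white", "red"))(50),
        main = "Metabolite levels across ripening (simulated)")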

In sum, this is a nicely presented paper on a cool topic that makes metabolomics seem really useful and not all that scary.

Saturday, January 23, 2016

Imaging mass spec with new advanced statistics package!


I think everyone will agree that imaging mass spec is cool stuff. I've gotten to visit some cool labs doing DESI and LAESI, and I've got some neighbors at NIDA doing super sweet MALDI-Orbitrap lipid analysis. I've never done it, but I'm always impressed by what is being done and the potential it has.

In this new paper, Kyle Bemis et al. suggest that we can do even better with existing technology if we use more advanced statistics. To prove it, they use a DESI-LTQ and a couple of imaging TOFs and run the data through their new R package, Cardinal.

What did they find? Better separation of signals from closely overlapping organs in developing organisms, and brain sections that match the expected distributions better than what classical software gives you.
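If you want to try it, the workflow looks roughly like this in R. This is a minimal sketch from memory, assuming the Cardinal Bioconductor package and an imzML file you already have (the file name here is made up); check the Cardinal vignette for the current function names and arguments:

# BiocManager::install("Cardinal")
library(Cardinal)
msi <- readMSIData("my_tissue_section.imzML")              # hypothetical file name
seg <- spatialShrunkenCentroids(msi, r = 2, k = 5, s = 3)  # spatially-aware segmentation
image(seg)                                                  # plot the segmentation map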

If you have some imaging data, maybe it's worth taking a shot at reprocessing with this package. If this isn't your field...maybe it's still worth a look-see. This Koala liked it.


Friday, January 22, 2016

Want to remove some uncertainty from your SRMs? Try labeling entire proteins!


Wow. You want to talk about quality control for your validations? This is how you remove uncertainty from your measurements. You produce a completely isotope-labeled version of your protein of interest and you spike that whole protein into your normal protein extract.

What are the advantages? No concerns about tryptic digestion efficiency -- that is controlled for. No concern about chromatography shifts.

Disadvantages? You've got to construct an entire protein that is isotopically labeled!!! Fortunately, Miral Dizdar's lab at NIST has assembled thorough instructions on how they do it in this work from Prasad Reddy et al. (paywalled).

Is it tough? Sure it is. Heck, for some proteins it might not even be possible (there are some proteins that E. coli just may not be able to make), but if you want absolute certainty that you are measuring the right things, there might not be a better way to do it!

Thursday, January 21, 2016

MCP online tutorials


Much like the rest of the East Coast I am seriously snowed in. Time to put some work into my 2 indoor hobbies!  I'll be postdating some cool stuff I've been meaning to put in (if you haven't noticed, the dates on these posts are completely irrelevant...I just like the look of having a different post on a different day).

First up? I've never noticed these nice tutorials on the MCP front page. Not sure if they are new or old, but I'll add them to the Newbie information on the right.  You can never have too many tutorials!

Wednesday, January 20, 2016

NIST protein standards


This is likely old news that I either missed or forgot about. But did you know that NIST sells thoroughly characterized and QC'ed protein standards?

This one is cool. It is a standardized yeast protein extract that you can use to optimize your digestion parameters. They make these in HUGE quantities so that batch-to-batch variation is something you don't have to worry about.

In 2016, they will be releasing a monoclonal antibody standard that they've been working on. This is something a whole lot of people desperately need due to the batch-to-batch variability issues with commercial suppliers. This slide deck gives you a hint of the level to which they are characterizing this thing! Oh, and 17 labs also threw in last year to help make sure that this will likely be the most fully elucidated antibody in the world!  I'm keeping my fingers crossed it'll be out before that thing in San Antonio!


Tuesday, January 19, 2016

moCluster - Rapidly make sense of huge multi-omics data sets!

So somebody out there has generated a few dozen terabytes of transcriptomics data on the samples that you are generating millions of spectra on? What should you do next?

Maybe you need to grab that data and do some clustering to see what stands out. There are a couple of algorithms out there that will do this, like iCluster, but what if you want mo'?

Then the Kuster lab would like to introduce you to moCluster. In this paper from Chen Meng et al., they describe this new R Bioconductor package (distributed as part of "mogsa") that can perform these clustering analyses better and up to 1,000x faster!

How's it work? Well, it starts here...


...and then it gets complicated. If any of you guys who are good with math that contains letters have feedback, please leave some comments. I'm not exactly...qualified to review this part of the paper.
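For what it's worth, the general idea is joint dimension reduction across matched omics matrices. Here's a naive R sketch on simulated data; this is NOT the moCluster algorithm, which is much smarter about how the blocks are weighted and combined, just the flavor of the problem:

# Naive multi-omics clustering sketch on simulated data -- illustrative only
set.seed(42)
n_samples     <- 60                                                # think NCI-60-sized panel
proteome      <- matrix(rnorm(n_samples * 500),  nrow = n_samples) # samples x proteins
transcriptome <- matrix(rnorm(n_samples * 2000), nrow = n_samples) # samples x transcripts

joint    <- cbind(scale(proteome), scale(transcriptome))  # scale each block, then stack
pcs      <- prcomp(joint)$x[, 1:10]                       # joint low-dimensional representation
clusters <- kmeans(pcs, centers = 5)$cluster              # assign each cell line to a cluster
table(clusters)

moCluster does something considerably more principled with the blocks than this stacking trick, which is where the speed and the better-behaved clusters come from.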

You might wonder why I'm writing this. And why I think the Kuster Cluster passes muster.

Remember this awesome study where they did proteomics of the entire NCI-60 panel? What if they showed that they could take their algorithm, that proteomics data set, AND some transcriptomics data, and start to see differentiation in the clustering based on cell origin and cancer type?

If you take the transcriptomic data from the NCI-60 panel and do normal clustering via principal component analysis (PCA), you're probably going to end up with a figure like this one I published a couple years ago (the ones in red are stromal invasion):


A couple of weird things are going to score as nasty outliers, and then you're going to end up squeezing your maths and getting just a gobbledygook of your cell lines all clustered together. If you go through and remove "outlier" after "outlier" you'll eventually start to see something that approaches clustering. The problem is that cancer cells are so messed up and variable in what makes them cancerous that it is just about impossible to make sense out of what you are seeing. Not impossible, but hard.

Clustering, btw, is a really central technique to how genomics people make sense of their data.
So what if you had the complementary information of the transcriptomics and the proteomics and did some clustering? Can that improve what you're seeing?


From the proteomic data (right) you can't really differentiate the melanoma cell lines from everything else. However, it does fall out in the transcriptomics data. The opposite goes for the leukemia. When you look at the transcriptomics data, the leukemia is right in the mix with the rest of the cell lines, but it differentiates strongly in the proteomics data. How does leukemia differ from other cancers? Look at the proteins that are driving this differentiation!!  Is the differentiating factor the original cell type involved? Maybe, but maybe the difference is that family of protein pumps that also makes melanomas so resistant to that chemotherapeutic you've been working with!

TL/DR: Proteomics improves everything genomics does, if you can figure out how to leverage it. moCluster looks like an awesome and fast new step forward in unifying global -omics information!



Monday, January 18, 2016

O-man glycosylations in yeast


It might be time to consider changing the name of one prestigious journal to MCgP, cause the glycoproteomics just keeps coming!

This solid new paper from Patrick Neubert et al. goes out and explores O-mannose glycosylation in yeast. Yeast? I know what you're thinking....


Wait a second, though! It turns out that yeast is the ideal model here. 'Cause this O-mannose glycosylation is the only O-glycosylation that yeast get, and we don't have a great feel for optimizing parameters for studying it. By building the methods and showing that they work in yeast, we can take this approach over and study these important glycosylation events in organisms that aren't yeast.

To get deep coverage, proteins were digested with three different enzymes (LysC, LysN, AspN) and a combination of HCD/ETD was optimized for each digest using an Orbitrap Velos and Fusion.

They weren't satisfied with just building a methodology here, either. They look deeply into the structures of the glycopeptides they study and even look at 3-dimensional interactions that suggest the presence of some glycosylations has a negative effect on the formation of others in the same 3D space.  So, not only do you have a new methodology for studying an important glycosylation family, but you also have some sweet new basic science uncovered. Turns out we still don't know everything there is to know about even model organisms like baker's yeast!


Sunday, January 17, 2016

Allergens + proteomics = allergenomics!


Okay, so I was reading this very cool paper from Choopong et al. and was thinking they came up with this awesome new technique. And then I looked up the term "allergenomics" and found this page describing that very same technique from 2003 (where I stole the above picture).

 Reinforcement of how little I still know about this field and how much more I have to read!

Here is the idea: Choopong and this team took some dust mites (eww...), diced 'em up, and ran their proteins on a 2D gel. Then they basically western blot for allergens, not with regular IgG, but with the IgE antibodies from the blood of a patient (with allergies!). When a spot reacts with the IgE from the allergic person, then you've maybe found one of the proteins responsible for the allergy!  You identify it by cutting the 2D spot out, digesting it, and doing our normal MS/MS technique.

Voila! You've found the reason you're allergic to dust mites!  Maybe then you have a new target for allergen therapy. What a cool and powerful upstream technique put to great use in this study!

Saturday, January 16, 2016

A camera fast enough to record the movement of light?

Sorry, this isn't on topic, but it is too cool not to share even though it's a couple of years old. This camera at MIT can supposedly take pictures fast enough that it can see the movement of light.


If this is real, can you imagine the size of the data files they generate?

Wednesday, January 13, 2016

ASMS warning about Pirates!


Ummm....so...Texas apparently has something called Housing Pirates, and there is a warning on the ASMS San Antonio page about them.

You can read more about it here.

Not to point a finger at Texas or nuthin', maybe everywhere has them, but the first thing to pop up when I jokingly typed "Housing Pirates" into Google was a warning about them in Dallas, where I grabbed that cool image above.

Sneaky pirates...



Tuesday, January 12, 2016

Proteomics of Aging in a HUGE cohort of patients!


If that headline above doesn't grab your attention, I'm not sure what will!  You can check out an overview of this article at Medical Xpress here (warning: definitely an ad-revenue driven site).

What it talks about, however, is this article that is open access at Oncotarget. A ton of researchers threw in on this one, and they needed to, 'cause this dataset looks at more than 11,000 patient urine samples in an attempt to differentiate aging from aging diseases. This might be a good benchmark for the robustness of an Orbitrap Velos! 11,000 people's pee? No problem! Go S-lens go!

The breakdown was over 1,000 control/healthy individuals, and the other 10,000 were between the ages of 25 and 86 and suffered from a variety of ailments considered to be pathological aging effects.



After robust statistical analysis, they pulled out about 100 proteins that appear to be sincerely age-linked and did some nice pathway analysis. The article above maybe kinda sensationalized the paper a little, but if we're gonna slow this stuff down we've first got to understand it better, so I won't complain too much!

Monday, January 11, 2016

SILAC analysis provides insight into Ewing Sarcoma


I'll start by being perfectly honest here. There are a lot of terms in this nice new paper that I don't know and don't have a handle on after some light Wikipedia work. Rather than spend my pre-work hours trying to become an expert in Ewing Sarcomas, I'm gonna focus on what I do understand in this paper -- the fact that they did some nice proteomics work!

The paper I'm talking about is from Severine Clavier et al., and is available (open access) here. The sample workup is typical for SILAC and takes place with cells that are apparently an appropriate model for this disease. Separation of the digested SILAC peptides was on a 50 cm nanoLC column running a relatively fast gradient at a relatively slow flow rate (~50 min effective elution gradient, but at 200 nL/min). My first thought when seeing the 30,000 resolution at the MS1 was that it wouldn't be enough to fully pull out SILAC pairs on a gradient that short, but my first thought appears to be wrong, as they report quantification of 1,700 proteins. Not too shabby for a classic Orbitrap!

The quantitative digital proteomic maps (RAW files) they generated were processed with Proteome Discoverer, and this is where I get interested. They used Proteome Discoverer 1.3 to get the PSMs at a 1% FDR. Then the data goes into something I don't think I knew about until this morning -- something called myProMS. (Which you can read about here and directly access here.)

Probably the second most common comment when I give a proteomics talk is "where are the statistics?" And then people get sad when I mention R packages for the stats. myProMS appears to be an R-less statistics package for downstream analysis and, according to this poster, it directly supports output from Mascot and Proteome Discoverer!

How did this study use this downstream software? To generate P-values and volcano plots to pull the statistically significant differential proteins out of their dataset like this!
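If you've never made one, a volcano plot is just the log2 ratio plotted against the -log10 p-value. Here's a minimal R sketch with simulated numbers (this is not myProMS output, just the general shape of the figure):

set.seed(7)
log2_ratio <- rnorm(1700, sd = 1)                         # ~1,700 quantified proteins
p_value    <- pmax(10^(-abs(log2_ratio) * 3 * runif(1700)), 1e-10)

plot(log2_ratio, -log10(p_value), pch = 20,
     col = ifelse(abs(log2_ratio) > 1 & p_value < 0.05, "red", "grey40"),
     xlab = "log2(heavy/light)", ylab = "-log10(p-value)",
     main = "Volcano plot (simulated)")
abline(v = c(-1, 1), h = -log10(0.05), lty = 2)           # fold-change and significance cutoffs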


Does that look sweet, or what?!? To use this software you'll either need Linux or to download a free Linux emulator to run within your other operating system (instructions on the website).

Hold on, I'm not done with this paper!  They then take the statistically significant (woohoo!!!) data and run it through the free STRING network analysis program, and they find a pathway that makes sense in regard to the morphological phenotypes of these cells. And it looks super slick!


Saturday, January 9, 2016

Are there really missing values in normal (DDA) proteomics data?


A common premise in proteomics over the last few years is that our normal shotgun proteomics approaches -- in particular, data dependent acquisition (DDA) techniques -- suffer from something called "missing values." This statement has been parroted about quite a bit, but is it actually true?

Not according to this very nice new paper in MCP from Bo Zhang et al. First of all, let me say that there is a lot of good stuff in this study, but let me pull my favorite quote out:

"Contrary to what [is] frequently being claimed, missing values do not have to be an intrinsic problem of DDA approaches that perform quantification at the MS1 level."

This isn't exactly revolutionary, of course; many people have made this statement (there were very nice posters to this effect at ASMS the last couple of years from the Qu lab), but it sure is nice to get things like that out of the way.

Here is the central premise: The few studies and loads of marketing material that claim missing values in DDA data are focusing on one thing: that within a single quantitative digital proteomic map (i.e., RAW data file) we will not fragment every possible ion. And this is obviously true.

So how do these researchers contest this point? By pointing out that if we have high resolution, accurate mass MS1 scans, we don't need to fragment every ion in every single run. If our goal is to compare sample A and sample B, and we fragment a peptide in sample A, then we do not have to fragment it in sample B. Having an accurate, high resolution MS1 mass and a retention time is enough to confirm that the peptide in sample A and the peptide in sample B are the same thing.
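In other words, the matching step is just "same accurate mass, same-ish retention time." A toy R sketch of that logic with made-up features (this is not DeMix-Q, which adds the target/decoy FDR on top):

ppm <- function(a, b) abs(a - b) / b * 1e6

run_a <- data.frame(mz = 653.3412, rt = 42.3)             # feature identified by MS/MS in run A
run_b <- data.frame(mz = c(653.3410, 653.4102, 512.2871), # unidentified MS1 features in run B
                    rt = c(42.6, 42.5, 42.4))

hit <- which(ppm(run_b$mz, run_a$mz) < 5 & abs(run_b$rt - run_a$rt) < 1)
run_b[hit, ]   # the run-B feature we can now quantify without ever fragmenting it there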

If you use the precursor ion area detector in Proteome Discoverer, or the awesome OpenMS label free quantification nodes, or LFQ in MaxQuant, these programs are going to make this assumption automatically. So, how did this paper make MCP?

'Cause they propose a way to calculate a false discovery rate (FDR) for the assumption that you are looking at the right peptide in each run!  The software they use is called DeMix-Q and you can get it from GitHub here.  They do FDR by running target and decoy label free quantification matches through an algorithm that considers many factors, including retention time.

What do you get at the end? Data that is fully complementary. Did you run 400 samples? Imagine that in sample 237 you had an MS/MS fragmentation that was unique only to that RAW file -- but there is a clear (but low) MS1 signal in every other dataset. This will allow you to quantify that peptide in all 400 samples! And have a metric for your confidence via FDR!

Can you get away with this on every instrument? Probably not. If you've got a lower mass accuracy instrument, you probably can't distinguish between peptides of similar m/z from run to run, and you are going to see a lot of false measurements. In that case you are probably better off taking the big hit in dynamic range and using a data INdependent method like SWATH so you can back up your identifications with fragmentation data in every single run. For those of us, though, who have to go after the really low copy number proteins or who have collaborators looking for complete proteome coverage, it looks like we'll still be a lot better off with smarter data processing of DDA data.

TL/DR: Smart paper that combats a current myth in our field and shows us a great method for applying false discovery rates to label free DDA data.

Friday, January 8, 2016

Should you take a second look at your PepMap gradient?


Check this out!  This is the same amount of the Thermo HeLa peptide mix run on the same instrument but using two different gradients. (Click to expand)

Gradient 2 picked up 2,000 more peptides, which translated into over 400 (don't check my arithmetic please, it is really early here) new proteins!  Sure, they're probably single peptide hits, but I bet if you looked at them you'd see they're from low abundance (translation: cooler!) ones.

How'd the awesome scientists running this optimization get this boost? By changing their gradient from this:

2-35% B in 90 minutes

to

2-20% B in 75 minutes, then 20-32% B in 15 minutes.

Holy cow!  That's a two-thousand-peptide gain, totally for free!  It looks like C-18 PepMap material (like that used in the EasySpray columns) might be a little more hydrophilic (? or something?) than other C-18 materials like C-18 Magic or older materials. Or two-stage gradients are just awesome. I don't know, but if you've got the time, maybe you should investigate a shallower gradient.
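For the curious, here's the quick arithmetic on why gradient 2 is "shallower" where it matters (numbers pulled straight from the post):

slope <- function(start_b, end_b, minutes) (end_b - start_b) / minutes
slope(2, 35, 90)    # gradient 1: ~0.37 %B per minute across the whole run
slope(2, 20, 75)    # gradient 2, segment 1: ~0.24 %B per minute where most peptides elute
slope(20, 32, 15)   # gradient 2, segment 2: ~0.8 %B per minute to push off the hydrophobic stuff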

P.S. Useful information:
Channel A -- 0.1% FA
Channel B -- 100% ACN, 0.1% FA
Flow rate: 300 nL/min
200 ng HeLa digest on column
QE Plus running a standard "Top10" method

Big ShoutOut to Tara, Josh, and Lani, 'cause they did the work. I just stole from their slides!



Wednesday, January 6, 2016

Western blots versus parallel reaction monitoring (PRM)!!


This week I visited a lab that has been doing some great validation work with parallel reaction monitoring (PRM). While preparing their work for publication, one of their collaborators began insisting that they "validate" their findings with western blots. I don't know about them, but I felt like I'd been playing Jumanji....



It is 2016 (despite what you've been writing at the top of every page!)!!!  Holy cow. I know I'm typing to my imaginary choir here, but I really want to get this out of my system. Surprisingly, though, no one that I can find has really done a head-to-head comparison of PRM versus western blots, and I'd like this rant to pop up when I type the two terms into Google. There is, however, a ton of material to pull from to support my rant.

I hereby present: Western blots versus Parallel Reaction monitoring!



In the red corner we have western blots. There is a great Wikipedia article on this methodology here.

Basically, though, you do this:
1) You run an SDS-PAGE gel
2) You transfer the proteins via more electricity to a membrane (sucks 'em right out and they stick to the membrane)
3) You soak the entire membrane in a solution containing a commercial antibody raised against a peptide within your protein of interest. (They do this by injecting a peptide from your protein of interest into a rodent, or camel, or horse. Typically a rodent, though. For this example, let's say it's a bunny rabbit.)
4) You wash away bunny rabbit antibodies that don't stick to your membrane in a super tight way
5) Then you add a solution to your membrane that has an antibody with a detection region and a region that binds to any bunny rabbit antibodies
6) Then you activate your detection. That detection region might light up (fluorescence), or it typically causes a chemical reaction that makes a dark spot where stuff matches.

Amazing technique when it was developed in the 70s. It is, of course, still super powerful today, but it has weaknesses that have been addressed many times. First of all, it relies on the efficacy and specificity of two commercial antibodies. I know this is ancient history, but in 2008 Lisa Berglund et al. did a high-throughput analysis of commercial antibodies and found that a large number of them did not work at all. In fact, the average success rate of the 1,410 antibodies they tested was an awe-inspiring 49%.  I'm sure those numbers have gone way, way up. However, according to this 2013 article in Nature Methods, the field of antibody production contains over 350 separate producers. Despite this level of competition, the paper appears to recommend returning antibodies as a step in normal lab practices. Hey, no one is perfect, but I'm just throwing these articles out there.

Let us assume that the antibodies you've ordered have been used by tons of groups and that they work just fine. Chances are that you can find a protocol that will give you a good method for step 4 above. If you don't wash away non-specific binding you will just get a blot full of signal. If you use a wash that is too stringent, you get nothing at all. Hopefully someone has done this work for you!  If not, you're on your own. And then you have to wonder...is it the antibody? or is it me? Probably a good idea to run it a few more times. At 4-5 hours a pop with the newest technology and not one single glitch along the way, it might take a few days to optimize a new assay.

I found this nice picture on Google Images that I'd like to share (original source unknown):



The challenger in the blue corner -- Parallel Reaction monitoring!


The figures for PRM are taken from this great recent (open access!) review by Navin Rauniyar.

Here are the steps:
1) You start with a pretty good estimate of the mass of some peptides from your protein of interest and you use that for the quadrupole isolation (or the ion trap on LTQ-Orbitrap instruments. Yup! You can totally do this on hybrids, but it works better on quadrupole-Orbitraps). You can easily refine your data acquisition by retention time or by narrowing the isolation to reduce background and increase specificity.

2) You fragment the ions you select

3) You match the high resolution, accurate mass fragment ions to the theoretical fragments (or your previously observed experimental fragments) within 1 or 2 ppm (really, no reason to ever go above 3 ppm)

4) Post-processing allows you to drop fragment ions that might not be as specific as you'd like, and you can use the intensity of your fragment ions cumulatively to score your signal (see the toy sketch after this list). Once you have your favorite fragment ions, you rerun your PRM method for the samples you'd like to compare.
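To make steps 3 and 4 concrete, here's a toy R sketch (every mass and intensity below is made up) that matches observed fragments to theoretical ones within 3 ppm and sums the matched intensity as a crude score:

theoretical <- c(y4 = 476.2727, y5 = 589.3568, y6 = 702.4409, b3 = 329.1819)  # hypothetical fragment masses
observed    <- data.frame(mz        = c(476.2731, 589.3560, 845.1200),
                          intensity = c(1.2e6,    8.5e5,    3.0e4))
tol_ppm <- 3

ppm_ok  <- function(obs, theo) abs(obs - theo) / theo * 1e6 <= tol_ppm
matched <- sapply(theoretical, function(t) any(ppm_ok(observed$mz, t)))
score   <- sum(observed$intensity[sapply(observed$mz, function(m) any(ppm_ok(m, theoretical)))])

matched   # which theoretical fragments were found
score     # cumulative matched intensity for this target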


HEAD to HEAD time!

Category 1: Reproducibility
According to references in this paper from Dan Liebler and Lisa Zimmerman, the CV of a western blotting measurement ranges from 20-40%. In this paper from S. Gallien et al., all PRM measurements in unfractionated human body fluids were found to have CVs of less than 20%, and 5% is common.
Winner?  PRM!
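For reference, CV here is just the standard deviation divided by the mean, expressed as a percentage. A quick R illustration with made-up replicate values:

cv <- function(x) sd(x) / mean(x) * 100
western_like <- c(1.00, 1.35, 0.72, 1.18)   # made-up replicate intensities
prm_like     <- c(1.00, 1.04, 0.97, 1.02)   # made-up replicate peak areas
cv(western_like)   # lands in the tens of percent
cv(prm_like)       # low single digits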

Category 2A: Time (single target)
There are a bunch of new technologies for western blots, including fast transfers, fast blotting, and direct signal measurements. If we assume you're using something like that and it takes you 2 hours to normalize, load, and run a gel (you are faster than me), you can get the whole thing down to 4-5 hours. Now, you can use multiple lanes. But this also involves man-hours, where you have to be moving things, transferring things, blah blah blah.
 Winner? In pure time with the newest technology? Western blots, maybe. In time (measurements per work day)? Definitely PRM.

Category 2B: Time (multiple protein targets)
Okay, imagine that you weren't just interested in the quantification of one protein or one phosphorylation site. What if you were interested in the quantification of EVERY protein in a given pathway? Say, for example, MAPK or RAS (overlapping, I know).
 One western blot can cover maybe 20 patients for ONE protein (or phosphorylation).
 One PRM run can look at many proteins. How many? With a Q Exactive classic, up to 100 targets is pretty darned easy to set up. In the paper above from S. Gallien, they looked at something like 700 targets. So, if you're conservative and say that you had two targets per protein, that's the equivalent of 350 western blots' worth of proteins on that patient. In one run. In under 70 minutes. Compare that to running 350 western blots.
 Winner? With multiple protein targets, it is PRM and it isn't even close. 

Category 3: Sensitivity
A chart on this page says you can get western blot sensitivity down to the femto/pico range (in grams). Since proteins have masses in the kDa range and we measure LC-MS sensitivity on new instruments in the femtomol and attomol range, unless I'm not awake yet, this seems pretty clear.
Winner? PRM by a bunch of zeroes!

Category 4: Specificity
Antibodies are awesomely specific. But they are often raised against one peptide. PRMs of multiple peptides can easily be set up. And you can choose targets AND fragment ions with the level of specificity that you need. Commercial antibody providers will obviously take this stuff into account, but this control is often out of your hands. And it's one peptide.  You can see in the blot pictures above that you may often get multiple targets. You can narrow it down by SDS-PAGE-determined average mass. OR you can choose multiple peptides that are unique in evolution, and use the retention time of those peptides and fragments that are unique in evolution, within a few electron masses?? This one seems pretty darned clear to me.
Winner? PRM!!!!


Category 5: Cost
Starting from scratch? A full setup to do Western blots is gonna look a lot cheaper than an Orbitrap. But if you are doing a lot of them, man hours and western reagents are going to add up soon enough. (Side note: If you are reading this and you don't have a mass spec you just might have too much time on your hands.) If you already have a HRAM LC-MS setup, you don't need anything additional for relative quantification via PRM. For absolute quan you'd want some heavy peptide standards. If you have both the capabilities to do western blots and PRMs in your lab, the antibodies, gels and membranes are additional costs/experiment.
Winner? (If you already have a mass spec that can do PRMs for single targets?) PRM!
   If you want to look at an entire pathway, like RAS or MAPK? It doesn't take long before it is WAY cheaper to buy a Q Exactive than it is to do tens of thousands of Western blots.

TL/DR? Friends don't let friends western blot. (A friend summed this post up that way, so I stole it!)


Sunday, January 3, 2016

No ID for your cross-linked peptides? Maybe you aren't looking for the right things.


Cross-linking reagents are such a great idea for studying lots of things. But they can be so cumbersome to work with that a lot of groups just ignore them altogether.  Sven Giese et al. thought it would be worth it to take a deeper look at the high resolution CID fragmentation of nearly a thousand known cross-linked species, to see if we just aren't looking for the correct fragment ion species with our typical techniques.

Turns out that might be exactly what is happening. Taking what they learned from the known peptide high resolution study, they were able to boost the identification rate of their unknowns by 9x over traditional search engines.
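Part of why the usual search engines whiff on these is simple mass bookkeeping: a fragment of one peptide that still carries the cross-link also carries the bridge plus the entire other peptide. A toy R illustration (the peptide and fragment masses are hypothetical; the DSS/BS3 bridge mass is the standard +138.068 Da, and this is in no way the authors' Python workflow):

proton     <- 1.007276
dss_bridge <- 138.06808                   # mass added by a DSS/BS3 bridge between two lysines
pep_alpha  <- 1234.5678                   # hypothetical monoisotopic peptide masses
pep_beta   <-  987.6543

precursor_neutral <- pep_alpha + pep_beta + dss_bridge
(precursor_neutral + 3 * proton) / 3      # m/z of the 3+ cross-linked precursor

y5_alpha <- 589.3200                      # a hypothetical plain y5 ion of peptide alpha
y5_alpha + dss_bridge + pep_beta          # what that y5 actually weighs when it still carries the link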

Worth noting that they did a lot of this with custom coding in Python, so these tools might not immediately be accessible to all of us, but I bet some smart coder could integrate this info into some user-friendlier tool!

Friday, January 1, 2016

Cause no one ever asked for it! My favorite papers of 2015!

(Picture from PugsAndKisses.Com)

This is definitely my favorite post of the year. This is where I get to go back through this ridiculous hobby of mine and re-read my interpretations about the amazing work you guys are doing out there!  (An added benefit is that I get to fix typos, errors and even delete some of the dumber things I've typed.)

There was SO MUCH great stuff published this year. I know I only read a tiny fraction, but I now have 17 tabs open that I'm trying to narrow down. I'm going to start with the 2 that really stand out in my mind:

PROMIS-QUAN -- The most proteins ever ID'ed in a plasma sample isn't some analysis where someone did 2D fractionation and 288 hours of runs? No, it's one single LC-MS run?  When friends outside my field ask me how the technology is progressing, I tell them about this paper. I hope hope hope it is real. I feel equally impressed that this group came up with this and equally stupid for not thinking of it, because it is so simple and so, so brilliant.

Intelligent acquisition of PRMs -- I really think PRMs are the future of accurate quantification. You get your ion and you know it really is your target because you have basically every fragment of one species with accurate mass, typically within 1 ppm or 2. Problem is, they are kinda slow. So these crazies in Luxembourg go and write their own software so that they can intelligently acquire their targets based on the appearance of heavy labeled internal standards? This is a study that is so good, the PI makes this list even though he didn't respond when I asked him for a slide from his HUPO talk. Tie this in with a lot of mounting data that PRMs can be as sensitive as (or more sensitive than) a QQQ and you start to wonder what routine labs are gonna look like in the near future...

LC-MS can be both reproducible AND accurate -- The genomics/transcriptomics people get to eat our lunch sometimes due to the belief in general science that we aren't very reproducible. So a bunch of smart people get together and show that our biggest problem, as a field, may be that we don't have common sample prep techniques, cause if you prep samples the same way it doesn't seem to matter where your mass spec is or who runs it...

(within reason, of course)

... you can get the same data.

Speaking of sample prep:

How 'bout massively speeding up FASP reactions with mSTERN blotting or iFASP, or changing gears entirely with the SMART digest kits?  Which one should you use? I don't know! I'm just a blogger. How 'bout a bunch of you smart people get together and decide which one, and let's shake off this whole "proteomics isn't reproducible" bologna and get all the money people are spending on those weird, shiny (and crazy expensive!) RNA boxes.

Oh yeah!  On the topic of those RNA boxes, PROTEOGENOMICS!

Probably my favorite primary research paper on this topic this year (man, there were some great ones!) was this gem in Nature. We also saw several great reviews, but this one in Nature Methods was likely the most current and comprehensive one that I spent time on. Is proteogenomics still really hard to do? Sure! Does it look worth it? Yeah, I still think it does, and it'll get easier at some point!

There were some proteomics papers that transcended our field this year as well. Probably the biggest one was the pancreatic cancer detection from urine that the good people at MSBioWorks were involved in.  Another one I liked a lot was the proteomics-in-forensics work out of the Kohlbacher group. Apparently you guys liked it as well, 'cause my blurb on it was probably my most read post of the year.

  [Previously, my somewhat negative opinions on another paper occupied this spot. I chose to delete them a few minutes after posting. Let's keep this positive! Insert Gusto instead!]

Now it gets a little random! Just things that occur to me this morning as really smart.

How 'bout going after non-stoichiometric peptides and PTMs?  When I mention this to people it still seems a little controversial, but biologically it makes an awful lot of sense. This year we also either saw a lot more glycoproteomics because that field is advancing on all fronts, or I was just more aware of it.  I think it's the former, though. A great example was this paper out of Australia.  It was another big year for phosphoproteomics, with new enrichment techniques, incredibly deep coverage studies, reproducibility analyses, applications of quantification, and even new tools to analyze all that phospho data!

Another one that sticks out to me was Direct Infusion SIM (DISIM?). If you need to quantify something fast, turns out you can direct infuse the target and you can get some good relative quantification. Makes sense to me, and they have the data to show it works, so why not!?!

Okay, I've been working on this one for way too long. Ending notes: Holy cow, y'all did some awesome stuff in 2015!  THANK YOU!!! I can't wait to see what you've got for us this year!!!!