Saturday, April 30, 2016
Gonna do a rapid LC-MS/MS evaluation of human blood? Couple of ways you could go about it -- you could dilute and shoot (hence the loosely associated pug cowboy....) or you could spin that stuff through a molecular weight cutoff filter to ditch the stuff you don't want.
Which way should you go?
Check out this paper from Tore Vehus et al., titled Blood targeted proteomics: centrifugal filter sample prep vs dilute-and-shoot. (open access...and not yet peer reviewed! interesting!!)
Friday, April 29, 2016
2 papers in a week is coincidence, right? 3 is something else...
So...in this very nice (paywalled) paper at JPR we see something a little different for determining protein-protein interactions for chemically cross-linked species. This algorithm, ReACT, has been described previously in FT-ICR studies out of Seattle. It is written in the "ion trap control language" which, to my small brain, makes it seem a little less than universally accessible.
In this study they show that the algorithm can not only be used to analyze Q Exactive Plus data, but also that it produces more and better data than the FT-ICR (they seem surprised in the paper. I'm not! Q Exactive, FTW!).
The gist of this program is that you generate spectral libraries representing your theoretically occurring cross-linked peptides and that is what you search your experimentally obtained MS/MS spectra against. In this way it is similar to the awesome XCOMB resource from the Goodlett lab. (Wow, I need to update my old links to this in the blog!)
XCOMB relies on a search algorithm like Sequest or Mascot to make the identifications. ReACT uses SpectraST. This is probably another application where spectral libraries will shine and hopefully we'll see ReACT bundled into something user friendly so we can try it out!
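Neither the ReACT nor the XCOMB code, but just to make the idea concrete, here's a toy Python sketch of the first step these tools share -- enumerating theoretical cross-linked peptide pairs and their masses. The peptide sequences, the small residue set, and the BS3 linker mass are my own illustrative choices, not anything from the paper:

```python
from itertools import combinations_with_replacement

# Monoisotopic residue masses (Da) -- just enough residues for the toy peptides below
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'K': 128.09496,
    'L': 113.08406, 'E': 129.04259, 'R': 156.10111,
}
WATER = 18.01056          # H2O added to every free peptide
BS3_LINKER = 138.06808    # mass added by one BS3 cross-link (both ends reacted)

def peptide_mass(seq):
    """Monoisotopic mass of a free, unmodified peptide."""
    return sum(RESIDUE_MASS[r] for r in seq) + WATER

def crosslinked_mass(pep1, pep2, linker=BS3_LINKER):
    """Mass of two peptides joined by one cross-linker."""
    return peptide_mass(pep1) + peptide_mass(pep2) + linker

# Enumerate every theoretical cross-linked pair from a tiny peptide list;
# a real tool would restrict to peptides with a linkable lysine, apply
# enzyme rules, and generate predicted MS/MS spectra for each pair.
peptides = ['GAKLE', 'SAKER', 'KLLE']
library = {
    (p1, p2): round(crosslinked_mass(p1, p2), 4)
    for p1, p2 in combinations_with_replacement(peptides, 2)
}
```

The combinatorics are the whole problem: even this 3-peptide toy gives 6 pairs, and a whole proteome digest explodes fast, which is exactly why searching against a precomputed library is attractive.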
Thursday, April 28, 2016
Coincidentally...another new protein-protein interaction approach showed up in the literature yesterday.
This one is a modification of the awesome SAINT algorithm and is called SAINTQ. I can't read it 'cause it's paywalled, but it's an interesting trend. Is there some broadly perceived deficiency in the current algorithms that led to two studies getting knocked out this month, or is this just a coincidence? Who wants to get some of their protein-protein interaction RAW files and a 12 pack and see which one will solve your pressing biological problem? You know how to reach me. Tonight wouldn't be ideal though (Go Celtics!)
You can check out SAINTQ in Proteomics here.
Wednesday, April 27, 2016
I always get really excited when somebody outside the field asks if we can do protein-protein interactions with mass spectrometry! "Hell yeah we....wait, actually yeah we can but it's still pretty hard to do..." Better to say..."Yes, but there are 15 different ways you can do it and ten ways to do the downstream analysis..."? So far the enthusiastic shouting voice in my head still always wins, fortunately!
I like this new method in this month's JPR. It's called SAFER, which stands for something. It's a much less complicated way of thinking about your protein-protein interactions, and from the data they present here -- both stuff they generate themselves on their Orbitrap Velos as well as stuff they get from PRIDE -- it may work better than stuff you might be using like t-tests or even the great SAINT methodology.
Unlike SAINT, there is no nice program to download. SAFER takes your data from MaxQuant and Perseus and then you do some manual filtering and PCA analysis. So if your normal pipeline revolves around MaxQuant you don't have to learn much else. If you aren't using MaxQuant as your pipeline, it looks simple enough that you could make it work with your label free quan software, but you'll have to put in a little more legwork to put the method together.
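I haven't seen the authors' actual scripts, but the spirit of that workflow -- a quantified protein matrix, a filtering step, then PCA to pull real interactors away from the background -- can be sketched in a few lines of Python. The matrix, the filter threshold, and the effect sizes below are all made up for illustration:

```python
import numpy as np

# Toy intensity matrix: rows = proteins, columns = runs
# (3 bait pulldown replicates, then 3 control replicates).
# A real SAFER-style analysis would start from MaxQuant's proteinGroups.txt.
rng = np.random.default_rng(0)
background = rng.normal(20, 1, size=(50, 6))          # flat across all runs
interactor = np.hstack([rng.normal(25, 0.5, (5, 3)),  # enriched in bait runs...
                        rng.normal(20, 0.5, (5, 3))]) # ...but not in controls
X = np.vstack([background, interactor])

# Simple manual filtering step: keep proteins quantified well above
# a noise floor (in this toy everything passes).
X = X[X.mean(axis=1) > 15]

# PCA via SVD on the mean-centered matrix; PC1 should capture the
# bait-vs-control contrast, so the interactors get extreme PC1 scores.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T   # project each protein onto the first two PCs
```

The point of the sketch is just that the proteins sitting far out on PC1 are your candidate interactors, which is essentially what a manual MaxQuant/Perseus pipeline would hand you.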
Monday, April 25, 2016
This is a resource that I definitely don't use as much as I should. It gets mentioned on the blog here and there but it is always good to remind people how awesome this is.
The ABRF message board is a great place to go when you've got a tough mass spec question or even places to post your success stories. This community has been active for a long time and some seriously skilled technical experts moderate and contribute to this resource.
You can access it here.
Friday, April 22, 2016
What is this? Top down proteomics week? One day I strongly expect every week will be, but I just ran into a couple of good papers that I like.
This one is from Claudia Martelli et al., and is available open access here. I like this one 'cause it's another example of top down proteomics being applied right now.
This group cares about medulloblastomas, the most common brain tumor in children. From a run down of the corresponding author's Google Scholar profile you can see that this is what this group does. And they aren't limited to just our technologies. They and other groups have been throwing all sorts of high-tech firepower at this messed up disease.
In this paper they throw top-down proteomics into the mix using an Orbitrap Elite. And -- BOOM -- some new stuff they haven't seen before. Did they identify thousands of proteins? Nope. But what they did find was stuff they'd never seen before. This protein shouldn't be here, that's a weird modification on it, etc. The table is very nostalgic to me. Remember a few years ago when every protein we identified would go into a table in the paper intact? They could do that here. Sure, it's a small dataset, but if it reveals new information about a disease you care about? Time to get on it!
Wednesday, April 20, 2016
Anyone around the D.C. and MD area interested in non-protein mass spectrometry? While the Orbitrap has spent ten years re-defining proteomics, a smaller group of people have been using this technology to do the same with small molecules and metabolomics. On May 19th at the NIH there will be a bunch of this research shown off.
This will be very low on marketing mumbo-jumbo. The goal here is to show how you lucky Orbitrap users can switch over if you want to and probably do way better targeted small molecule and global metabolomics than anybody in your facility using something else.
I'm not trying to put anyone out of work, but really, think about it, you're probably currently identifying tens of thousands of complex biomolecules per run and using nanoflow separations!! If the theoretical human metabolome is only 6,000 things (ref? 4000 things? )...um....why couldn't you just do that? I think you'll be really surprised by how easy it is to take what you've got in your lab right now and start doing better metabolomics than anyone has ever published in your field.
As a plus, you'll see some sweet imaging Orbitrap data as well as a real-world application of PaperSpray. One drop of blood - 1 minute later - what is wrong with this person!?!?! I'm super psyched for both of those. Warning: I won't be available for conversations during any of these talks. Come early or stay late if you want to chat! (Just kidding about the early part!)
You can register for this here.
Tuesday, April 19, 2016
Top-down proteomics is something I think the whole scientific community finds really exciting. It's that goal off in the distance of what proteomics will one day become, and we're solidly and steadily progressing toward it. All of our current techniques are, well, compromises. By necessity we have to make our experiments way more complex and throw away lots of information -- by digesting stuff.
It might be easy sometimes to dismiss the Kelleher group a little as futurists. Yes, they're absolutely pushing the technology further...but we'll only be able to solve biological problems with their awesome tools and techniques in a decade or two, after a few big mass spec advances. Come on, don't tell me you haven't sat back and thought that (or said it).
This recent paper from Ioanna Ntai et al., shows that there is stuff that we can gain from integrating top down proteomics into our work, right now. Does current technology limit our ability to really do comprehensive top down proteomics? Sure, but this shows that even around our current limitations there is so much to be gained from top-down that it can still add some clarity to biological problems right now!
What'd they do? They took data from some xenografts (human tumors grown in mice) and studied those tumors with RNA-Seq, bottom-up proteomics, and fractionated intact proteins for top-down. There aren't many details on the sequencing in the paper, and the bottom-up definitely wasn't the emphasis here as it was performed on an old instrument (having personally visited all these authors' labs, I know they have nice stuff, but I guess it was all busy) ;) They do get points from this noisy blogger for employing PEAKS Studio so they could easily look for mutations and PTMs. (Which I LOVE when you've got secondary confirmation methods like they have here!)
The top-down was done on an Orbitrap Elite using two different methods depending on the protein size....which makes a lot of sense....and makes me feel like I should read more papers from this group. Ugh. Okay.
I might be behind the curve, but I just learned about this thing. If you have a Google account you can set up Google Scholar alerts. So every time you go into Google Scholar a little bell will be in the side of the screen that will show you that someone published something on your interest list. You can customize Chrome to show that alert in different ways. So from now on I'll get a little red alert in my browser when the Kelleher lab publishes a new paper. That's pretty cool, right?!?
BACK ON TOPIC:
Just kidding. Something neat in the methods is something called "The Advanced Protein Assay" from a company based in Colorado. Hunted it down and it looks pretty cool. Load concentration on mass specs is something I think we've all kind of...guessed..??...on? Pierce recently knocked out a cool thing that might be similar, one that can quantify digested peptides, and I've heard nothing but positive things about it.
BACK ON TOPIC (fo' real):
TL/DR: Current instrumentation and techniques can boost your proteomics workflows RIGHT NOW. Maybe stop thinking of top-down as "I can't get all one billion proteoforms today so it's no good." Maybe think of it as: holy cow, I might be able to add 1,000 small protein IDs to understanding this biological problem, and maybe that's what I need to get to the bottom of this project.
Monday, April 18, 2016
Whoa! Playing catch-up bigtime this morning, so I've gotta move fast. Behind on everything, but this one is worth sharing before I try getting caught up. In this month's
there is this piece, called "A Clarion Call for Proteomics." I didn't know what a Clarion Call was, so I Google Image searched it, and...that picture comes up. I refuse to investigate further. That will do. To those of you that also keep red/blue 3D glasses in your office (for tertiary protein structure analysis, of course), totally check that image out.
In all seriousness, whatever a Clarion Call is, this little op-ed is a nice read. It introduces several nice new pieces of work and leads to an opinion article from some guy named Coon on where proteomics is and where it needs to be going if it's going to be the piece it needs to be in personalized medicine in the future.
Saturday, April 16, 2016
I LOVE my Proteome Destroyer. Yeah, it's getting a little older, but if I queue up 192 files for mutation detection and label free quantification 1) I figure it'll be done by the end of the weekend and 2) I don't have to worry about leaving the heat on. It'll keep my little house nice and toasty.
I know OmicsPCs sells more powerful and efficient stuff these days, but this monster is still enough for me and my recreational proteomics goals.
Y'all have a great weekend! I'm gonna go play on this thing back home and hope to have a stack of cool mutations to validate when I get back!
Friday, April 15, 2016
A colleague is a little late for a meeting, or I'm early. Since I don't think the latter has ever happened, I'm gonna assume she's late. While waiting I ran into this study that just popped up on Twitter!
It's from Trayambak Basak et al., and is (paywalled) here.
What this team aims to answer is: which is better, iTRAQ analysis or the patented DIA method commonly referred to as SWATH?
For a model system, they prep some yeast, label part of it with iTRAQ 4-plex and do 2D separation with SCX followed by 99 minute gradients. Looks like they do 7 fractions. So something in the range of 10 hours of total run time.
This half day of run time yields them 800 or so quantifiable proteins. The DIA analysis gave them a similar number.
They compensate for the requirement in the DIA to use 50 ppm mass tolerance and only 6 fragments per ID by requiring 2 peptides per protein. This definitely caused a decrease in their protein numbers, but I think it's essential here. 50 ppm mass tolerance is pretty tight on a smaller MS/MS fragment, but if you're looking in the higher mass range then the tolerance is gonna get kinda wobbly (fortunately, even in a busy DIA window, there is typically less stuff in the higher mass range).
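A quick back-of-the-envelope in Python shows why a fixed ppm tolerance behaves so differently across the fragment mass range (the two m/z values are just illustrative endpoints):

```python
def ppm_window(mz, tol_ppm=50.0):
    """Absolute (one-sided) width in Da of a ppm tolerance at a given m/z."""
    return mz * tol_ppm / 1e6

# 50 ppm is a narrow absolute window for a small fragment...
small = ppm_window(200.0)    # 0.01 Da at m/z 200
# ...but a 10x wider one for a large fragment in the same spectrum
large = ppm_window(2000.0)   # 0.10 Da at m/z 2000
```

So the same 50 ppm setting that feels tight on low-mass fragments is a pretty generous window up at high m/z, which is exactly why the 2-peptides-per-protein filter is a sensible extra safeguard.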
In the end they have a nice Venn diagram that shows the 2 techniques are very complementary. iTRAQ picks up some that DIA didn't and so on. They end up with something like 1500 quantified proteins.
So, if you have a TripleTOF and you want to get the deepest possible quantitative proteome coverage, you should prep half your sample for iTRAQ and do SWATH on the other half to get down in the grass!
I like this paper because this group said "hey, we've got an older instrument and there are a lot of them in the field, how do we use the tools we have available to get the most out of it?" This is super valuable! Look, it's really cool that Josh Coon's lab can get 4,000 yeast proteins in a 60 minute run on a first generation Fusion. That's awesome. But there are a lot of old instruments out there and sometimes you are stuck with the technology that you have. Funding agencies don't exactly hand out Orbitrap Fusions.
You'll never identify 4,000 proteins in an hour on a QTOF...or an ion trap...or on a sector instrument... But that doesn't mean you can't do good science. This study shows the potential of the older technologies. You have to be smart and do good science up front. It may take you two to three days of runs to get those 4,000 proteins, but if you put the work in you can eventually get deep coverage.
A blogger with a Finnigan LCQ approves of this study.
Thursday, April 14, 2016
Don't watch this video yet! But it is fantastic.
KRAS is something that researchers (including a bunch of my friends!) have been tirelessly working on for 60 years or so. Why? 'Cause it's crazy important. 95% of pancreatic cancers have KRAS mutations. And we all know how pancreatic cancer almost always turns out.
One of the first things Harold Varmus did when the POTUS appointed him to helm the NCI was set up a focused RAS Initiative project to get to the bottom of this stuff. And a ton of smart people are working hard on RAS from funding from this initiative and many others. We've got to figure this stuff out. And, because it is a mutation, well -- most of the work has been done via genomics and transcriptomics. Makes sense, right? Proteomics hasn't been ignored, but you could argue pretty easily it certainly hasn't been the focus.
Ben -- take a deep breath. While I'm building a wall of self-control around my enthusiasm, I suggest you check out this paper from Chris Tape et al., that came out in today's CELL (and is open access).
One super dangerous thing that we got into in early cancer work was studying cancer cells in isolation. We learned a lot by taking cancer cells and getting a pure growing culture of the immortal cells. That's how you'd do it in microbiology, and cancer was studied the same way. The problem is that cancer isn't an isolated event. Sure, there are a bunch of gross cells growing all crazy, by themselves, but they have to interact with all the cells around them.
SO, this is where this super smart study comes in. I don't have my head fully wrapped around it (it's a CELL paper, after all). They did the normal stuff, of course. Did some quantitative proteomics of their interesting KRAS cell lines, quantified some cytokines and verified previous findings about growth factors. Nice start!
Next? Well, they take this KRAS cell line and do thorough quantitative phosphoproteomics (SILAC based) on the cell line with various inhibitors of phosphorylation that are important throughout the pathway in the cell. RAS pathways involve all sorts of key operators: JAK/STAT, ERK, heck, you name it. So...they inhibited distinct pathways -- to determine how KRAS phosphorylations proceed under these conditions. Stop there, you've got a nice paper! Nope, we haven't even gotten to my favorite part.
Okay, I'm going to even skip the point where they do TMT 10-plex for multi-axis phosphoproteomics (WHAT!?!?!) 'cause I've got a lot of work to still do tonight.
Let's go back to thinking about cells in their native state. Not all cells go bad. Some cells go crazy and eventually divide out of control. But they are surrounded by good cells. So. What if you wanted to study this? Would you take some of your whacko cells and grow them up and mix them with good cells and see what happens? Maybe, but how would you tell what phospho cascades are changing in which cells?
You SILAC label the two different populations differently. And that is what they did. You take all the cells and then do phosphoproteomics on them and you can see that the KRAS cells are secreting signals that cause the stromal (surrounding) cells to come to their aid!
Something like this --
We've known for a while that stroma plays some role in the development of pancreatic cancers, but an awful lot of sequencing has turned up very little. It's protein-level post-translational modifications that are to blame here. These beautiful, natural phospho signaling cascades are being diverted -- not only within one cell, but to affect cells that are technically not cancer cells on their own. Will this have some effect at the genetic level? Sure, eventually, as these signals lead to differential regulation of transcripts. But we're missing an awful lot of the picture here and I bet a lot of observations of real tumors are going to make a lot of sense now in this context.
I could go on about the validation, but, again, I've got a long night ahead.
Absolutely stunning paper.
Wednesday, April 13, 2016
Y'all have this unfortunately accurate level of insight into how my brain works. Start off with a cool paper that pushes the limits of detection for absolute quan in a Q Exactive, move to a limbo analogy, find out David Hasselhoff made an album called "Do the Limbo Dance" (probably only released for Germany...but I refuse to investigate further)!
The paper in question is by M Concheiro et al., and you can find it at PubMed here. In this study these researchers are working on super low-level quantification for a deadly illegal drug in oral fluid. Turns out that previous detection methods yield a lot of false positives as even being in the same room with people using this nefarious substance can cause a person to test positive for it using the classical assays.
What's the solution? A better target AND more resolution, of course!
So these researchers work up a better method using the Q Exactive and targeted MS2 (which we now call Parallel Reaction Monitoring or PRM). They optimize the method by injecting controls into their complex matrix (oral fluid).
To get the best sensitivity (while maintaining robustness) they use microflow separation (looks like 20 uL/min flow rates).
How'd they do? Pretty good. They find that the Q Exactive was linear for one of these compounds down to 15 picograms/mL and they had CVs at all levels in the 10% range.
I investigated this one a little personally, 'cause I generally consider Q Exactive PRM sensitivity to be better than this. I've got a couple compounds I've gotten down to 2 picograms/mL with standard flow. One reason their sensitivity is lower than mine is that they had to work in negative mode here. Okay. That makes 15 seem pretty good. I haven't done a ton of QE negative quan, but on my old QTrap, I'd expect at least a 1-2 log drop if I had to go to that evil negative polarity. That makes sense to me.
Secondly (that's a word?), I think this group undercut themselves a little. If you look at the peaks, they are somewhere in the 15-20 second range at base. I heard from another researcher who had access to this QE's method that the maximum injection time for the PRM was 120 ms.
When I'm trying to show off Orbitrap sensitivity I crank that maximum fill time up as high as I can get away with. If I assume a 15 second peak, I'd almost increase that injection time 10x. At 1200 ms injection time I'd still get 12+ scans across the peak, which meets the Case Western requirements for label free quan (the gold standard in my mind), and I'd be able to get a bunch more ions. I can't say that the sensitivity would go up 10x here, but I guarantee it wouldn't decrease.
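The arithmetic behind that claim is simple enough to sketch. I'm using the peak width assumed above and, as a worst case, treating the maximum injection time as the whole PRM cycle:

```python
def scans_across_peak(peak_width_s, cycle_time_s):
    """How many scan cycles land across one chromatographic peak."""
    return peak_width_s / cycle_time_s

# 15 s peak at base, 1200 ms max injection time (worst case: the fill
# time dominates the cycle) -> still 12+ points across the peak.
points = scans_across_peak(15.0, 1.2)   # 12.5
```

In other words, even a 10x longer fill time leaves you comfortably above the points-across-the-peak threshold for good quan, and every extra millisecond of fill time is more ions in the trap.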
Anywho, this is a nice paper showing 1) the benefits of microflow for sensitivity and 2) that you can go real low in small molecule detection and quan in matrix with a Q Exactive, so it's a solid win in my book! I leave you in the capable hands of THE HOFF.
Tuesday, April 12, 2016
Monday, April 11, 2016
Sweet! So you found a post translational modification in your big 'ol dataset. What do you do next? Great question!
Why don't you check the protein in iPTMnet (here!)?
iPTMnet is a compilation of proteins and their annotated PTMs from all sorts of organisms. It is super easy to use and returns data really fast.
Punch in the name of your protein of interest, limit it to your organism (if you want to) and it'll show you all the PTMs that people have found for this protein and where they are located.
It'll also break them down in a table and have a link to where they got the data for that PTM.
It is worth noting that most of the PTM work in the world is still phosphorylations, and PhosphoSite is probably the best place to deposit that kind of data. So when I punched in a couple proteins off the top of my head -- well...I just got the same data I'd get punching this protein name into PhosphoSite (every modification listed just hyperlinked into PhosphoSite). But there are groups out there working on all sorts of PTMs and we're going to see more variety in PTMs in our databases soon!
The interface is nice enough to be a go-to AND it is pretty intuitive. When Dr. Collaborator asks for advice on where to get more information on the ubiquitinated peptides you found, it'll sound more linear to recommend she check here!
Saturday, April 9, 2016
Okay. I'm gonna straight up admit I'm not entirely sure what is happening here! But this really fun image brought me to some cool stuff. (I did not make this, btw!!)
You have to check it out for yourself here. The 2015 C-HPP meeting, which according to the link will be held this June, appears to have been in some sort of time vortex, only explained by the image above.
What you'll find there is a breakdown of the attendees who were either there (or will be there in June?) and then you'll find 2 large videos you can download from DropBox that will show you what did (or will) happen there. Also, many of the C-HPP country leaders explain what chromosome they are deeply studying in these great papers we keep seeing as well as why they chose that chromosome.
Hey, I'm all for anything that brings attention to our field and the incredible work we're capable of doing, so if you want to make it all silly you have my permission and endorsement. The C-HPP is one of my favorite big projects, not only because of the awesome stuff they are finding with this primary research model -- but also 'cause of the fact this project transcends silly politics.
And if this group of researchers are going to embrace the timey wimey stuff, I'm not going to deter it. In fact, I'll probably post the coolest David Tennant monologue from his best Piperless Dr. Who episode to up the nerd level right out the ceiling...sorry...
Friday, April 8, 2016
I've been thinking about viruses a lot this week thanks to contracting a super gross one over the weekend. Rather than continue to be whiny and dramatic about it, I thought I'd try wrapping my head in duct tape to hold it together and see what we know about the topic.
LOTS! Way too much for me to absorb without pre-existing information or degrees, but..wait...here is an approachable and super cool (and recent) triple-omics approach on the topic. The paper in question is from Kshitij Khatri et al., out of Boston University.
In this study they take some Influenza A and infect some poor miserable chicken cells with them. Not just happy with a single -omics approach to the different viruses grown in the cells, they go for 3. They do some glycomics (sugars only), glycoproteomics (sugars still stuck to peptides) and some regular shotgun proteomics.
Why'd they do all this? Well, these glycoprotein things appear to be how these miserable virus things hide from our immune responses. From implications in the introduction section it sounds like the glycan chain conformations are really the most important aspects of the disease and immunity. The triple-tiered approach helps them compare/contrast the viruses at the different levels.
I walked away for a bit, had some more coffee, and thought about how glycoproteomics is normally done:
1) Glycans cleaved off and the peptides analyzed for the cleavage mass shift (which, I'm mostly okay with if you are really careful. Low mass resolution/accuracy instruments confuse this +1 Da shift with the C13 isotopes, and even with high res the results can be a little dicey if you aren't prepared to deal with it.)
2) Enriched intact glycopeptide analysis - which I'm totally cool with, but there are SO many combinations that it always feels like some data is left on the table. ETD and the algorithms to work with the data have gotten so much better in the last couple of years that it's kinda unbelievable, but that is still a HUGE search space. I encourage you to try it. Go get one of the really good recent glycoproteomics datasets offline from PRIDE and, when you've got some free search time, search it with everything you've got. As good as the spectra are, and as good as new high-powered algorithms like Byonic are, there are a lot of unexplained spectra there. And FDR never works great when you've got such a huge number of combinations. (Good data gets thrown out sometimes and it's hard to tell the good from the bad.)
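To put a rough number on that "+1 Da" ambiguity from approach 1: the deamidation left behind by glycan removal is actually ~0.984 Da, while the natural 13C isotope spacing is ~1.003 Da, and a quick calculation shows the resolving power you'd need to tell them apart (back-of-the-envelope, assuming a singly charged peptide):

```python
# Monoisotopic mass differences at play when glycans are stripped enzymatically:
DEAMIDATION = 0.98402   # N -> D shift left behind at the former glycosite
C13_SPACING = 1.00336   # spacing of the natural 13C isotope envelope

def resolving_power_needed(mz):
    """Rough resolving power needed to separate a deamidated monoisotope
    from the first 13C isotope of the unmodified peptide at this m/z."""
    return mz / (C13_SPACING - DEAMIDATION)

# At m/z 1000 the two signals sit ~0.019 Da apart, so you need R in the
# ~52,000 range -- routine for an Orbitrap, hopeless for a low-res trap.
r_needed = resolving_power_needed(1000.0)
```

That's the quantitative reason the "cleave-and-look-for-the-shift" approach is fine on high res instruments if you're careful, and dicey everywhere else.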
Is there a better answer? Maybe this approach! Is it triple the amount of work? Kinda...But look at how much data they get and how good it is!
And they have this level of certainty because they have free glycan analysis and they've got the intact glycopeptides and they can go back to their PNGase (or whatever) modified peptides. Obviously, they go further than I can as they bust out some arrays and 3D modeling, but they've got the data from three directions.
Now, it is worth throwing out there that they did all of this without ETD (QE Plus, FTW!) so the extra data points are even more critical for verification, but still...wow.... Yeah, and its probably also fair to point out that this is a virus so it is relatively simple, but still...wow...
I go through this paper and at every turn I think "this is seriously smart..." from the fact someone on this team is WAY better at running PEAKS than I am through the downstream analysis (to end up at immunogenic modeling!?!? what!?!?). So, if you are sitting there thinking that you'd love to understand what effects glycosylation has in your model, but you can't get into glycoproteomics 'cause you've just got a Q Exactive, I suggest you download this new paper and reconsider. 'Cause I think this is as good a study as I've ever seen.
Thursday, April 7, 2016
While maybe not 100.0% factually accurate (hey, I said it was for students!) it's probably the highest budget instructional promo on the field that I've run into (that wasn't directly made by some big evil corporation). If nothing else you could post it on the FaceBook so your great aunt knows what you went to school all those years for!
Wednesday, April 6, 2016
What does proteomics need more of? Easy pretty GUIs! This morning a friend sent me a link to a program he's been working with (and presumably likes...). In the instructions, the very first line is (no joking): "open your command prompt."
PatternLab might just be the complete opposite! It is super GUI driven and uses nice primary colors. Look, I don't require a search engine that could fit in at Toys R Us...but...it might be a preference and I'll take it over a C:\ prompt (sorry, I know I sound like a slacker, but it's 2016!)
(Typed "GUI meme" into Google. Was NOT disappointed)
AND you aren't necessarily giving up power here. This pretty interface has a ton of powerful tools in it including:
PepExplorer (de novo!?! haven't checked it yet...)
XDScoring (for Phosphosites!)
Integrated XIC extraction
You can also add in other external modules. SIM-XL (for crosslinking studies!) and something called Y.A.D.A. that appears to be a deconvoluter of some kind. I didn't have time to explore it yet. There is a lot in this interface.
And I got results without reading the instruction manual at all. Though, I suspect I'd get better data if I did read some of it. I was just a little unclear about the mass cutoffs and centering, but it spit out a protein table with a rational number of results and that is all the time I really have for it this morning.
If I had a criticism of the program, I'd say it could be easier to load files into it. You point it to folders rather than files and it doesn't seem to have a persistent memory -- I have to start at MyComputer every time and hunt down my folder location. If your paths are as ridiculous as mine, it gets tiresome on search 2 or 3. I'd also say that Comet searches are a good bit slower than I'm used to seeing them through other interfaces, but they aren't that bad, just noting an observation. I started with a whole human cell lysate sample and had a data table in 20 minutes or so (filtering and data visualization are separate steps). The same dataset on PeptideShaker (just using Comet) is probably under 10 on my PC if I run SearchGUI and have it automatically bump open PeptideShaker when it's done. There might be a point where you can optimize threads and resources to speed it up (you'd need to read the manual for that, so this point might be totally moot).
TL/DR: PatternLab is nice shotgun proteomics software with a ton of features and you probably already know how to use it. There are some tools that you can run through it (SIM-XL comes to mind) that may not have a true equivalent.
You can read about it in Nature Protocols! here.
And you can check out the website and download the program here. (Final note: it is ridiculously easy to install!) If you want to load RAW files directly the MSFileReader needs to be installed on your PC, but chances are you already have it.
Edit: Definitely check out the comments below from one of the authors!
Monday, April 4, 2016
My second job out of undergrad was in a hospital clinical lab. This was a huge turning point in my life. This was where I really learned how the science of medicine and diagnostics worked and where my whole obsession with instrumentation (and quality control!) developed. I loved that job, worked hard, but a lot of the time I had absolutely no idea what I was doing.
One thing that I did was use a cytospin to stick eosinophils to a slide and then stain them. Today, I learned what those cells are, why they are important, how little we know/knew about them -- and now we have an incredibly deep proteome and a MUCH better understanding of them!
The background on eosinophils is thanks to the 974 words that Wikipedia has on the topic and this awesome deep proteome coverage and thorough downstream study is from Emily Wilkerson et al., and is in this month's
So, the reason my dumb early-20s self was cytospinning these cells is that they are white blood cells that make up about 1% of normal cell counts, and high numbers of them are associated with a variety of diseases. Eosinophil actually means "acid loving," so a really simple staining technique -- one you can do even with no knowledge of what you are doing or why -- makes them stand out like the weird cell in the picture above. Large numbers of these cells mean that your immune system is fighting something like crazy, and they can be an important clue to aid in diagnosis or in tracking disease progression.
Of course we need a proteome of these things! At the point this study started, we knew about 500 of the proteins that were associated with this cell type.
Afterward? Over 100,000 peptides were identified (over 7000 unique proteins) and we find out that some of the 500 we did have weren't exactly right thanks to contaminated starting material.
How'd they do it? They first separated eosinophils from the rest of the cells by gradient centrifugation on Percoll. 'Cause that wasn't hard enough, they also used magnetic beads coupled to antibodies to make sure they were looking at a pure collection of these cells. (Again, these are about 1% of the cell numbers. They either started with a TON of blood or they had a super sensitive mass spec.) Wait -- it gets better. Then they said "well, this is a ridiculously low amount of material -- let's also do phosphoproteomics on it as well."
This is interesting -- all the peptides were run through IMAC. What stuck was the phosphopeptides. What didn't stick -- that's the regular peptides. This is a nice conservation step for anybody looking at a really low amount of material and trying to decide whether they can still do phospho. The two sets of peptides were then fractionated by high pH reverse phase, fraction collected, and then run out on 100 minute gradients on an Orbitrap Fusion running Top Speed with 3 second cycle times. Interestingly, the MS1s were obtained at a relatively low 60k resolution and the ion trap was used for MS/MS. Obviously the real emphasis here was on the maximum number of MS/MS scans. Depth depth depth!
The downstream analysis is exceptionally thorough and really gives a picture of how this cell type works. What are the most abundant proteins? How do they rank in the cell? How do the other protein abundances stack up? They take it a step further by comparing "normal" eosinophils to those that are activated in certain disease states.
Seriously, a solid study. It would have been really easy to start this study and say "hey, here is the proteome of an important and poorly understood cell type." And that obviously would have published really well on its own. Instead, they put in the leg work and really add to our understanding of this cell. No question this study is going to need to be cited in every hematology textbook written after this month. Did Wisconsin just triple the amount of knowledge we had on this cell type? You could easily go through this analysis and make a much more thorough write-up of this cell type than Wikipedia has now! As good as the proteomics is here, the way this group takes the protein list and abundances and translates them into really useful and palatable biological information on this cell type is what takes this study to the next level.
Friday, April 1, 2016
When I hear "plug and play" it makes me really happy. Remember when we had to get a CD (or floppy disk...and install software drivers), then plug our new peripheral into a COM port, and then go to the hardware settings and enable the stupid device? "Yay! A new controller, maybe I'll be able to use it to play DOOM in a half hour or so..." Plug and play. Plug in the new mouse. Boom, it works.
So....Plug N' Play human phosphoproteomics?!?! What?? That's 2 awesome things, but that doesn't make sense. Phosphoproteomics is really hard. You've got to spend a week preparing special metal affinity columns and enriching samples and running the sample 50 different ways and on and on!
Check out this new paper in Nature Methods from Robert Lawrence et al.! They take a complex phosphoproteomic model (the MCF-7 breast cancer cell line) and look at the phosphoproteome using normal (difficult and described above) shotgun DDA techniques, then DIA (like that SWATH thing), and finally with Parallel Reaction Monitoring (PRM) for targeted phosphopeptides of interest.
What did they find out? PRM kicks ass. Actually, PRM totally kicks ass. I swear, this is rapidly becoming my favorite thing. When they targeted 101 phosphopeptides that they were interested in -- some of which were decent abundance, some of them were exceptionally low abundance (DDA and DIA didn't even pick them up!) -- they found that not only could they detect them, but they were very reproducible.
If you're thinking "so what? We know PRM is more sensitive than DIA and DDA. When I do phosphoproteomics the traditional way, I come back with 40,000 phosphopeptides" -- hey, I'm not knocking the peak-bagging labs out there. If your goal is to get a method that gets more phosphopeptides than the last paper, more power to you. But a lot of biologists just care about one pathway and how much information they can get about that pathway in a run. And this is where this paper makes the turn that got it into a Nature publication.
You can access this resource here.
Did you just read those three bullet points and run around laughing and jumping up and down? Okay, maybe that was just me. WHOA!! Seriously? This can't be real, right? No way. Sorry, another round of me laughing like a psycho and scaring my dogs!
THIS IS REAL LIFE.
Check this out!
You tell it what pathway you are interested in. In this case, I'm going to say AMPK signaling as an example. Then it'll grab the known phosphosites that are visible to mass spectrometry with tryptic digestion. I can pick individual ones...or I can grab the whole known pathway by clicking the top option (let's see how it works with a whole complex phosphorylation cascade...)
It plots everything over a 60 minute gradient. The vertical axis is the number of targets that should be eluting at that point (this is, btw, experimentally observed data!!!!). So, at 10 minutes in, 8 of these targets should be eluting at once. Ummm...so my QE Classic running 35k resolution PRMs gets about 8 Hz. Ummm...perfect!
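The logic behind that plot is easy to picture: for each time point in the gradient, count how many targets have a scheduled elution window covering it, and compare that to what your PRM scan rate can handle. Here's a minimal sketch of that idea -- the target names, retention times, and window width are all made-up placeholders, not values from the actual tool:

```python
# Minimal sketch (not the paper's actual tool): given hypothetical retention
# times and an assumed elution-window width for scheduled PRM targets, count
# how many targets co-elute at each minute of a 60 min gradient and flag any
# time point where the count exceeds the instrument's PRM scan capacity.

# Hypothetical targets: (name, retention time in minutes)
targets = [("pT172_PRKAA1", 22.5), ("pS79_ACACA", 24.0),
           ("pS792_RPTOR", 23.1), ("pS555_ULK1", 40.2)]

WINDOW = 4.0        # assumed scheduling window (+/- 2 min) per target
MAX_CONCURRENT = 8  # e.g. ~8 Hz PRM at 35k resolution, ~1 s cycle

def concurrent_targets(targets, minute, window=WINDOW):
    """Count targets whose scheduled window covers this time point."""
    return sum(1 for _, rt in targets if abs(rt - minute) <= window / 2)

for minute in range(61):
    n = concurrent_targets(targets, minute)
    if n > MAX_CONCURRENT:
        print(f"{minute} min: {n} targets -- over capacity, trim the list")
```

With this toy list nothing exceeds capacity; dropping a low-interest phosphosite from `targets` and recounting is exactly the "eliminate and re-adjust" behavior described below.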
If there are too many, I can eliminate phosphosites that are of less interest and it immediately re-adjusts. HOLY COW! Wait, it gets better!
Once you get this all set up right, there is a button at the bottom that says "Export Schedule". What's that do? Well, it just automatically makes this method for your Q Exactive.
I'm stretching it a little. It makes an Excel file. You do still have to open your QE method software and hit CTRL+C and CTRL+V.
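If even that copy-paste feels like too much work, turning a schedule like that into an inclusion-list file is a few lines of scripting. This sketch assumes made-up precursor values and writes column headers in the style a QE inclusion list expects -- treat both as illustrative assumptions, not the tool's actual export format:

```python
# Hedged sketch: convert a target schedule into a CSV inclusion list.
# The schedule rows below are hypothetical; the header names mimic the
# m/z / charge / time-window columns a QE method editor looks for.
import csv

# Hypothetical exported rows: (precursor m/z, charge, retention time in min)
schedule = [(744.37, 2, 22.5), (812.41, 2, 24.0), (655.29, 3, 40.2)]

with open("inclusion_list.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Mass [m/z]", "CS [z]", "Start [min]", "End [min]"])
    for mz, z, rt in schedule:
        # Assumed +/- 2 min scheduling window around each retention time
        w.writerow([mz, z, round(rt - 2, 2), round(rt + 2, 2)])
```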
Let's sum this up: Human cells respond to stress by signaling down distinct and reasonably well-understood pathways. Understanding these pathways is critical to the diagnosis of many diseases and often even to determining which chemotherapies people get when they have cancer. And these guys just made it so that you can instantly make a method for analyzing any one of these pathways. You can be ready to run in 5 minutes or so.
It's only April 1st, but this is my early pick for paper of 2016. If you don't want to hear me go on and on about how awesome this work is, you should probably try to avoid me!
Coincidentally, all this enthusiasm is getting in the car with me (after I have a shower) and going with me to the National Cancer Institute this morning. Can I work it into every single conversation I have today? Damn straight, I can!
Sincere thank you to this team for this amazing work, and to Julian for tipping me off to it!!!!