Wednesday, January 29, 2014

Comparative Analysis of Biological Sphingolipids with Glycerophospholipids and Diacylglycerol by LC-MS/MS


Uh oh! This isn't proteomics!  I'm trying to expand my range a little.  Last summer I had the opportunity to work with some lipidomics experts and it is something that I find interesting.  I also still think that it is the most difficult type of experiment I have ever seen performed with LC-MS.

This paper stands out to me because of its simplicity.  Btw, the article title is the same as the title of this blog post, and you can read it (open access!) here.

In this study, this team of researchers in Japan start out with a series of known controls for their lipids and work on simplifying the sample prep and LC conditions to be both simple (no derivatizations, no crazy fancy columns) and still robust.  Typically, compounds of this type are derivatized or at least alkylated before attempting LC-MS analysis.  Turns out it isn't necessary!  Now, it's still tough to extract the lipids and the sample prep still sounds like a bit of a nightmare what with the multiple solvent extractions and all, but it could be worse.

The LC-MS/MS conditions are described in painstaking detail to the point that I think I could set this experiment up without ever wondering "what do they mean here?"  The LC was a Thermo Ultimate 3000 running a standard PepMap C-18 column into a HESI source attached to a TSQ Vantage with multiple SRM targets.

The big question?  Do these simplifications have negative effects on the specificity of this analysis for the lipid compounds of interest?  I guess not, because they are able to extract, identify, and quantify these compounds in biological samples and their measurements correlate perfectly to phenotypic differences between these samples.

Awesome study that makes LC-MS analysis of lipids seem a little less daunting.  Thanks to @BioProteomics, who tweeted a link to this study on 1/27!

Tuesday, January 28, 2014

HeLa peptide digest from Pierce


Do you need a sample to optimize your global proteomics on?  Or would you like to simply feel inferior about your ability to prep and digest a cell pellet?  If either is the case, you should check out the HeLa protein standard from Pierce.  This is the single nicest tryptic digest I have ever seen.  This is the same digest that I've been doing all of these experiments on for the blog.  The digest is processed in huge quantities to minimize variation.  It is digested under perfect conditions using LysC first, followed by trypsin, so that <10% of cleavages are missed.

It is also cheap.  20 µg runs you less than $100, which works out to $0.50 per 0.1 µg injection.

You can get it here.

Monday, January 27, 2014

Proteomics of fear!?!?


This one is super interesting, albeit a little creepy.  This new paper currently in press at MCP takes a look at the changes in proteome profiles in the mouse brain during a model called "context fear conditioning".

Before I go any further, this is what Wikipedia and Google Images told me about this model:


This was the simplest illustration I could find in the time I allotted myself to write this during lunch.  Essentially, it's like the Pavlovian conditioning thing, except when you play a sound, you zap a mouse with electricity.  You then find that you scare the beejeesus out of the mouse every time you play the sound thereafter.  We had an electric fence around our little farm when I was a kid.  My brother and I often did experiments like this on our youngest brother.  He's still real jumpy, which is also a little scary since he carries a handgun for his job....but I digress!

I have to admit, I'm a little confused by the experimental design here from the animal standpoint.  In a nutshell, one group of mice was allowed to wander around in a new environment for 3 minutes and then zapped.  A second group was zapped as soon as they entered the new environment, then allowed to wander for 3 minutes.  I guess the first group now thinks that wandering is bad?  While the second sees the zap as a random act of violence and doesn't associate it with wandering?  They summarize the method, but reference 3 papers, none of which appear to be open access, so I'm going to need to be a little foggy on this one.

60 minutes after this little zap experiment, the heads of the mice are separated from their bodies (my least favorite thing I've EVER done for science!  Kudos to you guys with the fortitude to do that a lot), the brains are removed and subsectioned.

The hippocampus (memory) and cortex were subjected to protein arrays, probing for 84 different proteins.  And you know what?  They found statistically significant shifts!  It appears that the majority of these shifts are in phosphoproteins in the brain.  I'm unclear (never done protein arrays) whether the phosphorylation alone is enough to change the results of the array, but nevertheless, this is nuts.

Again, I know I didn't fully get this paper.  I don't understand the psychological context in which the model is framed, nor do I understand why the specific mouse strain (a mouse model of Down syndrome?) was employed and how this affects the layout here.  I also totally skipped over the fact that a drug I've never heard of was injected into some mice, which I'm sure adds more to the paper.

What I do get?  WE CAN SEE CHANGES IN MEMORY WITH PROTEOMICS!?!?!



The title of the paper is:  Protein profiles associated with context fear conditioning and their modulation by memantine

Happy Monday!


From Cyanide and Happiness!

Sunday, January 26, 2014

Targeted Peptide Measurements in Biology and Medicine


Currently in press at MCP, a paper with the title of this blog post.  It has a subtitle as well, "Best Practices for Mass Spectrometry-based Assay Development Using a Fit-for-Purpose Approach," but it could just as accurately be changed to "Who's who in targeted peptide quan," since it is essentially the summary of a meeting held at the NIH (under CPTAC, which I'll write about soon).

Anyway, definitely check this paper out since it's currently open access and really outlines the current progress and future direction of targeted approaches in the diagnosis and treatment of cancer.  Of course, any of these approaches could be adapted to your disease of choice.  Since this is a discussion of how to move targets into the clinic, the paper emphasizes the use of tiered models for dividing assays and the importance of validating results.  A great read for a Sunday morning!


Saturday, January 25, 2014

Want more data from Proteome Discoverer?



The output from Proteome Discoverer is an MSF file.  The output is trimmed down to be what has been deemed the most useful data for the average PD user.  But what if you want more?  What if you are curious about your Decoy matches, or just want to dig in deeper in order to develop your own nodes?

Fortunately, Proteome Discoverer keeps all that data in the MSF file; you just need a second program to open it up.  At the PD user's meeting, Bernard Delanghe mentioned SQLite as software to use to get to this extra information.

Now, if you are doing this for fun like me, you can get SQLite home for free.  You'll have to pay for it if you are using it for work, but they also have a free demo so you can check it out.  You can get it here.

What information does it get you?  I think a better question is what do you want!?! Or what don't you want?!?  I opened a PD MSF file from a 2 hour HeLa run in SQLite and it gave me 71 data and processing parameter categories in the nice, well organized and easy to read output shown below.



71 categories!  I highlighted the decoy peptides, because it is something I was recently asked about.  The MSF file has just an unbelievable amount of data in it.  And if you use this program or something like it you can get to all of it.
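Since the MSF file is, under the hood, just a SQLite database, you don't even need a dedicated GUI; any language with SQLite bindings will do.  Here's a minimal sketch in Python.  Fair warning: the table and column names below (`Peptides`, `Peptides_Decoy`, `Sequence`) are my assumptions for illustration, so list the tables in your own file first and adjust accordingly.  The mock in-memory database stands in for a real MSF file.

```python
import sqlite3

# An MSF file is a SQLite database, so for a real file you would use
# sqlite3.connect("myresults.msf"). Here we mock up a tiny stand-in
# database in memory; the table/column names are assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Peptides (PeptideID INTEGER, Sequence TEXT);
    CREATE TABLE Peptides_Decoy (PeptideID INTEGER, Sequence TEXT);
    INSERT INTO Peptides VALUES (1, 'SAMPLER'), (2, 'PEPTIDEK');
    INSERT INTO Peptides_Decoy VALUES (1, 'RELPMAS');
""")

# Step 1: list every table (category) the file actually contains.
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)

# Step 2: pull whatever you're curious about, e.g. the decoy matches.
decoys = con.execute("SELECT Sequence FROM Peptides_Decoy").fetchall()
print(len(decoys))
```

The same two-step pattern (list the tables, then query the one you want) is how you'd explore a real MSF file in any SQLite browser, too.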

For most people, though, PD has the data up front in the format that you really need.  If you want a feature moved to the front in PD for easy access you can always make a request on the BRIMS software portal.  A feature that I requested is now present in the alpha version of Proteome Discoverer 2.0.  The developers aren't spending the kind of time in labs that we are and feedback is necessary to make the next versions even better.

Friday, January 24, 2014

Fusion phospho method

One great thing about the Orbitrap Fusion is that it comes pre-loaded with great methods for most experiments.  Later software releases will include even more and better methods.  The initial software release did not include a pre-made method for phosphoproteomics.  Never fear!  Some friends and I wrote one up for you!  As always, please consider this just a starting point.  This is something I just do for fun after all!

In this particular experiment we wanted to go as fast as possible in order to dig phosphorylation events out of unenriched samples, so we went all ion trap at top speed, with low-res neutral loss events triggering a repeat of the fragmentation with ETD.  You have a ton of options for doing phosphoproteomics on the Fusion, but this one can help get you started.  You can find it in the Orbitrap methods database here.


Wednesday, January 22, 2014

QuaMeter! Assess the performance of all your LC-MS instruments over time!


Are you having trouble keeping track of all those instruments you have (poor baby)?  Would you love it if you could use a program to track the performance of ALL OF THEM over time?  Would you also like to track more than one or two metrics?  How about 40+ metrics?!

Then maybe you need to check out QuaMeter!  This not-quite-new, but totally new to me, program that I just saw in action comes to us by way of the Tabb lab at Vanderbilt.  And it is completely free; you can download both the binary and the source code off their website.

You can longitudinally track just about anything that is affecting your proteomics experiments.  And not just dumb things, but useful ones, like how your peptide ID numbers, your spray stability, your ion intensities, your ratio of tryptic to semitryptic IDs (digestion efficiency!?!), and your fill times have changed over time.  Tracking this info could let you in on things like your quadrupole beginning to get dirty on your Q-TOF, or the effect that new grad student is having on your results, by plotting all of these on a longitudinal scale.  "See that steep drop when Lani joined the group?"  Find out what is wrong with your lab and when it happened!

You can read the original paper here!

100% back online -- with new methods, even!

The blog should now be completely back up thanks to a little trick I use that I like to call


, which, I discovered while looking for an "Insomnia" image on Google, is also the name of a melodic death metal band from Greece that has a couple of great tracks but doesn't seem to have an album that I can purchase... I learn so much every day!

Anywho!  Better than just bringing back the old stuff, there is new stuff, too, thanks to some late evening thievery I performed at this place:


What did I get?  More methods!  There have been requests for LTQ methods, so I got some.  As a disclaimer, I don't know for sure that they work, but I am familiar with the work of some of the authors, so they're probably ;) okay (and they look good; I did use to run an LTQ for a living).  I also sat down and wrote a reporter ion triggered ETD method suitable for glycopeptide analysis on any Orbitrap system + ETD (see, you guys read here and request stuff, and I'll do what I can eventually!).  And I got my first hands-on time with the original Exactive, so I have a couple of new methods for that system.  I'm uploading all of this now, and they should be up by the time I drink the gallon of coffee necessary for me to wake up and brave the fantastic midwest weather this morning.




Tuesday, January 21, 2014

Post processing to make your ETD spectra kick ass!



In yet another blog post to originate completely by accident, some friends (who would rather remain anonymous) and I were doing some ETD optimization the other day and found that our first run (set up quickly prior to lunch) had ETD reaction times and anion target values that were far too low.  This resulted in ETD spectra that were dominated by the parent ion and a few charge reduced species.  Most of them looked like this (PD view):



Crappy, right?  From the all-important manual look through of our data, it was clear that we needed to jack up our anion target value, our reaction time, or both.

Since it was our only run, we went ahead and processed it anyway.  One of the things I wanted to toy around with was the "non-fragment filter" in PD.  This vaguely named node has a bunch of parameters specifically for working with ETD spectra, including removing the parent ion, charge-reduced species, neutral loss (think water loss) charge-reduced species, and even something called FT overtones (if anyone knows what those are, please comment!).


So we ran the crappy ETD spectra through this anyway, and guess what?!?!  The ETD spectra were all of a sudden awesome.  Not good.  Awesome.  The MS/MS spectrum below is the EXACT SAME SPECTRUM (4613) as the one above, just with all these useless ions cut out and the spectrum rescaled.


BOOM!  And this was a match.  A very nice, high scoring match.

Now, this node, which I am from here on out referring to as the "awesome ETD spectra" node (I'll try to get it changed in later versions), does come with a drawback that my friends were quick to point out while I was recovering from my blown mind.  The drawback is that if you didn't actually look at your RAW data, you could miss the fact that your reaction isn't optimized correctly.  Great point, right?  Ideally, we'd want our reaction tuned up correctly, then run the "awesome ETD spectra" node.  But once you get it tuned up, definitely definitely use this node to get great, high scoring matches out of your ETD spectra!
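For intuition about what this kind of filter is doing (my back-of-the-envelope sketch, not PD's actual implementation): electron transfer reduces the precursor's charge without changing its mass, so for a precursor at m/z p with charge z, the charge-reduced species land near m/z = p·z/(z−k) for k = 1..z−1.  Strip the peaks near those values (plus the unreacted precursor itself) and the c/z fragment ions get to dominate the rescaled spectrum:

```python
def strip_nonfragments(peaks, precursor_mz, charge, tol=1.5):
    """Remove the unreacted precursor and charge-reduced species from an
    ETD spectrum. `peaks` is a list of (mz, intensity) tuples. A rough
    sketch only -- the real PD node also handles neutral losses of the
    charge-reduced species and FT overtones."""
    # m/z of the precursor (z = charge) and each charge-reduced species
    # (same mass, fewer charges)
    contaminants = [precursor_mz * charge / z for z in range(1, charge + 1)]
    return [(mz, i) for (mz, i) in peaks
            if all(abs(mz - c) > tol for c in contaminants)]

# Hypothetical 3+ precursor at m/z 500 -> charge-reduced species at 750 and 1500.
spectrum = [(262.1, 5.0), (500.0, 90.0), (611.3, 8.0),
            (750.1, 60.0), (1099.6, 4.0), (1500.2, 30.0)]
kept = strip_nonfragments(spectrum, precursor_mz=500.0, charge=3)
print(kept)  # only the fragment ions survive
```

Which is exactly why the dominant junk peaks in the PD view above disappear and the match score jumps.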

Monday, January 20, 2014


I'm slowly coming back online here.  Thanks for all the support while I've dealt with these technical difficulties.  Entries should continue to randomly appear in the webstream here, and Google searches should come back online over the next couple of days.


Tuesday, January 14, 2014

Blog is down


The blog is currently down.  Due to high demand, I have set up continued access to the Orbitrap methods database and to the Proteome Discoverer 1.4 videos.

Thursday, January 9, 2014

SprayQC -- why isn't everyone using this?


I've written about this before, and I'm still shocked to see that it hasn't caught on.
SprayQC is a free program from Max Planck that monitors your spray stability and your LC conditions and stops your runs AND SENDS YOU AN EMAIL detailing the problem when something has gone awry.

It's been out for a couple of years now.  The instructions are real easy to follow and now it supports all sorts of sources thanks to an active community of developers.

Visit it here!
Read the original paper here!

Wednesday, January 8, 2014

DeNovoGUI! Easy, free de novo sequencing!


Normally I'm pretty angry when I get scooped.  Not this time!  I've wanted a GUI for PepNovo+ so badly that last summer I hired a programmer in Indianapolis with my own money to help me wrap up the VB script that I wrote.  Unfortunately the project was dropped when my programmer obtained a real job, saving me some cash!  So I kept on running PepNovo+ from the command prompt when I needed to.

AND somebody else wrote it!  And it's super easy to install and networks perfectly with my Proteome Discoverer de novo workflow, which is described in this video!

Read the paper (open access) here!  Download this great software here!


Tuesday, January 7, 2014

Uniprot vs IPI databases




I guess I kicked up a little bit of a controversy today, as I've gotten a couple of emails already about this (the previous entry).

I've got a lot going on today, so I don't want to go into the true differences between UniProt TrEMBL, UniProt Swiss-Prot, and the IPI databases; I'm just going to show you some real data.

I ran some HeLa digest a while back on an Orbitrap (Velos, I think).  I used a high-high mode (60k, 15k) which is relatively slow on that instrument in comparison to the QE or the Elite or Fusion.   I think this was for a limits of detection study or something.  It doesn't matter.  The experiment will illustrate the point.


I downloaded the IPI Human database and I set this to run over lunch.  Same file, same everything, all I changed was the database, IPI or Swissprot.  I didn't search with any mods except carbamidomethylation on C.

Remember:  Nothing else was changed:

Uniprot database: 2,940 proteins.
IPI database: 11,150 proteins.

Want the screenshots?  Email me.  Want the file and the XML copies of the methods?  You can have those as well.  Let me know.

Why the big difference?


For one, look at our IPI Human database: 50 MB, vs our Uniprot database at 13 MB.  Why so much bigger?


Cause the IPI database is full of putative crap.  Putative 22kDa protein?  Super useful, right?  This is why very few people use the IPI database.  The Uniprot/Swissprot database has real proteins with annotations that can help you arrive at a biological conclusion.  Could that 22kDa protein be super useful later?  Sure, but we have no idea what it is right now!
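If you want to put a number on the "putative crap," it only takes a few lines to count how many entries in a FASTA database carry putative-style descriptions.  A toy sketch with a made-up three-entry mini FASTA inline; in practice you'd read the real database file instead:

```python
def count_entries(fasta_text, keyword="putative"):
    """Count total FASTA entries and entries whose header line
    contains `keyword` (case-insensitive)."""
    headers = [line for line in fasta_text.splitlines() if line.startswith(">")]
    flagged = [h for h in headers if keyword.lower() in h.lower()]
    return len(headers), len(flagged)

# Made-up mini database for illustration only.
mini_fasta = """>IPI:IPI00000001 Putative 22 kDa protein
MKTAYIAKQR
>sp|P12345|ALBU_HUMAN Serum albumin
MKWVTFISLL
>IPI:IPI00000002 Putative uncharacterized protein
MSSHEGGKKK
"""
total, putative = count_entries(mini_fasta)
print(total, putative)  # 3 entries, 2 of them putative
```

Run that over the real IPI Human file and you'll see where a lot of those extra megabytes (and extra "protein" IDs) come from.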

Hope this helps clarify some things!


Monday, January 6, 2014

The Fourier transformation explained in one sentence!


Due to my second flight delay today I got through all the stuff I was supposed to read and got distracted again.

This handy simplification above came by way of @attilacsordas and makes me really happy.  The mysterious central equation in the most cited paper in history (is that still true?  I don't know) simplified into one sentence and color coded?  Fantastic!

I brazenly stole this from the Revolutions site (please don't sue me) here.  You should follow the link because it has more handy color-coded explanations of increasing complexity.
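In the same one-sentence spirit: for each frequency, spin your signal around a circle at that frequency and add up the points.  That really is the whole discrete Fourier transform, straight from the textbook definition (nothing instrument-specific here):

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform from the definition:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure cosine with 2 cycles over 8 samples...
signal = [math.cos(2 * math.pi * 2 * n / 8) for n in range(8)]
# ...shows up as spikes at frequency bin 2 (and its mirror image, bin 6).
magnitudes = [round(abs(X), 6) for X in dft(signal)]
print(magnitudes)
```

Same trick an FT-MS instrument pulls: the time-domain transient goes in, and the frequencies (hence m/z values) fall out as peaks.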

Data access in proteomics


I need access to RAW data.  Lots of it.  There are so many cool papers out there right now with new studies and new techniques, some of which are so fantastic that they defy my beliefs in the current capabilities of instruments and methods.  Unfortunately, I (and if I believe what I've been told out there, you as well!) have a lot of trouble getting ahold of these results.

It's weird.  It used to be easy.  You'd pull up a paper, go to the last page to figure out whether they put it on Tranche or ProteomeCommons or whatever, and download the data.  Super easy.  Check the RAW results, get the method they used for every file, and you could replicate the study lickety-split.


Then things changed.  The data files got too big.  The databases could no longer hold all the data that we generated.  The turning point?  February 22, 2011.  That was the day MCP stopped requiring that data be uploaded before considering an article for publication.

While this simplified the publication process for the authors, it sure made reproducing the studies out there a whole lot harder for everyone else.  Ultimately, this is slowing down the progress in the field.

Solution?  We need to require public deposition of proteomics data.  And I think we have the tools now.  The solution I propose?


I wrote about the Chorus project last summer as my favorite thing to come out of ASMS.  And this is exactly what I was thinking we need it for: a cloud-based (HUGE storage) site for uploading data.  And you don't have to download the data once it's there; you can search it, view it, and interrogate it all on the cloud.  So when the next amazing paper pops up demonstrating 2,800 proteins quantified in 10 minutes on an old triple quad, we can find out whether we really are looking at the next revolution in proteomics (or the other thing...) by seeing the data with our own eyes.

Currently the Chorus website states that the site is still in beta mode.  However, I talked to Nate Yates and Mike MacCoss, and that beta label should come off soon.  Let's all get behind Chorus and start moving forward!



Sunday, January 5, 2014

Can microflow outperform nanoflow in some applications?


I've made this argument before.  One day these mass spectrometers are going to get sensitive enough that we won't have to monkey around with nanoflow.  We'll be able to run microflow, get better and more reproducible chromatography, and live happily ever after.  Some groups believe these days are already here.  If you aren't sample limited, just inject more peptide, right?

This application note came by way of a LinkedIn feed (see, it is good for something!) and talks about the use and application of microflow rates in validation on a triple quad.  While the primary focus of the note is demonstrating the increased sensitivity of microflow over higher flow rates, it also brings up the limitations of nanoflow in terms of robustness and relatively high maintenance.  Absolutely worth a thought!

Thursday, January 2, 2014

Identification and validation of specific markers of Bacillus anthracis spores by proteomics and genomics approaches


First proteomics paper to read in 2014?  Looked around, then:


Decision made.  Proper on so many levels.

Anyway!  Bacillus anthracis is pretty much Bacillus cereus with an extra plasmid or two, depending on the strain and other complicated factors.  Problem is that we want to really identify anthracis and not identify extremely related (and extremely common) spore-forming organisms when we're assessing biological threats.  Of particular interest are the pXO1 and pXO2 plasmids, which can be indicative of super-virulent (or even militarized) strains.

So what can we do to tell extremely related things apart?  Crank up the resolution!  In this case, these researchers used an LTQ Orbitrap Discovery and looked for differences in a bottom-up approach between various strains of anthracis and cereus until they found a nice set of unique peptides belonging to the nasty strains.  Then they validated their new differential markers by moving their discovery results over to SRMs on a TSQ Quantum Ultra.  They show that even in extremely complex mixtures, their new markers are specific enough and sensitive enough to clearly differentiate virulent anthracis from its closest relatives.

Good paper for you microbiologists who are interested in proteomics, as well as a good example of moving your discovery results over to a standardized validation assay.  Also a great example of an Orbitrap Discovery out there and still doing some great science!

The paper is open access for the time being and you can find it here.



What the heck....?


If you happen to have popped over to MCP over the holidays, you might be surprised to see a drawing of a hippopotamus (not to be confused with a Hiphopopotamus) under some constellations.  A little investigation will lead you to a paper about the human Hippo pathway.  Unfortunately, the paper isn't open access, but it looks really good, and I think the authors must have a good sense of humor!

Wednesday, January 1, 2014

2013 Proteomics Year in Review


2013 is over finally!  I can't tell you how much I've been looking forward to saying that!  While I'm sitting here I figured I'd wrap up what I think was another awesome year for our field.  What were the big developments from the perspective of this biased guy?

When I start at the top, the first thing that really hits me is the native analysis of ovalbumin.  I'm sorry, I'm still floored by that one.  Just when we start to think that we've got a handle on all the awesome things that evolution has produced in biological systems, to think that a new tool can just blow the doors open and make us realize how little we really know?  Big favorite.

Quality control in proteomics!  This was a huge year for it.  Dr. Mechtler's lab gave us new free software for evaluating the QC of RAW data (SIMPATIQCO), MRM Proteomics released serum peptide standards, and people all over the place are using the PRTC standard.  I expect and hope that this is just the tip of the iceberg for QC!  Let's all do it!

2013 was also the launch for the Orbitrap Fusion, something I think is best highlighted by the Coon lab one hour yeast proteome paper.  Gosh damn that thing is fast.  I can't wait to see what people do with it this year now that there are a bunch of them out there in good hands!

Another star of 2013 has been the TMT 10-plex.  Isobaric tagging with no loss in coverage?  It is so great to see reagents evolving along with our instruments.  If I haven't said this enough:  Are you running iTRAQ 8-plex?  Do yourself a favor, call a sales rep, and get a trial of the TMT 10-plex.  It's been almost 6 months and I still haven't talked to a single person who tried the 10-plex and didn't stop doing iTRAQ 8-plex.  Night and day....

The completion of the SRM yeast library is a pretty big deal.  This opens up the real option of doing DIA on yeast!  With more libraries on the way (and easier ways of making them), I think we're going to start seeing spectral library searches and targeted quan moving closer to center stage.

I know I'm skipping over a bunch of things.  This year was huge for all of us.  And I may add things as they pop into my head.  End thought, though:  DMSO!   Holy cow!  I love this one!  What other awesome stuff is right underneath our noses?

I'm going to post this now.  And I'm real excited for what y'all are going to do in 2014!  If you can't tell, I'm a big big fan of this field and I can't wait for the next paper that totally blows my mind!