Wednesday, February 27, 2013

Do you need nanospray to do proteomics?


I've been doing MS for quite a while.  Nanospray was definitely around when I started, but it wasn't the way that most labs did things.  I toyed around with it a little in grad school, hated the existing technology at the time, and didn't do it again until my second postdoc.  I still kind of hated it.  Microflow electrospray has so many advantages:  the pumps are more precise, they are easier to set up and troubleshoot, they degas themselves, they clog less often, and so on.
Nanospray, however, has the distinct advantage of a 1 to 2 order of magnitude increase in sensitivity, so we have to use it, right?
What if I said that I'm starting to see labs that are rejecting nanospray, going back to microflow electrospray for proteomics, and generating very nice results?  Here is the argument:  when the first nanoflow experiments were being routinely done, say 10 years ago, nanoflow was a tremendous jump forward for those relatively insensitive instruments.  The increase in analyzer sensitivity on current instruments is far more than 2 orders of magnitude.  Is it 3, 4, or 5?  In any case, by this line of thinking, the advantages of microflow may outweigh the sensitivity penalty for some experiments.
The image above is from one such experiment.  In this experiment, a 3 uL injection of a BSA standard digest at a concentration of 2 pmol/uL was run at 200 uL/min on a 15 cm C-18 column.  It is difficult to see from the screenshot, but the gradient was 34 minutes in total.  The base peak intensity is roughly 2E8.
The punch line?  This digest was run on an Orbitrap XL that could have been further optimized for this experiment.  Yes, the injection is pretty high by most standards, but imagine this same injection on a fully optimized Orbitrap Elite or Q Exactive with a slightly slower gradient.  We would easily be seeing a base peak of 1E9 due to the relative increase in sensitivity, and I expect we could drop this injection 1,000-fold and still fully resolve this protein.  Hypothetically, this doesn't sound like we'd be that far off from a normal experiment now, does it?
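Just to keep the back-of-the-envelope numbers straight, here is a rough sketch of that scaling argument in Python.  The 5x gain for a newer instrument and the 1,000-fold load reduction are my own assumptions from the paragraph above, not measured values.

```python
# Rough back-of-the-envelope math for the microflow argument above.
# All numbers are assumptions taken from the text, not measurements.

injection_volume_uL = 3.0        # 3 uL injection
concentration_pmol_per_uL = 2.0  # 2 pmol/uL BSA digest
on_column_load_pmol = injection_volume_uL * concentration_pmol_per_uL
print(f"On-column load: {on_column_load_pmol:.0f} pmol BSA")   # 6 pmol

base_peak_XL = 2e8                 # observed base peak on the Orbitrap XL
assumed_gain_newer_instrument = 5  # guess: Elite/QE plus a slower gradient, ~5x
print(f"Projected base peak: {base_peak_XL * assumed_gain_newer_instrument:.1e}")  # ~1e9

load_reduction = 1000              # hypothetical 1,000-fold drop in load
print(f"Reduced load: {on_column_load_pmol / load_reduction * 1000:.0f} fmol")     # 6 fmol
```

Six femtomoles of a single-protein digest at microflow is not an outrageous experiment at all, which is the whole point.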
Just a thought, but I'm excited to see what will come of these labs that are switching over to higher flow rates.

Friday, February 15, 2013

First step in the human proteome project


Detailed in this month's JPR is the first step in the HUPO Human Proteome Project initiative.  This extremely interesting paper from Gyorgy Marko-Varga, Gilbert S. Omenn, Young-Ki Paik and William S. Hancock describes the layout of the chromosome-centric steps of this project.
The paper also has one of my favorite figures of this year (stolen blatantly from theHpp.org).

Unlike the figure above, the one appearing in the paper is completely filled out.  I LOVE that the chromosomes were divided between so many collaborating labs in so many nations.  Science, the great unifier!
With the completion of this project (projected for 2022), I may have to back off my criticism of certain semi-targeted techniques, such as DIA.  If we truly have a library of the proteins present, these technologies start to make a lot more sense.
This paper is a short lunch break read.  I highly recommend that you read it in order to see where our field is going!

Wednesday, February 13, 2013

ProtMax -- automated spectral counting!


ProtMax was mentioned in a new paper this month in MCP, and Google helped me find it.  It is a really quick, simple, and extremely useful application provided by the University of Vienna and written by Dr. V. Egelhofer.
You upload a processed mzXML file and it automatically gives you quan data by spectral counting.  It can take retention time into account and link your spectral counts to your proteins.  You can also make it search a specific target list rather than the full scan data.  The site also lets you download test data to verify that your formats are correct.  Hopefully there will be instructions soon for this handy little tool!
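I haven't seen any documentation for ProtMax yet, so I can't speak to its actual algorithm, but the core idea of spectral counting is simple enough to sketch.  This toy example uses made-up PSM records (not ProtMax's input format) and just tallies confident MS/MS matches per protein:

```python
from collections import Counter

# Toy peptide-spectrum matches: (protein accession, peptide, search score).
# These records are invented for illustration, not ProtMax input.
psms = [
    ("P02769", "LVNELTEFAK", 55.2),
    ("P02769", "YLYEIAR", 48.7),
    ("P02769", "LVNELTEFAK", 51.0),
    ("Q9XYZ1", "SOMEPEPTIDEK", 40.3),
]

score_cutoff = 30.0  # keep only confident identifications

# Spectral count = number of MS/MS spectra confidently matched to each protein
counts = Counter(protein for protein, peptide, score in psms if score >= score_cutoff)

for protein, count in counts.most_common():
    print(protein, count)
```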

Tuesday, February 12, 2013

Tandem metal oxide enrichment for phosphoproteomics


"Arabidopsis", a word that strikes boredom into the hearts of so many students throughout the world! Mostly kidding!  I know how important it is to have good model organisms.
As proof, look at this great new paper in MCP this month from Hoehenwarter et al.  In this study they used a tandem metal oxide enrichment procedure to do a bang-up job characterizing the MAP kinase substrates in this silly little plant.  I didn't even know plants used MAP kinases, which form one of the most central signaling pathways in cancer and most diseases.
In the study they used aluminum oxide to enrich intact phosphoproteins, digested them, then enriched the phosphopeptides with titanium dioxide.  The resulting doubly enriched phosphopeptides were analyzed on an Orbitrap XL.  On this super-enriched sample, they even got quantification data by using spectral counting in Proteome Discoverer and a new (to me, and apparently not-yet-published) software package called ProtMax.  Of course, I can't wait to investigate this!

Monday, February 11, 2013

Is iTRAQ 8-plex ruining your experiments?


I have done a lot of iTRAQ.  In my last job we used the iTRAQ 8-plex kits as our most common experiment.  Imagine my surprise when I was sitting in a presentation where the weaknesses of the iTRAQ 8-plex kit appeared to be a given!
While I'm still waiting on further clarification and possibly a follow-up meeting with the technical experts who assembled this publication, I'm doing some digging.  And what I've come up with is that in the last 9 months or so the iTRAQ 8-plex kit has become one of the most controversial reagents in our field.
There have been some extremely critical papers recently, such as this one from Evans et al., where they show that iTRAQ consistently underestimates the ratio fold change.  A discussion of the paper at Shared Proteomics suggests that this may have been an issue with the HCD fragmentation on the Orbitrap XL.
The guys on the forum also brought up this paper from Pottiez et al., which appears to completely refute these findings.  Using a MALDI-TOF/TOF and experimentally known ratios, they show that the iTRAQ 8-plex is considerably better than the 4-plex kit.
Further searching led me to this presentation from Matrix Science (MASCOT), presented at ASMS 2010, that also delves into the under-representation of iTRAQ ratios described in this 2009 paper from Yen Ow et al., which has a pretty shocking illustration on the front (below).


When I moved into iTRAQ after years of label free and SILAC studies, I read a lot of iTRAQ papers.  The fact is that it is such a popular technology that there are a staggering number of them out there, and it is impossible to read them all.  This is how I interpret everything I've been reading over the last week:  every technology is going to have some controversy around it, but if people are still getting good results from a system, then we need to roll with it until something better comes around.  I think the data out there arguing over which kit is better for iTRAQ essentially cancels itself out; 4-plex and 8-plex are each going to have their distinct advantages and disadvantages.  As for the ratio suppression issue in iTRAQ, we aren't using those ratios as a definitive filter.  When we see a 2.3-fold up-regulation we don't really, literally, believe that protein is up-regulated exactly 2.3-fold, or that it is an inferior measurement to another protein that is up 2.8-fold.  We know that in quantitative proteomics we have a huge error bar on every measurement of a complex digest.  If we are using iTRAQ to implicate pathways and take those ratios with a "grain of salt," as I think we all are doing, 8-plex iTRAQ is currently the fastest and best way to screen multiple samples at once.
Now if something better comes around, it comes around, but for now, we're going to keep running our 8-plex iTRAQ data.
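For anyone who hasn't seen the compression effect written out, here is a toy illustration of one commonly proposed mechanism: co-isolated background peptides at roughly 1:1 dilute the reporter ions of the peptide you actually care about.  The 30% interference figure is purely an assumption for illustration, not a number from any of the papers above.

```python
# Toy illustration of iTRAQ ratio compression from co-isolation interference.
# Assumes 30% of the reporter signal comes from co-isolated ~1:1 background;
# that fraction is made up for illustration, not taken from any publication.

true_ratio = 4.0          # true fold change between two channels
interference = 0.30       # fraction of reporter signal from background peptides

# Normalize the peptide of interest so its two channel intensities sum to 1,
# then add the 1:1 background contribution to each channel.
signal_A = (1 - interference) * true_ratio / (true_ratio + 1) + interference * 0.5
signal_B = (1 - interference) * 1.0 / (true_ratio + 1) + interference * 0.5

observed_ratio = signal_A / signal_B
print(f"True ratio: {true_ratio:.1f}, observed ratio: {observed_ratio:.2f}")
# The observed ratio is pulled toward 1:1, i.e. the fold change is underestimated.
```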



Thursday, February 7, 2013

TraceFinder blog


This is a little off the topic of proteomics, but still on the topic of mass spectrometry, specifically for people interested in small molecule quan:  The TraceFinder blog is packed completely full of information on this software, including new updates, video tutorials and, my favorite, the "Ask a Guru" option.  It would be hard to find a software package that is being supported as fully as TraceFinder.

Wednesday, February 6, 2013

HighPoint RocketCache -- accelerate your database searching


I was on a plane reading a lackluster review in Maximum PC magazine about the HighPoint RocketCache and going a little out of my mind.
Let me sum it up real quick:  the RocketCache allows you to connect a large (slow) spinning drive to a significantly faster solid state drive.  The solid state drive is then used as a cache to speed up the transfer rates of data to and from the big spinning drive.
From the reviewers: "As an example, when we ran HD Tune on the one-SSD-plus-1TB combo, we initially saw the drive hit 107MB/s sequential read speeds (the same score it hit on its own), then 169MB/s on the next run, then 194MB/s, and on it went all the way up to 242MB/s."  More than double the speed.

The review was lackluster because Maximum PC is primarily for hobbyists who are building or modifying their own PCs for gaming performance.  There aren't too many times in games where that kind of increase in hard drive read or write speed is going to make a difference.

Where will it make a huge difference?  Database searching!!!  If you are running multithreaded processing on a modern processor with 4 or more cores, the limiting factor is almost always your hard drive speed.  Slap in a solid state drive and you will see a bigger increase in speed than from almost any other modification.  But big solid state drives are expensive, and there has been limited testing of how they hold up under prolonged heavy usage.  And if you only use a small SSD while you are processing, you have to constantly transfer data back and forth between the two drives.

But if you RocketCache it, you get the large cheap storage of the spinning drive, the speed and price of a small SSD, and no hassle of constantly transferring data.
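If you want a rough sense of whether your own search box is actually disk-bound before you spend any money, a quick and dirty sequential-read check is easy to throw together.  This is only a crude sketch, nothing like a real benchmark such as HD Tune, and the file path is a placeholder for a large file of your own:

```python
import time

# Crude sequential-read throughput check.  Replace the path with any large
# file (e.g. a raw file) sitting on the drive you want to test.  Note that
# the OS page cache will inflate repeated runs on the same file.
path = "large_test_file.raw"   # placeholder path, not a real file name
chunk_size = 8 * 1024 * 1024   # read in 8 MB chunks

start = time.time()
total_bytes = 0
with open(path, "rb") as f:
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.time() - start

print(f"Read {total_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
      f"({total_bytes / 1e6 / elapsed:.0f} MB/s)")
```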

BTW, I will own this soon.  It currently runs $155 at Amazon and NewEgg.

Tuesday, February 5, 2013

Cancer signaling pathway videos from MIT


This link came to me by way of the LinkedIn Biological Mass Spec forum.  It is a series of videos on cancer signaling pathways that were delivered at the Koch Institute.  If this is your kind of thing, I definitely suggest that you check them out.  The link is here.  And if you're on LinkedIn, you should check out the BMSF!

Monday, February 4, 2013

Printable mass table from PEAKS


I just ran across this handy tool from the nice people at PEAKS.  It is a printable mass table that has forward and reverse masses for individual and multiple amino acids.  It is a useful reference that I just placed above the monitor in my office.  You can find it here.
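And if you would rather compute the numbers than read them off the wall, the same forward (b-ion) and reverse (y-ion) sums are easy to generate yourself.  Here is a minimal sketch using monoisotopic residue masses; only a handful of residues are included to keep it short:

```python
# Minimal b/y fragment mass calculator -- the same forward/reverse sums the
# PEAKS table gives you.  Only a few monoisotopic residue masses are listed.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111,
    "E": 129.04259, "F": 147.06841, "T": 101.04768, "N": 114.04293,
}
PROTON = 1.007276
WATER = 18.010565

def b_and_y_ions(peptide):
    """Return singly charged b- and y-ion m/z values for a peptide."""
    b_ions, y_ions = [], []
    for i in range(1, len(peptide)):
        b_mass = sum(RESIDUE_MASS[aa] for aa in peptide[:i]) + PROTON
        y_mass = sum(RESIDUE_MASS[aa] for aa in peptide[i:]) + WATER + PROTON
        b_ions.append(round(b_mass, 4))
        y_ions.append(round(y_mass, 4))
    return b_ions, y_ions

print(b_and_y_ions("LVNELTEFAK"))  # a common BSA tryptic peptide
```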

Get your publication reviewed here!


Crazy pre-coffee idea I just came up with.  I currently have a little more difficulty accessing the literature than I once did.  Obviously, I can get it, but I have to request full text one at a time from the library system that we use.  Here is my idea:  what if my readers started suggesting papers?  If you have a new paper of your own, or one that you simply find really interesting, send it to me!  You can forward me the URL of the abstract and I'll get our library to send it to me.  That way you can help me filter.  I request a lot of articles and often don't review very many of them because I feel they kind of suck.  If I can't blog something nice, I try not to blog anything at all.

Sunday, February 3, 2013

Proteomics for Biomarker Discovery Amazon Presale


This is pretty cool.  Tim Veenstra and Ming Zhou's new methods compilation text is now available via Amazon through a special pre-sale offer.  If you pre-order it now, the book is $101.67, which is crazy cheap for a Springer Protocols text.  They also guarantee that if the price changes at any time before the release, then you will get it for the lowest price.  If the title isn't enough to make you buy it, the book also includes the full method for my three stage enrichment process for global phosphoproteomics.  Yes, this is a shameless plug.

Friday, February 1, 2013

Optimizing your nanoLC conditions part 4: The effects of a longer column

This is the 4th part of my somewhat convoluted monologue on optimizing your nanoLC gradient.
You can read
part 1,
part 2, and
part 3 here.
Again, I'm not a huge chromatography expert.  I remember some things from college about theoretical plates and column loading, but not well enough to embarrass myself describing them here.  What I do know is that the image above, taken from this Dionex app note, very accurately reflects what I've seen when experimenting with column lengths.  In this experiment, they used three column lengths with the same flow rate and the same injection volume.

The important part to me, from the MS side of things, is the peak height.  As the column length increases, the peak height does as well.  You can't see it in the above screenshot, but we can reason it out: if we loaded the same amount onto the column and the peak height increased, then the peak width had to decrease (a quick sanity check of that argument is sketched below).  Decreased peak width means increased chromatographic resolution, less coelution, and more results from your MS/MS, and this is with the same amount of material.  The real way to jack up your results is to increase the chromatographic resolution AND your peak intensity AND your gradient length.

You can read about an old comparison of a 10 cm column with a 140 min gradient and a 30 cm column with a 240 min gradient here.  In those limited experiments, I found that I could save time AND increase my coverage by using a longer column/gradient combination.  Again, you have to take a good hard look at the requirements of your lab and the study in question, but I hope this helps get you started.  As always, if I can answer further questions or if you have suggestions, don't hesitate to contact me or post questions here and I'll do what I can.
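For the curious, the "same area, taller peak means narrower peak" argument is easy to convince yourself of with a Gaussian peak model.  The numbers below are invented for illustration and are not taken from the app note:

```python
import math

# Sanity check of the area/height/width argument with a Gaussian peak model.
# Peak area = height * sigma * sqrt(2*pi); if the area (amount loaded) stays
# constant, the width must shrink in proportion to any gain in height.
area = 1.0e6                            # arbitrary units, same load every time
for height in (1.0e5, 2.0e5, 4.0e5):    # hypothetical peak heights
    sigma = area / (height * math.sqrt(2 * math.pi))
    fwhm = 2.3548 * sigma               # full width at half maximum
    print(f"height {height:.1e} -> FWHM {fwhm:.2f} (arbitrary time units)")
```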