Tuesday, January 29, 2013
Optimizing your nanoLC conditions part 3: How many full scans do you need?
This is part 3 of this week's monologue on optimizing our nanoLC conditions. BTW, it seems like the title is evolving...
Anyway, this one is going to deal with matching our sample (and what we want out of it) to our nanoLC and MS/MS settings.
As I said in part 2, we can go one of two ways -- we can optimize our LC gradient to match our MS/MS settings, or we can go the other direction. Here are the important questions to ask:
1) How complex is the sample?
2) What is more important right now, run time or sample depth?
3) How many MS1 scans do I need?
1) How complex is the sample? Is it a gel spot? An old(ish) paper estimated that each gel spot from a human sample contains, on average, 5 proteins. That's a really simple sample by today's standards. If you are looking at gel spots, run fast LC with short columns and short cycle times. You'll be fine.
If you are running a whole proteome, which some estimates put at 1,000,000 (1 million! At least for human) you don't want to follow this same plan. Important note: If these estimates are correct, the most extensive study of human proteomics published so far found peptides that belonged to less than 5% of the total proteins present. Every global proteomics study of a complex organism is going to be a small snapshot of the proteins in the cell and what they are doing. Leading into...
2) What is more important to you right now -- the amount of time you put in for each sample, or the total depth of coverage for each sample? I have friends who do beautifully reproducible studies of patient proteomes and reliably get 3,000 quantifiable proteins for each patient in 4-6 hours of run time. They made the decision that this was deep enough for what their facility is funded to do. Another group I am working with has truly unique human samples that are probably the key to the malaria vaccine. They may separate a single patient's blood into 144 or more fractions and take months of run time, because the depth of their data is far more important than the time. Anything you decide for yourself is going to be a compromise.
3) How many MS1 scans do you need? This gets us (finally!) to the sketch at the top of this entry. Keep in mind that on just about every instrument, the MS1 scan takes the longest, particularly if you want your MS1 scan to be the best quality. It is important to get some MS1 scans, but how many?
This is my opinion, take it or leave it: if I am doing label-free quan, I want to have 10 MS1 scans over my average peak. If I am doing SILAC, I shoot for 4 to 6. If I am doing reporter ion quan, I want as few as possible! With reporter ions there is no quan data in the MS1 -- it is all in the MS2, which also contains the sequencing information. So the MS1 is only useful for selecting ions for MS/MS.
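To put a number on that target, the rule of thumb is just peak width divided by the number of MS1 scans you want. Here's a minimal back-of-the-envelope sketch in Python -- the 30 second peak width and the scans-per-peak values are assumptions for illustration, not measurements from any particular system:

# Back-of-the-envelope math: the longest duty cycle that still gives the
# MS1 sampling you want across an average chromatographic peak.
# All numbers here are illustrative assumptions, not measured values.

def max_cycle_time(peak_width_s, ms1_scans_per_peak):
    """Longest allowable cycle time (s) to get the target number of
    MS1 scans across a peak of the given width (s)."""
    return peak_width_s / ms1_scans_per_peak

peak_width = 30.0  # assume ~30 s wide peaks on an average nanoLC gradient

targets = [("label-free quan", 10), ("SILAC", 5), ("reporter ion quan", 2)]
for strategy, scans in targets:
    print(f"{strategy}: cycle time must be <= "
          f"{max_cycle_time(peak_width, scans):.1f} s for {scans} MS1 scans/peak")

(For the reporter ion case I plugged in 2 just to have a number; "as few as possible" is the real answer.)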
Not too long ago, I wrote something about cycle time calculations using a Q Exactive as an example. I also made some estimates of the cycle times of the other hybrid instruments (before I worked for my current employer, and I've never rechecked those numbers). So I won't bore you with those details again, but it will give you a feel for how I think about this.
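Without rehashing that whole post, here is the basic shape of the arithmetic. The scan times below are placeholder assumptions (real numbers depend on resolution, fill times, and how well the instrument parallelizes things), so treat this as a sketch of the logic, not instrument specs:

# Rough duty-cycle arithmetic: one MS1 scan followed by N data-dependent
# MS2 scans per cycle. The per-scan times below are placeholders -- real
# values depend on resolution, AGC/fill times, and parallelization, so
# the output is ballpark only.

def cycle_time(ms1_s, ms2_s, top_n):
    """Approximate seconds for one full MS1 + TopN MS2 cycle."""
    return ms1_s + top_n * ms2_s

def ms1_scans_per_peak(peak_width_s, cycle_s):
    """Roughly how many MS1 scans land across a peak of the given width."""
    return peak_width_s / cycle_s

ms1, ms2 = 0.25, 0.12   # assumed scan times in seconds
peak_width = 30.0       # assumed average peak width in seconds

for top_n in (10, 15, 20):
    c = cycle_time(ms1, ms2, top_n)
    print(f"Top{top_n}: ~{c:.2f} s per cycle, "
          f"~{ms1_scans_per_peak(peak_width, c):.0f} MS1 scans per peak")

Compare that output against the targets above and you can see right away whether a given TopN leaves you enough MS1 sampling or not.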
What's even better than thinking about it? Doing the experiment! This is the way I really do it (or did, back when I was running lots of samples): I look at how many samples are on their way and I decide on a run time that makes sense. My go-to gradients for generic sample types are: 80 minutes for a gel spot, 140 minutes for a gel band or OFF-GEL fraction, and 240 minutes for a pull-down, bacterial proteome, or a survey study of a mammalian proteome. If all that is coming that week is 20 gel bands, I might run a 160-minute gradient just to squeeze some extra data out.
When you get the samples, make a test run. Take a small aliquot of one of the samples (or something representative) and run it using your base method. When it is finished, look at the resulting RAW file. If you are using a Thermo instrument, don't even look at it -- just drop it into the RAW Meat program from Vast Scientific.
RAW Meat does a lot of great things -- probably another entry for later. The important feature here is TopN spacing, which tells you how often you hit your full Top N. For example, look at the picture below:
This run used a Top10 method. In almost every cycle, the Orbitrap selected the full 10 ions for fragmentation, suggesting that there is a whole lot more in there to fragment and that we're only scratching the surface.
Now, we could lengthen our gradient to improve our chromatography, or we could increase our TopN. In this case, we raised the method to a Top20.
Look at the improvement! Yes, we're still hitting the maximum number of fragmentations as the most common event, but it isn't the only event. And in this particular case, we nearly doubled the number of MS/MS events -- giving us more peptide IDs in the same length of time.
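If you aren't on a Thermo instrument, or you just want to pull the same numbers yourself, you can get the TopN spacing information from any mzML file by counting how many MS2 scans fall between consecutive MS1 scans. Here's a rough sketch using the pyteomics library; the filename is made up, and it assumes you've converted the run to mzML (with msconvert, for example):

# Count how many MS2 scans follow each MS1 scan across a run -- roughly
# the same information as RAW Meat's TopN spacing plot. Assumes the RAW
# file has been converted to mzML (e.g. with msconvert) and that the
# pyteomics package is installed. "my_test_run.mzML" is a made-up filename.
from collections import Counter
from pyteomics import mzml

def topn_spacing(mzml_path):
    """Histogram of MS2-scans-per-cycle for the whole run."""
    counts = Counter()
    ms2_in_cycle = 0
    seen_ms1 = False
    with mzml.read(mzml_path) as reader:
        for spectrum in reader:
            if spectrum["ms level"] == 1:
                if seen_ms1:
                    counts[ms2_in_cycle] += 1
                seen_ms1 = True
                ms2_in_cycle = 0
            else:
                ms2_in_cycle += 1
    if seen_ms1:
        counts[ms2_in_cycle] += 1  # close out the last cycle
    return counts

if __name__ == "__main__":
    for n, times in sorted(topn_spacing("my_test_run.mzML").items()):
        print(f"{n:2d} MS2 scans in a cycle: {times} cycles")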
In part 4, I'll finally get back around to column lengths -- I swear, there is a point to all of this!
On to part 4!