Sunday, January 30, 2022

Proteomics of opportunity -- the myotendinous junction proteome!

 


A lot of science is about opportunity. One reason that I really really wanted to be where I am now is because there appears to be opportunity all over. There are institutions devoted to rare diseases all over this campus and places where maybe even a mediocre mass spectrometrist can do something good for the world if he/she tries really hard. 

I found this new study inspirational because this group had an opportunity and seized it fully. They took the tiny bits of material that are removed during common ACL corrective surgery and produced a complete proteome of something non-MDs probably didn't know existed.


So many sections and subsections and subtypes of subsections inside of us! As an aside, if you think that anatomy is super well understood: I ran samples a while back for a guy whose team discovered a new human organ that no one knew existed. No joke. It was a really big deal, even in mainstream media....and...well...they brought me some mouse brains...not the thing that is rapidly being drawn into anatomy textbooks with a sharpie. Meh.

Whatever this myotendinous thing is, it sounds like an absolute joy to work with, as the team details how they had to crank the SDS concentration up 4x over other muscle tissues to get it to solubilize. They use an HF-X system and MaxQuant/Perseus for the data analysis. Then they do stellar immunofluorescence to just drive home the point that this region is unique and that when we homogenize even tiny bits of tissue we're probably muddying tons of cool and unique biology.

If you get an opportunity for access to something cool and unstudied, this is a great example of how to use it to bring science (and, in this case, medicine) forward! 

Saturday, January 29, 2022

Needler -- Build an MRM for any/every human peptide?

 

I find this new preprint just a bit hard to follow, but I still think it is worth thinking about


What they did was take a sideways look at ProteomeTools and PROSIT and the healthy human tissue samples studied by some of the authors of those same tools. The question is whether you could leverage those materials to build SRMs (MRMs, which are the same thing, depending on what instrument you use) without ever having to buy standards and build a curve yourself. (Maybe that wasn't exactly the goal, but by the time my peptide standards come in I've often forgotten why I ordered them in the first place, so I'm going to assume that was the goal.)

PROSIT does predict retention times, which is pretty important for SRMs and critical if you really want to target whatever you want from purely in silico method building. 
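Here's a minimal Python sketch of what "in silico SRM building" boils down to: take predicted fragment intensities and a predicted retention time and spit out a scheduled transition list. Every value below (peptide, fragments, RT) is invented for illustration; real numbers would come from PROSIT, and this is emphatically not Needler's actual code.

```python
# Toy sketch only -- not Needler. Fragment m/z and intensities would come
# from PROSIT predictions; everything here is made up for illustration.

def build_transitions(peptide, precursor_mz, predicted_fragments,
                      predicted_rt_min, rt_window_min=2.0, top_n=3):
    """Keep the top_n most intense predicted fragments as transitions,
    scheduled in a window around the predicted retention time."""
    ranked = sorted(predicted_fragments, key=lambda f: f["intensity"], reverse=True)
    return [{
        "peptide": peptide,
        "q1": precursor_mz,
        "q3": frag["mz"],
        "rt_start": predicted_rt_min - rt_window_min / 2,
        "rt_end": predicted_rt_min + rt_window_min / 2,
    } for frag in ranked[:top_n]]

# Invented example values
fragments = [
    {"ion": "y4", "mz": 456.23, "intensity": 1.00},
    {"ion": "y5", "mz": 569.31, "intensity": 0.62},
    {"ion": "y6", "mz": 668.38, "intensity": 0.48},
    {"ion": "b3", "mz": 301.15, "intensity": 0.15},
]
for t in build_transitions("LGEYGFQNALIVR", 740.40, fragments, 52.3):
    print(t)
```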

This is a lofty goal, and one that might be hard to explain to people, because the results are where I get fuzzy. Again, totally worth thinking about, though, because how cool would it be to just have -- POW -- assay designed --> run it!

You can check out Needler on Github here

Friday, January 28, 2022

WinO -- Prepping proteomic samples in bubbles in oil, cause why not!


Have you ever read a study and been both impressed by the ingenuity and quality of the work while simultaneously hating every well-written word and beautiful picture in it?  

I love WinO because it looks like a well-executed and extremely clever study. I also hate it because [excessive angry profanities redacted after I realized that when I type profanities on this admittedly loud keyboard it scares the raccoon family that has set up shop in my attic. Turns out I haven't been hallucinating stomping in my attic, I just saw a tiny clumsy raccoon! SUPER cute, but I imagine having them up there is subideal for at least a couple reasons....] I thought that maybe, just maybe, at 17,832 variations of how to prep mammalian cells for LCMS-based proteomics we'd finally gotten them all, and we surely didn't need another way to introduce new inter-/intra-lab variability into our experiments, but here's another one!


The illustration above pretty much explains the process. The aqueous suspension of cells and trypsin all conglomerate in the oil to get everything together so they can hang out. I'm fuzzy on how you get rid of all of the oil before LCMS. 

The results are seriously impressive. They titrate down the number of cells they use by FACS-based cell sorting and look at them label-free with scanning SWATH and with multiplexing on a Tribrid Orbitrap. Is it really smart and innovative? Yes.

Is it a beautiful, clear, well-executed study with remarkably well-displayed results? Also yes.

Is it a little frustrating that we have another sample prep method to keep track of that will undoubtedly produce a different distribution of detected proteins than the exact same cells prepped with another method? And will this method undoubtedly, at some point, undermine someone's credibility because lab A will use this method and lab B will do something else and the same data will go out to the same person and that person will be very confused about what is going on? 

Of course it is, but that shouldn't reflect on this study in any way because that is what LCMS proteomics has always done.

Am I going to be a vulgar hypocrite and try this method? Fuck yes I am. I ordered oil yesterday to set up some collaborator's samples right now. This is the best way of doing things, because I obviously didn't prep their cells like this last time, I didn't even know it was possible, so I don't even give any of you the chance to undermine me and my results, I'll do it myself.

After I try to get a picture of a baby raccoon. Dude was trying to get water out of our frozen pond, I think, so I'll need to put out a heated water bowl or something. 

Thursday, January 27, 2022

MetaNetwork -- WGCNA for proteomics with no R or Perseus!

 


I give MaxQuant and Perseus a tough time, and will probably continue to for the duration, despite the fact that -- if you can get them running and keep them running -- and are patient (for MaxQuant; I've never waited for Perseus to do anything, it's fast!) the data they produce is totally legit. How it gets there? Fuzzy on that, but this summer I added the ultimate prize to my pile of Perseus installations (each labeled by which features do and don't work): uMAP, tSNE, and WGCNA


If you do want to know what the first two are, this guy walks you through them slowly using some simple geometric models and other tools. To get a version of Perseus functional enough to use them, you'll piece together clues from different clips of YouTube videos from MaxQuant summer schools over the last 4 years, installing plugins and R packages and making everything work together. WGCNA is powerful enough to draw conclusions from data as lousy as microarrays, so imagine what it can do with your well-controlled proteomics data!
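If you're curious what WGCNA actually does under the hood, the core idea is small enough to fit in a toy Python sketch: correlate every protein against every other protein across samples, then raise |r| to a soft-threshold power so weak edges fall away. This is random data and my simplification -- the real R package (and the Perseus plugin) does much more, like topological overlap and module detection.

```python
# Toy WGCNA core: weighted co-expression network from an abundance matrix.
import numpy as np

rng = np.random.default_rng(0)
abundance = rng.normal(size=(500, 12))    # 500 proteins x 12 samples (random)

corr = np.corrcoef(abundance)             # protein-protein correlations
beta = 6                                  # a typical soft-threshold power
adjacency = np.abs(corr) ** beta          # weak correlations shrink toward 0

# Connectivity = sum of each protein's edge weights; module "hub" proteins
# are the high-connectivity ones.
connectivity = adjacency.sum(axis=1) - 1  # subtract the self-correlation of 1
print(connectivity[:5])
```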

What was I typing....oh yeah! MetaNetwork!
What if you could have clustering and enrichment power from people who make really straight-forward and powerful graphic user interfaces? 


I gotta go and Docker is reminding me that the way they do free vs. nonfree changed in 2022 and I don't want to read all the things it says, so I can't run stuff through this one myself, but you can get MetaNetwork on Github here!



Wednesday, January 26, 2022

Why is there no correlation between transcript and protein abundance?

 

While it's common in proteomics for us to casually throw around the fact that protein and transcript abundances don't correlate, this hasn't exactly replaced the central dogma in all the textbooks yet. 

The figure above is from this study that maybe convinced some people during my postdoc that I wasn't just a moron.  I was, at the very least, a moron who could demonstrate that other people couldn't get their RNA microarrays and SILAC proteomics on the same cell lines to make a nice correlation line. 

A and B are different ways of looking at nucleotide abundance (oligo sequence vs. the complementary sequence) vs. protein abundance. D) is just random. So...protein abundance is like you took random and shifted it to the side a little. (It's not quite that bad, but at this point in time imagine that I was reviewed each year to see if my fellowship funding got renewed -- and news that I was a moron had gotten around. I heard things like "We spent $1M on this giant noisy Orbitrap and he can't even replicate the data from this $800 microarray with it." I needed someone to sign off on my $35,000 and, while the Burger King across the street was advertising a higher salary, I still had these "ideals" and wasn't ready to sell my soul to a corporation just yet. I needed to show this to anyone who would still make eye contact with me.)

Okay, so we know it doesn't correlate. Maybe biologists aren't learning it yet, but why? 

Worth noting, this team took a whack at it, and although part of the error was clearly the spectral counting used in a lot of studies (not my SILAC ones), it still looked bad even with better methods for LFQ


Okay, but again, I was going for "why" and got distracted. This is my favorite!  


This was 2016 and I don't know for sure if everyone in proteomics had gotten to the complete point of agreement on this RNA-protein mismatch thing. So this study still had to tiptoe in and really start with the technical variability problems in RNA and in proteomics data (where we're still worse than RNASeq was then, but we're getting better, I think!) but then it went right into what matches and what doesn't!

What matches mostly? 

Okay. That sort of makes sense. Cells sometimes just hang out not really doing anything but metabolizing and their normal jobs. 

What mismatches and why? 

Wait. Translation isn't always instantaneous and it is regulated by complex feedback loops that don't always make sense to simply have running at full tilt? Weird. 


Protein doesn't just magically disappear after it is made. Controlled and amazingly regulated mechanisms degrade it. Sometimes it makes more sense to just degrade all the protein that you have around faster than you normally would. And the amount of messenger RNA for that protein will give you about zero direct info on how fast that protein is getting degraded.

Not all proteins "cost" the same amount to make. This review introduced me (maybe a lot of people) to thinking of intracellular environments in sort of an economic model. In my head this is the easiest example: TITIN is like 30,000 amino acids. Transcription and translation, even at the rocket pace they move at, have to be just a little bit different in their costs (and relative speeds) than for making a 120 amino acid histone protein (which is probably the worst example I could come up with, but in terms of size still makes sense).
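A quick back-of-the-envelope makes the point. The ~5 amino acids/second elongation rate below is a rough textbook ballpark for mammalian ribosomes (my number, not the review's):

```python
# Back-of-the-envelope: translation "cost" scales hard with protein length.
ELONGATION_RATE = 5  # amino acids/second, rough mammalian ballpark

for protein, length in [("titin (~30,000 aa)", 30_000),
                        ("a ~120 aa histone", 120)]:
    seconds = length / ELONGATION_RATE
    print(f"{protein}: ~{seconds:,.0f} s (~{seconds/60:.1f} min) per copy per ribosome")

# Titin: ~100 minutes per copy per ribosome. The histone: under half a minute.
# Same transcript count, wildly different translation budgets.
```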

Okay, so this was a lot of words and I've got to run, so if it only gets you to my favorite reference about the what and why of protein/mRNA correlation, I've done this ridiculous unpaid job of typing lots of words into this orange box that I've been doing for over 25% of my life!

Monday, January 24, 2022

PRM-PASEF without fancy software!

 


PRM-PASEF was a big topic at the first remote ASMS and it totally worked, but you had to modify an SQLite file manually (how I was doing it) or with Skyline (which, in my hands, caused PCs to struggle to load the huge DDA files to make targets). The software on these instruments is continuing to improve and the newest OTOF control (Compass 2022?) includes a PRM scheduling table like those on other instruments. Type in the mass of the peptide and its ion mobility.
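Conceptually, that scheduling table is just rows of precursor m/z, charge, a retention time window, and an ion mobility (1/K0) window. A hypothetical Python sketch of building one -- the column names and every value are illustrative, not Bruker's actual format:

```python
# Illustrative PRM target table -- columns and values are placeholders.
import csv

targets = [
    # peptide, m/z, z, rt_start (min), rt_end (min), 1/K0 low, 1/K0 high
    ("LGEYGFQNALIVR", 740.40, 2, 50.0, 54.0, 0.95, 1.05),
    ("VATVSLPR",      421.76, 2, 22.5, 26.5, 0.78, 0.88),
]

with open("prm_pasef_targets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["peptide", "mz", "charge", "rt_start", "rt_end",
                     "im_low", "im_high"])
    writer.writerows(targets)
```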

How's it perform against traditional targeted approaches? (By "multiplexed" it isn't talking about TOMAHAQ type stuff.)




Sunday, January 23, 2022

ProtSeq -- The most complicated way to analyze a proteome yet?

 



I'd been meaning to get to this one since the holidays, in particular because some of the leaders of our field seemed to be such fans of it. 

ProtSeq is more evidence that biology and medicine are really, really interested in proteomics right this second and they will go to any lengths to avoid mass spectrometrists. On a webinar I recently sat in on, one EXTREMELY prominent medical researcher described mass spec based proteomics in flattering terms which included "toxic" and "short-sighted." We were largely painted as a group that doesn't care about medicine, only about one-upping each other with incremental improvements, and it was implied that our field leaders were essentially just extensions of the marketing departments of instrument vendors. Which...well....honestly might not have been directed at me, but I sure spend a lot of my time promoting new toys from vendors and now I feel kind of like a gross old puppet. Oh, my notes say the speaker also said "miss the forest for the trees". I'll see if it is up where I can share it. I bet I can find a bunch.

I won't even go into how ProtSeq works here. Not just because I don't understand it, but because I'm really really sleepy from moving instruments for expensive incremental hardware improvements. I need to join in on the cycle here that goes like:

1) Prominent mass spec researcher publishes data on unreleased prototype instrument. This should be 2-4 months before ASMS. 

2) We see 4 million posters at ASMS. Noisy, sleep-deprived blogger amplifies all of these.

3) Early adopters get the instrument (a lot of the functions don't work and it's highly unstable; bug fixes will roll out every month for the first year or so and taper off to every 2 or 3 months).

4) Our "big" journals will publish literally anything from the instrument during the unstable period. Yeast? Sure! HeLa? Of course! This is partially to marvel at how anyone somehow got the thing to do absolutely anything. But we only get a couple freebies: one yeast paper is okay, and 2 HeLa papers. Bonus points if half the authors on the paper are vendor R&D team members who were largely necessary to keep the instrument from burning down the hospital where it was housed.

5) 24 months later -- a bunch of papers from the early adopters are finally published. More conservative facilities, or ones where funding needs to be requested 1-2 years in advance finally get their purchases through and the stable instrument starts showing up in tons of labs. 

6) 6-9 months later, prominent mass spec researcher publishes data on a new unreleased prototype instrument for the next ASMS and the cycle begins again.

I'm a hardware instrument nerd. I'm super impressed with the physics and math that go into making these things work, and a lot of our field is too. But we're a minority. I'm increasingly seeing young people who want to do proteomics but do not care in the slightest how the thing that gets them there works. But maybe that was everyone all along?

ProtSeq and O-Glink and SOMAProteinGuesser and a bunch of other things rolling out are pretty clear signs that no one wants to play these games with us. To be honest, after writing out how our cycle works, I can see the point. It sounds like it's great for the shareholders of instrument vendors and not for science.


Saturday, January 15, 2022

StatsPro -- A new R package (and Shiny App) with a bunch of tests!


It felt like we went in a very short time from no stats for proteomics (some of the fancy nerds always had stats, supposedly) to too many downstream statistical tools to keep track of. 

Why StatsPro? Well, it's got more fancy-sounding quantitative tests than anything I've ever heard of.

Picture a scenario where someone says "did you run this test" and you suspect they are implying that you should know what it is and should have run that test -- even assuming you knew what they were talking about. (You're a mass spec wizard; you can use that as evidence that you obviously know all the things, or as justification for why you don't know things that everyone else seems to. I am personally far more comfortable with the latter. For example, "why did it take 5 years of red "urgent" letters in your mailbox for you to realize that you live in a state where there are state AND "local" taxes?")

Really? I assumed that was rhetorical. 

 1) I'm not going to just open any red envelope addressed to me that says "urgent" on it and is from a lady named Bambi. No offense to anyone named Bambi out there, but red is a weird color for an envelope.

2) I turn solids and liquids into gas and fire that I manipulate in vacuum chambers to do my bidding to understand how BIOLOGY and life itself works. That sounds like a slightly better use of my time than opening red envelopes, right? 

StatsPro is like this shortcut guide to taking your data and making SAM supervise his LEMUR and a bunch of other things that are mostly 2 people's long names. 

Also, StatsPro was developed on Proteome Discoverer output data. And most of the tools out there require that you move column names around to match MaxQuant output to process them! 
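If you've been lucky enough to never do the column shuffling, here's roughly what it looks like in pandas. The header names are placeholders -- real Proteome Discoverer and MaxQuant headers vary by version and export settings:

```python
# Illustrative only: renaming PD-style columns to MaxQuant-style headers.
import pandas as pd

pd_report = pd.DataFrame({
    "Accession": ["P12345", "Q67890"],
    "Abundance: F1: Sample": [1.2e6, 3.4e5],
    "Abundance: F2: Sample": [1.1e6, 2.9e5],
})

rename_map = {
    "Accession": "Protein IDs",
    "Abundance: F1: Sample": "LFQ intensity Sample1",
    "Abundance: F2: Sample": "LFQ intensity Sample2",
}
mq_style = pd_report.rename(columns=rename_map)
print(mq_style.columns.tolist())
```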

You can try StatsPro out here! 



Friday, January 14, 2022

US HUPO Speed Design T-Shirt Competition!

 


Would you love to have your art on a T-shirt for USHUPO? 

Are you the fastest artist in proteomics? 

Or do you have COVID so you can't go to work and you lost your phone so you can't 2-factor into anything at work, leaving you with literally no responsibilities whatsoever for the whole day? 

You are? Well, you two lucky people can compete against me in this announced-today T-shirt competition! Deadline is also today!

Try beating this! I tied proteomics to the hosting city's most famous resident. Seamlessly in MSPaint, right handed!  Yes, I have drawn every tattoo I have myself. Obviously. 



Thursday, January 13, 2022

ABRF Abstract Deadlines are THIS week!

 


Awww...crap...I am going to this one and the deadline for abstracts is this week

Alpha-Tri! Crank up DIA-NN on a GPU with fragment intensity predictions!

 

......

........whatever....check this out! 


The full text link seems to be disconnected right now, but what I think you'll actually want is this Github link! 

https://github.com/YuAirLab/Alpha-Tri

Not for the "I don't know what a conda is, but I definitely don't like it" crowd, but the directions are like 10 things if you are in the "I have a conda thing on my desktop some student set up once and I can cut/paste and follow directions AND I know this thing has an NVIDIA GPU in it that only mined like $1.85 worth of Ethereum yesterday, and if anyone asks this is absolutely more efficient than the space heater that I would otherwise need to not freeze to death in this office in January" crowd.  

Wednesday, January 12, 2022

More great ChemoProteomics with DIA!

 

This is just a beautiful study to flip through if you're interested in doing chemoproteomics


Sure, this isn't the first DIA chemoproteomics study that we've seen, but -- and this could just be due to exposure and time to digest a previous study -- this one somehow seems more approachable.

Maybe it's that they don't have the strict page limits some other journals have? Or maybe it's that they use DIA on a Q Exactive Plus and use SpectroNaut? (All the methods are in the supplemental in this one.)

I'm not sure, but either way they do a great job of finding proteins that are drug targets with this method, so if this is of interest it's worth checking out. 


Tuesday, January 11, 2022

ASMS Deadline is in 2 weeks -- Get going, slackers!

 


I think there is a good chance I'm going to sit out the in-person part of this ASMS. I want to go to biology and medicine conferences to see what those people are doing!  We'll see, but I don't have a poster abstract deadline.

You great people going to talk about amazing advances in mass spectrometry in scenic and not-at-all-filled-with-mosquitos-in-June Minnesota do have a deadline, however, and it's in 2 weeks!

Get on it, slackers! Those mosquitos aren't going to feed themselves! 

Get registered here! 

Monday, January 10, 2022

Getting more out of your (FLAG tag) experiments with FAIMS!

 


FLAG tags are for when you absolutely want to pull down a protein of interest -- to the point that you're willing to mutate your organism so you can pull down your FLAG-tagged proteins with Anti-FLAG. (Not these guys, these things.)

What are the challenges? Well, to do this the hard way, I'll cut from the abstract. 

Of this paper (almost forgot the paper link again!)


My interpretation is that when you use these proteins you've got a shitload of them around and it's hard to see past them to your targets. 

FAIMS time! 

Of course it helps. That's what it does. They experiment with settings to dig past their FLAG-tagged target protein that is 99% or 99.9% of their mixture. What is really cool here, I think, is how well the FAIMS unit works on their Orbitrap Fusion 2 (Luminati). 

They run the same samples on their FAIMS-equipped Lumos against a non-FAIMS-equipped Exploris 480 and an Orbitrap Fusion 3 (Eclaire) -- and the Fusion 2 + FAIMS wins every comparison in the study. At one point they get 81% more peptides in the experiment than the Exploris 480!

I don't know what they're charging for the FAIMS these days, but if you were looking for a boost in capabilities for some tough matrices I bet you'd spend less on this unit (minus your massive N2 usage) than you would on a whole new system! 

Sunday, January 9, 2022

PepMap Mountain and why you want to flatten it!

 


I looked for some of my LTQ files from my postdoc to use as an example, found that some of these old drives might not work anymore (gasp! what will I ever do without 4-pound blocks of steel holding low resolution files?) and then poked around on PRIDE until I found a few published studies with chromatography like this one above.

Just to be clear, my chromatography looked like this a lot when I was moving from glycans (my grad work) to proteomics, and I've seen lots of other people, primarily people used to analytical chromatography, end up with proteomic files that look like this. It's also very often an issue when people are getting used to PepMap. If you think this is your file, it probably isn't. I reviewed multiple papers last year where I suggested some alterations to chromatography for their next study. It'll take you 10 min on ProteomeXchange (assuming you can download files on a 10Gb connection) to find several nice examples, but I'm going to pick on this random one.

As fast and powerful as today's instruments are, you can get data out of files like this, but some chromatography optimization can get you much further.

For some hypothetical numbers, let's drop this file into RawBeans to see how things look. 

Holy cow. This instrument is FAST and I'd guess that getting the maximum number of MS2 scans was a primary goal of this file because it looks like there were in excess of 60 MS2 scans allowed between each MS1 scan! Even if that was an exceptionally rare occurrence, the apex of the chromatography is around 30 minutes and we see that here and there the instrument was hitting 40 MS2 scans for every MS1 scan. However, if you look across the board it looks like the average was probably a whole lot closer to 15. 

That math kind of makes sense because RawBeans says: 



So, 61,272 MS2 scans / 100 min / 60 seconds gives around 10.2 MS2 scans/second. If we throw out the area from 0-17 min where there is nothing and the dead space after about 95 minutes, the rate over the active gradient was probably closer to 15/second. Honestly, not that bad, but again -- that is an extremely fast instrument.
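That arithmetic generalizes to any file, so here it is as a tiny Python function. The 22 minutes of dead time is my rough read of the empty regions in this run:

```python
def ms2_per_second(total_ms2_scans, run_minutes, dead_minutes=0):
    """Average MS2 acquisition rate, optionally excluding dead time."""
    return total_ms2_scans / ((run_minutes - dead_minutes) * 60)

print(ms2_per_second(61_272, 100))                   # ~10.2 scans/s, whole run
print(ms2_per_second(61_272, 100, dead_minutes=22))  # ~13 scans/s on the active
                                                     # gradient -- same ballpark
                                                     # as the 15 eyeballed above
```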

I dug a little and found a RAW file around the same length with a more flattened (PepMap) chromatography gradient --

--it's 130 min, so not an apples to avocados breakdown -- 

RawBeans says --

 

Realistically, not an improvement, but check this out -- 


This instrument isn't anywhere near as fast as the one above. The maximum number of MS2 scans that ever occurs between MS1 scans is around 15.

But 70,870 MS2 scans / 130 min / 60 seconds comes out to around 9.1 scans/second. That makes this density plot look maybe just a little misleading? I think the pixel size used to indicate each number of scan events must just max completely out at this scaling.

Either way, we've got instrument A that is capable of 60 MS2 scans/MS1 and instrument B that maxes out at 15 MS2 scans/MS1 -- and because run #2 flattened out the PepMap mountain, the much slower instrument gets comparable numbers of scans.

Here is the gradient, as saved in the second RAW file

Honestly, this is actually a lower concentration of B than I would have guessed! But look how shallow that gradient is. The starting conditions are 5% B and almost all of the peptides are off by 20% B!  You know what? We only load 80% ACN/0.1% formic acid in our buffer B so this is probably an LC that has 100% ACN/0.1% FA. 

File 1? Well, it gets to 45% Buffer B in 80 minutes. Which honestly makes sense for analytical flow C-18 for a lot of organic molecules (HyperSIL Gold, for example, maybe you need 45% B to push your molecules off), but if you think about where 20% or 25% B would occur on file 1: 

(I crudely just used the relative abundance as a ruler, so this is...ish...) We hit 20% buffer B at 35 minutes and 25% B around 45 minutes. 

Now, what's funny about this is that if you just extract a couple of peaks from these 2 files, on the surface they look pretty good. In fact, the PepMap Mountain file looks like it has sharper overall peak shapes, with a full width at half maximum (FWHM -- estimate about 50% of the peak height and then figure out how many seconds the peak spans there) that is better than the other file.

While FWHM is a valuable metric it is NOT the most important one when thinking about mass spectrometry cycle time. What you actually care about in LCMS proteomics, almost always, is the peak width at the threshold where you would actually trigger an MS/MS scan. 

When you look at the PepMap Mountain file in that way, this is where you see the problem. 

This is actually tough to visualize, but what I did here was take a randomly selected precursor, extract it at 5ppm, and use "point to stick" to show its occurrence. (Then I blocked out anything that might be personally identifiable in this file in red; I'm not trying to pick on anyone.)


The minimum threshold in this file to trigger an MS2 scan is 50,000. Every red line that you see above is greater than that, and most of them are 5x higher, so the mass spec thinks that every single one of those lines is a point where it could trigger fragmentation of that ion. That's darned near 2 minutes. Most people aren't using dynamic exclusion settings of 2 minutes or more. I think most people are looking at their FWHM and placing their dynamic exclusion at just slightly more than that, and this is what happens here.
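To put a number on that, here's a toy Python sketch with an invented Gaussian peak: the width at a 50,000-count trigger threshold comes out around double the FWHM, which is exactly how a FWHM-based dynamic exclusion setting lets the same precursor trigger over and over:

```python
# Toy peak: width at the MS2 trigger threshold vs. FWHM. All values invented.
import numpy as np

times = np.linspace(0, 4, 400)                         # minutes
xic = 1_000_000 * np.exp(-((times - 2) / 0.35) ** 2)   # fake extracted ion trace

def width_at(threshold):
    above = times[xic >= threshold]
    return above.max() - above.min()

fwhm = width_at(xic.max() / 2)      # classic full width at half maximum
trigger_width = width_at(50_000)    # width at a 50,000-count trigger threshold

print(f"FWHM:            {fwhm:.2f} min")
print(f"Width at 50,000: {trigger_width:.2f} min")
# Here the trigger-level width is about twice the FWHM -- set dynamic
# exclusion off the FWHM and the same peak keeps re-triggering.
```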



Green and blue lines are two filters for MS2 scans that fall within the mass error of the Orbitrap prescan (where your DDA target list is created -- typically a short, fast, and slightly less accurate scan to get things moving), which I'll assume is around 25 ppm (this is getting pretty long, but there is a blog post somewhere where we backtracked to that number)

The important part here is that this peak, when allowing a signal of 50,000 counts to trigger, was triggered 6 times, and that's why -- long long story even longer -- the percentage of MS2 scans that are converted to peptide spectral matches is significantly lower for the much, much faster instrument with the PepMap Mountain chromatography compared to the significantly slower and older system.

I guess I didn't mention that part. And the whole reason for this post! 

Here is the scenario that started this! Brand new system A running a 2 hour gradient was compared to 9 year old system B running a 2 hour gradient. The 9 year old system got 2x more peptides. The number of scans per file looked about the same. I can't share the actual files, but it only took about 4 minutes of searching to find published data to illustrate exactly what happened, and for some reason I spent an hour making screenshots and typing when I should have been sleeping. In my defense, I didn't feel awake enough to work on things that I get paid to work on.

I pick on PepMap, but only because it is where this is the most extreme. Compared to any other C-18 I've ever used, stuff comes off it the earliest. I've often wondered how much a misunderstanding of this property leads to its lack of popularity, but even at 100% aqueous you do seem to lose more phosphopeptides with it than with anything else, and I'm pretty sure that's why CPTAC stopped using it.

I'm going to stop, but here are some related past posts:

Even more boring chromatography post! 

Really old....geez...how long have I been writing in this orange box....extracting peak widths for optimizing dynamic exclusion widths

Crap. This one is even older. I wouldn't even post this, but I did review an LTQ Orbitrap study recently where unit resolution dynamic exclusion was used. People I work with today were in middle school when I wrote this, but I commonly wear shoes to lab that are older than some of them, so that's not all that weird, I guess. OH! And it has a great example of where my chromatography was a mountain! Totally worth linking here. 

Saturday, January 8, 2022

Capillary electrophoresis ESI of low nanograms of peptides!

 


I was hoping to see this one soon after getting a sneak peek at SCP2021! If you want to watch a talk describing some of these results, it is available on YouTube here


I think what is most striking here is how very complementary the 3 separation techniques evaluated are. When we drop to these ultralow abundances like single cells all the stochastic effects that we're used to seeing when we run a high concentration sample 3 or 5 times are dramatically magnified. This is why in some of the published studies of single cell proteomics you'll see something like 2,000 proteins identified in the study but each individual LCMS run will often only have 300 or 400 protein IDs. 
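You can reproduce that pattern with a completely dumb simulation -- uniformly draw a few hundred "identifications" per run from a bigger detectable pool (all numbers invented) and the study-level union dwarfs any single run:

```python
# Toy stochastic-sampling demo: per-run IDs vs. study-level union.
import random

random.seed(1)
detectable = range(5000)                               # hypothetical pool
runs = [set(random.sample(detectable, 350)) for _ in range(10)]

per_run = sum(len(r) for r in runs) / len(runs)
union = len(set().union(*runs))
print(f"avg IDs per run: {per_run:.0f}, IDs across the study: {union}")
# ~350 per run, but a couple thousand across ten runs.
```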

What this looks like is a way to get at things with CE that you'll completely miss with LCMS alone. In addition, while I know this is a CE-MS study -- ummmm....the monolith columns really seem to shine in this analysis! 

The CE system used here is (I'm pretty sure) the big floor-mount one from SCIEX. As much as I like the little source-sized CE system, the units that I have used have been far less sensitive than conventional nanoLC and I don't think they could be used for an application like this. On one of the early beta units, where you had 100% control of the loading pressure, time, and voltages for loading your sample...maybe... The commercial units don't give you that kind of control.

The MS system used here was the Fusion 3 Eclipse system and, don't quote me, I read this earlier in the morning but I think the data was processed with the stand-alone Byonic software from Protein Metrics. 

Friday, January 7, 2022

TIMS Reduces coisolation interference! Hard numbers on how much.

 


FAIMS is super cool and I'm a big fan of the current iteration of the technology, but it's basically got a resolution of something like 5 or 10, right? It's superb for reducing background and that's what just about everyone uses it for, but if you set up 100 different compensation voltages and get an MS scan for each one of them, you've wasted a lot of time. A CV of 20 and 40 looks pretty different, but a 22 and a 24 look just about the same. Other ion mobility things have much higher resolution, but it's been tough to really quantify how much they help. 

Here two people who are really good at math work it all out! 


The reduction in coisolation interference is a lot. It's almost a 10x reduction compared to the TOF without TIMS! At the speed these things run at? That's ridiculously awesome -- reminder, you can realistically get 80+ high-res scans/second on these things. Now, you do have to throw out the caveat that the quad is....umm....well, NASA can't build a quad this good, and neither can I, but you don't buy it for its quadrupole isolation.
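For a toy version of the metric in play (using my own simplified definition: precursor purity = target intensity over total intensity inside the isolation window), watch what an ion mobility gate does on top of the quad window. All peaks invented:

```python
# Toy isolation purity, with and without an ion mobility (1/K0) gate.
def purity(peaks, target_mz, quad_window=2.0, im_window=None, target_im=None):
    lo, hi = target_mz - quad_window / 2, target_mz + quad_window / 2
    selected = [p for p in peaks if lo <= p["mz"] <= hi]
    if im_window is not None:
        selected = [p for p in selected
                    if abs(p["im"] - target_im) <= im_window / 2]
    total = sum(p["intensity"] for p in selected)
    target = sum(p["intensity"] for p in selected if p["mz"] == target_mz)
    return target / total

peaks = [
    {"mz": 740.40, "im": 1.00, "intensity": 8e5},  # target precursor
    {"mz": 740.95, "im": 0.82, "intensity": 3e5},  # coisolating interference
    {"mz": 739.80, "im": 1.21, "intensity": 2e5},  # more interference
]
print(purity(peaks, 740.40))                                 # quad alone: ~0.62
print(purity(peaks, 740.40, im_window=0.1, target_im=1.00))  # with TIMS gate: 1.0
```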

Now, if you could only do multiplexed quantification on one of these things? Sounds like if you really thought about your TIMS isolation you could get some really really good numbers for coisolation interference! 

Monday, January 3, 2022

Determining Plasma Protein Variation Parameters for TMT Biomarker studies.

Are you getting super excited to get back in the lab? I've planned some big projects out and can't wait for some deliveries to come in! 

Great! Let's talk about the buzzkill paper of the day, where this group digs deep into variance in global plasma proteomics (using TMT quantification on an Eclipse, which, by all accounts, is a pretty good tool for performing that experiment). I think there are a bunch of MDs on it, because who else is boring enough to want to bring a can of "confidence intervals" to our vacuum chamber party?


Honestly, it's not anywhere near as bad as I would have guessed before I took a deep breath and opened the PDF. It is only a problem when you consider how relatively small the average study on ProteomeXchange actually is. Yo, instruments are way faster, why isn't the biological n going up?  


I'm largely joking.  I realize lots of work is happening where people are getting their replicates up high enough to draw big biological conclusions and there is a lag phase while some things sit in peer review for a year or two. My guess is the next study from this group in Denmark is going to be something spectacular based on the work they put in up front!