Saturday, January 15, 2022

StatsPro -- A new R package (and Shiny App) with a bunch of tests!


It felt like we went in a very short time from no stats for proteomics (some of the fancy nerds always had stats, supposedly) to too many downstream statistical tools to keep track of. 

Why StatsPro? Well, it's got more fancy-sounding quantitative tests than anything I've ever heard of.

In a scenario where someone says "did you run this test?" and you suspect they're implying that you should know what it is and should have run it -- well, you're a mass spec wizard. You can use that as evidence that you obviously know all the things, or as justification for why you don't know things that everyone else seems to. I am personally far more comfortable with the latter. For example: "why did it take 5 years of red "urgent" letters in your mailbox for you to realize that you live in a state where there are state AND "local" taxes?"

Really? I assumed that was rhetorical. 

 1) I'm not going to just open any red envelope that says "urgent" on it and is addressed from a lady named Bambi. No offense to anyone named Bambi out there, but red is a weird color for an envelope.

2) I turn solids and liquids into gas and fire that I manipulate in vacuum chambers to do my bidding to understand how BIOLOGY and life itself works. That sounds like a slightly better use of my time than opening red envelopes, right? 

StatsPro is like this shortcut guide to taking your data and making SAM supervise his LEMUR and a bunch of other things that are mostly 2 people's long names. 

Also, StatsPro was developed on Proteome Discoverer output data. Most of the tools out there require you to move column names around to match MaxQuant output before they'll process anything!

You can try StatsPro out here! 



Friday, January 14, 2022

US HUPO Speed Design T-Shirt Competition!

 


Would you love to have your art on a T-shirt for USHUPO? 

Are you the fastest artist in proteomics? 

Or do you have COVID so you can't go to work and you lost your phone so you can't 2-factor into anything at work, leaving you with literally no responsibilities whatsoever for the whole day? 

You are? Well, you two lucky people can compete against me in this announced-today T-shirt competition! The deadline is also today!

Try beating this! I tied proteomics to the hosting city's most famous resident. Seamlessly, in MSPaint, right-handed! Yes, I have drawn every tattoo I have myself. Obviously. 



Thursday, January 13, 2022

ABRF Abstract Deadlines are THIS week!

 


Awww...crap...I am going to this one and the deadline for abstracts is this week!

Alpha-Tri! Crank up DIA-NN on a GPU with fragment intensity predictions!

 

......

........whatever....check this out! 


The full text link seems to be disconnected right now, but what I think you'll actually want is this Github link! 

https://github.com/YuAirLab/Alpha-Tri

Not for the "I don't know what a conda is, but I definitely don't like it" crowd, but the directions are like 10 things if you are in the "I have a conda thing on my desktop some student set up once and I can cut/paste and follow directions AND I know this thing has an NVIDIA GPU in it that only mined like $1.85 worth of Ethereum yesterday, and if anyone asks this is absolutely more efficient than the space heater that I would otherwise need to not freeze to death in this office in January" crowd.  

Wednesday, January 12, 2022

More great ChemoProteomics with DIA!

 

This is just a beautiful study to flip through if you're interested in doing chemoproteomics.


Sure, this isn't the first DIA chemoproteomics study that we've seen, but -- and this could just be due to exposure and time to digest the previous study -- this one somehow seems more approachable.

Maybe it's that they don't have the strict page limits some other journals have? Or maybe it's that they run DIA on a Q Exactive Plus and process in Spectronaut? (All the methods are in the supplemental in this one.)

I'm not sure, but either way they do a great job of finding proteins that are drug targets with this method, so if this is of interest it's worth checking out. 


Tuesday, January 11, 2022

ASMS Deadline is in 2 weeks -- Get going, slackers!

 


I think there is a good chance I'm going to sit out the in-person part of this ASMS. I want to go to biology and medicine conferences to see what those people are doing!  We'll see, but I don't have a poster abstract deadline.

You great people going to talk about amazing advances in mass spectrometry in scenic, not-at-all-filled-with-mosquitos-in-June Minnesota do have a deadline, however, and it's in 2 weeks! 

Get on it, slackers! Those mosquitos aren't going to feed themselves! 

Get registered here! 

Monday, January 10, 2022

Getting more out of your (FLAG tag) experiments with FAIMS!

 


FLAG tags are for when you absolutely want to pull down a protein of interest, to the point that you're willing to mutate your organism so you can pull down your FLAG-tagged proteins with Anti-Flag. (Not these guys, these things.)  

What are the challenges? Well, rather than explain it the hard way, I'll cut from the abstract. 

Of this paper (almost forgot the paper link again!)


My interpretation is that when you use these proteins you've got a shitload of them around and it's hard to see past them to your targets. 

FAIMS time! 

Of course it helps. That's what it does. They experiment with settings to dig past their FLAG-tagged target protein, which makes up 99% or 99.9% of their mixture. What is really cool here, I think, is how well the FAIMS unit works on their Orbitrap Fusion 2 (Luminati). 

They run the same samples on their FAIMS-equipped Lumos and on a non-FAIMS-equipped Exploris 480 and Orbitrap Fusion 3 (Eclaire) -- and the Fusion 2 + FAIMS wins every comparison in the study. At one point it gets 81% more peptides in the experiment than the Exploris 480! 

I don't know what they're charging for the FAIMS these days, but if you were looking for a boost in capabilities for some tough matrices I bet you'd spend less on this unit (minus your massive N2 usage) than you would on a whole new system! 

Sunday, January 9, 2022

PepMap Mountain and why you want to flatten it!

 


I looked for some of my LTQ files from my postdoc to use as an example, found that some of these old drives might not work anymore (gasp! what will I ever do without 4-pound blocks of steel holding low resolution files?) and then poked around on PRIDE until I found a few published studies with chromatography like the one above.  

Just to be clear, my chromatography looked like this a lot when I was moving from glycans (my grad work) to proteomics, and I've seen lots of other people, primarily people used to analytical chromatography, end up with proteomic files that look like this. It's also very often an issue when people are getting used to PepMap. If you think this is your file, it probably isn't. On multiple papers I reviewed last year I suggested alterations to the chromatography for the next study. It'll take you 10 min on ProteomeXchange (assuming you can download files on a 10GB connection) to find several nice examples, but I'm going to pick on this random one. 

As fast and powerful as today's instruments are, you can get data out of files like this, but some chromatography optimization can get you much further.

For some hypothetical numbers, let's drop this file into RawBeans to see how things look. 

Holy cow. This instrument is FAST and I'd guess that getting the maximum number of MS2 scans was a primary goal of this file because it looks like there were in excess of 60 MS2 scans allowed between each MS1 scan! Even if that was an exceptionally rare occurrence, the apex of the chromatography is around 30 minutes and we see that here and there the instrument was hitting 40 MS2 scans for every MS1 scan. However, if you look across the board it looks like the average was probably a whole lot closer to 15. 

That math kind of makes sense because RawBeans says: 



So, 61,272/100min/60seconds gives around 10.2 MS2 scans/second. If we throw out 0-17 min, where there is nothing, and the space after 95 minutes, the active gradient was probably closer to 13 scans/second. Honestly, not that bad, but again -- that is an extremely fast instrument. 

I dug a little and found a RAW file around the same length with a more flattened (PepMap) chromatography gradient --

--it's 130 min, so this is an apples-to-avocados comparison -- 

RawBeans says --

 

Realistically, not an improvement, but check this out -- 


This instrument isn't anywhere near as fast as the one above. The maximum number of MS2 scans that ever occurs between MS1 scans is around 15. 

But 70,870/130min/60seconds is around 9.1 scans/second. That makes this density plot look maybe just a little misleading? I think the pixel size used to indicate each number of scan events must just max completely out at this scaling. 

Either way, we've got instrument A that is capable of 60 MS2 scans/MS1 and instrument B that maxes out at 15 MS2 scans/MS1 -- and because run #2 flattened out the PepMap mountain, the much slower instrument gets comparable numbers of scans. 
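If you want to run this napkin math on your own files, it's just total MS2 scans over the time the instrument actually had peptides to work with. The scan totals are from the RawBeans reports above; the dead-time window is my eyeballed guess.

# Back-of-the-envelope MS2 acquisition rates from RawBeans scan totals.
def ms2_rate(total_ms2, run_min, dead_min=0):
    # Average MS2 scans/second, optionally ignoring empty gradient time.
    return total_ms2 / ((run_min - dead_min) * 60)

print(ms2_rate(61272, 100))      # file 1: ~10.2 scans/s over the whole run
print(ms2_rate(61272, 100, 22))  # file 1: ~13 scans/s over the active gradient
print(ms2_rate(70870, 130))      # file 2: ~9.1 scans/s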

Here is the gradient, as saved in the second RAW file

Honestly, this is actually a lower concentration of B than I would have guessed! But look how shallow that gradient is. The starting conditions are 5% B and almost all of the peptides are off by 20% B! You know what? We only run 80% ACN/0.1% formic acid as our buffer B, so this is probably an LC with 100% ACN/0.1% FA as B. 

File 1? Well, it gets to 45% buffer B in 80 minutes, which honestly makes sense for analytical flow C-18 and a lot of organic molecules (Hypersil GOLD, for example -- maybe you need 45% B to push your molecules off), but think about where 20% or 25% B would occur in file 1: 

(I crudely just used the relative abundance as a ruler, so this is...ish...) We hit 20% buffer B at 35 minutes and 25% B around 45 minutes. 
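For the ruler-free version, a linear gradient is just interpolation. This is a sketch assuming file 1 ramps 5% to 45% B over 80 minutes with zero gradient delay, which no real system has, so the observed times land a little later than the paper values.

# When does a linear gradient hit a target %B? All parameters here are
# the assumptions stated above, not values pulled from the method file.
def time_at_percent_b(target_b, start_b=5.0, end_b=45.0, duration_min=80.0):
    return (target_b - start_b) / (end_b - start_b) * duration_min

print(time_at_percent_b(20))  # 30 min on paper; ~35 min observed above
print(time_at_percent_b(25))  # 40 min on paper; ~45 min observed above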

Now, what's funny about this is that if you just extract a couple of peaks from these 2 files, on the surface they look pretty good. In fact, the PepMap Mountain file looks like it has sharper overall peak shapes, with a better full width at half maximum (FWHM; estimate 50% of the peak height, then figure out how many seconds the peak spans at that height) than the other file. 

While FWHM is a valuable metric, it is NOT the most important one when thinking about mass spectrometry cycle time. What you actually care about in LCMS proteomics, almost always, is the peak width at the threshold where you would actually trigger an MS/MS scan. 
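If you want to pull both numbers out of an extracted ion chromatogram yourself, here's a minimal sketch in plain Python over (time, intensity) points; your vendor software does this with far more grace.

# Peak width at an arbitrary intensity threshold vs. FWHM.
# xic is a list of (retention_time_sec, intensity) points for one precursor.
def width_at(xic, threshold):
    # Seconds between the first and last points at or above the threshold.
    times = [t for t, i in xic if i >= threshold]
    return max(times) - min(times) if times else 0.0

def fwhm(xic):
    apex = max(i for t, i in xic)
    return width_at(xic, apex / 2)

def width_at_trigger(xic, min_intensity=50_000):
    # 50,000 counts is the MS2 trigger threshold in the file we're about
    # to look at below.
    return width_at(xic, min_intensity)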

When you look at the PepMap Mountain file in that way, this is where you see the problem. 

This is actually tough to visualize, but what I did here was take a randomly selected precursor, extract it at 5 ppm, and use "point to stick" to show its occurrence. (Then I blocked out anything that might be personally identifiable in this file in red; I'm not trying to pick on anyone.) 


The minimum threshold in this file to trigger an MS2 scan is 50,000. Every red line that you see above is greater than that, and most of them are 5x higher, so the mass spec thinks every single one of those lines is a point where it could trigger fragmentation of that ion. That's darned near 2 minutes. Most people aren't using dynamic exclusion settings of 2 minutes or more. I think most people look at their FWHM, place their dynamic exclusion at just slightly more than that, and this is what happens here. 



Green and blue lines are two filters for MS2 scans that fall within the mass error of the Orbitrap prescan (where your DDA target list is created -- typically a short, fast, and slightly less accurate scan to get things moving), which I'll assume is around 25 ppm (this is getting pretty long, but there is a blog post somewhere where we backtracked to that number). 

The important part here is that this peak, when allowing a signal of 50,000 counts to trigger, was triggered 6 times, and that's why -- long long story even longer -- the percentage of MS2 scans converted to peptide spectral matches is significantly lower for the much, much faster instrument with the PepMap Mountain chromatography than for the significantly slower and older system. 
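The napkin math on repeat sampling looks something like this -- a sketch that assumes the peak stays above threshold the whole time and the instrument re-triggers the moment exclusion expires.

import math

# How many MS2 scans get burned on one peptide that sits above the
# trigger threshold longer than the dynamic exclusion window?
def redundant_triggers(width_at_threshold_sec, dynamic_exclusion_sec):
    return math.ceil(width_at_threshold_sec / dynamic_exclusion_sec)

# ~2 min above a 50,000 count threshold with a typical ~20 s exclusion:
print(redundant_triggers(115, 20))  # 6 -- same peptide, six times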

I guess I didn't mention that part. And it's the whole reason for this post! 

Here is the scenario that started this! Brand new system A running a 2 hour gradient was compared to 9-year-old system B running a 2 hour gradient. The 9-year-old system got 2x more peptides. The number of scans per file looked about the same. I can't share the actual files, but it only took about 4 minutes of searching to find published data that illustrates exactly what happened, and for some reason I spent an hour making screenshots and typing when I should have been sleeping. In my defense, I didn't feel awake enough to work on things that I get paid to work on. 

I pick on PepMap, but only because this is where the effect is the most extreme. Compared to any other C-18 I've ever used, stuff comes off it the earliest. I've often wondered how much a misunderstanding of this property leads to its lack of popularity, but even at 100% aqueous you do seem to lose more phosphopeptides with it than with anything else, and I'm pretty sure that's why CPTAC stopped using it. 

I'm going to stop, but here are some related past posts:

Even more boring chromatography post! 

Really old....geez...how long have I been writing in this orange box....extracting peak widths for optimizing dynamic exclusion widths

Crap. This one is even older. I wouldn't even post this, but I did review an LTQ Orbitrap study recently where unit resolution dynamic exclusion was used. People I work with today were in middle school when I wrote this, but I commonly wear shoes to lab that are older than some of them, so that's not all that weird, I guess. OH! And it has a great example of where my chromatography was a mountain! Totally worth linking here. 

Saturday, January 8, 2022

Capillary electrophoresis ESI of low nanograms of peptides!

 


I was hoping to see this one soon after getting a sneak peek at SCP2021! If you want to watch a talk describing some of these results, it is available on YouTube here


I think what is most striking here is how very complementary the 3 separation techniques evaluated are. When we drop to these ultralow abundances like single cells all the stochastic effects that we're used to seeing when we run a high concentration sample 3 or 5 times are dramatically magnified. This is why in some of the published studies of single cell proteomics you'll see something like 2,000 proteins identified in the study but each individual LCMS run will often only have 300 or 400 protein IDs. 

What this looks like is a way to get at things with CE that you'll completely miss with LCMS alone. In addition, while I know this is a CE-MS study -- ummmm....the monolith columns really seem to shine in this analysis! 

The CE system used here is (I'm pretty sure) the big floor-mount one from SCIEX. As much as I like the little source-sized CE system, the units that I have used have been far less sensitive than conventional nanoLC and I don't think they could be used for an application like this. On one of the early beta units, where you had 100% control of the loading pressure, time, and voltages for loading your sample...maybe... The commercial units don't give you that kind of control. 

The MS system used here was the Fusion 3 Eclipse system and, don't quote me, I read this earlier in the morning but I think the data was processed with the stand-alone Byonic software from Protein Metrics. 

Friday, January 7, 2022

TIMS Reduces coisolation interference! Hard numbers on how much.

 


FAIMS is super cool and I'm a big fan of the current iteration of the technology, but it's basically got a resolution of something like 5 or 10, right? It's superb for reducing background and that's what just about everyone uses it for, but if you set up 100 different compensation voltages and get an MS scan for each one of them, you've wasted a lot of time. A CV of 20 and 40 looks pretty different, but a 22 and a 24 look just about the same. Other ion mobility things have much higher resolution, but it's been tough to really quantify how much they help. 

Here two people who are really good at math work it all out! 


The reduction in coisolation interference is a lot. It's almost a 10x reduction compared to the TOF without TIMS! At the speed these things run at? That's ridiculously awesome -- reminder, you can realistically get 80+ high res scans/second on these things. Now, you do have to throw out the caveat that the quad is....umm....well, NASA can't build a quad this good, and neither can I, but you don't buy it for its quadrupole isolation. 
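If "coisolation interference" feels abstract, here's a toy purity calculation (my own illustration, not the authors' model): purity is the fraction of ion signal inside your isolation window that actually belongs to the target, and a mobility filter shrinks what counts as "inside the window."

# Toy precursor purity with and without an ion mobility (1/K0) filter.
# ions are (mz, one_over_k0, intensity) tuples; all values hypothetical.
def purity(ions, target, quad_width=1.5, im_width=None):
    tgt_mz, tgt_im, tgt_int = target
    def caught(mz, im):
        in_quad = abs(mz - tgt_mz) <= quad_width / 2
        in_im = im_width is None or abs(im - tgt_im) <= im_width / 2
        return in_quad and in_im
    total = sum(i for mz, im, i in ions if caught(mz, im))
    return tgt_int / total if total else 0.0

target = (500.25, 0.95, 1e5)
interferer = (500.60, 1.20, 9e5)  # same quad window, different mobility
ions = [target, interferer]
print(purity(ions, target))                 # 0.1 -- quad alone
print(purity(ions, target, im_width=0.05))  # 1.0 -- TIMS filters it out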

Now, if you could only do multiplexed quantification on one of these things? Sounds like if you really thought about your TIMS isolation you could get some really really good numbers for coisolation interference! 

Monday, January 3, 2022

Determining Plasma Protein Variation Parameters for TMT Biomarker studies.

Are you getting super excited to get back in the lab? I've planned some big projects out and can't wait for some deliveries to come in! 

Great! Let's talk about the buzzkill paper of the day, where this group digs deep into variance in global plasma proteomics (using TMT quantification on an Eclipse, which, by all accounts, is a pretty good tool for performing that experiment). I think there are a bunch of MDs on it, because who else is boring enough to want to bring a can of "confidence intervals" to our vacuum chamber party? 


Honestly, it's not anywhere near as bad as I would have guessed before I took a deep breath and opened the PDF. It is only a problem when you consider how relatively small the average study on ProteomeXchange actually is. Yo, instruments are way faster, why isn't the biological n going up?  


I'm largely joking.  I realize lots of work is happening where people are getting their replicates up high enough to draw big biological conclusions and there is a lag phase while some things sit in peer review for a year or two. My guess is the next study from this group in Denmark is going to be something spectacular based on the work they put in up front! 

Sunday, January 2, 2022

Up your multiplexing -- you can order the TMT17 and TMT18 reagents now!

 


Wooooo! If you've got a bunch of TMTpro 16-plex in your freezers, ordering a big kit of TMT18 sounds kind of inefficient, right? 

You don't have to! You can just get the big aliquots of tags 17 and 18 here!

IN MAMMALIAN BRAIN -- Nuclear Proteomic Dynamics!

 

Not sure how I missed this one, but this system is ridiculously awesome.

APEX in situ proteomics is super cool stuff, and before everyone shut down or had to do COVID proteomics for 2 years, it looked like that was all anyone was uploading to ProteomeXchange.

(If you aren't familiar, here is something about APEX and its friend BioID that I never did get entirely sorted out). 

This study does it in a live organism! Did you know that was possible? I sure didn't!

Obviously, this is a ton of work. You have to make mutant mice and then find the exact right conditions to induce the APEX reactions the way you want them to, and where. This team cranks up the speed by using multiplexing on an Orbitrap Eclipse. 

Why go to the trouble? 

Name another technique that provides cell-type-specific quantitative profiling of subcellular proteomics in the brain of a mammal. I'll wait. AND it's worth checking out if only for the pretty pictures. 

All the RAW files are on PRIDE here

Saturday, January 1, 2022

2021 Proteomics Recap!

Wow! I just found a draft of another proteomics 2021 wrapup that I'd started and not finished.

I like this one better than the last one because it is almost entirely me rambling. 

In no particular order, this is my recap of what happened in proteomics in 2021! 

1) The outside world noticed Proteomics in 2021! 

What was the tipping point? Was it the petabytes of DNA sequencing data that had been acquired on diseases like Alzheimer's, Huntington's, ALS, schizophrenia, on and on and on -- diseases that do NOT alter someone's DNA? ("Come on, guys, if we just increase our read depth and generate 15TB per file, we'll get closer to something we can't possibly measure")

Was it that someone finally looked behind the scenes at single cell RNASeq data and realized (as one scientist I have tons of respect for recently asked, "...ummm....should it all be zeros?") that you only get 8,000 transcripts quantified by sequencing at least 10,000 cells because 90-95% of the values in each individual cell -- at the whole transcript level -- are zeros? More on this further on. 

I don't know what it was, but someone at Forbes got up and went and took pictures of mass specs in some museum, and lots of people in our field got offered a whole lot more money to go do proteomics somewhere else. Wow, there are a lot of start-ups -- I know more than a few people who either took or hesitantly walked away from 7-figure offers, and...as a consequence....ummm....there are a lot of jobs open.... (This was just one side of one board at ASMS Philly. If we normalize the number of postings by the number of attendees, this might be sort of insane.)

A few months ago I made a list of over 100 open positions in one U.S. city for a slide deck. We might seriously need to do something about this soon....and I'm not sure that waiving the requirements for degree completion (as some companies are doing) is the best long-term strategy for our field, or for students who have been isolated during what should be the coolest years of their lives, with their goals for a graduate degree slipping further away each time their lab gets shut down by some evil virus. We'll work it out in the end, but everyone should be aware that things like this are happening. If a CV crosses your desk down the road where someone did 5 years of work and no fancy sword or robe is attached, please stop and think about the fact that COVID has impacted everyone to different extents. 


2) Proteomics (and mass spectrometry) has proven that it belongs in the monitoring of and response to emerging diseases. 

Whew. I can't even try to link 1% of them here. Yes, COVID-19 is still around. And the absurdly huge mountain of evidence on how this and other viruses work makes one thing clear -- mass spectrometry HAS to play a role in the detection and study of emerging threats in the future. Heck, I think someone let me ramble an entire article about it somewhere....

How's this virus work? 

It splices! It...um...glyces...kind of worked...it uses complex glycosylations!  

And it rapidly mutates in ways that, currently, only mass spectrometry is fast enough to respond to, because we don't need complex reagents to be produced and shipped out before we can adapt. Who else is continuing to learn lots of stuff about viruses that they never wanted to before? 

3) Single Cell Proteomics (SCP) is our field's moonshot! 

I think people not currently doing (or trying to do) SCP are probably tired of hearing about it, but the challenges of single cell, and the solutions, are clearly starting to trickle down to help everyone out. 

Rumor has it that the first instrument designed specifically for single cell analysis actually went to a lab that does HLA/MHC immunopeptidomics. Which makes a lot of sense because signal limitations are a huge problem with those awful peptides as well.

In addition, we're getting exposure through SCP to new things we didn't know about. Before this year did you know that robots have been around for decades that can move picoliters of solvent around rapidly and accurately? Heck, 18 months ago a regulatory body and I did 4 rounds of paperwork filings on the calibration of a robot that had the job of moving 200 microliters of methanol around and today my problems are failed calibrations of -- not nanoliters -- picoliters -- of acetonitrile, with a robot that was first released 10 years ago.

Other things are trickling down as well, like new surfactants that reduce peptide binding to plastics and glass and better data processing tools for digging deep in our data. We're also punching some holes in some old ideas that have hung around a bit too long from our roots in analytical chemistry. Things like -- maybe you don't need 100% coverage of every peptide every single time to call it an identification.

Also -- I think we're starting to see behind the genetics curtain and finding that they've got some amazing marketing department covering up a lot of problems, like the whole 90% missing value at the gene level thing. 

4) If we're really really careful, we can have conferences again...probably...

I have a 25% written ASMS 2021 wrapup here and it's all positive stuff about the conference. I've had trouble prioritizing it because my impression of ASMS in my head is mostly about how I'd largely forgotten how to interact with human beings in person. To be honest I've never once in my life walked away from a human interaction and thought to myself, "great job, Ben, it totally looked like you've held a conversation with another member of your species before! Have you been practicing?" But I was totally impressed with people I spoke to who seemed to be the same awesome people I hadn't seen in a couple of years. And for those of us who might have appeared to have lost a step, it's great that you were so cool about it. 

I guess the question that we'll need to look at going forward is -- do we need to? Or is the hybrid venue here to stay? There certainly seem to be environmental implications to think about. The first few hybrid meetings were kind of clumsy, but we're getting better at it and the technology is improving! 

5) You don't have to use an Orbitrap for proteomics!  

From around 2007 or so the Orbitrap just took over. Look at this distribution in datasets at ProteomeXchange ---


There are 18,518 datasets; the SCIEX TripleTOFs warrant a place on the chart with around 1,100 datasets and the Synapt has 335. All other instruments combined are lumped into one section that makes up less than 10% of the total. 

But check out two massive studies I rambled about earlier this year! (Left and right)


Sure, the Orbitraps are involved, but just as many runs were performed on other instruments. 

6) Probably even cooler? Check out that image above again and see how many times capillary LC was used!  The arguments against the dreaded nanoflow liquid chromatography are continuing to build! 

Or...38,000 runs and counting...? 


Are you doing single cell level, sample-limited work? If the answer is no, then I'm going to continue to say that you don't necessarily need nanoflow. Someone dropped a new nanoLC in 2021 that can do 20 nL/min and I sincerely wish anyone crazy enough to do that luck with it. Just doing the math in my head, at that flow rate it will take over 3 hours just to push a 4 microliter air bubble out of your lines. We'll keep running a lot of our stuff at 100 microliters per minute because about 95% of the time there is plenty of sample to get the same coverage. 
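The napkin math, for anyone who wants to check me (a sketch; a real bubble behaves worse than this because air compresses):

# Time to push a volume through a line at a given flow rate.
def minutes_to_clear(volume_nl, flow_nl_per_min):
    return volume_nl / flow_nl_per_min

bubble_nl = 4000  # a 4 microliter air bubble
print(minutes_to_clear(bubble_nl, 20))       # 200 min (~3.3 hours) at 20 nL/min
print(minutes_to_clear(bubble_nl, 100_000))  # ~2.4 seconds at 100 uL/min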

7) On a similar note -- other proteomics technologies are coming. Or are here? Fortunately, they seem too busy disagreeing with one another to cause too much of a problem. Slooooooowly the outside world is realizing that even at 4 million read copies of DNA you still won't get to a protein or metabolite abundance. The companies that have sold instruments for DNA and RNA sequencing have tons of firepower, money and way way way better marketing than we do. 

I'll give you an assignment. Ask any random scientist how many transcripts they think are quantified in a single cell using RNASeq. I bet you a beer at HUPO that they answer something like "all of them" or "10,000" or something. They won't believe you when you give them the real answer, which is "a couple hundred". 

I should put some data up to download! That's the answer, though. But they're doing thousands of single cells, and so their Venn diagram of unique reads from unique cells works its way up to thousands of transcripts when you've done 100,000 cells. Missing values? Oh...yeah, about 95% missing values at the full transcript level. This isn't me making a joke, this is reality.
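If you don't believe me, the check is two lines once you have a counts matrix. This is a toy sketch with simulated data (since I haven't put the real files up yet); it assumes a cells x genes numpy array and is tuned to look like real whole-transcript single cell sparsity.

import numpy as np

# Hypothetical cells x genes count matrix with sparsity in the ballpark
# of real whole-transcript single cell data.
rng = np.random.default_rng(0)
counts = rng.poisson(0.02, size=(1000, 20000))

zero_fraction = (counts == 0).mean()
genes_per_cell = (counts > 0).sum(axis=1).mean()
print(f"{zero_fraction:.0%} zeros; ~{genes_per_cell:.0f} genes detected per cell")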

Okay, so where does this ramble end? Not really sure, but maybe here. This is a lot of words. We've got the world's attention right now; let's see if we can handle the pressure and use protein-specific measurements to figure out biology and medicine! 



Friday, December 31, 2021

My favorite proteomic papers of 2021!


What a fucking year, y'all. This might be just one part of the wrapup that no one has ever asked for! Despite the fact that this was the least active the blog has been since 2012 I still feel the need to close it strong. I've probably read more papers this year than any year of my life, but I've had to focus on things like learning basic biology and figuring out how the hell a cell sorter works so I can better understand my data. 

Enough rambling (not really) but here are my favorite papers in proteomics of 2021, in no particular order. 

1) 38,000 runs and going strong. The mounting evidence that we're overusing the weakest link in proteomics -- the nanoflow HPLC. Do you have nanograms of protein or picograms of peptides? NanoLC is still critical there, but if you've got micrograms of protein, the improvements in mass spectrometers over the last decade have largely made the sensitivity gains from nanoflow HPLC redundant. 

2) RawBeans -- rapid, near-universal, deep insight into your instrument files and performance from a simple and handy little tool. You can get great insight into metabolomics files using the tool as well. 

3) Multiplexed DIA is real. Maybe the name is confusing, since it could also mean multiplexing your DIA windows; in this case I'm referring to multiplexing your samples with tags and running DIA so you get data from multiple samples simultaneously. You can use it with SILAC! Or with two cool new methods that use 3-plex tags: this one in ACS earlier in the year and this more recent preprint. And -- TMT?!? Why not?!?

4) Cue the groans, but I have used AlphaFold2 a couple of times in December. Does it sometimes output some whacky gibberish? Sure! But with color coding to indicate structural confidence it's pretty easy to rule out, and it beats having no structure at all! 

5) MONTE -- a method to get all the materials you could probably want from your cells. Some biological samples are literally priceless. This is the cleanest procedure I've ever seen to make sure that very little goes to waste.

6) GlycoRNAs -- I mostly like this paper because it shows just how much more there is to learn about biology with yet another class of critical new molecules. 

7) Q-MRM might be a bit polarizing, but I think we haven't scratched the surface of the potential this represents for updating the 60-year-old colorimetric assays used in the clinic today (or...ugh...radioactive immunoassays...) with inexpensive single quads. Hey! I just remembered that I was interviewed about this and I've never seen the article. I'll have to look for it. 

8) I was trying to keep this somewhat vendor neutral, but I do really like these two studies that are definitely not neutral, so I'll give them the same number:

8a) SureQuant-IsoMHC -- stupid levels of sensitivity, selectivity and accuracy in quan for MHC peptides. 

8b) AlphaTims -- makes digging through timsTOF data intuitive and nearly instantaneous. Data export comes off kind of whacky and I keep meaning to write the authors. Maybe I'll do that now! 

8c) Okay...well...three...this new mass spectrometer has so much that's novel about it that I'm going to feature it here. I'm also supremely impressed by how well this secret was kept. 

9) Inactivating coronaviruses


This is largely me just picking things in the little spare time I have while these files transfer. It was another big year for proteomics and this is just some of the great stuff y'all have done this year. Looking forward to reading a lot more of your great stuff in 2022!