Wednesday, February 11, 2026

Thermo has embraced Windows 11 for a bunch of its software!

If you are like me and you have a PC in your office that your IT security people don't know about (shhh!)  that you just copy your RAW files to in case you need to look at an actual spectrum, do I have great news for you! 

As of a few months ago Thermo started embracing Windows 11! Check out this list! 





Tuesday, February 10, 2026

A protein organ specificity atlas based on PROTEIN DATA!

 


Leaving this here so I don't lose it for another week because I am incapable of committing any of these author names to memory.


You'd think it would be easy to find something written in the last couple of years that was about tissue-specific proteomics, right? 

You would be very very very wrong. I have 9 tabs open on just the PC I'm standing in front of in my house (wait. why am I here? I have a meeting in Oakland in like 30 minutes...? TYPE FAST!)

In 8 of these papers, the authors used the GTEx RNA DATA to determine organ specificity. As you might remember from some of my earlier rantings, GTEx prediction of which proteins show up in which organs is better than flipping a coin. Just not by a lot. 

This extremely polite group went ahead and did organ-specific proteomics (in-gel digestion, 6 gel cuts, QE HF / QE HF-X) on their own (sorry, really truly moving fast, I can't reference the methods), and then constructively integrated the GTEx findings into their analysis. They go with something like "if 2 out of 3 atlases show a protein is organ specific," then they consider it specific and move on. I don't know the origin of the second atlas (probably an RNA atlas), but that's one reason I was super annoyed I misplaced this paper! Now I won't! 
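(In case it helps anyone, here's roughly what that consensus call looks like in code. This is my own toy sketch with made-up protein IDs and atlas names, not anything from the paper.)

```python
# Toy sketch (not the authors' code) of a "2 out of 3 atlases" consensus call.
# Each atlas maps a protein to the set of organs where it looks specific.
atlases = {
    "this_study_proteomics": {"P12345": {"liver"}, "Q67890": {"brain"}},
    "GTEx_RNA":              {"P12345": {"liver"}, "Q67890": {"kidney"}},
    "other_RNA_atlas":       {"P12345": {"liver", "kidney"}},
}

def consensus_specific(protein, organ, atlases, min_votes=2):
    """Call a protein organ-specific only if at least min_votes atlases agree."""
    votes = sum(organ in atlas.get(protein, set()) for atlas in atlases.values())
    return votes >= min_votes

print(consensus_specific("P12345", "liver", atlases))  # True  (3 of 3 atlases agree)
print(consensus_specific("Q67890", "brain", atlases))  # False (only 1 of 3)
```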

Monday, February 9, 2026

nanoPOTS + TMTpro 32-plex = >600 tiny single cell proteomes/day!

 


In an interesting recent trend, everyone seems to be emphasizing how small of a cell they can do single cell proteomics on. 

Do we have a new winner? No, Akos did single E. coli. Even if it was only like 25 proteins, that's clearly the winner for craziest tiny cell idea.

But this group did PBMCs! 


How much protein is in a PBMC? 

14 picograms! (The authors' math, not mine.) FOURTEEN? 

My group has recently struggled with some human immune cells... and from the TICs I was guessing we were starting with less than 50 picograms. FOURTEEN? Geez.

How'd they get there? 

nanoPOTS. Ouch. Okay, so something you have to build yourself, but something with incredibly, ridiculously low sample loss. 

Then TMTpro 32-plex. 

If they were able to recover all 14 pg, and let's just say they used 30 channels for actual single cells, then to the mass spec that looks like an MS1 signal of 14 x 30 = 420 picograms. Ouch. That's still not much at all....
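(Back-of-envelope version of that, so nobody has to trust my mental math:)

```python
# Back-of-envelope math from the paragraph above (my numbers, not a quote from the paper).
pg_per_cell = 14           # reported protein content of one PBMC
single_cell_channels = 30  # assumed number of channels holding actual cells

pooled_pg = pg_per_cell * single_cell_channels
print(f"Pooled single-cell material per injection: {pooled_pg} pg")  # 420 pg
```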

The instrument used was a FAIMS (2 voltages) Orbitrap Fusion III (Eclipse). Real-time search (ion trap matching) was used to determine which ions to analyze in the Orbitrap for MS/MS. Dual columns and emitters were also used here, and they did have to fabricate a bracket to make that work. 

For samples this low in concentration there was some painstaking optimization, in particular of the "carrier" or (here) "bridge" channel. Sometimes called "boost" or "oregano" because mass spectrometrists are still a bunch of cantankerous assholes who like to make up new terminology so that we seem as annoying and unapproachable as possible. I'm pretty sure it's just because we all hate research money and being taken seriously, and we'd get a lot more of both if we'd quit making up stupid new terminology. 


The bridge channel was kept very, very low. The highest tested appears to be 1 ng. So... about 1.4 ng total on column....

You have to dig for the HPLC stuff in the supplemental, but it's about an hour per run. I'm a little confused about how this version of the dual-column parallelization works. It is detailed, but I didn't take the time to draw it out; it looks like each sample is about 60-75 minutes. 60 minutes for 30 cells gives you 720 per day and 75 minutes for 30 cells gives you about 575. The authors report 660 cells/day, so it's somewhere in the right range! They squeeze extra signal out with a ridiculously tiny column: 50 µm internal diameter and 25 cm length. I think that runs at something like 100 nL/min to keep the HPLC from leaving craters where labs used to be. Real-time search with spectral libraries made from these samples does some heavy lifting here. And once the authors get it all optimized they run the system for about 3 days to report data on over 2,000 single cells. 
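(Again, my own napkin math in code form, not the authors' exact scheme, if you want to play with the numbers:)

```python
# Napkin math for the numbers above (my assumptions, not the authors' exact scheme).
def cells_per_day(minutes_per_run, cells_per_run=30):
    runs_per_day = 24 * 60 / minutes_per_run
    return runs_per_day * cells_per_run

print(cells_per_day(60))  # 720.0 cells/day
print(cells_per_day(75))  # 576.0 cells/day -> the reported 660/day sits in between

# Total peptide load per injection if the bridge channel is 1 ng:
bridge_ng = 1.0
total_on_column_ng = bridge_ng + (30 * 14) / 1000  # 30 cells x 14 pg each
print(f"~{total_on_column_ng:.2f} ng on column")   # ~1.42 ng
```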

Really truly impressive work and another great resource that demonstrates we could do high numbers of single cells if we really put the effort in! 



Thursday, February 5, 2026

Poor RNA to protein correlations are an artifact of poor proteomics data?

 


I was making slides for a class lecture and went down a long and winding rabbit hole on what we now know about the discrepancies between RNA and protein regulation. I landed on this one from 2022, and while it may seem like I'm rage baiting... I think it should go here anyway.


Despite the title of this blog post we aren't firmly blamed for all the errors. Some error does exist in the mRNA measurements, but it's pretty clear that the disagreement in protein measurements between different studies is something that is worth thinking about. 


Wednesday, February 4, 2026

Accelerometer correlations to the UK Biobank proteome data!

 


Wooo! Okay, if you're reading this in a first world country this won't apply all that much to you. In my country you can now spend $20k USD per year on private health insurance for yourself, and if you actually need it someone will be very financially motivated to deny you coverage for pre-existing conditions.

What if you could wear an accelerometer (I'm sure my wristwatch has one) and it could predict you might have a pile of different diseases? 

BOOM - pre-existing condition. 100% profit for the most profitable scam in my whole country! 


Science fiction? Or science fact? 

A bunch of people in the UK Biobank agreed to wear an accelerometer for a couple of weeks as part of their contribution! And this group re-mined those data against the Olink proteomics data and the clinical data they could access for these patients.

You accelerate poorly? Dramatic increase in a ton of different diseases! 

Moral of the story? 



Tuesday, February 3, 2026

Do you need DIA-NN QC? Do you also need retro visualization choices?


 

Okay, we all need more ways to look at the quality of our data, particularly before we send it out to collaborators who may do who-knows-what with it! 

Only one QC tool out there gives you retro visualization options! And it's this one!

https://dia-nn-qc.streamlit.app/

Load a DIA-NN report and choose '80s terminal, '90s webpage, or just the boring regular thing. Who says you can't have a creative background while you're making sure you've got the correct number of scans per peak? Not me! 
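(If you'd rather script a quick sanity check yourself before uploading anything anywhere, something like this works against a standard DIA-NN report.tsv. My own sketch, not the app's code, and I'm assuming the usual Run / Precursor.Id / Protein.Group / Q.Value columns.)

```python
# Quick sanity check on a DIA-NN report (my sketch, not the streamlit app's code).
# Assumes standard report.tsv column names: Run, Precursor.Id, Protein.Group, Q.Value.
import pandas as pd

report = pd.read_csv("report.tsv", sep="\t")

# Keep 1% FDR precursors and count identifications per run.
passing = report[report["Q.Value"] <= 0.01]
per_run = passing.groupby("Run").agg(
    precursors=("Precursor.Id", "nunique"),
    protein_groups=("Protein.Group", "nunique"),
)
print(per_run.sort_index())
```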

Monday, February 2, 2026

MuPPE - Serial enrichment of the phospho- and glycoproteome!

 


What a great month for method names already! Introducing the... 


...sequential single-pot digestion and then sequential enrichment of the phospho- and glycoproteome! 


I'm not entirely sure what all the advantages of the Muppet method are. The authors make it seem very streamlined, and I'm guessing that you can get away with less sample and less sample loss by keeping things in the tubes, but early on they have to spend a lot of time diluting urea down to functional levels. If you want a lot of the details on how this is performed you'll need to go to page 28 of the Supplemental Info PDF. There you will find that an Orbitrap Exploris 480 was used for all analyses, with DIA for the peptides and phosphopeptides and DDA for the glycopeptides. So it is still 3 different injections per sample. I am always happy to see something like this, in any paper, even if it's on Supplemental page 32. 


I also find this a little concerning


...in Jonathan Pevsner's book (which you can get on eBay for $12 in the first or second edition), he warns that smiling volcano plots can come from a lack of data points, excessive presence/absence, or over-normalization. Since I think they've got a solid pile of data here, it does make me concerned that the data has been over-normalized. Though... they used Byonic and specify that a rather small N-glycopeptide library was used, so it could be the other two. Smiling plots just make me nervous. When I have one I generally find out I did something silly upstream. 
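(If you want to check your own data for the smile, a bare-bones volcano plot is all it takes. Toy sketch with hypothetical file and column names:)

```python
# Bare-bones volcano plot to eyeball the "smile" (hypothetical file and column names).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("quant_results.csv")  # expects log2FC and p_value columns
plt.scatter(df["log2FC"], -np.log10(df["p_value"]), s=4, alpha=0.5)
plt.xlabel("log2 fold change")
plt.ylabel("-log10 p-value")
# A healthy plot is a V with a dense base near zero; a U-shaped "smile" where
# significance climbs with fold change on both sides is the warning sign.
plt.show()
```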

Otherwise this seems like an interesting method, particularly if you're not routinely doing phosphoproteomics or glycoproteomics and suddenly have to. I don't see any reason why you couldn't digest with a more traditional approach and then put those peptides into this workflow around step 2 or so. 

Sunday, February 1, 2026

Target PTMs in single cells with ShtMtPro!

 


YES! Okay, so this may finally be the smart solution to something we tried (and probably just about everyone else did, too) with the SCoPE-MS/SCoPE2 workflows.

If you have a "carrier" "boost" "basil" or "oregonO" channel, why couldn't you load that thing with phospho-enriched samples (for example) instead of 200 cells or a a diluted perfectly digested pooled sample? The reason appears to be that your coisolated peptide (or junk) background ends up leading to a preposterous number of false discoveries. Remember that in these workflows your complete and total evidence for that peptide being there in single cells is just your single reporter ion. Since most PTM modified peptides are already in a suppressed region of signal to noise - and you only get one measurement of that phosphopeptide - you're already in trouble. (Wait. Is that too many dashes? Don't need y'all thinking some AI wrote this thing. Meh, I'll fix that in a minute). Throw in the contamination of your reporter ion signal with the isotopic impurities and now you've got tons of phosphopeptides and they may not really make sense at all. 

Ready to fix that? I sure am! Except...I don't have this hardware.... hmmm.... okay, but let's do it anyway! Introducing 2026's early entry for best method name......

ShtMtPro!


It's SureQuant with 

Super Heavy Tandem! (Sht) Mass Tags! (Mt) Professional (Pro) version! OMG. 

(Mandatory)

Okay, so the AMAZING name should not, in any way, distract you from how good these data are. Compare the number of PTMs you can pick up using this workflow vs DDA? ShtMtPro crushes it. Even vs PRM, ShtMtPro squeezes out a narrow victory! 

Intelligent - on the fly - targeting of chemically modified peptides IN SINGLE CELLS? Multiplexing so it's super fast?? Incredible idea that I bet no one tried at all to talk another vendor into for 3 straight years. If you are thinking something dumb like "I can't do single cell proteomics, I just have this old Tribrid..." this is the second paper on this blog this week that should put you on the right track. If, however, someone offered you $75 and a pack of Big Red for that old Tribrid, I would happily give you twice that for it!

Edit: Okay, apparently they used an Exploris, which I was not aware could do the SureQuant thing. I thought it was a Tribrid exclusive workflow. Good news! There are a bunch of Explorises around! 

Saturday, January 31, 2026

ADAPT-MS - A starting point for automatically classifying clinical (untargeted) proteomics data!


This one took me a couple of rounds of putting it down and coming back to it later.

It's a smart concept and a very nice thing to think about as proteomics becomes more trusted as a diagnostic. 


I think I first thought it was something that it isn't, and that's why I had such a conceptual problem with it. Obviously, I might still have it wrong, but this is how I'd describe it. 

What if you had a random patient come in and you could do global untargeted plasma proteomics on their sample? Not inside of a controlled cohort that you planned 2 years ago and pulled all the samples from the repository for? Just that one sample that just came in. That's how clinical stuff might work. A sleepy 22-year-old might be working nights to save for grad or med school, studying between runs, and running those 12 samples that came into the lab at 3 am (typically because it's super important). Could you do anything with global data? 

If the answer is no, then the future is not very bright for diagnostic untargeted proteomics. If the answer is shmaybe, then you're getting somewhere, and if it's a yes, then let's start building on this idea right this second.

To simulate it they pulled some traditional proteomic studies where they had a discovery cohort and then a validation cohort and someone did it all the traditional way: found the markers in batch 1 and focused on how well those markers seemed to be predictive in batch 2. So these authors loaded those data, pretended they didn't know what went where, and used the machine learning things to try and sort it out - and it totally ends up doing okay! 
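(In spirit, and absolutely not their actual code, that setup looks something like: fit a classifier on the discovery cohort's protein matrix, then score it on the validation cohort. Random numbers stand in for real data here.)

```python
# Spirit-of-the-idea sketch (not the ADAPT-MS code): fit on the discovery cohort,
# score on the held-out validation cohort. X_* are samples x proteins; random
# numbers stand in for real quant matrices here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_disc, y_disc = rng.normal(size=(120, 500)), rng.integers(0, 2, 120)
X_valid, y_valid = rng.normal(size=(80, 500)), rng.integers(0, 2, 80)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_disc, y_disc)
auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"Validation AUC: {auc:.2f}")  # ~0.5 on random data; real data is the interesting part
```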

We've got ourselves a shmaybe here! 

I appreciate the transparency of the authors; the conclusions almost read like a "limitations" section. The rest of the paper reads like someone was sending a secret code to Olga Vitek that only she would be able to decipher. If that was really what this was, Nature page fees may be the absolute most expensive way to do it....

Here is the thing, though: it didn't outperform the traditional human approach when the experiment is done really well (the example data they used are superb, probably outliers), but it did reasonably well, and that's still a huge deal. 

And everything to reproduce it yourself is reasonably well annotated in these notebooks.

Friday, January 30, 2026

Multi-technology analysis of human liver diseases!

 

I'm tired of reading today, but I really want to get back to this cool paper.


Really deep, multi-technology proteomic analysis looking for markers of why almost everyone has some liver inflammation, but for some people it's a really bad thing that progresses to worse things.

You've got secretion proteomics, and neat plasma and depleted plasma, and some SomaScan data from a related study that they used and normalized but don't go into much. They describe the statistics and provide the output data as an Excel spreadsheet, which I very, very much appreciate. Really nice, super-high-speed targeted work (5 minute gradients on a SCIEX) and just a whole pile of really cool stuff to dig through!