Thursday, February 19, 2026

Top down proteomics takes on liver fibrosis!

 


Wow, there are a lot of liver diseases, and we've got decent diagnostics for....well...liver damage.... that's about it. A small panel of liver damage markers was finalized around the time Stan Lee and Jack Kirby went on a creative bender and wrote everything from the Fantastic Four through Spider-Man and the X-Men comics. I'm not kidding, there really has been almost zero forward movement in liver diagnostics since the 1960s. The liver protein panel was old when I was running it in the clinic in 2003. And it's still the same thing. 

Could top down proteomics be the answer? 


Given the current limits of top-down it sounds unlikely, but you'll find what appear to be some pretty clear differentials in these small intact proteins (they seem to get up to 38 kDa) in this study.

Is it the simplest way forward to getting some modern liver diagnostics out there? Maybe? But for a field that seems to have completely hit a wall 60 f'ing years ago, it's time to try everything! 

Wednesday, February 18, 2026

Arralyze CellShepherd - Live cell imaging in tandem with single cell proteomics!! Webinar 2/25!



Timing on this one is a little unfortunate, because everyone has schedules, but we're getting this science fiction sounding new toy and I'm happy to invite you to this webinar next week! 

Okay, so what if you could do live cell imaging and - not messing around - dose your cells with a drug and then use machine learning tools to go and pick up the cells that meet your criteria? For example, what if I was growing a population of cells - on the instrument(!!) - in the presence of a KRAS inhibitor, and then when that annoying subpopulation of cells I can't seem to catch enough of with random sampling started to demonstrate the EGFR linked adaptation phenotype that no one can seem to figure out - the robot picked up that cell, prepped it for proteomics, and then moved on to the next one?  

Science Fiction sounding, right?

I won't have data to show from this at US HUPO. Monday I'm showing the single cell workflows that you can get in the amazing Health Sciences Mass Spectrometry core, and Wednesday I'll show Cameron and Shelby's work with single cells taken right out of human surgery. I'm biased, but that shit is Musa acuminata.

I'll post a link to the webinar here on the day of the seminar. We had a weird thing where some bored weirdo showed up at one of my writing group meetings and just yelled random bad words. 

You can learn more about Arralyze here. There isn't a lot of proteomics data there yet. Mostly it's them just showing off how they can observe and mess around with one cell at a time. 

Monday, February 16, 2026

RIPUP histones to rapidly find a new pile of post-translational modifications!

 


This new preprint is so legit. Not only does it identify a pile of histone post-translational modifications I didn't know about, but it does it fast AND it justifies the chemistry that makes it happen. 


What if the reason that some of these PTMs aren't visible is that the modification neutralizes that peptide's ability to pick up a charge? Makes sense. A lot of the more awful PTMs do. 

But what if you could make them visible by adding a boring ol' tandem mass tag? Bonus points for the introduction of an enzyme I didn't know about with an amazing name:

Stop - don't read the next line until someone is around, and then read it really loud! 

"HEY! Someone order me some ULTRA ARRRRRRGGGG-C!!!!!"

Wednesday, February 11, 2026

Thermo has embraced Windows 11 on a bunch of software!

If you are like me and you have a PC in your office that your IT security people don't know about (shhh!)  that you just copy your RAW files to in case you need to look at an actual spectrum, do I have great news for you! 

As of a few months ago Thermo started embracing Windows 11! Check out this list! 





Tuesday, February 10, 2026

A protein organ specificity atlas based on PROTEIN DATA!

 


Leaving this here so I don't lose it for another week because I am incapable of committing any of these author names to memory.


You'd think it would be easy to find something written in the last couple of years that was about tissue-specific proteomics, right? 

You would be very very very wrong. I have 9 tabs open on just the PC I'm standing in front of in my house (wait. why am I here? I have a meeting in Oakland in like 30 minutes...? TYPE FAST!)

In 8 of these papers, the authors used the GTEx RNA DATA to determine organ specificity. As you might remember from some of my earlier rantings, GTEx prediction of proteins in organs is better than flipping a coin. Just not by a lot. 

This extremely polite group integrated the GTEx data, but went ahead and did organ specific proteomics (in-gel digestion, 6 cuts, QE HF / QE HF-X) on their own. (Sorry, really truly moving fast, I can't reference the methods.) Then they were extremely polite and constructive and integrated the GTEx findings into their analysis. They go with something like "if 2 out of 3 atlases show a protein is organ specific" then they consider it specific and move on. I don't know the origin of the second (probably an RNA atlas), but that's one reason I was super annoyed I misplaced this paper! Now I won't lose it again! 
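If you like the voting rule spelled out, here's a toy sketch of that kind of 2-of-3 call. The atlas names and the helper function are hypothetical - this is not the paper's actual pipeline, just the logic as I read it:

```python
# Toy majority-vote rule for organ specificity across atlases.
# Atlas names below are made up for illustration.
def is_organ_specific(atlas_calls: dict, threshold: int = 2) -> bool:
    """Call a protein organ specific if at least `threshold` atlases agree."""
    return sum(bool(v) for v in atlas_calls.values()) >= threshold

calls = {"GTEx_RNA": True, "other_RNA_atlas": False, "this_proteome_atlas": True}
print(is_organ_specific(calls))  # 2 of 3 agree -> True
```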

Monday, February 9, 2026

nanoPOTS + TMTpro 32plex: >600 tiny single cell proteomes/day!

 


In an interesting recent trend, everyone seems to be emphasizing how small a cell they can do single cell proteomics on. 

Do we have a new winner? No, Akos did a single E. coli. Even if it was only like 25 proteins, that's clearly the winner for craziest tiny cell idea.

But this group did PBMCs! 


How much protein is in a PBMC? 

14 picograms! (These authors' math, not mine). FOURTEEN? 

My group has recently struggled with some human immune cells... and from the TICs I was guessing we were starting with less than 50 picograms. FOURTEEN? Geez.

How'd they get there? 

nanoPOTS. Ouch. Okay, so something you have to build yourself, but something with incredibly, ridiculously low sample loss. 

Then TMTpro 32plex. 

If they were able to recover all 14 pg, and let's just say that they used 30 channels for actual single cells, then to the mass spec that looks like an MS1 signal of 14 x 30 = 420 picograms. Ouch. That's still not much at all....

The instrument used was a FAIMS (2 voltage) Orbitrap Fusion III (Eclipse). Real time search (ion trap matching) was used to determine which ions to analyze in the Orbitrap for MS/MS. Dual columns and emitters were also used here, and they did have to fabricate a bracket to make that work. 

For samples this low in concentration there was some painstaking optimization, in particular of the "carrier" or (here) "bridge" channel. Sometimes called "boost" or "orgeano" because mass spectrometrists are still a bunch of cantankerous assholes who like to make up new terminology so that we seem as annoying and unapproachable as possible. I'm pretty sure it's just because we all hate research money and being taken seriously and we'd get a lot more of both if we'd quit making up stupid new terminology. 


The bridge channel was kept very, very low. The highest tested appears to be 1 ng. So...1.4 ng on column....
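For what it's worth, the back-of-envelope load math works out like this. This is my arithmetic from the numbers above, not the authors' code, and the 30-channel assumption is mine:

```python
# Rough MS1 load for a 32plex single cell run.
pg_per_cell = 14           # protein per PBMC, per the authors' math
single_cell_channels = 30  # assuming 30 of the 32 channels hold real cells
bridge_pg = 1000           # highest bridge channel tested, ~1 ng

single_cell_pg = pg_per_cell * single_cell_channels
total_ng = (single_cell_pg + bridge_pg) / 1000
print(single_cell_pg)  # 420 pg of labeled single cell peptide
print(total_ng)        # ~1.42 ng total on column
```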

You have to dig for the HPLC details in the supplemental, but the gradient is about an hour. I'm a little confused about how this version of the dual column parallelization works. It is detailed, but I didn't take the time to draw it out; it looks like each sample is about 60-75 minutes. 60 minutes for 30 cells gives you 720 per day, and 75 minutes for 30 cells gives you about 575. The authors report 660 cells/day, so it's somewhere in the right range!

They squeeze extra signal out with a ridiculously tiny column: 50 µm internal diameter and 25 cm length. I think this is ≤100 nL/min to keep the HPLC from leaving craters where labs used to be. Real time search with spectral libraries made from these samples does some heavy lifting here. And once the authors get it all optimized they run the system for about 3 days to report data on over 2,000 single cells. 
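The throughput numbers check out with quick arithmetic (mine, not from the paper's methods section):

```python
# Cells/day as a function of run time, assuming ~30 single cells per TMT set.
minutes_per_day = 24 * 60
cells_per_set = 30

for run_minutes in (60, 75):
    sets_per_day = minutes_per_day / run_minutes
    print(run_minutes, round(sets_per_day * cells_per_set))  # 60 -> 720, 75 -> 576
```

The reported 660 cells/day lands right between those two bounds.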

Really truly impressive work and another great resource that demonstrates we could do high numbers of single cells if we really put the effort in! 



Thursday, February 5, 2026

Poor RNA to protein correlations are an artifact of poor proteomics data?

 


I was making slides for a class lecture and went down a long and windy rabbit hole on what we now know about the discrepancies between RNA and protein regulation. I landed on this one from 2022, and while it may seem like I'm rage baiting....I think it should go here anyway..


Despite the title of this blog post we aren't firmly blamed for all the errors. Some error does exist in the mRNA measurements, but it's pretty clear that the disagreement in protein measurements between different studies is something that is worth thinking about. 


Wednesday, February 4, 2026

Accelerometer correlations to the UK Biobank proteome data!

 


Wooo! Okay, if you're reading this in a first world country this won't apply all that much to you. In my country you can now spend $20k USD per year on private health insurance for yourself, and if you actually need it, someone will be very financially motivated to deny you coverage for pre-existing conditions.

What if you could wear an accelerometer (I'm sure my wristwatch has one) and it could predict you might have a pile of different diseases? 

BOOM - pre-existing condition. 100% profit for the most profitable scam in my whole country! 


Science fiction? Or science fact? 

A bunch of people in the UK Biobank agreed to wear an accelerometer for a couple of weeks as part of their contribution! And this group re-mined those data against the Olink proteomics data and the clinical data they could access from these patients.

You accelerate poorly? Dramatic increase in a ton of different diseases! 

Moral of the story? 



Tuesday, February 3, 2026

Do you need DIA-NN QC? Do you also need retro visualization choices?


 

Okay, we all need more ways to look at the quality of our data, particularly before we send it out to collaborators who may do who-knows-what with it! 

Only one QC tool out there gives you retro visualization options! And it's this one!

https://dia-nn-qc.streamlit.app/

Load a DIA-NN file and choose 80s terminal or 90s webpage or just the boring regular thing. Who says you can't have a creative background while you're making sure you've got the correct number of scans/peak? Not me! 

Monday, February 2, 2026

MuPPE - Serial enrichment of the phospho- and glycoproteome!

 


What a great month for method names already! Introducing the... 


...sequential single pot digestion and then sequential enrichment of the phospho- and glyco- proteome! 


I'm not entirely sure what all the advantages are of the Muppet method. The authors make it seem very streamlined, and I'm guessing that you can get away with less sample and less sample loss by keeping things in the tubes, but early on they have to spend a lot of time diluting urea down to functional levels. If you want a lot of the details on how this is performed you'll need to go to page 28 in the Supplemental Info PDF. There you will find that an Orbitrap 480 was used for all analysis, with DIA for the peptides and phosphopeptides and DDA for the glycopeptides. So it is still 3 different injections per sample. I am always happy to see something like this in any paper, even if it's on Supplemental page 32. 


I also find this a little concerning


...in Jonathan Pevsner's book (which you can get on eBay for $12 in first or second edition), he warns that smiling volcano plots can be either a lack of data points, excessive presence/absence, or over-normalization. Since I think they've got a solid pile of data here, it does make me concerned that the data has been over-normalized. Though...they used Byonic and specify that a rather small N-glycopeptide library was used, so it could be the other two. Smiling plots just make me nervous. When I have one I generally find out I did something silly upstream. 

Otherwise this seems like an interesting method, particularly if you're not always doing phosphoproteomics or glycoproteomics but sometimes have to. I don't see any reason why you couldn't digest the peptides with a more traditional approach and then put those peptides into this workflow around step 2 or so.