Friday, January 17, 2025

Second crazy high depth single cell paper dropped same day from another lab!

 

The preprint of this paper is on the blog here somewhere I think, but this paper has progressed quite a bit since the initial posting. 


I'm pretty sure the preprint used just the standard "Whisper" 40 and 80 SPD methods. It's been a while, but that's my recollection. Those methods are somewhere in the 100 nL/min elution range, though if you watch the EvoSep monitor closely it sort of goes to 50 nL/min, maybe just for equilibration purposes. This paper appears to feature the new EvoSep WhisperZoom methods, which run at 200 nL/min, or what you might consider ...the default method on your instrument for nanoflow liquid chromatography... 

The authors also decided to add a completely unnecessary new name to their method, because the one thing we need while mass spectrometry based proteomics is rapidly being displaced by "next gen" technologies is more confusing nomenclature. What's Chip-Tip? It's using a CellenOne to load a 96 well plate that can be loaded into EvoTips. It's a vendor provided protocol that you can purchase right now and name whatever you want. If you put down something like $1.7M for this whole workflow - and that's if your vendors really really like you - you can call it whatever silly thing you want to. Snarky annoyance aside, this is both interesting and useful information --


In this iteration, which I do not have the time to read fully (disclaimer - I'm skimming and running an excitingly high fever), they do employ the WhisperZoom methods, which now go up to 120 SPD. That's one single cell every 12 minutes! On a standard commercial system! I've got some of those small Whisper columns here, and I'm taking one apart when the emitter inevitably clogs to figure out how much C-18 is actually in it, but it's probably like 5cm. 
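If you want to sanity check that throughput math yourself, it's just this (a trivial sketch in Python, nothing Evosep-specific):

# Samples-per-day (SPD) to the total time budget per sample.
MINUTES_PER_DAY = 24 * 60

def minutes_per_sample(spd: int) -> float:
    """Minutes of cycle time (loading + gradient + wash) available per sample."""
    return MINUTES_PER_DAY / spd

for spd in (40, 80, 120):
    print(f"{spd} SPD -> {minutes_per_sample(spd):.0f} min per sample")
# 40 SPD -> 36 min, 80 SPD -> 18 min, 120 SPD -> 12 min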

How's the chromatography on a 5cm column on an Orbitrap based system running 4amu Asstral windows? Here are some of my favorite examples from the nifty new DIA-NN viewer! 


Sweet! You don't get much more Gaussian than a literal f'ing triangle! 

Here's a rhombus! Is that a rhombus? Maybe it's the fever, but I initially thought Wesley Snipes would carry something with an end shaped like this when vampire hunting. 



There are good ones here, though. I was just jumping through one file after thinking - wow - that seems like a slow method for a really really fast chromatography gradient. Here's a really good one. 


For real - astounding protein coverage for a single actual cell! 

I would like to point out that there are other Asstral single cell preprints out there - the Schoof lab has some amazing stuff on biorxiv coming soon, I'm sure. They utilize more traditional DIA isolation schemes to get amazing chromatography out of the instrument. The Asstral's ability to accumulate DIA data with tiny windows is super super cool - but even at those speeds, it isn't a short cycle time and might not match well with super short chromatography! 
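To put rough numbers on that cycle time vs. chromatography mismatch, here's a back-of-the-envelope sketch in Python (the peak widths and cycle times below are made-up placeholders for illustration, not values from either paper):

# How many MS2 sampling points you get across a chromatographic peak.
def points_across_peak(peak_width_s: float, cycle_time_s: float) -> float:
    """Approximate number of times one DIA cycle samples a peak of a given base width."""
    return peak_width_s / cycle_time_s

# Placeholder numbers only: narrow peaks from very short gradients vs. a DIA
# cycle built from lots of tiny isolation windows.
for peak_width in (3.0, 6.0, 12.0):        # peak base width, seconds
    for cycle_time in (1.0, 2.0, 3.0):     # full DIA cycle time, seconds
        n = points_across_peak(peak_width, cycle_time)
        print(f"{peak_width:>4.0f} s peak, {cycle_time:.0f} s cycle -> ~{n:.0f} points")
# Anything under roughly 6-8 points across the peak is where triangles and rhombuses come from.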

Thursday, January 16, 2025

Up to 5,300 proteins per actual single cell!?!?

 


Holy shit. I'm way too busy today to spend the time on it that it deserves, but I'll set some files to download until I do...


Up front - I'm biased. IMP is one of my favorite places in the world and this study features both friends and a personal hero or two, but this is a sick paper.

I do very much appreciate how clear the authors are here about cell size, etc. They don't average 5,300 proteins per A549 cell, but they do get that on the largest ones with the highest relative protein content. 

There is a lot to unpack here. They start with peptide dilutions, but then move to a multi-proteome mixture to make sure that what they're seeing at low loads is relevant quantitatively. Throughout, they are transparent about carryover and other factors that mean they are seeing proteins in their "blanks". 
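If you've never run one of those multi-proteome mixture checks, the logic boils down to something like this (a minimal two-species sketch with made-up numbers in Python - not the authors' actual design or data):

# Spike species at known ratios, then check the measured fold change per species.
import statistics

expected_log2 = {"human": 0.0, "yeast": 1.0}   # e.g. human 1:1, yeast 2:1 (made up)

# Hypothetical measured log2 ratios (condition A vs. B) for a few proteins per species.
measured = {
    "human": [0.05, -0.10, 0.02, 0.08],
    "yeast": [0.90, 1.10, 0.80, 1.05],
}

for species, ratios in measured.items():
    observed = statistics.median(ratios)
    print(f"{species}: expected {expected_log2[species]:.1f} log2, observed median {observed:.2f} log2")
# If the observed medians drift off the expected values as you drop the load,
# the quantitation down there isn't as trustworthy as the ID numbers suggest.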

Y'all, it wasn't that long ago that I wasn't getting 5,300 protein groups in single shot proteomics on A549 cell lysates, regardless of how many cells I started with. The fact that a big A549 cell could give you anywhere near those numbers is just absurd. 

Wednesday, January 15, 2025

Thermo scores 600,000 sample proteomics study! With O-Linking!

 


600,000 freaking samples? FOR PROTEOMICS! 

The goal is a depth of 5,400 proteins/sample. 

Quick reminder that when you're using this platform you are PROBING for 5,400 proteins per sample. That doesn't mean you detect them all. Obviously, they might just not be there. Or they might be below your linear dynamic range. But you tried to measure 5,400. 

I'm unclear from the press release what these samples are. All plasma? In which case - 5,400 is going to change the game entirely. 

The human blood atlas (thanks Ben, for the link to this updated page) only lists 4,200 proteins detected by mass spectrometry! 



Tuesday, January 14, 2025

What's an IVD or an LDT and why are new rules a big deal for clinical proteomics?

 


I'm behind the ball on this - it's been a busy year - but Adam's write-up on it is a great place to start.

Some background would probably be helpful - 

An IVD is an in vitro diagnostic assay 

LDT is a laboratory developed test

Sounds the same? Basically is, but how they're regulated has been - until now - very different. 

I only know about the mass spec part, so here is my incomplete and quite likely inaccurate interpretation of how this works - 

Hospital clinics generally have MD/PhDs hanging around helping patients, doing surgeries, training people and doing research in-between getting paged to go to another patient room or surgery.

The outcome of some of their research is that they'll find - in samples they've collected from their own patients - that small molecule A or protein B can help them predict a patient's recovery (or lack thereof). 

They can then go through a process to validate that assay and after getting approval from a whole bunch of regulatory bodies (here I'm hazy, probably ASCP and CLIA, and I bet FDA at least wants to do a solid pile of paperwork).

Generally, though, once that ASSAY is developed, you just need to lock down the method and run it on an FDA approved medical device mass spectrometer. Each vendor has 1 or 2 of those. The vendors go through huge expense to get these instruments approved as medical devices. That's on them, and probably a decent pile of those expenses comes to you, the assay developer, when you purchase the instrument, but the time and energy necessary to validate the whole package doesn't fall on you. 

Boom - you've got an LDT.

An IVD is a whole lot harder. Imagine you have to go through all of this stuff, but then you also have a full regulatory investigation and approval process for each and every assay. In this case, all of the expenses for developing the whole thing fall directly on you - the assay inventor. However, the IVD should now be something that other people can purchase as a complete package. 

The number of labs, people, institutes and - heck - commercial organizations that can foot the bill for an LDT is way way larger than the number that can develop an IVD. Then you've got to think about motivation. Of the organizations with the resources to develop an IVD - how many have the motivation? It can be a long, expensive process, and there are probably shareholders involved who are going to be unhappy if that assay doesn't get approved in Q3. An FDA approved device that can be used for LDTs is going to be a safer relative investment than a whole new IVD. 

Now - that's the background. Adam's article can fill in the rest. I don't know if those rules are still just proposed or already through, or when they kick in (or whether they already have), but it could be a big thing for US clinical labs. 

Monday, January 13, 2025

Quantum-SI-Pt-Pro - leading the proteomics revolution 3 proteins at a time!

 


I honestly don't know why I still have GoogleNews on my phone; it has descended into AI-written gibberish with worse grammar and coherence than this awful blog you're visiting. However, these over-the-top science business stories do provide me with thinking points.

If you aren't familiar with Quantum-SI, it is a little benchtop box that can sequence a couple of proteins really well by degrading them.

If you immediately thought...wait...like the Edman Degradation thing we used to use before mass spectrometry replaced it in about 94% of labs? 

No! It isn't just an expensive Edman sequencer - it says so right on the front page of the website here.

No complex and expensive cyclical chemistry here! Just a more expensive instrument and more expensive proprietary reagents. 100% different. 

Wait, why did mass spectrometry replace Edman anyway? Was it because in 2005 it wasn't that tough to do 500 proteins/hour but with lower relative sequence coverage? Yes. That was why. 

What makes Quantum-Si the solution that is powering the proteomics revolution? Same link as above.


Got it. Are the $ signs decimals? If so, and mass specs are $$$$$ and the best mass specs on the planet right now have GSA pricing around $1.0M, then the Quantum-SI is $10? I don't think that is true - I think I was told $100k-ish for a Platinum. Maybe it's base 4? Whatever. 

What you can't argue with is that analysis in mass spectrometry is complex. Absolutely, the worst idea is to invest in people who can do those analyses. The best idea is to get something simple. 

Look, I have exactly zero issue with people getting benchtop sequencers in their labs that help them sequence their digested proteins slowly. And I'm jealous that they could make the software simple enough that people are gobbling these things up. The more people who walk away from higher and higher throughput nucleotide sequencing as the way to solve all of their problems - when it is probably only the way to solve a lot of them - the better.

As a proteomics blog devoted to creating mass spectrometry memes and pointing out that you should be able to get a fucking degree in proteomics in 2024, I'm clearly biased against any big news article about the next big revolution in proteomics. When it's a technology that I or a whole bunch of my friends would absolutely destroy in a 1-on-1 head-to-head with an old Orbitrap, that's a whole 'nother level. But...then...it makes me think of who I'd put money on, and I remember that a not insignificant percentage of them are ...dead....or worse, in leadership roles where they don't run their own instruments - and then this thing ...clearly has a valuable niche where it can, and probably will, effectively contribute. 

Still a good idea to make fun of it, though. 

Friday, January 10, 2025

Boringest study of 2025 so far! 9,000 harmonized QC samples run by a bunch of labs over 4 years!

 


In case you're new here or my sarcasm doesn't translate well - this study is amazingly fantastic and probably very very very very boring.


Quick summary? These nerds did the stuff that everyone hates to do and they came up with a QC standard that they could run at every site. FOR YEARS. Actually FOR FOUR (4, vier, quattro, cuatro, fier, quatro [cause they like t's less in Portugal than in Italy]) YEARS! 

Why is this super fantastic? I mean, besides the 9,000+ QC files? 

1) The mass spec instrument methods are completely harmonized (settings are the same for each piece of hardware regardless of which lab it is in). 

As an aside - if you've got an Orbitrap instrument and you're writing your method section and you are wondering what parameters to include in that section - DO IT THIS WAY. PLEASE. Everything that is actually important for me to go to an Exploris 480 and replicate your experiment is right there. 

2) They didn't standardize the HPLCs or settings! Look, I ain't going to use a Waters HPLC unless you give me one for free. I have close friends who are otherwise very informed and rational who will always use those HPLCs. (They do seem very nice except for that whole "they will never ever pick up all the sample in that well no matter what" thing, and they seemingly last forever in the hands of people who know them. I just don't know them and I have my favorites already.) This is real world stuff. 

3) They don't hide the results or the variation! No joke, between two core facilities using the same mass spec, one lab gets almost 2x the peptides of the other. It's totally worth looking at why this is the case. And don't get me wrong, it isn't because one lab is better than the other or one got a bum instrument - they're doing it differently and delivering a different end product. Super interesting (yes, and very very boring). 
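If you've got a pile of harmonized QC runs like this, the cross-site comparison boils down to something like this sketch (hypothetical column names and numbers, not the authors' pipeline):

# Compare harmonized QC runs across sites - hypothetical data for illustration.
import pandas as pd

# Assume one row per QC run, with the site and the number of peptides identified.
qc = pd.DataFrame({
    "site":     ["A", "A", "A", "B", "B", "B"],
    "peptides": [24000, 25500, 23800, 44000, 46500, 45200],
})

summary = qc.groupby("site")["peptides"].agg(["median", "std", "count"])
summary["cv_percent"] = 100 * summary["std"] / summary["median"]
print(summary)
# Site B reporting ~2x the peptides of site A isn't automatically "better" -
# it usually means a different end product (search settings, LC, FDR handling, etc.).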

Ben Neely and I did a podcast (actually, out today? maybe) where we walked through our favorite papers of 2024 and he pointed out that 2024 was a huge year for good valuable QC/QA proteomics stuff. 2025 ain't off to a bad start either! 



Thursday, January 9, 2025

How does population level targeted proteomics do in predicting cardiovascular events?

 


Unless you've been living under a rock where you don't hear much about proteomics, you probably know that a bunch of people did plasma proteomics on 53,000 samples with proximity extension assays (O-link panels, not the little 96 target ones, the big ones that require super high end RNA Sequencers). 

Here is one of the original papers (October 2023). There have been some really positive statements about how the O-link data correlates well in some regards with GWAS level findings. 

Now this data is basically out there for people who want to explore it. You can find it here, but keep this in mind.


Excluding GWAS associations, how does it currently look in actual practice? There are going to be tons of these out there, but this one is really fun.

Realistically, I like this because it's short and it is focused on current clinical markers of cardiac stress, which were old when I was running colorimetric assays in the clinics from 2003-2007. They're still in use because they're pretty darned good. 

This group took the O-link data and did some very fancy sounding machine learning stuff I couldn't possibly explain or replicate, and had the algorithm predict whether someone would have a major cardiac event. 

And this is what is SO COOL about a dataset this size. They actually have samples from people over time, so they can take the patients they have proteomics on from earlier time points and check what actually happened to them later. Then you can see if the smart machine was right or wrong. And you have the original clinical data from regular old comprehensive metabolic or cardiac specific panels for both. 
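Conceptually the evaluation is just a time split: train on proteomes collected before anything happened, then score predictions against the events that actually occurred later. Here's a toy sketch of that idea (simulated data and hypothetical features - nothing to do with the authors' actual model):

# Toy temporal-split evaluation: baseline protein levels predicting a later event.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_proteins = 500, 20
X = rng.normal(size=(n_patients, n_proteins))                 # baseline protein levels (simulated)
y = (X[:, 0] + rng.normal(size=n_patients) > 1).astype(int)   # later cardiac event (simulated)

# Split by "enrollment time" rather than randomly - here simply first half vs. second half.
X_train, X_test = X[:250], X[250:]
y_train, y_test = y[:250], y[250:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
# The interesting comparison is this number against a model built only on the
# routine clinical markers (troponins, etc.) that were already in the chart.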

How'd it do? The authors words are better than mine. 

Don't get me wrong, this resource is awesome. And maybe a different machine learning tool would do better. And if our cardiac markers in the clinic weren't as good as they are (for identifying cardiac events in men, that is - they aren't nearly as adequate in women, but that's a completely different topic for more qualified people), they wouldn't still be in use. 

It is, however, interesting that when we apply these data to something concrete, it isn't quite the world shaking advance that we might have hoped from the genome wide association studies. 

Tuesday, January 7, 2025

SpectroNaut 19 has super smart new PTM functionality!

 


Happy New Year, y'all! Wow, I'm busy with this move, it's 1/7/25?? Typing fast! 

Okay, so a while back I did a funny thing where I put up a survey on LinkedIn and also on some other social media platform and got two very different replies from the respective communities. 

My exaggerated summary is: 

Industry people seem to ALWAYS normalize the amount of PTM (for example, phospho) versus the total amount of protein present.

Responses from academics were more like - 


If you're in that latter category, here is the thing from my perspective - you never ever see a western blot for the MAPK pathways where someone just blots against phospho-STAT or phospho-ERK; you always see this sort of thing (taken from this paper).

They do this because, presumably, the difference between having 10x more ERK protein at 1% phosphorylation site occupancy and having no change in ERK protein but a 10% increase in phospho-ERK means something important. I honestly don't know; I just always do this because the western blot people do. I do it with an Excel sheet I made like 10 years ago and a Shiny App a previous student and I made more recently. 
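For anyone who has only ever seen this as an Excel sheet, the arithmetic is literally just this (a minimal sketch with made-up intensities - not what SpectroNaut is doing under the hood):

# Normalize a phosphosite's change to the change in its parent protein.
import math

# Hypothetical intensities for one site/protein pair in two conditions.
phospho = {"control": 1.0e6, "treated": 1.1e7}   # phospho-ERK signal
total   = {"control": 2.0e7, "treated": 2.0e8}   # total ERK protein signal

raw_phospho_log2fc = math.log2(phospho["treated"] / phospho["control"])
protein_log2fc     = math.log2(total["treated"] / total["control"])
adjusted_log2fc    = raw_phospho_log2fc - protein_log2fc

print(f"phospho alone:     {raw_phospho_log2fc:+.2f} log2")  # looks like a big increase
print(f"total protein:     {protein_log2fc:+.2f} log2")      # but the protein went up too
print(f"protein-adjusted:  {adjusted_log2fc:+.2f} log2")     # site change beyond the protein change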

SpectroNaut now does really really smart things (smarter than I do) automatically! There is more information about this here!

Tuesday, December 31, 2024

Rapid microglia phagosome proteomics!

 


A big thank you to Aleksandra Nita-Lazar for suggesting this as one of the most impactful papers of 2024 that didn't make my initial list! 


Again, the less I type here the better on the physiology/interpretation point, but from the introduction it's pretty clear that: 1) microglia phagosomes are super important biologically and in a "plethora of brain pathologies" (stolen from the abstract) and 2) you'd think we'd know a lot about them, and we do not.

The part I do understand here is that a LOT of techniques were employed, including proteomics on the purified phagosomes. That part was performed on a QE HF-X using around 100 minute total gradients. I like the fact that a very large inline trap was used (6cm x 100um), and I think that extra pile of chromatographic resolution in the trap helps make a 14cm x 75um separation column better match the relatively long gradients employed. DDA was used, and downstream analysis was performed with MaxQuant and Perseus.

LCMS metabolomics was also performed on a QE Classic exclusively using HILIC separation (I think) with data analysis through Compound Discoverer. There is a typo in the MS1 resolution, but otherwise it's all really well detailed. 

I love me some multi-omics data integration and there is a lot here to integrate. 

Aleksandra said that there are fantastic biological findings here and the authors seem equally enamored with this whole pile of data we have on these important subcellular compartments.