Saturday, April 2, 2022

SOMASCAN vs Olink -- next-gen proteomics deathmatch!


(I couldn't find any pictures of the actual instrument that does one of these things, so I used my best judgement.) 

The blog hasn't had a head-to-head deathmatch in a while, but wooooo-- what a great topic for one! 

I gotta move fast -- I mentioned this preprint in passing a while back, but it keeps coming up at meetings and in calls, and it is absolutely worth revisiting.


That sounds pretty benign, right? 

SomaScan is an aptamer-based proteomics technology (oligonucleotide thingies specifically designed to bind whatever target you want) that has been around for several years now. It has been getting a lot of attention recently thanks to a couple of big studies and the amazing amount of capital the company pours into advertising. I think a deep dive into ad spend vs. investment capital for this tech would be a lot of fun. Strangely, you can't find a picture of the device that generates the "data." Many of the searches direct you back to this blog, and I'm probably not being all that helpful, but I'll try harder. 

Olink is a newer proteomics technology that is antibody-powered, but actually pretty clever for a technology built on a reagent that millions of years of evolution has worked very, very hard to make unpredictable. To get around the fact that antibodies will randomly bind to whatever they want -- binding to new things is legitimately their biological task -- Olink requires that two matched antibodies bind the same target protein. When that happens, the oligonucleotides on antibody A and antibody B hybridize (bind together into double-stranded nucleotides), and amplification of that sequence can only happen if hybridization happens. Only antibody A? No signal. Same for B. This should rule out some false positives, but given the unpredictable nature of antibodies, I expect an increase in false negatives must also occur. 
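If you want to see why the matched-pair trick helps, here's a toy simulation. This is my own back-of-the-envelope sketch, nothing to do with Olink's actual pipeline, and the off-target probabilities are completely made up. The point is just the logic: if each antibody sticks to the wrong thing 1% of the time, independently, requiring both to land on the same molecule drops the false positive rate to roughly 0.01 x 0.01:

```python
import numpy as np

rng = np.random.default_rng(42)

n_offtargets = 100_000  # hypothetical off-target molecules in the sample
p_offtarget = 0.01      # made-up chance either antibody binds something it shouldn't

# Each antibody binds off-target independently of the other
a_binds = rng.random(n_offtargets) < p_offtarget
b_binds = rng.random(n_offtargets) < p_offtarget

# Single-antibody assay: any off-target binding generates signal
single_fp_rate = a_binds.mean()

# Matched-pair logic: signal only when BOTH antibodies bind the same
# molecule, so their oligos can hybridize and get amplified
dual_fp_rate = (a_binds & b_binds).mean()

print(f"single antibody false positive rate: {single_fp_rate:.4f}")  # ~0.01
print(f"matched pair false positive rate:    {dual_fp_rate:.5f}")    # ~0.0001
```

The flip side is symmetrical, though: if either antibody of the pair fails to find the real target, you get nothing at all, which is exactly where those extra false negatives I'm worried about come from.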

Deathmatch time! Ready? 


(Unrelated. Just popped up when I was looking for GIFs of impatient people.)

How'd they do this deathmatch? They looked at 871 proteins that both technologies could quantify.

In 10,000 individuals (!?!?!?) These things seriously do have some throughput capabilities! 

Good news? Neither technology appears to have a systematic bias toward specific pathways. That suggests there isn't a copy number bias or anything. 

Bad news? 

One does much worse with membrane proteins than the other; there is a thorough evaluation of that in the study. 

Correlation of results between the two? 0.38. You know, somewhere between 'a few things match here and there' and NOTHING MATCHES. 
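To put that number in perspective, 0.38 squared is about 0.14, so even in the best case the two platforms only share about 14% of their variance. Here's a quick toy simulation (my own sketch, and deliberately generous: it pretends both platforms measure the exact same true abundance plus their own independent noise, with a noise level I picked to land at 0.38):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # roughly the cohort size here

# Both "platforms" see the same truth plus their own independent noise.
# A noise variance of ~1.63 (made up) gives corr = 1 / (1 + 1.63) ~ 0.38
truth = rng.standard_normal(n)
platform_a = truth + np.sqrt(1.63) * rng.standard_normal(n)
platform_b = truth + np.sqrt(1.63) * rng.standard_normal(n)

r = np.corrcoef(platform_a, platform_b)[0, 1]
print(f"r   = {r:.2f}")      # ~0.38
print(f"r^2 = {r*r:.2f}")    # ~0.14 -- shared variance between the platforms
```

And that's the friendliest possible reading: it assumes the noise is independent and split evenly between the two. The real situation could easily be worse.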

Which one is right? Is either? Well, these authors go to the trusty pQTLs and genetic evidence to see how well each platform agrees with the genetics. Which is...ummm.....


Okay, so one of them might be great. Both of them might be terrible. 

The one thing that is clear from these data is that they cannot both be right. At least one of these technologies is bad at measuring protein abundance. And this is where my over-the-top frustration comes in. 
The money being poured into these technologies is at a level that traditional mass spectrometry-based proteomics has never really had access to. We know what our problems are. We know that we've been too slow in the past, that we haven't scaled well or automated much, and that we haven't standardized. But here is the thing -- how much of that is due to limited resources? If the best group you could think of right this second had access to the kind of resources these unproven "next gen" proteomics systems have, would they fix all of that? I sure think so. 
My argument evaporates as soon as someone shows some convincing evidence that one of these things can measure a protein accurately. I'll keep complaining, and waiting, until that happens. 
