Thursday, October 17, 2024

Three different retinal degeneration mutations result in the same (treatable?) phenotype!

 



Need to read something super positive and optimistic today? I strongly recommend this new study in press at MCP that totally made my day! 


It's really easy to look at the broad range of different genetic mutations that can lead to a single disease and think.....


Retinal degeneration diseases ABSOLUTELY fall in this category. Check out this associated paper on progressive vision loss in dogs.

Mutations in 17 different stupid genes are known to lead to progressive retinal atrophy alone - which is just one of many diseases that cause dogs to go blind later in life. 

If you are in drug development - in either primary research or applied for-profit work - what do the odds of success sound like for a disease caused by at least 17 different things? Can you convince someone to help fund you while you chase targets that may only help a small percentage of those afflicted? 

Almost always? No. That's a bad elevator pitch and a worse grant application. In pharma? Start sending out CVs before you ask.

What makes this paper so very cool is that they took some of the mouse models for progressive retinal degeneration (mutations in different genes!) and looked at the proteins that actually change vs controls. They're the same! 

Unnecessary reminder for most people here (but good for outsiders, who still can't seem to get this stuff straight): 

The genome is the genotype - that's what the DNA says - but that isn't what is physically happening

The proteome is often the phenotype (what is physically happening!), or at least very close to and involved in the phenotype

AND - Nearly all drugs target proteins! 

These authors don't miss the point here either. Who cares what the gene is that caused the protein change if you know the protein causing the problem? Not me, not these authors, and certainly not patients. Cause now you've got something to develop a drug against! 

Tuesday, October 15, 2024

Revisiting the Harvard FragPipe on an HPC technical note in terms of total time/costs!

 


I read and posted on this great technical note from the Steen groups a while back and I've had an excuse to revisit it today.


Quick summary - they ran EvoSep 60SPD proteomics on a TIMSTOF Pro2 on plasma from 3,300 patients. They looked at the run time on their desktop and estimated that processing it the way they wanted to would take about 3 months. Ouch.

What they did instead was set the whole thing up on their local high performance cluster and they walk you through just about every step. 

It took them just about 9 days to process the data using a node with 96 cores and 180GB of RAM. They do note that they never appeared to use even 50% of the available resources, so they could have scaled back in different ways. 

Where I got interested was this - if I were paying for HPC access, how many core hours would doing it this way set me back? 9 days x 24 hours = 216 hours of wall time, times 96 cores, puts it at roughly 20,700 core hours. I know some HPCs track what you actually use in real time based on the load you're putting on their resources, but others don't. So it's, at the very most, about 20,700 core hours - which is the estimate I was looking for when I went back to find this paper.
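If you want that math in one place, here's the back-of-the-envelope version (the 9 days and 96 cores are from the technical note; the "billed for the whole node" part is my assumption):

```python
# Rough core-hour estimate for the FragPipe-on-HPC run described above.
# Numbers from the technical note: ~9 days of wall time on a 96-core node.
# Assumption (mine): the cluster bills for the full node the whole time.
wall_days = 9
cores = 96

wall_hours = wall_days * 24          # 216 hours of wall time
core_hours = wall_hours * cores      # 20,736 core hours, worst case

print(f"{wall_hours} wall hours x {cores} cores = {core_hours:,} core hours")
# If the scheduler only charges for measured load (~50% here), cut that roughly in half.
```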

Not counting blanks/QCs/maintenance - 2 months of run time for a 3,300 patient study. 9 days to process. It's such an exciting time to be doing proteomics for people who care about the biology. And - I'll totally point this out - 60 SPD isn't even all that fast right now! It's a 6 week end to end study at 100SPD! 
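And the acquisition side of the math, if you want to swap in your own cohort size or SPD rate (same caveat - no blanks, QCs, or maintenance in this estimate):

```python
# Acquisition time for an N-sample cohort at a given samples-per-day (SPD) rate.
# Processing time (9 days) is from the technical note; the rest is simple division.
samples = 3300
processing_days = 9

for spd in (60, 100):
    acquisition_days = samples / spd
    total_weeks = (acquisition_days + processing_days) / 7
    print(f"{spd} SPD: {acquisition_days:.0f} days on instrument, "
          f"~{total_weeks:.0f} weeks end to end with processing")
# 60 SPD: 55 days on instrument (~2 months); 100 SPD: 33 days, ~6 weeks end to end.
```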

Thursday, October 10, 2024

Use a carrier channel - to reduce(!?!) your boring background!

 


This smart new technical note does something that I think many people have thought about, but both pulls it off AND methodically dissects it so it's now a completely valid tool to put in our utility belts. 


Problem: There are 10,000 proteins here and I don't care about any of them. I care about the stuff after those first 10k. 

Traditional solution: Fractionate and fractionate some more and cross your fingers. 

New idea - isobaric tag (TMT is one option) all your peptides. Then, in a different channel, tag a higher-abundance spike of just the peptides you care about.

Perfect application? Infected cells! Even if you've got a super duper bad bacterial infection, pretty close to 100% of the protein around is going to be human. But if you label bacterial proteins and spike those in at a higher level, you've biased your stochastic sampling toward the bacterial proteins and effectively reduced the host background! 
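To see why that works, here's a toy calculation with completely made-up numbers (the real host:pathogen ratios and carrier levels are in the note itself):

```python
# Toy model of how a higher-abundance carrier channel biases stochastic sampling
# toward the low-abundance (bacterial) proteome. Numbers are illustrative only;
# they are not from the paper.
host_peptides = 0.999      # fraction of the sample that is human at natural abundance
bug_peptides = 0.001       # fraction that is bacterial

carrier_fold = 100         # spike the labeled bacterial channel in at 100x

bug_total = bug_peptides * (1 + carrier_fold)
host_total = host_peptides
bug_fraction = bug_total / (bug_total + host_total)

print(f"Bacterial share of sampled ions: {bug_peptides:.2%} -> {bug_fraction:.1%}")
# ~0.1% of ions are bacterial before the carrier, ~9% after - so the instrument
# spends far more of its sampling time on the proteins you actually care about.
```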

Where this shines is the pressure testing. Smart standards are made and tested and tested again. Instruments that can reduce coisolation with tricks like MS3 come out the best. Ion mobility (here FAIMS) coupled to MS2 comes in second, and MS2 alone has a lot of background but still works. 

The supporting data are split between a bunch of public repositories - easier to copy and paste the accessions from the paper than to link them all here. 


Wednesday, October 9, 2024

How much do sample specific libraries help in DIA low input/single cell proteomics?

 


At first this new study is a bit of a head scratcher, but once you get past the unnecessary nomenclature, it's worth the time to read. 

Ignore the DIA-ME thing altogether. I should remove it from the title. Wait - I have a car analogy - just about every review of the Ford Mustang Mach-E is something like "this is a really nice EV, we were just confused about the whole Mustang thing." 

DIA-ME is just a name for how literally everyone processes single cell DIA data. We know library free isn't as good as a library search. And we know that it really doesn't make sense to look for transcription factors in global single cell data - not even the marketing releases at ASMS have claimed to get to proteins at 10 copies/cell, and - oh boy - there are some slide decks from ASMS 2022 that no one has published yet...and not just because I'm reviewing every other SCP paper and limping around punching things while typing anonymous snarky things (I'd rather write snarky things where everyone knows who I am and why).

So you run 100 or 200 of your cells on your super sensitive new instrument and you make a library out of that data. Maybe you do that 10 times. Then you analyze your single cells against that library. Works great. Walkthrough here for 2 popular programs, and a rough sketch of the two-step setup below.
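If you've never set it up, here's a hedged sketch of that two-step routine using the DIA-NN command line (the executable name and flags are from my reading of the DIA-NN docs and can differ between versions; all file names are placeholders):

```python
# Step 1: library-free search of the 100/200-cell "boost" runs, writing out a
# sample-specific spectral library. Step 2: search the single-cell runs against it.
import subprocess

boost_runs = ["boost_100cells_01.d", "boost_100cells_02.d"]    # placeholder file names
single_cells = ["cell_0001.d", "cell_0002.d"]                  # placeholder file names

def f_args(runs):
    """Expand a list of raw files into repeated --f arguments."""
    args = []
    for r in runs:
        args += ["--f", r]
    return args

# Step 1: build the sample-specific library from the high-load runs.
subprocess.run(["diann", *f_args(boost_runs),
                "--fasta", "human.fasta", "--fasta-search", "--predictor",
                "--gen-spec-lib", "--out-lib", "sample_specific_lib.tsv",
                "--out", "boost_report.tsv", "--threads", "16"], check=True)

# Step 2: search every single-cell run against that library.
subprocess.run(["diann", *f_args(single_cells),
                "--lib", "sample_specific_lib.tsv",
                "--out", "single_cell_report.tsv", "--threads", "16"], check=True)
```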

However - we're all largely doing that because you've got to get 1,000 proteins/cell to get your paper published in a Nature family journal. How much does using these sample specific libraries affect our results and the biological findings? 

That's the gold in the method of this paper. These authors painstakingly dissect it with spike-ins and different library loads and it's all very telling. They use 5 cell and 20 cell and 100 cell libraries and on and on. 

If you're interested you can read it. I'm adding it to my reference folder for later. 

THEN - the paper gets cool. Forget the mass spec stuff - this group takes some U-2 OS cells, which are one of the best-studied cell lines for understanding circadian rhythm (smart! stealing this idea for some targeted stuff coming up), and they hit the cells with Interferon gamma. I don't know how to make the funny greek letter thing. 

And - no real surprise to anyone who has seen a control/dose response thing in single cells - they identify 2 very different populations of cells. In fact, the two populations appear to be almost entirely opposite in their response! There isn't as much on this as you might hope from the biology side, but it's still cool. Would we want every single one of our cells to go into a pro-inflammatory response? Probably not! Most adult humans I know are doing everything they possibly can to reduce inflammation whenever possible because that stuff is gross and toxic. 

It drives home how important it is for eukaryotic cells that not every cell goes into a full-out inflammation cascade, when even messed-up cells derived from a cancer patient and grown in plastic since 1964(!!!) exhibit a bimodal response. I was snarky at the beginning of this post, but I think it's both an important and very interesting study, as well as being visually pretty and well organized.

Thursday, September 26, 2024

Wait - do we even need high resolution mass spectrometry if we're doing protein ID/quan?

 


This one is totally worth thinking about - AND - it's open access! 


A lot of proteomics today is just measuring protein abundance, right? And now that we have all these cool ways of predicting and matching the relative intensity distributions of fragmented peptides, do we even need to go past unit resolution mass? Or....did someone....just convince us we absolutely needed it all the time....


Yo, I am not a big unit resolution anything fan. I've been stuck on things like - is this a nitrate or a sulfate or a phospho? - and the thing is big enough that I can't tell what the monoisotopic ion is. 

You know - mass spectrometrists probably don't get enough credit for how absolutely bizarre our sense of humor can be about the stuff we do. 

Chris posted this paper and it descended into chaos


This is funny because citrulline is such a pain in the ass PTM that even Orbitraps suck at determining what is a citrulline vs what is an M+1 isotope when it's an intact peptide. And you aren't just fragmenting that probably-not-really-citrullinated peptide - you're fragmenting all the crap around it in this big dumb window. 
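For anyone who hasn't had the pleasure, here's the arithmetic behind that complaint (standard monoisotopic values; the resolution estimate is a rough m/Δm, not a promise):

```python
# Why citrullination is miserable at the precursor level: the mass shift from
# citrullination (deamidation, +0.98402 Da) is nearly the same as the spacing
# of the M+1 isotope peak (13C - 12C = 1.00336 Da).
citrullination = 0.98402      # Da, monoisotopic deamidation shift
isotope_spacing = 1.00336     # Da, 13C minus 12C

delta = isotope_spacing - citrullination       # ~0.0193 Da
peptide_mass = 1500.0                          # Da, a typical tryptic peptide

required_resolution = peptide_mass / delta     # rough m/dm needed to separate them
print(f"Mass difference to resolve: {delta:.4f} Da")
print(f"Rough resolving power needed at {peptide_mass:.0f} Da: ~{required_resolution:,.0f}")
# ~78,000 just to pull the peaks apart at the precursor, and in practice you want
# considerably more than the bare m/dm to call it confidently. At unit resolution,
# forget it.
```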

And the whole reason I'm writing this post instead of cleaning my house before my Mom gets here - the Mom who will totally tell everyone back home that my house isn't clean? 


Spit out my coffee. OMG. It's so great to know a group of people this funny. 

Back to the paper - this is super important. We've got people out there measuring proteins with arrays and antibodies - poorly - but rapidly - and some of us are about to lose our lunch money. Maybe we are overdoing it here and there. And ion traps are tough - and easy to build - and fix - and they can be screaming fast. And they can be cheaper to buy and run with those little vacuum pumps. It's totally worth thinking about. 

Wednesday, September 25, 2024

The current status of the NCI Proteomic Data Commons - it'll get there!

 

The National Cancer Institute Proteomic Data Commons is such a big big big idea. And it is dealing with super important human samples in formats that are generally evolving. I honestly can't imagine what a hassle it is to pull something like that together - but they've got one heck of a team working on it.

You can read about the current status of things and where it's going here


If you just want to dig around and try to look for things, you can check out the portal here.

If you're used to other big endeavors like the Human Protein Atlas or ProteomicsDB you might find yourself wondering - did US Government employees design one of these things and ...not.... the others? Well.....maybe....why would you ask that.....?......but again, that's an absolute shitload of data and it's tough to make it organized. Again - super cool plan - and when they inevitably get it all working please check the date on this post before you leave a comment "what is this weirdo talking about - it's awesome!" Or do leave the comment so I can check back. 

Tuesday, September 24, 2024

How many scans/peak do you need for accurate quan in LCMS?

 


This is from a couple of years ago, but this group makes a compelling argument for 6 scans/peak! 

That's about half of what I'm generally trying to get (10-12), but as I'm looking at a LOT of recent data from different instruments it looks like I'm old-fashioned. I might need to put up a poll to hear what the community thinks.
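If you want to check your own methods, the arithmetic is just peak width over cycle time (the numbers below are placeholders - plug in your own):

```python
# Scans per chromatographic peak is simply peak width divided by cycle time.
# Swap in your own gradient's peak width and your method's measured cycle time.
peak_width_s = 4.0     # width at the base of a typical peak, in seconds
cycle_time_s = 0.6     # time for one full MS1 + MS2 cycle, in seconds

scans_per_peak = peak_width_s / cycle_time_s
print(f"~{scans_per_peak:.1f} scans across the peak")
# ~6.7 here; tighten the cycle (fewer windows, shorter transients/fill times)
# or widen the peak if you need more points for quan.
```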


Monday, September 23, 2024

What could you do with some free proteomics? EuPA YPIC Student Awards accepting applications now!

 

Are you a student with a problem that maybe some proteomics could solve? Or do you have a great new idea that just needs some real evidence to show it could change the future of proteomics research? 

Applications are open until November 24th. Students at European universities who apply can get up to 5,000 Euros to use for their work. Finalists get a free trip to Greece to pitch their ideas. 

Another game changing idea from the EuPA YPIC! Find out more here! 

Sunday, September 22, 2024

Accurate transient lengths/times for Exploris 480 (and very related systems!)

 


I stumbled backwards into this when I realized my Excel sheet cycle time calculator wasn't lining up with Astral data. Turns out I either had the D20 high-field transient times wrong, or there have been some incremental improvements since the QE HF launched (....a while ago....). Either way - this is a cool paper and it's what I'm using to fix my math.
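In the meantime, here's roughly what my spreadsheet is doing (the transient lengths below are the commonly quoted approximate values and the per-scan overhead is my own assumption - the linked paper is where to get the accurate numbers):

```python
# Rough cycle-time calculator for a high-field (D20) Orbitrap, Exploris-style.
# Transient lengths are approximate, commonly quoted values at m/z 200;
# the overhead per scan is an assumption, not a spec.
approx_transient_ms = {
    7500: 16, 15000: 32, 30000: 64, 60000: 128,
    120000: 256, 240000: 512, 480000: 1024,
}
overhead_ms = 10          # assumed per-scan overhead (injection, processing, etc.)

def cycle_time_ms(ms1_res, ms2_res, n_ms2):
    """Approximate cycle time for one MS1 scan plus n_ms2 MS2 scans."""
    ms1 = approx_transient_ms[ms1_res] + overhead_ms
    ms2 = (approx_transient_ms[ms2_res] + overhead_ms) * n_ms2
    return ms1 + ms2

print(f"{cycle_time_ms(120000, 15000, 20) / 1000:.2f} s per cycle")
# 120k MS1 + 20 x 15k MS2 scans lands around 1.1 s with these assumptions.
```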



Friday, September 20, 2024

Did y'all know DIA-NN has an integrated viewer now?!

 


Alllriiiiight! I've been doing this thing recently where I forget things here and there like - 

Maybe not a lot, but I was really surprised to find out that my desktop PC at home is 8 years old (eek!), and my wife said something really weird about being 40 soon, and I used to be older than her. 

I also just discovered that this old PC is running DIA-NN 1.0.0 - and Vadim mentioned something about 1.9.1 - and - WHOA - what an upgrade! (I was doing SLICE-PASEF a couple of years ago, so some computer was on a more modern version at some point.) HOWEVER - it isn't really obvious which one you're currently using, and this one HAS A VIEWER! 


On top of "hey! here is a viewer!" it's really fun to go and just see how many scans/peak you're getting for published data!