Wednesday, June 19, 2024

New biomarker panel for Parkinson's disease with early predictive power!!

 


Now for some good news! While there is a surprisingly well-developed set of genetic markers for Parkinson's Disease (you can even get info on some of them through things like 23andMe), sometimes it develops without any of these. It's called de novo when that happens. What we need are some protein-level markers, because it sounds like this is yet another disease that isn't (or isn't entirely) genetic! 

We've seen some amazing progress on Alzheimer's Disease biomarkers, largely out of the school in St. Louis that sounds like it should be in Seattle, which adds to why literally everyone forgets that it is there. 

Now it's Parkinson's turn for a super encouraging paper title!

https://www.nature.com/articles/s41467-024-48961-3 (something is up with my ability to hyperlink on blogger) 


How'd they do it? I definitely expected some of the super-high-tech nanoparticle-based plasma preparation stuff that lets us see ultra deep into the fluids. Nope. Not here. This is a story about having access to a priceless patient sample set and doing things the hard way. 

They used a standard depletion strategy (I read it last night on my phone, but I suspect we're talking about the Agilent top-12 depletion column or something similar to what Michal and I were using at NIAID a dozen years ago) and - 

MSe! (WTF is that?) Oh. Let me tell you (it's probably in the terminology translator over there somewhere --> ). In the long history of this blog, which is now a summary of something north of 3,000 proteomics papers (about half you can see), I think this is MSe paper number 3. 

It is a technology we had at NIAID in 2010(?) and it is Waters' name for All Ion Fragmentation. It is a near 100% duty cycle technology: you get your MS1 scan, then you get another full scan where every peptide is fragmented in a single window. With the rapid improvements in data-independent acquisition (DIA) analysis algorithms, you'd guess that maybe we could make sense of these data better now than ever. I honestly don't know; I haven't seen it used in a long time. 
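If a picture helps, here's a toy sketch (Python, with completely made-up numbers - nothing here is from the paper) of why alternating two full scans buys you that near-100% duty cycle compared to a top-N DDA method:

```python
# Toy sketch of why MSe (all-ion fragmentation) has a near-100% duty cycle.
# Hypothetical numbers only; nothing here comes from the paper itself.

def mse_cycle(precursors):
    """One MSe cycle: a low-energy full scan (MS1) followed by a
    high-energy full scan that fragments EVERY precursor at once."""
    ms1 = list(precursors)                     # all intact ions surveyed
    ms2 = [f"frag({p})" for p in precursors]   # all ions fragmented together
    return ms1, ms2

def dda_cycle(precursors, top_n=10):
    """A classic DDA cycle for contrast: one MS1 scan, then isolated
    MS2 scans for only the top-N most abundant precursors."""
    ms1 = list(precursors)
    chosen = sorted(precursors, reverse=True)[:top_n]  # intensity-ranked
    ms2 = [f"frag({p})" for p in chosen]
    return ms1, ms2

peptides = list(range(500))  # pretend 500 co-eluting precursor intensities
_, mse_frags = mse_cycle(peptides)
_, dda_frags = dda_cycle(peptides)
print(f"MSe fragmented {len(mse_frags)}/500 precursors this cycle")
print(f"DDA fragmented {len(dda_frags)}/500 precursors this cycle")
```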

Something like 1,200 proteins were quantified in the patient samples. They used some criteria for filtering I don't quite understand, but it sounds more strict than what I'd use in this case, and they work their way down to around 900 for quantitative analysis - landing on around 120 that they built targeted assays for in their large cohort study. Told you - they put the work in and did this the classical way. 
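Their exact filtering criteria are their own (and, again, I don't fully follow them), but a funnel like that usually looks something like this sketch - every threshold here is my guess, not theirs:

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical funnel from ~1,200 quantified proteins down to a targeted panel.
# Simulated data and guessed cutoffs; NOT the criteria from the paper.
rng = np.random.default_rng(1)
samples = [f"PD_{i}" for i in range(40)] + [f"CTRL_{i}" for i in range(40)]
df = pd.DataFrame(rng.lognormal(0, 0.2, size=(1200, 80)), columns=samples)
df = df.mask(rng.random(df.shape) < 0.2)   # sprinkle in ~20% missing values

# 1) Completeness: keep proteins quantified in >= 70% of samples (assumed cutoff)
complete = df[df.notna().mean(axis=1) >= 0.70]

# 2) Variance: keep proteins with CV <= 30% (assumed cutoff) -> the "~900" stage
cv = complete.std(axis=1) / complete.mean(axis=1)
quantifiable = complete[cv <= 0.30]

# 3) Group separation: rank by Mann-Whitney p-value and carry the best
#    candidates into targeted assay development -> the "~120 panel" stage
pd_cols = [c for c in df.columns if c.startswith("PD_")]
ctrl_cols = [c for c in df.columns if c.startswith("CTRL_")]
pvals = quantifiable.apply(
    lambda r: mannwhitneyu(r[pd_cols].dropna(), r[ctrl_cols].dropna()).pvalue,
    axis=1,
)
panel = pvals.nsmallest(120).index
print(f"{len(df)} quantified -> {len(quantifiable)} pass filters -> {len(panel)} panel")
```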

Of their 120 markers, they can consistently detect about 1/4, and when applied to their larger cohort a machine learning model can accurately classify the patients! 
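I won't vouch for the paper's actual model details from a phone read, but for ~30 targeted markers the generic version of this looks something like the sketch below (scikit-learn, fully simulated data, so expect chance-level numbers from the random labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Generic sketch of classifying patients from a small marker panel.
# I don't know which model the authors used; logistic regression is just
# a sensible default for ~30 features. All data below is simulated.
rng = np.random.default_rng(0)
n_patients, n_markers = 200, 30               # ~1/4 of the 120-marker panel
X = rng.normal(size=(n_patients, n_markers))  # stand-in for targeted intensities
y = rng.integers(0, 2, size=n_patients)       # 1 = PD, 0 = control (simulated)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```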

Minor criticism of the paper, because I work in a lab without air conditioning during a historic heat wave and it is making me very negative: the targeted data is all up exactly where it should be - on Panorama, where Skyline nerds can take a look at it. The global data does not appear to have been deposited. While... there are probably 10 people on planet earth who can process MSe data (maybe there are more? No way there are 100, right?), this is a dataset that might make some of us want to try. These are, however, clearly actual patient samples, and sometimes IRBs don't allow global data to be deposited, but it would have been cool to see these results. As an aside, some recent TIMSTOF methods described by Vadim's team (link) have essentially been All Ion Fragmentation methods, so DIA-Neural Network tools should be able to make sense of these spectra, if the screwy Waters data format could be converted to something universal. 
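For what it's worth, ProteoWizard's msconvert can typically handle that conversion (Waters .raw directories to mzML), at least on builds that ship the vendor reader. A minimal sketch of shelling out to it - the file names here are made up for illustration:

```python
import subprocess
from pathlib import Path

# Sketch of converting a Waters .raw directory to open-format mzML with
# ProteoWizard's msconvert. Requires msconvert on PATH (the Windows builds
# include the vendor reader). File names below are hypothetical.
raw_dir = Path("patient_plasma_01.raw")        # Waters "files" are directories
subprocess.run(
    [
        "msconvert", str(raw_dir),
        "--mzML",                              # universal open format
        "--64",                                # 64-bit encoding
        "--filter", "peakPicking true 1-",     # centroid all MS levels
        "-o", "converted/",
    ],
    check=True,
)
print("wrote", Path("converted") / (raw_dir.stem + ".mzML"))
```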

Don't let this offset how great this study is, and the real theme here: if a skilled team can get access to the right samples, maybe we don't need the best and most expensive instrument or sample prep method in the world to do something truly important. 


