Saturday, January 27, 2018

IonStar takes on combined chemotherapeutics in pancreatic cancer!

I'm admittedly biased, but I unabashedly love the IonStar methodology.

#1 reason? The Qu lab developed a pipeline for sample prep and for running their instruments, and they don't vary from it.

An awesome thing about our field is that we're tinkerers. Let's try this new sample prep method -- we'll get 4% more IDs. Or let's mess with the gradient and that'll get us another 2%.

Could the Qu lab take every sample that comes in the door, reoptimize their sample prep and instrument parameters, and maybe get a little better data? Probably. Could they introduce more sample handling, like sample-specific fractionation protocols, and get deeper coverage? Definitely. But the number one thing they focus on is run-to-run reproducibility. Feel free to pull some RAW files and line up the peaks -- they are shockingly consistent from one study to the next. These aren't the results you get if you tinker from one experiment to the next, and they aren't the results you get if your department's big-deal MD sends half of a sample he/she is totally fixated on to one proteomics lab and the other half to another.

(Typical MD response to low reproducibility...)

Case in point: yet another great and powerful application of the IonStar methodology, in press at MCP. This. study. is. awesome.

One of an oncologist's biggest core problems is this: what nasty, probably toxic chemotherapy drug do you subject this poor patient to that will have the best chance of killing the tumor and the least chance of killing the patient? Complicate that with the fact that combination therapies are probably the way they have to go, and they are often making these decisions with less information than they'd really like. Any extra info might help them make a better choice. These aren't trivial choices. A person's life might hinge on picking the right 2 drugs and the right doses...based on a couple of ELISAs and amplification of...what?...maybe 50 genetic targets?

In this study the authors take pancreatic cancer cells and characterize their response to single and combination therapies. This lets them map the downstream effects and helps elucidate why the combination ends up being effective in these specific cell lines.

Okay -- here is the best part, and then I'll get off my soap box. They looked at 40 samples here. If someone brings them 20 more samples from, for example, patients who have been treated with these drugs, they'll be able to run those new samples and line them back up. The same sample prep, the same trap and column manufacturer, and the same gradient, LC, and MS parameters will allow them to keep adding data to this cohort and extract useful quantitative measurements across all of the samples from beginning to end. I'd argue that, in many contexts, this is every bit as valuable -- if not more so -- as picking up 10% more peptide IDs in the second cohort while losing some of your original measurements in the process.

Don't get me wrong. It's AWESOME that we're tinkering with proteomics, coming up with new methods and techniques, and pushing our field forward. This works for theoretical proteomics, and you could argue that the labs innovating like this are our field's major driving force. Unfortunately, it is also the number one thing slowing down the realization of clinical proteomics, and this study is another example of how focusing on reproducible measurements over innovation is what we'll have to do to bridge the gap.

(Minus the hair...)
