Friday, June 8, 2018

ASMS 2018 takeaways!

I didn't get to see much the final day of ASMS as I traveled back, but the two of us from our group who got to go are working on a wrapup for those who couldn't. It'll honestly take months for me to sort through all the notes and for the Google Scholar alerts to stop coming in for all the cool stuff I don't feel I can talk about yet.

My biggest takeaway --

Our field is still beset with difficulties -- but the instruments don't seem to be the problem right now. How can they be, with so many people achieving near-theoretical proteomic coverage? With thousands of PTMs of any type seeming like something that even I could get?

This ASMS felt like -- time to confront our biggest shortcomings, like:

1) The fact that proteins are really hard to get out of cells easily and consistently

There were sample prep robots everywhere! And high-tech new sample prep kits, like the S-Trap, the awesome thing Pitt is working on (that I can't find any notes on yet), and some new offerings coming from the vendors themselves (finally?), all of which seem tilted toward being automation compatible.

2) The big one -- how immature our informatics are -- and how we fix them!

I think I'd finally been lulled into some sort of complacency that our data processing pipelines are just fine. A major emphasis of ASMS on the proteomics end is that they aren't. They're better than they have ever been, but this explosion in scan speed and data complexity is showing that some of our core early-2000s data processing assumptions are in serious need of updates -- but really, really smart people are working on it. We probably don't ever need to get as sophisticated as the genomics people -- our raw instrument output is easily 1,000 times less noisy and more accurate than theirs (have you looked at the raw output from these next-gen things?!?) -- but we've got some ground to make up.

(Images like that do make me feel better. I can always manually sequence my peptide if I have to, best of luck with those 4e7 transcript reads)

(Another way to make myself feel better: I went to some metabolomics talks -- they're trying hard and making up ground, but they are way behind where we are, partially due to the smaller size of their field, some really poor assumptions that were made in the past, and -- most importantly -- some really unique problems they face. "Oh -- cis- and trans- makes a huge difference here? Great! Best of luck with that! I'll...umm...check..back...later...")

3) Glycomics and glycoproteomics are coming -- and are about to become a primary thing we hear about. I'd have to stop and check the signs while walking around: "yup -- I'm still surrounded by posters about sugars..." Everything is glycosylated -- and glycans totally suck to work with -- but it wouldn't be crazy to suggest that they are more phenotypically important than unmodified proteins.

New reagents, new columns, new methods, new software. It's all going to help when that scientist knocks on your door and has no intention of letting you cleave the sugars off and throw them into the waste bin. "Oh -- d- and l- makes a huge difference here? I'll...umm...."

4) Proteogenomics may finally be something that I can do! Surely, out of all of these new tools there has got to be at least one -- at least one -- that I'm smart enough to figure out how to use, right? I hope more details are on the way soon!

1 comment:

  1. Hi Ben,
    You hit the nail on the head about the state of informatics in proteomics. Pipelines have gotten long, and the upstream tool builders do not seem to care much about the downstream steps. More emphasis needs to go into getting the analyses to the finish line.

    A great example is larger TMT experiments. What we know about single TMT experiments may not extrapolate to multiple TMT experiments. You can overthink (or maybe underthink?) the problem and come up with complicated models that are hard to evaluate. Or you can try to understand the problem a little better before attempting to solve it. The solution is straightforward, and it becomes clear why an extra analysis step is required. I presented a poster Thursday that shows we often make these problems harder than they really are.
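    The comment doesn't spell out the extra step, but a minimal toy sketch can show why single-plex intuition breaks down across plexes: reporter-ion intensities from different TMT runs sit on different instrument scales, so raw values can't be compared directly. One common fix (assumed here for illustration; the channel layout and numbers are invented) is to rescale each plex using a pooled reference channel shared across all plexes:

    ```python
    # Toy example: reporter-ion intensities for one protein measured in two
    # hypothetical TMT plexes. Assume channel 0 in each plex is the same
    # pooled reference sample (an assumption for this sketch, not a standard).
    plex1 = [1000.0, 1200.0, 800.0]
    plex2 = [2000.0, 2600.0, 1500.0]  # same biology, ~2x instrument response

    # Raw values are not comparable across plexes: every channel in plex2
    # looks "up-regulated" purely because of measurement scale.

    # Extra analysis step: scale each plex so the shared reference channel
    # lands on a common value (arithmetic mean here to keep the sketch simple;
    # a geometric mean is another reasonable choice).
    ref_mean = (plex1[0] + plex2[0]) / 2
    factor1 = ref_mean / plex1[0]
    factor2 = ref_mean / plex2[0]

    plex1_adj = [x * factor1 for x in plex1]
    plex2_adj = [x * factor2 for x in plex2]

    print(plex1_adj)  # [1500.0, 1800.0, 1200.0]
    print(plex2_adj)  # [1500.0, 1950.0, 1125.0]
    ```

    After scaling, the reference channels agree exactly, and the remaining differences between channels reflect the samples rather than the run -- which is why the correction has to happen as its own step after quantitation, not inside the search engine.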

    So many papers comparing algorithms are like throwing darts at a dartboard. All of the darts completely miss the board. Despite that, the distance of each dart to the bull's eye is measured and the closest dart is deemed the "best". I think if you are not actually close to the bull's eye, then the result is more like something else that bulls make.