One of the things on my checklist to discuss with potential new collaborators for LC-MS based proteomics is how we currently run the instruments, and that really comes down to the current informatics challenges.
Data Independent Acquisition (DIA) is probably going to give you the highest number of peptides and proteins and the best quan.
Then why do anything else? Because the other one, Data Dependent Acquisition (DDA), is way more likely to identify cool PTMs.
I'm convinced this has very little to do with the instruments or the instrument methods - we just haven't sorted out the informatics yet. Probably because
1) We don't really understand the whole "proteome" (it's fun to forget that whole dark matter thing, the "WTF are all these other peptides?" problem, but I think it stresses out the neural networks), and
2) All our deep learning tools for retention time, ion mobility (where applicable), and fragmentation patterns are trained on unmodified peptides. RT is a big problem. For real, what do you know about how a PTM shifts an RT? A phospho or a GlcNAc makes the peptide come off earlier? What about an oxidation?
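To make point 2 a little more concrete, here's a toy Python sketch of the question: strip the modification tags off an identified peptide, predict an RT for the bare backbone, and look at how far the observed RT drifts. Everything here is a placeholder, not anyone's actual tool: `predict_rt` is a crude Kyte-Doolittle hydrophobicity toy standing in for a real trained model, and the bracketed modification notation and example numbers are made up for illustration.

```python
import re


def strip_mods(modified_peptide: str) -> str:
    """Drop bracketed modification tags, e.g. 'PEPT[+79.966]IDES' -> 'PEPTIDES'."""
    return re.sub(r"\[[^\]]*\]", "", modified_peptide)


def predict_rt(sequence: str) -> float:
    """Placeholder for a real RT predictor trained on unmodified peptides.

    This is just a Kyte-Doolittle hydrophobicity sum mapped onto a fake
    gradient, NOT a real model.
    """
    kd = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4,
          "H": -3.2, "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5,
          "P": -1.6, "Q": -3.5, "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2,
          "W": -0.9, "Y": -1.3}
    return 20.0 + 0.8 * sum(kd.get(aa, 0.0) for aa in sequence)


def rt_shift(modified_peptide: str, observed_rt_min: float) -> float:
    """Observed RT minus the RT predicted for the unmodified backbone.

    A big negative shift is what you'd naively expect for a phospho or a
    GlcNAc (more hydrophilic, elutes earlier); oxidation is less predictable.
    """
    return observed_rt_min - predict_rt(strip_mods(modified_peptide))


if __name__ == "__main__":
    # Hypothetical phosphopeptide eluting earlier than its unmodified backbone.
    print(f"delta RT = {rt_shift('PEPT[+79.966]IDES', 9.1):.1f} min")
```

That delta is exactly the number our current models can't tell us much about, because nothing like it was in the training data.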
You know what you need? Shots of Monster!
Nope. Wrong. That's something else.
Zero-shot RT prediction with MoSTERT!
There are lots of RT prediction tools out there, so why is this one worth typing about? Check this out. This is what the program is thinking about the new PTMs it finds.