Thursday, July 18, 2019

ABRF GenomeWeb Follow-up.


If you missed the ABRF GenomeWeb talks we did the other day, you can still watch them on demand by going to this link and registering. Since my animations didn't work, you can get my slides with working animations here.

We went a little long and didn't get to all the questions. I took screenshots of the ones we missed and have been working through them.

Q1: Is there something like RAWMeat that can monitor specifics of instrument performance longitudinally?

A1: Yes. There are several, but these two are my favorites:

RawBeans

(New!) QCloud

AutoQC and SProCoP have to be mentioned as well. There's loads on this dumb blog about them.
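
If you just want a quick look at drift without installing anything, the basic idea behind all of these tools is simple: pull a couple of metrics out of every QC run and plot them against acquisition date. Here's a minimal sketch of that idea in Python -- the file name (qc_metrics.csv) and the column names are placeholders for whatever your own QC export contains, not the output format of any of the tools above.

```python
# Minimal longitudinal QC sketch -- not any tool's actual output format.
# Assumes you've exported per-run metrics into a CSV with hypothetical
# columns: run_date, peptide_ids, median_ms1_injection_ms.
import pandas as pd
import matplotlib.pyplot as plt

qc = pd.read_csv("qc_metrics.csv", parse_dates=["run_date"])
qc = qc.sort_values("run_date")

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 5))

# Plot raw per-run values plus a 7-run rolling median to show drift.
for ax, metric in zip(axes, ["peptide_ids", "median_ms1_injection_ms"]):
    ax.plot(qc["run_date"], qc[metric], "o", alpha=0.5, label="per run")
    ax.plot(qc["run_date"],
            qc[metric].rolling(7, center=True).median(),
            "-", label="7-run rolling median")
    ax.set_ylabel(metric)
    ax.legend()

axes[-1].set_xlabel("acquisition date")
plt.tight_layout()
plt.show()
```

The dedicated tools above do far more (metric extraction straight from the RAW files, control limits, alerts), but a plot like this is the core of what "longitudinal monitoring" means.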

Q2: TMT vs. DIA with today's instrumentation. How do they compare?

A2: It sounds like Dr. Schilling's group is working on a massive and comprehensive comparison that we should watch for, but this is a great recent paper where time constraints were used as a parameter:



Q3: Can you use a Q Exactive Classic for DIA? 

A3: Totally! Is it as fast as the high-field instruments? No, but once you adjust your method for that limitation you're set (there's a rough cycle-time sanity check sketched below). Here is a great example:


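If you want to sanity-check a QE Classic DIA method yourself, the adjustment is mostly arithmetic: one survey scan plus all your DIA windows has to cycle fast enough to give you enough points across your chromatographic peaks. Here's a back-of-the-envelope sketch -- the transient times are the commonly quoted Q Exactive values, and the per-scan overhead and 30-second peak width are assumptions you should swap for your own numbers.

```python
# Back-of-the-envelope DIA cycle-time check -- illustrative numbers only,
# not an official method template. Transient times are the commonly cited
# Q Exactive values (~64/128/256 ms at 17.5k/35k/70k resolution);
# per-scan overhead and your real peak widths will vary.
TRANSIENT_MS = {17_500: 64, 35_000: 128, 70_000: 256}
OVERHEAD_MS = 10  # assumed per-scan overhead (interscan time, etc.)

def cycle_time_s(n_windows, ms2_resolution, ms1_resolution=70_000):
    """One survey scan plus n_windows DIA MS2 scans, in seconds."""
    ms1 = TRANSIENT_MS[ms1_resolution] + OVERHEAD_MS
    ms2 = n_windows * (TRANSIENT_MS[ms2_resolution] + OVERHEAD_MS)
    return (ms1 + ms2) / 1000.0

def points_per_peak(cycle_s, peak_width_s=30.0):
    return peak_width_s / cycle_s

for n in (15, 20, 25):
    c = cycle_time_s(n, ms2_resolution=17_500)
    print(f"{n} windows @ 17.5k MS2: cycle {c:.2f} s, "
          f"~{points_per_peak(c):.1f} points across a 30 s peak")
```

The short version: on a Classic you trade window count (or width) and MS2 resolution against cycle time until you're back to a comfortable number of points per peak.
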

Q4: Is there a reason to generate spectral libraries now that Prosit is out there?

Uninformed Opinion 4: Scott Peterson did a really extensive analysis years ago (I have slides here somewhere), and in-house generated spectral libraries were ALWAYS better than any in silico model he tried. It wasn't even close. There are individual instrument, lab, and sample variations. Prosit is a big step forward. These new machine-learning-based libraries are better than anything we've ever seen, but I bet you that in-house generated libraries will still be better. I don't have data to back this up, and I hope the margin is small. I'll definitely go to Prosit first; instrument time isn't free. However, I'd comfortably bet $4.13 that when the dust settles and we see the inevitable 10-15 papers comparing the two over the next 18 months or so, in-house will come out at least marginally better. (I do hope I'm wrong, though.)

Q5: How much longer do you think that MS will be the dominant technology in proteomics? 

Uninformed Rambling Opinion 5: It depends on how we define proteomics. Are we talking peptide abundance? If so, then I don't expect MS to be the leader for much longer. Arrays are coming. Nanopores are coming. If we're defining proteomics as modified protein abundance and/or top-down, intact protein analysis and quantification, then I think we can safely say that MS will be the dominant force for the remainder of my career.

Arrays are neat and everything. And cheap. And they'll get better, but you have to know so much in advance to use them. I'd guess only a few more years before it's better/cheaper/faster to use arrays if all you want is protein abundance. Honestly, though, who cares about protein abundance? Really? Ribo-seq is getting a lot more accurate and faster/cheaper.

We might even see arrays replace phosphoproteomics. I'd have a party. Let's give phosphoproteomics to someone else, because, at the end of the day, we all know that LC-MS/MS isn't very good at it. Not really.

Where LC-MS/MS has a tremendous advantage is all the other PTMs that we've only barely explored: glycosylation, acetylation, succinylation, SUMOylation, ubiquitination, and our ability to look at all of them. And intact protein analysis on a global scale? Yeah -- it's getting closer all the time, and no one on earth wants that problem except for us.

