Tuesday, June 19, 2018

Ummm....can we identify people from proteomic samples...?


I didn't make it to a poster by some of these authors on the final day of ASMS, but it's something I've thought about at least once a day since. You can check out the study here.

Actually, this one is also really interesting.

You know how, when they do the DNA forensics stuff, they really only look at small bits of the DNA -- just enough to rule out a random match? The early stuff was just restriction digests, running gels, and trying to match the gels. Actually -- found a good Wikipedia post here. I think the newer stuff is all SNP-based. Full DNA sequencing still runs about $1k -- which is a LOT of donuts -- far too expensive to be practical for law enforcement, even before you factor in the data processing.

If we're looking at small snippets of DNA -- what are the odds that small snippets of protein would have enough info to do the same thing? This team is building an increasing amount of evidence that they do, and that the information quite likely exists in our data already -- we can find it if we really look for it.

That's great for forensics -- but -- umm....


...
.....
........
...........
.............
...................um....

If this is true --- this could be seriously bad, right?!!?!  I don't know how things are in the civilized world, but in my country if you get sick you die when you run out of money to pay for your medical care. We have "insurance" companies that we pay our entire lives that make their fortunes on gambling that we won't ever get sick enough that they'll lose money on us. And if we do need their services it is in their best interest that they find a way to not pay for our medical care.

Now...this is obviously stretching it and might sound like I need an aluminum foil hat....

Yo...so what if I'm one of the control samples in a big study that is on MASSIVE that you could find through ProteomeXchange....and you could figure out from the RAW data that 1) that sample is from me and 2) in that dataset you can see that I have 2 of the single amino acid variants (SAAVs) in PARKIN1 that we generally consider sub-ideal to have? Could that one day be used by some enterprising insurance company scientist to build a case for why they don't have to cover my medical bills anymore?

Obviously this is just an extrapolation -- there are hundreds of variables here. Even if I had that level of insight going into a sample, it's pretty darned hard to prove SAAVs, and this forensic profiling doesn't sound real easy either; our coverage is extremely sample dependent...and on and on...but it's interesting to think about, right?

Also, thanks to legislation passed during our last federal administration here, bankruptcies due to health care costs are not nearly as common as they were (the #1 cause of bankruptcy in the U.S. in 2013). This legislation has been under a lot of fire recently, but it's still standing.

Monday, June 18, 2018

Most boring post ever -- Extending EasySpray Column life....


I disappear into my day job for about a week, don't answer any phone calls or emails or blog comments -- and this is the first thing that I get caffeinated enough to write...ugh...

The only fun thing about working on this post was trying to find a video of a Boston Terrier yawning. This is as close as I got.

I really like EasySprays -- if you find something easier, let me know and I'll switch to those -- but for the math to work out in their favor financially, I think they need to last at least 2 months, give or take.
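If you want to run your own numbers, here's a trivially simple break-even sketch in Python. Every price and lifetime in it is a placeholder I made up -- plug in your actual quotes.

```python
# Toy break-even math for an integrated column vs. a column + emitter bought
# separately. ALL numbers are made-up placeholders -- use your own quotes.
easyspray_price = 600.0      # hypothetical integrated column price
diy_column_price = 300.0     # hypothetical packed column alone
diy_emitter_price = 50.0     # hypothetical separate emitter
diy_lifetime_months = 1.0    # hypothetical lifetime of the DIY setup

diy_cost_per_month = (diy_column_price + diy_emitter_price) / diy_lifetime_months

# How long the integrated column has to survive to match the DIY cost rate.
break_even_months = easyspray_price / diy_cost_per_month
print(f"Break-even lifetime: {break_even_months:.1f} months")
```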

So far, 2 things appear to be the most critical: #1, doing high organic "column restores" periodically, and #2, doing the long and boring column conditioning protocol.

#1) Let's talk about column restore. (I might have made this name up, but the procedure is discussed in the so-boring-it-will-make-you-want-to-get-into-the-diethyl-ether (please don't) EasySpray manual -- the new one, at least.)

The logic is that the spray instability (which is the #1 reason we stop using them) often has more to do with the lines (including waste) leading up to the EasySpray than with the column itself. Column restore is picking up a full loop injection of organic and running it through the system, followed by holding high organic for at least 30 minutes.

The pickup is 18 µL of ACN, followed by this method (no equilibration required):
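The method screenshot doesn't reproduce well here, so as a rough stand-in, a restore run along the lines described above might look something like this. The times and percentages below are my guesses, not values from the manual.

```python
# Rough stand-in for a "column restore" method (values are my guesses, NOT
# the manual's actual numbers): inject the ACN plug, then hold high organic.
restore_gradient = [
    # (time_min, percent_B) -- buffer B assumed to be the organic side
    (0, 95),    # jump straight to high organic behind the 18 uL ACN plug
    (30, 95),   # hold high organic for at least 30 minutes
    (32, 5),    # step back down before the next sample
]
for time_min, percent_b in restore_gradient:
    print(f"{time_min:>3} min: {percent_b}% B")
```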


It really seems to help. If the spray stability starts getting wobbly, this generally gets me back to baseline -- or at least lets me lower the voltage back down a bit. I run it a couple of times a week, and -- anecdotally -- it appears to help keep the columns going beyond the 2-month cutoff.

#2 (In this order because I couldn't figure out where the photos went.)


I saw this in the manual, saw it was another 40 min of the nLC doing stuff and the mass spec not collecting data, and thought....


...okay...I guess I was more serious about it than I let on...cause I did do it....

Worth noting -- we've got 6 EasyNLCs hanging around and 2 of them have this script included. The older of the two has this version.


(P.S. If you have a service plan, you can request that service update your EasyNLC software. There may be earlier versions of the Easy1000 that can't be brought all the way up to the newest firmware, but our local FSE is investigating that, cause we have a request in now for all of ours.)

If you don't have this software or have a different LC, there is no magic to this script. It just starts running buffer A at low pressure and gradually steps it up to full operating pressure over 40 minutes. The idea is that if the beads or particles or whatever is inside these things got unsettled a little during their travel to your lab, this will help them get back where they all ought to be. Sounds like pseudoscience-y mumbo jumbo to me, but I'll probably keep using it cause -- in this anecdote -- column life is now longer than it was before.
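Since there's no magic to it, here's a minimal Python sketch of what the script is doing conceptually -- my paraphrase of the idea, not Thermo's actual code, and the pressure limit is a placeholder.

```python
# Conceptual sketch of the conditioning script (my paraphrase, not Thermo's
# code): run buffer A and step the pressure up gradually over ~40 minutes.
import time

def condition_column(max_bar=980, steps=8, total_min=40):
    """Step buffer A from low pressure up to full operating pressure.

    max_bar is a placeholder -- substitute your column's actual limit.
    """
    for i in range(1, steps + 1):
        target_bar = max_bar * i / steps
        print(f"Step {i}/{steps}: holding buffer A at {target_bar:.0f} bar")
        time.sleep(total_min * 60 / steps)  # hold each step equally long

condition_column()
```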


Thursday, June 14, 2018

The Lazy Phospho Normalizer!



Okay -- so there are easily a million smarter ways to do this. I know it. However -- I doubt there is a lazier way.

Here is the scenario:

You did a global phospho TMT study.
Because this is a time course long enough that transcriptional/translational regulation isn't something you can rule out, you also kept the flowthrough (or the part of the sample that you didn't enrich) and ran it as well -- a la SL-TMT or something similar.

How do you combine that data?

Like I said -- 100 smarter ways to do it, but last night around 8pm I would have given just about anything for an Excel sheet that said this --


Around 11pm I started channeling that frustration into watching some tutorial videos on the Microsoft Office website, and early this morning I woke up and finished this tool that does EXACTLY what I needed last night.
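For anyone who'd rather script it than click it, the core operation is just a join and a division. Here's a minimal pandas sketch -- the file and column names are hypothetical placeholders, not what my Excel tool actually expects.

```python
import pandas as pd

# Hypothetical inputs -- file and column names are placeholders, not the
# layout my Excel tool uses.
# phospho.csv:  one row per phosphopeptide, a "Protein" accession column,
#               plus one abundance column per TMT channel.
# proteome.csv: one row per protein from the flowthrough/unenriched run,
#               with the same TMT channel columns.
phospho = pd.read_csv("phospho.csv")
proteome = pd.read_csv("proteome.csv")

channels = [c for c in phospho.columns if c.startswith("TMT")]

# Join each phosphopeptide to its parent protein's flowthrough abundances.
merged = phospho.merge(proteome, on="Protein", suffixes=("_phos", "_prot"))

# Divide phospho signal by protein-level signal, channel by channel, so that
# changes in protein expression don't masquerade as phosphorylation changes.
for ch in channels:
    merged[f"{ch}_norm"] = merged[f"{ch}_phos"] / merged[f"{ch}_prot"]

merged.to_csv("phospho_normalized.csv", index=False)
```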

You can download it from my Google TeamDrive here.

As it says in the instructions -- if you make something easier and better (or already have one), please let me know. As always, happy to see comments regarding what could be fixed or improved. Can't guarantee I'll actually do it -- as I said -- this does what I want it to do, I'm just putting it out there in case it would help anyone else.

Also, if you're just writing me to make fun of the fact I could do this in R in like 3 seconds, I'll have you know that I've installed 6 copies of R and RStudio on this PC and I don't even know which icon on the desktop is the most current one. Keep that in mind. You could do it in 3 seconds; I'd spend an hour -- easy -- clicking the wrong desktop icon. "Where is the one that links to that cran thing..?"

Wednesday, June 13, 2018

MaxQuant goes Linux!


It's been possible for quite some time to run MaxQuant on Linux in different kinds of "virtual environments" and things. I know people who have been doing this for a while. These, unfortunately, have loads of overhead and sap your processing power.

MaxQuant having a true Linux version?  That's a big deal. Nature Methods level big deal? Sure... why not!


Monday, June 11, 2018

Tired of trying to find your PARPs? Cleave 'em off!


Do people still look for PARP inhibitors? I haven't heard of any in a while.

PARylation (the PTM that PARPs leave behind) is obnoxious because it's a polymer PTM. (This is a good review on it.) Unlike friendlier PTMs like ubiquitin, there isn't a convenient cleavage site coupled with a nice mass shift.

This new note at JPR shows a great way to get at the proteins that are PARP'ed (PARPy? PARPylated?) by getting down to the modified proteins and knocking the polymer off.


Much better idea!

Edit: I didn't try hard on the nomenclature at all. This link at Ribon pharmaceuticals explains the different types of these things.

Sunday, June 10, 2018

ProteomeTools takes on 21 different PTMs!!


Spectral libraries are coming back with a vengeance. Check out this great new study in press at MCP!

What were their limitations again?

1) The libraries weren't big enough?

2) Integrating library search into normal workflows wasn't straightforward?

3) There aren't enough PTM libraries?

1) ProteomeTools has already dropped 400,000 synthetic human peptides through their site and through collaborations with institutions like NIST. A LOT more are coming. Couple this with MASSIVE's new libraries and the treasure trove at NIST?

2) More on this later, I think. But more and more of our normal software workflows are becoming spectral-library compatible. Mascot supports them now (right?). I've seen two mentions of spectral libraries in MaxQuant in the literature, so that's coming, and all the DIA software is ready to go for libraries of all kinds.

3) NIST has had a huge human phospho library for years, sounds like MASSIVE has a ton now as well -- but ---


This team synthesized a ton of peptides with weird PTMs on them!! I'm sure they have released, or will release, the libraries -- but what might be more important right now is that the RAW files are already available via ProteomeXchange (PXD009449, here).

Is that really a crotonylated lysine you're looking at? Ever seen one before? (I'd never even heard of it until recently...) Sure would help if you could download a couple of RAW files with real ones in them and find out they look like this in HCD, right?


Better image from the RAW file itself (yeah -- that 152.107 is in virtually every MS/MS spectrum -- you know, just checking...):




This paper is an absolute treasure -- if you're European you probably know that Andromeda can make use of these diagnostic ions in the scoring algorithm. So...if 92% of all crotonylated lysines made a 152.1070 fragment ion, Andromeda can take that into account and help you weed out the false discoveries....how cool is that?!? I just went through PD, and even in the text editor interface for modifications in MSAmanda 2.0 I don't seem to have the ability to do that at all...(you can, however, get MSAmanda to preferentially score your new neutral loss masses, but it requires some finagling to get it right. Can't guarantee I've got it, but you edit those here in the Administration).
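If you just want to eyeball your own data for that ion, the check itself is simple. Here's a toy Python sketch (not Andromeda's implementation -- just a ppm-window test on a peak list; the spectrum format is an assumption):

```python
# A minimal sketch (not Andromeda's actual scoring): flag MS/MS spectra that
# contain the crotonyl-lysine diagnostic ion at m/z 152.1070 within a ppm
# tolerance. Spectrum format is an assumption: a list of (mz, intensity).

DIAGNOSTIC_MZ = 152.1070
TOL_PPM = 10.0

def has_diagnostic_ion(peaks, target=DIAGNOSTIC_MZ, tol_ppm=TOL_PPM):
    """Return True if any peak falls within tol_ppm of the diagnostic m/z."""
    tol_da = target * tol_ppm / 1e6
    return any(abs(mz - target) <= tol_da for mz, _ in peaks)

# Example: one spectrum that contains the ion, one that doesn't.
spectrum_a = [(110.0713, 1200.0), (152.1071, 8500.0), (175.1190, 3000.0)]
spectrum_b = [(110.0713, 900.0), (175.1190, 2100.0)]

print(has_diagnostic_ion(spectrum_a))  # True
print(has_diagnostic_ion(spectrum_b))  # False
```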



Wow -- as soon as I think this paper has stopped giving -- there is more stuff. If you're interested in any sort of weird PTMs -- this study should be on your desk. It'll be handy when it's formatted, but it's worth cutting 45 pages out of a tree right now --

HOLY COW -- the very last figure of the supplemental info casually shows you how to resolve one of the single hardest things I could ever think of trying to work with: there are different neutral loss patterns depending on whether a peptide is symmetrically or asymmetrically dimethylated....the more I think about that, the more I'm certain I probably couldn't manually sequence that without this information....but -- could I go back into the settings above and feed MSAmanda this information, and get it and ptmRS to resolve these correctly?

I can't believe how great this study is.  I don't use this .gif lightly...


Saturday, June 9, 2018

Trypsin and urea? Keep it at room temp, y'all!


This makes a lot of sense -- and makes me really happy since I only recently found out where our incubator is (room temp overnight digestion, FTW!)



Friday, June 8, 2018

ASMS 2018 takeaways!


I didn't get to see much of the final day of ASMS as I traveled back, but the two of us from our group who got to go are working on a wrapup for those who couldn't. It'll honestly take months for me to sort through all the notes and for the Google Scholar alerts to stop coming in for all the cool stuff I don't feel I can talk about yet.

My biggest takeaway --



Our field is still beset with difficulties -- but the instruments don't seem to be the problem right now. How can they be, with so many people achieving near-theoretical proteomic coverage? With thousands of PTMs of any type seeming like something that even I could get?

This ASMS felt like -- time to confront our biggest shortcomings, like:

1) The fact that proteins are really hard to get out of cells easily and consistently

There were sample prep robots everywhere! And high tech new sample prep kits, like the S-Trap, the awesome thing Pitt is working on that I can't find a note of yet, and some new stuff coming from the vendors themselves (finally?) -- all of which seem tilted toward being automation compatible.

2) The big one -- How immature our informatics are -- and how we fix it!

I think I'd finally been lulled into some sort of complacency that our data processing pipelines are just fine. A major emphasis of ASMS on the proteomics end is that they aren't. They're better than they have ever been, but this explosion in scan speed and data complexity is showing that some of our core early-2000s data processing assumptions are in serious need of updates -- but really, really smart people are working on it. We probably don't ever need to get as sophisticated as the genomics people -- our raw instrument output is easily 1,000 times less noisy and more accurate than theirs (have you looked at raw output from these next gen things?!?) -- but we've got some ground to make up.

(Images like that do make me feel better. I can always manually sequence my peptide if I have to, best of luck with those 4e7 transcript reads)

(Another way to make myself feel better, I went to some metabolomics talks -- they're trying hard and making up ground, but they are way behind where we are, partially due to the smaller size of their field, some really poor assumptions that were made in the past, and --most importantly -- some really unique problems they face. "Oh-- cis- and trans- makes a huge difference here? Great! Best of luck with that! I'll...umm...check..back...later...")

3) Glycomics and glycoproteomics are coming -- and are about to become a primary thing we hear about. I'd have to stop and check the signs while walking around: "yup -- I'm still surrounded by posters about sugars..." Everything is glycosylated -- and glycans totally suck to work with -- but it wouldn't be crazy to suggest that they are more phenotypically important than unmodified proteins.

New reagents, new columns, new methods, new software. It's all going to help when that scientist knocks on your door and has no intention of letting you cleave the sugars off and throw them into the waste bin. "Oh -- d- and l- makes a huge difference here? I'll...umm...."

4) Proteogenomics may finally be something that I can do! Surely, out of all of these new tools there has got to be at least one -- at least one -- that I'm smart enough to figure out how to use, right? I hope more details are on the way soon!

Thursday, June 7, 2018

ASMS2018....Day 3....


ASMS 2018 described in too many words? What about this -- amazing, dedicated, super nice, and scary brilliant people everywhere, all unified by some strange desire to use vacuum chambers and magnets and electric fields and stuff to somehow make the world a better place.

ASMS described in <140 characters?


Day 3 was awesome. My day started with Michael Shortreed describing his team's MetaMorpheus software. If you haven't used it, I strongly recommend you check it out. If you were awake at the cursed hour before 9am (whatever it's called -- I don't even know or care to find out) and got to see it, I'm sure you know why.

I have a problem (or two, I guess...) -- but the one I'm thinking of is something about a processed protein that gets stuck in something weird, and I need to know how much of that protein is stuck in that weird thing (like -- up to what amino acid is it stuck in this weird thing? does that make sense? -- this is often about the level I understand around work), and the talk on DiPS presented the solution I've been looking for! You can read about it here. DiPS relies on multiple rounds of microwave-assisted acid protein cleavage and de novo sequencing to fully sequence proteins. So -- if your protein is stuck in something weird up to an unknown point, maybe you can figure out exactly where the sequence ends!! This digestion isn't new, of course, but the strategy for overlaying the fragments from different runs is EXACTLY the right twist to make this work (I think!)

Random favorite stuff:

Hong Wang (pretty sure he's in Junmin Peng's lab at St. Jude's) presented a comprehensive proteogenomic study on how to deal with rare (low n) diseases that was a serious WOW. Had to text colleagues to come see it (including an infamous MD I work with, so that I could verify that it was as smart as I thought it was), but it hasn't been published yet. Can't wait to talk about it, though. Google Scholar alert set!

Mark Chance showed CrossTalker -- it isn't out yet, either?!?!  I've had the fortune of getting to spend some time with Dani Schlatzer in the phenomenal Case Western Proteomics Center. They have amazing in house tools -- and it looks like some are coming to the outside world. CrossTalker is an open network and pathway analysis tool that has some really cool twists. I hope to use it soon.

Amber Moseley (who was too flooded with interested people for me to catch up with) showed an application of thermal profiling for studying mutations!!  I swear, I always think about this tool as one of those things that is amazingly powerful -- but is looking for an application. Amber's team at IUPUI found one and it's scary smart.

Pierce is releasing a TMT QC reagent!!  It isn't on their site, but -- how useful will that be?

I got an inside perspective of what we'll see out of CPTAC 3 from David Clark at JHU -- just the controls they are using are worth a serious publication. This is going to be an amazing dataset to mine forever. (Hurry up and get it done, y'all!!)

The middle of my day was software demos and looking at sample prep robots. Paul Stemmer just upgraded his robots at Wayne State to the Agilent AssayMAP and had nothing but good things to say -- given the throughput and quality of work that comes out of that lab, that says something. However -- that robot is VERY sophisticated, and in robotics that translates to pricey. I know a number of people who are loving the cheaper (though write-your-own-Python-scripts-to-use-it...) Opentrons route.


Okay -- these are clearly some of my favorite people to talk about (do other proteomics people have real life PIRATE STORIES?!?!), but Mak Saito's team at WHOI has been doing amazing stuff profiling the metaproteomics of the sea. Jaclyn from that group (she probably doesn't need a last name if you spell her first name correctly) detailed the study of huge ocean zones where there is virtually no dissolved oxygen -- huge areas completely full of some of this planet's most ancient (pre-Oxygen Catastrophe!) life -- and how she's studying it with proteomics. Wait on that paper -- I suspect there is a reason she's getting NASA funding, for real.

(Probably joking!!)

However -- they've got loads and loads of data from the ocean, and they've been working to develop standards for oceanography data collection and dissemination. The Ocean Protein Portal is the first attempt at the dissemination part.

Check this out -- you put in a peptide sequence (partial or intact, or an identifier) and you can see if they've collected information on it and its abundance -- at different spots on their global explorations. (P.S. I've provided feedback that the word "cruise" is not a good PR decision for what are extremely long and dangerous-sounding deep sea missions.)


This obviously isn't my field, so I can't really do anything smart with it except punch in peptides, but for the people who are in metagenomics/metaproteomics -- how lucky are they to have such a cool resource to dig into?!?!?

Woooooo!!!! ASMS, yo!!

Wednesday, June 6, 2018

ASMS2018 -- Ben's even more useless recap of Day 2



Hey! Are you looking for a postdoc who can make great proteomics informatics posters? Is quality of handwriting not the highest priority? 😋 You can reach Dongxue (she's in Kuster's lab! I hear they're pretty good there.) here.

With that out of the way (clever idea, Dx!), time to dig out the ASMS 2018 day 2 notes. Besides cracking 20,000 steps again today...umm...what did I do again?

Wow -- Ben Garcia's acceptance speech for the Biemann Medal -- wow -- informative, funny, sincere


-- included a very detailed Spider-Man reference (there is a histone thingy on that chalkboard in the movie) -- what more could you ask for? Seriously... one of the best talks I've EVER seen about anything. I still don't quite understand what a histone is or does, but it is clearly a problem that nothing currently in existence is better suited to conquer than our technologies.

This was PROTEOGENOMICS day! How do we finally merge all this transcriptomics and proteomics data together? There are crazy powerful tools out, and more coming.

Some highlights!

PepQuery is live here and can build off of VCF, BED, GTF, or WTF files (one of those is made up; the rest are things your genetics people talk about) -- it runs peptide-centric comparisons off of those file types.

On that same tilt -- ProteomeGenerator is also on bioRxiv here. More tools for integrating all this data together, please!

Side note --
RawMeat may finally be coming back! It should be investigated. You might LOL when you find out what it's called (PDF download of the poster is here).

Something ABSOLUTELY WORTH CHECKING OUT -- PeakStrainer --> it reads directly from RAW files and does some amazing stuff in terms of discerning peptide signal from noise based -- not on intensity -- but on frequency. Super smart....
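My read on the idea (a toy sketch, not PeakStrainer's actual code): real signal recurs across scans and random noise doesn't, so you can keep peaks by how often they show up rather than by how big they are.

```python
# Toy frequency-based noise filter (my illustration of the concept, not
# PeakStrainer's implementation): keep m/z bins that recur across scans,
# regardless of intensity -- signal persists, random noise doesn't.
from collections import Counter

def frequency_filter(scans, bin_width=0.01, min_scans=5):
    """scans: list of scans, each a list of (mz, intensity) tuples.
    Returns the m/z bins seen in at least min_scans scans."""
    counts = Counter()
    for peaks in scans:
        # Count each bin at most once per scan.
        counts.update({round(mz / bin_width) for mz, _ in peaks})
    return {b * bin_width for b, n in counts.items() if n >= min_scans}

# Tiny example: 445.12 shows up in every scan; the intense peaks are one-offs.
scans = [[(445.12, 1e4), (300.01 + i, 1e6)] for i in range(10)]
print(frequency_filter(scans, min_scans=5))  # -> only the ~445.12 bin survives
```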

Back to the proteogenomics -- besides the tools, there were amazing examples of large integrations, from tumor samples to patient cohorts to using the two omics together to tell human from mouse in xenograft models. With better tools, maybe all of this is going to become easier (it still seems tough and scary to me...but there are some great templates coming).

Just to double back to the very top -- Dongxue's poster was on healthy tissues, something I feel like we sometimes forget we need to profile. If she did that work (it doesn't appear to be out yet) and you've got a position somewhere on earth that isn't too cold, it would be my recommendation you shoot her an email -- if only to learn about what she's working on. Suuuper cool.

Okay -- coffee -- more informatics -- go!!


Tuesday, June 5, 2018

ASMS2018 -- Ben's totally useless recap of Day 1!

Cause I need something mindless to do while staring at this coffee -- here is some useless rambling about ASMS2018 day 1

Actually, I'm going to start with this awesome Tweet.




Great thinking points, right?!?  This should be hung up somewhere....

A random list of observations:

1) Be sneaky if you have your kids with you and you want to walk around posters. No children are allowed in the poster area. My best guess is that there is a fear they will die of boredom and the San Diego convention center will be held liable.
-I'm no expert on being sneaky, but the poster room has 4 entryways with doors that only lock from the outside. There is only convention center staff at the single main entrance. Have someone (me, if you see me) walk to a side door and let you and your kids in. Your kid gets a paper cut and that's on you, though.

2) Shimadzu has the best food -- their TOF still only has 100 resolution in high mass mode, but you can't have it all. (I can make this joke -- my work is getting a second one)

3) I learned that despite all of our differences, all mass spectrometrists seem to be unified in their hatred of Budweiser (whatever that is) -- definitely the most common topic of the last 3 hours of the conference.

OH YEAH THE SCIENCE.

Informatics is king! We're continuing to realize our limitations on this end, and more tools to improve everything are coming. One thing I can't wait to post about -- JudgePRED (best name ever?) isn't out yet -- watch out -- it's cool. Triqler is a much smarter way of doing FDR (integrating peptide and quan FDR) and the Python code is all at that link.

Then Bill Noble came on stage and gave one of the scariest talks I've ever seen. I think I can talk about it because he presented it at RECOMB last year and the notes are in Cell here.

So....if you randomly shuffle your decoy database every time you process the same set of global proteomics data --- you...umm...can get terrifyingly different results.

He showed a couple of extreme examples -- using both large datasets and large and small FASTAs -- just to see if anyone would run out screaming. If he used a FASTA with fewer than 1,000 protein entries and then reshuffled the decoys (changing NO other parameters), the peptide IDs could differ by 20%(!!!) from experiment to experiment. (This is real footage of what happened next)



Did you know MCP doesn't allow you to use global data searched against FASTAs of <1k entries...? This is why. His team has a solution coming, and it might even be available in Crux now. For now -- umm -- let's all ignore it and not let the collaborators know.
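To make the problem concrete, here's a tiny Python sketch of why the results can wander (my illustration, not Noble's code): every shuffle seed gives you a different decoy database, and with a small FASTA the target/decoy score competition can land differently each time.

```python
# Each shuffle seed produces a different decoy database for the same targets.
# With small FASTAs, that alone can move the peptide ID list around.
import random

def shuffled_decoy(sequence, seed):
    """Build one decoy by shuffling a protein sequence with a given seed."""
    rng = random.Random(seed)
    residues = list(sequence)
    rng.shuffle(residues)
    return "".join(residues)

target = "MKWVTFISLLFLFSSAYSRGVFRR"  # toy sequence

# Two seeds -> two different decoy databases, NO other parameters changed.
print(shuffled_decoy(target, seed=1))
print(shuffled_decoy(target, seed=2))
```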

I've scanned the other talks and -- I'm seeing a lot of stuff that isn't out there yet.

There was a great PACOM talk that really showed what it can do -- and it was delivered with much more confidence than I could have managed with Jürgen Cox sitting in the front row. 😅 We all REALLY LOVE MaxQuant and Perseus, and the fact that we're trying to build things above this lofty bar is a testament to that. PACOM makes some reeeeeaaaallly pretty plots, tables, and graphs.

Instrumentation-wise -- I'm a little behind. I got a personal demo on the Q Exactive UHMR last night -- and I totally want one -- but it's gonna be a tough sell for me. This is the top down instrument that the field has been needing. It's an Orbitrap that scans up to 80,000 m/z (not m+H+ -- m/z!!), can do 280,000 resolution, has magical front end trapping (essentially making MS3 feasible in a Q Exactive!), and doesn't require the double calibration stuff that the EMR does. Native top down -- and they have data for mega-dalton top down work. Is it the most powerful top down, intact mass characterization instrument ever? Yeah -- without a doubt. Can I find a biological problem that would mean we absolutely needed something this awesome? Working on it....I'd love ideas....

There is another contender in the DIA software space -- Scaffold is coming in with a really nice interface. I'm really in love with Pinnacle from OptysTech -- it can do a lot of other things I really, really need (untargeted discovery! i.e., a great SIEVE replacement) -- but it's always great to have options. Both are doing live demos here if you're looking at getting into this field.

Coffee down -- time to hit Day 2!

Sunday, June 3, 2018

The Skyline User meeting recap!


The Skyline User Meeting was 🔥🔥🔥🔥🔥🔥 (I'm not sure what that means, but context clues from Twitter and Reddit suggest to me that I'm using it right).

How do you have a successful users' meeting for awesome open source software? Maybe you have it in an amazing old public library -- you get support from vendor sponsors to provide really good food, and -- most importantly (of course) -- maybe you stack a ton of talks on things that we didn't know this great software on our desktops could even do!! Even though a vendor had offered a free cruise of the harbor for lunch, enthusiastic Skyline users filled this less flashy event.

Talk 1: Chris Ashwood of Medical College of Wisconsin (and Head Editor of Glycomics Methods for www.massspectrometrymethods.org)

Glycan analysis and characterization with Skyline! The first paper came out recently, but he showed that this might be the tip of the iceberg. Even better maybe? What about a fully developed internal/QC standard? For glycomics? Yeah!

Talk 2: Paul Auger at Genentech

Automated QC with Panorama for peptides AND small molecules!! Okay. I knew Panorama was out there. I knew that Mike Bereman's work with sProCoP had been integrated entirely into Panorama. What we didn't know? That you can set up a local Panorama server (in case your instruments aren't accessible to the outside world!) and get all the benefits of Panorama inside your network!

Benefits like: real time QC/QA on ALL of your instruments.
Know that an ion transfer tube is gonna need a new one dropped in -- before you're 2 SDs low on sensitivity! This talk required an email to our director (we need him to buy us a server so we can put Panorama on it).

Talk 3: Robert Ahrends -- LipidCreator/LipidXplorer + Skyline!!

Have you even heard of this? This open(?) lipid software looks ridiculously powerful, and the developers got in touch, and now it's a Skyline-powered quantitative lipidomics platform! Is there anything better than adding new applications to a piece of software you already know really well? Sure, you'll need to learn something new -- but you aren't starting from scratch, and that gives you a massive lead on getting that application going in your lab!

Talk 4: Don Davis of Vanderbilt and developing Clinical Assays in Skyline!!!

This was a great lightning talk where the real implications of the work this team at Vanderbilt is doing didn't hit me until about 3 minutes into the next talk. For a lot of us, our end goal is new biomarkers or diagnostics -- and Don showed how feasible moving what you've developed in the lab into the clinic could really be -- without the added complication (again) of having to learn something new. With Skyline piling on new auditing features (which Brendan assures us you can actually see -- just not on a projector 😜), why not? If all the necessary security protocols are in place -- why couldn't we develop an assay and just port it right into the clinic -- without having to re-optimize with new software??

Talk 5? Yao Chen from Catalent -- what about antibodies?!?  

Heck, why not. Why not use Skyline as a (digested) antibody characterization system?! Apparently it can do everything else. There are some really important reasons to use more targeted approaches for mAb characterization, but I've been trying to pull his references and I'm not sure what has been published, so I'm gonna stop that one here.

Talk 6? Kristen Geddes from Merck -- Panorama QC of a diverse portfolio of instruments doing all sorts of things.

If Conor and I weren't already sold on Panorama, this great talk would have done it -- Merck isn't a single vendor kind of place. Their LC and MS systems aren't even single vendor -- but QC was shown to be powerful and well automated, with a local instance of Panorama handling the whole thing! Yeah -- we need to get this set up.

Talk 7? Buyun Chen from Genentech -- taking a deep look at peptide and protein quan. 

Again -- Buyun gave a great talk, but I'm not sure how much has been published. She raised some really important questions (with some solid data) about peptide and protein quan -- and how Skyline can really help you find those peptides that TRULY reflect the amount of the proteins present. Some basic fundamental questions are being explored and if I see that this work is published, it's going to end up here for sure.

Talk 8? Lindsay Pino -- SIGNAL calibration in quantitative proteomics.

I think this is being written up (maybe I'm just being lazy), so I'm not going to go into it much -- but -- what if you could do a PRM on your Orbitrap Elite AND compare it to PRM on your Orbitrap Fusion? (I'm using this because it's an extreme example -- I often get areas >1e11 on the Elite, and I don't think numbers that big can even be displayed on the Fusion.) Signal calibration is something we're going to be talking about in the future. Lindsay demonstrated with some amazingly diverse datasets that 1) signal calibration is critical and 2) she can show us how to do it.

Wrap up:

Brendan MacLean filled in some interesting history on the almost 10 years of Skyline -- where it is -- and where it is going next.

Worth noting, maybe -- due to some rearrangements in the grant process, NIGMS no longer has a separate category for software like Skyline. The grants supporting this software are going to need to be fully competed (against things that aren't software). User feedback to the grant review boards, and to the vendors who chip in to support the process, will likely become more important as time goes by. Not to end on a downer -- this was an AWESOME workshop -- but I'm sure it would be easier for the developers if the 10,000 of us that have this software had each paid $15,000 for it. So it's worth keeping in mind we may have to do surveys once in a while to keep Skyline free, open, and improving.



Friday, June 1, 2018

Blog is on hiatus for ASMS 2018


I have a tendency to blog too much at ASMS -- which sometimes gets me in hot water, so the blog is on hiatus this week.

I've got to focus on learning everything I possibly can and not causing any trouble at all this year.



Even better? According to the Yelp reviews, I don't think people will feel safe enough in my neighborhood to wake me up by sliding marketing material under my door this year!



SEE YOU THERE!!!  I'm the guy who looks like he's been sleeping outside.

I am, of course, joking -- about being on hiatus, anyway!

I applied for official press credentials for a reason.