Saturday, June 30, 2018

Battle Royale! 7 Serum depletion kits. Which is best?!?!


As fast as things have been changing in proteomics -- and with all the upstart little companies and their cool technologies -- it's hard to keep up. If you haven't had to do plasma/serum proteomics in a while -- this might be the ultimate guide right now.

Yeah -- I know it's ElfSeverer -- but the pre-formatted version is open access -- and the supplemental table is an Excel document that you can download from the abstract that provides a lot of the results.

I'm definitely not going to complain about anything since this great team in -- wait. what? Google Images informs me that they are on an island in the Mediterranean that looks like this --

-- this is correct -- good for them!

Back on topic -- this study was obviously a lot of work and it sure is going to save me time the next time I have to think about which depletion kit to use for what.

Friday, June 29, 2018

How to get data from Proteome Discoverer 2.x results uploaded to ProteomeXchange!

Congratulations! You're about to submit a manuscript. I, quite seriously, probably can't wait to read it. Shoot me an email if it's really smart. I can't guarantee I'll put it here, but I'm always looking for tips on what to read next.

Okay -- now you've got to get that data uploaded somewhere so the reviewers and nerds with insomnia can download and look at it, right!?!?

If you haven't submitted data in a while and you're using a new version of Proteome Discoverer, you might rapidly find that the tools you used for PD 1.x require you to upload a .MSF file.

Your results in PD 2.0 or newer are actually something called a .PDresult file. (You have an MSF file, but it isn't your FDR filtered results). Don't fret. You can directly export the data out of PD into mzIdentML (which is often shortened to .mzId -- just to keep you on your toes.)

Open this revolutionary dataset that you have generated in PD 2.x (2.2 SP1 pictured), apply your filters, and export your mzIdentML or mzTab. Boom! Done!

Add that to your complete submission.

Oh -- this is a great recent shortcut as well -- you can, of course, still directly upload to your storage site of choice, but if you are using a ProteomeXchange partner, you can shortcut by using the PX Submission java GUI. You can download it here.

Thursday, June 28, 2018

MZmine -- Find changing features in any LC-MS dataset!!

I left a post here back in September mostly to remind myself to check out this software. Then I found that post while looking for a piece of software that does this. Who needs a memory? Not me!


You can get it here.

Did you think the idea behind SIEVE from Vast (and later Thermo) was great? Let's look at what is changing from one LC-MS run to the next -- and THEN let's try to ID those things.

Did you also try SIEVE and wished it was: More stable, faster, more stable, had more features, was more customizable, more stable and didn't make you frightened to leave your 32-bit PC alone in your house when it was aligning big files?

MZmine is everything that SIEVE could have been. I can be mean about it now that it's been retired, right? Worth noting -- Compound Discoverer can do everything SIEVE could do (and much more) -- for small molecules. It doesn't work well with peptides -- in my hands, at least.

There are two downsides to MZMine that I've found already --


You can pick up most software for LFQ analysis and it will make a lot of useful assumptions for you. Like -- you'd like to align your peaks the one way it knows how to, filter them the one way it knows how to, and merge them -- and it has a logical order for doing all those things.

In MZmine, your destiny is your own.

Ummm...what...?


In MZmine -- you need to do some thinking -- maybe lots of thinking. You need to take your data and work through each processing step yourself. You appear to be able to do stuff that might not make sense at all. But you can always remove what you've just generated. I'm currently pressure testing that functionality.

Filtering? There's 5 of those. Peak detection? 8. Alignment? At least 3. 

Once you develop a pipeline that isn't crazy at all, you can build a batch mode to walk your data through all these steps. Okay -- I'll assume that's what you can do -- cause for all the attempts I've made to make everything turn out right, Elizabeth is still being obstinate about Mr. Darby and Captain Bingsley seems like he'll never marry Lydia -- who was what? 14? Blech. What I mean is that nothing is working out in anywhere near the way I (or old Mr. Bennett) want it to.
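For orientation only -- these are NOT MZmine's actual modules or API, just stand-in functions -- here is the kind of step order that tends to make sense in any feature-finding pipeline: detect peaks per run, filter out noise, then align across runs:

```python
# Stand-in functions only (not MZmine's API) showing a defensible order:
# per-run peak detection, intensity filtering, then cross-run alignment.

def detect_and_filter(run, min_intensity=1000):
    """Pretend peak detection: keep (m/z, intensity) pairs above a floor."""
    return [(mz, i) for mz, i in run if i >= min_intensity]

def align(run_a, run_b, mz_tol=0.01):
    """Naive alignment: pair features whose m/z agree within mz_tol."""
    matched = []
    for mz_a, int_a in run_a:
        for mz_b, int_b in run_b:
            if abs(mz_a - mz_b) <= mz_tol:
                matched.append((mz_a, int_a, int_b))
    return matched

run1 = [(445.120, 5e4), (512.300, 8e2), (613.200, 2e4)]
run2 = [(445.121, 4e4), (613.205, 3e4)]
aligned = align(detect_and_filter(run1), detect_and_filter(run2))
print(len(aligned))   # -> 2 features survive filtering and align across runs
```

The point is just that order matters -- align garbage peaks and you get garbage alignments, which is exactly the kind of decision MZmine leaves to you.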

Did I successfully interest you in checking out MZmine? Or did I just bring up half-forgotten memories of creepy old books you had to read in 9th grade? If it's the former -- there's tons of documentation on MZmine. Here is the manual I just found (PDF download). This is not a piece of software where you can succeed by randomly pushing buttons.

Wednesday, June 27, 2018

Need a cool fall class project? Have an extra turkey?

Is anyone else jealous of the cool stuff undergrads get to do in class these days? I've met a number of people recently who used MS in undergrad classes and have done some really serious science.

Last year,  Rich Helm's BioChem 4115 at Virginia Tech did this one (all the students are even listed in the manuscript)!

If you're looking to mix up your lab class in the fall, this might be an interesting way to do it.

Tuesday, June 26, 2018

Complex Portal -- New superpowers for UniProt!

It can be hard to keep track of all the awesome information and links that are around EMBL and UniProt. There is so much information now that sometimes I can't even find what I know I'm looking for when I know I'm on the right page (this is a good thing even if it makes me feel just a little crazy sometimes).

ComplexPortal is another new UniProt power -- looks to me like it's easier to go to ComplexPortal and then follow the UniProt links back to get specific information on specific proteins.

Bonus: If there is lots of information on your complex it'll get all wobbly and animated for a second and make you wonder when you left lab last night.

Pro Tip: My web browsers at home have ad blockers installed. I didn't have to whitelist this page -- I only needed to click links twice to get them to load -- but if you aren't getting anything to show here, you might need to allow popups from EMBL.

2 dimensional online fractionation with EasyNanos?!?!?

I have honestly never considered doing this. And can't think of a good reason for doing so now, but in case one comes up -- at least I now know it is possible -- and there is loads of proof. 

Check this out --

What the Heck!?!  These authors take our boring old inflexible EasyNLC and without adding pumps or anything -- turn it into an online 2D system. Somehow they get it up to 21 hour separations -- moving their QE Classic from 3k proteins from a HEK293 digest to closer to 8,000.

I found this while looking for something in no way related. Now that I know about it, however, this isn't the first or last time someone has pulled this off. It's just the one that Google Images knows about (if you hit a paywall -- I definitely didn't tell you that you could maybe get the paper if you went through Images, because I'd never ever recommend circumventing a paywall!)

Now that I know about this -- I found another instance of people doing something similar. This one is from 2010.

Monday, June 25, 2018

Common contaminants in mass spectrometry!

I thought for sure this was on this dumb blog somewhere...but I can't find it...

Hey -- and if anyone has ever updated this ultimate guide to contamination from 2008 -- please let me know!

Got something weird coming off your column? Chances are it's in this thing!

The paper isn't open access so if you can't get to it -- do I ever have good news for you!

Boston College Chemistry Department hosts the supplemental lists. If you use them, please cite this amazing work, but you can get the full lists here!
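If you'd rather script the lookup than eyeball the tables, the logic is just a ppm-tolerance match against the list. A minimal sketch -- only two entries shown here, both the widely quoted polysiloxane background ions; pull the real values from the supplemental lists themselves:

```python
# Tiny contaminant lookup by ppm tolerance. The two masses below are the
# commonly cited singly charged polysiloxane ions; the real lists are far
# longer and should come from the supplemental tables.

CONTAMINANTS = {
    445.1200: "polysiloxane (ubiquitous ambient ion)",
    371.1012: "polysiloxane-related background ion",
}

def match_contaminant(mz, tol_ppm=10.0):
    """Return a contaminant name if mz is within tol_ppm of a known ion."""
    for ref, name in CONTAMINANTS.items():
        if abs(mz - ref) / ref * 1e6 <= tol_ppm:
            return name
    return None

print(match_contaminant(445.1203))   # -> polysiloxane (ubiquitous ambient ion)
```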

Saturday, June 23, 2018

Worse than boring post -- Trying to find specific MS/MS scans in Xcalibur!

Imagine this strange scenario -- you have a peptide that you discovered with your search engine and you just want to take a quick look at it in Xcalibur.

You have the assigned monoisotopic mass from your search engine -- all the way out 4 decimals.

You are easily able to find it by XIC with ranges like this --

And now you want to take a look at the MS/MS spectra in Xcalibur -- you open up ranges for the MS/MS spectra triggered -- and it isn't there -- there is some stuff that is close -- but nothing exact...

Ummm....where'd the MS2 go? 

Turns out Xcalibur has it listed twice. 

If you are just scrolling along through with no range filters applied you'll find the top spectrum (T), but if you go along with ranges you'll need to hunt down the bottom spectrum (F).

Note that this is the exact same MS/MS scan -- but the top one is the mass you are looking for and the bottom is the one you'll have to use in ranges to find it. How fun is that?!?!

I have no proof of this -- but I believe that the bottom mass (the one that is assigned in ranges) is the preview scan mass. The Orbitrap doesn't complete its full MS1 scan before it starts acquiring ions for MS/MS. We don't want to waste that much time. Partway through the scan it has already created a list of ions that either passed MIPS (also called Peptide Match) or the minimum intensity and charge cutoffs you provided, and then it starts working on getting them.

In this case -- this wasn't hard to find. We're off by just a tiny amount. And this is the Tribrid. This issue is much worse on the Q Exactives.  Here is a typical example.

A standard tactic I employ is opening an MS/MS range for every ion mass close to my ion of interest.

In this case I can find the scan on the second attempt.

The difference here is about 0.02. Off the top of my head, that sounds like about 30ppm. Neat, right!!??!  My second favorite part about it is the fact that the ranges aren't listed in --- numerical order. 
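The ppm arithmetic checks out, for what it's worth -- the post doesn't give the actual precursor m/z, so the 667 below is just an assumed value that puts a 0.02 Da offset at about 30 ppm:

```python
def ppm_error(theoretical_mz, observed_mz):
    """Parts-per-million difference between two m/z values."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# a 0.02 Da offset at a typical tryptic peptide m/z really is ~30 ppm:
print(round(ppm_error(667.00, 667.02), 1))   # -> 30.0
```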

This post is worse than boring. I honestly thought when I started this that I'd come up with a clever solution for finding what I was looking for -- and I still don't have one...

It is worth noting that Xcalibur is kinda old. And FreeStyle is probably meant to replace it in the near future for good reason.  Maybe FreeStyle is the solution?!?! 

EDIT: 6/26/18. Thanks for the tips in the comments, y'all!!  I will try tracking by scan number. Unfortunately -- and I should talk about this later -- sometimes I'm trying to track Minora features that don't provide you with scan numbers. 

I did try FreeStyle and it does the same thing -- 

--- but it does it SO MUCH FASTER that it might become my go to almost immediately with one really weird outlier -- data from our Elite only gets 2 decimal places by default while the Fusion gets 5....

...which is probably my fault, but I can't seem to change it...FreeStyle automatically recognizes the correct settings for the Exactives and Tribrid and that saves me a lot of steps. 

Friday, June 22, 2018

MOFA -- Reduce the dimensionality of all the data!

What a great way to start my morning!
1) My Twitter feed popped up a paper that I checked out because it had a funny name (MOFA! link here!) and, while a little scared to check Google Images, it turns out it is a moped that has pedals!

2) I realize on page 3 or so of the paper that this is the one my wife was talking about -- the one that started a conversation about writing to some journals to suggest that software links be provided in abstracts.

You can get the software for R or Python here.  This post isn't just rambly wasted time! The link is hard to find in the paper. With my service today complete -- time for (probably inaccurate) rambling!

What is MOFA?!?!  Multi-Omics Factor Analysis, duh.
Could that mean anything? Sure it could!

What does it mean here?

It means a new way of integrating data from all sorts of input -- the more I think about it, the more I like it. However, after 4 shots of espresso there is a period of time in the morning when I like everything, especially my cat.  Sorry, this has been cracking me up all week....he's fine with business catsual.

Stop laughing at the cat, Ben -- be serious and talk about dimensionality reduction!

How are we doing things in proteogenomics/metabogenomics/multi-omics right now?

1) Somebody does transcriptomics on the cell/patient and works out a huge list of the transcripts that are changing (and probably those that are unique to the cell -- variant call files and such, but lets ignore those right now)
2) Somebody else finds a list of small molecule features that are changing from sample to sample and assigns the best metabolite ID they can to all of those features
3) You identify as many PSMs as you can and then quantify those.

Generally these lists are reduced to what appears to be significantly different between these groups -- based on the significance that makes sense for each individual experiment. This is likely highly driven by the depth of coverage and the number of samples. It isn't hard to imagine a problem if you had 300 metabolites quantified compared to 30,000 transcripts quantified, right? Is the significance cutoff for the two lists the same? Sure, your cutoffs make sense in each individual experiment....

Then someone converts those lists to something universal -- probably the proteins to gene IDs (which has some serious weaknesses I should ramble about some day) and then puts those lists all into KEGG or Ingenuity(tm) or something similar. (Perhaps the complete lists are fed to Ingenuity).

MOFA says -- before you do all that stuff -- why don't you just try reducing all the factors to what changes between your sample sets?

What is the output from all of these things? 3 dimensions.

Dimension 1: The patient or sample
Dimension 2: The transcript, PSM/protein, metabolite ID
Dimension 3: The relative quantification you get for Dimension 2

What if -- for just a minute -- you forget where that data came from? What if you didn't care that this was a metabolite and this was a transcript and so on? Now you just have a big list of things about your sample versus the other samples and their quan. Could you just reduce the data to seek the factors that are explaining the variance between Sample A and Sample B? (More realistically -- Sample Set A and Sample Set B -- a big n is going to be required to do it)
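That "forget where the data came from" idea can be sketched in a few lines. To be clear, this is a toy stand-in, not MOFA itself -- the real model learns per-view noise models and much more -- but the flavor is: z-score each omics "view" so scale differences don't dominate, concatenate, and pull out the top latent factors:

```python
import numpy as np

# Toy stand-in for the MOFA idea (NOT the actual MOFA model): standardize
# each omics view, stack them side by side, and extract latent factors
# that explain cross-sample variance with an SVD. View sizes are made up.

rng = np.random.default_rng(0)
n_samples = 20
transcripts = rng.normal(size=(n_samples, 300))
metabolites = rng.normal(size=(n_samples, 30))
proteins    = rng.normal(size=(n_samples, 80))

def zscore(view):
    """Center and scale each feature so no view dominates by units alone."""
    return (view - view.mean(axis=0)) / view.std(axis=0)

X = np.hstack([zscore(v) for v in (transcripts, metabolites, proteins)])
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
factors = U[:, :2] * s[:2]     # each sample scored on 2 latent factors
print(factors.shape)           # -> (20, 2)
```

Every sample ends up described by a couple of factors computed from ALL the data at once, rather than three separately thresholded lists.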

This is probably inaccurate -- but this is what I interpret that MOFA is doing. Massive multiomics data reduction.  Figure 5 was what finally convinced me I was on the right track logically about what was happening here. I suggest scrolling down to it and then start into the results section.

The paper is open access, you should check it out, because they look at 200 patient samples with multiomics data integration and they pull out some really interesting observations with this approach suggesting that this makes a lot of sense.

30,000 transcripts with abundance --> get a significant list
+ 3,000 metabolites with quan --> get a significant list
+ 8,000 proteins quantified --> get a significant list
Try to combine those significant lists with cutoffs that make sense in terms of each data source itself but perhaps border on arbitrary compared to the total variance of all the data as a whole.

OR MOFA it all down to what is really different between your samples first while using the sum of all the data points you've all worked so hard to generate together to increase the true power of this huge effort?

By the way -- they don't ignore the mutations and stuff in their study. They integrate all that too!

Thursday, June 21, 2018

IdentiPy -- A central server solution for proteomics labs?

I'll be honest up front -- I can, in no way, evaluate whether this software works. I'm gonna leave this link here so that maybe people I know who can start a Python script without going to their niece's PDF copy of Python for Kids and staring slack-jawed at the screen for an hour before getting distracted can take a look at it -- and I can ask them if it's as smart as it looks!

Here's the paper! 

Okay -- there is a bunch of Python stuff out there. Pythomics (whatsuuuup PandeyLab!?!?) and Pyteomics and PyProteome and pyMzML and UrsGal (which gets points for 1. combining the most tools and 2. being the hardest to confuse with every one of the others).

Okay -- so why post another?

IdentiPyServer is why. IdentiPyServer is designed to be a central lab solution for multiple users -- load it up on some powerful box or server in your lab and then all your users can utilize an easy and customizable webpage interface to load up their runs. It's got some other nice bells and whistles like Autotune, which will take a look at the data being uploaded and automatically set the search parameters.

How many times have you had to open your RAW data file because you forgot whether you used the ion trap or Orbitrap for MS/MS on that one weird experiment? You can be forgetful AND lazy and not even check and it should all be just fine.
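I don't know how Autotune is actually implemented, but the flavor of the idea is easy to sketch: peek at the scan metadata and set search parameters accordingly. Thermo scan filter strings, for example, start with the analyzer (the filter strings below are representative examples, not pulled from a specific file):

```python
def ms2_analyzer(scan_filter):
    """Guess the MS2 mass analyzer from a Thermo scan filter string."""
    if scan_filter.startswith("FTMS"):
        return "orbitrap"      # high-res: use a tight fragment tolerance
    if scan_filter.startswith("ITMS"):
        return "ion trap"      # low-res: open the fragment tolerance up
    return "unknown"

print(ms2_analyzer("FTMS + c NSI d Full ms2 445.12@hcd28.00"))  # -> orbitrap
print(ms2_analyzer("ITMS + c NSI d Full ms2 445.12@cid35.00"))  # -> ion trap
```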

I'm posting a lot of tools lately -- have I put this thing up here yet?

This paper has something in common with IdentiPy -- in that each one had a word in the article title that I had to look up.

"Extensible" sounds like a real word. It totally is, and it means what it sounds like. Doesn't hurt to check.
"Bifurcated" Absolutely not a word. (It is, I discovered).

Once you get over that hurdle -- this is a really interesting perspective from two sides regarding where we currently are with mass spectrometry software and I highly recommend checking it out.

I've got other tools to discuss as well -- including some we're actively evaluating and comparing. There is some GOOD new stuff out there and in the pipelines, but I think we need to keep in mind that we definitely aren't where we need to be yet and development of new and better software tools for specific applications needs to be a major priority (whaaatttsssuuup grant reviewers!??!?!)

Wednesday, June 20, 2018

PepSAVI-MS -- a workflow for identifying antibiotic compounds!

The word "savvy" brings up lots of Jack Sparrow. No idea why, but not important -- PepSAVI, however, is important!

This is the newest paper describing why we want to know about PepSAVI -- but this study from last year details this awesome workflow!

Since it's open access (yay!) I think I'll just steal a picture that describes how it works --

Look like a lot of work? That's cause it is. However -- as you can tell from the first study, the pipeline is yielding dividends in finding natural products with antibiotic, antifungal, and anticancer activity. In case you've been living under a rock since the 1950s -- bacteria and fungi have figured out how to live just fine in high concentrations of all those cool drugs y'all had and put on everything (thanks, by the way!). Since we're staring down the possibility of super lame things like strep throat and minor infected cuts killing us the next time we get infected, we need some new cool drugs in the pipeline -- and maybe PepSAVI can help put a bunch there through MS1 feature-based analysis of natural product activity.

As you just got out from under that rock and the first thing I did was be passive aggressive about the overuse of antibiotics -- which, as I reflect on it, might not have been the fault of someone who later decided to live under said rock -- I feel bad being the one to tell you about cigarettes and cancer, global nuclearization (wait -- is that why you moved under the rock? okay, so you already knew about that....)

To make it up to you -- get this -- we put a man on the moon. For real, yo'. And it made THIS possible!

I know, right!?!?! You're welcome!

Tuesday, June 19, 2018

Ummm....can we identify people from proteomic samples...?

I didn't make it to a poster by some of these authors on the final day of ASMS but it is something I've thought about at least once a day since.  You can check out the study here.

Actually, this one is also really interesting.

You know that when they do the DNA forensics stuff they really only look at small bits of the DNA -- just enough to determine a match against random occurrence. The early stuff was just restriction digests, running gels, and trying to match the gels. Actually -- found a good Wikipedia post here. I think the newer stuff is all SNP. Full DNA sequencing is still about $1k -- which is a LOT of donuts -- which is still far too expensive to be practical for law enforcement, even without the data processing.

If we're looking at small snippets of DNA -- what are the odds that small snippets of protein would have enough info to do the same thing? This team is building an increasing amount of evidence that there is enough -- and that it, quite likely, exists in our data and we can find it if we really look for it.
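The back-of-envelope logic is the same as DNA profiling: if each marker (a locus, or by analogy a reliably detectable variant peptide) appears in some fraction of the population, the chance of a full profile matching at random is roughly the product of those fractions, assuming independence. Every number below is made up, just to show the shape of the math:

```python
# Toy random-match probability. The per-marker population frequencies are
# hypothetical -- the point is just that a handful of modestly rare markers
# multiplies out to a very small random-match chance.

freqs = [0.10, 0.25, 0.05, 0.20]     # hypothetical per-marker frequencies
p_random_match = 1.0
for f in freqs:
    p_random_match *= f
print(round(p_random_match, 6))      # -> 0.00025, i.e. about 1 in 4,000
```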

That's great for forensics -- but -- umm....


If this is true --- this could be seriously bad, right?!!?!  I don't know how things are in the civilized world, but in my country if you get sick you die when you run out of money to pay for your medical care. We have "insurance" companies that we pay our entire lives that make their fortunes on gambling that we won't ever get sick enough that they'll lose money on us. And if we do need their services it is in their best interest that they find a way to not pay for our medical care.

Now...this is obviously stretching it and might sound like I need an aluminum foil hat.... what if I'm one of the control samples in a big study that is on Massive that you could find by ProteomeXchange....and you could figure out from the RAW data that -- 1) that sample is from me and 2) in that dataset you can see that I have 2 of the single amino acid variants (SAAV) in PARKIN1 that we generally consider sub-ideal to have?  Could that one day be used by some enterprising insurance company scientist to build a case for why they don't have to cover my medical bills anymore?

Obviously this is just an extrapolation -- there are hundreds of variables here. Even if I had that level of insight going into a sample -- it's pretty darned hard to prove SAAVs, this forensics profiling doesn't sound real easy either, our coverage is extremely sample dependent...and on and on...but it's interesting to think about, right?

Also, thanks to legislation passed during our last federal administration here, bankruptcies due to health care costs are not nearly as common as they were (the #1 cause of bankruptcy in the U.S. in 2013). This legislation has been under a lot of fire recently, but it's still standing.

Monday, June 18, 2018

Most boring post ever -- Extending EasySpray Column life....

I disappear into my day job for about a week, don't answer any phone calls or emails or blog comments -- and this is the first thing that I get caffeinated enough to write...ugh...

The only fun thing about working on this post was trying to find a video of a Boston Terrier yawning. This is as close as I got.

I really like EasySprays -- if you find something easier, let me know -- I'll switch to those, but to make the math work out that they are a great idea financially I think they need to last at least 2 months, give or take.
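To be concrete about what "make the math work out" means here -- every number in this sketch is hypothetical (price, instrument load, and target are all placeholders, not real figures), it just shows the shape of the amortization:

```python
# Back-of-envelope column amortization. ALL numbers are hypothetical --
# plug in your own column price, run rate, and cost-per-run target.

column_price = 1500.0        # hypothetical column price, USD
runs_per_day = 8             # hypothetical instrument load
target_cost_per_run = 3.0    # what you'd like the column to cost per run

days_needed = column_price / (runs_per_day * target_cost_per_run)
print(round(days_needed))    # -> 62, i.e. roughly 2 months of running
```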

So far -- 2 things appear to be the most critical -- #2 doing the long and boring column conditioning protocol and #1 doing high organic "column restores" periodically.

#1) Let's talk about column restore. (I might have made this name up, but it is discussed in the so-boring-it-will-make-you-want-to-get-into-the-diethyl-ether (please don't) EasySpray manual -- at least the new one, anyway.)

The logic is that the spray instability (which is the #1 reason why we stop using them) has more to do with the lines (including waste) leading up to the EasySpray. Column restore is picking up a full loop injection of organic and running it through the system, followed by maintaining high organic for at least 30 minutes.

The pickup is 18 uL of ACN, followed by this method (no equilibration required).

It really seems to help. If the spray stability starts getting wobbly, this generally gets me back to baseline -- or at least allows me to lower the voltage back down a bit. I run it a couple of times a week, and -- anecdotally -- it appears to help keep the columns going beyond the 2 month cutoff.

#2 (In this order because I couldn't figure out where the photos went.)

I saw this in the manual, saw it was another 40 min of the nLC doing stuff and the mass spec not collecting data and thought....

...okay...I guess I was that serious about it...cause I did do it....

Worth noting -- we've got 6 EasyNLCs hanging around and 2 of them have this script included. The older of the two has this version.

(P.S. If you have a service plan you can request that service update your EasyNLC software. There may be earlier versions of the Easy1000 that can't be brought all the way up to the newest firmware, but our local FSE is investigating that, cause we have a request in now for all of ours.)

If you don't have this software or have a different LC, there is no magic to this script. It just starts running buffer A at low pressure and gradually steps it up to full operating pressure over 40 minutes. The idea is that if the beads or particles or whatever is inside these things got unsettled a little during their travel to your lab, then this will help get them back where they all ought to be. Sounds like pseudosciency mumbojumbo to me, but I'll probably keep using it cause -- in this anecdote -- column life is now longer than it was before.
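If you wanted to reproduce the logic on a different LC, it's just a stepped ramp. A sketch -- the start/end pressures and step count below are made up, not the vendor script's actual values:

```python
# Stepped pressure ramp sketch (NOT the vendor script -- start/end pressures
# and step count are placeholders): ramp from low to operating pressure in
# even increments over 40 minutes.

def pressure_ramp(start_bar=50, end_bar=500, minutes=40, steps=8):
    """Return (time_min, setpoint_bar) pairs for a stepped ramp."""
    dt = minutes / steps
    dp = (end_bar - start_bar) / steps
    return [(round(i * dt), round(start_bar + i * dp)) for i in range(steps + 1)]

for t, p in pressure_ramp():
    print(f"t={t:2d} min  ->  {p} bar")
# the final step lands at full pressure (500 bar here) at t = 40 min
```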

Thursday, June 14, 2018

The Lazy Phospho Normalizer!

Okay -- so there are easily a million smarter ways to do this. I know it. However -- I doubt there is a lazier way.

Here is the scenario

You did a global phospho TMT study.
Because this is a time course long enough that transcriptional/translational regulation isn't something you can rule out, you also kept the flowthrough (or the part of the sample that you didn't enrich) and you also ran it -- a la SL-TMT or something similar.

How do you combine that data?
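The core arithmetic is just dividing each phosphosite's TMT ratio by its parent protein's ratio from the flowthrough run, so protein-level expression changes cancel out. A minimal sketch -- the accessions, sites, and ratios here are all made up for illustration:

```python
# Minimal phospho normalization sketch (all identifiers and ratios are
# hypothetical): divide each phosphosite ratio by the parent protein's
# flowthrough ratio so expression-level changes cancel out.

protein_ratio = {"MAPK1": 2.0, "TP53": 0.5}            # flowthrough run
phospho_ratio = {("MAPK1", "T185"): 4.0,               # enriched run
                 ("TP53",  "S15"):  0.5}

normalized = {(prot, site): r / protein_ratio[prot]
              for (prot, site), r in phospho_ratio.items()
              if prot in protein_ratio}
print(normalized)
# MAPK1 T185 -> 2.0 (a real phospho change); TP53 S15 -> 1.0 (the apparent
# change was just the protein going down)
```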

Like I said -- 100 smarter ways to do it, but last night around 8pm I would have given just about anything for an Excel sheet that said this --

Around 11pm I started channeling that frustration into watching some tutorial videos on the Microsoft Office website, and early this morning I woke up and finished this tool that does EXACTLY what I needed last night.

You can download it from my Google TeamDrive here.

As it says in the instructions -- if you make something easier and better (or already have one), please let me know. As always, happy to see comments regarding what could be fixed or improved. Can't guarantee I'll actually do it -- as I said -- this does what I want it to do, I'm just putting it out there in case it would help anyone else.

Also, if you're just writing me to make fun of the fact I could do this in R in like 3 seconds, I'll have you know that I've installed 6 copies of R and RStudio on this PC and I don't even know which icon on the desktop is the most current one. Keep that in mind. You could do it in 3 seconds; I'd spend an hour -- easy -- clicking the wrong desktop icon. "Where is the one that links to that cran thing..?"

Wednesday, June 13, 2018

MaxQuant goes Linux!

It's been possible for quite some time to run MaxQuant on Linux in different kinds of "virtual environments" and things. I know people who have been doing this for a while. These, unfortunately, have loads of overhead and sap your processing power.

MaxQuant having a true Linux version?  That's a big deal. Nature Methods level big deal? Sure... why not!

Monday, June 11, 2018

Tired of trying to find your PARPs? Cleave 'em off!

Do people still look for PARP inhibitors? I haven't heard of any in a while.

PARP's mark -- poly-ADP-ribosylation -- is obnoxious because it's a polymer PTM. (This is a good review on it.) Unlike friendlier PTMs like ubiquitin, there isn't a convenient cleavage site coupled with a nice mass shift.

This new note at JPR shows a great way to get to the proteins that are PARP'ed (PARPyPARPylated?) by getting down to the proteins and knocking it off.

Much better idea!

Edit: I didn't try hard on the nomenclature at all. This link at Ribon pharmaceuticals explains the different types of these things.

Sunday, June 10, 2018

ProteomeTools takes on 21 different PTMs!!

Spectral libraries are coming back with a vengeance. Check out this great new study in press at MCP!

What were their limitations again?

1) The libraries weren't big enough?

2) Integrating library search into normal workflows wasn't straightforward?

3) There aren't enough PTM libraries?

1) ProteomeTools has already dropped 400,000 synthetic human peptides through their site and through collaborations with institutions like NIST. A LOT more are coming. Couple this with MassIVE's new libraries and the treasure trove at NIST?

2) More on this later, I think. But more and more of our normal software workflows are becoming spectral library compatible. Mascot supports them now (right?). I've seen two mentions of spectral libraries in MaxQuant in the literature, so that's coming and all the DIA software is ready to go for libraries of all kinds.

3) NIST has had a huge human phospho library for years, and it sounds like MassIVE has a ton now as well -- but ---

This team synthesized a ton of peptides with weird PTMs on them!! I'm sure they have or will release the libraries -- but what might be more important right now is that the RAW files are available via ProteomeXchange now (PXD009449, here)

Is that really a crotonylated lysine you're looking at? Ever seen one before? (I'd never even heard of it until recently...) sure would help if you could download a couple RAW files with real ones in them and find out they look like this in HCD, right?

Better image from the RAW file itself (yeah -- that 152.107 is in virtually every MS/MS spectrum -- you know, just checking...):
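That 152.107 checks out arithmetically, by the way -- it matches the immonium ion of crotonyllysine (C9H17N2O+) after loss of NH3, computed from monoisotopic element masses:

```python
# Verify the crotonyl-lysine diagnostic ion: the immonium ion of
# crotonylated lysine (C9H17N2O+) minus neutral NH3 lands at 152.107.

H, C, N, O = 1.0078250319, 12.0, 14.0030740052, 15.9949146221
ELECTRON = 0.0005485799

def ion_mass(c, h, n, o, charge=1):
    """Monoisotopic m/z of a singly charged CcHhNnOo cation."""
    return c * C + h * H + n * N + o * O - charge * ELECTRON

immonium = ion_mass(9, 17, 2, 1)        # crotonyl-Lys immonium, C9H17N2O+
diagnostic = immonium - (N + 3 * H)     # minus neutral NH3
print(round(diagnostic, 4))             # -> 152.107
```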

This paper is an absolute treasure -- if you're European you probably know that Andromeda can make use of these diagnostic ions in the scoring algorithm. So...if 92% of all crotonylated lysines made a 152.1070 fragment ion, Andromeda can take that into account and help you weed out the false positives. How cool is that?!? I just went through PD and even in the text editor interface for modifications in MSAmanda 2.0 I don't seem to have the ability to do that at all...(you can, however, get MSAmanda to preferentially score your new neutral loss masses, but it requires some finagling to get it right. Can't guarantee I've got it, but you edit those here in the Administration).

Wow -- as soon as I think this paper has stopped giving -- there is more stuff. If you're interested in any sort of weird PTMs -- this study should be on your desk. It'll be handy when it's formatted, but it's worth cutting 45 pages out of a tree right now --

HOLY COW --- The very last figure of the supplemental info casually shows you how to resolve one of the single hardest things I could ever think of trying to work with -- there are different neutral loss patterns depending on whether a peptide is symmetrically or asymmetrically dimethylated.... The more I think about that, the more certain I am that I probably couldn't manually sequence that without this information... but -- could I go back into the settings above and feed MSAmanda this information and get it and ptmRS to use it to resolve these correctly?
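For reference, the distinguishing losses commonly reported in the literature (I'm quoting the usual values here, not reproducing this paper's figure) are monomethylamine from symmetric dimethylarginine and dimethylamine from asymmetric dimethylarginine -- and their masses are easy to compute:

```python
# Neutral loss masses commonly cited for distinguishing dimethylarginine
# isomers: symmetric tends to lose monomethylamine (CH5N), asymmetric
# dimethylamine (C2H7N). Computed from monoisotopic element masses.

H, C, N = 1.0078250319, 12.0, 14.0030740052

monomethylamine = C + 5 * H + N        # CH5N
dimethylamine   = 2 * C + 7 * H + N    # C2H7N
print(round(monomethylamine, 4))       # -> 31.0422
print(round(dimethylamine, 4))         # -> 45.0578
```

Those are exactly the kinds of numbers you'd want to hand MSAmanda as preferential neutral losses if the finagling mentioned above pans out.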

I can't believe how great this study is.  I don't use this .gif lightly...