Monday, December 29, 2025

MASLD liver tissue AND plasma proteomics!

 

Thanks to our colleagues and our affiliation with the Pittsburgh Liver Research Center we get access to a lot of human liver samples. Typically, however, this is because the liver is taken out of a person so that a new one can be put in. They don't typically take those livers out because they're in great shape. They're generally super extreme cases of bad livers. As you dig through the repositories you'll find that is often the case. Not a ton of healthy control livers or less-terminal liver disease samples. 

Merry Xmas to me! 


Seven healthy controls WITH both liver homogenate and plasma proteomics to add to our hard drives? It is funny to me that the front page illustration shows a SCIEX QQQ when the study was done with an Orbitrap Tribrid running DIA proteomics. I have one out (or out soon) where the other authors put an Agilent ICP-MS or something in their abstract graphic without consulting me on that change. 

All the data is up on MassIVE here.

What is really cool to me is that they only did the ones where they had matched plasma. Since the liver is a big organ with a tremendous interface with liquid blood stuff, the two should be closely related. And this group points out clear disparities between the two that are definitely worth my team thinking about. 

Saturday, December 27, 2025

iFishMass - Direct (digested) nanoinfusion antibody (and ADC!) analysis!

 


Antibody and antibody drug conjugate (ADC) drugs are EVERYWHERE. You can't watch 3 minutes of YouTube or Television (is that still a thing? I can't believe YouTube still exists at all either, how hard would it be to replace it with something that was good?) without seeing an ad for one. 

Do you have to run 1 hour or 2 hour gradients of digests to work them out? If so, you sure couldn't keep up with a big multi-lot multi-batch generation facility.

Could you just digest and direct infuse? Probably! But how would you analyze those data? With iFishMass! 


They start off by doing some antibodies and ADCs with a standard nanoLC setup and then they move over to the NanoMate. Remember those? They are typically used for intact protein analysis. 

Turns out that on a Tribrid it works pretty great for simpler single protein digests. Some great news about the software for me is that you convert the data to a universal format before you put it into the software (one of the formats that starts with an X and ends in an ML - I forget which one). 

And iFishMass is up and available to run here. Funny point in the manuscript is where it says something like "the easiest way to install it is..." which is super handy and I'm glad the reviewers let them sneak in a helpful tip there. 

Monday, December 22, 2025

Spatial proteomics (with targeted techniques) at sub-micron resolution! 30 proteins at a time!

 


I pitched mass spec based spatial proteomics a while back at a building where they have a whole ton of microscopes that are cooled with cryogens. Supposedly someone in the audience invented the whole idea of making a microscope super cold. A quick Wikipedia search suggests that isn't all that unlikely. Right zip code, for sure. When they started asking questions about spatial resolution and I enthusiastically gave the best I'd ever seen, it sucked all the interest out of the entire room. 

For real, I think they considered not validating my parking. The microscopy people can't look at a lot of proteins at once, but when they look at them they want to be orders of magnitude below the best laser pulse our mass specs get.

What if you want to look at more than one protein at a time in microscopy? 

You get SUM-PAINT, I guess? They get some stupidly high spatial resolution while using oligonucleotide barcodes to label your targets. Ridiculously beautiful pictures and - I don't know if this is a great paper or approach, but it gives some perspective on scale. There are a lot of 5 nm pixels (their scale) inside of a 20 micrometer one....
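Just to put that in perspective, here's the napkin math (my arithmetic, not anything from the paper):

# How many 5 nm SUM-PAINT pixels fit inside one 20 micrometer pixel
# (roughly the laser spot size I'd been bragging about)? My numbers, not theirs.
msi_pixel_nm = 20_000    # 20 micrometers
paint_pixel_nm = 5       # the scale they quote

per_side = msi_pixel_nm // paint_pixel_nm
print(f"{per_side} x {per_side} = {per_side**2:,} little pixels per big one")
# 4000 x 4000 = 16,000,000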




Saturday, December 20, 2025

NanoDESI allows spatial intact protein complex analysis!?!

 


Well...this one looks like magic...


I was reading it on my phone yesterday, but I'm reasonably sure this was a custom nanoDESI source mounted on an Orbitrap Fusion 3 (Eclipse) or 4 (Ascend). 

There are some ridiculously nice pictures in it, but if you're getting spatial localization of large proteins or medium sized protein complexes, there is some impressive mass spec wizardry going on here. Localizing a 185kDa protein in a human kidney??? Whoa. One of two really impressive spatial proteomics papers that dropped this weekend. The other one is in my wheelhouse a bit more and I'll probably get to my notes on it later. 

Friday, December 19, 2025

A spatial proteomic and phosphoproteomic map of liver mitochondria!

 


This isn't super new, but a colleague sent it to me and it's really really cool.


The liver is really weird and even though from a microscopic level it looks like a bag of square (they like "cuboidal") cells all stacked like bricks row after row forever, these cells are very different depending on where they are. These big bricks of cells are also packed full of mitochondria and may have hundreds of them per cell. This group used spatial sorting to get piles of hepatocytes from different zones THEN did mitochondrial enrichment THEN did (TMT) proteomics and phosphoproteomics.

There are big differences in mitochondria depending on where the cells are spatially in the liver. I was going through the methods and thought something like "wow! someone knew what they were doing! why don't I recognize any of these names?" I re-read the names. I know the 9th author. Wait. How is a mostly proteomics paper...PI...is....9th...author....meh...probably politics stuff..... There are pretty ....confocal...microscopy pictures, though, and those can be hard to do as well - you have to sit in the dark forever and take pictures, and some people were deeply offended if you listened to music while you did so!  I have no idea what the top panels mean, but I do like pastels (see top panel). For real, really nice work though and something we'll definitely discuss in a lab meeting in the spring semester, for multiple reasons! 

Thursday, December 18, 2025

Is that a peak? A pandemic remote learning success story!

 


Today I just discovered the ACS Journal of Chemical Education (or maybe re-remembered it was a thing?) and it's a treasure trove of interesting stuff. 

Example? Check out this cool story from the pandemic where students remotely analyzed DIA generated MS2 spectra. 


This is how it went (stolen brazenly from the paper)!   


Wednesday, December 17, 2025

How does time blood spends on ice alter the proteome? At least some important proteins absolutely change!

 


I can't possibly spend the time on this new study that it deserves, but I really am going to think about it on my commute today. Or listen to the Halo Effect album I didn't know about until yesterday because that's how busy my 2025 has been. And maybe also think about this paper.


When Anna Barker was on THE Proteomics Show podcast she stressed how absolutely critical the sample handling was to the setup of CPTAC (that was pretty much her idea, btw). I asked her what she thought about all the people who are just pulling from repositories and doing studies on historical material and, best I can recall, she wasn't optimistic about the value of those results.

This study doesn't feel that optimistic about them either..... In a big hospital clinic like the one I worked in for years, blood would come in from upstairs really fast, and then we'd have these big daily drop-offs of blood from remote clinics. Some would arrive on ice for specific assays but most would arrive at room temperature. Is the proteome of the dude upstairs the same as his identical twin who had his blood pulled at the clinic a half hour away but whose tubes didn't arrive at the main hospital for 4 hours? When that blood is deposited in a huge biobank, is that data conserved? Maybe now it is? I'd be confident betting that our IBM XTs (not kidding) did not have the capacity in their databases to retain transfer time information, particularly if it was coming in for an assay where it didn't matter.

Stuff we could totally handle, if we knew that it was important. This study suggests that it definitely could be. 


Sunday, December 14, 2025

From LCMS to clinical diagnostics! Is proteomics finally realizing potential? A new win!

 


I'd have gone with the MCP abstract graphic but it's all smooshed up on my screen.

Related, we just had Mike MacCoss on The Proteomics Show (dude won the Don Hunt award for distinguished contribution in proteomics!). I try to always ask guests what they're most excited about in the present/future of proteomics and he cited clinical and translational assays that have happened or are happening. FINALLY! (I added that bit, 'cause it's about time!) Having trouble coming up with a list of them? Here's one to add!  

Check out this sick new one I just stumbled onto while not at all procrastinating on some budget stuff.



Thursday, December 11, 2025

GigaTime - AI decoding of multiplexed imaging slides!

 


I realize more all the time that I'm in the AI Skeptic camp. Every time I try to get an AI to do a simple task for me and I end up doing it myself, I move further into that camp. I've got a whole list of failures from the accounts that I or my employer pay for: confident Python corrections that are definitely not correct, 14 attempts to have the publications on my CV reordered to meet the opposite requirements of my previous and current employers, and artwork that is hilariously awful. I fully expect everything "generated" by an AI to use more electricity than all of Panama to produce something that I will never be able to use (unless of course it blatantly stole it from some other source, in which case I can't use it anyway). I'm also convinced that several of my recent "peer" reviews were written by LLMs, but that's okay because they tend to be less critical of my work than my biological peers.  

So....it's with a healthy to borderline excessive level of skepticism that I place this paper here so I can read it later. 


The goal is to do image recognition based AI on histological samples. Which would be the 100th time I've heard of someone trying to do this. There was a really cool company in Gaithersburg that started up and shut down a while back, but they didn't have these LLM things, so maybe this is the real deal? 

Monday, December 8, 2025

Benchmarking algorithms for single cell proteomics - is multi-proteome the right way to do it?

 


We kicked this around really hard back when there was a Proteomics Old Time Radio Hour. Not this paper, but the base concept of mixed proteome digests for quantitative studies. I'm still uncomfortable with it as a concept, but let's talk about the paper first.


The main part of the study is simulating single cell digests with proteomics people's favorite toolkits. They took a cancer cell digest and spiked in an E. coli digest at one concentration and a yeast digest at another.

Then they used a really cool robot that doesn't appear to be commercially available - one they've designed themselves over the last 16 years or so and that I truly wouldn't mind having - and they did some actual single cells. Most of the paper is on the first part, the low level mixed organism quantitative digest.

Since they now knew what the ratios should be, they ran a bunch of samples and replicates, used DIA-NN and Spectronaut and PEAKS, tried different settings, and came up with some interesting findings. 

Begin concerns about multi-species proteomic mixtures as benchmarks

Here is where my concern always comes in for these things, though. The yeast proteome is like 4,000 proteins, and you'll basically always see 1,500-2,000 of the higher concentration ones. E. coli can produce like 3,000 proteins, but some are for anaerobic growth and whatever, so I think you'll normally see something like 600-800 E. coli proteins from an aerobic digest without trying too hard at all.

I love the concept of a mixed species digest, but is that a realistic biological model? (Roughly 2,000 yeast plus 700 E. coli proteins stacked on top of the ~8,000 or so human proteins you'd see is about an extra 30%.) At what point in human biology is 1) there going to be an extra 30% of proteins available, 2) 30% of the proteome going to be significantly altered, and 3) all of it altered in the same way? 

It's weird, right? Like if I was writing a normalization algorithm I think that I'd write an IF/Then statement that is like 

IF 30% of the proteome is at 1/10 of the base peak

THEN you f'ed up somewhere, PRINT gibberish. 
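(If you actually wanted that joke as running code, a minimal sketch might look like the following - the thresholds are completely made up by me:)

def mixed_proteome_sanity_check(intensities, base_peak, level=0.1, frac_cutoff=0.30):
    """My joke above as Python: flag a run where ~30% of the 'proteome'
    sits suspiciously close to one low spike-in level."""
    near_spike = sum(1 for i in intensities
                     if 0.5 * level * base_peak <= i <= 2.0 * level * base_peak)
    if near_spike / len(intensities) >= frac_cutoff:
        return "you f'ed up somewhere (or this is a multi-species benchmark)"
    return "looks biologically plausible"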

That's just me, and I don't know what the real answer is, but I sure haven't seen a comparison of two drug-treated cells where 1,000 proteins have been significantly altered. I doubt that if you had a biopsy of a patient colon that was noncancerous and one that was a tumor that you'd see over 1,000 proteins that are significantly altered. So I'm not sure that's the best possible way to test an algorithm.

Back to the paper! 

 - There is solid gold in this study, btw. What normalization things to use, which post-analysis R packages seemed to work and which seemed to distort things worse. Totally worth reading even without the bit about the 5 papers about their microscope-based sample pickup and prep robot. 

Also - just noting - the instrument used for label-free single cell proteomics is a Pro2. Not an SCP or Ultra, etc., and they get some legitimately useful numbers. 

Sunday, December 7, 2025

Finally! A ready-to-run human plasma proteomics standard!

 


Disclaimer: I'm going to ramble about a new commercial product that was totally my idea and if you buy it I'll probably get money back for a whole lot of enzymes I personally bought. This was actually a tough post to write that I deleted and re-typed several times because it seems antithetical (which might be a thing) to this whole blog thing. Meh.

Ramble: 

I had a few months between my academic appointments which ended up being a top notch sabbatical, and that's what I'm going to call it from now on. I consulted for some really cool companies, found time to gracefully exit the CRO thing I founded several years ago, and got a really up-to-date view of what dozens of companies in proteomics are doing these days. During the consulting bit I'd sometimes go places or remote log in to instruments and help with experiment optimization. 

Everyone had the K562 proteomic digest from Promega or the HeLa digest from Thermo/Pierce. Add formic acid, inject it, it should look the same on identical instrument configurations regardless of where you are. 

Unfortunately, almost everyone actually wanted to do blood/plasma proteomics. And these things couldn't be more different. More than half of the protein in plasma is 1 protein (albumin) and ~95% of it is like 14 proteins. That's nothing like the proteome of a cancer cell line with 150 chromosomes that's packed almost to bursting trying to express every protein in its entire genome. A great K562 method might give you some plasma proteins, but it's not going to be great at it. It's tough to find 2 things in proteomics that are more different. 

So I went and batch prepped some plasma so I had a standard that I could use to compare things for the companies I was working with - and it was awesome. I also had comparator data because it was a sample I'd used before on multiple instruments over the years, and I ain't changed my bulk proteomics sample prep method since 2017. 

Then I was like - wait. WTF. Shouldn't there be a commercially available one? Why isn't there a commercially available plasma proteome tryptic digest?? 

How hard and expensive could that be? 

Oh. Oh ye of excessive confidence. 

But now you can just buy the first successful attempt at a standard - Equalizer I - from ESI Source Solutions! It's just a neat plasma digest, so it's ridiculously, insanely hard to see anything besides albumin and immunoglobulins and about 100 other things, which is the exact opposite of the cancer cell line digest. Again, very clearly biased, but if no one ever buys it, I honestly don't care, because I won't ever have to prep a plasma proteome digest again in my life and I've personally got something to do method development on. If anyone else finds it useful, we tried hard to keep the price down: $375 will get you 100 x 200 ng injections along with comparator data from 6 different instruments or something (a number I hope will grow soon). 

Saturday, December 6, 2025

DancePartner - Use Python wizardry to mine multi-omics from...PubMed?

 


I saw this one 3 times, loved the logo, but questioned whether it was anything useful to me and finally just read most of it. I moved over to the GitHub halfway through and started trying to install it.

Paper link


Is it the easiest thing I've tried to do today? No, but I also had a 4 year old pumped full of hot chocolate in a Sporting Goods store when dude decided football cleats WERE MISSION CRITICAL and we ended up leaving with nothing at all. 

But....could you....hypothetically have DancePartner dig through PubMed and find you a list of proteins, transcripts, lipids and metabolites that have been associated with the blood brain barrier? I don't know, but my cat keeps screwing with my mouse, and if I put typos in some Python code in Spyder nothing works, whereas I can put typos in this box and just hit the publish button and it's just normal. 
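For what it's worth, the boring version of that query is easy enough to sketch without DancePartner at all - just Biopython's Entrez module hitting PubMed. The search term, email, and retmax below are all mine, and as far as I can tell DancePartner layers the actual omics entity extraction and network building on top of something like this:

from Bio import Entrez

Entrez.email = "someone@example.edu"   # NCBI wants a contact address

# Hypothetical query - the kind of thing I'd want mined for BBB-associated omics
handle = Entrez.esearch(
    db="pubmed",
    term="blood brain barrier AND (proteomics OR lipidomics OR metabolomics)",
    retmax=50,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "papers; first PMIDs:", record["IdList"][:5])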

Friday, December 5, 2025

Frustrated by TIMSTOF chromatography limitations? FREE THE CAPTIVESPRAY!

 


I ran across this looking for something else.... Honestly, I really like the Ultra2 source, but if I still had one of the older ones I'd look into this, for real. 

Tuesday, December 2, 2025

opt-TMT - scale down everything so you aren't wasting so much reagent!

 


There is another optTMT, but that one doesn't have a dash and it's for designing smart multi-batch multiplexed experiments. You can read about that one here.

This new one is about how a lot of TMT labs are labeling 400 pounds of peptide (181 kg) and then injecting 200 micrograms per injection on their Orbitraps and 1000 micrograms on their Astrals. 

If you wanted to just label 10x more peptide than you'd possibly use instead of 10 million times more peptide, how would you do it? That's what the dash is for! 


While this might seem just a little silly, since there are protocols out there that have been replicated dozens of times for labeling single human cells, it's actually a lot more convenient than you'd think. We know how much reagent to use in our lab for 1 cell or 25 cells, and it's a drag when we have to break out the peptide quant kits and borrow someone's plate reader. This study gives you that in-between amount, fully optimized. 

Monday, December 1, 2025

Another funny solvent is better than formic acid for proteomics?

First off -- 

CHECK WITH YOUR HPLC MANUAL OR MANUFACTURER!!



Is the resolution of GIFs getting worse all the time? If so, it's the only change I've personally seen from this whole "AI revolution", except people saying "I asked ChatGPT" where they would have said "I did a Google search" back before Google reorganized and put their search algorithm teams under the control of their marketing teams. True story, that's why Google really doesn't work well anymore and AskJeeves is back, but now it needs more electricity than all of Spain will use this year to look up stuff on Wikipedia for you. 

Okay, so someone at some time decided formic acid was a pretty good compromise. Pretty sure it was people in the John Yates lab. TFA gave you the best possible HPLC peaks for peptides, but it lowered your ionization efficiency. Acetic acid gave you the best ionization efficiency but if you were doing MuDPiT (which was a 2D chromatography system for proteomics best left forgotten today but it provided unprecedented proteomic coverage with the awful HPLCs we had at the time), acetic acid messed up your peaks too bad. So...formic acid it is.

Worth noting, formic acid has some drawbacks, like poor stability in light, particularly when diluted. So when a lab dropped a paper showing acetic acid should be revisited, we jumped on it. My lab doesn't use formic acid in our HPLCs at all. We do have vendor permission, and we have several thousand runs to demonstrate it hasn't been a bad idea at all.

So when I was contacted by a researcher who was like - "yo, we have something better!" - we borrowed someone else's HPLC and tested it out. In our hands (on nanoflow) it's only marginally better than acetic acid, and possibly so marginal that at sub-nanogram loads it wasn't significant by Student's t-test. I forget, and Cameron actually did the work while I was visiting collaborators. But when you crank up the flow rates? 



Sunday, November 30, 2025

New Nature Genetics study comparing pQTLs is....worth reading....

 


Ummm.....so...Imma just leave this here and not talk about it any more, maybe. Wait. Maybe just this - if your technology is producing results that can be validated 30% of the time then you could save a lot of time and just pick a gene or protein and flip a coin and go read up on other technologies....



Saturday, November 29, 2025

DIA Multiplexed proteomics with off-the-shelf TMTPro reagents!

 



This is obviously interesting - and surprisingly easy to pull off. The data is processed in FragPipe and one of the output sheets is put into these python tools to identify the complementary fragment ions. 

I like the figure above because they use 2 very similar peptides labeled with TMT and demonstrate that they can clearly find clean complementary fragment ion pairs. Oh yeah, here is the paper

They really, really don't want to do any spectral deconvolution, so they only used 3 of the TMTproC tags, which give them clusters of complementary ions 4 Da apart. The open suggestion throughout is that if you aren't afraid of deconvoluting your complementary ion clusters, you can obviously do more than a 3-plex DIA experiment. 

This is a really nice read with the appropriate controls included as well as a way to dramatically increase the throughput of some DIA proteomics workflows on basically any mass analyzer. Worth a read for sure. 

If you type "TMTc" into the blog search bar you'll find a lot of stuff over the years. This is one old post that goes more into what this is and why it can be valuable. 

Wednesday, November 26, 2025

Y-MRT - a new prototype TOF with 1 million resolution and 300 Hz?!?

 


Ummmmm......okay so....these specs are amazing....


How do you increase mass resolution? Generally, just increase the flight path, right? But you can only go so far before there isn't enough electricity on earth to generate the appropriate vacuum. Reflectrons double the path, and the W-TOFs from Pegasus that a big vendor acquired recently can really push those numbers up with multiple reflections.
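The napkin version of why path length matters: TOF resolving power at a fixed detector peak width goes roughly as R ≈ t/(2Δt). The numbers below are illustrative guesses from me, not anything out of the preprint:

# Toy TOF resolution scaling: R ~ flight_time / (2 * peak_width)
peak_width_ns = 1.5   # assume the detector/electronics limit the peak width

def tof_resolution(flight_time_us):
    return (flight_time_us * 1_000) / (2 * peak_width_ns)

print(f"single pass (~80 us):       R ~ {tof_resolution(80):,.0f}")     # ~27,000
print(f"many reflections (~3 ms):   R ~ {tof_resolution(3_000):,.0f}")  # ~1,000,000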

The Y-TOF takes that concept to 11. It's one thing to say "I can make my instrument do 1 million resolution". Give me 45 minutes with your Q-Exactive and I can make it do 1 million resolution. Each scan will just take about 8 minutes (okay, more like 4 seconds, I forget), and it's completely impractical. 
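And the napkin math behind that Q-Exactive joke, assuming Orbitrap resolution scales linearly with transient length and that a ~512 ms transient gets you 140,000 at m/z 200 (both are my assumptions):

base_resolution = 140_000     # a typical QE setting (at m/z 200)
base_transient_s = 0.512      # ~512 ms transient for that setting

target_resolution = 1_000_000
transient_s = base_transient_s * target_resolution / base_resolution
print(f"~{transient_s:.1f} s per scan")   # ~3.7 s - hence "more like 4 seconds"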

AND you can tune a Time of Flight to get really good mass resolving power at one particular m/z. My Q-TOF gets incredible resolving power in a mass range that isn't exactly where I need it.

The Y-TOF did a 30 minute proteomics run and averaged 600,000 to 800,000 resolution across the usable peptide range!!!  

AND sub-PPM mass accuracy. Parts per BILLION mass accuracy. ON A TOF. 

Obviously a prototype, but more obviously something we should keep our eyes on. Worth noting, they do have to use Astral level loads for bulk proteomics (1 microgram of peptides for the best data) and that this prototype isn't going to smoke your recently purchased $1M instrument, but it's starting in a very nice spot. 

Tuesday, November 25, 2025

Prosit-PTM! Deep learn modified peptides???

 


We all know other great protein informatics teams are working on the holy grail for DIA proteomics - deep learning and prediction of modified peptides.

Am I extra excited because the team that gave us Prosit is working on it? Yes. Yes, I unfairly am, when I should be evaluating this preprint purely on its own merits and not the track record of one of our field's most historically reliable teams. And not just because of their informatics skills. What makes me most excited is their long history of making tools that anyone can use. 

Check out this preprint here! 



Monday, November 24, 2025

Breaking through barriers with an Orbitrap-TOF instrument!


Thanks to all the journals allowing Open Peer Review and allowing me to sign about half of the 30 or so papers I've reviewed recently, it's pretty clear to people how unproductive I think things like this title are. Even if, as in here, I really do like the paper. 


I think I'm just old, for real, but I do think that if you've got cool biology in your paper and you've got the instrument front and center, you're doing yourself a disservice. 10 years from now that instrument is going to be $100k from secondhand vendors or $50k on eBay without an ionization source or accompanying PC, and no one is going to look at the biology in your paper. 

However - this is some pretty amazing crosslinking data. And that's my point, I guess. It's a nice study. FAIMS helps a lot with crosslinking on both an Eclipse and an Astral, but stepped collision energy - while helpful on the Tribrid - has minimal effect on the Astral. Higher CE helps a lot. There is also a neat toolkit I heard mentioned at iHUPO as "Raw Vegetable", which is what I assumed someone said, but they actually meant "Raw Beans" (which I love). You also get a cool step-by-step breakdown on how to optimize crosslink data analysis in Proteome Discoverer. They do some filtering inside the software and break it down at every level. Super helpful for anyone using that toolkit in their lab (I think they use 3.1).

Worth noting, they used the freely available MSAnnika node for the crosslinking, which is pretty cool to see in use - and optimized throughout. 

Sunday, November 23, 2025

Plasma fractionation increases proteomic coverage!


 

Y'all aren't going to believe this one. For real. 


Everyone out there complaining about the number of proteins you can identify in plasma proteomics and no one has ever tried fractionating it??? What is wrong with us? 

Whoops. Not everyone got that this is a joke. Okay...so...literally for all of time, everyone has fractionated plasma in some way to get higher coverage. That's what makes the title funny. 15 years ago I was running SDS-PAGE gels, cutting fractions, and running them separately on an LCMS. 13 years ago I was using something called an OFFGEL to first fractionate the proteins at the intact level by isoelectric focusing, then digesting the proteins in those fractions and fractionating them again at the peptide level to get proteomic depth.

The problem with fractionating is that mass spec time is expensive. If each sample takes 144 hours to analyze (for example), you only complete about 60 samples per year - and that's if you never run a QC or a blank, your instrument never needs maintenance, and you work every single day of the year. The first UK Biobank proteomics study would take 833 years. Most people aren't that patient, and we're all sort of looking for ways to get a lot of samples analyzed before we retire. 
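The arithmetic behind those numbers, in case you want to swap in your own gradient lengths (the cohort size is just my round number for the first UK Biobank proteomics study):

hours_per_sample = 144                 # the deeply-fractionated example above
hours_per_year = 365 * 24              # no QCs, no blanks, no maintenance, no holidays
samples_per_year = hours_per_year / hours_per_sample
print(f"{samples_per_year:.0f} samples per year")              # ~61

ukb_cohort = 50_000                    # my round number for UK Biobank study 1
print(f"{ukb_cohort / samples_per_year:.0f} years to finish")  # ~820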

Saturday, November 22, 2025

Hypothetical multiplex tag works in single cells?

 


Did you know there are other tags out there for multiplexing proteomics? Younger people probably don't, and I can't tell you where to get them because I actually, truly can't. Let's change the subject entirely.

Did you know lawyers are seriously expensive? Like, for real expensive. If you're struggling with science salaries maybe you should check it out. Okay, let's go back to this paper.

If hypothetical multiplex tags did exist in some places I couldn't tell you about, could those fictitious tags be used for single cell proteomics?  Are you thinking....ummm...yes...? why wouldn't they be? These people found a team of peer reviewers who thought it was useful to check - at Analytical Chemistry! 


And they compared it directly to the commercial reagents that we all know and love. They used the same instrument (an Orbitrap Fusion Eclipse) with 120k resolution MS1 and 30k resolution MS/MS scans. The only difference is that the fictitious tags - which don't exist, and if they did I couldn't tell you where to get them - used a slightly lower m/z cutoff. They also optimized at a 5:1 tag-to-peptide ratio. 

For the experimental design here, they sorta mailed it in. 3 tags were used for cells and the other tags were used for different controls. They also "optimized" the carrier-to-single-cell ratio by basically saying "...why would it be different for one multiplexed reagent when 15 different papers already optimized this on the same Orbitrap hardware...?" and landing on about 100x-200x to one. 

Then they actually did some interesting stuff by labeling mouse spleen cells with 13 of their available 16 channels. The most interesting part is where they find that if they don't use FDR at all ("set to 100%") they can get 12,543 proteins in mouse spleen cells!!! Someone said "ummm....wtf....you need to use FDR..." and they get 3,991 peptides and 3,602 proteins. So....1.1 peptides/protein on average. Ouch. The FDR calculation scheme is ...unconventional.... and I almost want to download their data and reprocess it in FragPipe to see if the data is good but the data analysis is unnecessarily strange. Oh. The fun I had when I had free time....

Interestingly, however, the authors get those 3,602 (mostly one-hit-wonder) proteins to clearly separate the different cells in the mouse spleen into their originating cell types, and generate a beautiful t-SNE or UMAP plot, and that's what we came here for anyway, right? The authors suggest some follow-up experiments where they plan to combine their tagging solution with the amazing commercial one....



Thursday, November 20, 2025

GlyCounter - find all those glycopeptides whether you fully sequenced them or not!

 

If you've ever tried to look for a glycopeptide in any type of MS/MS spectra you know how very very rare it is that you get all of the information that you're looking for.

If you want to get full sequence coverage of everything, it's probably going to take ETD and 2 different energies of collisional dissociation of some kind. The clever combinations of energies certainly help get you more fragments, but they also increase the background complexity. "Is that 8 ppm away from the b5 ion, or could the NeuNAc actually be the third sugar, in which case that's 3 ppm off of the z4 after the loss of a less likely HexNAc at the end?" (I possibly made that up, because it's equally funny to me whether that's a chemical impossibility or not.)

Do you need ALL the info, though? Sometimes I just want to know things like "did this drug increase the number of spectra with glycan-related oxonium ions?"  I definitely do want to know more than that, but that's what I know how to do with some clever R scripts Conor Jenkins wrote me almost 10 years ago. 
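If you just want that one number, you don't even need R - here's a minimal pyteomics sketch that counts MS2 scans containing the HexNAc oxonium ion at m/z 204.087. The file name and tolerance are my choices, and GlyCounter (below) does far more than this:

from pyteomics import mzml

OXONIUM_MZ = 204.0867    # HexNAc oxonium ion
TOL_DA = 0.01            # tolerance - pick something sane for your analyzer

ms2_total, ms2_with_oxonium = 0, 0
for spectrum in mzml.read("my_glyco_run.mzML"):   # hypothetical file
    if spectrum.get("ms level") != 2:
        continue
    ms2_total += 1
    mz = spectrum["m/z array"]
    if ((mz > OXONIUM_MZ - TOL_DA) & (mz < OXONIUM_MZ + TOL_DA)).any():
        ms2_with_oxonium += 1

print(f"{ms2_with_oxonium}/{ms2_total} MS2 scans contain the 204.087 oxonium ion")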

How do you get to real information - and spectra - for glycopeptides in your data in an easy way? 

You don't.

Until now! Hello GlyCounter! 


You're probably assuming "cool, now I just need someone to download some crappy python scripts, fix them and then make me a dummies guide on how to run them." 

NOPE! Check this out! I wouldn't write about it if it was the python thingy, probably.


It's a slick little GUI that takes straight RAW files or mzMLs! Click your options (including whether you used UVPD or ETD (!!!!!!!)) and it does the rest, including kicking out handy IPSA-annotated spectra! 

Important - if you are using a non-Thermo format and convert your data to mzML, you don't want to compress it. In MSConvert, turn off the zlib compression option. Honestly, that option messes up a lot of other workflows. If you're converting through FragPipe, it might compress them by default depending on what version of FragPipe you're using. 


Gotta run, but if you need a solid new and approachable toolkit for glycan modifications, you should absolutely check this out.