Thursday, March 31, 2022

Fe-NTA for you magnetic people!

 


I'm increasingly encouraged by the level of standardization that we're starting to see in proteomics sample prep. I bet if we did some word cloud things we'd see SP3 and S-Trap getting substantially bigger in the literature each year. I'll be honest, I've never tried the magnet prep, and I probably won't. I've got my prep method and we've handed out these color-coded S-Trap kits to collaborators who have never prepped a proteomics sample in their lives, and I can use Match Between Runs to match their preps to the library data we have here. The consistency is beyond value for me. The same has to be true for people using the magnets. 

If you've built up a magnet powered workflow I'm going to guess this could fit right into it -- 

Wait. I've got to put this in here somewhere, might as well be here. It's too stupid to put at the bottom!


I don't see these listed as available yet, but since they look every bit as efficient as the spinning kits that we use here, I bet we'll see a part number on Pierce in the very near future. 

Wednesday, March 30, 2022

Mapping the Resistome of Pathogens with Proteomics!

 


If you talk to any microbiologist for long enough, generally at least 7 minutes or so, you'll probably land on the topic of how WE'RE OUT OF FUNCTIONAL ANTIBIOTICS.

Outside of really handy bacteria like the gangrene-causing Clostridium perfringens, which has been nice enough to -- knock on wood -- show no signs of figuring out penicillin resistance, everything else is resistant in one way or another to basically everything. Yaaay. Lots of reasons for this, including the fact that antibiotics just aren't profitable to develop. 

Proteomics to the rescue?!? I mean, there aren't all that many proteins in a bacterium to think about, how hard can it be? You can find out in this great new review here! 


This is one of the rare cases where it's more fun to read in HTML than PDF because of page rotation, but this is a stellar and remarkably comprehensive review. 

Tuesday, March 29, 2022

Stuff to read before you install MaxQuant.Live! Important Scary Legal Stuff!

Hey! Don't read any further! 

This version of the IAPI is really old and is not accurate. 

Please see recent blog post corrections! 




Please read anything you get from a vendor thoroughly and, when in doubt, send it to your legal team.

I'll leave this post here for now until I have time to make proper edits. 

Turbo-ID -- in a mouse brain -- with control under cell-type specific promoters!

 


Well....this is a pretty paper to look at.....


And I'll be honest, I don't know how much of an advance this is over a very recent study using the largely similar APEX technology (both of which are somewhat explained in the post, if you aren't familiar, I think). APEX/Turbo-ID allow you to actively biotinylate and enrich anything near your protein of interest. By placing the activator under a promoter that is only used in a specific context you can get even more ridiculous specificity in what you end up biotinylating.

Looks like a lot of molecular biology is still necessary to pull this off but these studies are so cool that it almost seems worth it! 

Monday, March 28, 2022

Key lessons in carrier or boost channels - a new review!

 

This is a really nice perspective and review piece that I just stumbled onto and am going to use a lot during an upcoming marathon of trying to lure biologists to single cell proteomics.


You can check it out here!




It's ABRF time!

 


It's already March, people! 

Heads up, next year is Boston, the US proteomics capital (capitol?). Mark your calendars? 


Sunday, March 27, 2022

EvoSep One -- First (early) impressions!

 


Reminder (Disclaimers --- over there somewhere -->) and never take some weird blogger's opinion of things seriously. 

As much as I always feel like typing about an instrument will probably get me in trouble, I guess that hasn't stopped me and I've wanted some hands-on time with one of these things for a long time. And here it is and it's better than I'd ever hoped, even though I realized almost immediately that I completely misunderstood the whole central premise. 

Never heard of it? No problem, I'll (probably inaccurately) tell you about it! 

What if you weren't allowed to tinker with your flow rates, gradients, or your separation? What if you had a limited set of columns to choose from and each column had a small set of perfectly optimized gradients that come with the instrument when you install it? The team does roll out new methods, but the optimization is done for you by people who know enough about HPLC to completely design HPLC systems from the ground up. 

I bet this will and does lose some people and customers. Me, personally? I'm dreaming of multiple method sections in papers where instead of looking up my HPLC gradient and poorly describing it, I type "used the vendor 60 SPD method" (it's a 22-minute gradient or something, so you can complete 60 total runs in 24 hours). 
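If you like seeing the arithmetic, here's a trivial sketch of how a fixed cycle time maps to the "samples per day" naming. The cycle times here are my rough guesses, not vendor specifications:

```python
# Illustrative arithmetic only: how a fixed per-run cycle time maps to a
# "samples per day" (SPD) method name. Numbers are guesses, not specs.

def runs_per_day(cycle_minutes: float) -> float:
    """How many runs fit in 24 hours at a fixed cycle time."""
    return 24 * 60 / cycle_minutes

# A ~22 minute gradient plus a little overhead lands at a 24 minute cycle,
# which is presumably where a "60 SPD" name comes from.
print(runs_per_day(24))   # -> 60.0
print(runs_per_day(48))   # -> 30.0
```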

If your HPLC needs a lot of optimization -- like you need to run metabolomics and lipidomics and glycomics on your proteomics HPLC and might require 200 nanoliters/minute for one thing and 500 microliters/minute for others -- this might be too limiting. If you're doing proteomics and are tired of everyone yelling at you for your team's gradients always being different lengths on different columns, this is for you. If you spend a lot of time preaching about reproducibility in proteomics but have a pathological obsession with tweaking settings on instruments and...


(...you can't stop...?) maybe this is technology that makes sense for you. 

Something I didn't understand until Dr. Matt Willetts at Bruker let me look at his system is that -- no autosampler vials or plates, ever! You load your samples onto little (probably C-18?) tips that can hold (according to something I saw online) something like 1.5 micrograms of peptides. They're like ZipTips, but you load them from the top and centrifuge them for 1 minute at each step: conditioning the tip, loading the sample, and washing the sample. With a multichannel pipettor (having now prepped sample tips around 10 times) it takes me just about the same amount of time to prep 6 samples as 48, which is the most I've prepped at one time so far. 

If you do just need to add a QC sample because you're doing an optimization study and things went wonky over the weekend, having a bigger autosampler vial full of your QC sample is more convenient. I think I'm going to try prepping 96 QC samples and then leave the tips in the refrigerator and see how consistent they are as I pull from them over time. Again, a big autosampler vial would probably be easier. 

One other interesting thing is the columns that the system uses. They are much shorter and wider than what we're used to! I think the idea is that the peptides interact with the same amount of stationary phase and the wider columns last longer. I'm mostly running TMT labeled samples and I care a lot less about chromatography than about getting as many MS2 scans as possible, so I haven't done a thorough (and boring) retention time analysis on it. I really like the 8cm x 150um column, but if I want to use "Whisper" methods (100 nanoliters/min) I swap to a more normal 15cm x 75um column. 

What else? There is almost no lag time between sample injections once you get going. Maybe a minute or two? 

Baseline sensitivity is a bit lower run to run on the normal EvoSep methods than on a more traditional HPLC, but I suspect that is because of the higher flow rates (1-2uL/min) more than anything else. I bet that is more pronounced for people running 200nL/min. 

Final notes -- the tips aren't free. It looks like they're $1.50-$2.50 each depending on how many are purchased at once. Autosampler vials are a lot less expensive, but if I can fully drop cleanup spin columns then this ends up being a dramatic cost savings. Maybe more impressions will follow later, but after 2 weeks and maybe 300 (mostly TMT) samples, it's a great fit for our proteomics projects. Since we do metabolomics, lipidomics, top down, etc., my impression would be very different if there weren't other HPLC systems here. 

Saturday, March 26, 2022

MaxQuant.Live 2.1 is live with Tribrid (Eclipse?) support!

 


WARNING! Please actually check the legal stuff that I mention below at this other post

Oh yeah! As I mentioned earlier this week with the really impressive new preprint from Huffman et al., this group used new functionality on their Q Exactives that was enabled through the use of MaxQuant.Live. 

A friend downloaded it for his Fusion 3 and let me poke around and take screenshots during the process. While Tribrid support is a big win to allow super powerful enhanced functionality for these systems, that isn't all that is new here. Priority management for targeting was the functionality that really pushed the limits in Gray Huffman's new study, and while that looks greyed out here, this screenshot was taken before everything was connected to the Tribrid. I hope to know soon how well this works on our Q Exactives. 

However, I do think that it is worth mentioning that there are possibly more words in the legal agreement than there were before. Some of these are very cool. Some appear less cool, but who knows with legalese. Just to be careful I've scribbled out identifying information. 



If you're part of a commercial entity you might want your lawyer people to take a look at it before setting it up. I do know for sure that one US company did have problems with the wording in an earlier version of the MQ.L agreement and won't let scientists there use it. Always worth considering, I guess. For the capabilities that this offers (BoxCar, huge intelligent targeting and exclusion lists with retention time alignment, the ability to set your own resolutions, etc.), it might even be worth talking to the smug people in your legal department if you have to! 

Friday, March 25, 2022

Nanoparticle proteomic profiling!

 

I'm at the mercy of how fast a bunch of cells in culture are dividing and the 8am-4pm M-F availability of an instrument I need to use so I have stuff to run this weekend -- so for real (Ben, stop typing and put on shoes!!) - this just needs to go here.


Why? 1) I've started a pilot study for something remarkably like this and I had a shortage of ideas on what to do with the data that I now have. 2) I might lose this paper; from the first read I can do the LCMS work, but I will need a careful read of the supplemental to do the downstream analysis -- now I've got a starting point! 3) There are a lot of people doing nanoparticle delivery stuff around me and I'd love it if we were able to help them all! 

Thursday, March 24, 2022

HPAStainR - What is that protein doing here??

Wow, did I just accidentally stumble onto something super useful while looking for something completely different! 
You know how you're digging through a protein list and -- based on annotation you might ask -- 


Does this liver cell express this protein that the accession says is a neuronal thing? Is the accession wrong or did I mess up really badly? 

The accessions of most proteins are interesting in a historical sense -- like some dude or dudette in 1984 stumbled across it while pumping fruit flies full of radioactive cesium in an attempt to create an even more powerful hairspray -- but they can be misleading or confusing on the surface. 

What if there was a way to quickly look at protein expression -- across the Human Protein Atlas?!?!

That's exactly what HPAStainR does! 




Turns out that "neurowhatever" protein is highly expressed in every tissue! 


Wednesday, March 23, 2022

Toward zero variance in proteomics sample preparation!


Wooohooo! 

More automation, please! 


This group demonstrates the use of an Agilent Bravo positive pressure system for sample prep of mouse heart chunks (lysate is obtained via Barocycler, I think), then breaks out these things: 30 kDa AcroPrep Omega filter membrane plates.

Lots of people are working on these things (we're working on automating 96-well S-Traps and should be up shortly!) but this group went with FASP (filter-aided sample preparation) and the results look impressively consistent. 

FASP does take a while, and I bet you can't add too much positive pressure without breaking them. Interestingly, this group quantifies their protein using amino acid analysis which probably only sounds difficult to me because I haven't done it, but -- again -- the results look fantastic. 

I figured that a big advantage over my S-Trap system would be cost savings, but there isn't as big of a gap as I'd guessed. I think we pay $350 per 96-well S-Trap plate. So it's more like $4/sample vs $3/sample. This would obviously add up over big cohorts, but it isn't something that would make me consider changing my current plans. 
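For what it's worth, the per-sample math behind that comparison (using the plate price quoted above; your pricing will differ):

```python
# Back-of-envelope consumable cost per sample. The $350 plate price is just
# what we pay; treat it as an example, not a list price.
strap_plate_price = 350.00   # ~$350 per 96-well S-Trap plate
samples_per_plate = 96

cost_per_sample = strap_plate_price / samples_per_plate
print(round(cost_per_sample, 2))  # -> 3.65, i.e. "more like $4/sample"
```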


In the end, if a robot is doing the pipetting in exactly the right volumes at exactly the right times, proteomics and science wins. If every study had the level of reproducibility across the board as this group shows in these impressive plots, LCMS proteomics as a field shakes off one of the biggest criticisms the outside world has historically thrown at us and we can push forward to the next challenge! Everyone wins. 

Tuesday, March 22, 2022

New blog rule -- no HeLa studies posted going forward.

There aren't many rules on this blog. There is a general "if you can't type anything nice don't push that orange button" and there are several...recommendations...regarding the use of images of the triumph of 1970s pharmacology that resulted in the one and only Mr. Madness, the Destroyer, Randy Savage.

However, the continued use of HeLa cells in proteomics is dumb and I'm going to attempt to make a rule here and not post on any studies that use these cells going forward. 

Why?

1) Ethical reasons. The estate of Henrietta Lacks filed a lawsuit against the Walmart of Science in October to stop the selling of these cells.

2) Ethical reasons. The NIH has strict rules on the use of HeLa cells for genetic analysis for privacy reasons. The genomes of these cells are strictly controlled. You can get the genome from the NIH, but only through controlled use release, similar to getting access to other patient genetic information. 

Since we are increasingly aware that there is clearly identifiable information (and therefore privacy concerns) in proteomics data, it seems increasingly surprising that the NIH hasn't stepped in on this as well.

3) The proteome of the cell line isn't realistic when compared to most other human tissues. 

One of the reasons that proteomics has continued to use HeLa is that it makes every mass spectrometer look better than it really is, and we're a very vendor driven field. But we're running into this expectation gap. 

Extreme example: someone brings sickle cell-affected red blood cells into your lab for analysis and they know that shiny new instrument their grants chipped in on can identify 5,000 proteins per run from the video that convinced them to sign over funds. You come back with 1,800 proteins and what will they assume? That they were scammed? Or that they should also find funds for a better operator? Hopefully they know that RBCs may only express 2,500 proteins. Other cells and cell types are similar. They don't express every protein in the genome; that's silly. HeLa cells with their mixed-up extra chromosomes actively try to express nearly every protein at once. 

Good for mass spec vendors. Bad for expectations for proteomics.

Reminder, there are other QC materials for proteomics out there.

The Promega K562 digest is great. You don't get as many proteins as a HeLa digest, but it isn't that far off.

NIST produces a pooled liver protein sample for proteomics. I've heard it's great. 

For more words and a video on how my school takes the legacy of Henrietta Lacks seriously, check out this link. 

Monday, March 21, 2022

Prioritized target selection improves reproducibility in (anything) and single cell proteomics!

 

There's a lot here and I am in the middle of a single cell prep, so I have to move fast.

What is it? More proof of what having single cell proteomics as a moonshot project can do for everything! 

Here is the preprint:

Whoa. That's a lot of names! 

Are you super tired of single cell proteomics and did single mouse macrophages just make you vomit in your mouth a little? I get it, but you know that whole "proteomics isn't reproducible" thing that has followed us from the first QTOFs and won't seem to go away?

MaxQuant.Live targeting is a really smart step in the right direction. What it lets you do is make a crazy huge list of targets (25,000 is realistic) and fragment those things in just about every run. But what if you don't have enough cycle time to get to everything that you want to? That happens. Heck, I'd prefer to just fragment every peptide from every human protein every time, but I can't yet (....DIA people...sorry, I ain't there yet). What if you could now prioritize your targets? 

It's sort of like using a targeted list on the Q Exactive and then checking that little box waaaaaaaay down at the bottom that says something like "if idle...pick others", but imagine that you've got tiers. And real time retention time alignment. And you went ahead and built your target list and retention times automatically off of a DIA run. Then you've got a handle on what this is. 
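To make the tier idea concrete, here's a toy sketch of prioritized target selection. This is my own illustration of the concept, not MaxQuant.Live's actual code or API; the function, tuple layout, and numbers are all made up:

```python
# Toy sketch of tiered target prioritization (NOT MaxQuant.Live's code).
# Given a budget of MS2 scans per cycle, targets currently eluting are
# scheduled by tier: tier 1 always goes first, lower tiers fill what's left.

def pick_targets(targets, rt_now, rt_tol=0.5, budget=20):
    """targets: list of (mz, expected_rt, tier), tier 1 = highest priority."""
    eluting = [t for t in targets if abs(t[1] - rt_now) <= rt_tol]
    eluting.sort(key=lambda t: t[2])   # tier 1 before tier 2, etc.
    return eluting[:budget]            # only fragment what fits in the cycle

targets = [(455.2, 10.1, 2), (612.8, 10.2, 1), (788.4, 10.3, 3),
           (520.5, 25.0, 1)]          # the last one elutes much later
print(pick_targets(targets, rt_now=10.2, budget=2))
# -> [(612.8, 10.2, 1), (455.2, 10.1, 2)]  (the tier 3 target misses the cut)
```

In the real thing the expected retention times would come from a library or DIA run and get re-aligned on the fly; here they're just fixed numbers.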

If you're thinking....meh...that's cool, but I don't have a million dollars right this second? How 'bout it costing you not a goshdarned thing? 

You do need MaxQuant.Live 2.1, but it isn't quiiiiiiiiiiiiiiiiiiite ready yet. I found 3 links in the paper and one that did this... what we're currently running is 2.0.3, I think

Now, this is where the plot thickens. 2.0.3 works for all Q Exactives (actually, I've never tried a Focus or a UHMR, but I've installed it on everything else) and Exploris series instruments, but check out this statement: 


If true (this is a preprint, of course) this could be really great for a lot of people out there who wouldn't mind a little more reproducibility out of a venerable instrument! 

Back to the paper, this group does some really impressive single cell proteomics using this approach and reading through that will take more time than I have. I'll keep hitting the refresh button on the MaxQuant.Live webpage and let you know when I see a new link! 

Sunday, March 20, 2022

MicroFASP -- Peptidomics off tiny amounts of material!

 


I've got a deadline so I can't give this study any time, but it is here so I can read it later! Just skimming it, but it sounds like a really, really smart way of getting endogenous peptides from small amounts of sample. 



Saturday, March 19, 2022

CellenOne -- Impressions and suggestions!

 

This year I've gotten a lot of questions about the CellenOne system. This is a big topic since I've basically spent the last 10 months working on sample prep and stealing data analysis pipelines from the single cell RNASeq community. 

If I have to sum up what I've learned rapidly I'd say: The flow/sorting community is actually really really smart and their instruments have evolved in the last few years as much as ours have, but they're just as inundated with concepts and acronyms as we are. I only recently realized that we've been using the wrong word over and over again with our cores and that has had effects on our data. "Sorting" and "aliquoting" single cells are concepts as different to a flow core as top down and shotgun proteomics are to us. Taking the time to learn some of the concepts and key terms can be transformative. Also, we aren't anywhere near as good at bioinformatics (as a community, some people in our field are awesome) as the RNA/DNA people. They have to be great because their data is so comparatively lousy. 😇 Steal their tools for our superior data and we all win. 

Back to the topic: Sometime in November we got a CellenOne demo instrument. In January we let them pack it up and send it back. We're a relatively small pharmacology group with a lot of tools to study human pharmacology; that's our number 1 focus. We have nowhere near the bandwidth to have our own CellenOne system. I can't tell you how very happy I was when I went to the loading dock and the thing was finally gone, for a lot of reasons, not least that we couldn't get our SCP into the building because this thing was blocking the loading dock. 

However, a new CellenOne should arrive on campus sometime next month -- and this is how we've decided to live with the system. It's going to the single cell (RNA/DNA) core, and our lab and the others on campus will be paying users. Experts in isolating and studying single cells will have it, and it will have a dedicated person to do the daily buffer washes and washes and washes and keep the thing in pristine operating condition. The CellenOne can also prep valuable RNA libraries, which will help pay for the instrument and its upkeep, operation, and emitter washing and washing, and it will get to do ProteoChip preps around those.

Also -- while there have been really amazing sample preps using the CellenOne -- the ProteoChip is, in my opinion, by far the easiest. It is also, by far, the lowest throughput. For an Orbitrap based system you're talking about 500-ish single cells prepped, which isn't nearly as impressive to reviewers as it was a year or two ago. There are strategies now for TOFs, but since you can't use all 16 channels, when I used it I was skipping every other well, so I was getting only 260 or so cells prepped. For context, a small robot like a Mantis can prep over 300 cells in one go and it costs around 1/8th the price of a CellenOne. Again, however, the Mantis can't aliquot single cells. You need to get a plate with aliquoted single cells and then take it to the Mantis. 

If I have to wrap up my impressions they are: 

1) The CellenOne is currently the best single system for working with single cells today. Because cells that don't end up on your plate go back where they came from, the efficiency is fantastic. If you only have 1,000 cells to prep this is the way to go (for scale, 1 well of a 12-well culture plate is around 1e5-1e6 cells). 

2) If you're expecting a turn-key solution where you put cells on the deck and push a button and prepped labeled single cells come out the back you are going to be disappointed. Working with single human cells sucks. That's why there are still so many "single cell" studies that are diluted K562 and so few that are actual human cells. 

3) If you can figure out a way to divide the effort and rapidly climbing price of the CellenOne system with other researchers, I think that is a solid plan. If you can afford to have your own, you should probably consider hiring a single dedicated operator. You do not have time to prep your cells on this system and calibrate your mass spec in a single day. If you have a job where you have meetings that you can't take while swearing at a robot....swearing at a robot, regardless of how loudly, doesn't actually seem to help...it just makes the people in the microscopy core next door avoid eye contact with you in the hallway....

4) A CellenOne system is not a requirement for you to do single cell proteomics. Sure, there are some applications where the efficiency of the system makes a lot of sense, but if you have single cell aliquoting capabilities on your campus through a core or a collaborator, you can absolutely still do single cell proteomics. You just really need to stop and think about your pipeline and really discuss with the people who are handling the cells what you want to do and make sure you are all speaking the same language. 

Friday, March 18, 2022

The how-to guide for working with cardiac tissue with shotgun proteomics!

 


One of the reasons that proteomics has started to take center stage over genomics is the increasing realization that some of the most terrifying diseases aren't really genomic in nature. 

Germ line mutations (the stuff you're born with, thank you ancestors!) and cancers are diseases that are due to genetic changes. Deleterious cancer mutations propagate because a mutation in a cell gives it a selective advantage over the cells around it, so it out-divides them. What about cells that don't divide and diseases that are not genomic in nature? You can spend ten million dollars getting read after read after read and not learn a darned thing, except maybe invent a new, harder way to not learn anything.

Neurons don't divide. DNA alterations aren't propagating.

Cardiac cells don't divide. DNA alterations aren't propagating.

While you could argue that all diseases are protein level diseases, these are kind of home runs, right? 

Unfortunately, both are a complete pain in the ass to work with. What if you wanted to dive into the latter? Would an incredibly well-detailed how-to guide that someone else optimized help? Can I get a hell yeah from the back? Thank you!


If you take nothing else away from this great optimization study before jumping right in on cardiac proteomics and following these steps, keep these 2 sentences in mind....


...it sounds like that thing in your chest doesn't beat every second of every day for your entire life because it is simple, but if we want to explore its secrets this tells you where to start! 


Thursday, March 17, 2022

Optimized spatial peptide imaging in frozen tissues!

 


I probably shouldn't admit to this, but I have paid so little attention to MALDI the last....forever.... that I was a little stunned when I realized people in our lab were digesting tissues on slides and identifying/quantifying peptides in those tissues. 

With some of these new MALDI lasers getting to increasingly ridiculous resolving capabilities, the methods increasingly make sense. 

I haven't tried this myself but I'm kind of stumped on something that this technique might be able to solve, and I've got access to the hardware. What I need is a modern, well-written, well-illustrated, optimized-by-experts protocol on the subject that covers everything!


That's what this is! 

Wednesday, March 16, 2022

Challenges and opportunities for Bayesian stuff in proteomics!

 

In the "ummm....what the f- even is this?? did this exist when I was tutoring stats in undergrad?!?" category -- I am happy, though somehow slightly nauseated and very aware at 3am of how very long ago undergrad was (I either forgot every word, or they changed them all??), to present -- 


According to the Wikipedia rabbit hole I went down that started here and somehow got worse the more I clicked, this certainly did sort of exist when I was in undergrad, as Bayes first postulated it in 1763.

My interpretation of the introduction of this paper is that Bayesian statistics allow for the detection of less extreme changes by estimating the degree of change itself (where we're used to testing frequencies). Which might actually mean that this post moves from the list of several thousand sitting here unposted to the one where I push the big orange button and it asks "are you SURE you want to post this??", because this sounds a whole lot more like how biological systems actually behave than what it's convenient to pretend they're like. 

It is amazing when you find a biological condition that produces a massive increase in the whole and mostly unmodified form of your protein....and I can think of just one right this second. Myocardial infarctions cause the creatine kinase levels in someone's blood to jump right through the roof compared to their basal level. Even then, there are a bunch of other things that can cause high CK, and some people just have higher/lower levels of the protein in circulation anyway, but that's what we use in the clinic -- since at least the 1980s....suggesting that most things that are easy have already been found? 

A CK spike 6 hours after a heart attack that drops off slowly over the next two days sounds like something with a "frequency" that we could find with what I use for identifying protein changes. All the other, more complicated stuff is way harder to find that way. And this is where the Bayesian stuff seems to have a lot more power? 
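Here's a toy, numpy-only sketch of that framing (entirely my own illustration, not from the paper): instead of asking whether a change is extreme enough to reject a null, estimate a posterior over the size of the change itself.

```python
# Toy Bayesian estimate of a protein's change between groups (my own
# illustration, NOT the paper's method). Flat prior on the mean difference,
# a (pretend-)known sigma, and a grid approximation -- no fancy libraries.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, 8)   # fake log2 intensities, n=8
disease = rng.normal(10.8, 1.0, 8)   # a modest, non-CK-sized shift

obs = disease.mean() - control.mean()       # observed mean difference
se = 1.0 * np.sqrt(1 / 8 + 1 / 8)           # std error, sigma assumed 1.0

delta = np.linspace(-5, 5, 1001)            # grid of possible true shifts
likelihood = np.exp(-0.5 * ((obs - delta) / se) ** 2)
posterior = likelihood / likelihood.sum()   # flat prior -> just normalize

post_mean = float((delta * posterior).sum())  # best guess at the shift size
p_up = float(posterior[delta > 0].sum())      # P(protein went up | data)
print(round(post_mean, 2), round(p_up, 2))
```

The output is a degree of change plus the probability the change is real, rather than a p-value; that shift in framing is (as I read it) the point of the review.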

As the authors note, LCMS based proteomics might have several reasons for not using Bayesian statistics such as "lack of familiarity" which is a nicer way of saying "I don't know what the f- are you even talking about? are these words?!? help!!" and a lack of access to these tools.

Every figure for this paper, and how it was generated, is available at this GitHub. I'm not sure that I could use them to apply a Bayesian framework to some data here that is clearly nonfrequential (...meh...), but it would be a place to start! 


Tuesday, March 15, 2022

The Emerging Potential of Advanced Targeted MS to Become a Routine Tool in Biomedical Research!

 

How cool is that graphic above? Assuming the screenshot is better quality than it looks, what you'll see is a really smart breakdown of different LCMS targeted methods.

It was poorly cut from this beautiful open review at Chimichanga that you can check out here



Monday, March 14, 2022

Malaria parasites are regulated through combinatorial histone codes!

 

Gotta run, but leaving this here so I remember to spend more time on it later


Malaria is INFURIATING to work with. The published genomes are obviously crappy and haven't been improved much in a dozen years. The stupid thing has a huge protein that class switches almost as fast and as frequently as human antibodies AND the parasites are low in abundance AND the parasite has a 415 stage lifecycle with different things happening all the time.

How does something with only slightly more genes than E.coli do all of this stuff??? Looks like through a dynamic and combinatorial histone code!! 

Sunday, March 13, 2022

DIA Analysis of 400 proteins in non-depleted dog plasma!

 


Wanna feel better about the state of clinical science? Take your dog to the vet when she/he isn't feeling well, and wait 7 days to find out that your dog needs an antibiotic. Most veterinary lab tools are 1) outdated 2) slow 3) outdated and 4) outdated. 

Let's bring in some modern proteomics and see if we can update things! (I mean...you do it...I can't do animal diagnostics due to a big huge COI that I am very aware of and nothing that I'm writing here should be construed as me volunteering anyone for more paperwork because I am 100% NOT suggesting that I should or will do veterinary diagnostics. I'm suggesting other people do it and I'll just tell them how cool they are for doing it, because today's veterinary diagnostics suck) 

That's what this group of cool people did! 


They pooled some plasma from healthy dogs and those with inflammatory conditions of some types, then did some IDA (SCIEX-speak for DDA) on a SCIEX 6600 running at 10uL/min (omg, not 10 nanoliters/minute or even 50 nanoliters/minute -- for real, you can (probably) see if it is leaking and you can flush that air bubble out in a reasonable amount of time) to build a spectral library of interest.

Then they used SWATH on undepleted plasma -- and they find useful quantitative differences (markers?) of dog inflammation that could effectively discriminate the inflammatory cases from the healthy ones. 

Is it a diagnostic? Not yet, but if you could do relatively easy sample prep and look at protein level changes in dog plasma to diagnose diseases it could be a huge step forward and I think it's worth celebrating a start. 

Saturday, March 12, 2022

Exposing the Brain Proteomic Signatures of Alzheimer’s Disease in Diverse Racial Groups!

 


I'm not going to embarrass myself by pretending that I can follow the maths used in this impressive new meta-analysis, but the results are....sobering...and absolutely critical to keep in mind! 


What did they do? They used powerful machine learning approaches to look at several cohorts of Alzheimer's proteomes that were all analyzed with TMT based multiplexed proteomics. There were a couple of goals here. The first was to see if they could identify important biomarkers in the disease. The second was to see how much the genetic backgrounds of the patients change things. 

Good news? They found some interesting markers. 

Bad news? 
We've really, really got to think about the patient samples that we use as coming from genetically distinct individuals who may have seemingly unrelated traits that may affect our results, and whether our results will be the same in other people or populations. We know this. We're all trying to get our numbers of technical replicates and biological n up. While this study reinforces how important this is, it is still optimistic that advanced tools can still find potential markers! 




Friday, March 11, 2022

#JBCMethodsMadness 2022 is underway!

 If you aren't one of the Proteomics Twitterfolk, you might be missing the fun of 

 JBC....

 Methods....

 Madness.....

Imagine like you're yelling it into a refrigerator. OR. YELL IT INTO A REFRIGERATOROROROR...


I've been swamped since US HUPO and didn't know why my Twitter had blown up.

MONDAY (THIS MONDAY) #TeamMassSpec is up against something I've only read about in textbooks and I'm pretty sure no one does anymore and the only person I can think of who ever used it was Rosalind Franklin. 

Want to get in on the fun? 

https://twitter.com/jbiolchem


Blowing up your search space with the expanding role of acylation!

 


Crud....okay, well I WAS excited to read this paper....


I was excited because people have increasingly been talking about the role of acylations in various diseases. (I finally have proof in the reviewed literature that I can do top down proteomics thanks to an annoying multi-acylated protein that changes its subcellular localization depending on how it's acylated. Proof that I don't just read your papers, I can use your techniques as well! One takeaway from the metabolomics study is this: If you put up a sign in some historically poor cities that are largely infamous for illegal drug use that says "we will give you $200 for a tiny amount of your blood but only if you check a box on this form that says I DO NOT do drugs" some people will -- no joke -- actually check that box and give you blood for $200 who actually DO DO DRUGS! WTF, right? Who would have guessed??? So if you are doing a study on the link between different diseases and drug use, you should consider testing your control group for drugs even if they say they don't use them. Crazy, right? This does, in my mind, put a question mark on more than a few studies out there where LCMS based sophisticated targeted or global untargeted analysis was not performed on the patient samples. I say sophisticated targeted because you can find a lot of craziness in the blood of people that is not covered in your standard historic drug testing panels. AND one of the most used colorimetric assays for testing illegal drugs cannot detect ANY of the fentanyl-based compounds and there are at least 3 in heavy circulation in my country today.) 

Great, I wrote a bunch of words in the orange box, and now I don't have time to read this study! I'm not super excited to read about how important these 14 PTMs are, especially considering that most of them are on my least favorite amino acids to consider yet another modification on. I need my trypsin to cut at lysines and I need my TMT to label my lysines, and modifications that mess with either of those tend to end up not getting detected nearly as often. Cysteine mods don't exactly fill me with joy either. Does reduction remove that mod? Does this now mean that I can't use cysteine alkylation as a static mod? 

Important to think about, but as the authors note, we have some considerable technical limitations to consider.