Wednesday, August 16, 2017
If this is true:
1) This could be amazingly useful!
2) Somebody please commercialize it fast so I don't have to ruin my day making my own (terrible) gels!
This new study in ACS can be found here.
You can't tell me you haven't run a gel at least once and wished you could just get that protein out of it intact, right?
The info for what is in the dissolvable gels is in Supplemental Table 1. This doesn't look all that weird. I guess the power comes from the slow dissolving of the gel in the special mixture they used. Hey -- works for me -- someone is going to have a cool problem soon that they ask me about and I'm going to have this paper in my utility belt (well...Kindle...) that will get them one step closer to a solution.
Shoutout to Dr. Murray for the link to this exciting method!
Tuesday, August 15, 2017
Like just about everyone in the world, I'm shocked and saddened by what put a rural town in Virginia in the spotlight this weekend. I'll be honest -- I'm also really really angry and I hope we can all focus these emotions here into motivation to get things back on track.
When I think of Charlottesville, I don't want the first thing to come to my mind to be what happened this weekend. And I don't want it to be for you either -- so here are some unnecessary reminders that Charlottesville is and has been an awesome hub for science!
I'm going to start this post like this -- some guy named Don Hunt is there!!! -- AND he's been there for kind of a while.
Some decent science has come out of the Hunt lab in Charlottesville. I'm having trouble coming up with anything off the top of my head, but I think there are a couple (come on, brain...)
-- Oh wait --- here's one
--- ETD! Electron transfer dissociation -- a strategy essential to many proteomics experiments today for PTMs, intact proteins, and heck, regular big ol' peptides -- came from that little town.
--hmmm...there was this other kind-of-important paper -- I can't remember what it was called -- wait! was it called PROTEIN SEQUENCING BY TANDEM MASS SPECTROMETRY!?!?
Might have been the one I'm thinking of....though, I swear I was thinking of something that everybody is trying to do right now that even with today's tools, software, and super computers is still really hard to do -- wait -- was it MHC peptidomics?!?
I can go on and on with this. Work from the Hunt lab in Charlottesville has been cited in over 47,000 other studies. (Google Scholar numbers this morning).
Of course, this hasn't been Dr. Hunt toiling alone in a little closet lab in that scenic mountain town. In 2007, ACS reported that over 130 grad students and postdocs had trained in the Hunt lab. I've had the privilege to work with several, and among them have been some of the very best mass spectrometrists I've ever met. Some of these students have gone on to do amazingly impactful research in labs of their own -- names like (in no particular order, and just a few chosen off the top of my head -- no disses intended! The Hunt lab neither recruits nor releases dummies.)
John Yates III
...all passed through Charlottesville. No joke -- according to Scholar you're looking at something in the range of 150,000 to 200,000 citations from just the work of these authors alone!
I do have to mention that the chemistry department isn't the only place in Charlottesville with a mass spectrometer. The UVA School of Medicine has a top-notch core facility open to internal and external customers. Proof? De novo protein sequencing is just listed as a service. I've visited. They know what they're doing!
It is also worth noting that some of John Fenn's earliest work on ion beams was done in collaboration with John Scott in Charlottesville (as noted in Dr. Fenn's Nobel Biography here)
Look -- this post is probably dumb -- but if this helps to remind anyone that Charlottesville, VA is a place that can be directly linked to some of the most impactful science (IMHO) of the last half century rather than just a place associated with the deplorable acts of last weekend, then I haven't wasted all my pre-work time this morning!
Monday, August 14, 2017
Carbamylation is one of the primary reasons that I always use the free Protein Metrics Preview node on my RAW files before queuing up a big run. It is generally an unintentional artifact caused by your proteins spending too much time in urea -- or in urea that is too hot. However --
these authors carbamylated some stuff on purpose!
This team intentionally carbamylates proteins and looks at changes in the MS1 profile of several different proteins.
However, UVPD fragmentation of these proteins remains mostly unchanged -- carbamylated or not. While the authors may have used this study as a way to learn more about how UVPD fragments intact proteins and conclude it does not follow the mobile proton model, I'm wondering if this could have other applications.
When looking at intact proteins, there are some big advantages to having proteins accepting fewer charges. This makes the protein isotopic envelope easier to deconvolute and helps reveal co-eluting species (often proteoforms). Obviously, there would be some disadvantages to carbamylating all your proteins -- like now you have another PTM to worry about -- and less charges means your proteins are in a higher m/z range, but I can't help but think this might come in handy later!
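To put numbers on that m/z shift -- a quick sketch of the charge-state arithmetic (my own illustration using ubiquitin, not numbers from the paper):

```python
# Sketch: why fewer charges push an intact protein's signal to higher m/z.
PROTON = 1.00728  # mass of a proton, Da

def mz(neutral_mass, z):
    """m/z of a protein carrying z protons."""
    return (neutral_mass + z * PROTON) / z

ubiquitin = 8564.8  # approximate average mass of ubiquitin, Da

# A typical high charge state vs. a charge-reduced (e.g. carbamylated) form
for z in (12, 6):
    print(f"z = {z:2d} -> m/z = {mz(ubiquitin, z):.1f}")
```

Halving the charge roughly doubles the m/z -- which is exactly why charge reduction spreads out the isotopic envelopes (easier deconvolution) but also pushes everything toward the top of your scan range.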
Sunday, August 13, 2017
I'm not even gonna pretend I could've come up with this idea.
This is the paper.
What did they do? They purified proteins of interest that interact with each other, from cell lines where they had made mutations changing single amino acids at the proteins' interacting sites. Then they did native mass spec on a modified(?) Q Exactive Plus system. It's a little unclear -- it may have the vendor's native mass option ("BioPharma" mode), but I didn't check the reference.
If you have two proteins that interact and you can get a mass spectrum from the native configuration of those two proteins, but you control the mutations in the proteins -- you get a super cool readout! What is the strength of the interaction of those two proteins -- and, more importantly, how does it differ when you change this amino acid over here, vs this one over there?
Right? See how cool that is? How easy would it be to extend this to your protein(s) of interest that you already had somebody construct all those mutants of?!? Sure, the math looks a little daunting at first, but they explain it in such clear detail (good writing!) that I feel like you can just plug this all into Excel and only use that scary math thing at the top of this post in presentations to impress people in your department.
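For the curious, here is my own toy sketch of the kind of readout I mean -- comparing complex vs. free-subunit signal for a wild type and a mutant. This is NOT the authors' math: the intensities below are hypothetical, and real native-MS quantification needs the careful corrections the paper works through.

```python
# Hedged sketch (not the authors' exact equations): estimate the relative
# strength of a protein-protein interaction from native-MS peak intensities.
# All intensity values are hypothetical, purely for illustration.
def fraction_bound(i_complex, i_free_a, i_free_b):
    """Fraction of total signal sitting in the bound complex."""
    return i_complex / (i_complex + i_free_a + i_free_b)

wild_type = fraction_bound(i_complex=8e5, i_free_a=1e5, i_free_b=1e5)
mutant    = fraction_bound(i_complex=2e5, i_free_a=4e5, i_free_b=4e5)

print(f"WT fraction bound:     {wild_type:.2f}")
print(f"Mutant fraction bound: {mutant:.2f}")  # weaker interface -> less complex
```

Swap in the intensities for each point mutant and you get exactly the comparison described above: how does the interaction change when you mutate this amino acid here vs. that one there?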
Saturday, August 12, 2017
On the new list of surprisingly complicated matrices that also make me feel strangely thirsty on Saturday afternoons...
...the BEER PROTEOME sequenced to a depth of 1,900 proteins!
Why are there 1,900 proteins in my beer? I'm going to guess there aren't in mine, because I'm trying to get in shape for my local "basketball league for people way too old to be playing it" and I'm sticking to an American atrocity called an "ultra light" (don't try it!!).
This group pulled a real beer -- a traditional Czech one -- and did some interesting isoelectric focusing based fractionation (the opposite of the 1D-gel that I do) and used an LTQ Orbitrap Elite on the fractions (they employed "high/low"). (FASP was also utilized for digestion as shown in the image above).
I know what your first question is -- where did they get the beer from? From a grocery store! Apparently they live somewhere civilized where you aren't limited to one specialized government-operated location per township that is authorized to distribute these sinful liquids. These intrepid researchers can buy beer for some top-notch science experiments and also pick up some crisps to eat without wasting gas driving somewhere totally different! I'd trade my local grocery store -- which also carries firearms (an increasingly common thing in the U.S.) -- for that in an instant!
If your second question is also -- how much beer was necessary for this proteome depth? -- I can't quite tell. They concentrated it to 3 mg/mL prior to the desalting step mentioned above.
Ben's rambling aside -- This is a solid bit of science on a really interesting topic.
Friday, August 11, 2017
These authors sure know how to make my day! An amazingly in-depth analysis of the lipid membrane microdomains of anything would probably get my attention anyway, but this great new paper is extra awesome:
1) They combine proteomics and lipidomics
2) They do the work on the sexual stage parasites of the universally despised Plasmodium genus (in this case, the mouse-infecting model organism Plasmodium berghei)
3) They do top notch bioinformatics that not only leads to output showing enrichment of the proteins interacting in this awful-to-work-with mostly detergent-resistant mess of lipids, but also protein-lipid associations (what?!?)
4) They show how incredibly important designing an upstream experiment is in biology by pulling off all of the proteomics with a linear ion trap and some really clever analysis, and the lipidomics via TLC and GC.
How'd they pull this off? With an amazingly painful amount of upstream sample preparation. What? Another list? Sorry, I have to, to capture how glad I am that they did this and I didn't have to.
1) Synchronized the parasite (difficult)
2) Infected mouse.
3) Removed parasites when they reached their sexual stage (difficult to determine AND remove)
4) Separated infected from uninfected cells on Nycodenz gradient (blech. blech. blech.)
5) And then it gets fun. Lysing the infected red blood cells, determining the protein content at every stage with anti-parasite antibodies and more detergents and gradient ultracentrifugation and some homogenization.
6) I have to end the list. The amount of work here is truly just awe-inspiring. It just keeps going....
The amount of care that these authors spend on ensuring that they aren't going to the next awful painful step unnecessarily by carefully determining what they currently have is just amazing. I'm so glad there are people in this world who go to these lengths to help understand malaria so we can kill it.
Once they obtain their lipid rafts, they break them down with chloroform and methanol to get the peptides and take the lipids through a staggering amount of work to HPTLC and GC readouts.
And then?? then they make sense of all these measurements and more that I didn't mention!! The network analysis of the proteins is linked with the major lipid class identifications and paints a surprisingly complex picture of the parasite's membrane proteome -- and how it interacts with an also surprisingly heterogeneous lipid profile in the strains they work with.
This was a monumental amount of work, and I'm beyond impressed.
All data is uploaded at Massive and will be available at full paper release (the authors include access info now if you really want it)
Thursday, August 10, 2017
There was a great paper in MCP recently that looked at the metaproteomics of baby poo. Turns out that wasn't the only group working on this easy to obtain, but tough to process, sample type!
This is a surprisingly comprehensive review on the topic -- in fact, it is one of two published this month, in what appears to be a rapidly growing area of our field. I'm challenged sometimes by finding clear peptide-level differences in samples from a single -- fully sequenced -- species. In high-complexity ecosystems? Ouch.
If this area really is growing as fast as the recent literature suggests, we all might be called upon to do this sooner or later, and this review is a great place to start on the topic!
Not-at-all related to this paper:
Long day running between meetings -- and this isn't proteomics -- but I have to post this.
These guys encoded a strand of DNA that was actually a computer virus -- and could potentially hack the DNA sequencer....
Sounds dumb till you consider the fact that some sports agencies have been considering genetic testing for alterations like the "mighty mouse gene" for a while. Tough to do that when the sequencers all crash!
Sure -- this is Sci-Fi stuff now -- but I really liked the article.
Wednesday, August 9, 2017
I have been called upon from time to time to do some intact proteins. Honestly, I believe I've reached the point where I've finally failed enough at it (ask my postdoc advisor) that I'm reasonably good at it. However, I've ALWAYS had some questions about what is going on with the proteins as I kind of haphazardly change settings till I get a good signal -- especially when I'm tinkering with source parameters trying to, for example, get a low-abundance mAb PTM to resolve a little better!
This brand new paper at NIST answers so many of the things I've wondered about in my head that I honestly can't come up with anything except -- how don't I know these authors?!? We're a pretty small community here in Maryland...but I'll sort this out later.
In this work they take several intact proteins of varying sizes -- all interesting (BONUS!) -- starting with CRP and going up to the NIST mAb. They utilize three instruments: an Orbitrap Elite and 2 Q-TOFs. Systematically, they go through a bunch of different parameters, including:
Solvation energies (in-source CID)
They even go through the (always fun for those of us with multiple knee surgeries) manual optimization of the HCD gas pressures with the little knob hidden deep inside the later LTQ Orbitrap systems.
What they come up with is the best study I've ever seen on optimizing an LTQ Orbitrap system for obtaining reproducible intact protein measurements (particularly focused on the low abundance PTMs). Of course, these observations can be directly converted to the other systems -- since a lot of what they look at will be the same for the quadrupole-Orbitraps as well.
I've already sent links to this paper to a number of different people I know who will find this very useful.
Interesting note -- they calculate the intact protein theoretical masses using this cool little program from NIST that I'm embarrassed I didn't know (or forgot) about!
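If you just want a back-of-the-envelope theoretical mass (the NIST tool surely does much more, like full isotopic distributions), the average-mass arithmetic is simple enough to sketch -- assuming the standard average residue mass table:

```python
# Minimal sketch: average mass of a protein/peptide from its sequence is
# the sum of the average residue masses plus one water (the termini).
AVG_RESIDUE = {  # standard average residue masses, Da
    'G': 57.0519, 'A': 71.0788, 'S': 87.0782, 'P': 97.1167, 'V': 99.1326,
    'T': 101.1051, 'C': 103.1388, 'L': 113.1594, 'I': 113.1594, 'N': 114.1038,
    'D': 115.0886, 'Q': 128.1307, 'K': 128.1741, 'E': 129.1155, 'M': 131.1926,
    'H': 137.1411, 'F': 147.1766, 'R': 156.1875, 'Y': 163.1760, 'W': 186.2132,
}
WATER = 18.0153  # Da

def average_mass(seq):
    """Average (not monoisotopic) mass of an unmodified sequence."""
    return sum(AVG_RESIDUE[aa] for aa in seq) + WATER

print(f"{average_mass('PEPTIDE'):.2f} Da")
```

Note this ignores PTMs, disulfides, and adducts -- exactly the things that make real intact work interesting, and why the dedicated tools exist.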
Tuesday, August 8, 2017
I looked at this new paper at JPR for an embarrassingly long time this morning before I got it.
This strategy totally helped!
Honestly, I had to go to the original SILAC-SPROX paper at MCP here as well.
But I've got it now, and I'm impressed and surprised that this works.
SILAC-SPROX is a strategy for determining protein stability across all the proteins in 2 (in this instance -- it should be compatible with 3-plex) cell lines. I'd have assumed that global protein stability would be pretty much constant from one human cell line to the next.
The way they assay this is by taking total protein aliquots and exposing them to different concentrations of urea. SILAC-labeled protein from cell line A and cell line B, each exposed to X concentration of urea, are combined, digested, and quantified using normal SILAC-type proteomics.
If the protein from cell line A is less stable (unfolds more readily at that concentration of urea), then there will be more linear protein available for tryptic digestion, and it will look like protein up-regulation. Of course, some proteins will be more resistant to denaturation by urea than others, so you have to look at the global picture.
This figure from the MCP paper linked above explains the idea pretty well. Remove the gel at the bottom and replace that with nanoLC-quadrupole Orbitrap analysis instead.
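My own toy version of that readout (hypothetical numbers, nothing from either paper): at each urea concentration, the heavy/light ratio reports how much more of one cell line's copy of a protein was unfolded, and therefore digestible.

```python
# Toy model of the SILAC-SPROX readout (all numbers hypothetical).
urea_conc    = [0.5, 1.0, 2.0, 4.0]   # M urea
light_signal = [1.0, 1.5, 4.0, 9.0]   # protein from cell line A (arbitrary units)
heavy_signal = [1.0, 3.0, 8.0, 9.5]   # same protein from cell line B (less stable)

ratios = [h / l for l, h in zip(light_signal, heavy_signal)]
for c, r in zip(urea_conc, ratios):
    # ratio > 1 -> cell line B's copy unfolded (and digested) more at this [urea]
    print(f"{c:>4} M urea: H/L = {r:.2f}")
```

At very high urea everything is unfolded in both lines, so the ratio collapses back toward 1 -- the interesting signal lives in the intermediate concentrations.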
Naively... I would think, sure, this would work when comparing something "simple" like bacterial strains... but human cells are so complicated that the protective machinery would never allow a readout from something like this.
And I would be wrong. This group shows 3 separate experiments with different cell lines that demonstrate this as a measurable readout for different cancer cell lines. They make some biological interpretations of the data based on the protein denaturation patterns, but I used up all my free time this morning working out the method.
Seriously interesting technique! It does remind me of some other global studies based on temperature such as this recent one in Science and an older one I can't think of this morning and I do wonder how these techniques would correspond in readout.
Monday, August 7, 2017
Yesterday I made a short post on this new study in Open Reports. I'll be honest, I thought it was a seriously impressive sample set, but I was a little skeptical about the results. Not that I didn't believe the authors could find the bacterial proteins, but I was skeptical that the results could be that clean cut. I've seen a few "identification of digested bacterial proteins in body fluid" studies and even with high resolution data the matrix is complex enough that there is noise there. These results just seemed a little too good to be true.
Shoutout to ProteomeXchange/PRIDE and these authors for making this data extremely easy to obtain, sort and reprocess overnight!!
For my re-analysis, I threw all the files into PD 2.2 and used the default processing workflow for LTQ-Orbitrap LFQ and the LFQ consensus with enhanced annotation. I made a 77MB FASTA in PD by parsing a UniProt TrEMBL database I downloaded in July on "Streptococcus pneumoniae" and used my normal UniProt SwissProt human + cRAP databases.
I can confidently say, with no reservations at all, that I believe this group did exactly what they said they did. This readout is the simplest one to display what I'm looking at. Green means protein found in that sample.
I have to go into really low scoring stuff before I see a peptide or two from the bacteria flagged as present in the controls -- and the first 2 match cRAP hits as well.
Okay...so this study wasn't published for almost a year from the time it was submitted. I'm willing to bet $10 that part of the holdup was that a reviewer (or two) didn't believe the data could be THIS clean. It is.
I don't know how long it takes to culture CSF in a classical microbiology assay to test for these bacteria -- or how trustworthy results from those classic assays are. What I do know -- this team has shown clear-cut evidence that our technology can make determinations of CSF infection with 2 hours of instrument time per patient (on a mass spec first released 7 years ago!)
Sunday, August 6, 2017
So... nothing is supposed to be in your cerebrospinal fluid (CSF) except the stuff that is supposed to be in there... and especially not a detectable amount of bacterial proteins!
And...in this new study...
this team successfully detects and quantifies pneumococcal proteins in children with bacterial meningitis.
A good bit of this study is optimizing a method for collection, digestion and prep for mass spec. They use in-solution digestion with a commercial MS-compatible detergent prep, and they use 2 hour gradients on 15cm nanoLC columns with an LTQ Orbitrap Velos system.
And...loads of bacterial proteins even in the complexity of the CSF proteome! I'm no medical expert, but those sound like some seriously sick kids.
Progenesis appears to be used for the quantification and all IDs were obtained with Mascot after using the Proteome Discoverer Viewer to create the MGFs. The paper is straight-forward, but the downstream plots for analysis are really clear and eye-catching. They were generated using R and a stand-alone software package called Gigawiz, that I hadn't heard of before this.
All the data was uploaded to ProteomeXchange via PRIDE and is available here (PXD004219).
Saturday, August 5, 2017
I've got a ton of new cool papers on my desktop to read when I get caught up, but I landed on this study while looking for something else, and I found the results remarkable.
1) How much does postmortem degradation affect the brain proteome? To go after the answer, this team used a mouse model system. What they found (using 2D-DIGE, an LTQ, and a TOF) is some massive changes even 1 minute after sac'ing the mice.
2) Can they find markers that would give them a reliable measurement of the quality of the brains for use in further analysis?
I found myself multiple times reading between the lines and imagining that this group primarily worked with human postmortem tissue, and that there was a lot of variability in how those people had died and how long it was before the samples found their way to their labs.
What they found is really interesting -- almost immediate changes after death if they don't do something rapidly to stop protein degradation. Released peptides from protein degradation nearly triple in the first 10 minutes postmortem! Unfortunately, a lot of their known free peptide markers decrease in abundance at nearly the same rate!
They find some markers they like -- which make it to the title and abstract -- that they can use as a metric for the amount of degradation that has occurred in a given sample.
I like this paper because it points out a problem I wouldn't have considered and then uses the available proteomics technology to develop a way to attack that problem. Biologically, the results are pretty stark. There is no real leeway in how long your brain can go without oxygen. Even 60 seconds without it and things start getting very out of control very quickly!
Tuesday, August 1, 2017
HOW TO RELIABLY QUANTIFY 4,000 PROTEINS IN HUMAN PLASMA!
I don't have time this morning to read this whole new Nature Protocols paper as it's quite large (19 pages) and I'm going to argue that it's really several protocols -- but it looks like a seriously valuable resource to me!
What I can tell you:
1) It's exceptionally well-written and clear -- and thorough
2) It shows outstanding reproducibility with isobaric tags -- despite the use of abundant protein depletion (which I generally consider to be something that negatively impacts reproducibility)
3) The troubleshooting section is AWESOME. I suspect at least one of these authors has trained people on these protocols in their facility a few times.
I stored this away to refer to later -- and to send to anyone thinking about doing reporter ion quan based biomarker studies!