Saturday, December 31, 2016

Peptides with phosphotyrosine fragment differently when isobaric tags are used!


Okay...this one is weird...and fascinating!

The paper is from Robert Everley et al., and is in this month's JPR here!


These authors go into great detail in describing the chemistry and why all this makes sense. It involves bond energy and stuff.

They prove it by studying 2,000 synthetic peptides (!?!?!), labeled and unlabeled, fragmented with CID, HCD, and ETD.

My question is -- now that we know more about how TMT phospho fragmentation works, can we use this info to better interrogate our data? Perhaps by weighting the search engines to expect a neutral loss when we are doing TMT-phospho studies?!!?
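
To make that idea concrete, here's a minimal sketch (plain Python; the spectrum format, function name, and tolerance are my own assumptions, not anything from the paper) of how you might flag HCD spectra that show a phosphoric acid neutral loss off the precursor:

```python
# A minimal sketch (not from the paper): flag MS/MS spectra whose precursor
# shows the classic phosphoric acid neutral loss (H3PO4, ~97.9769 Da).
# The (m/z, intensity) peak format here is hypothetical -- adapt it to
# whatever your raw file reader gives you.

H3PO4 = 97.9769  # monoisotopic mass of the phosphoric acid neutral loss

def has_phospho_neutral_loss(precursor_mz, charge, peaks, tol_mz=0.02):
    """peaks: list of (m/z, intensity) tuples from one MS/MS scan."""
    target = precursor_mz - H3PO4 / charge  # loss peak keeps the precursor charge
    return any(abs(mz - target) <= tol_mz for mz, _ in peaks)

# 650.30 (2+) minus 97.9769/2 lands at ~601.31, so this spectrum gets flagged:
print(has_phospho_neutral_loss(650.30, 2, [(601.31, 1.0e5), (650.30, 3.2e4)]))
```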

Friday, December 30, 2016

One simple trick to have both Scaffold compatible AND .PDResult output files!


Hey! Are you still using Scaffold for some projects, but want to use the nice new .PDResult file formats in PD 2.1 for others?  Here is the setting you need to change!

In any of the Processing workflow FDR nodes, you can set your Max Delta CN. Scaffold's tutorial for the topic says to leave it at 1.  Don't worry, though, you don't have to process your data twice!

'Cause PD 2.1 has another MaxDeltaCN filter in the "MSF Files" node!


If you set that to 0.05, your .PDResult file won't change, even though the .MSF files generated by your processing workflow will be larger and in the correct format for Scaffold!  Heck, I can't come up with a good reason (if you have Scaffold) not to leave this setting the same for all of your files -- just in case.

Shoutout to Dr. Fiedler for working this out!

Thursday, December 29, 2016

Bioconductor workflow for spatial proteomics data!


There are some crazy powerful tools out there for proteomics in R.

Wait! Don't leave!

If you completely dismissed this 'cause you've heard this before from some dumb bioinformatics wannabe down your hallway and it was all junk -- or you tried it yourself and found it impossible --

Check this out!


Point 1: Yes, the bioinformatics field is having lots and lots of problems with fakers. How could it not? There are jobs open everywhere for them -- and unscrupulous people -- or uninformed interviewers -- are going to make an increasing number of mistakes. So you're going to find some people who can hardly handle Excel who have somehow scored a six-figure salary and the title "bioinformatician."  Similar things have been known to happen in our field from time to time....

Point 2: Just follow the link and scroll down through this paper!  This is so clearly described -- and the important scripts and their functions so well highlighted -- that I seriously think I could do this! Why do I feel like I recently went on an anti-command-line rampage...?

I guess the point of this post is -- don't get frustrated and mistrust the whole field just 'cause the first 4 "bioinformaticians" you interacted with got their jobs 'cause they looked the part (it is very cool right now for kids to look like they get beat up a lot -- even the dumb ones). There are some seriously good people out there -- producing very good software -- that works!!

Or...can you come up with another way to get dual PCA plots of your differential data that autoscales?
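
(If you're wondering what I mean, here's a rough numpy/matplotlib stand-in -- fake data and my own variable names, definitely not the workflow's actual R code -- for the kind of autoscaled, side-by-side PCA plots the workflow hands you:)

```python
# A toy version of dual, autoscaled PCA plots: z-score each column, run PCA
# via SVD, and plot two score plots (PC1/PC2 and PC3/PC4) colored by group.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # 100 proteins x 6 fractions/samples (fake)
groups = np.array([0] * 50 + [1] * 50) # two made-up classes of proteins

X = (X - X.mean(axis=0)) / X.std(axis=0)   # autoscale: zero mean, unit variance
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                             # PCA scores from the SVD

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, (a, b) in zip(axes, [(0, 1), (2, 3)]):   # the "dual" part
    for g, color in [(0, "tab:blue"), (1, "tab:orange")]:
        ax.scatter(scores[groups == g, a], scores[groups == g, b], s=10, c=color)
    ax.set_xlabel(f"PC{a + 1}")
    ax.set_ylabel(f"PC{b + 1}")
plt.tight_layout()
plt.show()
```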

Wednesday, December 28, 2016

Answers to the 30 blog comments I recently missed!!!


Ummm...so my blog interface has changed. It is much more attractive on my end while I write these posts, but it doesn't alert me to unanswered blog comments when I log in. Maybe that is a setting I can change!

I'll try to answer directly on the post when I get caught up, but

1) Yup, you can definitely do "MultiConsensus Reports" in PD 2.1. Here is how. 

2) Do you still need the FT Tools? Email me directly, please: orsburn@vt.edu; I'll WeTransfer you a couple versions!

3) Yes, the TMT11 reagent is now available. Apparently you buy the TMT11 tag from your Thermo consumables sales rep. They will have the details. You just add it to your TMT10 kit and go!

4) Thank you Proteome Sciences (and Anonymous) for answering question #3

5) Thank you...but I do not currently require viagra...and I'm a little worried about what is on my blog that would make you think that I do...is it the Pugs?

6) The link to the amazing course at Northeastern is here!  If you need to copy/paste it is: (http://computationalproteomics.ccis.northeastern.edu/). I'll add that to the post!

7) Thank you for clarifying the previous use of ion mobility in proteomics!  And for the links to the previous papers on the topic. I'll definitely take a look at them. Now that I've seen, in depth, some of the really amazing algorithms that PNNL has, I need to readdress this idea a little. A local friend I have tremendous respect for endorsed one of these during a meeting today. I'll try to suspend my skepticism that ion mobility would needlessly complicate shotgun proteomics.

8-30) Wait. What is with the viagra thing again?  Did the Spam filters change or do you get to some new milestone when you've submitted over 1,300 blog posts on...just...one of your blogs....I swear this is just an outlet because I CAN NOT STOP TALKING. That is really it. Nothing physiological. My guess is it was all the participation awards I got in the 80s....

i-GPA - An interesting new (command line) glycoproteomics algorithm


I'm leaving this paper here so I can get back to it after work. It definitely looks promising. You can check it out in Scientific Reports here.


If you are fine going by the authors (and, I guess, the 19 months of work by the reviewers to verify), this might be one of the most powerful global glycoproteomics platforms we've ever seen.  Zero false discoveries in CID fragmentation of N-linked glycans!  Which is amazing...bordering on...unbelievable...!

Fortunately, an advanced algorithm that is leagues ahead of the other 15 or so glycoproteomics programs in use in the world would have a download link so you can try it out, right? You know...



Okay...I have to back off on the snark. In the Supplemental info I found a download link!  Holy cow...it is huge!  Update: 3.5GB!!!, 'cause it contains all the data from the paper!

Here is the link to the software so you don't have to look for it.

And...it's command line...well, for those of you who do science -- especially glycoproteomics -- professionally, you can check it out!  I'd love to know how it goes! I draw the line for this hobby here -- I won't look at QQQ data and I won't use command line.

Tuesday, December 27, 2016

Comparison of label free imputation strategies!


Very soon...we're all going to be having a lot of conversations on the topic of "Imputation". And it's gonna be all sorts of fun. Partly 'cause it means a lot of different things to a lot of different people -- and partly 'cause even the people who all think it means the same thing do it 12 different ways.

The different definitions come from whatever background you came from to get into proteomics. If you came from genetics like I did, you've done all sorts of imputation on microarrays. You have to do that because microarrays are a picture of a tiny piece of glass that has thousands of different probes on it. And the biologists at the end expect a measurement from every single one of those probes...even if something like this happens...


See that thing at the bottom? That isn't massive overexpression of all the genes/transcripts in that area of the array (each pixel is a gene or transcript and the intensity of the color correlates to how much of it there is). These instruments can do thousands of these arrays each day -- and not all of them are perfect.

I did the QC on one of the (at the time) world's largest microarray projects and when you have thousands of arrays you need to have a strategy for getting every measurement that you can -- and for figuring out what measurements you can't trust. And we call that imputation!

For a great review on some of the techniques for imputation in proteomics you should check out this new paper (thanks again to @UCDProteomics, who I've obviously been Twitter stalking). Wait...this isn't new...WTFourier?  Oh well. It is reasonably new, and I spent a couple hours making notes on it...


It is going to be more in-depth than anything I'll go into here -- and more in-depth than anything you'll see from any commercial software package anytime soon. This review is partially so deep because this group at PNNL has a completely different idea of how we should be doing proteomics. In numerous studies they have shown that high resolution accurate mass and accurate retention time are enough to accurately do quantitative proteomics -- once you have a library of exact masses and retention times. And they have applied numerous statistical algorithms (some deriving from genomics techniques and others from pure statistics) to a few high quality data sets they have created to find the best ones -- and this is a wrap-up of those. BTW, I LOVE these papers, I just don't think we're quite there yet in applying these to the diseases I care about.

Wow, this is rambly.

Important parts of this paper -- there are 3 ways of doing imputation

1) Single value
2) Local similarity
3) Global-structure

1) Single value:

Pro -- Easy, fast
Overview -- Makes one very powerful assumption and lives -- or dies -- by it: that if you have a missing value, it is because the analyte is outside of your dynamic range -- and is therefore super significant!

This can be AWESOME in proteomics of cells with unstable genomes. In homologous recombination/excision events in cancers where entire ends of both copies of a chromosome are gone...?...you aren't going to have a down-regulation of that protein. That protein will be gone. If you didn't do some sort of imputation outside of your dynamic range, you could walk away empty handed.

Cons: You can have tons of false positives that you think are significant, that totally aren't.  Your spray bottomed out for 2 seconds during sample one of your treated samples -- when you only had an n of 3? Now you have 10 peptides that look really important -- that aren't.

P.S. Up till now this was the only imputation strategy available in Proteome Discoverer quan: "Replace missing values with minimum intensity".
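
If you want to see how simple this strategy really is, here's a toy version (my own numpy sketch, in the spirit of that PD option -- not PD's actual code):

```python
# Single-value imputation: replace every missing value with the smallest
# observed intensity in the whole matrix (i.e., "below dynamic range").
import numpy as np

def impute_min(intensities):
    """intensities: 2D array, peptides x samples, with NaN for missing."""
    filled = intensities.copy()
    filled[np.isnan(filled)] = np.nanmin(intensities)  # global minimum
    return filled

X = np.array([[1e6, np.nan],
              [5e5, 2e5]])
print(impute_min(X))  # the NaN becomes 2e5, the smallest observed value
```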

2) Local similarity:

Pro -- Kinda easy...well...you can download the scripts from PNNL in MATLAB format (I think the DanteR package also has stuff, but I can't remember).

Overview -- This one derives from engineering principles. The idea is this: sending back a reading of "zero" could wreck the whole system, so avoid it at all costs. The textbook example is that you've got 10 temperature sensors down a vat and one shorts out a couple times and comes back with 0 degrees. Better to replace that zero with the temperature of its nearest neighbor.

If we apply that thought to proteomics -- the idea is that it is better to have a 1:1 measurement for a peptide than it would be to have no value for the peptide.  The most used one is gonna be the KNN, or k-nearest neighbors, approach -- this goes one step beyond the 1:1 idea I state above -- the most similar peptides (in a Euclidean sense) to the one with the missing value are used to impute the intensity of the missing peptide.

Further elaboration on why this could be useful --> what if your collaborator (or postdoc advisor) made you filter out any proteins that didn't have 2 or more unique peptides before they'd look at your protein list? This is still common, right? If you had 2 peptides ID'ed and one was 10:1 and one was 10:(missing value), it sure would be nice to put something in there, right? Even if the nearest neighbors made this ratio 1:1, you'd still have 2 peptides and the quan would average out to roughly 5:1. Sure beats having no protein (in the appropriate circumstances).

Cons -- It is almost the opposite of the single value example I mentioned above. You can end up with something approaching ratio suppression here.
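
Here's a bare-bones sketch of the KNN idea (rows are peptides, columns are samples; this is my own simplification -- real implementations like the PNNL scripts handle the edge cases far more carefully):

```python
# KNN imputation sketch: for each peptide with a missing value, find the k
# most similar (Euclidean) fully-observed peptides and average their values
# into the holes.
import numpy as np

def knn_impute(X, k=3):
    X = X.copy()
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])                        # columns observed for row i
        cand = np.where(~np.isnan(X).any(axis=1))[0] # fully observed rows only
        cand = cand[cand != i]
        # distance to row i over the columns row i actually has
        d = np.sqrt(((X[cand][:, obs] - X[i, obs]) ** 2).sum(axis=1))
        nearest = cand[np.argsort(d)[:k]]            # the k most similar peptides
        miss = np.isnan(X[i])
        X[i, miss] = X[nearest][:, miss].mean(axis=0)
    return X
```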

3) Global-structure:

Pro -- Statistically valid, probably

Overview -- This model assumes that missing values are the result of random events that are evenly distributed across all measurements. By reducing everything down to its principal components and then normalizing across the missing measurements, you fill it all in.

Cons -- Well...computationally really, really expensive. For example, in this paper, this group couldn't complete a BPCA global-structure imputation on a dataset with 15 samples -- that contained a total of 1,500 peptides -- IN A WEEK.
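
To give a flavor of the global-structure idea without the BPCA machinery, here's a toy low-rank version (iterative truncated SVD -- a much simpler relative of what the paper benchmarks, not their algorithm):

```python
# Global-structure imputation, toy edition: iteratively replace the missing
# values with a low-rank (truncated SVD) reconstruction of the whole matrix,
# so the filled-in values respect the matrix's overall structure.
import numpy as np

def svd_impute(X, rank=2, n_iter=50):
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0), X)  # start at column means
    for _ in range(n_iter):
        U, S, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * S[:rank]) @ Vt[:rank]  # rank-k reconstruction
        filled[miss] = approx[miss]                    # only update the holes
    return filled
```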

For reference's sake, I crunched >1,000 files with Minora and imputed 16,000 peptides across all files with KNN (local similarity) in 48 hours...with database searching. Maybe this is something you really need a server to do.

This is the first conversation of probably many that everyone will be having soon, but this is, seriously, a good review of the topic!


Monday, December 26, 2016

S-Trap -- Universal protein sample prep?


I've got no hands-on with this one, but Brett Phinney's got some on hand and maybe he'll let us know how it goes. Anything that standardizes sample prep is a win in my book, though!


...especially if it is in a sealed system -- and looks friendly to at least some level of automation!

Sorry...link here!

Sunday, December 25, 2016

The current state of proteomics repositories!


This isn't the newest review ever, but it is very pertinent (and pretty close to up-to-date!)

You can direct (open) access it here. 


Saturday, December 24, 2016

Doubling down on variable phosphorylations


This is a seriously interesting read...and...I'm not 100% sure what to make of it. It definitely throws into question some of the assumptions (and statements!) that I personally make regarding PTMs, search space, FDR, etc.

Or...it throws into question some of my assumptions regarding how the super-secret Mascot algorithms work...

In either case, it is worth a read!  Direct link to this Open Access paper here!


Shoutout to @UCDProteomics for the link!

Tuesday, December 20, 2016

Ten simple rules for constructing a paper!


When Alexis sent this to me she said "I wish I'd read this 10 years ago"...and I wholeheartedly agree!


This may seem a bit simplistic, but if you are writing something/anything up, you might want to at least glance at the figure on page 5. I know I've never seen it set down so clearly!


Monday, December 19, 2016

Two sweet new proteomics methods books just dropped!

Hey! There is nothing wrong with the books currently on my shelf on these 2 topics -- but our field is growing by leaps and bounds!  We know SO MUCH MORE than we did just 4 or 5 years ago!

Seriously, both of these are totally worth checking out:




They are brand new and both seriously solid. The hard copies aren't available to order on Amazon for a bit, but if your library subscribes to full access through SpringerLink, all the chapters have been loaded.

In the book on the left, there is a whole chapter devoted to proteomic data storage and sharing, and a really interesting breakdown of what databases you should be using -- and drawbacks of different strategies.

And the book on the right....


Suggestions for getting the best peptide yields out of different cells and body fluids AND --check this out!


How to get proteins out of plants (have you tried it? you might be very surprised by what DOESN'T WORK. I know I was....)

In not-at-all related news. DJ Bernie, at only 10 weeks old, is already dropping some sick beats.


(Just kidding..this mixer isn't even hooked up!)

Sunday, December 18, 2016

Amazing course in May at Northeastern!



Wow!  I would seriously love to take this course. What a lineup. The speakers and instructor list is like a who's who in proteomic statistics and programming.

Want to make sure you have no problem getting a job in any major metropolitan area on earth? Master the skills at this course. If you want to move to the Maryland/D.C. area I have friends looking right now for people with these kinds of skills.

EDIT: Here is the link to the course (sorry!)

Saturday, December 17, 2016

Is this new paper showing we could have 4x more Orbitrap resolution!?!?


I'll admit it, the title of this paper sounds pretty boring.

It painstakingly details a new way of processing Orbitrap data that these researchers call ΦSDM.  This is definitely a development paper, because it shows that this algorithm has some downsides that will have to be addressed, but it shows a crazy amount of potential as well!

Downsides?

Computationally far more intensive than the normal algorithm(s?) running behind the scenes in your instrument.

May be negatively affected by having more stuff in the Orbitrap -- actually, that is how I read it early in the results, but later in the paper (and a lot more math I don't understand) it starts to look like they work their way around this by "binning" frequency. I'm still going to leave it as a maybe-negative.


Potential?!?!   Is this 40,000 resolution at a 32 ms transient!??!?


On the QE HFs I've used, a 32 ms transient should get you somewhere around 15,000 resolution at 200 m/z.

Since Orbitrap analyzer resolution is m/z dependent, this is what we'd normally see on a QE HF running at 15,000 resolution (in this case, I just grabbed a quick MS/MS spectrum; click to expand)


In a normal "15,000 resolution" scan, Xcalibur reports around 17,500 resolution for an ion near 200 m/z and around 9,000 resolution for an ion that falls in the range of the ΦSDM image above -- for the same 32 ms transient.

40,000 resolution is 4.4 times more resolution!!  If we just think this might be linear -- this is something in the range of 75,000 or 80,000 resolution at 200 m/z with a 32 ms transient!!!!  (Sorry, I'm excited!)
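
Here's that arithmetic spelled out (the numbers are read off the figures as described above, and the linearity is pure assumption on my part):

```python
# Back-of-the-envelope extrapolation, assuming the gain is linear and
# m/z-independent (my assumption, not the paper's claim).
ft_res_at_sdm_mz = 9_000    # what Xcalibur reports in that m/z range, 32 ms
sdm_res          = 40_000   # what the PhiSDM figure appears to show

gain = sdm_res / ft_res_at_sdm_mz   # ~4.4x improvement

ft_res_at_200 = 17_500              # normal FT resolution near 200 m/z, 32 ms
print(gain, gain * ft_res_at_200)   # ~4.4, ~78,000 -- the "75,000-80,000" above
```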

Okay...I'm just going to extrapolate a bit here. And I'd like to lead with the statement that this is just me going into my crazy mass spectrometry fanboy thing while we are very snowed in on a Saturday afternoon -- what if this was linear? What would that mean for us?

In terms of getting lots of resolution -- if this is linear -- our new transient vs. resolution chart for a high-field Orbitrap might possibly, maybe, look something like this --


RIGHT!?!?  Again -- I don't know what many of the figures in this paper mean (I'm a biologist, remember?) but just imagining the possibilities of this makes me very very happy.

From a proteomics perspective, this is in no way the most exciting chart. The most exciting chart would be going the other direction. I really don't need 80,000 resolution in every one of my MS/MS scans (holy cow -- could you imagine how big those data files would be?!?!  Would that be 10GB/hour coming off each instrument!?!?).

The more exciting thought is this --> how fast could I generate 15,000 resolution scans if I had enough computational power to run this algorithm instead of the normal FT algorithm? Could we get 50Hz in an Orbitrap? Sure is a nice thought!


Shoutout to Mass Spec Pro for breaking this one on Twitter and giving me something fun to read this afternoon!

Friday, December 16, 2016

Clean out the litterbox and get rid of some false discoveries!


At first glance, this figure seems a little silly and maybe alarmist, right? I retweeted it, but it didn't immediately crack my top 10 list of papers that I need to give more time. As I passed it by a second time, I thought -- the editors at JPR must have seen something in it, AND the title is funny. Now I highly recommend you read it!!

It is Open Access too!


Here is the idea in a nutshell --> we often use chemically modified trypsin that isn't supposed to spend all its time digesting itself. But...when these researchers go to look at big public repositories and then take into account the modifications present on the trypsin that was used in the study -- they find a lot of peptides that sure look like they came from the trypsin.

Interesting tidbit:


I did not know this at all -- trypsin is trypsin, right? Ugh...okay...so now I'm super depressed. I'm thinking now I've got to add a bunch of dumb PTMs to my search database, or run a separate search algorithm with a bunch of PTMs added to my cRAP database and combine the results downstream...


Thanks, Billy! I can't wait to hear about it!

The solution these researchers propose? A new amino acid!  They add a new amino acid, "O" or "j" depending on the algorithm, give it the mass 156.12571 -- the mass a dimethylated lysine should have -- and then apply the new amino acid to the sequences of the modified trypsins. And...voila!...litterbox cleaned!  (They find they improve their true and false peptide identifications in every sample and search engine they test!)
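
If you wanted to try this trick yourself, the database edit is pretty simple. Here's a rough sketch (my simplification, not the authors' exact procedure -- I'm blanket-replacing every K in the trypsin entries, while which lysines are actually modified varies by product, so check the paper for the real sequences; you'd also have to register residue "O" and its mass in your search engine's configuration):

```python
# Sketch: rewrite the trypsin entries in a contaminant FASTA so lysine (K)
# becomes the new residue "O" (the dimethylated-lysine mass from the paper).
# File names are hypothetical.

def relabel_trypsin(fasta_in, fasta_out, residue="O"):
    with open(fasta_in) as src, open(fasta_out, "w") as dst:
        in_trypsin = False
        for line in src:
            if line.startswith(">"):
                in_trypsin = "trypsin" in line.lower()  # flag trypsin entries
                dst.write(line)
            else:
                dst.write(line.replace("K", residue) if in_trypsin else line)

relabel_trypsin("crap.fasta", "crap_trypsinO.fasta")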

While on the topic, I'd like to remind you cat lovers that Toxoplasma gondii is no joke.  You don't want to end up like this...


Did I read this paper partly because I really wanted to integrate this video onto the blog somehow? Umm....

Wednesday, December 14, 2016

Critical decisions for metaproteomics!



Are you considering venturing into the scary dark world of metaproteomics?!  Do you have no choice, cause no one has sequenced your darned organism yet -- or all of the organisms in your darned sample?!?

You probably want to check out this cool new (open access) insight paper.


This is a nice beginner's guide to the field -- what you need to worry about, where you're going to run into problems, etc.

It is a short read and looks like a quick overview until you get into the Supplemental info, which adds some depth to it.  Quote I find interesting:

MS-based metaproteomics is now practical due to advances in duty cycle and increased mass accuracy for both precursor and fragment masses. These improvements allow for the detection of over 10^4 tandem mass spectra from a single data-dependent acquisition MS analysis of a mixed microbial sample. 

Maybe it's a little dated, considering how many files I'm seeing with 1e5 MS/MS spectra per run, but accurate MS1 and MS/MS masses sure do take a lot of uncertainty out of...well...a lot of uncertainty!

Stellar little review on a crazy complex -- and rapidly growing -- field!


Tuesday, December 13, 2016

Multi-level study of antibiotic resistant Klebsiella pneumoniae!


In today's edition of "terrifying things I didn't know about, but I'm very glad people are working on!" we're going to talk about a multi-drug resistant Klebsiella pneumoniae strain called KP35.

'Cause....



KP35 is an opportunistic pathogen that hangs out in hospitals and gets people who are already down...and if I've got this right, you don't even have to get exposed to this strain of the bacteria. If you're exposed to the right conditions -- and antibiotics -- other, much more ubiquitous strains of the bacteria can essentially become just like this one.

To investigate something like this is gonna require a big team with diverse skill sets, and these authors show how we can contribute what we do to a team like this. 

The paper starts off with histopathology stuff and the establishment of model systems to study KP35, including a mouse model. Hey, I understand why we have to use mouse models, but I think I exhibit an appropriate level of skepticism when I see data from mice. In this study, these authors use their mouse models -- and then validate their observations, for nearly every experiment, in the limited lung fluid samples that they have from patients!

I don't know all the techniques in use here, but there are a lot -- cytokine signaling assays, immune cell killing assays (??), qRT-PCR, and full genome sequencing of the bacteria (I think) are all in play in studying the patient samples and in validating that the mouse model is valid. And then -- the cool stuff happens on the mice.

They take the lung fluid from the mice in their model and pool it, pool the fluid from their control mice as well, and do some deep proteomics --> high pH reverse phase fractionation before going onto a Fusion running OT-IT for deep coverage. The data was processed in PD 1.4, but the quantification is done in something called Qlucore --> something that needs further investigation!  They only take the statistically significant (p<0.05) quantifiable changes (see, this is why Qlucore needs further investigation!) and do downstream analysis with DAVID and IPA.

What do they find out after all of this?  Something super weird (IMHO)! Part of the reason that KP35 is so hard to clear when it infects patients is that it has somehow hijacked the immune system!

There are weird immune cells called MDSCs (Wikipedia article) that actually suppress our immune response, and these things seem to highly activate them to protect themselves. I'm sure these cells normally play an important role in keeping our normal immune response from running amok and destroying everything, but it seems pretty scary that a simple bacterium could exploit them!

TL/DR: Solid multi-discipline study of a scary hospital-acquired bacteria figures out why it is so darned hard to kill -- and hopefully gives us new strategies to wipe it out!

Monday, December 12, 2016

IonStar shows what happens to mice during cocaine withdrawal!


We started watching the Netflix show "Narcos" this weekend. We didn't get through the first episode because, well, it was a little too violent for us -- and both televised dog barking and gunshots set our puppies into complete chaos.

There is a really interesting description in the opening monolog regarding mice choosing cocaine over food and water. I don't know if that is true, but it makes for an interesting segue into a lot of murder for the TV show. Quiet lab ---> crazy violence.

I haven't been quiet regarding my love of the IonStar methodology developed by the Qu lab in Buffalo and I'm excited to see papers using this technique appearing in the literature!

The gist of the method is this -- if your chromatography is reproducible (even your nano chromatography), you do single-shot LC-MS/MS and extract the isotopic profiles from run to run with tight retention time windows and exceptionally tight MS1 mass tolerances, so you don't have to fragment every peptide in every single run. This isn't a new concept -- MaxQuant, OpenMS, and soon Proteome Discoverer can all do this. The difference with IonStar is that it is a packaged workflow (and the algorithms for peak detection/alignment honestly do work a bit better than some other algorithms).

You don't get the depth that 2D-LC-MS/MS will give you, but you get pretty darned deep, and the more samples you are studying the better -- you just need to fragment an ion one time across the 20 or 50 patient samples to ID it and quantify it across all runs.
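
The core matching step is conceptually tiny. Here's a toy illustration (emphatically not IonStar's actual code -- the function, tolerances, and feature format are all made up by me):

```python
# Toy cross-run matching: take a peptide identified in one run and find its
# MS1 feature in another run by accurate mass (tight ppm window) and aligned
# retention time, so it can be quantified without ever being fragmented there.
def match_feature(target_mz, target_rt, features, ppm=5.0, rt_tol=1.0):
    """features: list of (mz, rt, intensity) from one run's MS1 feature map."""
    hits = [f for f in features
            if abs(f[0] - target_mz) / target_mz * 1e6 <= ppm
            and abs(f[1] - target_rt) <= rt_tol]
    return max(hits, key=lambda f: f[2], default=None)  # most intense match

# e.g., quantify a peptide (m/z 712.3912, RT 45.2 min) in a run where it
# was never selected for MS/MS:
run_features = [(712.3915, 45.3, 2.1e7), (712.9100, 45.2, 9.9e6)]
print(match_feature(712.3912, 45.2, run_features))  # matches the first feature
```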

Since ASMS 2016, the Qu lab has published at least 4 studies displaying the efficacy of IonStar. And this is where I get back to what I was talking about!

In this one...



..these researchers show the depth that you can get with IonStar from a super tiny amount of sample on an LTQ-Orbitrap system. Excision of the striatal region (a small internal region of the brain that, I promise you, sucks to remove even if it isn't THE smallest part of the brain) from rats going through cocaine withdrawal and quantification with IonStar pulls out over 2k unique proteins ID'ed in all runs...and a subset that changes markedly between the groups!

Sunday, December 11, 2016

Want to increase your lab efficiency by 10% right now? TMT 11 plex!


So...this might not seem like such a big deal...unless you're about to get 1,000 samples delivered for TMT quantification....

PD 2.x can do the quantification (you just need to build a quan method that has this additional channel)

More information Dr. Saba was able to hunt down:



Yeah!!!

Oh. If you don't want to do the work and just want to add TMT11plex to your copy of PD 2.1 (I haven't checked to see if it works in PD 2.0), I exported a quantification method and you can download it from my Dropbox here.  (I think you can skip signing in if you don't have a Dropbox account and it will still, grumpily, allow you to download the method.)

If you go to Administration --> Maintain Quantification Methods, you can import the method there. TAADAA!! 10% more samples quantified and around 0% more work!
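
And if you'd rather build the quan method by hand, these are the reporter ion m/z values I believe are correct for TMT11 -- the new channel is 131C -- but please double-check them against Thermo's documentation before trusting my typing:

```python
# TMT11 reporter ion m/z values as I understand them (verify against the
# vendor's documentation!). A TMT10 method has everything except 131C.
TMT11_REPORTERS = {
    "126":  126.127726, "127N": 127.124761, "127C": 127.131081,
    "128N": 128.128116, "128C": 128.134436, "129N": 129.131471,
    "129C": 129.137790, "130N": 130.134825, "130C": 130.141145,
    "131N": 131.138180, "131C": 131.144500,  # <-- the new 11th channel
}
```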



EDIT (12/11/16): I can't seem to figure out where to buy the 131C reagent. Hold up, this might not be 100% out yet!  Will update when I know more!


Saturday, December 10, 2016

Ever wondered how to do cartilage proteomics?!? Here you go!


According to a Reddit forum I somehow found myself on, we don't know much about human cartilage...unsurprisingly, it turns out that some guy on Reddit didn't know much about cartilage, while we -- as a species -- seem to know an awful lot about it.

It was already too late, though, and I was on Google Scholar looking for some cutting edge cartilage stuff to read and -- BOOM!


A complete protocol for thoroughly studying cartilage that is, according to the authors, way better than the previous protocols I didn't know about.

They have a decent argument, too! They argue that single stage extraction/digestion techniques don't get the whole picture of the cartilage proteome because each method ends up leaving something behind. The multi-step methodology they describe here allows them to get a more complete picture of what is happening -- at the expense of a lot more sample prep time.

Now...to figure out how to rehab a sore knee faster without some weird pseudoscience quackery... thank you, but I don't need your magic copper knee brace OR your turmeric crystal that I put under my toenail (I made one of those up).

EDITs (after continuing to look at this paper a little more):

This isn't just a methods paper, btw; the study itself is also super fascinating. It goes beyond where you think it would. For one, they are able to use the sectioning method to track the abundance of the proteins they detect at different physical layers of the cartilage they're looking at. It is silly to think of tissues as homogeneous throughout, but that is, by far, the easiest way of doing it. This team goes WAY beyond that -- and you see why they should!

That's not all. There are cartilage samples taken from different places -- hips and knees -- as well as cartilage from patients with different disease states. Guess what? There are proteins that are produced at much higher levels in cartilage from different joints!  Which makes sense, I guess! Knee and hip cartilage take different levels and different kinds of impacts and stresses -- so they would need different distributions of proteins -- and different levels of crosslinks?

I'm just kind of overwhelmed by the level of complexity that we have in something as simple as cartilage and I'm very impressed by the amount of work these authors have put in to elucidate some of this for us!


Proteome profiling OUTPERFORMS transcript profiling!

I'll just start with this if ya' don't mind.....


...and maybe I'll take my dogs for a walk. Meanwhile....


Seriously, though. We're ALWAYS second fiddle. What were the last numbers I saw? Something like $27 for genomic studies for every $1 we get to study the gosh-darned proteins? (I may honestly have made those numbers up, 'cause I can't find a reference of any kind, but I don't think I did.)

This study isn't news to us.  How often are we trying to compare the transcripts to the protein expression levels and it just doesn't work? Like...all the time!

This study actually shows that the proteome links better to the cell phenotype than the transcript does...and since proteins are ultimately...well...responsible for the phenotype -- it seems kinda obvious.... They also show that they can make better pathway links from dysregulation of the proteins in a pathway than they can from the dysregulated transcripts of those same cells!

I really like that title!!  Probably more on this later. I really will take my dogs for a walk now.



Friday, December 9, 2016

DSSO IS OUT!!!


Were you at ASMS?  If so, you probably remember seeing a ridiculous amount of amazing chemically crosslinked data like this!


All of it was based on a centralized workflow:

1) Better chemical crosslinker (DSSO!)
2) Optimized workflow to exploit the unique signature produced by said crosslinker in MS/MS (more on that signature right after this list)
3) An easy, integrated data processing workflow designed specifically for processing it!
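
About that "unique signature" in item 2: DSSO cleaves asymmetrically during MS2, so each crosslinked peptide shows up as a peak doublet with a constant spacing -- ~31.9721 Da (one sulfur atom) between the alkene and thiol remnant fragments. That value is my recollection from the crosslinking literature, so verify it; here's a toy detector for the idea (my code, nothing like the real integrated workflow):

```python
# Toy DSSO doublet finder: look for peak pairs separated by the constant
# alkene/thiol remnant mass difference (~31.9721 Da, i.e., one S atom).
DSSO_DELTA = 31.9721

def find_dsso_doublets(peaks, charge=1, tol=0.01):
    """peaks: list of (m/z, intensity); returns candidate doublet m/z pairs."""
    spacing = DSSO_DELTA / charge        # spacing shrinks with fragment charge
    mzs = sorted(mz for mz, _ in peaks)
    return [(a, b) for a in mzs for b in mzs
            if abs((b - a) - spacing) <= tol]

# Two peaks ~31.97 apart come back as a candidate crosslink doublet:
print(find_dsso_doublets([(500.25, 1e5), (532.2221, 8e4), (600.30, 2e4)]))
```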

Orbitrap Fusion software 2.1 is out (and amazing!) and now you can buy the crosslinking reagent here!

Let's do some protein-protein interaction studies fo' real, yo!  (Let me know if you need help processing the data, I'll try to help -- probably not even kidding.)