Sunday, March 31, 2013

Nature special focus on Big Data. Pretty relevant to us!


What an interesting week in the big journals!  Nature is running a really great focus on Big Data and what the heck we are going to do with all of it.  I think we're all pretty familiar with these problems by now.  I am currently carrying three 1 TB hard drives with me so that I can work on my research on the road, show control data sets to other researchers, and still have space to process any data that they want.

Tranche is worthless to most people now.  I haven't been able to get even one single RAW file off of the site in over 6 months.  Supposedly there is data there, but I think it's probably just a room full of smoking servers.  The problem is that we are generating, on average, about 1 GB/hour with most of the current instrument line.  What do we do with all of this?  How do we store it safely?  Hell, how do we even process it?

(This is what our little server is running at right now....  93% load... )

We're not the only people having problems.  The Next Gen sequencers are having the exact same problem.  A researcher in Maryland that I know told me that his sequencers can generate as much data in a weekend as was produced during the entire 13 years of the Human Genome Project.  Yeah, they feel our pain.

The good news is that since we're all having problems, maybe solutions will be coming more quickly!  If you're at all concerned with where we're going, I definitely suggest that you check this out!


MaxQuant and QE phosphoproteomics data


This is going to sound more like a gripe than what I mean this to be, but has anyone out there tried to process QE AIF-NL-ddMS2 data with MaxQuant?  Holy cow.  My server has been cranking at >90% CPU and/or hard drive capacity for 14 hours now to finish 6 files that PD 1.4 with SequestHT knocked out in <4 hours.  I'll take this to the MaxQuant Google group, but since there is no place in MaxQuant where you can filter the AIF data out from the MS2 data, I suspect that is where the hangup is.  I cancelled my SIEVE and PD queues just to allow MaxQuant more power, but it doesn't seem to have helped.

Saturday, March 30, 2013

cBioPortal -- rapid access to numerous cancer genomics sets


Today's issues of Science and Science Signaling have a special emphasis on cancer genomics.  Most of the articles, unfortunately, won't be available to actually read for a couple of days, but I did manage to find something interesting.  The cBioPortal at MSK is a database that looks through a large number of published genomics data sets and tells you whether your gene(s) of interest have been previously implicated in any forms of cancer.
For example, a study that I did during one of my postdoctoral fellowships implicated a novel phosphorylation site on integrin alpha 4 (ITGA4) in the mechanism of action of a proprietary drug we were working with.  Input ITGA4 into cBioPortal and you get this nifty chart:

This shows that ITGA4 mutations, amplifications, and deletions have all been implicated in these forms of cancer.  This would have been a great slide for our monthly progress reports!  At least we have access to this great database now!  You can find more on it (and a tutorial) at the MSK website.


Friday, March 29, 2013

Culture shock: Proteomics study on CAND1 in Cell makes the news in Seoul!


There is a great proteomics paper in this month's edition of Cell.  I spent some time with one of the authors, Dr. J. Eugene Lee, and got an amazing walkthrough on this study, in which they identified a protein with a completely novel type of activity.  The paper in question, from Nathan Pierce et al., comes out of Caltech and describes the ability of the CAND1 protein to accelerate the dissociation of certain protein complexes (by over 1 million fold!).  The study is a little mind-boggling in its scope and extremely thorough.  One of the most interesting aspects of the study for me is the fact that they used a "pulse SILAC" technique.  After their protein of interest was turned on by their expression system, they changed the media to SILAC media.  This halted production from their expression vector and left all of their protein of interest unlabeled.  Using SILAC in this manner had never occurred to me, but it makes great sense, and I can think of lots of uses for this technique.

This is a great and extremely innovative study where they demonstrate a protein function that has never before been observed.  I fully expect this system to be a staple of future biochemistry texts.

As much as I respect this study, what I almost respect more is that Dr. Lee was interviewed for television here regarding the publication of the study.  Hopefully this doesn't seem odd to some of my readers, but in the U.S. science stories almost never make the news.  In general, our media is very anti-intellectual.  When a study does make the news it is almost always about bigfoot or some conman who hunts ghosts or is searching for places in the bible that are obviously allegorical and not real places.  Large high-profile NASA missions make our news, but they are almost always quick blurbs, unless the mission is a failure and the media can use it to criticize science in general and NASA in particular.  In the U.S., you don't get on TV for doing something smart, but you will be a daily feature by proudly flaunting your ignorance.  Sorry if that was a tirade.  I'm just so very impressed that there are cultures that respect scientific achievement and I'm so glad that I get to spend time in one of them.



Thursday, March 28, 2013

LuciPHOr -- phosphosite localization for the TPP


LuciPHOr is one of the gems that I ran into at KHUPO.  I spoke to one of the developers and I can't wait to try this out.  Upcoming weekend project:  run the same dataset through LuciPHOr with the TPP vs Proteome Discoverer 1.4 with phosphoRS 3.0.  Would LOVE to see this comparison.  Considering I'm composing a phosphoproteomics paper right now, this might be a perfect place to use it.
Anyway, LuciPHOr is a phosphosite assignment program specifically designed to work in conjunction with the Trans-Proteomic Pipeline.  It is also primarily written for Linux.  This shouldn't be a problem for us Windows slaves, because we can probably just run it through the command line once we've compiled it.  If I like it nearly as much as I expect to, I'll write up a GUI for it and post it later.  Again, this might fall onto my "to-do list for when I finally find that hyperbolic time chamber" or "things for my clone to do the next time he's on a 16 hour flight and not intoxicated" list.
Wow, I digress.  The really cool thing about LuciPHOr, and why I am so totally sold on it, is this:  it has an FDR calculation for the localization of the phosphorylation.  They call it the "False Localization Rate."  It uses a lot of high-level statistics, not my forte, but the data that I have seen looks really promising.

Korea HUPO Day 1


I just wrapped up day one at KHUPO.  What did I learn?  There is some top-notch proteomics research going on here in Korea.  Novel approaches to some old problems, ambitious tackling of some new ones, and some really really nice looking software.
I'm still a little unclear on what I am and am not allowed to write about.  The only thing that I have definite permission for is some new software, but I want to take it for a test drive first.  This is just a teaser, I guess, but it's hard to walk out of there and not write something!

Wednesday, March 27, 2013

The ATP-binding proteome of Tuberculosis


Currently in press at MCP is a really cool project that makes amazing use of the Thermo (Pierce) kinase enrichment kits.  This paper from Lisa Wolfe et al. represents the work of a group from several institutions and uses the kit to find the proteins that use ATP during the Mycobacterium tuberculosis life cycle.  If you're unfamiliar with these kits, you should really check them out.  What you get is purified ATP or ADP tagged with biotin.  You place these compounds into your experimental system and if, say, your drug treatment (as I've used it) causes a whole lot of kinase action, then the tag you used will be integrated into the nucleotide-binding site of the protein.  You then have a permanently tagged protein that you can pull down and implicate in the function you are studying.  They can be a little tricky to optimize.  This is a high-level experiment, but to have your kinases enriched and to know that they are active can give you more information than just about any other experiment.  The application of these kits is pretty much up to your imagination and your biological system.

I stole the picture above from the PierceNet website, but you can find more information here.  If you are interested in tuberculosis or in a great way to apply this technology, you should definitely check out this paper!

Tuesday, March 26, 2013

ProteinLasso -- use super statistics to estimate protein interference

There is no art whatsoever on the ProteinLasso website, so I made my own icon with the help of Google Images.
Anyway, in every complex MS/MS experiment we ultimately select for fragmentation some ions that we didn't mean to.  Increasing sample complexity, increasing the isolation window, and decreasing the chromatographic separation all exacerbate this fact.  There have been a number of different approaches to estimating or dealing with this.  ProteinLasso is a new approach described in this recent paper from Ting Huang et al., out of the Dalian University of Technology.  The approach here, as far as I can tell, is some high-level statistics called lasso regression.  My expert eye can pull a lot of very large capital letter Sigmas out of both the figures and the text.  The end point, however, is pretty clear -- false discovery rate calculations that appear to work with the same degree of efficiency whether the sample is simple or incredibly complex.
 We'll spend some more time on FDR in the near future, but you should check out this paper.  The software is also available through sourceforge if you just want to plunge right in.


Is Paris Hilton interning with Steve Gygi?


In a bit of silliness, and something that approximates news (or at least what we seem to consider news in the U.S....), I was looking at the current software offerings from Steve Gygi's lab and was surprised to recognize a face in the lab photo.  Paris Hilton appears to be working in the Gygi lab, or at least stopped by for a visit.  In casual conversation, you often wonder where these starlets go -- mostly to rehab, but not Paris!  She's in one of the premier U.S. proteomics labs.

Monday, March 25, 2013

Excessive carry-over in your LC-MS system?


This isn't news, just another random topic for discussion (monologue).  Here is the question, though:  how do you know if you are having excessive carryover in your LC-MS analysis?  There are lots of ways to check this, but this is what I do:  I run lots of blanks and I process almost all of them.

Expanding:  in between samples of importance, or in between quantitative runs without internal controls (label-free or SRM or whatever), I inject a normal-size blank load (2-10 µL, depending on the LC in question).  I then process that sample using an appropriate database through my normal processing scheme.  Since I almost exclusively work with human samples these days, I simply use my Homo sapiens UniProt FASTA (or IPI Human) with the cRAP database either tacked onto the end or searched in parallel.  This gives me a pretty good metric of how clean my system is.  I don't freak out when I see some peptides.  I only freak out when I see a lot of peptides.  These instruments are so sensitive that they are going to find peptides floating in the air or in reasonably fresh buffer.  They shouldn't, however, find dozens of peptides.
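If you wanted to automate that "some peptides vs a lot of peptides" judgment call, a toy sketch might look like the one below.  Everything here is hypothetical: the function name, the 12-peptide threshold, and the peptide IDs are all just illustrations, not a published cutoff -- the idea is simply to count blank-run peptides that aren't in your contaminant (cRAP) list.

```python
def carryover_verdict(peptide_ids, contaminant_db, max_real_peptides=12):
    """Flag a processed blank as suspect if it yields many non-contaminant peptides.

    peptide_ids: peptide identifications from searching the blank run
    contaminant_db: set of known contaminant peptides (e.g. from cRAP)
    max_real_peptides: arbitrary illustrative threshold for "a lot of peptides"
    """
    real = [p for p in peptide_ids if p not in contaminant_db]
    return "suspect -- start cleaning" if len(real) > max_real_peptides else "acceptable"

# A blank with one keratin hit and two stray peptides is still fine:
contaminants = {"KERATIN_PEP1", "TRYPSIN_PEP1"}
blank_hits = ["KERATIN_PEP1", "PEPTIDE_A", "PEPTIDE_B"]
print(carryover_verdict(blank_hits, contaminants))   # acceptable
```

Dozens of non-contaminant hits would tip the verdict to "suspect" and send you to the cleaning checklist below.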

If I do find enough peptides to freak out, these are my steps:
Clean the front of the mass spec.  Capillary and front plate with 50% methanol should do it.  If that doesn't help, it is time to approach the LC system.  The order varies from person to person, but this is how I approach it:

1) Change my blank
2) Change my wash solvents
3) Change my running buffers
4) Run high organic solvent for a long period of time (a few hours to overnight, if I can possibly afford it -- which is extremely rare)
5) If none of these help, I try injecting something harsh (a full sample loop of 20% isopropanol).  Note:  This may not be appropriate for all nanoLC systems.  I don't think I've ever used bold print in all the years I've been writing in this silly blog, but I don't want you damaging your LC system.  I don't know a lot about LCs, just enough to successfully get by and to know that I've never noticeably damaged the LC systems that I have had.  That doesn't mean that you should trust me on this one.  When in doubt, consult the manufacturer.
6) If this doesn't help, it is time to approach the guard column (if employed) and finally the analytical column.

End of tirade.



Sunday, March 24, 2013

Q Exactive tutorial for LTQ-Orbitrap users


A while back I threw this video together quickly for a lab with LTQ-Orbitrap experience that had just received their first Q Exactive.  It is a quick run-through on how to set up a TopN experiment.  It was enough to help them get started before they could get their official training.  The main focus is on nomenclature that differs between the two method-building software packages.  It isn't high enough quality for an official video, but I figured I'd post it in case it was useful to someone out there; why waste the effort if it could help even a little, right?  There is a funny point where my dog's mortal enemy, the UPS guy, rings the doorbell and she does her best to get through the screen door to get him.  It is crudely chopped out.  Again, you won't see this one anywhere official, but hey, why not throw it out there?


Saturday, March 23, 2013

Phosphorylation ends up shifting all over the place?


Ummm....
If this was coming out of other labs, maybe I would glaze right over this.  But this is coming out of Karl Mechtler's lab.  Backing up....  A paper in this month's Proteomics (Wiley) from Andreas Schmidt et al. examines arginine phosphorylation using a global approach.  Arginine phosphorylation in bacteria is something that just exploded last year, with a couple of really nice papers on where/how it works in Bacillus subtilis.  In this paper, the group looks very closely at this and other phosphorylation sites and shows what appear to be phospho groups jumping all over the place during LC-MS/MS analysis.  I would have glazed over this because I recall this being a topic of conversation about 6 years ago -- phosphorylations moving around during fragmentation -- but there was some solid contradictory evidence and we all moved on.
 And here it is again.  Just when we think we've got a system figured out.  If you are doing any kind of phosphorylation analysis this is worth a read.  If you are one of the many groups tracking arginine phosphorylation in bacteria (or mammals!?!?) definitely jump on this.

Beta testers needed! Proteome Discoverer tutorial videos


I've been kind of quiet this week.  Crazy busy working on my talk for Korea HUPO and this project.  A survey following this year's North American iORBI tour suggested that people would be really interested in tutorial videos on our software.  This week I've been trying to generate a ton of them.  Several are now finished and I'd appreciate feedback.  If you are interested in being one of my test subjects for the Proteome Discoverer 1.4 tutorial videos, just send me an email at orsburn@vt.edu and I'll send you a link to the videos.  Feedback (not related to my accent or grammar!) would be greatly appreciated.

Wednesday, March 20, 2013

ABRF Results 2013


Wow, O.K., this was just brought to my attention in a comment on Sunday's post.  This is a big, big study of search engines and confident IDs.  It is amazing how much can slip your attention in this field.  So many cool things go on that it is impossible to keep track of them all.  This is really neat.  You can find more details here.  Thanks, Eric, for bringing this to my attention.  I hope to speak more with you in the future.

Tuesday, March 19, 2013

New article in press -- Identification of Protein Interactions Involved in Cellular Signalling




This new article is currently in press at MCP and comes to us from Westermarck et al. as a collaboration between groups in Turku, Finland at the institutions whose emblems are shown above.  This review takes a very critical look at the technologies currently available for interrogating cellular signaling networks.  It goes after the classic techniques of the geneticist, such as the yeast two-hybrid assay and systems such as Strep- and FLAG-tags.  They also take a look at affinity purification coupled to MS, and tandem affinity purifications.  You get a nice look at the strengths and weaknesses of each assay for studying cellular networks.

Monday, March 18, 2013

Proteome Discoverer 1.4 vs MaxQuant 1.3.0.5

A couple of years ago, I wrote a short blurb on my experience comparing MaxQuant vs Proteome Discoverer.  Turns out, it may have been the most read thing I've ever written.  If I'd known how many people would read it, maybe I would have done a more thorough job!

Here is my vindication, though!  To celebrate last week's release of Proteome Discoverer 1.4,  I took a very nice SILAC labeled data set and ran it through PD 1.4 and MaxQuant 1.3.0.5 (the newest iteration, as of this posting date).

Dataset:  Human cancer cell line passaged in SILAC media with Lysine (+6) and Arginine (+10).  The data was acquired in one run on a Q Exactive system with a 180 minute gradient using a Top20 approach.  ~600 MB file.

Software settings:  
Dynamic modifications:  Carbamidomethylation (C), Oxidation (M), N-acetylation, and the SILAC labels
MS1 tolerance:  10 ppm (for MaxQuant, 20 ppm first search and 10 ppm second search)
MS2 tolerance:  50 ppm
FASTA:  IPI Human 3.77 (originally downloaded from maxquant.org)
MaxQuant used Perseus 1.3.0.4 with an FDR of 0.01 and ran on 4 threads
PD used Sequest with the Percolator algorithm at default parameters

PC:  AMD Quad Core, clocked at ~3 GHz with 8 GB of RAM

Total search time:
MaxQuant:  109 minutes
PD:  23 minutes

Results:
MaxQuant:  425 total grouped IDs, 27 of which were contaminants and 12 were reverse sequences.
386 human protein IDs
286 quantifiable

Proteome Discoverer:
465 grouped proteins
380 quantifiable.
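A quick back-of-the-envelope way to look at those numbers is the fraction of grouped IDs that were quantifiable.  The sketch below just re-uses the counts listed above; note that treating MaxQuant's 386 human IDs and PD's 465 grouped proteins as directly comparable totals glosses over real differences in how the two programs do protein grouping, so take the percentages loosely.

```python
# Counts copied from the results listed above.
results = {
    "MaxQuant 1.3.0.5": {"ids": 386, "quant": 286, "minutes": 109},
    "PD 1.4":           {"ids": 465, "quant": 380, "minutes": 23},
}

for name, r in results.items():
    frac = r["quant"] / r["ids"]   # fraction of grouped IDs with usable quan
    print(f"{name}: {r['quant']}/{r['ids']} quantifiable "
          f"({frac:.0%}) in {r['minutes']} min")
```

Roughly 74% of MaxQuant's IDs were quantifiable vs roughly 82% of PD's, on top of the raw difference in counts and the nearly 5x difference in search time.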

I'll be honest.  I was scared at first.  MaxQuant has gone through some significant revision since I was last using it commonly.  Some of the new features, such as the ability to go back and re-search spectra are crazy impressive.  That team contains some of the best researchers in our field and continues to innovate how we do proteomics and process MS/MS data.  However, I have met a lot of the team that writes PD and they are no slouches either.

I'm going to throw in a caveat here:  I am an expert at using Proteome Discoverer.  I've been using PD since version 1.0 and have been using the beta versions of PD 1.4 for about 6 months.  I'm less adept with MaxQuant.  For a quad core CPU, I don't know how many threads would be optimal.  4 seems the smartest choice, but I may have been able to optimize that number and speed it up (virtual threading, or whatever...).  It may also be possible to optimize the first search/second search parameters to gain more IDs.  Would I have picked up almost 100 more quantifiable IDs?  I doubt it, but maybe the disparity wouldn't have been as large.

In time, I might do a follow-up article to this one.  It would be nice to see what the overlap in ID and/or quan is like.  It is a little difficult due to how differently MaxQuant and PD deal with protein grouping.  My guess, however, is that the majority of IDs and quan are the same, but that Andromeda and Sequest would each add complementary data to each other.

But for now, for just pure depth of coverage and quan, Proteome Discoverer appears to be the winner, though I'd still encourage you to try running both.  The worst that would happen is that you'd get more data from that MS/MS experiment.


Sunday, March 17, 2013

It's here!! Proteome Discoverer 1.4


It is here!
This week, Proteome Discoverer 1.4 was officially released.  Multi-core Sequest anyone?  Improved FDR calculations?  Spectral library searches?  Improved support for Q Exactive data?
Wooooo!
You can download the demo at the BRIMS portal, or get an official installation CD by contacting your local sales rep.

Saturday, March 16, 2013

How to set up a binning experiment on an Orbitrap


Recently, we've started doing quantitative experiments by looking in smaller MS1 ranges.  But what if you could do discovery based proteomics in the smaller ranges?  Would this help your results?  Absolutely.

This idea has been touched on before by other groups, but this new paper from C.E. Vincent et al., out of the University of Wisconsin, really knocks it out of the park.  The paper focuses on two concepts for doing this, one a little older and one brand new.  They call the first one a binning experiment, and their new approach a tiling experiment.  For now, I'm going to skip the tiling experiment, which is the real star of this paper; it requires subtle alterations of the code controlling Xcalibur on the Orbitrap computer.  The lesser note is the binning experiment, which everyone can do on any Orbitrap or LTQ system.


Above is a quick schematic I made of a binning workflow.  In this, you run the same sample multiple times, but during each run your TopN precursors for fragmentation can only be selected from within a narrow mass range.  It increases your dynamic range within that area and lets you dig a whole lot deeper.  The narrower the bin, the deeper you'll dig into your sample.

In a tiling experiment, this is taken one step further:  multiple narrow ranges are selected in one run.  For example, if this is a Top20 experiment, the first ten MS/MS events can only occur on precursors from mass range 400-500 and the next ten can only occur on precursors from 500-600.  Without hacking Xcalibur using the developer's kit, this option simply isn't available.  I would think, though, that the binning experiment would do a better job of obtaining sample depth than the tiling experiment, since you are concentrating more time on a narrower range.  The benefit of the tiling approach is that, if utilized well, you would get depth without re-running the sample multiple times, something most of us can't afford.
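To make the two designs concrete, here's a small sketch laying out the precursor windows for each approach.  The function names, the 400-800 m/z span, the 100 m/z window width, and the even Top20 split are all just illustrative numbers drawn from the example above, not anything specified by the paper or by Xcalibur.

```python
def binning_runs(start, stop, width):
    """Binning: one narrow precursor window per injection; re-run the sample per bin."""
    return [(lo, min(lo + width, stop)) for lo in range(start, stop, width)]

def tiling_run(windows, top_n):
    """Tiling: one injection, with the TopN budget split evenly across the windows."""
    per_window = top_n // len(windows)
    return [(lo, hi, per_window) for lo, hi in windows]

# Binning: four separate injections, each restricted to a 100 m/z window.
print(binning_runs(400, 800, 100))
# -> [(400, 500), (500, 600), (600, 700), (700, 800)]

# Tiling: a single Top20 run, 10 MS/MS events per window, as in the example above.
print(tiling_run([(400, 500), (500, 600)], top_n=20))
# -> [(400, 500, 10), (500, 600, 10)]
```

The trade-off falls right out of the layout: binning spends four injections to buy maximum time per window, while tiling spends one injection and splits the TopN budget instead.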

The real gold in this experiment is that this group used these approaches quantitatively.  In each run, even when you are binning a different mass range, you still have the full MS1 scans that you can use for XICs for quan.  So you are able to dig much deeper and still have replicates for accurate quantification.

I encourage you to take a look at this paper.  And if you have questions on how to set up a binning experiment, I have a method that I successfully utilized on an Orbitrap Velos that I can make available.  As always, feel free to email me at:  orsburn@vt.edu

Friday, March 15, 2013

Build the ultimate Proteomics Processing PC on a budget


I get this question a lot:  How much power do I need to build a PC for proteomics processing?  And what components would I need?
In order to help answer this, I'm going to give the configuration that I would build for processing proteomics software on a budget.  The picture above is my all-time favorite PC case/test bench, the Antec Skeleton.  It is a steel frame wrapped in plastic with easy accessibility to all the components and open-air cooling.  That big circle at the top where it says Antec?  That's a great big fan (6 inch or something close).  More about the case below, but here is my shopping list:

Processor:  AMD FX-8350  ($190)
I'll start here because this is going to determine what motherboard I use.  For me, I just have to go with an AMD 8-core system.  These come in several varieties now.  The AMD FX-8150 runs about $150, but the newer (and 18% faster) AMD FX-8350 runs $190.  When can you get 18% faster for $40?  Compare this to the (SLOWER!!!) Intel i7-3920XM at $1,100, and you realize what a bargain you are getting with the 8350.  Is it the fastest processor in the world?  No.  But is it fast (4 GHz), easily overclocked, and <$200?  Yes.

Motherboard:  Asus Sabertooth 990FX ($200)
This gets significantly easier now that I've chosen a processor.  My requirements:  able to use the fastest RAM chips, able to handle at least 20 GB of RAM, and multiple USB 3.0 ports for fast and easy data transfer.  My choice is the Asus Sabertooth 990FX.  It can take up to 32 GB of 1866 MHz DDR3 RAM and has 4 separate USB 3.0 ports.  It runs ~$200.

RAM:  Patriot Viper 1866 MHz 16 GB (8 GB x 2)  ($115)
There are cheaper options for RAM.  But I'm building a dream machine here.  I like this setup because I would only be using 2 of my memory slots and the Asus motherboard has 4.  If I found I needed more RAM I could just buy 2 more of these down the road (when they are much cheaper!) and slap them in.

Hard drive 1:  Any 7200 RPM 1TB or bigger hard drive ($70)
Put in a bigger hard drive or two for data storage.

Hard drive 2:  OCZ 120GB Agility 3 Series SSD drive ($90)
The requirement here is a 6 Gb/s transfer rate over a SATA III connection (rocket fast).  Use this for the samples you are analyzing at present.  When you're done, transfer them to hard drive 1 for storage.

OR

HighPoint RocketCache ($155)
I wrote about this earlier.  It uses an SSD as a cache to accelerate the slow drive to near-SSD speeds.

Boring stuff, i.e. case, power supply, connections (<$200)
This is my wishlist, so I get the Antec Mini Skeleton in the picture above for $110, plus a standard power supply and wiring kit with some nice shrink-wrap cable covers so it looks nice.

Windows 7 64-bit Enterprise or Pro ($150)

Total for a fast proteomics processing PC on the cheap?  ~$1,000.  Throw in the RocketCache and you're still under $1,200.
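Just to show the arithmetic behind that total, here's a quick sketch summing the listed prices.  The $185 for the case/power-supply line is my own stand-in, since that item is only bounded above at under $200.

```python
# Prices from the shopping list above (case/PSU estimated at $185).
build = {
    "AMD FX-8350":             190,
    "Asus Sabertooth 990FX":   200,
    "Patriot Viper 16 GB RAM": 115,
    "1 TB 7200 RPM drive":      70,
    "OCZ Agility 3 SSD":        90,
    "Case / power supply":     185,
    "Windows 7 64-bit":        150,
}
total = sum(build.values())
print(total)         # 1000
print(total + 155)   # with the RocketCache thrown in: 1155
```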

For a little extra oomph, particularly if you are going to run the processor at a higher clock (because 4.0 GHz isn't enough?), you could investigate things like a water cooler for the processor.  RAM can also be water cooled if you are pushing it that hard, particularly in a less well-ventilated enclosure, but this setup would be enough for me even when running big Q Exactive phospho files.

Now, would a 12-core Xeon setup smoke this one?  Absolutely, but you'd pay more for that processor than you would for this whole PC, even if you threw in a nice monitor.  Again, power for those of us without a ton of money.

It is worth noting that a lot of PC retailers, like TigerDirect, Newegg, and Amazon, bundle components.  It may be possible to get the case, power supply, RAM, motherboard, and CPU listed above at a significant discount if you bought them together.  It's worth shopping around!





Thursday, March 14, 2013

Can we quantify a relationship between PTMs and phenotypes?



Here is an interesting line of thought -- can we map the relationship between PTMs and phenotypes?  It feels like a big step.  As a microbiologist in my past life, we often worked on the link between genotype and phenotype.  In bacteria, these things can line up pretty well:  "this gene helps shape the bacteria's size or what sugars they can utilize."

They don't always line up, of course, but we often chalk that up to redundant mechanisms.  In eukaryotes, it is even worse.  We have fail-safe after fail-safe protecting essential functions.  We also know that our proteomes are orders of magnitude more complex than our genomes.  But what if you could take a step forward, out of the system, and look at the PTMs and how they line up with phenotypes?  Would this assemble some sort of a bigger picture of what is happening?

This is the question that is examined in this new paper in MCP from Warren Albertin et al., out of a multi-department effort mostly centered at the University of Paris.

In this study, they use a lot of fancy statistics to analyze the acetylation of proteins in yeast and look at the correlation between these modifications and the phenotypes of these yeast.  The differential analysis was done with 2D gels, and the mass spec employed is an undisclosed detail.  I find this funny:

"Spots of interest were quantified using Progenesis software (Nonlinear Dynamics, Newcastle, UK) and identified using mass spectrometry (MS)."

Yeah, cause who cares what MS was used and how? ;)

It is clear that the statistics are the stars of this paper.  And it all appears to be quite good (and way above my head).  It is an interesting pixel into the potential of what the big picture could be here.



Pittcon update -- Elise Andrew has a booth!


Not really proteomics, just a random awesome thing I got in my Facebook Feed:  Elise Andrew, the founder of  "I fucking love science" will be at Pittcon giving out shirts and autographs.
Don't worry, I'm actually reading literature and have real news to put up here.  It's just been a busy week.

Wednesday, March 13, 2013

PITTCON next week


PITTCON kicks off in Philly on Monday, March 17th and runs through the 21st.  If you're unfamiliar with it, it is one of the largest conferences for lab sciences in the world, with a projected 17,000 attendees this year and nearly 1,000 companies demonstrating their newest developments.  Short courses are also available on everything from spectroscopy through drug testing.  One of the courses is an intro to LC-MS for beginners.  The courses are a little on the expensive side ($300-$400), but there are multi-course discounts.  I've never been to this, but it always seems like something that would be a lot of fun!

Tuesday, March 12, 2013

Refined procedures for studies of ubiquitination --20,000 sites in 1 go!


Another cool paper in MCP this month is this gem from Udeshi et al., out of Steven Carr's lab.  In it, the group reports their strategy and results from fully optimized enrichment and analysis procedures for anti-di-glycine remnant studies.  The study in question was performed on SILAC labeled cells, demonstrating that these studies can be qualitative or quantitative.  All of the MS/MS analyses were performed on a Q Exactive and analyzed with MaxQuant.  The end result -- more than 20,000 ubiquitination sites that can be identified/quantified in a single experiment.
As we continue to find more and more things that are affected by ubiquitination, I think this is going to be an extremely well cited paper for how to do this kind of study.  Definitely check it out!

Monday, March 11, 2013

Partial explanation of dietary restriction benefits?


The picture above is one of a series from the primate longitudinal study that started decades ago.  Most of us have seen these.  The primate on the left was rather severely restricted in caloric intake, while the one on the right was not.  They are the same age.  This image still shocks me a little.  While caloric restriction is extremely controversial, a paper in this month's MCP may have a partial explanation of the benefits of this practice.
The study from He Wen et al. is the result of a collaboration between multiple top-notch research centers in Seoul.  A particular focus was placed on analyzing the phase II metabolic byproducts in the urine of these animals.  If you're interested in this topic, or in metabolomics at all, I strongly recommend you check this one out!

Sunday, March 10, 2013

CBTC at University of Toronto is open and looking for collaborators/clients wanting top-notch MS work




After some reorganization, the Center for Biological Timing and Cognition at the University of Toronto was down for a bit.  With new management and guidance, it is up and fully operational now and looking for clients and collaborators.  If you are looking for some top-notch analytical technology, as well as a skilled staff to run it for or with you, definitely check out this center.  You can follow this link to see what services they offer, but I can assure you that they are loaded with some great technology.

Besides an Orbitrap for proteomics, they also have the capabilities for small-molecule quantification, as well as high-tech fractionation and enrichment techniques such as off-line HPLC and capillary electrophoresis.

It doesn't end at mass spec either.  They've got DNA sequencers, thermocyclers, and all sorts of other goodies.

You can follow this link for more information or contact:  Dr. Suzanne Ackloo (suzanne.ackloo@utoronto.ca) or Ken Seergobin (ken@psych.utoronto.ca)


Saturday, March 9, 2013

Can you use SIEVE for Top Down Data?


I started using SIEVE about 6 months ago, and I feel like I'm finally doing label-free peptide quan with the grown-ups.  Yes, I'm biased, but I never personally cared for lining up my chromatography (and there definitely are still reasons not to, to be elaborated on later!!)
But for plain old peptide quan, nothing beats SIEVE coupled with Proteome Discoverer right now, and the two just keep getting better (more details on this later as well!)  Working together, they add up to some awesome software on the way.
Anyway, I was recently asked if SIEVE could handle quantitative top-down data.  I never thought to check, or ask!  So I picked up some data files from the analysis of intact proteins, dumped them in, and (BOOM!!!) quantitative data from intact proteins.  So, yes, when we finally get where we're going (LC-MS/MS analysis of intact proteins), SIEVE will be ready!

Friday, March 8, 2013

Proteomics of Clostridium toxin effects!


My old pals in the Clostridium world are getting some love in the literature this month!
In JPR's papers ASAP, we have a high-tech study of the effects of Clostridium toxins on cells.  The work was done using a SILAC approach on an Orbitrap Velos system.  The paper is from Johannes Zeiser et al., out of Hannover, Germany.

Thursday, March 7, 2013

In depth profiling of the human placenta proteome


In yet another amazing paper from our friends at the Human Proteome Project, HJ Lee et al., describe the comprehensive proteomic mapping of the human placental proteome.  Maybe we overuse the word "comprehensive" sometimes.  I'm not doing that here.  These proteins were fractionated by SCX, HILIC, and OFF-GEL.  They were isobarically tagged.  They were enriched for phosphopeptides and they were enriched for glycopeptides and on and on.  The result is the most comprehensive look at the human placenta that has ever been assembled.  The breadth of this work is just fantastic!

Big Month for Label Free Quan -- 2 new reviews!



In case you're looking for a new review on label free quan, this month you are in luck.  Both this month's MCP and this month's Proteomics (Wiley) feature reviews on the topic.

The article in Proteomics is from Matzke et al., out of PNNL and can be found here.
The MCP article is from Nahnsen et al., out of the University of Tübingen and is here.

ChemCalc -- Exact mass calculator


Whenever you get tired of all the drivel that is on the internet, take a step back and look at all the awesome services that are out there.  The UCSF Protein Prospector has been around forever and is just awesome.  How many of us still use it all the time?  ChemCalc is another one.  It may have been around forever, I don't know, but I was just introduced to it today.  It is a simple and flexible tool for getting your exact monoisotopic masses and isotopic distributions.  It is brought to us by Luc Patiny of the Swiss Institute of Chemical Sciences and Engineering.  You can find it here.
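If you're curious what the exact-mass half of a tool like ChemCalc is doing under the hood, here is a minimal sketch in Python: sum the monoisotopic mass of each atom in the formula.  The function name and formula dictionary are my own illustration (not ChemCalc's API); the atomic masses are the standard monoisotopic values.

```python
# Minimal sketch of an exact monoisotopic mass calculation.
# Standard monoisotopic masses of the most abundant isotopes:
MONOISOTOPIC = {
    "C": 12.0,           # carbon-12, exact by definition
    "H": 1.0078250319,   # hydrogen-1
    "N": 14.0030740052,  # nitrogen-14
    "O": 15.9949146221,  # oxygen-16
    "S": 31.97207069,    # sulfur-32
}

def mono_mass(formula):
    """Exact monoisotopic mass for a formula given as {element: count}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

# Glucose, C6H12O6 -- should come out near 180.0634 Da
print(round(mono_mass({"C": 6, "H": 12, "O": 6}), 4))
```

The real service does more than this, of course: it parses the formula string for you and builds the full isotopic distribution by convolving each element's isotope pattern.  This sketch only covers the exact-mass piece.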

Wednesday, March 6, 2013

SweetSeqer: Glycopeptide analysis software!



More magic from the Steen and Steen lab!  In press at MCP is this paper from Serang et al., that details a new software package called SweetSeqer.  It uses a de novo-type approach to evaluate glycopeptides.  For any of you guys doing this, you know that the vast majority of the time we end up doing it manually.  In this paper they show a couple of spectra where an expert manually works through the data, compared with the results that SweetSeqer comes up with, and it looks pretty good.
You can download the software at the Steen and Steen website here.
In order to install it, you'll need a couple of things.  The first is a Python interpreter, and the second is the pyteomics toolbox.  Don't worry, they are both free and available here.

Tuesday, March 5, 2013

Spec&SepNow: a pretty App you can't log into


I got a notice from the Apple Genius thing that said spectroscopy/separationsNow.com had a new app.  This would work similarly to Sparkplug, providing me with abstracts that meet any of the criteria I set for upcoming literature from Wiley.  The icon is bright and colorful and suggests that this was really well made.  Then you try to log into it....
Try it, it's fun.  You get two identical bars.  The first says: enter your registered user name.  The second says: confirm registered user name.  Then hit the login button.  It then declares that the username or password is incorrect.  Try putting your registered password in the second box.  Doesn't work.  Try every combination until you realize that you are typing 2 really long things on a tablet, and then delete the App.
Normally I only try to write nice things, but I would expect Wiley to do a little better.
If they fix the App, feel free to let me know.  It seems like a good idea, but I'll just keep using Sparkplug.

Korean HUPO 2013


The KHUPO conference is the 28th and 29th of this month in Seoul.  The highlights for me?  Top-notch researchers like Akhilesh Pandey, Mike MacCoss, Michael Freeman, and William Hancock.  Another big one for me is Henry Rodriguez, who will be describing the ProteoGenomic Atlas, a program at the NCI that I hadn't heard about until now.  Dr. Hancock will be talking about chromosome 7, and the group in Seoul that is working on chromosome 11 will detail their progress, as both are working toward the comprehensive human proteome project I touched on a couple of weeks ago.  I'll also be there, talking about quantification technologies.  For more information, check out the KHUPO site here.

Monday, March 4, 2013

Xcalibur 64-bit


The image above isn't from the Xcalibur I mean, it's just a little cooler looking.  It appears to be from some casino gambling game.
This is just a quick blurb in case you didn't know that a 64-bit version of Xcalibur is available.  In December I spent a lot of time distributing the file to people, but I worked with a group just last week that still didn't have it.  It is out there.  It is available.  And I have yet to see a single glitch (a big improvement, particularly compared to running a virtual environment on your 64-bit PC just to run the 32-bit version.)