Wednesday, October 7, 2015

Assessment of longitudinal interlab variability


This one is pretty cool.

What happens when 64 different labs submit BSA samples that they run every month for 9 months and people sit down and assess the data?  Sounds like an ABRF study to me!

As we've come to expect, intralab variability (the same lab over the 9 months) was smaller than interlab variability (from one place to another...I get them mixed up). That makes sense.  My LC, my mass spec -- I'm going to keep it pretty consistent for 9 months, compared to the way I run it versus those wackos over at Whats-It-Called University.
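To make the intralab-vs-interlab distinction concrete, here's a minimal sketch comparing the two as coefficients of variation (CV). The numbers are made up for illustration -- they are not data from the ABRF study -- but they show the typical pattern: each lab is tight month-to-month, while pooling labs together picks up the between-lab offsets.

```python
import statistics

# Hypothetical monthly measurements (arbitrary intensity units) for the
# same BSA digest, three labs x nine months. Illustrative only -- not
# data from the ABRF study.
labs = {
    "lab_A": [100, 102, 98, 101, 99, 103, 100, 97, 102],
    "lab_B": [110, 112, 109, 111, 113, 108, 110, 112, 109],
    "lab_C": [90, 92, 89, 91, 88, 93, 90, 91, 92],
}

def cv(values):
    """Coefficient of variation, as a percentage."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Intralab: spread of one lab's measurements across its nine months.
intralab_cvs = {lab: cv(vals) for lab, vals in labs.items()}

# Interlab: spread when all labs' measurements are pooled together.
pooled = [v for vals in labs.values() for v in vals]
interlab_cv = cv(pooled)

print(intralab_cvs)  # each lab's CV stays small
print(interlab_cv)   # pooled CV is larger, driven by between-lab offsets
```

With data shaped like this, each lab's own CV lands around 2% while the pooled CV is several times larger -- the same qualitative result the study reports.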

Variability among all the samples really doesn't look all that bad. It is, however, a single protein digest -- so we'd kind of expect that. Sampling 100,000 peptides from a normal mammalian line might be a more sensitive indicator, but I still think this is a promising measurement.  As a field, we're getting better all the time!

Interestingly, the real outliers seem to show up right after LC-MS preventative maintenance (PM). And this makes sense, too. If you've had your LC open and changed some thingies in it recently, then peak widths and retention times might have shifted a bit between pre- and post-maintenance runs.  Sure does emphasize the importance of frequently running and recording quality control standards, particularly after maintenance.

Oh yeah, and this paper is currently open access in early release format at MCP here.
