Okay, this is pretty smart! This paper is in this issue of JPR, from Miin Lin et al. at Wesleyan University in Connecticut.
There is inherently some uncertainty in the assignment of peptide/protein IDs from shotgun proteomics data. What if we had a metric for it? In a decidedly old-school and awesomely valid way of looking at it, these researchers went back to the tried-and-true molecular weight determinations from a nice old SDS-PAGE gel.
They cut the gel into slices so they knew the parent protein molecular weight range for each one, then set that information aside for the moment. They ran a slew of search algorithms at a 1% FDR and went back to see how well their peptide IDs corresponded to the expected molecular weights.
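The core check is simple enough to sketch: each gel slice covers a known molecular weight window, so you can ask whether each identified protein's theoretical mass falls inside the window of the slice it came from. Here's a minimal sketch of that idea; all function names, protein accessions, and numbers are hypothetical illustrations, not the authors' actual code or data.

```python
# Concordance check between protein IDs and gel-slice molecular weights.
# Hypothetical sketch: slice windows and identifications are made-up data.

def mw_concordance(identifications, slice_windows):
    """Fraction of protein IDs whose theoretical MW (kDa) lands inside
    the MW window of the gel slice they were identified from."""
    if not identifications:
        return 0.0
    hits = 0
    for protein, slice_id, theoretical_mw in identifications:
        low, high = slice_windows[slice_id]
        if low <= theoretical_mw <= high:
            hits += 1
    return hits / len(identifications)

# Two hypothetical gel slices (MW windows in kDa) and three IDs.
windows = {1: (50.0, 75.0), 2: (25.0, 50.0)}
ids = [
    ("ALBU_HUMAN", 1, 66.5),  # ~66.5 kDa -> inside slice 1's window
    ("ACTB_HUMAN", 2, 41.7),  # ~41.7 kDa -> inside slice 2's window
    ("TRFE_HUMAN", 2, 77.0),  # ~77 kDa   -> outside slice 2's window
]
print(mw_concordance(ids, windows))  # -> 0.666... (2 of 3 concordant)
```

An ID landing outside its slice's window isn't necessarily wrong (cleavage or PTMs can shift migration), but across thousands of IDs the concordance rate becomes a handy sanity metric.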
I think there is a possible criticism of this technique: unknown cleavage products and post-translational modifications can shift a protein's molecular weight or charge, and therefore its migration pattern. I would counter that these effects should be relatively minor, particularly for the high-abundance proteins. I'm sure cleavage and PTMs have some effect, but I don't think it detracts from how elegant this analysis is.
What were the conclusions? That using multiple algorithms gives you a better shot at matching the expected protein identity. So, if you haven't been convinced already to use as many processing algorithms as practically possible to dig through your RAW data, here is yet another data point!
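To see why multiple engines help, picture their output as overlapping sets: each engine recovers some proteins the others miss. A toy illustration (engine names and ID sets are entirely made up):

```python
# Toy illustration of combining search engine results. Different engines
# return overlapping but non-identical protein sets, so the union gives
# broader coverage while the intersection gives a high-confidence core.
# All engine names and accessions here are hypothetical.

results = {
    "EngineA": {"ALBU_HUMAN", "ACTB_HUMAN", "TRFE_HUMAN"},
    "EngineB": {"ALBU_HUMAN", "ACTB_HUMAN", "G3P_HUMAN"},
    "EngineC": {"ALBU_HUMAN", "TRFE_HUMAN", "K2C1_HUMAN"},
}

union = set.union(*results.values())             # anything any engine found
consensus = set.intersection(*results.values())  # found by every engine

print(len(union))         # -> 5 distinct proteins across all engines
print(sorted(consensus))  # -> ['ALBU_HUMAN']
```

The union buys you coverage; the consensus buys you confidence. In practice you'd still control the combined FDR rather than naively pooling, but the basic logic of the take-home message is just this.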