Transcriptomics is still booming -- there is so much awesome data being generated from all those cool instruments (I recently heard one of the newest ones can generate 3000GB of transcriptomic data per sample!)
If you've been even casually browsing the biology literature, you've undoubtedly seen the reclassification of some "non-coding" genes as "coding" via these technologies. And there have definitely been several that have been validated at the protein level.
However -- there have definitely been some that have been reported as "coding" via transcriptomics that never show up at the protein level. Is it because mass spec based protein technologies just can't detect them? This new open paper at JPR takes an in-depth look at some of these disagreements and concludes ---
I feel bad for starting this post this way. It almost says -- here is the controversy and here is a response from some really smart Belgian people -- but there are other reasons why you should check out this paper.
1) You can see what happens when all the cool free CompOmics tools are put into action (SearchGUI and PeptideShaker)
2) You can see what power we still have with existing tools to ask intelligent questions of the awesome proteomics repositories and answer today's most pressing fundamental questions!
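To make point 2 a little more concrete, here is a minimal sketch of the kind of sanity check you can run yourself before blaming the proteomics side: an in-silico tryptic digest of a translated ORF, to see which peptides a search engine could even match against spectra in a repository. The ORF sequence and the length cutoffs below are hypothetical illustrations, not from the paper.

```python
# Sketch (my own illustration, not the paper's pipeline): in-silico tryptic
# digest of a hypothetical ORF translated from a transcript that RNA-seq
# called "coding", to see which peptides are realistically detectable.
import re

def tryptic_peptides(protein, min_len=7, max_len=30):
    """Cleave after K or R unless followed by P (the standard trypsin rule),
    then keep only peptides in a typical detectable-length window."""
    pieces = re.split(r'(?<=[KR])(?!P)', protein)
    return [p for p in pieces if min_len <= len(p) <= max_len]

# Hypothetical translated ORF -- a made-up sequence for illustration only.
novel_orf = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVK"

peptides = tryptic_peptides(novel_orf)
# Only these peptides could plausibly be matched by a database search;
# if a digest yields few or none, the lack of protein-level evidence is
# expected and tells you nothing about the transcript being wrong.
print(peptides)
```

If an ORF digests into almost nothing in the observable window, "we didn't see it by mass spec" is the boring expected outcome, not a contradiction -- which is exactly the kind of intelligent question the repositories let us ask.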
Hey -- there's definitely stuff out there that is real -- but this study just introduces some caution into the mix. There is tons of info in these huge transcriptomics files and cool stuff waiting to be found -- but if something down in the noise range doesn't translate to a protein -- maybe we should....
....hold our horses before concluding it is some inherent fault on the proteomics side!