Article titles, I think, can go one of two directions.
Direction 1) How many words can I squeeze into this box the publishers provide in order to make sure that I describe everything I did in the most thorough way possible while ensuring that no one will ever actually look forward to reading this paper?
Direction 2) BOOM. This is what we did and we've got a name for it.
The problem with direction #2 is that you can sometimes get into the paper with the flashy title and feel like it was a marketing job. "Like....oh....yeah....of course I read it, but it wasn't that good."
Every once in a while, though, you run into Article title/type #3, where you get a title so good and catchy that it makes you want to stop your car at a closed rural highway truck weighing station so you can read the paper -- and even after you have to explain to a police officer why you parked at 4:30 AM at a closed weighing station (defund the police), and move your car out of the special space that says "emergency services only," the paper is good enough that you're still going to finish your blog post anyway.
What I'm typing about right now is the 3rd type.
What's it about? Well, they did proteomes of 100 taxonomically distinct organisms. For one study. Even if it was bacteria, that's a buttload of proteomes. And it's not all bacteria. It's all sorts of organisms.
Quick takeaways in case I have to move a hypothetical car again.
1) uPAC columns have somehow come in and vindicated what Jun Qu has been doing for upwards of 10 years now. Columns over 1 meter are all the rage! In this study, 2-meter-long uPAC columns were used.
2) This is how you set them up! (due to the extra emphasis in this paper on the importance of grounding, I'm imagining there is a story that someone somewhere knows....)
3) How would you do 100 proteomes for one study?
-Digest with preOmics robot/Bravo
-Spider Fractionate (8 fractions) (yup -- there are over 800 HF-X RAW files!)
-Run uPAC columns at 750 nL/min on a Q Exactive HF-X with DDA
-You could also mess around with MaxQuant targeted (further investigation warranted here)
-Process with MaxQuant
-Develop some new tools necessary to deal with this much data (and put them up on GitHub!)
-Put all the data up on ProteomeXchange Partners (PXD014877)
-Set up a snazzy website for interpreting all the data
-And wait till June for Nature to publish the paper that was accepted back in April
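Just to give a feel for the bookkeeping problem in that workflow: 100 organisms times 8 fractions is 800 RAW files before you even open MaxQuant, and you'd want some script to sanity-check that every organism actually has all 8 fractions on disk. Here's a minimal sketch of what that could look like -- the `Org000_F1.raw` naming scheme and the `group_raw_files` helper are my own invented examples, not anything from the paper or its GitHub tools.

```python
from collections import defaultdict
from pathlib import Path

def group_raw_files(paths):
    """Group RAW file paths by (organism, fraction).

    Assumes a hypothetical naming scheme like 'Ecoli_F3.raw',
    i.e. organism name, underscore, fraction label.
    """
    groups = defaultdict(list)
    for p in paths:
        organism, fraction = Path(p).stem.rsplit("_", 1)
        groups[(organism, fraction)].append(p)
    return groups

# 100 organisms x 8 fractions = 800 files under the assumed scheme
files = [f"Org{o:03d}_F{f}.raw" for o in range(100) for f in range(1, 9)]
groups = group_raw_files(files)

print(len(groups))  # 800 unique (organism, fraction) pairs
# Quick completeness check: every organism should have 8 fractions
per_organism = defaultdict(int)
for (organism, _fraction) in groups:
    per_organism[organism] += 1
print(all(n == 8 for n in per_organism.values()))  # True
```

Something this simple catches the "fraction 5 of organism 37 never got acquired" problem before you burn a week of MaxQuant compute on an incomplete set.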