Monday, December 30, 2019

Optimizing/reducing in-source fragmentation on 3 different Orbitrap systems!!

Talk about a useful study! If you're doing tryptic peptides, maybe this isn't all that useful, but if you are working on anything that is more fragile than that (glycopeptides? PARPylated? intact/native, metabolites...we could go on and on here) this is probably worth at least thinking about. 

On the letterbox systems (the ion transfer tubes with the great big rectangular holes) we use lower RF% to start out with. For peptides on a Q Exactive or HF system, I typically err toward an RF of 50-60%. On the Lumos or Exploris we're typically doing 40-45% for peptides.

The great Katie Southwick explained RF% to me years ago (I need an ELI5 once in a while) as the amount of pulling force in through the very front of the instrument. Bigger things probably want a higher %RF but you have to keep in mind that there are downsides to that extra force and you could break apart smaller or more fragile things.

In this study, this group takes some of the more fragile things that we all hate to work on -- lipids -- and painstakingly compares different systems with different source conditions.

The chart at the top is the one I find the clearest and most valuable out of this great study -- when I'm looking at something that is clearly fragile and I've looked at it on whatever instrument is available -- this provides some guidelines for normalizing a setting that I probably didn't pay enough attention to.

Sunday, December 29, 2019

Urinary Peptidomics Reveals Diabetic Markers?!?!

Well -- if you needed a protocol for doing urine peptidomics, all the way down to standardizing everything to the urine creatinine levels (90 µmol, if you were wondering) and wanted some WTFourier level proof that this is a good use of your time, may I present: 

1) I didn't know urine peptidomics was a thing
2) This group reduces and alkylates their endogenous peptides. I'm unclear on whether or not I think that is a good idea, but considering how this paper develops downstream, I'm just going to shut up and do exactly what they did.
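Since the creatinine normalization is the step I'd be most likely to botch, here's a minimal sketch of the volume math. The 90 µmol target is from the post; the sample concentrations and the function name are made-up illustration values, not anything from the paper:

```python
# Sketch of normalizing urine input to a fixed creatinine amount.
# Target (90 umol) is from the post; sample values are invented.

TARGET_UMOL = 90.0

def volume_for_target(creatinine_mmol_per_L: float) -> float:
    """Return the urine volume (mL) containing TARGET_UMOL of creatinine."""
    umol_per_mL = creatinine_mmol_per_L  # 1 mmol/L == 1 umol/mL
    return TARGET_UMOL / umol_per_mL

samples = {"patient_A": 4.5, "patient_B": 18.0}  # creatinine, mmol/L
for name, conc in samples.items():
    print(f"{name}: take {volume_for_target(conc):.1f} mL")  # 20.0 and 5.0 mL
```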

Discovery was all done on a Q Exactive coupled to a slEasyNano 1000 using an interesting Agilent column I'm not familiar with (post SCX fractionation? SAX? I forget now and I've got stuff to do).

Validation? Well -- they tripled the speed of the mass spec and increased the speed of their separation by over 7 orders of magnitude with an EvoSep coupled to an HF-X. (I guess the EvoSep isn't 1e7 times faster, but it sure feels like the slEasyNLC is taking the length of a human lifetime to load a single sample.)

All the data is up on ProteomeXchange and Panorama, but you should read this great paper and find the links yourself!

Saturday, December 28, 2019

PISA -- Multiplex Thermal Proteome Profiling!

Want to massively increase the speed of your drug mechanism elucidation/drug target workflow? Pack your bags for PISA!

Nope. Not that one. This one! 

Proteomics Integral Solubility Alteration! (PISA is a much better name).

What's it do? It multiplexes Thermal Proteome Profiling -- in the context of drug treatment. Here is a post that will link you to two of the previous studies (including the Nature protocol for ThPP).

The idea is that if your drug binds to some proteins it's going to change the protein's inherent 3D stuff. One readout of that will be a change in the protein's behaviour at different temperatures. In ThPP (an acronym I may have just made up so I don't confuse this with the TPP thing on my desktop) you look for how things change in your proteome at different temperatures. Check out the protocol. It's tough, and there is lots of room (in my mind) for human error to lead you to false positives.

One way in proteomics to reduce quantitative error? Multiplexing!

One way to reduce quantitative error in everything? More samples!

PISA uses both of these to end up with a TMT quantitative readout of how the proteome changes at a global level (with both 1D and 2D fractionation for TMT seamlessly integrated just as you'd expect from a TMT based experiment) with lots of replicates all multiplexed together.

Friday, December 27, 2019

The Case for Proteomics and Phosphoproteomics in the Clinic!

After a couple of days of somewhat successfully skirting any discussion of politics with my family for the holidays - with one extremely notable exception, I'm so pumped to type something that people with a similar mindset might read one day.

What about this for building some consensus?

Where are we now? What are the challenges ahead? What do we need to do next? Yo, I'll let them tell you what....

This review has study after study that has shown the promise of proteomics to impact patient health. Now -- you can probably guess where the big technological need is in the personalized space from the picture at the top. HLA peptides still suuuuuuuck. Blech. Yes. We need help on that side, but from many of the other areas we're good to go. We just need a shot. And the paragraph above says ...more chances to prove that we know how to do this stuff.

I highlighted my favorite words: because you know what the medical community is good at? Openness to shifts. That's me being sarcastic, if you can't tell.

I love the angle on the phosphostuff here, because you sure don't hear these cancer people in the clinic talking about protein abundance all that much -- they're all rambling about the "phospho status" of this protein or that one, and doing Westerns and ELISAs to check them. Which, yo, it's almost 2020. Western blots are fucking stupid. I'm not the first person to say that, but if you need someone to reference that statement to, I'm cool with you quoting me. Here's some semi-coherent reasons why. I'm pretty sure ELISAs are stupid as well, but I'm not sure I've ever actually done one, so I'm not sure I feel qualified to make such a strong statement.

A big thing that we're kind of missing in our realm might be the incorporation of -omics data with clinical data. We're not exactly running away with loads of stuff that can help us make these connections, but -- realistically -- we can steal that stuff from the GWAS people!  (There are good examples, of course, but they aren't integrated into a lot of the more common software programs.)

This is a beautiful, optimistic, and valuable review and -- I'm a few months late on posting it (11 months) but it is definitely worth a read!

Thursday, December 26, 2019

PeakOnly -- Deep learning Python code for finding your MS1 features!

There are a lot of ways to find compound peaks in your data, but some compounds/peptides (particularly modified ones) just have lousy elution profiles. Sometimes you just have to go in and look yourself. Isn't that what all this AI/Machine Learning/Deep learning baloney is supposed to be doing for us? Automating tasks that are a little bit harder?

Maybe this is gonna help!?!?

PeakOnly uses Deep Learning to classify peaks. It is meant for metabolomics and was optimized on 1-3 Hz MS1 data, but I'm still putting it here because there is a very short list of things that will make lists of your MS1 peaks and their abundances (quantifying the stuff you didn't identify) and I'll probably need this sooner rather than later.

You can get PeakOnly at this Github. It doesn't blow the traditional peak detection stuff away or anything, but it does identify some compounds here and there that XCMS misses. It makes for a solid proof of concept study with open (with MIT license?) code that deserves a look. I mean...I'm not having trouble with the high abundance ones with the perfectly gaussian distributions...I need help identifying the lousy ones....
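For a feel of the task the deep network is automating, here's a toy stand-in: decide whether an MS1 extracted-ion-chromatogram window holds a real peak. The crude local-max-plus-SNR rule below is entirely my own invention for illustration -- PeakOnly uses a trained CNN, not this heuristic:

```python
import numpy as np

# Toy stand-in for the peak/not-peak decision PeakOnly's network makes
# on MS1 EIC windows. This heuristic is NOT the PeakOnly model.

def looks_like_peak(eic: np.ndarray, snr: float = 3.0) -> bool:
    baseline = np.median(eic)
    noise = np.median(np.abs(eic - baseline)) + 1e-9  # robust noise estimate
    apex = int(eic.argmax())
    inside = 0 < apex < len(eic) - 1                  # apex not on an edge
    return inside and (eic[apex] - baseline) / noise > snr

x = np.arange(30)
gaussian = 1e5 * np.exp(-0.5 * ((x - 15) / 3.0) ** 2) + 50  # clean peak
flat = np.full(30, 50.0)                                    # just baseline
print(looks_like_peak(gaussian), looks_like_peak(flat))     # True False
```

The real value of the deep learning approach is exactly the cases where a rule like this falls apart -- the lousy, non-gaussian elution profiles.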

Wednesday, December 25, 2019

Your Holiday Gift from MCP is a special issue on ProteoGenomics!

Okay -- it says it was online in August, but this is the first time I've seen it and MCP's Twitter just showed this crazy cool cover from

You can check out the full special edition here.

I've rambled about ProteoFormer on here before. (RiboSeQ + Proteomics data analysis)

MetaQuantome deserves a revisit later. I'm writing this sentence for myself, mostly. It looks like a more sensitive way of breaking down complex microbial communities with metaproteomics.

I can't pretend I've done more than flip through these (I'm either luckily flipping through or most/all are open access?) but I landed on a serious distraction on my way through.

WTFragmentation is this!?!??

A new study from Zhang Lab interrogates data from both TCGA (the Cancer Genome Atlas) and CPTAC by both downloading the data from it and by doing some analysis directly on the site.

Not sure how I missed it or forgot about it, but it deserves some clicking around to see what it can do!  You can check out LinkedOmics here.

Tuesday, December 24, 2019

R2-P2 -- The robot method that'll make large scale phosphoproteomics realistic!

Try forgetting the name of this great new method (you're welcome! this is the kind of stuff I contribute to society)!

And if it brings some awareness to a fantastic new study that demonstrates a way to fix several of the fundamental problems in my field, then it's worth it to me if a few more people think I'm an idiot than did yesterday.

I hate phosphoproteomics and I bet you do too. Sure, maybe it pays the bills, but it's a terrible situation for everyone. The phosphopeptides themselves are finicky. You can't mess around. If you even lyse the cells just a little bit slower than last time it seems like things have changed. And I don't care what kit you're looking at. You'll see 3 barely visible beads in the lid of the tubes you bought and you'll think, just for a second, about maybe calling out sick the rest of your career. And even when you (or someone way better at sample prep than you, in my case) does everything perfectly, the replicates are often still sad. There are too many variables -- there is too much sample handling. There are too many steps. There are too many places where a pH off by 0.1 will ruin the number of those awful looking fragmentation spectra that you're going to get in the end.

Would someone please just find an affordable robot and do the most incredibly boring study in the history of the galaxy and just work out every miserable variable and just provide a realistic way to do large scale phosphoproteomics prep???  That's all I'm asking for.  Oh -- and please make it open access. I'm off campus....

How does R2-P2 do?

Cheap Robot? BOOM!

KingFisher!! (Not OpenTrons cheap, but affordable as robots go. And, btw, if you didn't know, it is marketed by multiple companies under different names with different labels on it. Shop around -- you can get them for 1/2 or 1/3 the price depending on whether it says RioBad or FermoTisher or something else (I forget the third one). Not making that up.)

All the boring details worked out?  BOOM!

This group did everything. I respect and pity them for the amount of work this study must have been (I guess it got easier once the robot was functional).

Compatible with large scale?


Maybe yeast phosphoproteomics isn't the most complex system, but they do a bunch of replicates and perturb some central MAPK systems and the data looks awesome. It seems very realistic that you could scale this up to larger systems easily.

And the paper is open access!

Look -- this isn't the first automated phosphoproteomics prep system, and it won't be the last, but this checks all the right boxes. 100% recommended paper.

Your fragmentation spectra will still look like garbage, but that's physics and chemistry and stuff -- what we need is a reproducible way of getting to the same garbage spectra every time.

Monday, December 23, 2019

Synapse proteomics (and phosphoproteomics) and sleep cycle!

...and the winner for some of the prettiest and most useful plots I've seen this year goes to two recent (new?) studies in Science about proteomics of mouse synapses and sleep.

(One and two)

I stared at the very top for several minutes in the afternoon, when my brain is generally working at its typical extremely limited capacity, before I realized it's changing signals over clocks!

Here is my summary:

1) Sleep is super important at a proteome level -- and particularly at a phosphoproteome level (there are really cool analyses of neuropeptide signaling events here as well). Scary for people with intense insomnia? Maybe?

2) This may be completely untied from the transcriptomic regulation which is totally circadian-ish. (This is kinda big, right?) I'm always ranting about how low the correlation is between transcript abundance and protein abundance and here is a CRITICAL system where the two appear completely untied from a regulatory standpoint!! Killer paper

3) When you're starting out with a tiny amount of material (I'm doing synapse work right now and, we'll of course, model it after this paper) and you have to do phosphoproteomics on it -- that's when you call in Cox Lab for help.

4) One thing that stands out a little here from how we go about it is that we start with distinct brain subsections that are surgically removed, and then the synapses are enriched from, for example, the hippocampus first and then the pre-frontal cortex, because from the RNA-Seq stuff we're very concerned about muddying signals. Here they appear to have started with a big section of the brain and concentrated the synapses from it all at once. Considering our maximum extraction from the hippocampus synapses is like ~1ug...maybe...this is likely critical to get phosphopeptides.

5) We're completely stealing the layout of this paper for our synapse studies. 100% just copying how this is laid out and putting our data in it. Worked for them. Two (2!!) science papers?  Totally deserved because this work is killer.

6) Many people will still read this and then try to do transcriptomics of the brain, look up what proteins those transcripts make and will call it "quantitative proteomics" because...well....

Sunday, December 22, 2019

Why hundreds of thousands (millions) of unidentified MS signals matter!

The table above is a great breakdown of what I'm going to call "nextgen proteomics search engines" going forward.

SeQuest (and, yes, reviewer number 3, I'm going to continue to capitalize it this way for the rest of time -- typing it in all caps makes it seem like I'm screaming about how much I miss the 90s {which... admittedly...I do sometimes...}) is just missing too many things. I don't first pass anything with it anymore. I first pass with MetaMorpheus, find my PTMs, and then MSAmanda them (because I like the PD output and MS2Go). If it's a complicated PTM then I just MetaMorpheus and stop. Cause there are modifications that MM will find that are 100% real that nothing else will find. One of the coolest things is when a PTM changes the charge state of the peptide. Like, you add the mod and you lose a proton. And it adjusts for that mass/charge shift.

I'm lumping SeQuest with Sanger Sequencing going forward. As in, whatever was before "next gen".

And this cool new review provides loads of extra reasons for why!

Why is it Shot-Gun? I dunno. I do like that this is a great review of open searching algorithms and why they are so very critical for us as a field, and for biologists who are thinking about proteomics stuff.

Sunday, December 15, 2019

Exploris mass accuracy at highest speeds? Or is the data being read funny?

I'm skeptical of these results, but I still think they're worth mentioning.

This is an Exploris data dependent experiment. 80ng of commercial HeLa digest on a 60 minute gradient with different resolutions. 120k MS1 and two separate runs (n = 1, look ma' I'm a scientist...)

The 18ppm MS/MS fragment tolerance lines up with what I've seen with various tools for the HF and HF-X. The higher speed.....that's a bit larger of a difference.

The weird part is the fact that in the third run (and I just checked to make sure I wasn't being stupid) I used the internal calibrant (the fluoranthene radical?) for the MS1 and MS/MS.

Obviously, n=1, but that's pretty weird, right?

This output is from MetaMorpheus, but a lot of software does that calibration step these days and I'll check those -- I do have to wonder if there is a scan header thing?

In a couple of tools I've had some weird glitches with Exploris data -- which was fixed by updating my MSFileReader on various computers. I don't have any conclusions here. but I do think it's something worth keeping an eye on!

Also -- please keep in mind that post-acquisition calibration is basing an MS/MS fragment on theoretical values. At lower resolution we're seeing more coalescing peaks (they've overlapped and we can't tell them apart), so the error is stuff like that (and mis-assignments) on top of any drops in mass accuracy.

Remember how the scan headers got swapped around 2012 and old MSFileReaders would sometimes put the pre-MIPs mass in as the monoisotopic mass? That was fun! Not saying that's what is going on, but it has certainly happened before.

Saturday, December 14, 2019

XMAN v2.0 -- Even more human cancer mutations -- now on Github!

How did I miss this?!? 

We can ALL do proteogenomics if someone is nice enough to open the door by taking all the "normal variation" and deleterious mutations and putting them into nice protein .FASTA databases for us. Then we just do what we'd normally do, maybe mumble something semi-coherent about FDR in the method section of our paper, and we're done.
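If you've never seen what "putting mutations into a FASTA" amounts to, here's a minimal sketch. The accession, sequence, and mutation list are invented toy values for illustration, not real XMAN records:

```python
# Sketch of what a resource like XMAN provides: point mutations
# expanded into extra FASTA entries a search engine can match.
# Accession, sequence, and mutations below are invented toy values.

canonical = {"PROT1": "MKTAYIAKQR"}
mutations = [("PROT1", 5, "Y", "C"), ("PROT1", 9, "Q", "H")]  # 1-based

def mutate(seq: str, pos: int, ref: str, alt: str) -> str:
    assert seq[pos - 1] == ref, "reference residue mismatch"
    return seq[: pos - 1] + alt + seq[pos:]

with open("mutant_db.fasta", "w") as fasta:
    for acc, seq in canonical.items():            # wild-type entries
        fasta.write(f">{acc}\n{seq}\n")
    for acc, pos, ref, alt in mutations:          # one entry per variant
        variant = mutate(canonical[acc], pos, ref, alt)
        fasta.write(f">{acc}_{ref}{pos}{alt}\n{variant}\n")
```

Scale that to a few hundred thousand mutations and you see both the power and the search-space/FDR problem.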

If you're in the cancer realm, we've had a great tool like this for years now. The XMAN database is a composite of hundreds of thousands of mutations that we can use for searches if we've got enough processing power, or we can reduce them and utilize them.

After using it nonstop for years it's honestly pretty depressing to work with other diseases that don't have a resource like this.

How do you improve on something this useful? You could start by adding a few million more mutated sequences.


Oh, and you could make the resource a little easier to get to, I guess! And they did that too. You can get it all on Github here!

Friday, December 13, 2019

ProteoMiner utilized to find missing proteins!

"Missing proteins" are the ones that the genetics stuff says are there that we can't find proteomics evidence for.

Both the really hydrophilic and the really hydrophobic aren't a lot of fun to work with, particularly if you're using things like trap columns (or cough cough PepMap). Good-bye hydrophilic peptides!  Unfortunately, those missing proteins tend to fall into one of those two groups. This great new study at JPR utilizes a commercial enrichment/depletion method from the past called "ProteoMiner" to get to them.

ProteoMiner has been around for a loooooooong time. And it had some solid applications in depleting proteins prior to doing 2D-gels. Its usefulness in more modern global proteomics has been a little less clear and you don't hear about it much anymore. The idea is that you make crazy big peptide ligand libraries and then use that as a depletion column. You know the top 10 depletion columns we use for plasma/serum? Imagine that on a much larger scale. Deplete 100 things?

This group uses it to go after the super hydrophobic membrane stuff. They start with a membrane prep, then they ProteoMiner it -- then they use a combination of crazy HPLC separation methods and -- voila -- they end up with solid evidence for a bunch of proteins that should be there!

Are you looking for crazy membrane proteins? Maybe this is a cool new/old technique to check out!

Thursday, December 5, 2019

Over 1,000 SINGLE CELL PROTEOMES! 2,700 Proteins. 10 days of instrument time!

 You know about SCoPE-MS. Even if you don't pop in on my rambling here, if you've been to a conference this year there have been a few people here and there (and more than a few vendors) who have shown that they can replicate the technique. 

Time to start applying it! And pushing the boundaries and this new preprint shows what we can do in terms of throughput.

By applying enough improvements to the technique that we need to start calling this ScoPE2 -- these authors do single cell proteomes of over 1,000 (one thousand!) cells in 10 freaking days.

How on earth do you do that? In part with TMTPro. 16 cells (minus control and blank) and 95 minute runs -- and the math is legit (part of the study is done with TMT 11-plex, the second half with TMTPro/16-plex).

10 days = 240 hours = 14,400 minutes; 14,400 / 95 min per run = 151 injections x 13 or 14 cells (after controls) = Math checks out!
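The arithmetic above as a sanity-check script (the 14 usable channels per run is my reading of the post, not an exact number from the preprint):

```python
# Back-of-the-envelope check of the ScoPE2 throughput claim.
days, run_min = 10, 95
cells_per_run = 14                  # 16-plex minus control and blank
total_min = days * 24 * 60          # 14,400 minutes
runs = total_min // run_min         # 151 injections fit in 10 days
cells = runs * cells_per_run        # 2,114 -- comfortably over 1,000
print(runs, cells)
```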

I'm downloading the data and -- yo -- the vendor talks this year sure have made it seem like the Eclipse is the only way to go for doing single cell. And there are clear advantages there: sensitivity, speed, real time search complementing SPS MS3. And I just identified a candidate lab with an Eclipse on the way where I might show up on Xmas with some Canadian whiskey and see how long the director will let me hang around (whaaaaaaaaatup, T?) -- but the RAW files from this new study? I can fully open them in RAWMeat and see everything except MS1 fill -- which means -- it's Q Exactive!

I was sure the scan header was wrong. There is no way the number of peptides they're reporting (and I'm verifying) is Q Exactive Classic, right? No way.

How'd they do it?

Narrowed the quad isolation (0.7 Da). I know what you might be thinking: didn't the Gygi lab do that a long time ago to limit isolation interference? Totally. Didn't it end up not working as well as SPS MS3? Well...yes and no. SPS MS3, particularly with RT-search, is the best. However, if you look at how the Lumos picks SPS MS3 -- it still drops the quad isolation way, way down. Because it's dumb not to.

And -- yes, the problem with the original Q Exactive and Fusion 1 system is the last generation quadrupoles that are in them. They don't isolate symmetrically (if you say give me a 0.7 Da isolation, the ions on the upper and lower sides, the ones 0.30-ish Da/Th above and below target center, are artificially suppressed, because it's a lot better at isolating right at target center than at the sides) which can be a bummer -- but right in the middle they isolate great! And that's all that is wanted here. They're trying to just isolate the most intense peak! Could this actually be an advantage? I'm probably just typing too fast....there's no way an asymmetrical isolation could actually improve TMT quan on narrow isolation data....

They focus on optimizing the peak picking time and fill time and -- this data is awesome.

You should read it and download the data and check it out, but -- I'm going to keep rambling (file pulled at random)

This is the RAW Meat TopN plot (how many MS/MS events are actually chosen) and it's revealing. It looks to me like the majority of the time there are only 3? ions selected for fragmentation after each MS1 scan. The majority might be 1.

And fill time for those MS/MS (top panel, TIC bottom)

Yeah....with the narrow isolation and the fact these are SINGLE CELLS(!!!) out of the 6,000 or so MS/MS scans in this experiment it looks like about 5,800 required all 300ms of injection time they were allowed (red line at top).

I'm going on about this data as if it's something crazy awesome, right? But how did they actually do?

2,700 proteins. I'm not making this up.

AND -- they compare this to the 10x genomics workflow for single cell? And it kills it. It absolutely crushes it. Yes I'm biased. Clearly. But multiple peptides per protein per cell. It adds up. Sure -- the genomics stuff is still useful and we've got a ways to go. I'd love to do this correctly myself once before I go around kicking over sequencers or whatever they're called, but -- just wow.

One more thing. Basically every week the NCBI has a "CodeAthon" you can check it out on Github here. They get a bunch of informatics nerds to submit proposals around a loose central theme and they pick a project, get all the nerds together and they code away until they have something. So far they've been shockingly successful as several of the individual events have resulted in accepted papers.

You know who has been ignored? Proteomics. There is one in January that is on Single Cell stuff. I submitted a proposal last month that it should be on integrating single cell DNA/Protein. My biggest concern? Where would I get the dataset? BOOM! I have all the data I need.

If you think this would be a cool idea, shoot in a proposal! This is cool enough that I'd take a train from the warm south to frigid NYC in January if we get to do some proteomics coding. I'll get Canadian Whiskey!

Wednesday, December 4, 2019

Revisiting BoxCar!

Now that a large number of little Orbitraps that essentially have a "BoxCar button" (it's a premade workflow) appear to be showing up all over, I think it's worth talking about it again.

What's BoxCar? 

It's a method to improve Match Between Runs (MBR) using MS1 library matching. It improves the S/N of your MS1, but it does nothing to improve your MS/MS scans (they're mostly just there for chromatographic alignment stuff for MaxQuant)

Will BoxCar help my results? 

Yes, but only if you are using MBR and are comfortable reporting IDs using MS1 library matches

How do I process BoxCar data? 

Your dd-MS2 events will trigger normally and those peptides will be ID'ed. However -- if you aren't using MaxQuant and MBR you will get substantially fewer identifications. If you don't have an awesome library and MaxQuant and MBR -- expect maybe 1/2 to 1/4 the number of identifications per file. With MaxQuant and MBR? On 2ug injections you can get numbers just as good as what is reported in the original paper. No joke. I've totally done it. 7-10k proteins in a single shot.

What happened to BoxFahrt? 

Oh...that...I'm dumb and forgot about making mutated versions of BoxCar.


Disclaimer -- you have to pick your application for this to be useful at all -- but there are applications where I swear this totally works (more results and applications coming soon!)  Conor and I still write like Appalachian 4th graders, but since this workflow is SUPER EASY to set up on the Exploris we thought it would be helpful to the world to push this one up the queue and out the door.

The concept is super easy, right? BoxCar does multiple sweet BoxCar scans, but that doesn't help your MS/MS, cause you pick your MS/MS from the included MS1 scans. Your cycle time suuuuuucks.  In the original BoxCar paper there are 3 BoxCar MS1 scans and 1 regular full scan MS1. Even on an HF-X at 60k resolution you're at a 128ms transient per scan. That's over 0.5 seconds on MS1 per cycle. But the S/N on the MS1 is incredible. You can get clear isotope distributions on ions that you can't even see in the MS1s.
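To put numbers on that cycle-time complaint, using the 128 ms transient for 60k resolution quoted above:

```python
# MS1 cycle-time math from the original BoxCar method: one regular
# full scan plus three BoxCar scans, each a 128 ms transient at 60k
# resolution on an HF-X (figures from the post).
transient_ms = 128
ms1_scans_per_cycle = 1 + 3            # full scan + 3 BoxCar scans
ms1_time_ms = transient_ms * ms1_scans_per_cycle
print(ms1_time_ms / 1000)              # 0.512 s on MS1 before any MS/MS
```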

Can you get the 10,000 protein ID's in 100 minutes with MaxQuant and MBR on that data? Totally. (In my analysis of their data I get more like 8,800, but I think that's my FASTA database and that I've never attended a MaxQuant summer school). And that's amazing and I'd personally run this workflow all day. But what about the super low abundance stuff? Like when people come to you and say  "I cut the salivary glands out of this weird mosquito, please give me a proteome of the parasite that is living in it?"

The first thing I saw during my BoxCar obsession was that -- holy cow -- yes on this super low material, BoxCar's boost in S/N is amazing. I can see SO MANY PEAKS, but I don't have an MS1 library for this -- or lots of low abundance weird things.


....triggers the MS/MS off of the BoxCars....BOOM....

1) If you run it on something where you've got more than 20ng of material you're going to think I'm more dumb than you already do (this number may be a lot lower on the Exploris -- I think the vendor has been understating the sensitivity on this thing a little. The numbers I'm getting are ridiculous. So...maybe below 10ng of material? I don't know yet, but hope to find out soon!)

2) Your cycle time still suuuuucks. Unless you do this on a Fusion and acquire your MS/MS scans in the ion trap. Then it sucks less.

How do you set it up? Starting methods for Fusion/Lumos are included in the supplemental. I'll try to get one made up for the Eclipse -- I haven't been able to get my hands on one (if you have one and you'd like to see if I'm as strange in person as I appear on these pages [I sure hope not!] I would be happy to discuss a field trip! Unless you're in Boston right now. Half meter of snow? Best of luck with that, yo.)

On the Exploris -- you open the BoxCar method and then you add a ddMS2 to each msx-TSIM. That's it. Since the Exploris already has the BoxCars worked out all you have to do is add your MS/MS events. When I get time I'll work on titrating sample downwards, but in limited runs at higher injections it looks like the MS1 signal intensity and cycle time are better than the HF-X and Lumos.

Should go without saying? If you had to do a magic MS1 to see your ion you aren't going to fragment it well with 14ms of fill time. You'll need to crank that up. One mosquito's salivary glands? 300ms MaxIT on the Fusion 1, so around 100ms on the Lumos. Also -- there is a lot of junk in the noise. I'd recommend lowering your quad isolation to the lowest safe limit. 2.2Da on Fusion 1? 1.4Da/Th on Lumos? Something like that.

(Methods and RAW files, of course, are all available. I'm finally getting great at PRIDE uploads and I'll get these up for the real paper submission. I've got 2 computational proteomics workshops to teach, and once my 2019 commitments wrap up I can budget more than 30 minutes per day for writing! This took 28???)

BoxCar ramble number 713 complete.

Tuesday, December 3, 2019

Auto STOMP Shows Protein Structure in Subcellular Structures!

I'm unclear on why the word STOMP in Google images provides tons of pictures as awesome as whatever is going on above. However, if you're going to go jumping through the air with trashcan lid shields, I'm going to find a way to circulate it. I think this may contribute to the short halflife of my friends on the Fakebook thing.

What was I talking about? How about a way to look at things under a microscope and then BOOM crosslink them while you're looking at them? Cooler than trashcan lid dance fighting? Yo, I'm not even done. What if your microscope was smart enough that it went around and found the things you wanted to look at and then crosslinked them itself and then emailed you when it was done?

Science fiction?

Nope. You can read about it here!

First off -- STOMP isn't new. If you had 48 straight hours to spend behind the lenses of a microscope in a dark room and you could code patches in multiple languages -- oh -- and could synthesize your own special crosslinker -- you could totally STOMP.

Since that is stupid on 14 levels, let's forget the first thing ever existed and just call this one STOMP.

This is how it works --

What do you need for STOMP now?

A commercial crosslinker.
A commercial microscope
A commercial image processing software
Okay --- you still need a couple of patches, but it's all in the same language (Python) and it's easy enough that I could set it up. (It's all set up in a step-by-step walkthrough here).

What do you get out of it? Just a way to crosslink (AND ENRICH -- the crosslinker has biotin on it, that's what it's for, I think) what you're looking at under a microscope!!!

"That looks cool! What is it?" BOOM! Pull it down, digest it and figure it out.

That's neat, but what could you actually use it for? These authors use it for HOST-PATHOGEN INTERACTIONS!!  Yo, where my malaria people at?!?!  How awesome would it be to look at the interface between Plasmodium and the red blood cell wall? What's the PFEMP1 really doing? The authors point out you could obviously use it for more than host pathogen stuff, but if I was gonna write a grant on this stuff....maybe I should delete this and start writing a grant instead....

Monday, December 2, 2019

Awesome review of Python in Proteomics!

Google is very nice about telling me when I log in about whether people care about what I've posted. It says some really interesting stuff about my audience. One thing that is very clear is that y'all don't care for R and Python and Linux. However, my increasing sense of horror about being force fed the habanero and turd sandwich that is Windows 10 is going to keep showing up here.

As such, I'm going to need to keep checking out new tools (good-bye Visual Studio...paid way too much for you over the years...) -- hello free Python IDEs that are all about 90% functional!

I love this new unreviewed Python review, and I might just be leaving it here so I can find it. I only have the direct PDF download, and this is the link to it.

As an aside, I've been tricking myself into getting more comfortable with Python by making it a game. I've got a pretty much 100% functional Tetris that insults the player and a side scroller that I'm even making all the ultra-professional level art for. Confused about what an Anaconda is and how it affects that Numpy thing you heard someone ramble about (arrays!)? Make it something fun and 16 hours later you might have it sorted out!  There are LOADs of free programs online to teach you this stuff. I'm just going through Tech With Tim's channel, which starts with "installing Python" and by video 12 is like "using Tensorflow to build an AI" -- some of these things are cooler than others.....
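If the "(arrays!)" aside went by too fast: the whole point of NumPy is doing math on an entire array at once instead of writing a loop. A tiny example with made-up m/z values:

```python
import numpy as np

# "Arrays!" in one bite: one operation applies to every element at once.
mz = np.array([500.25, 750.40, 1000.55])  # made-up m/z values
protonated = mz + 1.00728                 # add a proton mass to all three
print(protonated)                         # ~501.26, 751.41, 1001.56
```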

Sunday, December 1, 2019

Finally! An Exploris 480 paper!

What took so long? Geez. It's like this thing just came out!

The results are....impressive....and....ummm.....

....ummmm......well. I still love the HF-X and the data I've got from the two I was lucky enough to get to use is still some of the most dense and awesome I've ever searched through.

Somehow in the haze of ASMS and all the cool stuff I missed something SUPER CRITICAL. Phase constraint is activated -- in a limited sense -- on the Exploris!!  TMT11-plex can be resolved -- just like in the paper above -- in 32 milliseconds!! WTFourier?

I'll be reading this cover to cover repeatedly cause this got powered up while I was in Okinawa --

(Finally an Orbitrap small enough that you can hug in a photo! Who knew we needed that?)

...and in the one day I was around to mess with it I found a lot of stuff to try -- like NATIVE BOXCAR and BOXCAR DIA!!  Guess what can set variable TSIM windows?