Sunday, June 30, 2019

XNet: A Bayesian Approach to XIC clustering

Sooooooo....umm...I'm relatively sure I just learned a few things from this new paper ASAP at JPR, and I think you should read it, particularly if you want to learn/unlearn/overlearn what you thought you understood about how XIC (eXtracted Ion Chromatogram) based clustering works -- a step most commonly employed as part of label-free quantification workflows in data-dependent LC-MS/MS.

Now. I'm on the fence here, because I don't understand this very well, but I'm really excited about this study and how they did it. Stop typing? Type faster? Screenshots? ....Screenshots!

Okay -- so WTFTICR is any of these here words?

Either I was in the sun too long or all of these are new programs to me. Presumably they exist behind the scenes as steps in processing pipelines I know about? I don't know. This paper is worth reading just to look up new software! But that's not all it's got. It's about a new way to do XIC clustering based on Bayesian thingamathings, which of course are:
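For anyone (me) fuzzy on the starting material here: an XIC is just intensity versus retention time for every centroid near a target m/z, and traces like that are what tools such as XNet then group/cluster. A toy sketch of only the extraction step -- plain Python, nothing to do with XNet's actual Bayesian model, and the function name and tolerance are invented for illustration:

```python
# Minimal sketch of extracting an XIC: collect (RT, intensity) pairs for all
# centroids within a ppm tolerance of a target m/z. This is the raw trace
# that clustering tools like XNet operate on -- NOT their Bayesian model.

def extract_xic(centroids, target_mz, tol_ppm=10.0):
    """centroids: iterable of (mz, rt, intensity) tuples."""
    tol = target_mz * tol_ppm / 1e6
    trace = [(rt, inten) for mz, rt, inten in centroids
             if abs(mz - target_mz) <= tol]
    return sorted(trace)  # sorted by retention time

# toy data: two centroids fall within 10 ppm of 500.25; one does not
points = [(500.2501, 12.1, 1e5), (500.2503, 12.2, 2e5), (501.10, 12.1, 5e4)]
print(extract_xic(points, 500.25))  # -> [(12.1, 100000.0), (12.2, 200000.0)]
```
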

...I totally knew that....

You can get XNet at this GitHub, and it's distributed under an Apache License, which is something I'd seen written before and...I give up. I had no idea what that was either, but it is an open source agreement that you can read about here.

And my favorite part about this paper might be how brazenly honest and just plain good the whole thing seems. My interpretation is this:

1) Bayesian network things might be a smart way to do XIC clustering quan

2) This is what this is and how we set it up.

3) Here is the potential it might have

4) Here we stacked it up against stuff that you already know, like MaxQuant and OpenMS

5) Sure -- we don't actually win this comparison, but here is all our code and you can use it as long as you get this cool license that says you guarantee it stays open source forever.

The only thing that could possibly improve this intimidatingly smart and positive example of how science should work might be ending it like this.

Thursday, June 27, 2019

Is this finally it?!? Isotope analysis on an ORBITRAP!

Okay -- no time to read this -- I've really got to run to meetings -- but -- is this FINALLY it? Is this finally moving isotope analysis from instruments that only have maximum mass ranges of like 5 Th to instruments that can do other things?

I don't know....but it kind of looks like it is.... For real, if you aren't familiar, you should take a look at isotope analysis and how it hasn't changed at all since the 60s....and compare it to this!

Big shoutout to Dr. Kermit Murray who does a better job of keeping track of inorganic mass spectrometry advances than I do.

Holy cow....C&EN is already running with a press release of a JASMS study -- I think this is it!!  You can check it out here. No, this isn't proteomics, but this is potentially a light year jump for our long-suffering friends in the inorganic MS world!

Wednesday, June 26, 2019

TMT Labeling for the Masses! A thought experiment for ultra-efficient, low cost labeling!

Disclaimer: You should definitely, 100%, completely follow every step of your vendor-provided protocols. You probably spent close to a million $$ on that instrument -- what are your total reagent costs? The "next gen" people drop close to $1,000 per sample no matter what on their global assays....

A while back I noted some other (also unrecommended) thought experiments from some people at Harvard, who were using less reagent than the ratios you're supposed to use (I can't find the link...) -- I think they were using 1:2 ratios and still typically getting 95%-ish labeling efficiency.

In this (don't do it, for real) thought experiment, these theorists find they can obtain over 99% efficiency with a 1:1 ratio of reagent after some tweaking. 

You certainly shouldn't do this. 
And you certainly shouldn't take these authors' word that this works. You can check their work -- it's at ProteomeXchange here. Definitely download the .csv file, because the file naming isn't quite intuitive. (And keep in mind that about half the optimization was done with the TMT-0 reagent, which, if you haven't used it before, may not be a default in your software. This is where you find/add it in Proteome Discoverer -- depending on your version, you may need to Apply the modification and then close your open workflows before you can see it.)

(TMT-0 is +224, TMT-duplex is +225, TMT6-11plex is +229, for reference in any other software program)

I'm definitely not going to do this myself, obviously, I'm going to follow my vendor recommended protocols, just as I will urge you to do, but -- 

-- a quick filter says that out of 5,234 labeled PSMs in this method, I'm getting 3 unlabeled PSMs from the authors' data....that's >99.94% labeling. This was me just processing one of the short-gradient TMT-0 optimization experiments, so grain of salt here on the larger complex sets, but wow....not a bad start. You still shouldn't do it, but it's an interesting thing to think about.
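The arithmetic behind that percentage, for anyone who wants to sanity check it (nothing fancy, just labeled over total):

```python
# Back-of-the-envelope check on the labeling efficiency quoted above:
# efficiency = labeled PSMs / (labeled + unlabeled PSMs).

def labeling_efficiency(labeled, unlabeled):
    return 100.0 * labeled / (labeled + unlabeled)

eff = labeling_efficiency(5234, 3)  # 5,234 labeled, 3 unlabeled PSMs
print(round(eff, 3))                # -> 99.943
```
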

Tuesday, June 25, 2019

NanoBlow -- create a gas curtain barrier and keep your instrument cleaner longer!

My library hasn't indexed this (just accepted), but I think I can guess the idea. We've got gas lines just hanging out when we use nanospray -- chances are you just have them blocked off, but you could use them!

In this case, these authors turn on the gas when the troublesome stuff is eluting. I pulled a RAW file from ProteomeXchange/PRIDE here and it looks quite convincing!

This shouldn't be confused with the ABird, which is constantly removing background ions. Heck, the two might work really well in unison.....

Dr. Dave Sarracino used to do something that might be similar to NanoBlow, but I can't find any record of it. Correction, found it!  I apologize to everyone if this isn't the same kind of thing, I'll correct things when I can read the paper!

Monday, June 24, 2019

LIMS (laboratory information management systems) for mass spectrometry labs!

At ABRF this year I talked a lot about SOPs and LIMS and why I believe they are critical to our future development as a field.

LIMS are maybe starting to take off now, in some form or another, after a couple of false starts? I think proteomics changed way, way too fast -- 2D-gels were still cutting edge not all that long ago and are mostly a niche technology now. A lot of coding went into technologies that aren't our central pipelines anymore.

What's a LIMS? Well...the word means a lot of different things to a lot of different people, but this is how the old Johns Hopkins Clinical LIMS worked when I was there a million years ago.

1) Samples come in and are barcoded
2) Samples only move to a new place when a tracked person scans them and moves them to a new location (sample collected; sample arrives at new location)
3) Samples are only processed by the strict criteria in the computer (multiple choice options -- no creativity allowed).
4) Sample data is reported out into the LIMS as it is achieved
5) Most of this data is directly uploaded to the computer by the instrument
6) The operator verifies this data is correct
7) The data goes into permanent encrypted storage that is only accessible by the appropriately credentialed parties.
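If the seven steps above feel abstract, the core data model is actually tiny -- a barcode, a current location, and an append-only history of who moved what where. A hypothetical sketch (class and field names are mine, not any real LIMS API):

```python
# Hypothetical sketch of the chain-of-custody idea in steps 1-2 above: every
# sample move is an explicit record tied to a barcode and a tracked operator.
# Illustrative only -- not modeled on any actual LIMS product.

from dataclasses import dataclass, field

@dataclass
class Sample:
    barcode: str
    location: str
    history: list = field(default_factory=list)

    def move(self, operator, new_location):
        # step 2: a tracked person scans the sample into a new location
        self.history.append((operator, self.location, new_location))
        self.location = new_location

s = Sample(barcode="JH-000123", location="receiving")
s.move("tech_A", "prep_bench")
s.move("tech_B", "instrument_1")
print(s.location, len(s.history))  # -> instrument_1 2
```

Everything downstream (steps 3 through 7) is layered on top of that audit trail: allowed processing options, instrument-reported results, operator sign-off, and locked-down storage.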

This is obviously a lot trickier for some assays than others. What was the CRP level? The total albumin, etc.,? That's a lot easier than -- "what was the relative shift in the PARPylation level across the entire proteome normalized against the global level?" Extra steps involved, but it's the same stuff.

Whoa! Here is a great review that is better than what I'm going to write. You should check it out instead!

Do we have LIMS options now? I think so, and I hope that we'll have more. (Let's stick to global untargeted first)

In no particular order check out Proteios.

At first you'll think -- that paper is 10 years old! Well, here is a paper where it was used just a few months ago!

Another great one (more focused on the data management side, as far as I can tell) is msLIMS.  It runs via a really slick Java GUI that you can get here.

Okay -- for something completely different (and something you have to buy, no idea whatsoever the cost, but there is a 30 day free trial, disclaimers over there somewhere... -->) check out this LabKey thing!

It appears to be a more full-functioning LIMS with available Skyline/Panorama support and direct integration!

Back to the stuff for global -- here is another one from the rush around 2010 to get platforms out (MasSpectra) that has been seen in the literature as recently as the last few years (Bonus, it's a 2014 malaria flagella paper that I've never seen before!!)

Are there more? I hope so! But this is a decent start and more than I thought I'd find today!!

Saturday, June 22, 2019

MSFragger PD-Nodes!

...on the backburner of things that I wanted to verify I could talk about at ASMS....

The poster is up on the MSFragger website. The direct GitHub for the nodes (and installation instructions) is here. They appear to be compatible with PD 2.2 and PD 2.3. 

I haven't tried these crazy new Proteome Discoverer super powers, but the excited grad student who texted me about the nodes has them up and running and assured me the absurd performance increases displayed on the poster are reproducible.

I'll add them to the "where to get free PD nodes" box over there -----> when I get a spare minute or two.

Friday, June 21, 2019

Is 2019 the year of Single Cell Proteomics?

Image borrowed/taken from this great paper above.

In a field growing as rapidly in every direction as ours, it's impossible to pick a direction that it is going in. It really is, but halfway through the year I'm going to type it loudly on this keyboard that every other person on earth who has ever touched it has hated (it's this one, and it's awesome): this might be the year of single cell proteomics.


Before I start or continue rambling I'd like to point out something that I didn't know until a few months ago.

Did you know that single cell genomics doesn't give you a complete picture of the genomics stuff in that single cell?

IT TOTALLY DOESN'T! The genomics people like to hang it over our heads -- "there will never be an equivalent of PCR for proteins". Maybe there won't be. But...this paper is 1.23 pages long and yes, it's a few years old, but...

....and they are not getting anywhere near complete transcriptome coverage. (Grain of salt here, obviously -- I know less about what I'm typing here than normal.) But I've got it on good authority from an expert in this stuff (what up, Joe!) that they aren't getting a whole genome or transcriptome out of a single cell. Is that a "the emperor didn't win the popular vote" level surprise to you as well? Okay. So...what about proteomics?

I dunno. What if there are 2 single cell proteomics meetings this year? Does that do anything for you?

One just happened, so let's skip it. If you don't register for a conference 3 months in advance you obviously lose, right?

What's important is this one! (Jokes aside, you should register soon. It's free, and looks awesome!)

Wait. FIRST? Do you mean that we got to something in proteomics in the 21st century before Europe did!! About time! Take that you rest of the world people with your fancy pants schools where you learn stuff. We're first in single cell proteomics right now thanks to a red blooded American who grew up right here in Dallas, gosh darned Texas, named Nikolai Slavov.

I've never met anyone from the Slavov lab, I'm pretty sure. I've kicked some emails around, but if you read this blog you'd think we played on the same AMERICAN football team in middle school. I just really like this lab's work and general philosophy. The Slavov lab seems to be juuuuust a little over the status quo in the way science is disseminated to the world. Like -- waiting 6 months for a journal to finally format your paper for publishing, or -- if you aren't part of the right society or you couldn't attend a meeting for whatever reason -- missing out on all of it.

Case in point?

Who hasn't tried ScoPE-MS at this point? Most people I know tried it a year before the paper was printed. Yo. Dick Smith's lab had a killer poster on ScoPE-MS at ASMS where they threw in their technology (NanoPots) and it improved everything.  The info got out fast. People tried it. It worked, but had challenges and one of the powerhouse labs in our field jumped in with some improvements.

More pertinent case in point? I couldn't go to Boston for the Single Cell meeting. I've got a job and dogs to feed. I missed out, right?

BOOM. They're all here. No membership required. No expiration date on the talks. Free access for everyone.

Okay -- so this is maybe the coolest part -- and the example of why we need science to move faster (I know we're trying!)

Peter Kharchenko demonstrating a re-analysis of a new bioRXiV paper that came out just 2 days before he spoke!

Yes. There are downsides to the preprint process. Like, yes, you can be totally wrong and everyone will see it and you'll feel dumb, and maybe they wasted time that a reviewer would have saved everyone from. Peer review should never go away, but open, rapid access to awesome new scientific advances should be screamed from rooftops and written in blood on pancakes, and this is one of the clearest success stories of this model that I know of.

Pumped for the future. Let's knock down paywalls and give our talks online free access -- 'cause, just maybe, the next top mind in our field is in Bhutan or Malawi or Paraguay, and keeping all our results in our special clubs is the only thing standing between us and that person coming in and revolutionizing all of it.

Thursday, June 20, 2019

Independent Component Analysis -- Combine all the data!

Possible winner for the most times I've written something about a paper, deleted it, and started over again:

This new paper in press at MCP.

What? Independent Component Analysis (ICA).

Great wikipedia article on it here. Good to read before you get further into the paper and are like -- "wait -- can't you do exactly the same thing with Perseus...? isn't that exactly what it's for..." And...honestly, maybe you can do exactly the same thing. I'm not great with Perseus and I really, really should do MaxQuant summer school next year, but once you realize that ICA is a real thing you've just never heard of, the paper gets a lot more interesting (you're very likely much smarter than me and know all about it already).

Wait. Is the MaxQuant summer school in Madison this year?

On topic. Come on. You can do it.

What these authors demonstrate is the use of ICA to combine both proteomic and genomic data in cancer cells. What is cool about this is the fact that, while ICA is independent (that's the I), it can be used to classify combined signals against patient characteristics. If I write more about it I'll just embarrass myself, but considering I miscalculated the width of the isotopic envelope of a +1 small molecule and left that up on this blog for like 3 days, embarrassing myself clearly isn't something I concern myself with.

The data is processed in R, primarily using a package called FastICA (capitalization probably wrong). The data in question was pulled from TCGA (the cancer genome atlas) and CPTAC (iTRAQ 4-plex). I'm unclear as to whether this team developed the package, or if this is more of a proof-of-concept of using pre-existing packages to combine data in this manner, but it's an interesting approach and all the tools are out there now for any of us to try!
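For the curious, the same decomposition is available outside of R -- scikit-learn ships a FastICA implementation. A toy sketch of the unmixing idea on invented signals (these stand in for, and are vastly simpler than, the combined proteomic/genomic matrices in the paper):

```python
# Toy FastICA demo: two known source signals are linearly mixed, then
# recovered as independent components. The signals and mixing matrix are
# invented; this only illustrates the mechanics of ICA, not the paper's data.

import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 500)
s1 = np.sin(2 * t)                      # source 1: smooth sinusoid
s2 = np.sign(np.sin(3 * t))             # source 2: square wave
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.5, 2.0]])  # made-up mixing matrix
X = S @ A.T                             # the "observed" mixed data

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # recovered independent components
print(S_est.shape)                      # (500, 2)
```

The point of the "I" in ICA is visible here: the algorithm gets only X, never S or A, and still pulls the two statistically independent signals apart.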

Wednesday, June 19, 2019

Same everything -- except buffer -- 30% increase in IDs!

I totally dig this new study. It's Elfseverer again....I swear, I'm not doing this to discriminate against people in the EU. Papers show up in my feed and I try to read them without looking at the author names or what journal they came from to avoid my powerful subconscious biases -- and people still submit cool things to these journals. Wait -- shit -- I may have just been emailed that I was author 84/219 on something that went to one...I should write less and read more emails....

What was I....THIS!

This team takes a look at running the same sample, gradient, instrument parameters and data processing scheme and just switches up buffer B.

Quick note: If you've got a slEasyNLC you should definitely talk to your technical support or service engineer before even considering replicating this. Maybe any nanoLC, honestly.... definitely verify that you can run buffers other than the approved ones. EasyNanos will go through seals rapidly if you use more than 80% acetonitrile as your running buffer. I think if you put 100% acetone in buffer B....

However, this brave team obviously checked with their LC manufacturer, found it was all good, swapped up the buffers on their LC, and -- seriously -- got a 30% increase in HeLa IDs in a sub-2-hour gradient by swapping acetonitrile for methanol and then acetone.

What surprises me the most, perhaps, is that the elution profile is even similar.  If at 3% acetone, all the peptides came off in one single peak, I'd be less surprised. The detector was an Orbitrap Velos running in high/low mode and the RAW files are available on ProteomeXchange here (but have yet to go live as of the time of this post's rambling).

Tuesday, June 18, 2019

Blog comment add on might finally be fixed?

Ummm.... maybe I got it this time? Sorry if you tried to post -- particularly questions. Better to shoot questions at @ProteomicsNews (that's how the Twitter thing works, right? is it a pound sign?) or to the email address in the About page.

Spam is getting super creative and keeps breaking the rules that I set up.  Definitely less than 80 real comments/questions out of the 321 that were pending....

Lychee/Ackee toxins and how to detect them with LC-MS/MS

Did you hear about this? Until the tragedy of yesterday continued to unfold in my newsfeed, I'd never once heard of these awful toxins. I think last night the death toll was up to 100 children (CNN article here).

I'm not alone. This Science article from 2015 is the earliest place I've been able to find the naming of this toxin, and it honestly didn't seem super conclusive, but the article states the compound is methylenecyclopropylglycine (MCPG), which may be a close relative of hypoglycin -- a toxin found in unripe ackee fruit.

ChemSpider says there is an alpha and an N- form (the more recent studies below appear to focus exclusively on the alpha form)

Oh. Okay. That should be fun to detect with LC-MS/MS of any type.

mzCloud doesn't have it! I've tried every acronym in ChemSpider. Hey! Anyone with a HRAM system that has worked with this! Send your MS/MS to mzCloud!!

mzCloud is the best small molecule database in the world in terms of spectral quality. If you haven't used it, you definitely should check it out. You will need to use Microsoft Edge or Explorer because the site requires the Silverlight extension to run. It's worth it. If it's a shared computer, do something like this...

 You can also download the full/current mzCloud instance and use it locally through a program you might have ignored in your recent Foundation/Xcalibur/Tune updates. It's called "mzVault" and you can directly access it via FreeStyle if you have a PC that can't freely Bing the internet whenever you want. For now I've got to go somewhere else...

Bongo! PubChem has 3 papers where LC-MS/MS assays were developed for it! The studies focus on the alpha-form and ignore the N-, so we'll assume that's what we should be concerned with.

Here's the first one....

The other 2 studies are from the same group and the links to the rest, if you've read this far, are in the PubChem link. (I've been doing a lot of food research this year, and Elfseverer reaaaally dominates food science publications.)

Actually -- this paper might be the one that is the better reference (they used both QQQ and Q Exactive HF in this detection method...but the HF seems to be an after-thought....)

Gross. They dansylate the molecule to detect it. I'm going to be a jackass here. My experience with dansylation is that -- yes -- it increases the ionization efficiency of molecules -- but it is not necessary for today's more sensitive instruments, and -- since it modifies all sorts of things -- it often makes your job harder rather than easier. Are you using an old Varian? You've got a pre-TurboSpray 3000? Dansylate away, but if you've got something with 80% or greater transfer efficiency, or a way to enrich chromatographically or in the gas phase? I'll always try that first.

If I was a betting man -- I'd say that I could just detect this ion with a SIM/PRM centered here. (On SIM I'd narrow the heck out of the isolation width -- 0.7 Da on non-segmented quads (QE/Fusion 1) and 0.4 Da on QE Plus/HF/Lumos. {Correction, brain was off} I'd try the same on PRM, but I wouldn't stress if I had to center a 2.2 Da window on the single C13 M+1. For both PRM and SIM I'd use a ridiculous fill time -- you can do 3 seconds. One target? Why not?)

(This is the "mass spec scissors" function in ACD ChemDraw -- also available in the free - to - academics ChemSketch version.)

On my Triples -- I'm going positive scan all the way unless something goes wrong, and I'm setting SRMs on these fragments to just see how it goes (decimals put in because they'd actually be useful on the PRM):

82.0657 (the ion that can differentiate between the alpha and N-forms above -- by PRM..too close on a Triple quad)
44.9977 (you could track this ion on the Exploris!)
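Here's my arithmetic behind those decimals, assuming MCPG is C6H9NO2 and that the listed fragments are the HCOOH neutral loss and the CHO2 fragment (I'm also treating the proton as a full hydrogen atom, which is what the quoted values appear to do -- correcting for the electron mass would shift everything by ~0.0005):

```python
# Checking the fragment values above by hand. The MCPG formula (C6H9NO2) and
# the fragment assignments are my assumptions, not from the papers.
M = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def mono(counts):
    # counts: dict of element -> number of atoms, e.g. {"C": 6, "H": 9, ...}
    return sum(M[el] * n for el, n in counts.items())

mcpg = mono({"C": 6, "H": 9, "N": 1, "O": 2})   # neutral MCPG, 127.0633
mh = mcpg + M["H"]                              # [M+H]+, ~128.0712
frag_82 = mh - mono({"C": 1, "H": 2, "O": 2})   # loss of HCOOH -> 82.0657
frag_45 = mono({"C": 1, "H": 1, "O": 2})        # CHO2 fragment -> 44.9977
print(round(frag_82, 4), round(frag_45, 4))     # -> 82.0657 44.9977
```
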

To provide confidence that I'm on the right track (in case I'm waiting for the standard to come in....), I can check the fragmentation of similar compounds in PubChem.

The ackee toxin isn't in mzCloud yet either, but it is in PubChem

--and in HMDB here.

HMDB has predicted fragmentation spectra for different devices. This is the Q-TOF predicted and it appears to track with my manually calculated/predicted fragments.

Monday, June 17, 2019

HYPER-Sol -- Crazy reproducible data from FFPE Tissues!

Just about every hospital or medical institution has some way of collecting and storing formalin-fixed paraffin embedded (FFPE) tissues.  Maybe they keep them on-site themselves. I know of a lab that couldn't get their new Q Exactive installed because someone had filled a room belonging to them with FFPE samples and it took months to find a place for them on-site.  These are valuable materials. You don't just put them somewhere you can't find them later!  Someone might need them to find the next biomarker or treatment or whatever.

The ideal proteomics clinical samples are flash frozen, but FFPE is sooooo much cheaper to store and there are thousands (millions?) of samples throughout the world.

FFPE poses some serious challenges for proteomics. The words formalin and paraffin come to mind first, for some reason. FFPE proteomics is in no way a new idea. Here is a Scholar search I just did requiring both of those terms that kicks back 8,000+ entries. Keeping in mind Scholar's redundancy issues, and the large number of M.D.s who insist on calling western blots and ELISA assays proteomics, there is still a lot of stuff going back years, and a lot of it is great. But -- it's proteomics -- I'd confidently bet that no two labs used the same sample prep methods for their studies.

What if there was a complete workflow that could get near identical data from samples that were stored as either FFPE or as Flash Frozen? That would be worth typing about, right?

You can check out the preprint of HYPER-Sol here.

The data was generated on a Fusion 1 and HF-X and it looks like all the DIA data was processed in SpectroNaut "Pulsar X". DDA data processed in Proteome Discoverer appears to directly import into it as spectral libraries and it handles all the DIA stuff from there.

A lot of the time when we're comparing different sample prep workflows we focus on ID overlap, and that's cool and everything. However, we're feeling pressure at all times to be more quantitative about everything. Not only does this workflow get a high degree of ID overlap, but the quan values ALSO line up. This team demonstrates r2 (squared....wait...I can't superscript? Even Word can superscript...weird...) values that were up in the 0.9s -- impressive, considering how differently the samples were originally treated!

Okay -- and a big shoutout to Markus Hartl at MPL for tipping me off to another big preprint on this topic. This one uses SWATH and variable DIA windows.

Sunday, June 16, 2019

Reducing FT-ELIT size!?! Is this idea actually realistic?


So...the Fourier transform electrostatic linear ion trap (FT-ELIT) has been an idea that has bounced around for a while -- but this recent-ish paper in JASMS makes it seem like it might actually be a competitive technology!

Here is the basic idea (from someone totally and completely unqualified to describe said idea): instead of ejecting ions from the ion trap in a linear manner by ramping up the RF (or the DC/RF ratio, depending on the trap architecture) so that they destabilize in a predicted order (cool review on this here), what if you did hard math instead? Could it be that the ion trap is already physically capable of so much more, but we haven't been utilizing it properly? That's what the FT-ELIT looks like.
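The "hard math" is a Fourier transform: record the periodic signal the trapped ions induce, then pull the oscillation frequencies (which map to m/z) out of it, rather than ejecting ions at all. A cartoon of just the math, with a single made-up sine wave standing in for a real image-current transient:

```python
# Cartoon of frequency-domain detection: FFT a 300 ms "transient" and read
# off the oscillation frequency. The sampling rate and 150 kHz frequency are
# invented; real FT-ELIT/Orbitrap signals are messy image currents.

import numpy as np

fs = 1_000_000                    # sampling rate in Hz (made up)
n = 300_000                       # 300 ms transient, as in the paper
t = np.arange(n) / fs
true_freq = 150_000.0             # pretend ion oscillation frequency, Hz
transient = np.cos(2 * np.pi * true_freq * t)

spectrum = np.abs(np.fft.rfft(transient))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)                       # recovers the 150 kHz oscillation
```

Note the built-in tradeoff this illustrates: frequency (and hence mass) resolution scales with how long you watch the transient, which is why scan time is the whole battleground in the comparison below.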

However, the earlier applications made it seem more like a novelty. "Cool! You can get like 10,000 resolution in that ion trap, but each scan takes like 6 hours? Riiiiight oooooonnnnn....." (backs away slowly....)

In this study we see competitive levels of performance from tweaking the trap and decreasing its size (think about the boost in resolution/speed in the OrbitalTraps when you drop from a diameter of 30 mm to 20 mm. Huge boosts!). They also get closer to measuring the initial injection of the ions off of the mirror that puts them in there.

What can they get with all this work?

22,000 resolution at an m/z of 1,150 in 300ms.

Let's compare.

This is data from a high field (D20) Orbitrap using 60,000 resolution at m/z 200. At around 1,145 m/z it's getting superior resolution (26,000 or so), but similar ballpark.
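That ballpark checks out on paper, too: at a fixed transient length, Orbitrap resolving power falls off roughly as 1/sqrt(m/z). A two-line sanity check (that scaling law is the only assumption here):

```python
# Orbitrap resolving power at a fixed transient length scales roughly as
# 1/sqrt(m/z). Given 60,000 at m/z 200, what should m/z 1,145 look like?

def orbitrap_resolution(r_ref, mz_ref, mz):
    return r_ref * (mz_ref / mz) ** 0.5

r = orbitrap_resolution(60000, 200, 1145)
print(round(r))  # ~25,000 -- same ballpark as the value read off the spectrum
```
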

What's a 60k scan in a high field? 96 ms? I forget. Something in that range -- definitely less than half the 300 ms the FT-ELIT is using, so the D20 Orbitrap clearly wins. The D30 trap, I know for sure, is 256 ms for a 70,000-resolution scan when the D30 has eFT (similar to the improvement in this study, eFT monitors closer to the zero time point, when the ions are ejected directly into the Orbitrap). Instruments without eFT enabled, such as the original Orbitrap? I think the FT-ELIT might actually win, by achieving comparable resolution in a shorter amount of time! At the very least, it is reaaaally close. I can't seem to find an Orbi Classic file on the shared drive and I got bored looking for one.

Okay -- I don't know if this is a reproducible thing or if it requires a football stadium of electronics to make it work, but it is certainly interesting to think about, right?

Since I'm too lazy to look on a network drive for an older Orbi file, I'm definitely not going to check the relative mass accuracy of the 2 devices. (Don't forget, Orbi resolution has only relatively minor effects on mass accuracy -- if you're doing over 15,000 resolution, going all the way to 500,000 doesn't improve mass accuracy, though revealing coeluting peaks you hadn't seen before may make it look like it does.)

I should stop typing on the interwebs and go play outside.

Saturday, June 15, 2019

BASIL -- Use TMT to amplify phosphopeptide IDs!

YES. BASIL is not only tasty, it's also a great way to improve your phosphoproteomics! You should check out the paper here.

Okay -- so you know how ScoPE-MS works for other people (not me, but it's definitely my sample prep inabilities)? In ScoPE-MS you load a lot of signal in one channel of TMT. Since all your peptides get combined in the MS/MS, the high channel provides enough signal for identification, and then the reporter ions allow you to quantify things that would be below your S/N for an ID.

What if you did the same thing for phosphoproteomics?  That's BASIL.
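A cartoon of the carrier-channel trick the two methods share -- the summed peptide signal that drives the identification is dominated by the boost channel, while each tiny sample still gets quantified from its own reporter ion. All numbers here are invented for illustration:

```python
# Cartoon of the ScoPE-MS / BASIL carrier-channel idea: the carrier dominates
# the summed MS/MS signal (enabling the ID), but relative quan between the
# low-input channels comes from their own reporter ions. Invented numbers.

reporters = {
    "carrier_126": 200_000.0,   # bulk/boost channel (lots of material)
    "sample_127": 350.0,        # tiny phospho-enriched sample
    "sample_128": 700.0,        # another tiny sample
}

total = sum(reporters.values())                       # signal behind the ID
ratio = reporters["sample_128"] / reporters["sample_127"]
print(total, ratio)                                   # -> 201050.0 2.0
```

Neither small channel would have survived an identification threshold on its own, but riding along with the carrier, their 2:1 ratio is still measurable.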

The numbers are the normal ludicrous values everyone feels it is mandatory to report for phosphoproteomics data ever since the first group reported 785 trillion phosphosites (or whatever...) on a Finnigan LTQ back in the day -- but it gets you to that number of sites (and, presumably, the realistic smaller number that are actually there) with way less material!

Friday, June 14, 2019

Level up PRIME-XS -- it's now 11 sites for EPIC-XS!

PRIME-XS was, in my mind, the smartest and most effective use of proteomics resources since this little field started.

How many papers came out of PRIME-XS? Is it the reason that the EU spends far less money on proteomics instruments than the U.S., but massively insanely leads the U.S. on numbers of papers published and in medical studies that feature applications of proteomics? I don't know for sure, but I do strongly believe it contributes.

What should you do when PRIME-XS is over? You expand it! You give it an even better name!

Here is a press release!

Here is the new and upgraded EPIC-XS!

My understanding is that researchers from all over the EU apply for proteomics support like this:

Awesome researcher who doesn't know proteomics stuff: "Hey -- I've got the coolest model system you've ever heard of and I think some proteomics would allow me to: cure this disease/bring the honey bees back/impeach the US president/lower the temperature of the globe/create a malaria vaccine that doesn't cost $4,000 (insert anything else that would drastically improve the quality of the life on earth and the chance any mammals are still alive 50 years from now)"

Some of the best proteomics labs on earth "Great! We don't fuck around with these mass specs. If proteomics can give you an answer, we're the people that can pull it off, and this sounds way cooler than running HeLa or yeast or whatever I can actually get access to most of the time."

XS: "BOOM! Cover of Nature/Science again! Saved human and dolphin lives!"

WE NEED TO DO THIS IN THE U.S.!  5 years ago I volunteered to head the US one. My aspirations are more reasonable now. I will sell my labs right now. Sign me up as the janitor. Whatever I can do to make this happen here.