Category: Blogs

  • Welcome to the respirometry.org blog!

    Welcome to the Respirometry.org blog, an occasional or sometimes regular blog about metabolic rate measurements and other topics!

    This blog follows no rules and no regular publication schedule, though I’ll be updating it as regularly as my schedule allows. Most of the emphasis will be on the measurement of metabolic rates, but don’t be surprised by asides into sacred or profane areas of discourse.

  • Black boxes versus traceability in metabolic measurement

    Anyone who has read Chapter 13 of my book knows that I have a hobby-horse: the accuracy of metabolic rate measurement, especially in the metabolic phenotyping systems that are so widely used in the biomedical field. Now I’ve taken steps to make some of my (and others’!) misgivings known. 

    It’s really frustrating to watch otherwise capable researchers in the biomedical field flock to use the same black-box metabolic screening systems that others used before because, well, everyone uses them.

    Of course there are good reasons for this situation. There’s a natural wish to use systems that have become accepted in the field, even if the underlying reasons may not hold up to close examination.

    Yet a metabolic phenotyping system built for traceability and transparency can be used just as effectively by someone who doesn’t know or care how it works. The difference from a black-box system, which equally demands no understanding from its user, is this: if someone wishes to understand the behind-the-scenes operation of a system built for traceability and transparency, including all its corrections, assumptions and calculations, that understanding is possible.

    But it isn’t easy to convince scientists to move to transparent, traceable systems; instead, they are attracted to the familiar. Which is too bad, because important decisions can be made on the basis of data produced in questionable yet untraceable and unauditable ways. Here’s my humorous take on this herd reaction:

  • Thoughts on Apple

    I’ve written quite a lot of scientific data acquisition and analysis software, some of which – like Datacan and ExpeData – are in use in many laboratories around the world. I’m often asked: Why don’t you write software for Apple computers rather than PCs? There’s a reason, or maybe a couple of them, and here they are.

    Picture the scene: Los Angeles, specifically UCLA, in 1987. For three years I’d wanted to write data acquisition software for Macs, but it was functionally impossible to add the necessary interfaces to them because of their closed architecture. Then, in 1987, Apple announced that it was demoing their brand new Macintosh II on-campus, so I rushed to the event. I was blown away (well, maybe infatuated, because the machine did crash a lot, but hey, it was gorgeous). Best of all, here, finally, was a Mac that could be interfaced to laboratory instruments because it had a card slot with which I could connect analog to digital converters and other lovely things. The demo was being run by a Mac nerd with a thin, weasely voice who reacted very negatively when I asked whether the Mac II’s card slot was documented and open for use. “No”, he said, “it’s proprietary.” “Surely you’re encouraging people to use it?”, I asked, to which he replied, “It’s for our products. If we catch anyone else trying to reverse engineer it, let alone use it, we’ll sue them.”

    And I’m sure he meant it. I thought, **** you very much, too, and walked away. After that I avoided using Apple computers or software for several years – about twenty, in fact.

    As for Apple and scientific data acquisition? Though as a graduate student I was threatened with a lawsuit if I tried to use a Mac II for that purpose, that didn’t stop Apple from cutting deals with big companies such as National Instruments (of LabVIEW fame). It all fits together in a strange way.

    That said, my aversion to using Apple products turned out to be inconsistent. I was an early (too early!) adopter of the iPod Touch, which was visually delightful but otherwise pretty dysfunctional in its first incarnation. And now I find it difficult to imagine not having an iPad. Sitting with Robbin in an oceanside bar in Florida at sunset on October 5, 2011, I heard of Steve Jobs’ untimely death and felt unexpectedly and deeply moved, as if I had lost a friend.

  • Welcome to the revitalized Respirometry.org!

    Just a quick post to let the teeming multitudes (all three of you) know that Respirometry.org’s long stasis is at an end. I’ll be posting new content on a regular basis, much of it available nowhere else. I’ll be blogging muchly about Promethion, a new metabolic measurement system for biomedical researchers, but also on other topics.

  • The future of metabolic phenotyping

    Data from a parallel, continuous metabolic phenotyping system (black line) vs. a multiplexed system (red line)

    The essence of metabolic phenotyping is accurate metabolic measurement. As a matter of convenience, cost and feasibility, practically all metabolic phenotyping systems operate in a multiplexed mode, in which a single gas analysis chain is shared between multiple animals, typically 8, 10, 16 or more. Cycle times between metabolic measurements for a given animal vary widely between systems, ranging (in the case of 16 animals) from 2 minutes for an optimized Promethion-M Multiplexed metabolic phenotyping system to as much as ~45 minutes for its competitors. The result is a heavily sub-sampled data set from which much fine temporal detail is missing or distorted.

    The picture above is worth a thousand words. It shows the output of a brand new Promethion-C Continuous, parallel metabolic phenotyping system. Data from one of sixteen animals (mice, strain C57BL/6J) are shown. Eight at a time were measured simultaneously, without multiplexing, and the system is capable of indefinite expansion (one pharmaceutical company has a 24-channel Promethion-C system). Click on the picture to embiggen it. Data on VCO2, food and water uptake, body mass, water loss rate etc. were also acquired synchronously but are not shown. For more on this topic, including an excellent interactive visualization of the distortions caused by multiplexing, visit this later blog post.

    The multiplexed system (red lines) was simulated from the output of the Promethion-C system*, assuming a 30 minute cycle time, which is faster than average. As you can see, the Promethion-C system (black trace) tracks each metabolic excursion by the animal, as it alternates between rest, pedestrian locomotion, and wheel running, with extraordinary fidelity. The data storage interval of the Promethion-C metabolic phenotyping system is one second for all attached sensors.
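    The multiplexing distortion is easy to demonstrate for yourself. Below is a minimal Python sketch of the kind of simulation described above, assuming a simple sample-and-hold model in which each cage’s most recent reading is held between analyzer visits. The function name, parameters, and dwell-time model are my illustrative assumptions, not the actual simulation code used for the graph:

```python
import numpy as np

def simulate_multiplexing(vo2_1hz, n_cages=16, cycle_s=1800):
    """Subsample a 1 Hz continuous VO2 trace as a multiplexed system would see it.

    Each cage is visited once per cycle; between visits the last measured
    value is simply held (sample-and-hold). This is an illustrative model,
    not Promethion code.
    """
    dwell = cycle_s // n_cages           # seconds the analyzer dwells on this cage
    out = np.empty_like(vo2_1hz)
    last = vo2_1hz[0]
    for t in range(len(vo2_1hz)):
        if t % cycle_s < dwell:          # analyzer is on this cage: fresh reading
            last = vo2_1hz[t]
        out[t] = last                    # otherwise the stale value is held
    return out
```

    Feeding a real 1 Hz metabolic trace through a function like this and overlaying the two traces reproduces, qualitatively, the black-versus-red comparison in the figure: every metabolic excursion shorter than the cycle time is flattened or missed outright.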

    Now that Promethion-C is available, it is extremely hard to justify acquiring multiplexed metabolic phenotyping systems any longer unless cost is an overriding factor. If that is the case, Promethion-M offers the fastest cycle times available – for example, down to 2 minutes cycle time for a 16-cage system. Like all Promethion systems, Promethion-M also offers the many traceability and transparency benefits of complete raw data retention. Plus, most components of a Promethion-M system can be used in your Promethion-C system if you ever decide to upgrade.

    Designing and building a “massively parallel”, continuous, multi-animal metabolic phenotyping system is far from easy. This is especially true if the system stores all of the raw data from all sensors – a requirement for good laboratory practice. The bandwidth requirements are formidable, as are the requirements for implementing suitably flexible data analysis protocols. There is no way this massive exercise in coordinated integration will work unless, as with Promethion, the manufacturer has total control over all aspects of the design of all instrumentation comprising the system. This we do. As a result, Promethion-C is up and running, in production, a documented and field-proven product with multiple installations in the field.

    As the chief designer of both Promethion systems, it’s been a privilege to have the opportunity to turn my knowledge of respirometry and passion for innovation to practical use for the biomedical community!

    If you have any questions, contact me.

     — John Lighton

    * Thanks to Thomas Förster, Ph.D., Sable Systems International’s expert in-house data analysis and data presentation consultant, for running the simulation and creating the graph.

  • Automated behavior analysis: The basics of EthoScan

    When mice (or rats) are wandering around their Promethion system cages and simply being mice (or rats), they interact with the various sensors in their cages – the mass sensors attached to their food and water dispensers, their body mass sensor, their running wheel, and their X-Y open-field position sensor, for example. In the Promethion system, each interaction is precisely recorded with a heartbeat of one second.

    This makes it possible to construct a detailed analysis of the animal’s behavior. Looking at Promethion data, you can easily see how this is achieved. In this 20-minute section, zoomed in from a much longer recording, you can see the raw data from four sensors – water dispenser mass (blue), habitat mass (green; this allows body mass to be measured each time the animal enters or leaves the habitat), food hopper mass (red), and running wheel revolutions (orange), with a time resolution of one second. The mouse’s transitions between different behaviors are very obvious, the times at which they occur are precisely known, and they are quantifiable. (To make the graph clearer, the various sensor data are not to the same scale, and not all of the sensor data are shown). Click on the picture below to embiggen it.

    So, in this example, we see the transition (starting at the left) from running, to lounging about (without touching anything) for a few seconds, to entering the habitat (and thus being weighed), to running again, to touching the habitat without entering it, to running, to touching the habitat again, to drinking (notice the reduction in water dispenser mass), to eating (see the reduction in food hopper mass), to lounging and then touching the home habitat, to running, to entering the home, to running, to touching the food hopper without eating, to drinking – you get the picture!

    Now we can invoke ExpeData’s EthoScan automated behavior analysis software. It gives us several outputs. The simplest is a list of behaviors. Let’s look at such a list, for the exact sequence we see above. Here is that section of data shown in Excel. Each behavior has a five-letter code – WHEEL for wheel-running, SLNGE and LLNGE for a short or long lounge (< 60 and >= 60 seconds), THOME and IHOME for touching or actually entering its habitat, TFODA and EFODA for touching or actually eating from food dispenser A, TWATR and DWATR for touching or actually drinking from the water dispenser. A section of particular interest is highlighted. (Click to embiggen.)

    A small section cut from an EthoScan, corresponding to the above picture

    As you can see, each behavior has a start and end time and date, a duration in seconds, and an “amount”, a quantification based on the behavior – revolutions for WHEEL, body mass for IHOME, mL of water consumed for DWATR, grams of food eaten for EFODA, and centimeters locomoted for SLNGE. Much of this information is impossible to obtain from video analysis or other behavior quantification systems – yet this rich mine of data is a standard feature with Promethion, which knows how to use its sensors intelligently.

    If you look at the highlighted section, you can see how the mouse’s body mass rises abruptly after eating and drinking. This gives you mass balance data you can’t get any other way.

    Ah, but there’s more. Having the list of behaviors, EthoScan now gives you time budgets…

    And it gives you non-wheel ambulatory budgets…

    And it gives you with-wheel ambulatory budgets… all automatically. Note the huge contribution from the running wheel. This mouse ran over 4 kilometers in a 24-hour period – and that’s actually pretty lazy by mouse standards.

    But wait, there’s more! Given a particular behavior, what is the probability that a given behavior will occur next? What we need, in short, is a transition probability matrix. And yet again, EthoScan obliges. Here it is.

    For example, after eating food (EFODA), there is a small likelihood that this mouse will drink water or touch the water dispenser without drinking (DWATR, TWATR), but it’s very likely (> 80%) to walk about the cage for a few seconds (SLNGE) before doing anything else. It won’t go straight to the wheel after eating or drinking. But after a short lounge the most likely behavior is to run on the wheel. You can, in effect, see how the mouse’s mind works, and how it balances the transitions between behaviors and their duration and quality. And all of this information is hard, numeric data that can be tested and rigorously compared across animals, strains, knockouts, dosages, treatment groups, and so on.
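    For the curious, a transition probability matrix of this kind takes only a few lines to compute once you have an ordered list of behavior codes. This generic Python sketch shows the underlying idea; it is not EthoScan’s actual implementation:

```python
from collections import Counter, defaultdict

def transition_matrix(behaviors):
    """Row-normalized transition probabilities from an ordered behavior sequence.

    `behaviors` is a list of codes such as those EthoScan emits
    (WHEEL, SLNGE, EFODA, ...). Each row of the result gives, for one
    behavior, the probability of each possible next behavior.
    """
    counts = defaultdict(Counter)
    for a, b in zip(behaviors, behaviors[1:]):   # count each observed transition
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}
```

    Because each row sums to 1, the matrix can be compared directly across animals, strains, or treatment groups, exactly the kind of hard, testable number the post describes.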

    And it all comes, complete and at no extra cost, with any Promethion system, whether you use behavioral analysis or not. In fact you can even opt to use a Promethion system for behavioral analysis only. You can always add metabolic measurement later if you need it.

    I still remember when EthoScan occurred to me. It was one of those “aha”, and “I remember exactly where it happened” moments. Examining a Promethion recording similar to the one that started this blog post, the idea for EthoScan sprang fully formed into my mind – using the rich sensor data from Promethion in a way never intended when the system was designed. But somewhere deep in my mind Jeanne Altmann’s classic 1974 paper (Observational Study of Behavior: Sampling Methods) had been percolating since undergraduate days. To use her terminology, EthoScan boils down to simultaneous Focal Animal behavioral sampling of all animals at once, with one-second sampling resolution – and with no wandering of the observer’s attention. Sometimes the best things happen without conscious planning.

    If you have any questions, don’t hesitate to contact me.

    — John Lighton

  • Data, data, data!

    This shows a day’s worth of data from a single mouse in graphic form, recorded by a Promethion-C system for a research study on which I’m collaborating. The time resolution of the data set is one second. Loads of additional data (such as XY position and so on) didn’t make it into the graph but are waiting in the wings in case they’re called on for duty. Click on the graph* to embiggen it.

    So, what’s happening? The top panel shows VO2 and VCO2 (rate of O2 consumption and CO2 production, respectively). You can see they’re quite variable, and that most of the variability is explained by the next panel, which displays wheel running and non-wheel pedestrian locomotion in blue and orange, respectively. You can see how the VO2 and VCO2 traces faithfully reflect the increased metabolic rate that accompanies locomotion. The next panel, RQ, shows the fuel that the animal is burning. It can vary from 0.7 (fueled entirely by fat) to 1.0 (fueled entirely by carbohydrates). As you can see, when the mouse is running, it shifts the fuel it is burning more towards carbohydrates. Next we have food and water uptake, then below that, the body mass of the mouse. (You might wonder how that’s measured; inside the cage there’s a cute little habitat attached to a high-resolution mass sensor, and the mouse gets weighed each time it enters and leaves the habitat. The food and water uptake sensors work in a similar, differential way). You can see how the mouse’s body weight (or body mass, to be rigorous) increases when it goes through feeding and drinking bouts. And finally, we have something that only Promethion can measure in the metabolic phenotyping arena, which is water loss rate. That’s the sum of the water the mouse ate and drank and later excreted, and the water the mouse produced metabolically. You can see how closely it tracks metabolic activity. Metabolic water production can be very significant. Would you believe that 1 gram of fat produces over a gram of metabolic water?

    Just a tiny appetizer, a soupçon, of what you can get from a good metabolic phenotyping system.

    — John Lighton

    * Thanks to Thomas Förster, Ph.D., Sable Systems International’s expert in-house data analysis and data presentation consultant, for creating the graph.

  • Measuring food uptake differentially

    Let’s say you need to measure the food uptake of an experimental animal, which of course could mean any creature, including you. For the sake of simplicity, imagine a mouse or a rat feeding intermittently from a food hopper.

    You’d think that all you needed to do was weigh the hopper periodically, such as at the start and end of each 24-hour cycle, and see how much its mass decreases. You’d be right, in a sense. That will indeed measure the change in food mass over that period. But if you think that the change in mass is an accurate representation of the amount of food the critter ate, you might be very wrong.

    This is because most food, including rat or mouse chow, is hygroscopic. It absorbs water from the water vapor in the air to an extent roughly proportional to relative humidity. And relative humidity is anything but constant, particularly inside a cage. As a result, neither is food mass.

    To get accurate food uptake figures, you need to measure differentially. In other words, food uptake must be calculated from the difference in food hopper masses just before and just after each feeding event. This figure* (where d is food hopper mass) illustrates the point.

    As you can see, a feeding event corresponds to a large increase in the variance of the measured food hopper mass. A good food uptake calculation algorithm, such as the one used by Promethion, searches for sections of stable mass readings immediately before and after each such event. Then it compares those readings and tests them for statistical significance. If a significant difference is found, the event is designated as a food uptake event. If not – and a surprising number of interactions with the food hopper don’t result in significant food uptake – then it’s ignored.

    As a result, slow changes in hopper mass resulting from fluctuations in relative humidity no longer distort food uptake data.

    True, but analyzing the problem at a deeper level, the mass of food that is eaten, however accurately it’s measured, still reflects the sum of two partitions:

    1. The dry weight of the food that is eaten
    2. The weight of water associated with the food

    The water content of typical mouse or rat chow is about 10-15%, so the error can be significant. Dry food mass would be a much better measure of food uptake.

    Funny you should say that. Because the Promethion system (unlike any other food uptake measurement or metabolic phenotyping system) measures water vapor partial pressure in the air pulled from the cage, it is possible, knowing this, to back-calculate food mass to its “dry” state, mathematically. All that is required is a good characterization of the chow’s mass versus ambient water vapor partial pressure.
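    As a sketch of the idea, suppose the chow’s hygroscopic behavior has been characterized as linear in ambient water vapor partial pressure. Both the linear model and the coefficient below are placeholder assumptions; a real characterization curve would have to be measured for the specific chow in use:

```python
def dry_mass(measured_mass_g, wvp_kpa, k=0.004):
    """Back-calculate chow mass to its dry state.

    Assumes a measured characterization of the form
        wet_mass = dry_mass * (1 + k * wvp)
    where wvp is water vapor partial pressure in kPa. The linear form
    and k = 0.004 per kPa are placeholder assumptions for illustration.
    """
    return measured_mass_g / (1.0 + k * wvp_kpa)
```

    With the cage’s water vapor partial pressure logged alongside every hopper reading, each food-uptake event can be corrected to dry mass at analysis time.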

    Not a single researcher anywhere in the world is yet doing this. But it’s possible (though only with Promethion). I wonder who will be the first to fill this vacuum?

    — John Lighton

     * Thanks to Thomas Förster, Ph.D., Sable Systems International’s expert in-house data analysis and data presentation consultant, for creating the graph.

  • Data Visualizations you haven’t Seen Before…

    OK, imagine a mouse in a cage. With Promethion, you can monitor that mouse’s position, as a calculated centroid, to within 2.5 mm (several times higher resolution than legacy systems). You do this with an infrared light array. So, what do you do with those data?

    Well, you can of course calculate the time which the mouse spends in various areas of the cage. Depending on your research goals, this can be quite informative. Here’s a graphic representation (click to embiggen). Note that in these views, you should mentally move the running wheel a bit to the left; the spot shown in the running wheel is actually just to the right of it, and corresponds to the mouse’s last recorded position before it entered the wheel.

    Well, that gives you some information regarding the mouse’s favored positions. But what is the metabolic state of the mouse in each of these areas? With Promethion, it’s easily possible to find out because all of the data from all of the sensors in the system are synchronized. How about VO2 vs. position? (click to embiggen).

    Now we’re seeing something interesting. This mouse was measured at 29 °C, within its thermoneutral zone, so its resting metabolic rate was around 0.6 ml O2 min⁻¹. There are certain areas characterized by much lower, or higher, VO2s than others. Wouldn’t it be informative to know the characteristic RQs at each location, which would tell us about its respiratory substrate utilization across space? Certainly (click to embiggen).

    …and this visualization tells us a lot more about the metabolic nature of this mouse, and shows us its resting areas (low RQ) very clearly. Now, think of the applications of this type of visualization for thermogenesis research (hello, BAT)!

    Update! For an interactive, graphic exploration of how multiplexing distorts metabolic data, be sure to visit the multiplexing visualization page on this site. Have fun!

    As usual, thanks to Thomas Förster, Ph.D., Sable Systems International’s expert in-house data analysis and data presentation consultant, for creating the graphs.

    — John Lighton

  • Three reasons for misleading gas analyzer accuracy specifications

    A funny thing happened on my way to helping a colleague make sense of the specifications of a metabolic phenotyping system. There were many strange things to mull over. Not the least of these were oxygen and carbon dioxide analyzer accuracy specifications that appeared to be the product of smoke, mirrors, and absinthe-infused coca tea sipped through a crazy straw.

    Then it struck me: A sense of déjà vu all over again. I’ve seen similarly bloated and unworldly specifications made by the manufacturers of human metabolic measurement equipment. Why is it that companies that serve the biomedical market feel compelled to exaggerate analyzer specifications beyond the bounds of credibility? Curious minds want to know.

    For that reason, I’m transcribing my spiral-bound anthropological notebook and presenting here my own, brief, analysis of the phenomenon.

    Reason one is specialization. Scientists and researchers in general know a great deal about their subject (well, most do), but outside their area of expertise, not so much. Thus, they are vulnerable to exaggeration and outright untruth in unfamiliar territory. This is especially true with regard to matters such as accuracy versus precision, and measurement theory in general.

    Reason two is inconsistent terminology. Accuracy is an elastic concept, and unless you’ve had some exposure to measurement theory you will consider accuracy, quite reasonably, to be a measure of relative error. We can all agree that 0.000% accuracy implies that a measurement is made without error. But what does “0.04% accuracy” mean? This is the accuracy that one metabolic phenotyping system manufacturer claimed for their third-party O2 and CO2 analyzers.

    UPDATE added in 2017: Some legacy metabolic phenotyping system manufacturers have now jumped the shark! Not content with claiming 0.04% accuracy, they are now claiming “0.001% of reading accuracy”. Their previous fictitious specifications were already unreachable, and now they have managed to make them 40 times more unreasonable and dishonest. But back to the main text:

    Here is where things get interesting. In order to evaluate the accuracy of an analyzer, you must use it to measure an accurately known standard, and then determine the error in its measurement of that standard. Say, for example, you pass an exactly known 1.0000% CO2 standard gas through the analyzer, and the analyzer reads 1.0004%. In that case, the error (in percent of reading) is 100 × (1.0004 – 1.0000) / 1.0000, or 0.04%. Notice the number of significant figures? In order to measure this degree of accuracy, our CO2 standard gas must be exactly 1.0000%, not 1.0001% or 0.9999%.

    Here is where the real world pokes its muzzle into this ideal world and barks, loudly.

    Because the fact is, it is literally impossible to obtain such accurate gas mixtures. Not difficult. Impossible. This is for a variety of technical reasons I’ll explain if there’s some demand for it. But the bottom line is that the best span gas anyone can buy is accurate to 1%. This means that the actual concentration of the CO2 span gas mentioned above will be anywhere from 0.99% to 1.01%. So as you can see, in most cases – and always, with CO2 analyzers – any accuracy figure of better than 1% is garbage, pure and simple. The sole exception to this is if you manage to persuade a national standards laboratory to create some 0.25% or so accuracy span gas, for which you will pay many thousands of dollars. In that case, you are equipped to evaluate accuracy down to 0.25%, if you make the assumption that the only cause of error is from the standard gas concentration. Which is highly questionable, as any student of measurement theory will tell you. Plus, you can only measure the error and thus the accuracy of the analyzer at that one concentration, because there is no way of mixing that gas to create a lower concentration without introducing a new source of error in the 1-2% range.

    But 0.04% accuracy? Srsly?

    There is one way of expressing accuracy that allows this amazing specification. As Dr. McCoy would say, it’s accuracy, Jim, but not as we know it. And that is to express accuracy in absolute terms at some point in the analyzer’s measurement range. In the case of the system specifications I was looking at, the maximum CO2 concentration it could measure was 2%. Let’s say the analyzer was fed a nominal 2.00% span gas, and actually measured 2.04%. In that case, its absolute error could be stated as 0.04%, and I suspect that this is exactly what the drafter of that metabolic phenotyping system’s specifications had in mind.
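    The two conventions are easy to confuse precisely because the arithmetic is so simple. Here is the worked example above in a few lines of Python (generic arithmetic, not anyone’s product specification):

```python
def error_percent_of_reading(measured, true_value):
    """Relative error, expressed as a percentage of the true (standard) value."""
    return 100.0 * (measured - true_value) / true_value

def error_absolute_concentration(measured, true_value):
    """Absolute error, expressed in concentration units (% gas)."""
    return measured - true_value

# A nominal 2.00% CO2 span gas read as 2.04%:
# the very same measurement is a 2% error of reading,
# or a "0.04%" error in absolute concentration units.
```

    Same measurement, two very different-sounding numbers: 2% of reading versus 0.04% absolute. Quoting the latter without saying so is how a spec sheet conjures impossible accuracy.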

    Was the metabolic phenotyping system manufacturer trying to mislead its potential customers? I leave that for the reader to decide.

    Bearing on that last question, here is an interesting fact. The metabolic phenotyping system in question uses a combination O2 and CO2 gas analyzer made by Siemens, a very solid and reputable industrial process-control firm. Siemens is not given to flights of fancy. So, what does Siemens give as the accuracy specification of that analyzer, as shown in the analyzer’s downloadable instruction manual?

    In fact, Siemens says exactly what they should say: “Calibration error: Dependent on accuracy of calibration gases”. This is the only accuracy specification they give. They do, however, give two other specifications that are important when considering accuracy: Repeatability and linearity deviation, both of which are 1%, again showing that the claim of “0.04% accuracy” for that same analyzer is, shall we say, imaginative.

    Reason three is peer pressure. For some reason, there is a Red Queen race going on among biomedical equipment suppliers. Some of them are tripping over themselves to invent fantastical accuracy specifications that will impress researchers who are not used to thinking critically about accuracy specifications – not because the researchers are stupid, or credulous, but because they make the assumption that equipment manufacturers and resellers are being as honest as they (the researchers) are.

    And that isn’t always the case. Worse than that, paying uncritical attention to accuracy figures may lure researchers into a lair which, in hindsight, they will wish they had not entered. And that’s a pity for their research, for their funding agencies, and for human curiosity.