Accuracy

When I first read Shannon’s 1948 paper “A Mathematical Theory of Communication”, one set of numbers particularly jumped out at me: the numbers for the difference between the error rate of a communications channel and the rate at which information is lost from it. I recently ran into a free link to the paper (in a blog post by Federico Pereiro), which reminded me of it. As Shannon explains the issue:

Suppose there are two possible symbols 0 and 1, and we are transmitting at a rate of 1000 symbols per second with probabilities p0 = p1 = 1/2. Thus our source is producing information at the rate of 1000 bits per second. During transmission the noise introduces errors so that, on the average, 1 in 100 is received incorrectly (a 0 as 1, or 1 as 0). What is the rate of transmission of information? Certainly less than 1000 bits per second since about 1% of the received symbols are incorrect. Our first impulse might be to say the rate is 990 bits per second, merely subtracting the expected number of errors. This is not satisfactory since it fails to take into account the recipient’s lack of knowledge of where the errors occur. We may carry it to an extreme case and suppose the noise so great that the received symbols are entirely independent of the transmitted symbols. The probability of receiving 1 is 1/2 whatever was transmitted and similarly for 0. Then about half of the received symbols are correct due to chance alone, and we would be giving the system credit for transmitting 500 bits per second while actually no information is being transmitted at all. Equally “good” transmission would be obtained by dispensing with the channel entirely and flipping a coin at the receiving point.

He then goes on to calculate the transmission rate, for this case, as being 919 bits per second. That is, if 1% of the bits are flipped, it results in about an 8% loss in the capacity of the channel. In other words, to transmit data correctly in the presence of a 1% error rate, you have to surround it with error-correction codes that bulk it up by about 8% — in practice, by even more than that, since error-correction schemes are not perfectly efficient.
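
For the curious, the 919 figure falls out of the standard formula for the capacity of a binary symmetric channel, C = 1 - H(p), where H(p) = -p log2 p - (1-p) log2(1-p) is the entropy of the error process. Here is a little Python sketch of my own (the names are made up; this is just the arithmetic, not anything from Shannon's paper) that reproduces the number:

    import math

    def binary_entropy(p):
        # Entropy, in bits, of a coin that comes up heads with probability p.
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    symbol_rate = 1000   # symbols per second
    p_error = 0.01       # probability that any given symbol is flipped

    # Binary symmetric channel: each transmitted symbol carries 1 - H(p) bits.
    rate = symbol_rate * (1 - binary_entropy(p_error))
    print(round(rate))   # 919 bits per second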

What is interesting is to consider the implications of this in less formal settings: in news reporting, for instance. A 1% error rate would be very good for a newspaper; even the best magazines, with fact checkers who call around to check details of the articles they publish, can only dream of an error rate as low as 1%. Most of their errors are things that only specialists notice, but they are still errors, and often ones which significantly change the moral of the story; specialists frequently sigh at the inaccuracy of the news coverage of their specialty. But that unattainable 1% error rate would still mean that, at best, one would have to throw out 8% of what is said as being unreliable. That is, if one were interested in perfect truth. People in the news business have long since become inured to the fact that what they print is imperfect, and their main concern is that the errors they make not be of the laughable sort — or at least not laughable to the general population. But if one wants to figure out the world, that is not good enough.

To make things worse, most of the world does not encode information in error-correction schemes to cover for the press’s errors. When anyone does so (and political groups have learned to do so), the scheme they usually use is simple repetition: saying things multiple times, in varying language. That’s quite an inefficient error detection/correction scheme, but it is the only one that most recipients can be expected to decode and that can get through a human news system in the first place: people would look at you quite funny if you tried to speak in Hamming codes.
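
For anyone curious just how inefficient repetition is, here is a toy Python comparison of my own (not anything from Shannon or from the news business): a threefold repetition code spends three channel bits per data bit, while a Hamming(7,4) code spends only seven per four, yet both can correct any single flipped bit per block.

    # Toy illustration: encode by repeating each bit three times, decode by
    # majority vote. Rate 1/3, versus 4/7 for Hamming(7,4), for the same
    # single-error-correcting power per block.
    def repeat3_encode(bits):
        return [b for b in bits for _ in range(3)]

    def repeat3_decode(received):
        # Majority vote over each group of three received bits.
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    coded = repeat3_encode([1, 0, 1])
    coded[4] ^= 1                      # one bit flipped in transit
    print(repeat3_decode(coded))       # [1, 0, 1]: the error is corrected
    print("repetition rate:", 1 / 3, " Hamming(7,4) rate:", 4 / 7)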

For the rest of the information in the world — the stuff not sponsored by any political group — the situation is even worse. In the absence of deliberately added redundancy, one has to use whatever inherent redundancy there is in the information, and mistrust the stuff that isn’t confirmed that way, which is typically more than 90% of it.

This is why reliable sources of information are so valuable: one doesn’t have to regard them with near-fanatical mistrust. It is tempting, once one knows the truth about something, to be liberal in excusing errors in descriptions of it, on the grounds that those descriptions are mostly right. But that is to forget, as Shannon puts it, “the recipient’s lack of knowledge of where the errors occur”, for recipients who are still struggling to find the truth. It may be right to excuse the people who commit those errors, as human beings who perhaps did the best that could be expected of them in the circumstances, but that doesn’t make it right to excuse their work product.


Easy Bread

Bread right out of the oven has a taste that beats most anything that can be bought in a store. Even in stores that sell bread baked on the same day, it has usually been sitting around for hours. But baking one’s own bread is generally a hassle. Bread machines make it much easier, but like all machines, they aren’t quite as easy to deal with as one first imagines. They take up space; they sometimes break; they have to be cleaned. (The one I once owned featured an impeller that had to be dug out of the bottom of each loaf.) Nonstick surfaces make cleaning easier, but don’t entirely eliminate it.

A few years ago, I was sent a pointer to an article by Mark Bittman describing a method for making bread without kneading. It was further billed as a “truly minimalist” bread. But when I tried it, the effort required was not minimal enough for me. There was a lot of rigamarole about flouring the ball of dough, manipulating it two hours prior to baking, and preheating a pot to bake it in. Bittman calls himself “The Minimalist”, but he didn’t minimize this one.

The trick to minimizing it is to bake the loaf in a silicone mold. Stuff doesn’t stick to silicone, so when the loaf is done baking, you just turn the mold upside down and the loaf falls out. No greasing, flouring, or anything else is required for this to happen. I don’t even clean the silicone between loaves; it isn’t left immaculate, but the slight residue doesn’t harm anything.

The minimized recipe uses the same ingredients:

  • 3 cups flour
  • 1/4 teaspoon yeast
  • 1 teaspoon salt
  • about 1.5 cups water

Just about any sort of flour will do; flour without any gluten in it (non-wheat) will produce a dense, crumbly loaf, but that’s not a big deal. The salt can be omitted, or doubled, or whatever; it’s just for taste. Mix the dry ingredients for about a minute (they’re easier to mix dry), then add water, mixing, until the mass is completely wet, but no further. The amount of water needed depends on the type and brand of flour; the number given above is just a rough approximation. There shouldn’t be any mounds of unmixed flour hiding beneath the mixture, nor any parts of the mixture whose surface is dry and which protrude a bit. But don’t add any more water than that, or you’ll get a rather wet loaf. For that matter, even when done right this yields a pretty wet loaf. Not that the wetness matters much; it has no effect on taste and not even much effect on texture.

Anyway, the next step is to wait 12 to 24 hours for it to rise. Or longer, if it’s cold; the original recipe specifies 18 hours at 70 degrees F, but those numbers are not critical. The long rising time is what makes it unnecessary to knead. It also allows for the development of a rich microbial flora, which provides the excellent flavor achieved with this recipe. Or, well, rather, I hope that’s not the case. And I don’t believe it is; the flavor has been quite uniform, which wouldn’t be the case if random germs were providing it. The quarter teaspoon of yeast in the recipe is much less than is normally used, but still, it seems, enough to provide the vast majority of the inoculum. But, uh, best not to spit in the stuff while mixing it, or to mix with your hands. The baking will kill most germs that happened to get in and grow, but some might survive in spore form.

The next step is to transfer the mess to the silicone baking dish, and bake at 375 degrees F (190 C) for an hour or so. Exact timing is not critical. If you like a burned, er, “dark” crust, do as in the original recipe and increase the temperature and decrease the time.

The only part of this exercise that still seems like a hassle is cleaning the mixing bowl. Bread dough is sticky and doesn’t wash off easily. I tried doing the mixing in the baking mold, but that didn’t work well: silicone is thin and floppy, and the mold has square corners unconducive to mixing. Besides, when bubbles form in the mold as the bread rises, they form against the walls, so instead of a smooth crust one gets a crust with bubble holes in it. But perhaps it deserves another try. In any case, even as it is, this is competitive with bread machines for ease: probably somewhat worse, but not that much worse.

Once the bread is out of the oven, wait half an hour or so for the heat to continue penetrating into the interior, finishing up the cooking process.

Then, any part that isn’t eaten right away is best sliced up and put in the freezer, for later microwaving. That preserves most of the right-out-of-the-oven taste. The only trouble with doing that with this loaf is that the slices tend to stick; that’s the downside of having a wet loaf. The way to avoid sticking is to pack the slices in a pessimal fashion: instead of trying to pack them tightly, try to pile them up so as to leave space unused, so that they’re not touching each other very much. They’ll still stick, but generally can be pried apart without much trouble. (Still, people with weak hands might want to just pack the slices in a single layer so that they don’t stick at all.)


Audio sampling rates and the Fourier transform

Christopher Montgomery (“Monty”) recently posted an excellent argument against distributing music in 192 kHz, 24-bit form, as opposed to the usual 44.1 kHz (or 48 kHz), 16-bit form. I think, however, that many of the people who are inclined to doubt this sort of thing are going to doubt it at a much more fundamental level than the level he’s addressed it at. And I don’t just mean the math-phobic; I know I would have doubted it, once. For years, and even after finishing an undergraduate degree in electrical engineering, I wondered whether speaking of signals in terms of their frequency content was really something that could be done as glibly and freely as everyone seemed to assume it could be. It’s an assumption that pervades Monty’s argument — for instance, when he states that “all signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling”. If you don’t believe in speaking of signals in terms of their frequency content, you won’t know what to make of that sentence.

As it happens, the assumption is completely correct, and the glibness and freeness with which people talk of the frequency domain is completely justified; but it originally took some serious proving by mathematicians. To summarize the main results, first of all, the Fourier transform of a signal is unique. When you’ve found one series of sine waves and cosine waves that when added together are equal to your signal, there is no other; you’ve found the only one. (Fourier transforms are usually done in terms of complex exponentials, but when one is dealing with real signals, they all boil down to sines and cosines; the imaginary numbers disappear in the final results.) If you construct a signal from sinusoids of frequencies below 20 kHz, there’s no possibility of someone else analyzing it some other way and finding frequencies higher than that in it — unless, of course, he does it wrong (an ever-present danger).

Also, the Fourier representation is complete: any signal can be exactly represented as a sum of sinusoids (generally an infinite sum of them, or an integral which is the limit of an infinite sum of them). There are no signals out there which defy Fourier analysis, and which might be left out entirely when one speaks of the “frequency content” of a signal. Even signals that look nothing like sine waves can be constructed from sine waves, though in that case it takes more of them to approximate the signal well.
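
As a concrete illustration of that completeness (my own, using the discrete transform a computer can actually evaluate), here are a few lines of Python building a square wave, which looks nothing like a sinusoid, out of nothing but sinusoids; the approximation improves as more terms of its Fourier series are added.

    import numpy as np

    t = np.linspace(0, 1, 2000, endpoint=False)
    square = np.sign(np.sin(2 * np.pi * t))      # one period of a square wave

    def fourier_partial_sum(n_terms):
        # Square-wave Fourier series: (4/pi) * sum over odd k of sin(2*pi*k*t) / k
        k = np.arange(1, 2 * n_terms, 2)
        return (4 / np.pi) * np.sum(np.sin(2 * np.pi * np.outer(k, t)) / k[:, None],
                                    axis=0)

    for n in (1, 10, 100):
        # The mean error shrinks as more sinusoids are included.
        print(n, np.mean(np.abs(fourier_partial_sum(n) - square)))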

But the main thing that makes it possible to be so glib about the frequency domain is that the Fourier transform is orthogonal. (Or in its complex-exponential variants, unitary, which is the corresponding concept for complex numbers.) What it means for a transform to be orthogonal can be illustrated by the example of coordinate transforms in three-dimensional space. In general, a coordinate transform of a three-dimensional object may twist it, bend it, or stretch it, but an orthogonal transform can only rotate it and possibly flip it over to its mirror image. When viewing 3D objects on a computer screen, applying an orthogonal transform just results in looking at the same object from a different angle; it doesn’t fundamentally change the object. At most it might flip the ‘handedness’, changing a right hand into a left hand or vice versa. In the Fourier transform there are not just three numbers (the three coordinates) being transformed but an infinite number of them: one continuous function (the signal) is being transformed into another continuous function (its spectrum); but again, orthogonality means that sizes are preserved. The “size”, in this case, is the total energy of the signal (or its square root — what mathematicians call the L2 norm, and engineers call the root-mean-square). Applying that measure to the signal yields the same result as does applying the same measure to its spectrum. This means that one can speak of the energy in different frequency bands as being something that adds together to give the total energy, just as one speaks of the energy in different time intervals as being something that adds up to give the total energy — which of course is the same whether one adds it up in the time domain or the frequency domain. This also applies, of course, to differences between signals: if you make a change to a signal, the size of the change is the same in the frequency domain as in the time domain. With a transform that was not orthogonal, a small change to the signal might mean a large change in its transform, or vice versa. This would make it much harder to work with the transform; you would constantly have to be looking over your shoulder to make sure that the math was not about to stab you in the back. As it is, it’s a reliable servant that can be taken for granted. As in the case of 3D coordinate transforms, but in a vaguer sense, the Fourier transform is just a different way of looking at the same signal (“looking at it in the frequency domain”), not something that warps or distorts it.
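
The discrete analogue of that energy-preservation property (Parseval’s theorem) is something a computer can check directly. A small numpy sketch of mine, using numpy’s particular FFT scaling convention:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)                  # an arbitrary real "signal"
    X = np.fft.fft(x)                              # its discrete spectrum

    energy_time = np.sum(x ** 2)
    energy_freq = np.sum(np.abs(X) ** 2) / len(x)  # the 1/N accounts for numpy's unnormalized FFT

    print(np.allclose(energy_time, energy_freq))   # True: same energy in either domain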

Engineers these days seem to go mostly by shared experience, in feeling comfortable with the Fourier transform: it hasn’t stabbed any of their fellow-professionals in the back, so it probably won’t do so for them, either. But as a student, I didn’t feel comfortable until I’d seen proofs of the results described above. In general, learning from experience means learning a lot of things the hard way; that just happens not to be so in this particular case: there are no unpleasant surprises lurking.

Now, when trying to use the Fourier transform on a computer, things do get somewhat more complicated, and there can be unpleasant surprises. Computers don’t naturally do the Fourier transform in its continuous-function version; instead they do discrete variants of it. When it comes to those discrete variants, it is possible to feed them a sine wave of a single frequency and get back an analysis saying that it contains not that frequency but all sorts of other frequencies: all you have to do is to make the original sine wave not be periodic on the interval you’re analyzing it on. But that is a practical problem for numerical programmers who want to use the Fourier transform in their algorithms; it’s not a problem with the continuous version of the Fourier transform, in which one always considers the entire signal, rather than chopping it at the beginning and end of some interval. It is that chopping which introduces the spurious frequencies; and in contexts where this results in a practical problem, there are usually ways to solve it, or at least greatly mitigate it; these commonly involve phasing the signal in and out slowly, rather than abruptly chopping it. In any case, it’s a limitation of computers doing Fourier transforms, not a limitation of computers playing audio from digital samples — a process which need not involve the computation of any Fourier transforms.
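
To make the chopping problem concrete, here is a small sketch of my own (not anything Monty discusses): a pure tone that doesn’t complete a whole number of cycles on the analysis interval smears across many frequency bins, and fading the interval in and out with a window (a Hann window here) pushes most of that smearing back where it belongs.

    import numpy as np

    n = 1024
    t = np.arange(n)
    tone = np.sin(2 * np.pi * 10.5 * t / n)    # 10.5 cycles: not periodic on the interval

    def off_peak_energy(spectrum, centre=10, halfwidth=3):
        # Fraction of the spectrum's energy outside the bins nearest the tone.
        total = np.sum(spectrum ** 2)
        near = np.sum(spectrum[centre - halfwidth : centre + halfwidth + 1] ** 2)
        return 1 - near / total

    plain = np.abs(np.fft.rfft(tone))
    windowed = np.abs(np.fft.rfft(tone * np.hanning(n)))

    print(off_peak_energy(plain))      # a noticeable fraction leaks far from the tone
    print(off_peak_energy(windowed))   # with the window, the leakage is far smaller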

Much more could be said about the Fourier transform, of course, but the above are some of the main reasons why it is so useful in such a wide variety of applications (of which audio is just one).

Having explained why sentences like

“All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling”

are meaningful, and not merely some sort of mathematical shell game, here are a few words about Monty’s essay itself. As regards the ability of modern computer audio systems to reproduce everything up to the Nyquist limit, I happen to have been sending sine waves through an audio card recently — and not any kind of fancy audio device, just five-year-old motherboard audio, albeit motherboard audio for which I’d paid a premium of something like $4 over a nearly-equivalent motherboard of the same brand with lesser audio. This particular motherboard audio does 192 kHz sample rates, and I was testing it with sine waves of up to the Nyquist frequency (96 kHz). Graphed in Audacity, which shows signals by drawing straight lines between the sample points, the signals looked very little like a sine wave. But when I looked at the output on an oscilloscope with a much higher sample rate, it was a perfect sine wave. Above 75 kHz, the signal’s amplitude started decreasing, until at 90 kHz it was only about a third of normal; but it still looked like a perfect sine wave. Reproducing a sine wave given only three points per wavelength is something of a trick, but it’s a trick my system can and does pull off, exactly as per Monty’s claims. Accurate reproduction of things only dogs can hear, in case one wants to torture the neighborhood pooch with extremely precise torturing sounds! (Or in my case, in case one wants to do some capacitor ESR testing.)
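
The trick is, in essence, Whittaker-Shannon (sinc) interpolation, which is what the reconstruction filtering in a DAC approximates. A rough Python sketch of my own (a 60 kHz tone at a 192 kHz sample rate, so a bit over three samples per cycle; the evaluation stays away from the ends of the sample block because the ideally infinite interpolation sum has to be truncated):

    import numpy as np

    fs = 192_000.0                   # sample rate, Hz
    f = 60_000.0                     # tone frequency, Hz: just over 3 samples per cycle
    n = np.arange(1000)
    samples = np.sin(2 * np.pi * f * n / fs)

    # Whittaker-Shannon reconstruction on a fine time grid, kept away from the
    # edges so that truncating the sum of sincs doesn't matter much.
    t = np.linspace(400 / fs, 600 / fs, 2000)
    recon = np.array([np.dot(samples, np.sinc(fs * tk - n)) for tk in t])

    # Maximum deviation from the true sine: a small fraction of the unit amplitude.
    print(np.max(np.abs(recon - np.sin(2 * np.pi * f * t))))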

The limits of audio perception are not something where I’ve looked into the literature much, but I have no reason to doubt what Monty says about it. Something I did wonder, after reading his essay, though, was: what about intermodulation distortion in the ear itself? That is, distortion of the same sort that he describes in amplifiers and speakers. Being made of meat, the human ear is not perfectly linear; and pretty much any nonlinearity gives some amount of intermodulation distortion. Unlike in the case of intermodulation distortion in audio equipment, though, this would be natural intermodulation distortion: if, for instance, one heard a violin being played in the same room, one would be hearing whatever intermodulation distortion resulted in the ear from its ultrasonic frequencies; those would thus comprise part of the natural sound of a violin, and reproducing them thus could be useful. Also, nonlinearities can be complicated: any given audio sample might not excite some particular nonlinearities that might nevertheless be excited by a different sort of music. But as the hypothetical language (“could”, “would”) indicates, these are theoretical possibilities, which can be put to rest by appropriate experiments. One such experiment is a test Monty links to, which was “constructed to maximize the possibility of detection by placing the intermodulation products where they’d be most audible” — and which nevertheless found that ultrasonics made no audible difference. I only took note of that sentence on re-reading; but this nonlinearity-in-the-ear idea is what that test was designed to check for.

From poking around at the Hydrogen Audio forums, the explanation for why nonlinearity in the ear doesn’t produce audible lower frequencies seems to be that:

  • Ultrasonics get highly attenuated in the outer parts of the ear, before they could do much in the way of intermodulation distortion. (It’s quite common for higher frequencies to get attenuated more, even in air; this is why a nearby explosion is heard as a “crack”, but a far-off one is more of a boom.)
  • Intermodulation distortion then imposes a further attenuation, the spurious frequencies introduced by distortion having much less energy than the original frequencies.
  • Generally in music the ultrasonic parts are at a lower volume than the audible parts to begin with.

Multiply these three effects together, or even just the first two of them, and perhaps one always gets something too small to be heard. In any case, as Monty states, it’s impossible to absolutely prove that nobody can hear ultrasonics even in the most specially-constructed audio tracks. But when one is considering this sort of thing as a commercial proposition, the question is not whether exceptional freaks might exist, but what the averages are.
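
To put toy numbers on that multiplication (these are illustrative guesses of mine, not measurements from anywhere): attenuations applied in series multiply, which is to say their decibel figures add, so even individually modest reductions stack up quickly.

    # Illustrative-only figures, not measurements: series attenuations add in decibels.
    attenuation_db = {
        "ultrasonic content quieter than the audible content to begin with": 20,
        "attenuation of ultrasonics in the outer ear": 40,
        "intermodulation products weaker than the tones that produce them": 40,
    }
    total = sum(attenuation_db.values())
    print(total, "dB below the audible content, with these made-up numbers")  # 100 dB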

(Update: Monty tells me that contrary to what I’d originally stated above, “by most measures the ear is quite linear”, and “exhibits low harmonic distortion figures and so virtually no intermodulation.” The text above has been corrected accordingly. I’d seen references to nonlinearity in the hair cells; and it’d be hard to avoid it in neurons; but those are after the frequencies have been sorted out.)


Tocqueville

The book Democracy in America, by Alexis de Tocqueville, is widely recommended to those wishing to know about the US political system. Personally, I tried to read it at one point, but found it boring, and only got through fifty pages or so. Yet I devoured from cover to cover Tocqueville’s later book The Old Regime and the Revolution, about pre-revolutionary France. It’s a much better book, in a lot of ways.

Tocqueville was a young man when he wrote Democracy in America, after spending eighteen months traveling through America and talking to the best-informed people he could find. Though probably wiser at age 26 (when he started his journey) than 99.9% of people twice that age, he was still no match for his later self; in the later book, he alludes several times to errors of youthful enthusiasm that he committed in the earlier one. Also, in the earlier book, he was writing about a foreign country, not about his own. The language and mode of expression were not his native ones; fine nuances and things that were left unsaid must have escaped him in some cases. His later book, besides being about his own country, was researched in much greater detail; he delved deeply into formerly-private government records, much as modern historical researchers do. The book is heavily footnoted.

But it’s not just that it’s a better book in the abstract; it’s a lot more relevant than the earlier book to the US political system as it is today. In writing of the old regime of France, he was writing of a formerly decentralized system that had gradually, over the century or so prior to the Revolution, turned into a centralized one — one in which bureaucrats from Paris (or answering to Paris) poked into all manner of details of people’s lives. The change had been little remarked, since the old institutions of local control had been left intact, merely bypassed. Before Tocqueville’s book appeared, Frenchmen had been in the habit of speaking of centralization as one of the benefits of the Revolution, but he showed that that particular change was more in appearance than in reality.

That is the resemblance in the large; in details, there are also a surprising number of resemblances. The courts of the old regime, for instance, he describes as increasingly interjecting themselves into politics, yet on the other hand increasingly abandoning their role as legal arbiters to special administrative courts. (That this is a resemblance may come as a surprise to readers who are unfamiliar with the number of special court systems established in the US government today, and the number of “administrative hearings” of various sorts that are conducted. Tax courts, which resolve matters relating to the IRS, are perhaps the most widely known; but even the National Transportation Safety Board has its own courts with its own judges.) Tocqueville also describes the middle classes, in pre-revolutionary society, as being divided into squabbling groups, each trying to chisel favors out of the government in its own way, but more similar to one another than they realized.

Of course there are unsurprising resemblances too, such as that the old regime was a system that, to support itself, levied an increasing number and variety of taxes, and nevertheless was going bankrupt. But those are not what make the book interesting.

The book is long out of copyright, and thus available for free. Those who know French will of course prefer the original rather than a translation.


Taking Advantage of the Placebo Effect

The placebo effect is well known for interfering with medical experiments. It’s not just that if you tell patients that a drug is going to have an effect they tend to believe it has had that effect. It’s that it tends to actually have that effect, when measured by objective measures such as blood tests. Thus the use of double-blinding in experiments, where not only the patients but also the doctors dealing with them have no idea whether a given patient has received the active drug or a placebo. Something about wishing really does make it so, somewhat, when it comes to health; the placebo effect is not just in patients’ minds but also in their bodies.

But besides the nuisances it causes to experimenters, the placebo effect can also be taken advantage of. Doctors sometimes prescribe drugs as placebos; since it genuinely helps the patients, it’s hard to argue with that practice. It’s deception, but in a good cause. (The deception had better not be too obvious, though, or it’ll do no good. Indeed, if the doctor himself believes in the placebo, that’s best.) Christian Scientists use nothing but the placebo effect. But what’s a skeptic to do, to get some of this goodness? If a doctor prescribes something for me, I’m going to look it up on the net and find out how it works; if it doesn’t, that’ll be apparent, and the fact that the doctor prescribed it will not impress me. As for faith healers, starting by insulting one’s own intelligence doesn’t seem like the way to proceed in harnessing the powers of the mind. So what to do?

I believe in the placebo effect directly. I cut out the middlemen, and the foolishness, and just take the thing straight. The placebo effect is going to help me because I know it will; because it is an established principle of medical science that it will if I believe it will; and I do so believe. Whatever boost my mind can give to my body, in getting better from whatever ailments might afflict me (not particularly much, at the moment), it’ll give.


Deepwater Horizon report

Back in June (this blog does not in any way aim to be a timely reporter of news), Transocean released their report on the Deepwater Horizon disaster. I found it interesting, and read most of it; it seems like primarily an honest effort to get to the bottom of the disaster, not an exercise in blame-shifting and ass-covering. (I am not involved in the industry, so might be being a bit naive here, but at least have the miserable excuse that I am unbiased.) There is only one place, described below, where I noticed the report getting weaselly. Otherwise, the bad decisions were quite plainly BP’s, both as a matter of law (they being the “operator” who was in control) and as a matter of fact; so Transocean didn’t need to indulge in evasiveness, but could just plainly state what happened, and what should have been done better.

The main thing I was interested in was what had happened with the blowout preventer. Back during the disaster, there was all sorts of speculation about it. After dragging the 150-ton device up from the deeps, they indeed have figured out what happened — and it was none of the scenarios regarding hydraulic failure or electrical failure that were voiced in the press. All the mechanics of the thing had worked: batteries provided current; valves opened; hydraulic accumulators provided hydraulic power; rams closed and were locked closed by massive steel wedges. The engineering seems to have been, throughout, the sort of thing that one does if one wants a device to work very reliably. There are minor questions regarding some pieces of it (one relay in one of the dual-redundant electrical boxes seems to have been goosey somehow), but those weren’t why it failed. Why it failed, to summarize the whole sequence of things that went wrong, is that it was a blowout preventer, but what they needed was a blowout interrupter. The fast, high-pressure flow through the device, carrying not just fluids but pieces of abrasive rock, was something it had never been designed or tested to control. The report comes with a good video showing the whole sequence of failures, which does a better job of describing it than the report does, or than I could here — so I won’t try.

The place where I noticed the report getting weaselly was in the following language:

The investigation team is aware that some sources suggest that the various activities during final displacement constituted inappropriate “simultaneous operations,” which may have interfered with the monitoring of the well. Tasks such as repairing a relief valve or dumping a trip tank commonly are performed on an offshore rig and would be considered normal in the course of operations — not simultaneous operations. … The investigation team determined that after the fluid transfers to the Bankston were completed at 5:10 p.m., the activities of the drill crew were completed in a sequential manner, and “simultaneous operations” were not present.

As to what exactly constitutes “simultaneous operations”, I’ll leave that to the lawyers. My sympathy goes out to the people in the industry who must labor under rules defined so imprecisely. Hopefully, on a fifty-thousand-ton drilling rig with 150 people on board, at least some of them are allowed to walk and chew gum at the same time. But whatever the rules might be, the physics issue here is that the most reliable way of monitoring flow out of the well was by measuring the levels in the tanks (the “mud pits”) it was flowing into; there were other flow sensors on board, but none nearly as accurate. But in this case, at the same time that mud was flowing from the well into mud pits, it was being pumped from them overboard into the auxiliary ship Damon B. Bankston. So the operators couldn’t simply determine the amount of fluid coming out of the well by looking at how much had accumulated in the mud pits.

This sort of thing was a large part of why the disaster occurred: if they’d noticed the well “kicking” earlier, by observing that it was sending out a lot more fluid than they were pumping in, they’d have been able to shut it down before the flow got too great for the blowout preventer to stop, and before gas emerged onto the deck, exploded, and turned the rig into an inferno. Since this part of the rig’s operations was largely or entirely the responsibility of Transocean, it is no wonder that their report gets a bit weaselly — which is not to suggest that anything stated is untrue; indeed, their defense on grounds of timing is a good one. The disaster struck much later in the day: gas exploded onto the deck at 9:45 p.m., after having started flowing into the bottom of the well at a time estimated as “sometime between 8:38 p.m. and 8:52 p.m.”. So probably no serious discrepancies in flow happened during the period before 5:10 p.m. when they were pumping mud out to the Bankston. (As to why they didn’t notice the later discrepancies, the investigation was hampered by the fact that most or all of the people who should have noticed died in the disaster.)

Still, even though it wasn’t the cause of the disaster, not being able to monitor flow from the well was undesirable. At first glance, this seems to be a case where doing things right would impose serious delays, from doing things consecutively rather than simultaneously. But on consideration, there seems to be a way, in this sort of situation, to accurately monitor the fluid volume coming from the well while still simultaneously transferring it overboard. That would be to direct fluid coming out of the well to a mud pit that wasn’t currently being emptied, then when that pit filled, to switch the flow from the well to another mud pit and start emptying the first pit, alternating between the two (or more) pits as necessary. That way, the volume coming from the well could be accurately calculated by measuring levels in the pits. It would mean a bit more activity (switching of valves and pumps), but little more in the way of costs. The report makes no mention of this as a possible alternative; perhaps they didn’t think of it, or perhaps there was some stupid little reason (involving, say, details of pipes and valves, or of control software) that it wouldn’t have been feasible. But there don’t seem to have been any big reasons: the rig had more than enough mud pits, and enough valves and pumps. As for the control software, with forethought they could even add a feature to do this procedure automatically, switching flow between pits and totalling up the rises in levels of the active pit(s) in order to get the total flow, then displaying that for the operator rather than forcing him to do the arithmetic.
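
To be clear about the bookkeeping being proposed (this is my own sketch, with invented names and numbers, not anything from the report or the rig’s actual control software): only ever route well returns to a pit that isn’t being pumped out, and credit each rise in the active pit’s level to a running total of returns, which can then be compared against the volume pumped downhole.

    # Toy sketch of the alternating-pit bookkeeping; names and numbers are invented.
    class ReturnTotaliser:
        def __init__(self):
            self.active = None            # pit currently receiving well returns
            self.last_level = None        # its level at the last reading
            self.total_returns = 0.0      # running total of volume returned from the well

        def switch_to(self, pit, current_level):
            # Route returns to a pit that is not currently being emptied.
            self.active = pit
            self.last_level = current_level

        def reading(self, level):
            # Each gauge reading of the active pit adds its rise to the total.
            self.total_returns += level - self.last_level
            self.last_level = level

    t = ReturnTotaliser()
    t.switch_to("pit 1", 100.0)   # pit 2 is meanwhile being pumped to the Bankston
    t.reading(130.0)              # 30 bbl of returns so far
    t.switch_to("pit 2", 40.0)    # now empty pit 1; take returns in pit 2
    t.reading(70.0)               # another 30 bbl
    print(t.total_returns)        # 60.0, to be compared against the volume pumped in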

The primary thing that went wrong, though, was the cement job at the bottom of the hole. The investigations found so many things done badly about the cement job that it’s hard to tell which of them was actually responsible for the failure. To pick just one error, they tried to leave drilling mud below the cement while it cured, with the drilling mud being lower density (14.17 pounds per gallon) than the cement (16.74 ppg), and with no barrier separating the two fluids, just a “reamer shoe” with an open orifice of about an inch and a half in diameter (to judge from the diagrams). How they could possibly have thought this would succeed is unclear: when you put a heavier fluid on top of a lighter fluid, they naturally tend to swap places. And in the place the cement would have migrated to (the 55-foot-long “rat hole” under the end of the casing), it would have been of no use at all. It wasn’t like the cement was particularly resistant to flowing (the report quotes its shear strength at 2 lbf/100 ft²), or like it set particularly fast (the report speaks of setting times in hours). Also, as it dribbled out that hole, the mud that came in to replace it would then have proceeded to bubble up to the top of the cement column. And that was the critical piece of cement that failed: there was also cement outside the casing, which had its own issues; but in the disaster, the rogue flow came up the inside. With mistakes on this level (another was to make foamed cement with one of the ingredients being an anti-foaming additive), it’s not a question of just saying “be more careful next time”; people need to lose their jobs, if they haven’t already — and not just the people who originated these particular mistakes, but also their supervisors. Increased government regulation, as per the usual knee-jerk response, can’t fix a lack of clue in the industry itself.