Ammonium nitrate in airbags? Are you out of your minds?

Most car recalls are pretty tame. But the recent Takata airbag recall is not one of them. It’s not your ordinary situation where something might malfunction in a mild way — say, a wire shorting out, which might lead to a fire, which in turn might injure someone who handled the situation wrong. With these airbags, the risk is that when set off, they might explode and send shrapnel into you.

A recent article about this stated that Takata hasn’t yet found the root cause of the failures; it went on to say that, according to Takata, “the ammonium nitrate used in the airbags was safe and stable”.

Wait, what? Ammonium nitrate, in airbags? Did I read that wrong? Did the news site get it wrong? No; checking, plenty of other sources confirm.

Well, they may not know what the root cause of their airbag malfunctions is, but I do: they used ammonium nitrate. Ammonium nitrate is hygroscopic. It’s “stable” in the sense that it won’t decompose under normal circumstances; but it absorbs water from the atmosphere and, above a certain humidity, turns itself into a little puddle. Then as it dries out, it recrystallizes into a different form. Presumably they tried to seal water out of the airbags; but seals often fail over time, or are flawed from the start; and lots of materials (like plastic films) which seem impervious to water actually let it slowly diffuse through. Once moisture has gotten in, this process (absorbing water and turning into a liquid, then drying out and recrystallizing) would repeat itself every day, due to the humidity dropping during the day then rising at night. (With a constant amount of moisture present, humidity falls when the temperature rises.)
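The daily humidity swing is easy to quantify. Here is a back-of-the-envelope sketch in Python, using the Magnus approximation for saturation vapor pressure; the specific overnight temperature and the roughly-60% room-temperature deliquescence threshold for ammonium nitrate are assumptions for illustration, not figures from the recall reports:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation: saturation vapor pressure in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(vapor_pressure_hpa, temp_c):
    """Relative humidity (%) for a fixed absolute moisture content."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure(temp_c)

# Hold the absolute moisture fixed: say the air inside reached
# saturation at 20 C overnight.
e = saturation_vapor_pressure(20.0)

# Then let the day warm up, with no moisture added or removed.
for temp_c in (20, 25, 30, 35):
    print(f"{temp_c} C: {relative_humidity(e, temp_c):.0f}% RH")
```

With no change in the amount of water present, a 10–15 degree daytime warming drags the relative humidity from 100% down through ammonium nitrate’s deliquescence point, and the nightly cooling brings it back up — one dissolve-and-recrystallize cycle per day.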

In the airbags, they must have mixed the ammonium nitrate with something else, as pure ammonium nitrate is not a suitable propellant. And the exact nature of the mixture must have been important. The more finely divided the material, the faster the boom. The changing shape of the ammonium nitrate must have played havoc with whatever they’d done to control the speed of the boom.

I don’t know how they could ever have thought this would work. If they didn’t get problems with exploding too fast, they’d have gotten problems with exploding too slow. (And I wouldn’t be surprised if the latter also sometimes turned out to be a problem with these airbags.) But ammonium nitrate is among the cheapest of materials, which is presumably why they tried to make it work.

Still, you just can’t field millions of devices and expect to keep moisture out of all of them. On a laboratory scale, that might work; out in the world, subjected to all sorts of abuse and to the forces of aging, it never will.

(Most of this was originally posted as a comment to the article linked above.)

Addendum (Jan 17, 2015): On further reflection, there is one technology for sealing out water that has been widely and successfully fielded, namely enclosure in a glass capsule. Vacuum tubes generally work fine even after sitting in storage for decades; and they were produced by the billions and sent all over the globe. Their technology includes, of course, electrical connections through the glass, as would be necessary here. Having airbag propellant sealed inside glass might be a bit tricky, as the flying glass shards would have to be filtered out before they reached the bag itself; but that could be done. My guess is that the expense of such an assembly would eliminate ammonium nitrate’s cost advantage, though.

Where did all these silly “similar to”s come from?

Something I’ve noticed a fair bit, recently, has been the use of the phrase “similar to” at the start of a sentence, where one would ordinarily use “like”. Say, instead of writing

Like dogs, cats have four legs.

someone would write

Similar to dogs, cats have four legs.

When I first encountered this usage, I parsed it wrong — meaning that I parsed it by the normal rules of the language, where the leading clause is a parenthetical remark that cats are similar to dogs. But as I ran into more such instances, I realized that such sentences are never meant to be parsed that way — that “similar to” is just being used as a synonym for “like”. Under the rules for “like”, the sentence is not saying that cats and dogs are similar, just that they share the property of having four legs.

Well, okay; if people want to extend the rules for “like” to “similar to”, who am I to stop them? Language changes, sometimes for the better and sometimes for the worse. But it leads to the question: what is it about “like” that is causing people to avoid it?

I have several theories:

  1. “Like” is something that, like, teenage girls say, and so is inappropriate for pompous, pedantic writing — which is commonly where I’ve seen these strange “similar to”s. In particular, I’ve seen a lot of them in formal medical and biology articles. In those fields, there are a lot of women who don’t want to seem like teenage girls in their writing. (Not that they are alone in this strange usage; males have picked it up too.)

  2. “Like” is too short and simple, and inappropriate for this era of obfuscation, so the same sorts of people who write “utilize” instead of “use” write “similar to” instead of “like”.

  3. People don’t really know where to use “like” any more, as opposed to alternatives such as “as with” or “as in”, so they just use “similar to” whenever any of them is called for, in the hope that it will do. To illustrate this distinction, the sentence

    Like everything else, the more practice you have the better you can become.

    should really be

    As with everything else, the more practice you have the better you can become.

    since “everything else” isn’t like “more practice”, “you”, or any other subpart of the sentence — not even in some particular way, as in the first example, where cats and dogs both have four legs. But it seems like some people, vaguely sensing that “like” isn’t quite the right word, would make it even worse, by writing

    Similar to everything else, the more practice you have the better you can become.

    (Not that I’ve seen that particular sentence in the wild, but I’ve seen analogous ones.) Unlike the simple substitution of “similar to” for “like”, this sort of muddling actually subtracts information as compared to a proper phrasing, so is substantially more objectionable.

These three theories are not mutually exclusive.

Unknown unknowns

When Donald Rumsfeld came out with his line about there being “unknown unknowns”, a lot of people laughed, and in response his defenders sneered at the laughers. But I didn’t see on either side a real appreciation of the phrase — indeed, I still haven’t, from anyone.

These are unknowns.

They are not the “known unknowns”, which “we know we don’t know”.

Instead they are things which “we don’t know we don’t know”.

So these were things he (and others) thought he knew, but he didn’t know — in simpler words, things he was wrong about.

This makes Rumsfeld’s line one of the most unusual things said by a politician in recent memory: an admission of error. Not just that he had been wrong in the past — as in the line, which politicians hate, but are sometimes forced into, “yes, that was a mistake, but now I know better”. This, though presented confusingly, was an even rarer admission: that he was wrong in the present and going to be wrong in the future. If he’d wanted not to obfuscate but to put it dramatically, he could have turned one of Shakespeare’s lines against himself, saying “There are more things in heaven and earth than are dreamt of in my philosophy”.

Which indeed turned out to be the case.

Why the immune system is so complicated

Trying to understand the immune system can seem like a neverending task. There are tens of different varieties or subvarieties of immune system cells, with new subvarieties being discovered every so often. For sending messages between those cells, there are tens (or is it hundreds?) of signaling molecules (“cytokines”, among others). A signaling molecule that turns up one part of the immune system may turn down another, as in (but almost certainly not limited to) the “Th1” versus “Th2” concept, itself a not very precise notion. There are also homeostatic loops in which the body reacts to its own reactions, damping an immune response when it has gone on for too long and threatens to be more damaging than it is worth.

Such subtleties have not propagated to popular culture, where various substances are described as “boosting the immune system”, with no qualification as to which part of the system is being boosted or how long that boost might last. But they are well known to specialists — who, themselves, will be the first to admit that even they don’t fully understand the system and that it needs more study. The immune system is often blamed for disease, but although it is not surprising that a complicated system might malfunction, still this is generally a diagnosis of exclusion, a sort of thing that Darwin had a rule to avoid believing: a disease is labeled “autoimmune” because nobody has found a causative microbe that is goading the immune system on, not because anyone has proven that there is no such microbe. Indeed, with such diseases, though doctors commonly profess certainty about autoimmunity being the root cause, there is usually in the scientific literature a constant trickle of attempts to blame them on one microbe or another. The one thing that is completely clear about such diseases is that whatever immune system activity is going on isn’t curing the patient, and is causing distress to him or her. The complexity of the system makes other conclusions uncertain.

So where does all this Rube Goldberg action come from? It’s tempting to blame evolution, and the accumulation of cruft in the genome, but evolution can be quite good at simplifying when simplicity is actually optimal. We have only one backbone in our body, not five sort-of-parallel ones all trying to combine to support us. So there must be something optimal here about complexity, and when considered it’s obvious: if we could understand the immune system easily, so could microbes, and so they could subvert it easily. Indeed, it seems like whenever I read about the workings of any well-studied human pathogen, those workings include at least one way of eluding, deceiving, or sabotaging the immune system, and often two or three of them. Germs don’t seem to qualify as human pathogens, in the eyes of doctors, unless they have such a way; otherwise they are just one of the “harmless” background microbes which the immune system usually deals with so efficiently that we don’t even know that they are trying to eat us (though they can still be harmful in high doses). Yet even when a germ has three different ways of eluding the immune system, that doesn’t make it 100% deadly; most of the time the immune system can still eventually get it under control, using a fourth (and maybe a fifth and a sixth) mechanism in its arsenal.

This situation differs greatly from the situation with computers, where the simplest mechanisms to counter computer viruses and worms are commonly the best. With computers, you can make, in circuitry or with the aid of circuitry, a separate protected area which can’t be sabotaged. In wetware everything is swimming in the same soup: both microbes and immune system can do anything to each other that biochemistry allows, which is quite a lot. Any signaling molecule used by the immune system can be detected by microbes, allowing them to know what the system is doing, or can be synthesized by them, causing the system to do the wrong thing.

With computers, countering malware is mostly a question of how paranoid you are in letting information into the protected area. In practice the standard is often pretty permissive, but that is a matter of convenience — of programmers cutting corners to ship products fast, and of eliminating barriers that would inconvenience users. But then when customers suffer from security holes, programmers change course and get more serious about security. For those trying to make the best of this unpleasant tradeoff, simplicity is a good guiding light: when things are simple to program it lessens the temptation to cut corners; and where barriers must be inserted that inconvenience users, simplicity makes it possible to explain why those barriers are there.

People try to do some of the things in computer security that the immune system does, but it doesn’t work well. Antivirus products are the prime example of this. Like the immune system, they try to recognize hostile intruders yet without any really definitive way of doing so. The result is that they spend so much effort searching that they often noticeably slow down machines, and that they sometimes interfere with legitimate activities — sometimes openly and obnoxiously objecting, and sometimes insidiously sabotaging. And like the immune system, they are themselves subject to subversion: a virus can alter the antivirus program.

In computing, this qualifies as a big mess, which many people choose to avoid entirely. In wetware, this sort of thing is the best we’ve got. As big creatures, we can afford a big mess of complexity; microbes don’t have the genome size to understand our immune system — or, to speak more precisely, to react as if they understood it. They can adopt the occasional dodge, but a full understanding, such as would be needed to take thorough control of the system and use it for their own purposes, is beyond them. For microbes to evolve to expand their genomes and get more complicated would go against their basic life strategy of being fast breeders who are small and simple. Also, it wouldn’t just be a matter of learning one host species’s immune system, but rather that of all their hosts. Most microbes can live off any one of a number of host species, which is a great advantage to them, since when they leave one animal the next potential victim that they encounter is likely to be of a different species. And, although immune systems of different species are similar, they are not identical, so learning how to deal with a variety of animals’ immune systems is harder than learning how to deal with just one.

Or, to view things another way, microbes don’t need to get more complicated: they’re already doing so well in the struggle for existence that doing a bit better wouldn’t provide them with much evolutionary advantage.

As I hope has been apparent, when writing of microbes “understanding” the immune system, I’m not referring to an intellectual understanding but an operational understanding. An intellectual understanding is something that is possessed by a programmer who writes code to model a system; an operational understanding is something that is possessed by the code itself. Not that a microbe would do this digitally, of course; any model they might have of the immune system would be analog in nature, somewhat like the old analog computers. But in those, to have a working model of a system with N variables, you needed a computer with at least N amplifiers. To model the immune system in this fashion would mean making some sort of biochemical analog which had as many different working parts as the immune system does. By boosting the number of working parts, we put this task out of the reach of microbes — at the cost of making the system annoyingly complicated to its human students.

(Update, August 2019: This argument can also help explain why brain chemistry is so complicated: so that parasites can’t manipulate it like they do to insects, whose brains are much simpler. For more details than I could provide, see this blog post at Slate Star Codex for an entertaining review of the paper Invisible designers: Brain evolution through the lens of parasite manipulation by Marco del Giudice, itself quite a readable paper.)

So that’s how they really do Tempest

One of the recent Snowden revelations was a catalog of spying items that the NSA’s “Tailored Access Operations” unit had for breaking into bad guys’ computers. Most of the items weren’t particularly surprising. We already know that since they can’t break cryptography, they try to break into endpoints, where the plaintext lives — and even if we hadn’t known that from recent revelations, it makes complete sense for them to operate that way. What was surprising was the Tempest stuff.

To explain a bit, Tempest is the code word for spying on people’s computers via unintentional electronic emanations. A computer monitor, for instance, is driven by a high-frequency signal which more or less broadcasts whatever is being shown on the monitor. If thoroughly shielded it wouldn’t be broadcasting, but it never is thoroughly shielded, except in special Tempest-rated equipment such as is sold to the various agencies of the federal government that worry about such things. And the broadcast is repeated regularly, 60 times a second (at least that’s the usual refresh rate these days), a piece of redundancy which makes the signal easier to retrieve. Old-fashioned CRTs amplified this signal to high voltage to shoot it through an electron gun, but as Markus Kuhn has found, even modern flat panel displays can produce decipherable emanations.
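That redundancy can be made concrete: if the same frame is broadcast 60 times a second, a receiver can average many captures, and uncorrelated noise shrinks as the square root of the number of frames averaged. A toy numpy illustration — the “frame” here is just a stand-in waveform, and the noise level is chosen arbitrarily to start well below the signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the signal radiated during one screen refresh.
frame = np.sin(np.linspace(0, 20 * np.pi, 1000))
noise_sigma = 5.0  # noise much stronger than the signal

def snr_after_averaging(n_frames):
    """Average n_frames noisy captures of the same frame; return the SNR."""
    noisy = frame + rng.normal(0.0, noise_sigma, (n_frames, frame.size))
    averaged = noisy.mean(axis=0)
    residual_noise = (averaged - frame).std()
    return frame.std() / residual_noise

print(snr_after_averaging(1))    # signal buried in noise
print(snr_after_averaging(100))  # roughly sqrt(100) = 10x better
```

One hundred averaged frames is well under two seconds of capture at 60 Hz, which is part of why a repeating video signal is so much friendlier to this kind of eavesdropping than a one-shot transmission.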

But how well Tempest worked in practice was never quite clear to me. Okay, various demos have shown it to work in some cases. But whether those cases are typical has been unclear; monitors no doubt vary in how well their shielding is designed and built. And even if you can do a good job picking up the signal from one monitor, in practice there’ll probably be tens or hundreds of monitors within range; what can you do with the resulting mess of signals stomping all over each other? So it was no surprise reading, a while ago, in the book Security Engineering, by Kuhn’s Ph.D. advisor Ross Anderson, that

“Despite the hype with which the Tempest industry maintained itself during the Cold War, there is a growing scepticism about whether any actual Tempest attacks had ever been mounted by foreign agents in the USA.”

and:

“Having been driven around an English town looking for Tempest signals, I can testify that doing such attacks is much harder in practice than it might seem in theory…”

What was a surprise was looking at the recently-leaked NSA catalog and seeing an entry for a “radar”. Radar? What is this, for tracking airplanes? “Primary uses include VAGRANT and DROPMIRE collection”. Googling those, they turn out to be Tempest stuff, the former being on computer screens and the latter on printers.

So that’s how the pros do it: not just by passively listening for emanations, but by making emanations. This unit, the “CTX4000”, broadcasts at a frequency adjustable from 1 to 2 gigahertz, and listens for return signals with a bandwidth of up to 45 megahertz. (As the catalog states, this unit is obsolete and, in 2008, was already scheduled for replacement; modern flat-panel displays are driven by signals of higher bandwidth than that.) Power levels are “up to 2W using the internal amplifier; external amplifiers make it possible to go up to 1kW”. The carrier wave is broadcast continuously.

But this calls for another bit of explanation, as to why this would work. Well, to start with the simple part, the use of a “radar” makes it possible to pick out the device you want to spy on: point the antennas at it, and not at all the other devices within range. Antennas at these sorts of frequencies can be quite directional without being too large. The more complicated part, at least to the uninitiated, is the modulation: why would you get back a signal of interest modulated on to the carrier wave?

Well, you might not. If all the materials involved are “linear”, you won’t; if frequencies A and B are present in a linear device, each might be attenuated or amplified, and/or phase shifted, but no new frequencies will be generated. Linear devices include wires, resistors, capacitors, and inductors — at least the ideal versions of all those. (Real versions are of course subtly nonlinear, but probably not usefully enough so for the present purpose.) But silicon devices (transistors, diodes, and such) are all nonlinear — though for small signals, they can be more-or-less linear; thus the utility of high “radar” power, to force them into their nonlinear regimes. Going through a nonlinear device, signals “mix”; in radio technology, the ideal “mixer” is a multiplier, but in practice one usually uses some cruder mixer which does something very far from an exact multiplication. When you pass frequencies A and B through an ideal two-input mixer, you get out the frequencies A+B and A-B. That’s for an exact multiplication of A by B; if the mixer is cruder, you also get frequencies such as A, B, 2A, 2B, A+2B, A+3B, 2A-2B, and so forth.
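The ideal-mixer arithmetic is just the product-to-sum trigonometric identity, cos A · cos B = ½[cos(A−B) + cos(A+B)], and is easy to demonstrate numerically. The frequencies below are arbitrary stand-ins, scaled far down from the gigahertz range so the whole thing fits in a short FFT:

```python
import numpy as np

fs = 10_000                    # sample rate, Hz
n = fs                         # one second of samples
t = np.arange(n) / fs
f_a, f_b = 300.0, 1_000.0      # stand-ins for the signal and the carrier

# An ideal mixer is a multiplier.
mixed = np.cos(2 * np.pi * f_a * t) * np.cos(2 * np.pi * f_b * t)

# The spectrum of the product contains only the sum and difference
# frequencies -- neither A nor B appears on its own.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks = sorted(freqs[spectrum > spectrum.max() / 2])
print(peaks)  # two peaks: B - A (700 Hz) and B + A (1300 Hz)
```

A cruder, non-multiplicative nonlinearity — squaring, clipping, a diode curve — would add the harmonics and higher-order products mentioned above, but the A+B and A−B terms are what the eavesdropper is after.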

In the case of a spy beam, the nonlinear “mixer” might be the transmit or receive transistor at one end of the wire connecting the computer to the video monitor. Frequency A might be somewhere in the signal driving the screen (which perhaps spans the frequency range of zero to 30 MHz), and frequency B the spy beam (perhaps 1.5 GHz), picked up by that same wire acting as an antenna. Then the mixer generates a modulated version of the spy beam (1.5 GHz +/- 30 MHz), which will then get re-radiated and picked up by the spy’s antenna. As for the unwanted mix frequencies, many of them are outside the frequency ranges which are being received (e.g. 2A+2B, which is about 3 GHz). As for the rest, one can try to filter them out somehow, or one can just hope that they generate a low enough level of noise that the resulting signal is still decipherable. This being spy work, one doesn’t need a perfect image of the screen being spied on or of the page printed by the printer being spied on. It’s enough if the text is readable; it doesn’t have to look pretty.

If this isn’t good enough, you might have to sneak into the building and implant something. The device codenamed RAGEMASTER, perhaps, at a unit cost of $30. They recommend putting it onto the red video line; “it was found that, empirically, this provides the best video return and cleanest readout of the monitor contents”. In the photo, it seems to be a tiny little device that won’t even put a bulge in the cable where implanted: just some well-chosen nonlinearity, probably in silicon. Presumably in practice you slit the cable insulation to insert it, then somehow seal the slit closed.

Or you might also sneak in if you want other services, such as a microphone in the room. The catalog has microphones which insert into cabling and are readable via “radar”. It also has devices which can be implanted on low-frequency channels such as keyboards, to make reading those via “radar” feasible.

In any case, this system is quite easily detected by the intended victim, since he is being continuously illuminated by a microwave signal at rather substantial power. During the Cold War, the US embassy in Moscow frequently complained to the Russians about being irradiated with microwave beams. The NSA probably isn’t so gauche as to use power levels that actually harm the victim personally (the one-kilowatt option would be for use at a great distance, not for frying people at close range), but even the lower power levels furnish more than enough signal for standard direction-finding techniques, which can track the spy beam back to its source. But using a sledgehammer on the source seems ill-advised, it (in its latest version, “PHOTOANGLO”) being government property with a price tag of “$40k (planned)” and likely twice that after the usual cost overruns.

The only caveat about the easy detectability would be if they were using spread spectrum techniques; spread spectrum stuff can be hard to detect. A cryptographically-spread signal can be below the noise floor and undetectable to people who don’t know the cryptographic key, and yet still can convey useful information to someone with the cryptographic key. But while that’s enough to make communications invisible, it can’t necessarily make radar invisible. With radar, the power level at the target has to be high enough that even faint echoes of it are detectable back at the radar unit. Also, high-frequency spread-spectrum stuff is hard to design and build, and my guess is that if they were using such techniques they’d be boasting about it in the catalog. So these NSA “radars” are probably easily detectable: just wave around a frequency counter, and it’ll tell you what frequency you are being illuminated at.
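The asymmetry that makes the beam so detectable is quantitative: the victim’s frequency counter faces only a one-way path loss, while the spy’s receiver must overcome the round trip (plus however inefficiently the target re-radiates). A rough free-space figure, with an assumed 100-meter standoff and ignoring antenna gains:

```python
import math

def free_space_path_loss_db(freq_hz, dist_m):
    """One-way free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Assumed scenario: a 1.5 GHz beam aimed at a target 100 m away.
one_way = free_space_path_loss_db(1.5e9, 100.0)
print(f"one-way loss: {one_way:.0f} dB")    # ~76 dB: what the victim's detector sees
print(f"round trip:  {2 * one_way:.0f} dB") # ~152 dB: what the spy must recover from
```

Every decibel the spy adds to climb out of that 152 dB hole lands, undiminished, on the victim’s side of the 76 dB link — which is why the illumination is so hard to hide.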

At any rate, this NSA Tempest stuff is too interesting for it to have really been a good idea to leak it. It doesn’t relate to dragnet surveillance of the whole population: the “radar” has to be pointed at one particular target, and someone has to get close to the target to emplace the “radar” and operate it. It’s an expensive unit, and the salaries of the people manning it are even more expensive. It’s for when they want to pay very close attention to a very special person, not for serving as everyone’s nanny.

Update (May 21, 2014): We now have a list of targets, as of 2010, via Glenn Greenwald; see pages 58–60 of his “Documents from No Place To Hide” pdf. The listed targets for “VAGRANT” or “DROPMIRE”, all UN missions in New York City or embassies in Washington DC, are:

  • the Brazilian UN mission
  • the “EU/Emb” (presumably the EU Delegation to the US)
  • the French UN mission
  • the Georgian embassy
  • the Indian UN mission
  • the Indian embassy
  • the Japanese UN mission
  • the Slovakian embassy
  • the South African UN mission
  • the South Korean UN mission
  • the Taiwanese consulate in NYC
  • the Vietnamese UN mission

Some of these apparently were done to give the US government an edge in the negotiations over sanctions on Iran. Of course this is most probably not a complete list. In any case, embassies are traditional targets for spying, and their occupants should already know about frequency counters and spectrum analyzers and such.

Update (Aug 13, 2014): I’d attributed this leak to Snowden, but Bruce Schneier is of the opinion that it is from a second leaker, which seems quite plausible, as this is not mass surveillance, which has been Snowden’s main emphasis, nor was it published by the people to whom Snowden gave his documents.