The FBI’s next move
As this is being written, a judge is considering competing briefs from Apple and the government on the question of whether Apple should have to comply with the FBI’s request to help it brute-force the passcode of an iPhone which had belonged to one of the perpetrators of the San Bernardino shootings, both of whom are now dead. (Update, May 10: the FBI withdrew its request a while ago, saying that it had found another way into the phone. But nobody doubts that they’ll be back at some point with another phone; so this article still represents what could easily be their next move. I’ve also edited this article for clarity, and to make some points more explicit.)
At first, when considering the software changes the FBI demanded, I was sympathetic: the main point is to remove a couple of features from the code. These features are (to quote from Apple’s brief) that the code “imposes escalating time delays after the entry of each invalid passcode”, and “includes a setting that—if activated—automatically deletes encrypted data after ten consecutive incorrect attempts to enter the passcode”. Commenting out the lines of code that do these things is, for someone who knows the code, a five-minute task. Just the administrative overhead of sending the resulting code through Apple’s safeguarded signing process would be more work than that. (I don’t know the details of how that is done, and those who do know shouldn’t say; but such master signing keys are corporate crown jewels, and it is not to be expected that handling them is quick or easy.) Even that, though, doesn’t seem like an inordinate burden, in and of itself.
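To make concrete what those two features amount to, here is a minimal sketch (Python, purely illustrative; the function names, delay schedule, and structure are my guesses, not Apple’s actual code) of the kind of logic a passcode handler presumably contains. The two marked branches are the escalating delay and the wipe-after-ten-failures setting; disabling them is the heart of what the FBI originally asked for.

    # Hypothetical sketch of a passcode-check routine; the names and the
    # delay schedule are illustrative, not Apple's real code.
    import time

    ESCALATING_DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600, 3600, 3600]  # seconds, illustrative

    def check_passcode(entered, verify, failed_count, wipe_after_ten):
        if verify(entered):
            return "unlocked", 0
        failed_count += 1
        # Feature 1: escalating time delay after each invalid passcode
        # (one of the two things the FBI wants disabled).
        delay = ESCALATING_DELAYS[min(failed_count, len(ESCALATING_DELAYS)) - 1]
        time.sleep(delay)
        # Feature 2: optional wipe after ten consecutive failures
        # (the other thing the FBI wants disabled).
        if wipe_after_ten and failed_count >= 10:
            return "wiped", failed_count
        return "locked", failed_count

With those two branches gone, nothing stops an outsider from trying passcodes as fast as the phone will accept them.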
But then the FBI added on more requirements which make the job considerably harder, to the point where Apple’s estimate of the work involved (two to four weeks by a team of six to ten people) seems reasonable. One demand is that Apple give them an electronic interface for entering passcodes, so that an FBI technician doesn’t have to sit there tapping in thousands of passcodes by hand. Another is that this new version of the software must be capable of running in RAM rather than being installed on the device, so as not to destroy any data.
What nobody seems to realize is that the FBI already has an electronic interface for entering passcodes; it’s staring them right in the face. It’s called a “capacitive touchscreen”. Such touchscreens work electrically; they send out signals (voltage) through the air to sense the capacitance of the fingers above them. Many eco-nuts would no doubt be horrified to know that Apple’s devices are sending electricity through their fingers — indeed, through their whole body — whenever they get their fingers near the screen; but it’s true. And it isn’t hard to make a circuit that connects a capacitance to a point on the screen, and which varies that capacitance under computer control. Nor is it hard to expand that circuit to several points on the screen, as would be required for entering a PIN code. (As an alternative to just varying a capacitance, the circuit might sense the voltage waveform that the screen puts out and apply a deliberately spoofed response to mimic a finger; but that is a bit more complicated and I doubt it would be necessary.) Nor is it difficult to write a piece of software that looks at the screen via a video camera and senses the difference between an unsuccessful passcode entry and a successful one. Combine those two tools, and you get an automatic PIN brute forcing machine.
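Tying those two tools together takes only a simple control loop. A sketch of the overall shape (Python; touch_digit and screen_shows_unlock are placeholders for the capacitance-switching hardware and the camera-watching code, which would of course be the real work):

    # Hypothetical brute-forcer control loop; the two stubs stand in for the
    # capacitance-driving circuit and the camera-based success detector.
    import itertools
    import time

    def touch_digit(d):
        # Placeholder: switch a capacitance onto the screen location for digit d.
        pass

    def screen_shows_unlock():
        # Placeholder: inspect the camera image and report whether the phone unlocked.
        return False

    def brute_force_pins(length=4, settle=0.1):
        for pin in itertools.product("0123456789", repeat=length):
            for digit in pin:
                touch_digit(digit)
                time.sleep(settle)          # let the touch register before the next one
            time.sleep(settle)              # wait for the phone to accept or reject the code
            if screen_shows_unlock():
                return "".join(pin)
        return None

With the delay and auto-wipe features out of the way, even at a leisurely few seconds per attempt a loop like this gets through all ten thousand four-digit PINs in well under a day.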
Indeed, for previous versions of iOS, such a device exists, by the name of the IP-Box, though it seems to somehow enter the PIN via USB. (I am not sure how; a bit of searching makes it seem like this is not a normal feature for the iPhone.) It also has the feature of cutting the power to the phone after PIN entry failure but before the iPhone writes the record of that failure to flash memory, so that thousands of PINs can be tried. This requires that the phone be disassembled so that the battery connection can be cut and rewired to go through the IP-Box. It also doesn’t work with recent versions of iOS, which Apple has fixed so that they write the record of failure to flash memory before reporting it to the user.
So here’s what the FBI could do. First, build the above-described automatic PIN brute forcing machine. (Don’t look at me that way, FBI; yes, I know your specialty is law enforcement, not building electronics; but you at least should have enough of a technical clue to know whom to hire. Though it would help if the community of people who can build this sort of thing would acknowledge that law enforcement has legitimate needs, rather than responding in a tribal fashion. The world really does need people whose job description is “the bad thing that happens to bad people”. But these days they can’t be cavemen; they need to understand something of computing and electronics.)
The second step would be to hack Apple’s code via binary patching, to remove the two features that prevent brute-forcing passcodes. Probably they would just have to overwrite two machine instructions with NOP instructions. The harder part would be finding those two instructions, and even that probably wouldn’t be too difficult for a good reverse engineer (though I’m guessing here; the difficulty of such tasks can vary quite a lot, depending on things like the availability of a symbol table, and I’m very far from being an iOS expert). Having done that, they could go to Apple with a much simpler request: we have this hacked code; sign it for us so that it will run on the terrorist’s phone.
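Mechanically, that sort of patch is trivial once the offsets are known. A sketch (Python; the file name and offsets are made up, and the NOP encoding depends on whether the code in question is 32-bit or 64-bit ARM):

    # Hypothetical binary patch: overwrite two instructions with NOPs.
    # NOP_BYTES is the 64-bit ARM NOP (0xD503201F) in little-endian byte order;
    # a 32-bit build would need a different encoding.  Offsets are placeholders
    # that would come from the reverse-engineering step.
    NOP_BYTES = bytes.fromhex("1f2003d5")
    PATCH_OFFSETS = [0x1234, 0x5678]

    with open("ios_image.bin", "r+b") as f:   # hypothetical file name
        for offset in PATCH_OFFSETS:
            f.seek(offset)
            f.write(NOP_BYTES)

The patched image would of course then fail the signature check, which is exactly why the FBI would still need Apple.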
That would reduce the debate to its essence. Apple would no longer be able to argue that they shouldn’t be forced to create code that was too dangerous to exist, because they wouldn’t be creating it; the FBI would already have created it. Apple would just be signing it with their key. This is why Apple is a fair target for government demands: not because of their license agreement (an argument the FBI made that Apple’s lawyers easily brushed aside), nor because they’re the only ones who know how to program the device, but because they have retained control over the device by their control of the signing key for the operating system. Asking them to use it is a digital parallel to a demand to an owner of a storage locker: we have a warrant for this locker, and you have the key, so open it up for us. The parallel is so strong that Apple’s attorneys might well advise them not to even try fighting, but just to comply. And, for that matter, they might decide they could comply in good conscience: developing the sort of electronic interface the FBI is presently asking for, which could enter passcodes wirelessly (or via the charging cable), really does pose risks that using the touchscreen doesn’t. James Comey, the head of the FBI, has stated that he doesn’t want a backdoor into phones, but rather entrance via the “front door”; and if anything is the front door to an iPhone, it’s the touchscreen. So access of this sort, while it might not be what he secretly wants, is exactly what he has asked for.
As for the FBI demand that the version of iOS which is to be produced for them run from RAM, the basis for the modification could be a version of iOS which already does so; at least I’m under the impression these exist. Even if that weren’t possible, changing just a few bytes of the operating system in flash memory is not really going to alter the evidentiary value of the data there, though there might be problems related to legal requirements for forensic tools.
Now, if Apple signed a version of the code that just had those two changes, it would run on all iPhones of that model. So the FBI might tell Apple that if instead they wanted to make their own version of the hack which would also check the phone’s serial number and only run on that one phone, they would be free to do that instead of signing the FBI-hacked version. (The FBI has been much criticized for saying that this is a one-time request, when inevitably other requests will follow if this one is successful, but they have a point: this isn’t a warrant like the one served on Ladar Levison, which demanded that he supply his private SSL key to them, enabling them to decrypt not just the messages they originally were after but all the messages all of his customers ever sent or received. In this case any further phones to be unlocked would still have to be individually approved by a judge.)
All the same, the FBI could let Apple stew in their own juices a bit here. There are great risks in having a common signing key for all their phones, because if that key ever gets disclosed Apple’s control over hundreds of millions of iPhones is lost. If instead Apple had one signing key per phone, they could, on receipt of this sort of warrant, merely hand over the key for that particular phone and let the government do whatever they wanted with it. The whole drama would be avoided; it would be as routine a thing as court-ordered access to a storage locker. At present, it is as delicate as it would be for a nationwide storage locker chain which had a common master key shared throughout all its facilities, where any use of the master key would mean taking the risk that it might leak out, thus compromising their security nationwide.
In the last couple of decades, cryptographic schemes have been moving away from having a single key for everything and in the direction of having a multitude of keys. Indeed, the possibilities for mischief that a single key opens up are a large part of why everyone with a clue is scared of government-imposed backdoors: such schemes almost inevitably involve a single backdoor key, even if an attempt is made to split that key up for storage. The world has never seen the possibilities that such backdoors would give rise to; fiction offers the only parallel to them. In particular, they evoke the world of The Lord of the Rings, in which the One Ring confers invisibility and vast powers, corrupts its users, and is the centerpiece of various adventures, in which it passes through the hands of all sorts of creatures, from Gandalf to Gollum. The authorities have been remarkably creative in trying to find a replacement word for “backdoor”, calling it “a front door”, “a golden key”, and such; but whatever the word, many knowledgeable people will still think of the intention as being something like forging the One Ring in the fires of Mount Doom:
One Ring to rule them all, one ring to find them,
one ring to bring them all, and in the darkness bind them.
(And to those in the national security establishment to whom that sounds pretty cool: beware! Some hobbit may steal it and run off to Russia with it. Tricksy, those hobbitses.)
Apple has called the software they are being asked to create a “backdoor”, though it is not the traditional sort of backdoor which enables spying on running systems without the user’s knowledge. I do not feel comfortable entirely agreeing that it is a backdoor, nor entirely denying that it is a backdoor; but the weakness that makes backdoors scary is a sort of weakness that is shared by any system that puts too much reliance on a single cryptographic key.
But though going to one key per device would solve the problem of how to give law enforcement access to devices as per court order, it would not solve all of Apple’s problems. In particular, it would not much lessen the degree to which Apple is a target for espionage; a thumb drive full of keys is not all that much harder to steal than a single key. And if our government were to turn brutal, it could confiscate the one about as easily as the other. To seriously reduce their status as a target, Apple would have to give up power over the iPhone. As things stand, you can boot only an Apple-signed operating system on an iPhone, and unless you “jailbreak” it, that operating system will only run Apple-approved software. This is a mechanism of social control which is just begging for governments to start meddling with it for their own purposes. In Android, the same limits are there, but can be turned off by the user (or, at least, that’s the way Google would have it; makers of phones can and do alter Android to lock in users). But the best example for letting people take ownership of their own devices comes from the PC world. PC users can disable Secure Boot, which parallels what can be done in Android; but they can also go beyond that, and replace Microsoft’s signature-checking key with their own, so that they themselves are the only ones who can approve changes to the operating system. Almost nobody actually does this, but it’s a welcome safety valve.
If something similar were implemented for the iPhone, again very few users would take advantage of it, but those few would be the ones who were the most concerned about security, such as investment banks, government security agencies, spies, and terrorists. Even with the vast majority of users still depending on Apple for their security, having the highest-value targets opt out would significantly lessen the degree to which Apple’s signing keys were a target for espionage and for government demands. It is really not fair for one corporation to have to bear such a large fraction of the world’s security burdens, nor should Apple try; they should release some of it to those of us who are willing to shoulder it ourselves. That way they could actually be, as their brief to the court claims them to be, merely a supplier of products and services, not a controlling master.
So that’s what caused that “flash crash”
The authorities have recently been insinuating that the “flash crash” five years ago was caused by a guy in London who was trading on his own account from his parents’ house, trying to manipulate the market. Of course they have been careful to not say this explicitly, since it can’t really be true. There is an old jibe that “if people built houses the same way that programmers write programs, the first woodpecker that came along would destroy civilization”. That’s unfair to careful programmers, but it’d be fair to apply it to the programmers of a stock market where trillion-dollar swings in valuation could be caused by the misdeeds of someone playing with mere millions of dollars. Not that there was serious reason to believe that even that happened; the announced link to the flash crash seems more like an attempt to grab headlines for a minor arrest than a real attempt to explain the flash crash. And the coverage from people who know finance (see, for example, columns by Matt Levine and by Michael Lewis) has indeed been appropriately skeptical.
But it’s not like they have a compelling alternative theory of the cause of the flash crash. Lewis hints at high-frequency traders being to blame, but that’s a little too easy to really be satisfying. With a fast crash, one can expect that people who trade fast had a lot to do with it. But that does not say who exactly was at fault nor what exactly they did wrong.
Lewis does, though, link to a report by Nanex which offers some much more decided opinions. Nanex reports seem to mostly be by Eric Hunsader, although they are unattributed, so perhaps others are involved. In any case, Nanex is a small company whose business is collecting, analyzing, and disseminating market data. Nanex noticed that at the start of the flash crash
“… quotes from NYSE [the New York Stock Exchange] began to queue, but because they were time stamped after exiting the queue, the delay was undetectable to systems processing those quotes.”
Perhaps one has to have spent many hours pondering the mysteries of feedback and stability for those words to leap off the page proclaiming “this was the cause!”. But perhaps I can explain.
Pretty much any feedback control system can be deranged by adding delay to the feedback; and the algorithms for trading on the market constitute, collectively, one huge feedback control system. Each algorithm sends orders into the exchange while monitoring the price quotes that come back from the exchange (the feedback). Large parts of the control laws of the system are kept secret (people don’t reveal their trading strategies, nor the algorithms that implement them), and the whole system is immensely complicated, but one can get an idea of how devastating added delay is by looking at simpler disciplines.
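To see how little it takes, here is a toy simulation (Python; the gain, the plant, and the delay are made-up numbers chosen only to illustrate the principle). With instant feedback the loop settles quietly; with the same gain but the measurement delayed a few steps, it oscillates with growing amplitude.

    # Toy illustration: the same feedback gain that is stable with instant
    # feedback becomes unstable when the feedback is delayed.
    def simulate(gain=0.5, delay=5, steps=80):
        x = [1.0] + [0.0] * steps                            # state, starting displaced from zero
        for k in range(steps):
            fed_back = x[k - delay] if k >= delay else 0.0   # delayed measurement
            x[k + 1] = x[k] - gain * fed_back                # integrator plant, proportional control
        return x

    no_delay = simulate(delay=0)
    delayed = simulate(delay=5)
    print("no delay, final value:", round(no_delay[-1], 6))                     # decays to essentially zero
    print("delayed, peak late in run:", round(max(abs(v) for v in delayed[-20:]), 3))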
Electrical engineers, for instance, often get bitten when they try to design a system in which an op amp drives a capacitive load. Op amps are generally used with feedback, and a capacitive load delays that feedback, which often turns what would otherwise be a well-behaved circuit into one that oscillates madly. Solving this might involve removing the capacitive load (or at least moving it outside the feedback loop), but might also involve increasing it (to where the added capacitance forms the “dominant pole”).
In aircraft design, there is something known as a “pilot-induced oscillation”, PIO for short. An example can be seen in this video of a prototype F-22 aircraft crashing. But by convention, using the term “PIO” does not imply that the pilot was to blame in any moral or legal sense. He might be; but the usual idea of “PIO” is that the plane confused the pilot by how it behaved — in particular, by the delay in its responses to the controls. If an aircraft responds instantly to the controls, PIOs seldom occur. Yet on the other extreme, if an aircraft responds quite slowly to the controls, as in a large airliner, there is again seldom a problem. (This parallels the electrical engineering situation in which increasing the capacitance can tame an unruly circuit.) PIO problems occur mostly when the aircraft’s response time is similar to the pilot’s reaction time.
PIOs are generally cases of pilots getting confused by a constant delay; if aircraft were to randomly introduce additional delays into the control response, we can only imagine how much confusion it would cause, since no sane aircraft designer would ever do such a thing. Introducing additional delays right when things got hairy would be the worst possible scenario. But that’s apparently what happened with the stock market.
The undetectability of the delay in stock market quotes is what makes the flash crash comparable to the above two examples. Both in an electrical circuit and in piloting, no information is available to the control system about how long the delay in feedback is. The feedback voltage just takes longer to change, or the airplane takes longer to respond. It does not tell anyone that it has been delayed, it just is delayed. If quotes from the stock market had correct timestamps, that would be a considerably more benign situation: the quotes would be delayed, but they’d be saying how much they were delayed, so algorithms could adapt accordingly. (They might not adapt well, since the code for dealing with long delays would be rarely exercised and thus likely to be buggy, but at least they could try to adapt rather than being left in the dark.)
As a third example of feedback delay causing misbehavior, somewhat closer to the matter at hand (indeed, it might even have been involved), there is the problem of bufferbloat. That is a term for the problems that occur when networking hardware and software has buffers that are too large. Computer memory, in recent years, has become so cheap that adding oversized buffers can be done at very little cost, even in cheap consumer devices. It is common for network devices these days to have enough buffering capacity to store several seconds’ worth of data packets before forwarding them.
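The arithmetic behind “several seconds” is simple: the worst-case queuing delay is the buffer size divided by the link rate. A quick illustrative computation (the buffer and link sizes are typical round numbers, not measurements of any particular device):

    # Worst-case queuing delay = buffered bytes / link throughput.
    def queue_delay_seconds(buffer_bytes, link_bits_per_second):
        return buffer_bytes * 8 / link_bits_per_second

    # e.g. a 1 MB buffer draining into a 2 Mbit/s uplink
    print(queue_delay_seconds(1_000_000, 2_000_000))   # 4.0 seconds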
At first glance the increased buffer size seems innocuous: instead of packets getting discarded due to lack of buffer space, they are stored and later forwarded correctly. The problem is that TCP, the Internet’s main connection protocol, was not designed to deal with this. Packets are supposed to get discarded; that is how the computer sending the packets figures out that the link is congested, whereupon it throttles down the data rate. But for the TCP control algorithm to work smoothly, it must get this feedback (telling it that packets have gone missing) in a timely fashion. If the feedback is delayed, the algorithm overcompensates, oscillating between sending data too fast and sending it too slowly. This wreaks havoc not just on that connection but on any others which happen to share the communications channel. With TCP, delays in feedback are somewhat detectable (due to the timestamps defined in RFC 1323), but the system somehow still manages to misbehave.
Undetectable delay is such a potent destabilizing influence that one might wonder why there haven’t been even more flash crashes. The answer seems to be that not all participants were victimized by the delay. This will take a bit of explanation of the stock market system as it exists today.
The authorities have decreed that there can be multiple stock exchanges all trading the same stocks, but that they have to be bound together to all more or less have the same prices, to be part of a “national market system”. Rather than just letting each exchange operate independently and letting traders arbitrage between them to even out the prices, there is a “consolidated quotation system” that gets data feeds from all the exchanges and computes a “national best bid and offer”: a listing, for every stock, of the best price at which anyone will buy and the best price at which anyone will sell. But this transmission and consolidation of data takes time (several milliseconds), so the system can never be quite up to date.
The rules dictate that orders sent into any exchange must be routed to the exchange with the best price. But that is more a promise than a guarantee, since the transmission takes time; information can’t travel any faster than the speed of light. So when the order arrives the price may no longer be available. And it’s not just that accidents occasionally happen; high-frequency traders play games with this, quickly withdrawing orders and substituting ones with worse prices. To really get the simplicity that the system pretends to, the regulators would have to dictate that each stock be traded only on a single exchange; all orders for it would be sent to that exchange. There could still be competition between exchanges, but each stock would have to decide which of the competing exchanges it was to be listed on.
Alternatively, they could just drop the pretense of the system being a unified whole and rely on arbitrage to equalize prices between exchanges; even that would be simpler than a system that tries to be unified but really isn’t.
In any case, that is the environment that the flash crash took place in: multiple exchanges which are supposed to act like a single unified system but don’t quite, with the consequence that high-frequency traders and other heavy hitters find themselves needing to get direct data feeds from all the exchanges rather than just subscribing to the consolidated feed. But since direct feeds are expensive, the vast majority of traders just get the consolidated feed.
Though that first Nanex flash crash report identifies mis-timestamping, it doesn’t say who exactly did it. Subsequent reports (e.g. this one) fill in that detail: it was the consolidated quote system, which timestamped the quotes after they arrived at the data center which does the consolidation. Indeed, it is still doing so, though that should change due to the recent SEC ruling that exchanges must start sending timestamps to the consolidator — a ruling that came to public attention two weeks ago when the NYSE, in preparing to implement it, broke their systems for a large part of a day. (What seems to be disconfirmed, though, are the Nanex claims that there were already timestamps on the data coming into the consolidator and that those correct timestamps were being replaced with incorrect timestamps.)
Another Nanex article adds a further detail: while the direct feeds to colocated high-frequency traders use the UDP protocol, the feed to the consolidated system uses TCP. (Yes, the stock exchanges use the same sort of networking hardware that the Internet does, though of course operating over private links rather than over the public net.) TCP, it will be recalled, is the main victim of bufferbloat; UDP is less affected. The “U” in UDP doesn’t actually stand for “unreliable”, but that’s the easiest way to remember it: in UDP, data packets are just sent over the network without keeping track of whether any of them have been lost or making sure they arrive in the right order. UDP may in fact be reliable if the underlying network is reliable, but it doesn’t protect the data like TCP does, and thus is not subject to the complicated pathologies that can arise from trying to protect the data. Still, even with UDP the large buffers are still there, invisible until some link gets overloaded but then adding delays to the transmission of data.
In 2010, the year of the flash crash, the available networking hardware and software generally suffered from severe bufferbloat; widespread awareness of the bufferbloat problem dates from late 2010, when Jim Gettys coined the term and wrote several articles on it. Though good algorithms for managing buffers have since been developed, getting them widely fielded in networking hardware is another matter, and is still (in 2015) very far from complete. The flash crash involved, at its worst, delays of tens of seconds; that’s more than enough for a TCP connection to exhibit pathological behavior. So bufferbloat probably made the flash crash worse, though to what extent is hard to tell.
The official SEC / CFTC report on the flash crash reads like its authors had heard of the Nanex argument that delay was the cause, thought they had to say something regarding that argument, but didn’t think they had to really take it seriously. Indeed, from a social point of view they didn’t; the Nanex report gives an account of how delay caused the flash crash, but it’s the sort of account that even when correct does not convince people. It is terse enough and has enough jargon that it can’t reach the general public; and specialists will wonder what parts of it the author really knows and what parts he’s just guessing at. I am not too sure myself — which is why, though not disagreeing with it, I have not relied on it here, but instead have just been arguing for the general proposition that delays in feedback are destabilizing. But for those who want a blow-by-blow account, this, again, is the link.
The official report’s section on delays focuses on the worst delays, which occurred rather late in the crash and thus couldn’t have had a causative role. The word “timestamp” (or even just “stamp”) occurs nowhere in the entire report. There is a mention of a delay of five seconds in a direct feed; but there are many direct feeds, and nothing is said about how prevalent this might have been, nor about whether any delays in direct feeds occurred early in the crash and might have fed it. On the whole the report reads like they were looking for a villain, rather than regarding the situation as a dynamical system and trying to understand how the system’s reactions could have been so messed up.
They did in fact find a villain, or at least thought they did: a company which had been selling a large number of “E-Mini” futures (futures on the S&P 500 stock index). The Chicago Mercantile Exchange disagreed about this being at all villainous, as did Nanex, who said that the SEC/CFTC had even gotten their facts wrong about the trading algorithm that the company used. That Nanex article then goes on to argue that the actual cause of the system overload and delays was that high-frequency traders had dumped a lot of E-Mini contracts very quickly — and that since the E-Mini is something like a future on the whole market, this propagated via arbitrage into quite a lot of buy and sell orders on various stocks and other financial instruments, which is what produced the overload and delays in the quote system.
Looking at the graph where those large dumps are marked, they seem to have a total value of something like 500 million dollars (about ten thousand contracts, with each contract being 50 times the value of the S&P 500, which is roughly a thousand). This is still nowhere near the sort of impetus that is normally required to create a trillion-dollar shift in the stock market. But it is plausible as something that increased the level of market activity to where the quote system became overloaded.
This tactic of sending a big pulse of orders strongly resembles the behavior of the “Thor” algorithm that Michael Lewis describes in Flash Boys as being a way for investors to escape the depredations of high-frequency traders. The idea is that the big pulse of orders all gets filled before anyone can react by moving prices. But here it was done by high-frequency traders. It’s also something that can be done to people like that guy in London who was trying to manipulate the market: slamming the market with a massive pulse of buy orders would make his not-meant-to-be-filled sell orders actually get filled, giving him notice that he was swimming in deeper water than he’d thought and with larger predators.
In any case, the details of who exactly overloaded the system seem relatively unimportant: any system has to expect occasional overloads and should handle them gracefully. That might include a delay but does not include mis-timestamping which conceals the existence of the delay.
To finally get back to the question of why there haven’t been more flash crashes, as mentioned above, high-frequency traders and other large players in the market have direct feeds; they don’t rely on the consolidated feed. Its misbehavior in the flash crash no doubt prompted them to use it even less, at least as a source of ground truth. But they probably use it more as a source of information that reveals what other traders are misinformed about. Preying on the misinformed can be quite profitable even if the misinformation is just a matter of 200 milliseconds of delay. And though predatory, it probably is predatory in a way that stabilizes the market, since taking advantage of misinformed people moves the market in the opposite direction from the way they’d move it.
But it would be even better if there were no misinformation in the first place. Having correct timestamps, so that delays are evident, is a good first step. As mentioned above, the SEC has dictated that timestamps be sent from the exchanges, and this is being implemented. If these timestamps are passed on to the consolidated feed’s customers (as they should be), a lot of customers will suddenly notice that they’re getting a fair bit more delay than they thought they were getting, and will complain.
And they should indeed complain, since there’s no reason for quotes to get stuck in buffers on the channel between the exchange and the consolidator. The fix (courtesy of the bufferbloat community) is for the software that sends the quotes to rate-limit its output. It would keep its own modest buffer (perhaps enough to hold a millisecond’s worth of quotes) and would discard any quote that overflowed the buffer. It would empty that buffer at a fixed rate which was slightly below the data rate of the outgoing channel. This way data would always be arriving at any point in the channel at a slower pace than it could be sent out at, so buffers in the channel would not fill up.
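A minimal sketch of that sender-side fix (Python; the rates and buffer size are placeholders, and the drain rate would in practice be set slightly below the outgoing line rate):

    # Sender-side pacing: keep a small bounded queue, drain it at a fixed rate,
    # and drop quotes that would overflow it rather than letting them go stale.
    import collections
    import time

    class PacedQuoteSender:
        def __init__(self, send, quotes_per_second=1000, max_queued=100):
            self.send = send                    # callable that puts one quote on the wire
            self.interval = 1.0 / quotes_per_second
            self.queue = collections.deque()
            self.max_queued = max_queued

        def submit(self, quote):
            if len(self.queue) >= self.max_queued:
                return False                    # drop rather than let it sit and grow stale
            self.queue.append(quote)
            return True

        def run(self):
            while True:                         # drain loop; in practice its own thread
                if self.queue:
                    self.send(self.queue.popleft())
                time.sleep(self.interval)

Anything dropped here is a quote that would otherwise have arrived too late to be useful anyway.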
At least that would be the simplest fix; more sophistication could be applied. For instance quotes closer to the best bid/offer could be given priority over quotes farther from it, and could bump them from the buffer. Also, a level of fairness among stocks could be enforced so that huge quote activity in one stock wouldn’t crowd out occasional quotes in other stocks. But even the simplest solution would be an improvement over leaving the buffers uncontrolled.
The above fix is just for delays on the link from an exchange to the consolidator. Delays from the consolidator to subscribers are harder, since they go over many different channels and share them with other data. There can also be delays in the matching engine of the exchange itself; and there can be congestion on the way to the exchange. In those cases, too, it’s probably best to adopt the general policy of throttling rather than delaying — that is, to discard orders that are coming in too fast rather than letting them accumulate in a big buffer to later emerge stale. How exactly that might be done is a large topic; but at first glance it seems that what is needed is not some Big Idea but rather a lot of petty negotiations about who pays for what network services and exactly what they get in exchange — negotiations that already take place, but should be expanded to include stipulations about what exactly happens when a link gets congested: whose packets get priority (if anyone’s), what data rate is guaranteed to the subscriber, and what the maximum delay might be.
In modern programming, there are a lot of things that are done not because the computer needs them for the code to work but because the programmer needs them to cover for his natural human failings; they make the programming environment simpler and more predictable. In the stock market, eliminating congestion delays falls into this category. Code that deals with markets will normally be written and tested with delays that are relatively minimal. When all these codes are suddenly thrown into a high-delay environment the results will be unpredictable; even if each individual piece of code does something locally sane, they may interact badly. The players with direct feeds perhaps have enough of an information advantage and enough money to take advantage of such people and ruthlessly crush their attempts to destabilize the market, but it still would be best if they didn’t have to.
Also, in the case of high-frequency traders, the “enough money” part may be doubted: such companies try hard to keep the level of their holdings small. After the flash crash was well under way, there was a “hot potato” period where HFTs traded E-Mini contracts among one another, continually passing them on at lower and lower prices. If any of those firms had had the nerve to hold onto those contracts for a few minutes until the market had recovered, it would have made a very healthy profit. They no doubt realized this to their chagrin after the fact, and at least thought of changing their algorithms to deal better with such situations.
Indeed, in general, flash excursions like this (“excursions” because they can happen upwards as well as downwards) represent someone being foolish and are an opportunity for others to take advantage of that foolishness. Thus does the market discipline its own. But the more complicated the system is — the more it misbehaves in weird ways — the longer it takes for participants to learn how to deal with it. And when an event is as large as the flash crash was, one can expect that the system itself, not just participants, displayed some rather serious misbehavior. The hidden delays in the consolidated feed definitely qualify as serious misbehavior.
Of course, while I think this was the cause of the crash, I’m not pretending to have offered proof of it — not only because I haven’t delved deep into details of what happened that day, but also because in cases like this even people who agree about what caused what in the chain of events can disagree about which of those causes was truly blameworthy and points to something that should be changed to prevent a recurrence. But in this case eliminating congestion delays seems to be the easiest prescription: it does not involve any moral crusades but is just a matter of fixing things on a technical level.
So that’s how they really do Tempest
One of the recent Snowden revelations was a catalog of spying items that the NSA’s “Tailored Access Operations” unit had for breaking into bad guys’ computers. Most of the items weren’t particularly surprising. We already know that since they can’t break cryptography, they try to break into endpoints, where the plaintext lives — and even if we hadn’t known that from recent revelations, it makes complete sense for them to operate that way. What was surprising was the Tempest stuff.
To explain a bit, Tempest is the code word for spying on people’s computers via unintentional electronic emanations. A computer monitor, for instance, is driven by a high frequency signal which more or less broadcasts whatever is being shown on the monitor. If thoroughly shielded it wouldn’t be broadcasting, but it never is thoroughly shielded, except in special Tempest rated equipment such as is sold to various agencies of the federal government who worry about such things. And the broadcast is performed regularly at 60 times a second (at least that’s the usual refresh rate these days), a piece of redundancy which makes the signal easier to retrieve. Old-fashioned CRTs amplified this signal to high voltage to shoot it through an electron gun, but as Markus Kuhn has found, even modern flat panel displays can produce decipherable emanations.
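One standard way to exploit that 60-times-a-second repetition is coherent averaging: adding up N noisy copies of the same frame improves the signal-to-noise ratio by roughly the square root of N. A toy demonstration (Python with numpy; the “frame” and the noise level are synthetic):

    # Averaging repeated noisy copies of the same frame cuts the noise by ~sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.random(10_000)                    # stand-in for one screen's worth of signal
    noisy = [frame + rng.normal(0, 5.0, frame.size) for _ in range(100)]  # 100 refreshes

    single_err = np.std(noisy[0] - frame)
    averaged_err = np.std(np.mean(noisy, axis=0) - frame)
    print(single_err, averaged_err)               # the averaged copy is roughly 10x less noisy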
But how well Tempest worked in practice was never quite clear to me. Okay, various demos have shown it to work in some cases. But whether those cases are typical has been unclear; monitors no doubt vary in how well their shielding is designed and built. And even if you can do a good job picking up the signal from one monitor, in practice there’ll probably be tens or hundreds of monitors within range; what can you do with the resulting mess of signals stomping all over each other? So it was no surprise reading, a while ago, in the book Security Engineering, by Kuhn’s Ph.D. advisor Ross Anderson, that
“Despite the hype with which the Tempest industry maintained itself during the Cold War, there is a growing scepticism about whether any actual Tempest attacks had ever been mounted by foreign agents in the USA.”
“Having been driven around an English town looking for Tempest signals, I can testify that doing such attacks is much harder in practice than it might seem in theory…”
What was a surprise was looking at the recently-leaked NSA catalog and seeing an entry for a “radar”. Radar? What is this, for tracking airplanes? “Primary uses include VAGRANT and DROPMIRE collection”. Googling those, they turn out to be Tempest stuff, the former being on computer screens and the latter on printers.
So that’s how the pros do it: not just by passively listening for emanations, but by making emanations. This unit, the “CTX4000”, broadcasts at a frequency adjustable from 1 to 2 gigahertz, and listens for return signals with a bandwidth of up to 45 megahertz. (As the catalog states, this unit is obsolete and, in 2008, was already scheduled for replacement; modern flat-panel displays are driven by signals of higher bandwidth than that.) Power levels are “up to 2W using the internal amplifier; external amplifiers make it possible to go up to 1kW”. The carrier wave is broadcast continuously.
But this calls for another bit of explanation, as to why this would work. Well, to start with the simple part, the use of a “radar” makes it possible to pick out the device you want to spy on: point the antennas at it, and not at all the other devices within range. Antennas at these sorts of frequencies can be quite directional without being too large. The more complicated part, at least to the uninitiated, is the modulation: why would you get back a signal of interest modulated on to the carrier wave?
Well, you might not. If all the materials involved are “linear”, you won’t; if frequencies A and B are present in a linear device, each might be attenuated or amplified, and/or phase shifted, but no new frequencies will be generated. Linear devices include wires, resistors, capacitors, and inductors — at least the ideal versions of all those. (Real versions are of course subtly nonlinear, but probably not usefully enough so for the present purpose.) But silicon devices (transistors, diodes, and such) are all nonlinear — though for small signals, they can be more-or-less linear; thus the utility of high “radar” power, to force them into their nonlinear regimes. Going through a nonlinear device, signals “mix”; in radio technology, the ideal “mixer” is a multiplier, but in practice one usually uses some cruder mixer which does something very far from an exact multiplication. When you pass frequencies A and B through an ideal two-input mixer, you get out the frequencies A+B and A-B. That’s for an exact multiplication of A by B; if the mixer is cruder, you also get frequencies such as A, B, 2A, 2B, A+2B, A+3B, 2A-2B, and so forth.
In the case of a spy beam, the nonlinear “mixer” might be the transmit or receive transistor at one end of the wire connecting the computer to the video monitor. Frequency A might be somewhere in the signal driving the screen (which perhaps spans the frequency range of zero to 30 MHz), and frequency B the spy beam (perhaps 1.5 GHz), picked up by that same wire acting as an antenna. Then the mixer generates a modulated version of the spy beam (1.5 GHz +/- 30 MHz), which will then get re-radiated and picked up by the spy’s antenna. As for the unwanted mix frequencies, many of them are outside the frequency ranges which are being received (e.g. 2A+2B, which is about 3 GHz). As for the rest, one can try to filter them out somehow, or one can just hope that they generate a low enough level of noise that the resulting signal is still decipherable. This being spy work, one doesn’t need a perfect image of the screen being spied on or of the page printed by the printer being spied on. It’s enough if the text is readable; it doesn’t have to look pretty.
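The numbers can be checked with a toy computation (Python; the particular choice of A and the assumption that the receiver listens in a band centered on the carrier are mine, not the catalog’s). It lists a few of the mix products and marks which ones land inside the quoted 45 MHz receive bandwidth:

    # Frequencies in Hz.  A is one component of the 0-30 MHz monitor signal,
    # B is the illuminating carrier; the 45 MHz receive bandwidth is the figure
    # quoted from the catalog, assumed here to be centered on the carrier.
    A = 20e6
    B = 1.5e9
    BANDWIDTH = 45e6

    products = {
        "B - A": B - A, "B + A": B + A,              # the wanted modulation sidebands
        "A": A, "2*A": 2 * A,                        # baseband terms, nowhere near the carrier
        "2*B - A": 2 * B - A, "2*B + A": 2 * B + A,  # up around 3 GHz
    }
    for name, freq in products.items():
        in_band = abs(freq - B) <= BANDWIDTH / 2
        print(f"{name:7s} {freq / 1e9:6.3f} GHz  {'in band' if in_band else 'out of band'}")

The wanted sidebands land near the carrier; the baseband terms and the products up around 3 GHz fall far outside the receive band and can be ignored or filtered.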
If this isn’t good enough, you might have to sneak into the building and implant something. The device codenamed RAGEMASTER, perhaps, at a unit cost of $30. They recommend putting it onto the red video line; “it was found that, empirically, this provides the best video return and cleanest readout of the monitor contents”. In the photo, it seems to be a tiny little device that won’t even put a bulge in the cable where implanted: just some well-chosen nonlinearity, probably in silicon. Presumably in practice you slit the cable insulation to insert it, then somehow seal the slit closed.
Or you might also sneak in if you want other services, such as a microphone in the room. The catalog has microphones which insert into cabling and are readable via “radar”. It also has devices which can be implanted on low-frequency channels such as keyboards, to make reading those via “radar” feasible.
In any case, this system is quite easily detected by the intended victim, since he is being continuously illuminated by a microwave signal at rather substantial power. In the Cold War, the US embassy in Moscow frequently complained to the Russians about being irradiated with microwave beams. The NSA probably isn’t so gauche as to use power levels that actually harm the victim personally (the one-kilowatt option would be for use at a great distance, not for frying people at close range), but even the lower sorts of power levels furnish more than enough power to use standard direction finding techniques on, so as to track the spy beam back to its source. But using a sledgehammer on the source seems ill-advised, it (in its latest version, “PHOTOANGLO”) being government property with a price tag of “$40k (planned)” and likely twice that after the usual cost overruns.
The only caveat about the easy detectability would be if they were using spread spectrum techniques; spread spectrum stuff can be hard to detect. A cryptographically-spread signal can be below the noise floor and undetectable to people who don’t know the cryptographic key, and yet still can convey useful information to someone with the cryptographic key. But while that’s enough to make communications invisible, it can’t necessarily make radar invisible. With radar, the power level at the target has to be high enough that even faint echoes of it are detectable back at the radar unit. Also, high-frequency spread-spectrum stuff is hard to design and build, and my guess is that if they were using such techniques they’d be boasting about it in the catalog. So these NSA “radars” are probably easily detectable: just wave around a frequency counter, and it’ll tell you what frequency you are being illuminated at.
At any rate, this NSA Tempest stuff is too interesting for it to have really been a good idea to leak it. It doesn’t relate to dragnet surveillance of the whole population: the “radar” has to be pointed at one particular target, and someone has to get close to the target to emplace the “radar” and operate it. It’s an expensive unit, and the salaries of the people manning it are even more expensive. It’s for when they want to pay very close attention to a very special person, not for serving as everyone’s nanny.
Update (May 21, 2014): We now have a list of targets, as of 2010, via Glenn Greenwald; see pages 58-60 of his “Documents from No Place To Hide” pdf. The listed targets for “VAGRANT” or “DROPMIRE”, all UN missions in New York City or embassies in Washington DC, are:
- the Brazilian UN mission
- the “EU/Emb” (presumably the EU Delegation to the US)
- the French UN mission
- the Georgian embassy
- the Indian UN mission
- the Indian embassy
- the Japanese UN mission
- the Slovakian embassy
- the South African UN mission
- the South Korean UN mission
- the Taiwanese consulate in NYC
- the Vietnamese UN mission
Some of these apparently were done to give the US government an edge in the negotiations over sanctions on Iran. Of course this is most probably not a complete list. In any case, embassies are traditional targets for spying, and their staff should already know about frequency counters and spectrum analyzers and such.
Update (Aug 13, 2014): I’d attributed this leak to Snowden, but Bruce Schneier is of the opinion that it is from a second leaker, which seems quite plausible, as this is not mass surveillance, which has been Snowden’s main emphasis, nor was it published by the people to whom Snowden gave his documents.