So that’s what caused that “flash crash”
The authorities have recently been insinuating that the “flash crash” five years ago was caused by a guy in London who was trading on his own account from his parents’ house, trying to manipulate the market. Of course they have been careful not to say this explicitly, since it can’t really be true. There is an old jibe that “if people built houses the same way that programmers write programs, the first woodpecker that came along would destroy civilization”. That’s unfair to careful programmers, but it’d be fair to apply it to the programmers of a stock market where trillion-dollar swings in valuation could be caused by the misdeeds of someone playing with mere millions of dollars. Not that there was serious reason to believe that even that happened; the announced link to the flash crash seems more like an attempt to grab headlines for a minor arrest than a real attempt to explain the flash crash. And the coverage from people who know finance (see, for example, columns by Matt Levine and by Michael Lewis) has indeed been appropriately skeptical.
But it’s not like they have a compelling alternative theory of the cause of the flash crash. Lewis hints at high-frequency traders being to blame, but that’s a little too easy to really be satisfying. With a fast crash, one can expect that people who trade fast had a lot to do with it. But that does not say who exactly was at fault nor what exactly they did wrong.
Lewis does, though, link to a report by Nanex which offers some much more decided opinions. Nanex reports seem to mostly be by Eric Hunsader, although they are unattributed, so perhaps others are involved. In any case, Nanex is a small company whose business is collecting, analyzing, and disseminating market data. Nanex noticed that at the start of the flash crash
“… quotes from NYSE [the New York Stock Exchange] began to queue, but because they were time stamped after exiting the queue, the delay was undetectable to systems processing those quotes.”
Perhaps one has to have spent many hours pondering the mysteries of feedback and stability for those words to leap off the page proclaiming “this was the cause!”. But perhaps I can explain.
Pretty much any feedback control system can be deranged by adding delay to the feedback; and the algorithms for trading on the market constitute, collectively, one huge feedback control system. Each algorithm sends orders into the exchange while monitoring the price quotes that come back from the exchange (the feedback). Large parts of the control laws of the system are kept secret (people don’t reveal their trading strategies, nor the algorithms that implement them), and the whole system is immensely complicated, but one can get an idea of how devastating added delay is by looking at simpler disciplines.
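The destabilizing effect is easy to demonstrate numerically. Here is a toy simulation (not a model of any real market or circuit, just an illustration of the principle): a proportional controller pushes a value toward a target, but sees measurements that are some number of steps stale.

```python
# Toy demonstration: a proportional controller steers a value toward a
# target, but acts on measurements that are `delay` steps old.

def run(gain=0.6, delay=0, steps=60, target=100.0):
    history = [0.0]                     # the controlled value over time
    for _ in range(steps):
        # the controller sees a measurement `delay` steps stale
        measured = history[max(0, len(history) - 1 - delay)]
        correction = gain * (target - measured)
        history.append(history[-1] + correction)
    return history
```

With `delay=0` the value settles smoothly onto the target; with `delay=5` the very same gain produces oscillations that grow without bound. Nothing about the controller changed except the staleness of its feedback.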
Electrical engineers, for instance, often get bitten when they try to design a system in which an op amp drives a capacitive load. Op amps are generally used with feedback, and a capacitive load delays that feedback, which often turns what would otherwise be a well-behaved circuit into one that oscillates madly. Solving this might involve removing the capacitive load (or at least moving it outside the feedback loop), but might also involve increasing it (to where the added capacitance forms the “dominant pole”).
In aircraft design, there is something known as a “pilot-induced oscillation”, PIO for short. An example can be seen in this video of a prototype F-22 aircraft crashing. But by convention, using the term “PIO” does not imply that the pilot was to blame in any moral or legal sense. He might be; but the usual idea of “PIO” is that the plane confused the pilot by how it behaved — in particular, by the delay in its responses to the controls. If an aircraft responds instantly to the controls, PIOs seldom occur. Yet on the other extreme, if an aircraft responds quite slowly to the controls, as in a large airliner, there is again seldom a problem. (This parallels the electrical engineering situation in which increasing the capacitance can tame an unruly circuit.) PIO problems occur mostly when the aircraft’s response time is similar to the pilot’s reaction time.
PIOs are generally cases of pilots getting confused by a constant delay; if aircraft were to randomly introduce additional delays into the control response, we can only imagine how much confusion it would cause, since no sane aircraft designer would ever do such a thing. Introducing additional delays right when things got hairy would be the worst possible scenario. But that’s apparently what happened with the stock market.
The undetectability of the delay in stock market quotes is what makes the flash crash comparable to the above two examples. Both in an electrical circuit and in piloting, no information is available to the control system about how long the delay in feedback is. The feedback voltage just takes longer to change, or the airplane takes longer to respond. It does not tell anyone that it has been delayed, it just is delayed. If quotes from the stock market had correct timestamps, that would be a considerably more benign situation: the quotes would be delayed, but they’d be saying how much they were delayed, so algorithms could adapt accordingly. (They might not adapt well, since the code for dealing with long delays would be rarely exercised and thus likely to be buggy, but at least they could try to adapt rather than being left in the dark.)
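A hypothetical sketch of such adaptation (the field name and the 50-millisecond threshold are invented for illustration): on each quote, compare its exchange timestamp against the local clock, and stand down when the quote is stale rather than trading on it as if it were fresh.

```python
STALE_AFTER = 0.050   # seconds; an illustrative threshold, not a real standard

def handle_quote(quote, now):
    """Hypothetical consumer: trade on fresh quotes, stand down on stale ones.

    `quote` is assumed to carry the time it left the exchange, and `now`
    is the local receive time (clocks assumed synchronized for simplicity).
    """
    age = now - quote["exchange_timestamp"]
    if age > STALE_AFTER:
        return "stand_down"   # the quote admits it is delayed; don't act on it
    return "trade"
```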
As a third example of feedback delay causing misbehavior, somewhat closer to the matter at hand (indeed, it might even have been involved), there is the problem of bufferbloat. That is a term for the problems that occur when networking hardware and software has buffers that are too large. Computer memory, in recent years, has become so cheap that adding oversized buffers can be done at very little cost, even in cheap consumer devices. It is common for network devices these days to have enough buffering capacity to store several seconds’ worth of data packets before forwarding them.
At first glance the increased buffer size seems innocuous: instead of packets getting discarded due to lack of buffer space, they are stored and later forwarded correctly. The problem is that TCP, the Internet’s main connection protocol, was not designed to deal with this. Packets are supposed to get discarded; that is how the computer sending the packets figures out that the link is congested, whereupon it throttles down the data rate. But for the TCP control algorithm to work smoothly, it must get this feedback (telling it that packets have gone missing) in a timely fashion. If the feedback is delayed, the algorithm overcompensates, oscillating between sending data too fast and sending it too slowly. This wreaks havoc not just on that connection but on any others which happen to share the communications channel. With TCP, delays in feedback are somewhat detectable (due to the timestamps defined in RFC 1323), but the system somehow still manages to misbehave.
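A toy simulation shows the effect. This is not real TCP, just the additive-increase/multiplicative-decrease skeleton of its congestion control: the sender adds to its rate each tick and halves it on a loss report, but the reports take some number of ticks to arrive.

```python
# Toy model of delayed congestion feedback (the AIMD skeleton of TCP,
# not real TCP): loss reports take `delay` ticks to reach the sender.

def simulate(capacity=100, delay=1, ticks=200):
    rate = 1.0
    in_flight = []                      # (arrival_tick, was_over_capacity)
    rates = []
    for t in range(ticks):
        # the network notices overload immediately, but the report
        # reaches the sender only `delay` ticks later
        in_flight.append((t + delay, rate > capacity))
        while in_flight and in_flight[0][0] <= t:
            _, lost = in_flight.pop(0)
            if lost:
                rate /= 2               # multiplicative decrease
            else:
                rate += 1               # additive increase
        rates.append(rate)
    return rates
```

With a one-tick delay the rate hovers in a band around the capacity; with a twenty-tick delay it overshoots well past capacity and then collapses to nearly zero as the backlog of stale loss reports arrives, and the cycle repeats.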
Undetectable delay is such a potent destabilizing influence that one might wonder why there haven’t been even more flash crashes. The answer seems to be that not all participants were victimized by the delay. This will take a bit of explanation of the stock market system as it exists today.
The authorities have decreed that there can be multiple stock exchanges all trading the same stocks, but that they have to be bound together to all more or less have the same prices, to be part of a “national market system”. Rather than just letting each exchange operate independently and letting traders arbitrage between them to even out the prices, there is a “consolidated quotation system” that gets data feeds from all the exchanges and computes a “national best bid and offer”: a listing, for every stock, of the best price at which anyone will buy and the best price at which anyone will sell. But this transmission and consolidation of data takes time (several milliseconds), so the system can never be quite up to date.
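The consolidation itself is conceptually trivial; the hard part is that every input is already a few milliseconds stale by the time it arrives. As a sketch (with an invented data layout):

```python
# Sketch of the consolidation step: the national best bid is the highest
# bid on any exchange; the national best offer is the lowest offer.

def nbbo(per_exchange):
    """per_exchange maps exchange name -> (best_bid, best_offer) for one stock."""
    best_bid = max(bid for bid, _ in per_exchange.values())
    best_offer = min(offer for _, offer in per_exchange.values())
    return best_bid, best_offer
```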
The rules dictate that orders sent into any exchange must be routed to the exchange with the best price. But that is more a promise than a guarantee, since the transmission takes time; information can’t travel any faster than the speed of light. So when the order arrives the price may no longer be available. And it’s not just that accidents occasionally happen; high-frequency traders play games with this, quickly withdrawing orders and substituting ones with worse prices. To really get the simplicity that the system pretends to, the regulators would have to dictate that each stock be traded only on a single exchange; all orders for it would be sent to that exchange. There could still be competition between exchanges, but each stock would have to decide which of the competing exchanges it was to be listed on.
Alternatively, they could just drop the pretense of the system being a unified whole and rely on arbitrage to equalize prices between exchanges; even that would be simpler than a system that tries to be unified but really isn’t.
In any case, that is the environment that the flash crash took place in: multiple exchanges which are supposed to act like a single unified system but don’t quite, with the consequence that high-frequency traders and other heavy hitters find themselves needing to get direct data feeds from all the exchanges rather than just subscribing to the consolidated feed. But since direct feeds are expensive, the vast majority of traders just get the consolidated feed.
Though that first Nanex flash crash report identifies mis-timestamping, it doesn’t say who exactly did it. Subsequent reports (e.g. this one) fill in that detail: it was the consolidated quote system, which timestamped the quotes after they arrived at the data center which does the consolidation. Indeed, it is still doing so, though that should change due to the recent SEC ruling that exchanges must start sending timestamps to the consolidator — a ruling that came to public attention two weeks ago when the NYSE, in preparing to implement it, broke their systems for a large part of a day. (What seems to be disconfirmed, though, are the Nanex claims that there were already timestamps on the data coming into the consolidator and that those correct timestamps were being replaced with incorrect timestamps.)
Another Nanex article adds a further detail: while the direct feeds to colocated high-frequency traders use the UDP protocol, the feed to the consolidated system uses TCP. (Yes, the stock exchanges use the same sort of networking hardware that the Internet does, though of course operating over private links rather than over the public net.) TCP, it will be recalled, is the main victim of bufferbloat; UDP is less affected. The “U” in UDP doesn’t actually stand for “unreliable”, but that’s the easiest way to remember it: in UDP, data packets are just sent over the network without keeping track of whether any of them have been lost or making sure they arrive in the right order. UDP may in fact be reliable if the underlying network is reliable, but it doesn’t protect the data like TCP does, and thus is not subject to the complicated pathologies that can arise from trying to protect the data. Still, even with UDP the large buffers are still there, invisible until some link gets overloaded but then adding delays to the transmission of data.
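The fire-and-forget character of UDP is visible even in a few lines of code. A minimal sketch (the quote strings and port number are invented), sending each quote as one datagram with no acknowledgement, no retransmission, and no ordering:

```python
import socket

def send_quotes_udp(quotes, host="127.0.0.1", port=9999):
    """Fire-and-forget: each quote goes out as one datagram, untracked."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for q in quotes:
            sock.sendto(q.encode(), (host, port))  # no ACK is ever awaited
    finally:
        sock.close()
```

A lost datagram costs only that datagram; nothing behind it stalls waiting for a retransmission, which is exactly the pathology TCP is prone to.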
In 2010, the year of the flash crash, the available networking hardware and software generally suffered from severe bufferbloat; widespread awareness of the bufferbloat problem dates from late 2010, when Jim Gettys coined the term and wrote several articles on it. Though good algorithms for managing buffers have since been developed, getting them widely fielded in networking hardware is another matter, and is still (in 2015) very far from complete. The flash crash involved, at its worst, delays of tens of seconds; that’s more than enough for a TCP connection to exhibit pathological behavior. So bufferbloat probably made the flash crash worse, though to what extent is hard to tell.
The official SEC / CFTC report on the flash crash reads like its authors had heard of the Nanex argument that delay was the cause, thought they had to say something regarding that argument, but didn’t think they had to really take it seriously. Indeed, from a social point of view they didn’t; the Nanex report gives an account of how delay caused the flash crash, but it’s the sort of account that even when correct does not convince people. It is terse enough and has enough jargon that it can’t reach the general public; and specialists will wonder what parts of it the author really knows and what parts he’s just guessing at. I am not too sure myself — which is why, though not disagreeing with it, I have not relied on it here, but instead have just been arguing for the general proposition that delays in feedback are destabilizing. But for those who want a blow-by-blow account, this, again, is the link.
The official report’s section on delays focuses on the worst delays, which occurred rather late in the crash and thus couldn’t have had a causative role. The word “timestamp” (or even just “stamp”) occurs nowhere in the entire report. There is a mention of a delay of five seconds in a direct feed; but there are many direct feeds, and nothing is said about how prevalent this might have been, nor about whether any delays in direct feeds occurred early in the crash and might have fed it. On the whole the report reads like they were looking for a villain, rather than regarding the situation as a dynamical system and trying to understand how the system’s reactions could have been so messed up.
They did in fact find a villain, or at least thought they did: a company which had been selling a large number of “E-Mini” futures (futures on the S&P 500 stock index). The Chicago Mercantile Exchange disagreed about this being at all villainous, as did Nanex, who said that the SEC/CFTC had even gotten their facts wrong about the trading algorithm that the company used. That Nanex article then goes on to argue that the actual cause of the system overload and delays was that high-frequency traders had dumped a lot of E-Mini contracts very quickly — and that since the E-Mini is something like a future on the whole market, this propagated via arbitrage into quite a lot of buy and sell orders on various stocks and other financial instruments, which is what produced the overload and delays in the quote system.
Looking at the graph where those large dumps are marked, they seem to have a total value of something like 500 million dollars (about ten thousand contracts, with each contract being 50 times the value of the S&P 500, which is roughly a thousand). This is still nowhere near the sort of impetus that is normally required to create a trillion-dollar shift in the stock market. But it is plausible as something that increased the level of market activity to where the quote system became overloaded.
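The arithmetic behind that estimate, using the rough figures just given:

```python
contracts = 10_000        # roughly the number of E-Mini contracts dumped
multiplier = 50           # each contract is 50 times the S&P 500 index
index_level = 1_000       # the index was roughly a thousand at the time
notional = contracts * multiplier * index_level   # about half a billion dollars
```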
This tactic of sending a big pulse of orders strongly resembles the behavior of the “Thor” algorithm that Michael Lewis describes in Flash Boys as being a way for investors to escape the depredations of high-frequency traders. The idea is that the big pulse of orders all gets filled before anyone can react by moving prices. But here it was done by high-frequency traders. It’s also something that can be done to people like that guy in London who was trying to manipulate the market: slamming the market with a massive pulse of buy orders would make his not-meant-to-be-filled sell orders actually get filled, giving him notice that he was swimming in deeper water than he’d thought and with larger predators.
In any case, the details of who exactly overloaded the system seem relatively unimportant: any system has to expect occasional overloads and should handle them gracefully. That might include a delay but does not include mis-timestamping which conceals the existence of the delay.
To finally get back to the question of why there haven’t been more flash crashes, as mentioned above, high-frequency traders and other large players in the market have direct feeds; they don’t rely on the consolidated feed. Its misbehavior in the flash crash no doubt prompted them to use it even less, at least as a source of ground truth. But they probably use it more as a source of information that reveals what other traders are misinformed about. Preying on the misinformed can be quite profitable even if the misinformation is just a matter of 200 milliseconds of delay. And though predatory, it probably is predatory in a way that stabilizes the market, since taking advantage of misinformed people moves the market in the opposite direction from the way they’d move it.
But it would be even better if there were no misinformation in the first place. Having correct timestamps, so that delays are evident, is a good first step. As mentioned above, the SEC has dictated that timestamps be sent from the exchanges, and this is being implemented. If these timestamps are passed on to the consolidated feed’s customers (as they should be), a lot of customers will suddenly notice that they’re getting a fair bit more delay than they thought they were getting, and will complain.
And they should indeed complain, since there’s no reason for quotes to get stuck in buffers on the channel between the exchange and the consolidator. The fix (courtesy of the bufferbloat community) is for the software that sends the quotes to rate-limit its output. It would keep its own modest buffer (perhaps enough to hold a millisecond’s worth of quotes) and would discard any quote that overflowed the buffer. It would empty that buffer at a fixed rate which was slightly below the data rate of the outgoing channel. This way data would always be arriving at any point in the channel at a slower pace than it could be sent out at, so buffers in the channel would not fill up.
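A simplified sketch of that fix (the capacity and rate numbers are invented): a small bounded buffer that discards on overflow and drains at a fixed rate set slightly below the line rate.

```python
from collections import deque

class QuoteShaper:
    """Sender-side rate limiter: keep the queue small; drop rather than delay."""

    def __init__(self, capacity=1000, per_tick=10):
        self.buf = deque()
        self.capacity = capacity   # e.g. about a millisecond's worth of quotes
        self.per_tick = per_tick   # set slightly below the channel's line rate
        self.dropped = 0

    def offer(self, quote):
        """Accept a quote, or discard it if the buffer is full."""
        if len(self.buf) >= self.capacity:
            self.dropped += 1      # overflow: discard rather than let it go stale
        else:
            self.buf.append(quote)

    def tick(self):
        """Emit at most per_tick quotes per interval."""
        out = []
        while self.buf and len(out) < self.per_tick:
            out.append(self.buf.popleft())
        return out
```

Since quotes leave the shaper no faster than the channel can carry them, the buffers downstream never fill: a quote is either delivered promptly or dropped, never delivered seconds late.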
At least that would be the simplest fix; more sophistication could be applied. For instance quotes closer to the best bid/offer could be given priority over quotes farther from it, and could bump them from the buffer. Also, a level of fairness among stocks could be enforced so that huge quote activity in one stock wouldn’t crowd out occasional quotes in other stocks. But even the simplest solution would be an improvement over leaving the buffers uncontrolled.
The above fix is just for delays on the link from an exchange to the consolidator. Delays from the consolidator to subscribers are harder, since they go over many different channels and share them with other data. There can also be delays in the matching engine of the exchange itself; and there can be congestion on the way to the exchange. In those cases, too, it’s probably best to adopt the general policy to throttle rather than delay — that is, to discard orders that are coming in too fast rather than letting them accumulate in a big buffer to later emerge stale. How exactly that might be done is a large topic; but at first glance it seems that what is needed is not some Big Idea but rather a lot of petty negotiations about who pays for what network services and exactly what they get in exchange — negotiations that already take place, but should be expanded to include stipulations about what exactly happens when a link gets congested: whose packets get priority (if anyone’s), what data rate is guaranteed to the subscriber, and what the maximum delay might be.
In modern programming, there are a lot of things that are done not because the computer needs them for the code to work but because the programmer needs them to cover for his natural human failings; they make the programming environment simpler and more predictable. In the stock market, eliminating congestion delays falls into this category. Code that deals with markets will normally be written and tested with delays that are relatively minimal. When all this code is suddenly thrown into a high-delay environment, the results will be unpredictable; even if each individual piece does something locally sane, the pieces may interact badly. The players with direct feeds perhaps have enough of an information advantage, and enough money, to take advantage of the traders thus misled and ruthlessly crush their attempts to destabilize the market, but it still would be best if they didn’t have to.
Also, in the case of high-frequency traders, the “enough money” part may be doubted: such companies try hard to keep the level of their holdings small. After the flash crash was well under way, there was a “hot potato” period where HFTs traded E-Mini contracts among one another, continually passing them on at lower and lower prices. If any of those firms had had the nerve to hold onto those contracts for a few minutes until the market had recovered, it would have made a very healthy profit. They no doubt realized this to their chagrin after the fact, and at least thought of changing their algorithms to deal better with such situations.
Indeed, in general, flash excursions like this (“excursions” because they can happen upwards as well as downwards) represent someone being foolish and are an opportunity for others to take advantage of that foolishness. Thus does the market discipline its own. But the more complicated the system is — the more it misbehaves in weird ways — the longer it takes for participants to learn how to deal with it. And when an event is as large as the flash crash was, one can expect that the system itself, not just participants, displayed some rather serious misbehavior. The hidden delays in the consolidated feed definitely qualify as serious misbehavior.
Of course, while I think this was the cause of the crash, I’m not pretending to have offered proof of it — not only because I haven’t delved deep into details of what happened that day, but also because in cases like this even people who agree about what caused what in the chain of events can disagree about which of those causes was truly blameworthy and points to something that should be changed to prevent a recurrence. But in this case eliminating congestion delays seems to be the easiest prescription: it does not involve any moral crusades but is just a matter of fixing things on a technical level.
Torture’s effectiveness (or lack thereof)
Often in a controversy the things that are most interesting are the things that there isn’t any particular controversy about. Such is the case with the recent torture report from the Senate Committee on Intelligence. One of its twenty conclusions was:
16: The CIA failed to adequately evaluate the effectiveness of its enhanced interrogation techniques.
The CIA never conducted a credible, comprehensive analysis of the effectiveness of its interrogation techniques, despite a recommendation by the CIA inspector general and similar requests by the national security advisor and the leadership of the Senate Committee on Intelligence.
As they then explain, they are referring to the sort of analysis that they themselves conducted: looking at each piece of important intelligence (such as the identity of Osama bin Laden’s courier), finding where that information came from, and trying to figure out whether the individual from whom it came had been subjected to harsh treatment prior to providing it — and if so, whether it seemed like the harsh treatment had been essential in getting it out of him. This is the sort of analysis they were accusing the CIA of never having done; instead, according to them, the CIA’s own internal reviews had relied on the opinions of the people who designed and ran the interrogation program, and had fobbed off outside queries by answering a different question, namely whether the interrogation program as a whole had produced worthwhile intelligence.
The CIA’s official response:
We agree with [this conclusion] in full.
They say more, but nothing more needs to be said. And they don’t try to walk back that admission by saying something along the lines of “well, we had no formal assessment, but informally we had a good handle on it”.
As for why they would concede such a thing, a lot of it is that they don’t know how their prisoners would have responded to ordinary interrogation because they didn’t try; instead they just went straight to physical abuse. (I write “physical abuse” because most of their techniques don’t rise to the level of torture. Slamming someone against a wall is roughing him up, not torture. Waterboarding probably qualifies as torture, as does keeping someone awake for three days at a stretch. And “rectal feeding”, which they did on a few occasions, is just foolishness: it isn’t painful enough to be torture, and it’s not a viable way to feed someone; the digestive tract doesn’t work in reverse.) In any case, because of this practice of going straight to physical abuse, in most (perhaps all) of the cases where the Senate report argued that the valuable information obtained from a prisoner was obtained prior to him being physically abused, they could make this argument because some other organization had interrogated the prisoner first.
The Senate report seems to be trying to give the impression that not a single piece of useful information was derived by the CIA from someone being physically abused. They of course do not say that they are trying to prove such a thing, because it’d be pretty silly to try to prove that torturing someone absolutely eliminates the chance of getting useful information out of him. But the vast majority (perhaps all) of the examples in their thousands of pages of examples point that way.
The CIA of course has responses — most of which seem intended to take the edge off the criticisms rather than thoroughly refute them. Some of the responses are quite weak. To argue, for instance, that although the government already had a certain piece of information that was re-obtained by abusing a detainee, the prior information hadn’t been available to the relevant CIA officer, comes perilously close to arguing “we had to abuse people because our computer systems were mismanaged”. Likewise, arguing that information from a detainee was valuable even though it just confirmed information they already had from multiple sources comes perilously close to arguing “we had to abuse people because we were too stupid to know when we’d already found the truth”. The one example the CIA offered that seemed to show that physically abusing a detainee had been worthwhile was that Hambali, under duress, said that a certain group of students had been being groomed as pilots for Al Qaeda operations, then later tried to retract it — but it was judged to be correct, on what sounds like good grounds (although it’s hard to really tell).
The CIA had a rather strange theory of torture, differing greatly from the usual notion of telling a prisoner that if he doesn’t answer the question the pain will start (or will continue). Instead of trying to extort information, they were trying to break people — to reduce them to a state of “learned helplessness”, after which supposedly they would answer questions. Learned helplessness is a notion that comes out of experiments with dogs; the dogs were tortured with electrical shocks under conditions where they truly were helpless; and then later, when the door of the cage was left open and they could have escaped, they still lay there whimpering under the shock rather than jumping out. It is not clear how this would help with interrogation of humans; there would seem to be no need for detainees to “learn” helplessness when they can just be put in a situation where they really are helpless; and passively submitting to a hopeless situation is different from actively answering questions correctly.
The CIA’s theory originated not with experienced interrogators but with two psychologists from the military’s SERE (Survival, Evasion, Resistance, and Escape) school, who had been waterboarding trainees there in order to give them a taste of what they might have to go through. One of those two, James Mitchell, in an interview with Vice News, seemed to be trying to tell the interviewer that he had misgivings about that use of waterboarding: a reaction from trainees who’d gone through it was that they never wanted to go through that again, and if captured would just tell the enemy everything; so the exercise was just “doing the enemy’s work for them”. Unfortunately the interviewer did not pick up on this and ask whether the psychologist indeed meant that they should stop waterboarding SERE trainees; but that seems like the logical conclusion. The military, while it believes in practicing for war, has long held to the rule that “you don’t need to practice bleeding”, and it seems reasonable that the same should go for being tortured. Also unfortunately left unasked in the interview was what exactly the idea behind this “learned helplessness” theory of torturing people was; the Senate report is somewhat vague on this, and if posed as an abstract question about human psychology, it should have been answerable without divulging classified information.
In any case, the CIA sure didn’t break Khalid Sheikh Mohammed, perhaps their most important detainee. He soon picked up the pattern of the waterboarding, such as that each pour of water was to last thirty seconds; near the end of the pour, he would hold up fingers in the air to count off the remaining seconds. And then, under questioning, he continued making stuff up left and right, doing his best to distract the agency from its pursuit of his associates. In one case he got two innocent men arrested; in another, he talked about a plot to assassinate former President Jimmy Carter. (Somehow the CIA thought that that was worth writing down, rather than just laughing at.) What worked, with him, was to show him that someone had been arrested; then he would give up details on that person. Occasionally he slipped up and said something useful; but it’s not like he got anywhere near being broken. Not that one would expect the mastermind of the 9/11 attacks to break easily, of course; torture advocates might argue that waterboarding was too wimpy and that the rack or the thumbscrew would get better results — or perhaps, to pick a more modern method, that it should have been electricity to the genitals.
The way that torture has been advocated in recent years, though, has been rather strange. The scenario that was usually invoked was of the catching of a terrorist who has planted a bomb and who must be forced to divulge the location of that bomb before it explodes. In the history of terrorist plots, though, this is not something that happens much. Often someone is caught before planting a bomb; often he is caught after it explodes; but being caught in the interval between planting the bomb and it exploding? That is usually a relatively short interval of time; in suicide bombings, its length is zero. Even if a terrorist is caught during that interval, he has to be caught in such a way that the authorities know he has planted a bomb which hasn’t exploded yet; they somehow have to have insight into his operation, yet not enough insight to know his target.
But though it seems unlikely, this “ticking time bomb” scenario is not entirely empty. It could happen. The people who invoked the scenario never (that I saw) offered any documented cases of it happening — any cases one could point to and say “if we only had tortured X, we could have prevented the N deaths and M severe injuries from the bomb he laid”. They certainly did not offer long lists of cases like that, bemoaning “how many more casualties need to be added to this list before we come to our senses?” For all I know, the scenario has never happened, though I would guess that it has happened at one point or another, somewhere or other. Yet even if it hasn’t, it might in future.
It’s just that this is not the way to make laws. For every law, there are cases where it would be best to break the law. Even for murder, one can point to cases where murdering someone would have stopped the much greater harm that he did, and where someone was already inclined to murder him and was held back only by fear of the law. There are occasional times and places even for treason; this nation was formed by a treasonous rebellion, as have been many others. Yet to make murder and treason legal would mean anarchy. Likewise for torture: just because there are rare cases where torture would be worthwhile does not mean that rules should be written to permit torture. For exceptional cases, the law has escape hatches. Prosecutors are likely to decline to charge someone who by torturing has prevented many deaths and injuries; juries are likely to decline to convict; and in the last resort, a pardon is likely to be granted.
Besides this outright advocacy of torture, there is another, subtler form of advocacy, perhaps unintentional, which consists of fictional depictions which show it working better than it really does. I have not made anything like a comprehensive survey of these, and would not care to; but the movie The Battle of Algiers is perhaps the most prominent example. It doesn’t expressly advocate for torture, of course; rather, it uses the fact that the French tortured as propaganda against them. Yet it presents torture as a very effective tool for rounding up terrorist networks. The military is shown marching in, in response to some terrorist bombings, and the commanding officer is shown explaining to his subordinates what their tactics will be. To round up networks of terrorists, he says, they need information. And how to get information? “L’interrogatoire!”, he exclaims. It is implicit that this means torture; and the principal terrorist that the movie focuses on is indeed found via torturing someone.
The movie is compelling enough, and fair enough to the military, that generations of counterterrorist specialists have watched it to gain an idea of what happened in Algeria; and it is indeed well worth watching even if just to see what things looked like. But more recently, one of the principal officers in charge, Paul Aussaresses, published his memoirs. Reviews of the book largely focused on his admission that torture was used. But when read in detail, the book tells a different story. Yes, they tortured; but mostly what they did that was objectionable was to kill people without trial — thousands of them. As Aussaresses tells it, for the vast majority of the terrorists, torture wasn’t necessary to get information out of them; they spilled the beans without being tortured. It is a very different story than that told by the movie, and one which reflects worse on pretty much everyone involved: on himself and his men (for those executions without trial), on the terrorists (for not being the hardened revolutionaries they’d like to think of themselves as), and of course on those who exaggerated the role of torture and missed the mass killings.
Though the book received plenty of condemnation, its accuracy does not seem to have been a point of criticism; mostly it was criticized for being appallingly tone-deaf. Which it is; Aussaresses was not the sort of man who might quote Napoleon’s dictum that in war, “the moral is to the physical as three is to one”. Both in Algeria and when writing, he focused purely on the short-term effect of his executions without trial. The long-term effect of poisoning the public mind against France he does not consider — for that, he seems to prefer to blame leftists in the press. Which is understandable; the press was indeed a middleman for these sorts of charges, and there were plenty of leftists in the press who sympathized with the Algerian independence movement and overlooked its savage nature. (That the savagery was not simply a problem of the French being there has been shown quite amply by the post-independence history of the country, which has featured terror and repression far worse than the French ever received or dished out.) But to put the entire blame on the press, as if the underlying facts did not matter, is too easy. Leftists commonly neglect the truth, but their charges only have serious traction with the general public when there is some truth to them.
But though this heedlessness of public opinion may have contributed to losing the Algerian war, and certainly made the author a pariah in France after his book was published, it does lend plausibility to his claim that torture wasn’t all that important: whatever his reasons for making that claim might have been, political correctness could not have been among them.
A frequently given piece of advice for writers is to avoid long sentences. It’s one of the pieces of advice that I have always completely disregarded, as being obviously wrong: the thing to avoid is not long sentences, but complicated sentences. A sentence that is long can still be quite simple, if it doesn’t require the reader to remember previous parts of the sentence in order to parse the rest. Instead, each part of the sentence just extends the thought made in the previous one, with appropriate punctuation that shows the relationship between the two; a sentence of that sort can go on for many lines without confusing anyone. What is confusing is when a sentence does something like requiring the reader to remember which verb was used back forty words previously, before the sentence went off on a tangent. And the cure for such sentences is never as simple as just bisecting them. Often one can rearrange them to bring together the separated pieces of an idea; but if that doesn’t work, one has to drop the idea and then explicitly take it up again when one later comes back to it. Or, more brutally, one can axe the tangential remark; not everything needs to be said — or if it does need to be said, maybe it can be said somewhere else.
So if you find yourself breaking sentences apart to follow a rule that sentences should be short, you’re doing it wrong; if they can be broken apart without much trouble, they also weren’t any trouble for the reader to understand in the first place. It’s when you read a sentence, get lost, and have to backtrack to grasp its meaning, that rewriting is indicated.
Of course the difficulty in this is that just after you’ve written a confusing sentence, the thought that gave rise to it is fresh in your mind, so you remember what you were driving at even if most readers wouldn’t have a clue. The cure can be to run it by someone else, to wait a day (or maybe a week) and then revisit the text, or, in the long run, to just develop a sense of when a sentence is getting too complicated.
There is one excuse for complicated sentences, which is when you’re dealing with an underlying idea that is itself complicated, and you’re writing for a very narrow audience of people whom you expect to actually understand that idea. This is a rare circumstance; most writing — even most technical writing — has to be palatable to people who will just nod their heads at it without really understanding it. But sometimes one is writing for friends; and then the excuse that the sentence is no more complicated than the underlying idea can be a reasonable one. Even then, it still would be better to explain the idea using simpler sentences; but that takes more effort.
There is a similar state of affairs with writing computer programs, where people often insist that functions should be short. While much bad code falls afoul of that rule, the rule doesn’t really get at the essence of the problem. It’s not the functions which one reacts to by saying “okay, this is getting long, let’s move the top of it to its own function” which are the problem; it is the functions which one reacts to by saying “oh, dear lord, what is this function doing?” — or, in one’s own just-written code, “whoa, this is getting a bit hairy; how can I break it up into simpler pieces?” And the pieces do have to be simpler, for the effort to make sense; otherwise the reader of the code is left contemplating not just a horrible mess, but a horrible mess that has metastasized.
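The point can be illustrated with a small made-up C sketch (the date-checking functions are hypothetical, not something from the text): a compact function whose nested conditionals make it hard to follow, next to a version broken into pieces that really are simpler than the whole.

```c
/* Hairy: the whole rule buried in one nest of conditionals. */
int valid_date_hairy(int y, int m, int d)
{
    if (m >= 1 && m <= 12) {
        if (d >= 1) {
            if (m == 2) {
                if ((y % 4 == 0 && y % 100 != 0) || y % 400 == 0) {
                    if (d <= 29) return 1;
                } else {
                    if (d <= 28) return 1;
                }
            } else if (m == 4 || m == 6 || m == 9 || m == 11) {
                if (d <= 30) return 1;
            } else {
                if (d <= 31) return 1;
            }
        }
    }
    return 0;
}

/* The same rule split into pieces, each simpler than the whole. */
static int is_leap_year(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

static int days_in_month(int y, int m)
{
    static const int days[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
    return (m == 2 && is_leap_year(y)) ? 29 : days[m - 1];
}

int valid_date(int y, int m, int d)
{
    return m >= 1 && m <= 12 && d >= 1 && d <= days_in_month(y, m);
}
```

Neither version is long; the difference is that in the second, each piece can be read and checked on its own, which is what the effort of breaking things up is supposed to buy.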
The reason people make rules about length rather than complexity is that length is easy to define, while complexity isn’t. Indeed, complexity is to some extent a matter of personal taste and experience. If you’re used to some particular code pattern, you’ll consider it less complicated than will someone who is seeing it for the first time. This also makes the rule to avoid complexity a rule that is hard to teach: from a beginner’s point of view lots of things about programming are complicated, so avoiding complexity would mean avoiding learning how to program. Instead, beginners must be given a set of easy-to-apply rules that aim at that end, such as avoiding gotos. Such rules are never perfect; but since they have been handed down by high authority, programmers often advocate them with near-religious fervor. Not that really good programmers do that; they have long since realized that these are just rules of thumb, not iron laws, and that there are times when disregarding them makes sense. But avoiding long functions isn’t even a particularly good rule of thumb; avoiding gotos, for instance, is much more to the point (though even to it there are exceptions).
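The best-known exception is error-handling cleanup in C, where a goto to a single exit point is often simpler than the nested conditionals needed to avoid it. A sketch of the pattern (the file-reading function here is a made-up illustration, not something from the text):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read a whole file into a freshly allocated, NUL-terminated buffer.
   Every failure point jumps to one cleanup section, so each resource
   is released in exactly one place. */
int read_file(const char *path, char **out, long *len)
{
    int rc = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        goto done;
    if (fseek(f, 0, SEEK_END) != 0)
        goto done;
    *len = ftell(f);
    if (*len < 0 || fseek(f, 0, SEEK_SET) != 0)
        goto done;
    buf = malloc((size_t)*len + 1);
    if (buf == NULL)
        goto done;
    if (fread(buf, 1, (size_t)*len, f) != (size_t)*len)
        goto done;
    buf[*len] = '\0';
    *out = buf;
    buf = NULL;               /* ownership passed to the caller */
    rc = 0;
done:                         /* single exit: all cleanup lives here */
    free(buf);
    if (f != NULL)
        fclose(f);
    return rc;
}
```

Written without the gotos, the same function needs either a ladder of nested ifs or duplicated cleanup code at every failure point; either way, the reader has more to keep track of, not less.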