Where did all these silly “similar to”s come from?
Something I’ve noticed a fair bit recently is the use of the phrase “similar to” at the start of a sentence, where one would ordinarily use “like”. Say, instead of writing
Like dogs, cats have four legs.
someone would write
Similar to dogs, cats have four legs.
When I first encountered this usage, I parsed it wrong — meaning that I parsed it by the normal rules of the language, where the leading clause is a parenthetical remark that cats are similar to dogs. But as I ran into more such instances, I realized that such sentences are never meant to be parsed that way — that “similar to” is just being used as a synonym for “like”. Under the rules for “like”, the sentence is not saying that cats and dogs are similar, just that they share the property of having four legs.
Well, okay; if people want to extend the rules for “like” to “similar to”, who am I to stop them? Language changes, sometimes for the better and sometimes for the worse. But it leads to the question: what is it about “like” that is causing people to avoid it?
I have several theories:
“Like” is something that, like, teenage girls say, and so is inappropriate for pompous, pedantic writing — which is commonly where I’ve seen these strange “similar to”s. In particular, I’ve seen a lot of them in formal medical and biology articles. In those fields, there are a lot of women who don’t want to seem like teenage girls in their writing. (Not that they are alone in this strange usage; males have picked it up too.)
“Like” is too short and simple, and inappropriate for this era of obfuscation, so the same sorts of people who write “utilize” instead of “use” write “similar to” instead of “like”.
People don’t really know where to use “like” any more, as opposed to alternatives such as “as with” or “as in”, so they just use “similar to” whenever any of them is called for, in the hope that it will do. To illustrate this distinction, the sentence
Like everything else, the more practice you have the better you can become.
should really be
As with everything else, the more practice you have the better you can become.
since “everything else” isn’t like “more practice”, “you”, or any other subpart of the sentence — not even in some particular way, as in the first example, where cats and dogs both have four legs. But it seems like some people, vaguely sensing that “like” isn’t quite the right word, would make it even worse, by writing
Similar to everything else, the more practice you have the better you can become.
(Not that I’ve seen that particular sentence in the wild, but I’ve seen analogous ones.) Unlike the simple substitution of “similar to” for “like”, this sort of muddling actually subtracts information as compared to a proper phrasing, so is substantially more objectionable.
These three theories are not mutually exclusive.
When Donald Rumsfeld came out with his line about there being “unknown unknowns”, a lot of people laughed, and in response his defenders sneered at the laughers. But I didn’t see on either side a real appreciation of the phrase — indeed, I still haven’t, from anyone.
These are unknowns.
They are not the “known unknowns”, which “we know we don’t know”.
Instead they are things which “we don’t know we don’t know”.
So these were things he (and others) thought he knew, but he didn’t know — in simpler words, things he was wrong about.
This makes Rumsfeld’s line one of the most unusual things said by a politician in recent memory: an admission of error. Not just that he had been wrong in the past — as in the line, which politicians hate, but are sometimes forced into, “yes, that was a mistake, but now I know better”. This, though presented confusingly, was an even rarer admission: that he was wrong in the present and going to be wrong in the future. If he’d wanted not to obfuscate but to put it dramatically, he could have turned one of Shakespeare’s lines against himself, saying “There are more things in heaven and earth than are dreamt of in my philosophy”.
Which indeed turned out to be the case.
Why the immune system is so complicated
Trying to understand the immune system can seem like a neverending task. There are dozens of varieties and subvarieties of immune system cells, with new subvarieties being discovered every so often. For sending messages between those cells, there are dozens (or is it hundreds?) of signaling molecules (“cytokines”, among others). A signaling molecule that turns up one part of the immune system may turn down another, as in (but almost certainly not limited to) the “Th1” versus “Th2” concept, itself a not very precise notion. There are also homeostatic loops in which the body reacts to its own reactions, damping an immune response when it has gone on too long and threatens to do more damage than it is worth.
Such subtleties have not propagated to popular culture, where various substances are described as “boosting the immune system”, with no qualification as to which part of the system is being boosted or how long the boost might last. But they are well known to specialists — who, themselves, will be the first to admit that even they don’t fully understand the system and that it needs more study. The immune system is often blamed for disease; and while it is not surprising that so complicated a system might malfunction, this is generally a diagnosis of exclusion, the sort of conclusion Darwin made it a rule not to believe: a disease is labeled “autoimmune” because nobody has found a causative microbe goading the immune system on, not because anyone has proven that no such microbe exists. Indeed, although doctors commonly profess certainty that autoimmunity is the root cause of such diseases, the scientific literature usually carries a constant trickle of attempts to blame them on one microbe or another. The one thing that is completely clear about such diseases is that whatever immune system activity is going on isn’t curing the patient, and is causing him or her distress. The complexity of the system makes other conclusions uncertain.
So where does all this Rube Goldberg action come from? It’s tempting to blame evolution, and the accumulation of cruft in the genome, but evolution can be quite good at simplifying when simplicity is actually optimal. We have only one backbone in our body, not five sort-of-parallel ones all trying to combine to support us. So there must be something optimal here about complexity, and on consideration it’s obvious: if we could understand the immune system easily, so could microbes, and so they could subvert it easily. Indeed, it seems like whenever I read about the workings of any well-studied human pathogen, those workings include at least one way of eluding, deceiving, or sabotaging the immune system, and often two or three of them. Germs don’t seem to qualify as human pathogens, in the eyes of doctors, unless they have such a way; otherwise they are just one of the “harmless” background microbes which the immune system usually deals with so efficiently that we don’t even know that they are trying to eat us (though they can still be harmful in high doses). Yet even when a germ has three different ways of eluding the immune system, that doesn’t make it 100% deadly; most of the time the immune system can still eventually get it under control, using a fourth (and maybe a fifth and a sixth) mechanism in its arsenal.
This situation differs greatly from the situation with computers, where the simplest mechanisms to counter computer viruses and worms are commonly the best. With computers, you can make, in circuitry or with the aid of circuitry, a separate protected area which can’t be sabotaged. In wetware everything is swimming in the same soup: both microbes and immune system can do anything to each other that biochemistry allows, which is quite a lot. Any signaling molecule used by the immune system can be detected by microbes, allowing them to know what the system is doing, or can be synthesized by them, causing the system to do the wrong thing.
With computers, countering malware is mostly a question of how paranoid you are in letting information into the protected area. In practice the standard is often pretty permissive, but that is a matter of convenience — of programmers cutting corners to ship products fast, and of eliminating barriers that would inconvenience users. But then when customers suffer from security holes, programmers change course and get more serious about security. For those trying to make the best of this unpleasant tradeoff, simplicity is a good guiding light: when things are simple to program it lessens the temptation to cut corners; and where barriers must be inserted that inconvenience users, simplicity makes it possible to explain why those barriers are there.
People try to do some of the things in computer security that the immune system does, but it doesn’t work well. Antivirus products are the prime example. Like the immune system, they try to recognize hostile intruders, yet have no really definitive way of doing so. The result is that they spend so much effort searching that they often noticeably slow down machines, and that they sometimes interfere with legitimate activities — sometimes openly and obnoxiously objecting, and sometimes insidiously sabotaging. And like the immune system, they are themselves subject to subversion: a virus can alter the antivirus program.
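The recognition problem can be sketched in a few lines. This is a toy illustration, not any real product’s method, and the byte patterns and file contents are invented for the example: a signature scanner can only match patterns it has stored from past malware, so a genuinely new virus slips through, while an innocent file that happens to contain a flagged pattern gets wrongly accused.

```python
# Toy signature scanner. The "signatures" below are invented byte
# patterns, standing in for fingerprints of previously seen malware.
KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD"]

def scan(data: bytes) -> bool:
    """Return True if any stored signature appears in the data.

    Note there is no definitive test for hostility here: only a
    lookup against patterns the scanner has seen before.
    """
    return any(sig in data for sig in KNOWN_SIGNATURES)

# A brand-new virus matches no stored signature, so it is missed.
print(scan(b"novel virus, never seen before"))        # False (false negative)

# A harmless document that merely mentions a flagged pattern is blocked.
print(scan(b"user manual discussing EVIL_PAYLOAD"))   # True (false positive)
```

Both failure modes fall out of the same design: the scanner has no concept of what the data *does*, only of what it looks like, which is the limitation the paragraph above describes.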
In computing, this qualifies as a big mess, which many people choose to avoid entirely. In wetware, this sort of thing is the best we’ve got. As big creatures, we can afford a big mess of complexity; microbes don’t have the genome size to understand our immune system — or, to speak more precisely, to react as if they understood it. They can adopt the occasional dodge, but a full understanding, such as would be needed to take thorough control of the system and use it for their own purposes, is beyond them. For microbes to evolve to expand their genomes and get more complicated would go against their basic life strategy of being fast breeders who are small and simple. Also, it wouldn’t just be a matter of learning one host species’s immune system, but rather that of all their hosts. Most microbes can live off any one of a number of host species, which is a great advantage to them, since when they leave one animal the next potential victim that they encounter is likely to be of a different species. And, although immune systems of different species are similar, they are not identical, so learning how to deal with a variety of animals’ immune systems is harder than learning how to deal with just one.
Or, to view things another way, microbes don’t need to get more complicated: they’re already doing so well in the struggle for existence that doing a bit better wouldn’t provide them with much evolutionary advantage.
As I hope has been apparent, when writing of microbes “understanding” the immune system, I’m not referring to an intellectual understanding but to an operational one. An intellectual understanding is something that is possessed by a programmer who writes code to model a system; an operational understanding is something that is possessed by the code itself. Not that a microbe would do this digitally, of course; any model they might have of the immune system would be analog in nature, somewhat like the old analog computers. But in those, to have a working model of a system with N variables, you needed a computer with at least N amplifiers. To model the immune system in this fashion would mean making some sort of biochemical analog which had as many different working parts as the immune system does. By boosting the number of working parts, we put this task out of the reach of microbes — at the cost of making the system annoyingly complicated to its human students.