Accuracy

When I first read Shannon’s 1948 paper “A Mathematical Theory of Communication”, one set of numbers particularly jumped out at me: the numbers for the difference between the error rate of a communications channel and the rate at which information is lost from it. I recently ran into a free link to the paper (in a blog post by Federico Pereiro), which reminded me of it. As Shannon explains the issue:

Suppose there are two possible symbols 0 and 1, and we are transmitting at a rate of 1000 symbols per second with probabilities p0 = p1 = 1/2. Thus our source is producing information at the rate of 1000 bits per second. During transmission the noise introduces errors so that, on the average, 1 in 100 is received incorrectly (a 0 as 1, or 1 as 0). What is the rate of transmission of information? Certainly less than 1000 bits per second since about 1% of the received symbols are incorrect. Our first impulse might be to say the rate is 990 bits per second, merely subtracting the expected number of errors. This is not satisfactory since it fails to take into account the recipient’s lack of knowledge of where the errors occur. We may carry it to an extreme case and suppose the noise so great that the received symbols are entirely independent of the transmitted symbols. The probability of receiving 1 is 1/2 whatever was transmitted and similarly for 0. Then about half of the received symbols are correct due to chance alone, and we would be giving the system credit for transmitting 500 bits per second while actually no information is being transmitted at all. Equally “good” transmission would be obtained by dispensing with the channel entirely and flipping a coin at the receiving point.

He then goes on to calculate the transmission rate, for this case, as being 919 bits per second. That is, if 1% of the bits are flipped, it results in about an 8% loss in the capacity of the channel. In other words, to transmit data correctly in the presence of a 1% error rate, you have to surround it with error-correction codes that bulk it up by about 8% — in practice, by even more than that, since error-correction schemes are not perfectly efficient.
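
Shannon’s figure is easy to check. Here is a minimal sketch in Python (my own, not from the paper): the information lost per symbol is the binary entropy of the error rate, and the remaining rate follows directly.

    import math

    def binary_entropy(p):
        """Entropy, in bits, of a binary event with probability p."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    symbols_per_second = 1000

    for error_rate in (0.01, 0.5):
        # Shannon's "equivocation": the information lost per symbol through
        # the recipient's uncertainty about where the errors are.
        lost_per_symbol = binary_entropy(error_rate)
        rate = symbols_per_second * (1 - lost_per_symbol)
        print(f"error rate {error_rate}: {rate:.0f} bits per second")

    # error rate 0.01: 919 bits per second
    # error rate 0.5: 0 bits per second

The 1% case loses not 10 bits per second but about 81, which is where the 8% figure comes from.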

What is interesting is to consider the implications of this in less formal settings: in news reporting, for instance. A 1% error rate would be very good for a newspaper; even the best magazines, with fact checkers who call around to check details of the articles they publish, can only dream of an error rate as low as 1%. Most of their errors are things that only specialists notice, but they are still errors, and often ones which significantly change the moral of the story; specialists frequently sigh at the inaccuracy of the news coverage of their specialty. But that unattainable 1% error rate would still mean that, at best, one would have to throw out 8% of what is said as being unreliable. That is, if one were interested in perfect truth. People in the news business have long since become inured to the fact that what they print is imperfect, and their main concern is that the errors they make not be of the laughable sort — or at least not laughable to the general population. But if one wants to figure out the world, that is not good enough.

To make things worse, most of the world does not encode information in error-correction schemes to cover for the press’s errors. When anyone does so (and political groups have learned to do so), the scheme they usually use is simple repetition: saying things multiple times, in varying language. That’s quite an inefficient error detection/correction scheme, but it is the only one that most recipients can be expected to decode, and the only one that can get through a human news system in the first place: people would look at you quite funny if you tried to speak in Hamming codes.

For the rest of the information in the world — the stuff not sponsored by any political group — the situation is even worse. In the absence of deliberately added redundancy, one has to use whatever inherent redundancy there is in the information, and mistrust the stuff that isn’t confirmed that way, which is typically more than 90% of it.

This is why reliable sources of information are so valuable: one doesn’t have to regard them with near-fanatical mistrust. It is tempting, once one knows the truth about something, to be liberal in excusing errors in descriptions of it, on the grounds that those descriptions are mostly right. But that is to forget, as Shannon puts it, “the recipient’s lack of knowledge of where the errors occur”, for recipients who are still struggling to find the truth. It may be right to excuse the people who commit those errors, as human beings who perhaps did the best that could be expected of them in the circumstances, but that doesn’t make it right to excuse their work product.


Easy Bread

Bread right out of the oven has a taste that beats most anything that can be bought in a store. Even in stores that sell bread baked on the same day, it has usually been sitting around for hours. But baking one’s own bread is generally a hassle. Bread machines make it much easier, but like all machines, they aren’t quite as easy to deal with as one first imagines. They take up space; they sometimes break; they have to be cleaned. (The one I once owned featured an impeller that had to be dug out of the bottom of each loaf.) Nonstick surfaces make cleaning easier, but don’t entirely eliminate it.

A few years ago, I was sent a pointer to an article by Mark Bittman describing a method for making bread without kneading. It was further billed as a “truly minimalist” bread. But when I tried it, the effort required was not minimal enough for me. There was a lot of rigamarole about flouring the ball of dough, manipulating it two hours prior to baking, and preheating a pot to bake it in. Bittman calls himself “The Minimalist”, but he didn’t minimize this one.

The trick to minimizing it is to bake the loaf in a silicone mold. Stuff doesn’t stick to silicone, so when the loaf is done baking, you just turn the mold upside down and the loaf falls out. No greasing, flouring, or anything else is required for this to happen. I don’t even clean the silicone between loaves; it isn’t left immaculate, but the slight residue doesn’t harm anything.

The minimized recipe uses the same ingredients:

  • 3 cups flour
  • 1/4 teaspoon yeast
  • 1 teaspoon salt
  • about 1.5 cups water

Just about any sort of flour will do; flour without any gluten in it (non-wheat) will produce a dense, crumbly loaf, but that’s not a big deal. The salt can be omitted, or doubled, or whatever; it’s just for taste. Mix the dry ingredients for about a minute (they’re easier to mix dry), then add water, mixing, until the mass is completely wet, but no further. The amount of water needed depends on the type and brand of flour; the number given above is just a rough approximation. There shouldn’t be any mounds of unmixed flour hiding at the bottom of the mixture, nor any parts of the mixture whose surface is dry and which protrude a bit. But don’t add any more water than that, or you’ll get a rather wet loaf. For that matter, even when done right this yields a pretty wet loaf. Not that the wetness matters much; it has no effect on taste and not even much effect on texture.

Anyway, the next step is to wait 12 to 24 hours for it to rise. Or longer, if it’s cold; the original recipe specifies 18 hours at 70 degrees F, but those numbers are not critical. The long rising time is what makes it unnecessary to knead. It also allows for the development of a rich microbial flora, which provides the excellent flavor achieved with this recipe. Or, well, rather, I hope that’s not the case. And I don’t believe it is; the flavor has been quite uniform, which wouldn’t be the case if random germs were providing it. The quarter teaspoon of yeast is much less than is normally used, but still, it seems, enough to provide the vast majority of the inoculum. But, uh, best not to spit in the stuff while mixing it, or to mix with your hands. The baking will kill most germs that happened to get in and grow, but some might survive in spore form.

The next step is to transfer the mess to the silicone baking dish, and bake at 375 degrees F (190 C) for an hour or so. Exact timing is not critical. If you like a burned, er, “dark” crust, do as in the original recipe and increase the temperature and decrease the time.

The only part of this exercise that still seems like a hassle is cleaning the mixing bowl. Bread dough is sticky and doesn’t wash off easily. I tried doing the mixing in the baking mold, but that didn’t work well: silicone is thin and floppy, and the mold has square corners unconducive to mixing. Besides, when bubbles form in the mold as the bread rises, they form against the walls, so instead of a smooth crust one gets a crust with bubble holes in it. But perhaps it deserves another try. In any case, even as it is, this is competitive with bread machines for ease: probably somewhat worse, but not that much worse.

Once the bread is out of the oven, wait half an hour or so for the heat to continue penetrating into the interior, finishing up the cooking process.

Then, any part that isn’t eaten right away is best sliced up and put in the freezer, for later microwaving. That preserves most of the right-out-of-the-oven taste. The only trouble with doing that with this loaf is that the slices tend to stick; that’s the downside of having a wet loaf. The way to avoid sticking is to pack the slices in a pessimal fashion: instead of trying to pack them tightly, try to pile them up so as to leave space unused, so that they’re not touching each other very much. They’ll still stick, but generally can be pried apart without much trouble. (Still, people with weak hands might want to just pack the slices in a single layer so that they don’t stick at all.)


Audio sampling rates and the Fourier transform

Christopher Montgomery (“Monty”) recently posted an excellent argument against distributing music in 192 kHz, 24-bit form, as opposed to the usual 44.1 kHz (or 48 kHz), 16-bit form. I think, however, that many of the people who are inclined to doubt this sort of thing are going to doubt it at a much more fundamental level than the level he’s addressed it at. And I don’t just mean the math-phobic; I know I would have doubted it, once. For years, and even after finishing an undergraduate degree in electrical engineering, I wondered whether speaking of signals in terms of their frequency content was really something that could be done as glibly and freely as everyone seemed to assume it could be. It’s an assumption that pervades Monty’s argument — for instance, when he states that “all signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling”. If you don’t believe in speaking of signals in terms of their frequency content, you won’t know what to make of that sentence.

As it happens, the assumption is completely correct, and the glibness and freeness with which people talk of the frequency domain is completely justified; but it originally took some serious proving by mathematicians. To summarize the main results, first of all, the Fourier transform of a signal is unique. When you’ve found one series of sine waves and cosine waves that when added together are equal to your signal, there is no other; you’ve found the only one. (Fourier transforms are usually done in terms of complex exponentials, but when one is dealing with real signals, they all boil down to sines and cosines; the imaginary numbers disappear in the final results.) If you construct a signal from sinusoids of frequencies below 20 kHz, there’s no possibility of someone else analyzing it some other way and finding frequencies higher than that in it — unless, of course, he does it wrong (an ever-present danger).
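
That uniqueness is easy to see numerically, at least in the discrete setting a computer works in. The sketch below (my own illustration, using numpy; the frequencies and sample rate are arbitrary) builds a signal out of two sinusoids and confirms that Fourier analysis finds exactly those two frequencies and nothing else. (The frequencies are chosen to fit a whole number of cycles into the one-second window, to sidestep a caveat about discrete transforms discussed further below.)

    import numpy as np

    fs = 44100                            # one second of audio at 44.1 kHz
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

    # The only bins with significant energy are the two frequencies we put in.
    print(freqs[np.abs(spectrum) > 1e-6 * np.abs(spectrum).max()])
    # [ 440. 1000.]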

Also, the Fourier representation is complete: any signal can be exactly represented as a sum of sinusoids (generally an infinite sum of them, or an integral which is the limit of an infinite sum of them). There are no signals out there which defy Fourier analysis, and which might be left out entirely when one speaks of the “frequency content” of a signal. Even signals that look nothing like sine waves can be constructed from sine waves, though in that case it takes more of them to approximate the signal well.
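
For instance (my own illustration, not from Monty’s article): a square wave looks nothing like a sinusoid, yet partial sums of its Fourier series close in on it as more sinusoids are included.

    import numpy as np

    t = np.linspace(0, 1, 2000, endpoint=False)
    square = np.sign(np.sin(2 * np.pi * t))    # a 1 Hz square wave

    def partial_sum(n_terms):
        """Sum of the first n_terms sinusoids of the square wave's Fourier series."""
        harmonics = 2 * np.arange(n_terms) + 1             # only odd harmonics appear
        return (4 / np.pi) * np.sum(
            np.sin(2 * np.pi * np.outer(harmonics, t)) / harmonics[:, None], axis=0)

    for n in (1, 10, 100):
        err = np.sqrt(np.mean((square - partial_sum(n)) ** 2))
        print(f"{n:3d} sinusoids: rms error {err:.2f}")    # the error shrinks steadily

(The convergence is in the mean-square sense; right at the jumps the partial sums always overshoot a little, the well-known Gibbs phenomenon, but the total energy of the error still goes to zero.)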

But the main thing that makes it possible to be so glib about the frequency domain is that the Fourier transform is orthogonal. (Or in its complex-exponential variants, unitary, which is the corresponding concept for complex numbers.) What it means for a transform to be orthogonal can be illustrated by the example of coordinate transforms in three-dimensional space. In general, a coordinate transform of a three-dimensional object may twist it, bend it, or stretch it, but an orthogonal transform can only rotate it and possibly flip it over to its mirror image. When viewing 3D objects on a computer screen, applying an orthogonal transform just results in looking at the same object from a different angle; it doesn’t fundamentally change the object. At most it might flip the ‘handedness’, changing a right hand into a left hand or vice versa.

In the Fourier transform there are not just three numbers (the three coordinates) being transformed but an infinite number of them: one continuous function (the signal) is being transformed into another continuous function (its spectrum); but again, orthogonality means that sizes are preserved. The “size”, in this case, is the total energy of the signal (or its square root — what mathematicians call the L2 norm, and engineers call the root-mean-square). Applying that measure to the signal yields the same result as does applying the same measure to its spectrum. This means that one can speak of the energy in different frequency bands as being something that adds together to give the total energy, just as one speaks of the energy in different time intervals as being something that adds up to give the total energy — which of course is the same whether one adds it up in the time domain or the frequency domain.

This also applies, of course, to differences between signals: if you make a change to a signal, the size of the change is the same in the frequency domain as in the time domain. With a transform that was not orthogonal, a small change to the signal might mean a large change in its transform, or vice versa. This would make it much harder to work with the transform; you would constantly have to be looking over your shoulder to make sure that the math was not about to stab you in the back. As it is, it’s a reliable servant that can be taken for granted. As in the case of 3D coordinate transforms, but in a vaguer sense, the Fourier transform is just a different way of looking at the same signal (“looking at it in the frequency domain”), not something that warps or distorts it.
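
The energy-preservation property has a formal name, Parseval’s theorem, and it is easy to check numerically. A small sketch of my own (numpy’s FFT convention happens to put the 1/N scaling on the frequency-domain side):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)          # any signal at all will do
    X = np.fft.fft(x)

    # Total energy is the same whether summed over time or over frequency.
    energy_time = np.sum(np.abs(x) ** 2)
    energy_freq = np.sum(np.abs(X) ** 2) / len(x)
    print(np.allclose(energy_time, energy_freq))     # True

    # The same holds for the difference between two signals: a small change
    # in the time domain is a change of exactly the same size in the spectrum.
    y = x.copy()
    y[100] += 1e-3
    diff_time = np.sum(np.abs(y - x) ** 2)
    diff_freq = np.sum(np.abs(np.fft.fft(y) - X) ** 2) / len(x)
    print(np.allclose(diff_time, diff_freq))         # True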

Engineers these days seem to go mostly by shared experience, in feeling comfortable with the Fourier transform: it hasn’t stabbed any of their fellow-professionals in the back, so it probably won’t do so for them, either. But as a student, I didn’t feel comfortable until I’d seen proofs of the results described above. In general, learning from experience means learning a lot of things the hard way; that just happens not to be so in this particular case: there are no unpleasant surprises lurking.

Now, when trying to use the Fourier transform on a computer, things do get somewhat more complicated, and there can be unpleasant surprises. Computers don’t naturally do the Fourier transform in its continuous-function version; instead they do discrete variants of it. When it comes to those discrete variants, it is possible to feed them a sine wave of a single frequency and get back an analysis saying that it contains not that frequency but all sorts of other frequencies: all you have to do is to make the original sine wave not be periodic on the interval you’re analyzing it on. But that is a practical problem for numerical programmers who want to use the Fourier transform in their algorithms; it’s not a problem with the continuous version of the Fourier transform, in which one always considers the entire signal, rather than chopping it at the beginning and end of some interval. It is that chopping which introduces the spurious frequencies; and in contexts where this results in a practical problem, there are usually ways to solve it, or at least greatly mitigate it; these commonly involve phasing the signal in and out slowly, rather than abruptly chopping it. In any case, it’s a limitation of computers doing Fourier transforms, not a limitation of computers playing audio from digital samples — a process which need not involve the computation of any Fourier transforms.
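
Both the problem and the usual mitigation are easy to demonstrate (a sketch of my own; the sample rate and frequencies are arbitrary, and the gradual fade-in and fade-out is done here with a Hann window, one common choice):

    import numpy as np

    fs = 1000                                 # a one-second window at 1 kHz sampling
    t = np.arange(fs) / fs

    def leakage(freq, window):
        """Fraction of spectral energy landing more than 3 bins away from the tone."""
        spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * freq * t) * window)) ** 2
        k = int(round(freq))                  # bin nearest the tone (1 Hz per bin here)
        return 1 - spectrum[k - 3 : k + 4].sum() / spectrum.sum()

    rectangular = np.ones(fs)                 # plain chopping at the ends
    hann = np.hanning(fs)                     # fades the signal in and out slowly

    print(leakage(100.0, rectangular))   # whole number of cycles: essentially no leakage
    print(leakage(100.5, rectangular))   # chopped mid-cycle: a few percent of the energy
                                         # smears across the whole spectrum
    print(leakage(100.5, hann))          # the window confines the leakage to nearby bins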

Much more could be said about the Fourier transform, of course, but the above are some of the main reasons why it is so useful in such a wide variety of applications (of which audio is just one).

Having explained why sentences like

“All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling”

are meaningful, and not merely some sort of mathematical shell game, here are a few words about Monty’s essay itself. As regards the ability of modern computer audio systems to reproduce everything up to the Nyquist limit, I happen to have been sending sine waves through an audio card recently — and not any kind of fancy audio device, just five-year-old motherboard audio, albeit motherboard audio for which I’d paid a premium of something like $4 over a nearly-equivalent motherboard of the same brand with lesser audio. This particular motherboard audio does 192 kHz sample rates, and I was testing it with sine waves of up to the Nyquist frequency (96 kHz). Graphed in Audacity, which shows signals by drawing straight lines between the sample points, the signals looked very little like a sine wave. But when I looked at the output on an oscilloscope with a much higher sample rate, it was a perfect sine wave. Above 75 kHz, the signal’s amplitude started decreasing, until at 90 kHz it was only about a third of normal; but it still looked like a perfect sine wave. Reproducing a sine wave given only two or three points per wavelength is something of a trick, but it’s a trick my system can and does pull off, exactly as per Monty’s claims. Accurate reproduction of things only dogs can hear, in case one wants to torture the neighborhood pooch with extremely precise torturing sounds! (Or, in my case, to do some capacitor ESR testing.)
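
For anyone who wants to repeat that sort of test, something along the following lines will generate a suitable file (a sketch of my own, not the code I actually used; the 90 kHz frequency, the amplitude, and the file name are arbitrary, and whether your hardware will accept a 192 kHz file is its own question):

    import math, struct, wave

    sample_rate = 192000   # the card's highest rate
    tone_hz = 90000        # near Nyquist: only a little over two samples per cycle
    seconds = 5

    frames = b"".join(
        struct.pack("<h", int(30000 * math.sin(2 * math.pi * tone_hz * n / sample_rate)))
        for n in range(sample_rate * seconds))

    with wave.open("tone_90khz.wav", "wb") as f:
        f.setnchannels(1)                # mono
        f.setsampwidth(2)                # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(frames)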

The limits of audio perception are not something where I’ve looked into the literature much, but I have no reason to doubt what Monty says about it. Something I did wonder, after reading his essay, though, was: what about intermodulation distortion in the ear itself? That is, distortion of the same sort that he describes in amplifiers and speakers. Being made of meat, the human ear is not perfectly linear; and pretty much any nonlinearity gives some amount of intermodulation distortion. Unlike in the case of intermodulation distortion in audio equipment, though, this would be natural intermodulation distortion: if, for instance, one heard a violin being played in the same room, one would be hearing whatever intermodulation distortion resulted in the ear from its ultrasonic frequencies; those would thus comprise part of the natural sound of a violin, and reproducing them thus could be useful. Also, nonlinearities can be complicated: any given audio sample might not excite some particular nonlinearities that might nevertheless be excited by a different sort of music. But as the hypothetical language (“could”, “would”) indicates, these are theoretical possibilities, which can be put to rest by appropriate experiments. And they have been: a test Monty links to was “constructed to maximize the possibility of detection by placing the intermodulation products where they’d be most audible”, and it nevertheless found that ultrasonics made no audible difference. I only took note of that sentence on re-reading; but this nonlinearity-in-the-ear idea is what that test was designed to check for.

Poking around at the Hydrogen Audio forums, I gather that the explanation for why nonlinearity in the ear doesn’t produce audible lower frequencies is that:

  • Ultrasonics get highly attenuated in the outer parts of the ear, before they could do much in the way of intermodulation distortion. (It’s quite common for higher frequencies to get attenuated more, even in air; this is why a nearby explosion is heard as a “crack”, but a far-off one is more of a boom.)
  • Intermodulation distortion then imposes a further attenuation, the spurious frequencies introduced by distortion having much less energy than the original frequencies.
  • Generally in music the ultrasonic parts are at a lower volume than the audible parts to begin with.

Multiply these three effects together, or even just the first two of them, and perhaps one always gets something too small to be heard. In any case, as Monty states, it’s impossible to absolutely prove that nobody can hear ultrasonics even in the most specially-constructed audio tracks. But when one is considering this sort of thing as a commercial proposition, the question is not whether exceptional freaks might exist, but what the averages are.

(Update: Monty tells me that contrary to what I’d originally stated above, “by most measures the ear is quite linear”, and “exhibits low harmonic distortion figures and so virtually no intermodulation.” The text above has been corrected accordingly. I’d seen references to nonlinearity in the hair cells; and it’d be hard to avoid it in neurons; but those are after the frequencies have been sorted out.)

