### Computer fan bearings

When I first got into messing with computer hardware, the received wisdom about bearings for computer cooling fans was that there were two types, ball bearings and sleeve bearings, with this tradeoff: ball bearings were noisier, but sleeve bearings were less reliable, and tended to fail silently, likely letting the device they were cooling overheat and fail. Ball bearings got a lot noisier before they failed, and were thus the recommended solution for most purposes.

But these days, there are a variety of names for fan bearings. In Newegg’s list, today, of 120mm fans for sale, the various bearing types are described as follows (with each bearing type followed by the number of fan models that contain it):

• Sleeve (43)
• Ball (15)
• 2 Ball (19)
• 1 Ball, 1 Sleeve (2)
• Fluid Dynamic (17)
• Hydraulic (1)
• Hydro Wave (7)
• Nanoflux Bearing (NFB) (4)
• Ever Lubricate (11)
• EverLasting Quiet (1)
• Rifle (2)
• SSO (2)
• Z-AXIS (1)

Besides ball and sleeve, the principal alternative in that list is “fluid dynamic”. To computer people, fluid dynamic bearings have a high reputation, as being the thing that replaced ball bearings in hard drive spindles, making them a lot quieter. Hard drives no longer make an annoying whine just from spinning, like they did prior to about five to ten years ago (depending on manufacturer).

I disassembled a fluid dynamic bearing from a failed Seagate drive, to see how it worked. (The drive had failed with a head crash; the bearing was still fine.) Disassembling it required grinding, because it appeared to have been welded together (with a tiny, exquisite weld). Revealed was the following:

The main shaft of this bearing is an ordinary plain bearing (aka sleeve bearing): a cylindrical shaft rotating inside a cylindrical enclosure, separated by oil. Nothing special needs to be done to get the oil evenly-enough distributed to separate the two parts, since the shaft naturally drags the oil around with it. The trickery comes at the end of the shaft, where there is a bronze ring shrink-fit on to the shaft, to handle thrust (that is, loads coming from one end of the shaft or the other). This thrust bearing would, in the normal course of things, not have any sort of principle that would restore fluid to the interface; so the bronze ring would touch the steel enclosure. Although bronze and steel are a good combination for bearings, which gives relatively low friction and wear, still, spinning 24 hours a day, they’d wear out quickly if touching. To prevent this, the designers of this bearing have added a special pattern of grooves to the steel surfaces that would contact the bronze, as is visible in the photo; these re-direct fluid that would slip off an edge of the interface back into the middle of it. That way, the thrust surfaces touch each other only on startup of the hard drive spindle, a rare occasion and one during which it is not spinning particularly fast.

But the chances that anyone will ship such a beautiful piece of machinery inside an ordinary computer fan are pretty slim. Indeed, the computer fan bearings which I’ve taken apart, and which have been described as “fluid dynamic bearings”, operate on an entirely different principle. The shaft is the same sort of thing: a sleeve bearing. But the thrust is taken up differently. The following diagram, from a Scythe brand fan (which Scythe describes as having a fluid dynamic bearing made by Sony), is a good example:

Most of those parts are about the same as they would be on a sleeve bearing fan. The fan is held in by a plastic split washer that fits into a groove on the bottom of the fan spindle, as in an ordinary sleeve bearing fan. The porous bronze sleeve, filled with oil, is also usual in sleeve bearing fans. The difference is the “rotor suction magnet”, which takes the thrust load off the plastic split washer. The way computer fans are arranged, the force produced by the wind from the fan is trying to lift off the top of the bearing, on which the fan blades (not shown) are mounted. The magnet overcomes this force, replacing it with a force in the opposite direction, which gets taken on the bottom end of the shaft.

I can think of a couple of reasons why this might be better. One is that the bottom end of the shaft has a larger surface area than the groove which holds the plastic split ring, and so can handle the thrust force better. The flimsy plastic split ring also will bend a bit, likely making the surface area on which the thrust is taken even smaller. Another reason is that the magnet’s strength might be chosen so as to exactly counterbalance the wind force — although the wind force depends on a lot of things, including supply voltage and air pressures, and thus could never be exactly counterbalanced. In any case, the reason isn’t that the bottom end of the shaft sports any particular cleverness; when I took one of these bearings apart, there was nothing like the sort of oil flow channeling that the Seagate bearing had.

But whatever the reason, a lot of companies make such fans, using different names. Of the above fan bearing names, besides “Fluid Dynamic”, the “Nanoflux Bearing” and likely the “Ever Lubricate” bearings use this principle of having a magnet to take up the thrust force. In some designs, the magnet is put below the bottom of the shaft, to magnetically attract the steel end of the shaft. It is thus also sometimes called a “magnetic bearing”, a term which suggests the sort of ultra-expensive magnetic levitation bearing that Iraq was once trying to get hold of for their gas centrifuges for uranium. Such is marketing. As for what the generic name for such devices should be, I suggest “thrust magnet bearing”; it’s reasonably terse, and sort of conveys what the thing is. It won’t wildly excite marketing people, but I don’t think it’ll make them wince, either.

In other fans, ordinary sleeve bearings are described as “fluid dynamic bearings” — which in a sense they are, since sleeve bearings do involve fluid dynamics. The “Hydro Wave” bearing that I took apart was an ordinary sleeve bearing. This seems misleading, but not necessarily in any serious way: on the forum at silentpcreview.com, there seems to be a consensus that sleeve bearings are better than was traditionally thought. My guess is that this is because the denizens of that forum tend to operate their fans at low speeds, where there isn’t much thrust force. Also, even without any additional magnets, the magnetic field loop that is used to turn the fan provides a restoring force against thrust. In some sleeve bearing fans, the fan hub can be pulled out a few millimeters against that force before one hits the split washer that retains it. In those fans, especially in low-speed ones, adding thrust magnets is likely superfluous.

As for “rifle bearings”, the term is strange enough that I’ve ordered a couple of fans with them to see what they are; but one (marketed as an “air rifle bearing”) was just an ordinary sleeve bearing fan, and the other just a magnetically-counterbalanced bearing. The name suggests that either the shaft or its bearing would be rifled, but I don’t see what the point of doing either would be; it could pump all the oil out one end of the bearing, but that doesn’t seem sensible.

That pretty much exhausts Newegg’s list of names, although a couple of oddballs are left. Of course, as I hope was apparent, this article is not intended to be authoritative or up to date; that would be actual work and would cost actual money. It is just the result of having occasionally ripped apart a fan or two, over the years.

### Against state pension funds

For all the talk, these days, of the problems that state pension funds are getting into, I haven’t seen anyone argue against their existence. But the case against them is simple and strong.

To define what is being argued against: state pension funds pay the pensions of retired employees of the state government. Without pension funds, states would be paying these pensions directly out of tax revenues. With pension funds, the government plays the markets, investing tax revenues in stocks, bonds, and such, and then later selling them and using the proceeds to pay pensions to retirees.

If you were to ask anyone of pretty much any ideological stripe whether it’d be a good idea for the government to play the market in the service of any other obligation, he’d likely ask whether you were crazy. The idea that, for instance, maintaining roads should be done by investing money in the stock market, then using the dividends to do the actual road maintenance, would be laughed at — and not just by small-government advocates who doubted the government’s ability to choose winners in the stock market; socialists, from their point of view, might question why you were giving money to the capitalists on Wall Street in the first place, and whether you really could have any hope of getting it back from those lying pigs. But somehow for pensions the political situation in the US is the opposite: at the state and local level (though mostly not at the federal level), pension funds are taken for granted; there is much controversy about some of their details, but generally all parties accept that they should exist. Yet the situation that everyone would laugh at and the situation that is generally accepted are really one and the same: when state money is sent to Wall Street, the official reasons why it is sent make little difference; all that really matters is the amount and the timing. Whether the name on the account be “pensions” or “roads”, the funds used for investing come out of the same pot of money and the proceeds go into the same pot.

Plenty of private companies have pension funds; so it’s easy to think states should, too, especially in this era of much talk about how government should try to imitate the private sector. But for private companies, there is a potent rationale for pension funds: companies often fail; a pension fund is a way to promise that pensions will be safe even if the company ceases to exist. States don’t cease to exist, except via war or troubles that verge on war; and when a state disappears via such events, its pension funds are extremely unlikely to survive the tumult.

The biggest attraction of state pension funds has no doubt been the extravagant promises they make, as to returns. I’ve seen in several sources (Michael Lewis’s recent article on California’s financial troubles being one) that state pension funds generally expect returns of about 8% per year. To illustrate the impact of this, suppose that any given piece of money spends about twenty years in the pension fund. That is the length of a short government career, and also a common length of time spent in retirement, and thus is a reasonable figure for the average interval of time between when a pension obligation is incurred by employing someone, and when that obligation finally comes due and the money is withdrawn from the fund to cover it. Twenty years’ compound interest, at 8%, multiplies the initial amount of money by a factor of 4.6; or if we figure that the 8% is just in nominal dollars, and subtract 2% to adjust for inflation, the multiplying factor is 3.2. So by assuming that 8% yield, they can justify much larger pensions than could be justified if pensions were to be paid directly out of tax revenues: in particular, the pensions can be around three times larger. A modest pension of $20,000 a year can turn into $60,000.
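For concreteness, here is that compound-interest arithmetic as a small Python sketch (the function name is mine, for illustration):

```python
# Compound-interest arithmetic from the text: 8% nominal annual return,
# about 2% inflation, and roughly twenty years between incurring a
# pension obligation and paying it out.

def growth_multiplier(rate, years):
    """Factor by which money grows at a fixed annual compound rate."""
    return (1 + rate) ** years

nominal = growth_multiplier(0.08, 20)  # ~4.66 in nominal dollars
real = growth_multiplier(0.06, 20)     # ~3.21 after subtracting 2% inflation

print(f"nominal 20-year multiplier: {nominal:.2f}")
print(f"inflation-adjusted multiplier: {real:.2f}")
```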

When the market fails to deliver that 8% increase, the result is what many states have now: an “underfunded” pension plan, where even when an 8% return is assumed for the future, the fund won’t be able to meet its obligations. The conventional way of regarding this is to be horrified at it, as a harbinger of state bankruptcy. But if one regards state pensions as things that should just be paid out of tax revenues, without any resorting to Wall Street to amplify money, then the pension fund is a nice big fat asset, and the only thing its “underfunding” is a harbinger of, is a switch to a system of accounting where future pension obligations are not counted as present-day liabilities. There would be nothing dishonest about such a switch; other future obligations, such as schools and roads that will predictably need repair, are not counted as present-day liabilities. As for the promises made, both as regards returns the pension fund would make, and as regards the size of the eventual pensions that would be paid to retired state employees, those were always just fantasies that could never be delivered for long. (For how fantastic some of those pensions have gotten, see this article, as well as Michael Lewis’s above-linked article.)

If an attempt were made to reduce pensions, lawsuits would no doubt be filed; promised pensions have a certain legal standing, as contractual obligations. But it’s not enough of a standing to give them absolute priority over the basic rule of elected government that no legislature can bind its successors. To force a state government to pay pensions that bankrupted the state would be an especially bad violation of that rule. Of course there is never any guarantee that judges will see it that way, especially if the bankruptcy is several years in the future. Still, any judge who tried to enforce payment of every dollar promised would, sooner or later, run into all the usual difficulties of getting blood from a stone. Would he force taxes to be raised? Which taxes? Force cuts in other spending? Which spending? Legislatures don’t have an easy time deciding such things; and judges would find it even harder, especially with the public screaming at them for usurping the legislature’s proper role.

Indeed, to some extent, my whole argument here is merely a justification for what inevitably will be done anyway, barring economic miracles. There is little political will for levying the huge tax increases that would be necessary to restore pension funds to being fully funded, and no short-term downside to leaving them underfunded; simple neglect and inertia would leave them underfunded until they ran out completely, at which point the only things to be done would be to fire the staff administering their investments, and adjust the size of pensions to whatever could be borne out of tax revenue. But to accept that this was actually the goal, rather than just drifting along in that direction, would open up other possibilities. For one thing, the assets in the pension fund could be sold to wipe out other debts of state government, so that the government was no longer, in effect, borrowing money and using it to play the market with. For another, the pension fund administrators could stop trying for unrealistically high returns (something which David Goldman has blamed for their recent losses in mortgage-based investments). Also, the sizes of pensions paid out could be adjusted before the final crunch actually hit; the transition could be a smooth one, rather than an abrupt emergency measure.

Thus far, I’ve focused on the effects of pension funds on government finances; but that’s not all, and likely not even the most important part. When pension funds buy corporate stocks, they get an ownership interest in those companies. They can vote in corporate elections; and they control such large blocks of stock that their votes carry serious weight. Even if they were to abstain from voting, their large purchases have big effects on companies’ stock prices, and thus on how easily those companies can raise more capital. Bond purchases, too, affect what companies do: in many cases, if bonds can’t be floated for a proposed venture, it won’t be done. So for the government to own large quantities of stocks and bonds is a big step towards Marx’s dream of the “workers” (via the government) owning “the means of production”. Not that a Marxist conspiracy to take over the economy is even vaguely possible: today’s Marxists are not intelligent enough to put together a decent conspiracy. Petty corruption is more of a danger, as are politicized investments. But although pension fund scandals and politicization of investments have often made the news, in the grand scheme of things they are minor and occasional problems; the big problem is the everyday mediocrity of the oversight that government pension funds apply to their investments. I have made no particular study of the quality of that oversight; but unless state governments miraculously do it much better than they do everything else, state pension funds must be a large contributor to what might be called the Dilbert-ification of corporate America, in which companies are taken over by people who chase after management fads, while the people who can actually do useful work struggle with silly orders from above, trying to construe them into something sensible. The cartoon of course exaggerates; but the phenomena it mocks are quite common, and a tremendous problem.

Most of what has been said above applies not only to state pension funds but also to those of local governments. The exception is that local governments sometimes do cease to exist: there are plenty of ghost mining towns out West, whose population evaporated when the mine closed. In such a case, just as the mining company may want to promise pensions which will survive the closure of the mine, so may the town government want to promise pensions which will survive the abandonment of the town. But for that, explicit measures would be needed to put the pension fund in some hands that would administer it honestly after the town was defunct as a political entity — a difficult enough proposition that giving control to the payees themselves, via 401(k) plans or the like, is likely better than establishing any sort of collective pension fund. (Not that corporate pension funds are immune from getting hijacked as the company fails; far from it. But politics has a nastiness all of its own.)

There may even be a few cases like this at the state level, where it might be foreseen that, due to some economic factor, the population and tax base will diminish drastically. The oil boom in North Dakota might be one such factor: at some point that oil will be exhausted, and people will leave. In such rare exceptions, state pension funds might be justified. Such a justification would, of course, involve a very different attitude from the sort of giddy optimism that assumes that an 8% return will always be available. Also, for the justification to work, the decline would have to be local rather than general; in a general decline, good investments are no more common elsewhere than they are locally — so instead of trying to pick global winners in the market (and distorting it in the process), the government can take the easier and more certain approach of just letting the local winners emerge, and taxing them. In a decline that was national but not worldwide, investments in a foreign country which still had a growing economy might seem attractive — but the catch is that that country might decide, with the newfound power that economic growth brings, that it didn’t care to pay back the money.

In any case, even considering pension funds as an evil, they’re one we’re stuck with for a while, since arguments like this never prevail quickly. Even when everyone with good sense agrees immediately, that still leaves the majority unconvinced. Even if by some miracle this argument did prevail quickly, selling off pension funds’ investments would best be done slowly, so as not to unduly depress the markets and make the sale yield less than it should. And that scenario isn’t so different from what is happening today, since when a pension fund is “underfunded”, it uses up its capital at an increasing rate. Even as regards the effects of pension funds’ oversight of corporate America, that has been a slow process, and can’t be reversed quickly. Good oversight doesn’t magically appear when lousy oversight is destroyed, but rather takes time to build. For the moment, the hope and the threat of it will have to do.

Update: Alexander Volokh, a law professor, has written a nice overview of the legal rules surrounding pension funds. It falls short of considering what might happen when things really get bad, but that’s sort of inherent in legal analyses: they cover precedents (court rulings), not situations that are unprecedented.

### Entropy is not chaos

Mediocre physics teachers who are trying to explain the concept of entropy often say that entropy is a sort of measure of chaos, with increases in entropy meaning increased chaos. I found that claim confusing from the first time I heard it; once I got a grip on the concept of entropy, I realized that it’s simply false: entropy has little to do with chaos. Consider, for instance, a bucket into which different-color paints have been slopped, forming a chaotic mess of colors. That mess has less entropy than it will after you mix it to an orderly uniform color, which is the opposite of the way the entropy-means-chaos idea would have it. Likewise, a room filled with a chaotic mixture of air at different temperatures has less entropy than it will after the temperatures all equilibrate to the same value. Or take a situation in which you have two cylinders, one filled with air and the other evacuated, and connected by a pipe with a valve. Once you open the valve, half the air will rush from the full cylinder to the empty; this will increase the entropy. But which situation is more chaotic? Relative to the everyday meaning of chaos, it’d be hard to say.
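For the two-cylinder example, the entropy increase can even be computed, assuming the air behaves as an ideal gas (for an isothermal free expansion, the standard result is S = nR ln(V₂/V₁)):

```python
# Entropy change for the two-cylinder example: an ideal gas expanding
# freely to twice its volume gains n * R * ln(V2/V1) of entropy.
import math

R = 8.314  # gas constant, J/(mol K)

def expansion_entropy(n_moles, volume_ratio):
    """Entropy gain (J/K) of an isothermal free expansion of an ideal gas."""
    return n_moles * R * math.log(volume_ratio)

# One mole of air doubling its volume, as when the valve is opened:
dS = expansion_entropy(1.0, 2.0)
print(f"entropy increase: {dS:.2f} J/K")  # ~5.76 J/K
```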

As for what entropy is, if it’s not chaos — well, as with other things in physics, a definition could be given simply enough, but wouldn’t mean much to anyone who didn’t already know how to put it in context. (“The logarithm of what?”) The concept takes a lot of understanding; I didn’t really get a grip on it until I spent a lot of quality time with Enrico Fermi’s book Thermodynamics. That book explains it probably as simply as it can be explained, but it’s still not easy.

It’s a worthwhile concept, though. One can get the impression from casual physics talk that entropy is only good for making gloomy statements about the heat death of the universe, and how everything is doomed to run down and deteriorate. (Or in the above case, how it’s easier to mix paints than to unmix them.) There is that aspect of it, but entropy is also a practical tool. Using it one can, for instance, derive the Clausius-Clapeyron equation, which relates the vapor pressure of a liquid to its heat of vaporization. Or one can use it to calculate the exhaust velocity of a rocket engine, under the assumption of shifting equilibrium.

While on the subject of chaos, it’s also worth mentioning that the “chaos” defined in the branch of mathematics known as “chaos theory” also isn’t chaos in the usual sense of the English language. In chaos theory, water dripping from a faucet is a “chaotic process”. That’s because the exact size of each drip and the exact interval between drips is hard to predict, even though to the eye it looks like a steady drip, drip, drip, and though the average person would say you were nuts to call it chaotic. This has rendered scientific papers a bit more difficult to read, since it can be hard to tell whether “chaotic” is meant in the ordinary sense or in the chaos-theory sense. Unlike in the case of entropy, I have difficulty labeling this technical concept of “chaotic” worthwhile, since I’ve never encountered anyone making any practical use of it, and since I don’t know why labeling something “chaotic” would help with anything: you couldn’t predict it precisely before, and you still can’t predict it precisely.

### An addendum to The Devil’s Dictionary

Buypartisan, adj. As of, or pertaining to, a situation in which the partisans have been bought. Commonly misspelled.

(Not really entirely fair? Well, neither was the original…)

### Power Factor In The Digital Age

Over the years, I’ve seen entirely too much confusion surrounding the electrical quantity known as power factor. Even its definition is often confused. Roughly half the sources I’ve encountered define it to be the cosine of the phase difference between current and voltage — a definition that was adequate sixty years ago when waveforms were almost all sinusoids of the same frequency, but which is entirely inadequate now that both current and voltage are commonly chopped up using silicon. The “phase” of a non-sinusoidal signal can have many definitions, and probably none of those definitions yields a meaningful number for power factor. The old formula is still fine as a formula for the power factor in the case that one is dealing only with sine-wave power supplying old-fashioned devices, but fails as a general definition.

A real definition (and the one used by the other half of the sources I’ve encountered) is that power factor is equal to the true power divided by the “apparent power”. The true power is defined as physics dictates: the average of the instantaneous power consumed by the device (instantaneous power being instantaneous current times instantaneous voltage). That average is usually best taken over a single full cycle of the AC waveform, or multiple full cycles; but even if there are no recognizable cycles, it can be computed for any given interval of time. Apparent power (aka “VA”) is defined to be the voltage multiplied by the current, both voltage and current being measured in root-mean-square (RMS) fashion. It is, as per the name, what one might think the power was, if one just measured current and voltage with a true-RMS meter. The average (the “mean” in RMS) is again best taken over a single full cycle; but again, there don’t even have to be cycles at all, for apparent power (and thus power factor) to be a well-defined quantity, for any interval one chooses.
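That definition translates directly into code. Here is a sketch of it in Python, computed from sampled waveforms (function names mine); as a sanity check, for a current sinusoid lagging the voltage by 60 degrees, the general definition reproduces the old cosine-of-phase-difference answer:

```python
# Power factor per the general definition: true power (average of
# instantaneous v*i) divided by apparent power (RMS voltage * RMS current).
import math

def rms(samples):
    """Root-mean-square of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def power_factor(volts, amps):
    true_power = sum(v * i for v, i in zip(volts, amps)) / len(volts)
    apparent_power = rms(volts) * rms(amps)
    return true_power / apparent_power

# Sanity check: a sinusoidal current lagging the voltage by 60 degrees
# should give cos(60 deg) = 0.5.
N = 1000
t = [2 * math.pi * k / N for k in range(N)]  # one full cycle
v = [math.sin(x) for x in t]
i = [math.sin(x - math.pi / 3) for x in t]
print(f"power factor: {power_factor(v, i):.3f}")  # ~0.500
```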

Whether or not that definition makes any sense in general is another question. For one thing, the power factor is supposed to always be between 0 and 1 (or -1 and 1, if the device is allowed to supply net power rather than consuming it). And while it’s obvious that the cosine of a phase difference has to be between -1 and 1, it’s not obvious that the same thing applies to the general definition of power factor. Or at least, it’s not obvious unless one recognizes it as a direct consequence of the Cauchy-Schwarz inequality. That inequality states (in the version that’s useful here; it can also be written much more generally) that for any two real functions f and g of a single variable,

$\int f(x)^2 dx \ \int g(x)^2 dx \ \ge \ \left( \int f(x)g(x) dx \right)^2,$

with equality occurring if and only if f is proportional to g — that is, if

$f(x)=cg(x),$

for all x and for some constant c. (This web page uses MathJax to render equations; if the above equations appear as LaTeX source, with lots of backslashes, it’s probably because Javascript is not enabled. It needs to be enabled for this website and for the website “mathjax.org”.)

In this case, let f be the voltage, and g be the current, both as a function of time. Then take the square root of both sides, and divide both by the length of time over which the integrals are taken. The right hand side is then the absolute value of the true power, and the left hand side is the apparent power, proving that power factor is between -1 and 1 — and, as a corollary, that a power factor equal to one occurs only in the case of a resistive load (in which case c is the resistance).
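A quick numeric check of that corollary (the waveform here is random noise, deliberately nothing like a sine, and the resistance value is arbitrary):

```python
# Check: for ANY voltage waveform, a purely resistive load (i = v / R)
# gives a power factor of exactly 1, per the Cauchy-Schwarz equality case.
import math
import random

random.seed(0)
v = [random.uniform(-170.0, 170.0) for _ in range(1000)]  # arbitrary messy waveform
R = 12.0  # ohms, arbitrary
i = [x / R for x in v]

true_power = sum(a * b for a, b in zip(v, i)) / len(v)
apparent = (math.sqrt(sum(a * a for a in v) / len(v))
            * math.sqrt(sum(b * b for b in i) / len(i)))
print(f"power factor: {true_power / apparent:.6f}")  # 1.000000
```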

Power factor, defined this way, is thus a solid concept, not one of those poorly-defined notions that sort of works as long as you stay within its traditional applications but which breaks when you do something unusual. There are no strange voltage or current waveforms lurking anywhere for which the power factor might be greater than one.

But there’s another way in which one could doubt whether power factor makes sense to compute: if one objects to root-mean-square as the appropriate way to measure current and/or voltage. The square root of the sum of squares is a mathematically convenient entity, which makes a lot of formulas simpler than they would otherwise be. But mathematical convenience shouldn’t take priority over usefulness in applications. Fortunately, in this case, the two pretty much coincide.

By Ohm’s law, heating in a conductor, at any instant, is proportional to the square of current. So total heating is proportional to the integral of the square of current; the RMS current is the square root of that, and thus tells you how much your wires are heating up in the process of carrying the current. An RMS current of 15 amps will yield about the same heating whatever the waveform; if it is 15 amps DC, the heating will be about the same as if it is 15 amps AC RMS — the latter being, by convention, a sinusoidal waveform with a maximum of 15$\sqrt{2}$ = 21.2 amps. (The reason for the qualifier “about”, in the preceding sentence, is skin effect; but the frequencies of interest here are too low for skin effect to play a big role.) Heating represents wasted energy, lost in transmission. Also, the amount of heating is usually what sets the limit on how much current a wire can carry. Heating in motors, transformers, and inductors is largely resistive heating, proportional to the square of current.

On the other hand, if the current is through a diode, the situation changes: the diode’s voltage drop is nearly constant, rather than being proportional to the current. So instead of the square of current, the heating at any instant is proportional just to the current. But for power MOSFETs switched fully on, the situation is again that they look like a resistance, with voltage drop proportional to the current. BJTs, however, are more like diodes.

So, in power transmission and handling, RMS is a pretty decent measure of current, although it’s not as perfect as it was before silicon devices. As for the appropriate measure for voltage, if one is going to measure current in RMS terms, one pretty much has to measure voltage that way, too, so that Ohm’s law works for AC current.
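The heating equivalence is easy to verify numerically (the resistance value here is arbitrary):

```python
# Check: 15 A DC and a sine wave of the same RMS value (peak 15*sqrt(2),
# about 21.2 A) dissipate the same power in a resistor.
import math

def rms(samples):
    """Root-mean-square of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

N = 100000  # samples over one full cycle
sine = [15 * math.sqrt(2) * math.sin(2 * math.pi * k / N) for k in range(N)]
dc = [15.0] * N

R = 1.0  # ohms, arbitrary; heating = I_rms^2 * R
print(f"sine: {rms(sine):.2f} A RMS -> {rms(sine) ** 2 * R:.1f} W")
print(f"DC:   {rms(dc):.2f} A RMS -> {rms(dc) ** 2 * R:.1f} W")
```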

So power factor, under the proper definition, is in all circumstances a good measure of how efficiently a device is sucking down current, as compared to the best it could do: not, of course, a measure of internal efficiency, but rather of how efficiently it loads down power production and distribution networks.

But there are some notions which one has to let go of, when using the general definition of power factor. One is the idea of measuring a phase difference and using that measurement to correct power factor. Oh, the old formulas still work in the old circumstances — those being when one is dealing only with sine-wave power and with linear devices such as motors, generators, transformers, and capacitors. But they don’t extend to the general situation. I’ve seen talk of patching them up by having two numbers for power factor, the one being the cosine of the phase difference and the other being due to harmonics. But there seems little point in this. For one thing, it could only apply to sine-wave power in the first place: if some other voltage waveform is being used, the best power factor is from a current waveform proportional to it, which has the same harmonics, which in this case are making power factor better rather than worse. Besides, unless one is going to try to correct the power factor, as has traditionally been done for motors by adding capacitors, there seems little point in computing any number for phase difference. And the power factor of nonlinear devices is not easily corrected: it is not a traditional “leading” or “lagging” power factor, where the current is a sinusoid that either leads or lags the voltage. Instead the pattern is commonly that power is drawn from the line near the peaks of the voltage waveform, and not near the zero crossings. The following are oscilloscope shots of such behavior, as displayed by an old computer power supply; the first shot is with it running, the second with it quiescent (plugged in, but only drawing enough power to keep its internal circuitry alive). The white line is voltage, and the purple line current (which is on a different scale in the second shot than in the first):
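That peak-drawing behavior can be sketched numerically. In this crude model (my own, for illustration), the load conducts only where the voltage exceeds 90% of its peak, and the power factor comes out well below 1 even though there is no phase lead or lag at all:

```python
# A load that draws current only near the voltage peaks (roughly what an
# uncorrected rectifier-capacitor input stage does) has a poor power
# factor despite zero phase shift between voltage and current.
import math

N = 10000
t = [2 * math.pi * k / N for k in range(N)]  # one full cycle
v = [math.sin(x) for x in t]
# Hypothetical load: conducts only where |voltage| exceeds 90% of peak.
i = [x if abs(x) > 0.9 else 0.0 for x in v]

true_power = sum(a * b for a, b in zip(v, i)) / N
apparent = (math.sqrt(sum(a * a for a in v) / N)
            * math.sqrt(sum(b * b for b in i) / N))
print(f"power factor: {true_power / apparent:.2f}")  # ~0.73
```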

To correct power factor for a device like this by adding an external device across the line, the way capacitors have traditionally been used to correct power factor for motors, would mean fielding a device that drew power near the zero crossings, and fed it back into the line near the peaks. Such a device could be built, but would be much more complicated, expensive, inefficient, and unreliable than a capacitor. It is probably easier to demand that the devices being powered be power factor corrected, as are many modern computer power supplies, such as the one that produced the following scope traces — the first, again, when running, and the second when quiescent:

(As can be seen, the power factor correction only applies when the power supply is on; when it is quiescent, the small current it draws looks a lot like the current a capacitor would draw: about a 90-degree phase lead. Indeed, that current is likely being drawn by a filtering capacitor inside the unit, such as is often placed across the input for the purpose of blunting power surges and suppressing RF emissions.)
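The sort of behavior shown in those scope shots can be simulated in a few lines. This is a sketch with made-up waveforms, not the scope data: a sine voltage, and a current that flows only near the voltage peaks, the way a rectifier-input supply draws it. It shows why the cosine-of-phase-difference rule fails for such a load.

```python
import math

# One cycle of a sine voltage, sampled at N points, and a current that
# flows only where the voltage is near its peaks (here, above 90% of peak).
N = 100000
v = [math.sin(2 * math.pi * k / N) for k in range(N)]   # voltage (normalized)
i = [x if abs(x) > 0.9 else 0.0 for x in v]             # current near peaks only

real_power = sum(vk * ik for vk, ik in zip(v, i)) / N   # mean of v * i
v_rms = math.sqrt(sum(x * x for x in v) / N)
i_rms = math.sqrt(sum(x * x for x in i) / N)

pf = real_power / (v_rms * i_rms)   # the general definition: watts / volt-amperes
# The current here is exactly in phase with the voltage (it is a clipped
# copy of it), so the cosine-of-phase-difference rule would say 1.0;
# the true power factor comes out well below that, purely from distortion.
print(round(pf, 2))
```

The 0.9 clipping threshold is arbitrary; a narrower conduction window near the peaks gives a still worse power factor, with no phase shift anywhere in sight.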

It’s not just when a device consumes power in a nontraditional way that it is difficult to correct power factor; it also is difficult when a device provides power in a nontraditional way — that is, as something other than a sine wave. The usual nonsinusoidal power waveform is what marketing people have decided to call a “modified sine wave”, which really would be better termed a modified square wave. Whereas a square wave of 115VAC would alternate between 115V and -115V, the modified square wave alternates zero, 162V, zero, -162V:

That peak voltage is chosen to be the same as the peak voltage of a sine wave with RMS voltage 115V; the time spent at zero volts is chosen so that the signal as a whole is 115V RMS. This is the waveform output by most inverters — inverters being devices for converting DC, usually at 12V or 24V, into alternating current at line voltage. Most uninterruptible power supplies dish out the same sort of modified square wave when running off battery power. If a motor is driven by such a voltage source, it will have a lagging power factor; but if one were to try to correct it by adding a capacitor, at the sudden transitions between voltages the capacitor would try to draw enormous currents. Rather than correcting the power factor, that would dramatically worsen it — if, indeed, the inverter didn’t shut itself off instantly, as it probably would in self-defense when it detected those enormous currents.
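The 162 V figure and the time spent at zero follow directly from the definitions; here is the arithmetic, using the text’s numbers:

```python
import math

# The modified square wave's peak matches the peak of a 115 V RMS sine,
# and its dwell time at zero is chosen so that it, too, is 115 V RMS.
v_rms = 115.0
v_peak = v_rms * math.sqrt(2)     # peak of the equivalent sine, about 162.6 V

# The wave sits at +/-v_peak for a fraction d of each cycle and at zero
# for the rest, so its RMS is v_peak * sqrt(d); solving for d:
d = (v_rms / v_peak) ** 2
print(round(v_peak, 1), d)        # about 162.6; d comes out 0.5 --
                                  # at peak half the time, at zero half the time
```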

Nevertheless, modified square waves have their upsides, as regards power factor. The old computer power supply that yielded the first graphs above, I measured drawing 219 W and 313 VA on AC power from the utility (a power factor of 0.70). On an inverter, with the same load, it drew 207 W and 240 VA (a power factor of 0.86); the current waveform looked like this:

The fact that this power supply draws current only near the peaks makes for a better power factor on the inverter, since its voltage waveform has wider peaks. Also, it improves the power supply’s internal efficiency a bit, so it draws less real power.
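Those power factors are just real power divided by apparent power; checking the arithmetic from the meter readings reported above:

```python
# Power factor is real power (watts) over apparent power (volt-amperes).
pf_utility = 219 / 313     # old supply on utility power
pf_inverter = 207 / 240    # same supply, same load, on the inverter

print(pf_utility, pf_inverter)   # about 0.70 and 0.86, as in the text
```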

On the other hand, the same power supply also has a small filtering capacitor across its input, to help deal with power surges and to suppress RF emissions. With the power supply quiescent, it draws 2 watts at 8 VA from the AC line, but draws something like 75 VA from the inverter (though that measurement is quite imprecise, since the wattmeter I was using couldn’t really resolve the current spikes). The power-factor-corrected power supply graphed above behaved even worse on the inverter when quiescent, drawing about 90 VA; even when switched off using the switch on the back of the power supply (which turns off the current to every part of it that is at all active, leaving only a filtering capacitor or two drawing power), it drew about 60 VA. Yet under load, its behavior was again good: 264 W at 287 VA, a power factor of 0.90, although it still shows current spikes:

So although power factor is usually specified as just a single number (or as a function of load), really those numbers apply only to sinusoidal voltage waveforms, and can’t be extrapolated to other waveforms.
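As an aside, if the old supply’s quiescent 8 VA really is mostly a filter capacitor across the input, the capacitor’s size can be estimated. This is my own rough figuring, assuming a 115 V, 60 Hz line; the capacitance is an inference from the measurements above, not something I measured:

```python
import math

# For a capacitor across the line, I = V * 2*pi*f * C; with 8 VA at 115 V:
v, f, va = 115.0, 60.0, 8.0
i = va / v                        # capacitive current, in amps
c = i / (2 * math.pi * f * v)     # farads
print(round(c * 1e6, 1))          # microfarads: on the order of a microfarad or two
```

That is a plausible value for the X-class capacitors commonly placed across power-supply inputs.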

### Runaway Starter

“Hmm, I shouldn’t be going anywhere with the car making a noise like that.”

I pulled back into the driveway, and turned off the ignition.

The engine went off, but the growling noise that had disturbed me continued.

I got out of the car, opened the hood, and looked. The engine wasn’t vibrating the way it does when it’s running; so, as my ears had already told me, whatever was making the noise didn’t involve pistons and cylinders and such. So was the noise electrically powered, or was fuel leaking someplace and burning? No smoke was coming out of anywhere I could see. Should I disconnect the battery? Go get a voltmeter and see whether the battery is draining? Or perhaps go get a fire extinguisher? Could combustion from a leak really be that regular and uniform, even inside some hidden space from which no smoke could escape? (In retrospect: no; and disconnecting the battery would be the right thing to do even if there were a fire.)

I stood there trying to figure out what to do for a minute or two, until the noise stopped with a bang, accompanied by a bit of smoke blowing out from behind the engine.

When, after some pondering, I tried turning on the car again, it wouldn’t start. After jacking up one side of the car, and going under it to fasten an alligator clip to a starter terminal (the starter being down behind the engine, in roughly the same area the puff of smoke had come from), I found that the starter solenoid seemed to be working: the voltage on its output was zero until I turned the key, then went to eleven volts and some. But by the same measurement, the starter motor was broken — since it didn’t run, and since it couldn’t just be jammed: the current draw of a starter is large enough that the voltage would be lower than that.

I finally got around to considering the possibility that the starter had run away, something I don’t recall ever hearing about, but of which Google quickly found me many examples. Of course how the starter would have run away was not entirely clear, since the starter solenoid was now working properly, as were all the circuits feeding it. But if it had stuck on, the bang at the end might have unstuck it. In any case, the starter plus solenoid being a complete assembly, it was clearly time to pull that assembly off and order a new one.

A variety of places on the net sell starters; I chose one from Amazon — about a hundred dollars for a new starter, NSA brand. Curiously, the refurbished starters on offer mostly went for more than that. The one that had failed was, to judge from the part number on a sticker on it, itself a refurbished unit, from the “Quality-Built” corporation. On their website, they advertise that the solenoids on their rebuilt starters have “100% new contacts”, among other things.

With a new one on its way, it was time for the fun part: failure analysis. The starter was held together by two bolts that ran the whole length of the motor. Removing them was somewhat difficult; they came out slightly bent in one place. Then the front of the starter came off easily, revealing a planetary gear set, a one-way clutch attached to the output gear (and loosely mounted on a helical spline), a lot of grease, and nothing at all wrong.

Taking off the other end of the motor, though, revealed a scene reminiscent of Chernobyl. The whole volume was packed full of black-ish, somewhat fluffy debris, including bits of copper and of graphite brush. This explained the bang: something had gotten loose, slammed into something else moving fast, broken it, and a chain reaction of destruction had ensued. Here is a photograph of the recognizable pieces that remained; the photo can be clicked on to bring up a larger version:

That area of destruction was also the place in which the aforementioned bolts were a bit bent. But the real place of interest was the solenoid contacts. Being crimped in place, that portion could not be disassembled nondestructively; I got it apart by filing off the crimp. Here is a photo of the contacts (again, click for a larger version):

This is a decent contact design. The contacts are copper. The copper washer that closes the contact is loosely mounted, so that it can rotate, evening out its own wear. It can also swivel a bit to make good contact even if one of the two fixed contacts wears down (as one has). It lasted through several years and thousands of starts, so I can’t complain too much. But in the end it wasn’t enough to reliably switch the hundreds of amps that a starter draws. The contacts, which started out smooth, roughened from sparks upon opening. Eventually they roughened to the point where the resistance was high enough to generate serious heat, and to weld them together. The places where they welded are clearly visible in the photo, as fresh copper which was exposed when the weld was cracked away.

My impression is that the way to make contacts like this reliable is to make them out of silver rather than copper. But silver costs money. A thin layer of silver won’t do it, because silver gets eroded too, in this duty. And even a thick layer of silver eventually fails.

The drive gear off the old starter wasn’t much chewed up by the runaway. Nor, getting under the car and looking at it, was the ring gear. Properly hardened steel seems to have been used throughout.

The replacement starter looks good, and has worked fine so far. The only curiosity about it is that it comes with a two-year roadside assistance plan. This is rather odd, since the cost of two years of AAA roadside assistance is about the same as the whole cost of the starter. Of course the plan that comes with the starter is not provided by AAA, but rather by another company, “Auto Road Services Inc.”. That company advertises on their website that:

> For just pennies per unit, you too can give your customers the added value of FREE emergency roadside assistance, and your company the marketing edge over the competition.

which of course tells me, the customer, the most that their plan could be expected to be worth to me: “pennies”. After their profit and overhead is taken out, it might even have a negative expected value. According to the plan description that came with the starter, one has to call their 800 number, use the roadside assistance provider that they dispatch, pay him his full fee, then send in lots of paperwork to them to get reimbursed. The ways in which they could sleazeball this are too numerous to mention. Not that I care; I’m just glad the starter’s manufacturer didn’t spend too much on this marketing gimmick.

One thing I noted when Googling for “runaway starter” is that some people advise against doing the obvious, and disconnecting the battery. Now, it’s true that in general, disconnecting the battery while the engine is running is a bad idea: it’s called a “load dump”, and can cause the voltage to rise excessively, damaging the car’s electronics. But in this case there’s zero danger: the alternator, which is what produces the excess voltage in the load dump scenario, is not running in the first place. Even if it were running, the runaway starter is sucking down so many amps that the voltage could hardly rise. The “load” that gets “dumped” in a load dump is the current going into the battery; but here current is coming out of the battery.

### Setting text width in HTML

This blog quite intentionally has very little formatting. “Quite intentionally”, because not only does it save my effort, but also lets mobile devices with tiny screens format the text the way they want, without having to fight my formatting. But there’s one piece of formatting code I use: limiting the width of the text column. That is a principle of typesetting that I disliked at first, but eventually accepted: long lines are just too hard to read; the eye too easily loses its place when scanning back to the left to get to the start of the next line.

Though a lot of sites limit text width, usually, from what I’ve seen, it’s done badly:

• Specifying text width in terms of pixels. This produces annoying results for people with bad eyesight who use huge fonts, and for people who have portable devices with lots of microscopic pixels (such as what Apple calls a “retina display”), and who thus also use huge fonts (that is, huge when measured in pixels). It also can fail for people who have displays narrower than the specified number of pixels, since they can end up with lines that go off the edge of the screen, and need to keep scrolling the screen back and forth for each line that they read.

• Specifying text width as a proportion of the screen width. This won’t overflow the screen, but may produce columns with annoyingly many or annoyingly few characters.

The best way to specify text width is relative to the font size. HTML provides the “em” unit (traditionally the width of a capital “M”; in CSS it is simply equal to the font size). About 35 of those translate into about 75 characters of average text, which is what Lamport’s LaTeX manual says is the maximum width one should ever use. (Personally, being an exceptionally fast reader, I don’t mind twice that width; but this blog is for other people to read, not for me. And above twice that width, even I start to get annoyed.)
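The 35 em / 75 character rule of thumb implies an average character a bit under half an em wide; just the arithmetic from the numbers above:

```python
# 35 em of column width holding about 75 characters of average text:
chars_per_em = 75 / 35    # about 2.14 characters per em
em_per_char = 35 / 75     # about 0.47 em per average character
print(round(chars_per_em, 2), round(em_per_char, 2))
```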

One can set the width using HTML tables to divide up the screen into columns whose width is specified in “em” units; and there’s not too much wrong with that. But a width specified that way might be too large for smaller screens. Fortunately the CSS standard provides a way to set an upper bound on the width, without using tables:

<style type="text/css">
.foo { max-width:35em }
</style>


The above goes in the “head” section of the HTML file. To use that style, one then writes:

<div class="foo">
Text whose width is to be limited goes here.
</div>


It’s simple, and precisely what is needed: it produces a column 35em wide, unless the screen is narrower than that, in which case the column fits the screen. The “class” attribute can also be set for other HTML elements, such as <body> or <p>, so one doesn’t need to add extra <div>s if one doesn’t want to.

### Blogging software

The weblog software that people seem to choose by default these days is Wordpress. Wordpress has a lot of features, is widely used and liked, and is offered as a free single-click install by a lot of web hosting providers. But several of the Wordpress blogs I follow have been hacked at some point. When I looked into blogging software, the reason became clear: Wordpress is a large piece of software, written in PHP, a language which was originally designed in a world where security concerns were much less significant, and which has addressed those security concerns (and other evolving needs) by adding things, not by a fundamental redesign. (UPDATE: it appears I was being far too generous to PHP in saying that it had been ‘designed’.) The result is a rather large, complicated language, which is hard to learn well enough to master all the security issues. Also, Wordpress uses an SQL database to store weblog entries, comments, and such, which opens up possibilities of SQL injection attacks. The single-click install is easy, but upgrading is not so easy; and if one runs the software for any length of time, one has to upgrade much more often than one has to install.

A lot of other blogging software, too, uses SQL databases to store weblog data. But databases add complexity; for one thing, to back up a database-driven weblog means issuing special commands to back up the database, in addition to doing the normal backup of the weblog’s files. The added complexity might be worthwhile if there were any real need for a database, but there normally are few enough weblog entries that using a file for each one is quite practical; and once written, they seldom change.

I suspect that the reason why blog software commonly uses databases is that PHP makes using SQL easy, and doesn’t make other ways of storing data as easy. In any case, it’s quite inefficient: even though weblog pages hardly ever change, the PHP/SQL combination means that each time a user asks to view a web page, a PHP process gets started up (or woken up), sends queries to an SQL server, receives the results, and rebuilds the web page using them, adding the headers, sidebar, and other formatting that the user has chosen. The sidebars often take further SQL queries. Due to this inefficiency, database-driven blogs are routinely brought to their knees when they draw huge traffic (as in “slashdotting” or “instalanche”). Right when a weblog is getting the most attention is exactly the wrong time for it to fail. There are various optimizations that can improve this — for one thing, PHP can be left running inside Apache (mod_php) rather than re-started for every request (CGI); and there are also plugins which cache the resulting web pages rather than rebuilding them every time. But installing and maintaining one of those plugins is additional work; and even they don’t bring the efficiency up to the level that static web pages naturally have.

Of course you can easily move a Wordpress blog to wordpress.com, and let them handle issues like caching and keeping the software up to date. That’s how they make their money: by selling advertising on the blogs they host, and/or charging those blogs for premium features. The blogging software they give away is not a revenue source; indeed, if they were to make it too easy to maintain, they’d be sabotaging their revenue source.

I don’t grudge them their revenue — the people who write blogging software do need to eat — but personally, I feel like going to the other extreme. Thus this blog is done in PyBlosxom, a small file-based blogging package written in Python, which I’m using in static-rendering mode, where rather than being run each time someone visits, it is run once and generates all the web pages for the entire blog. PyBlosxom’s default mode has the author writing blog entries in HTML; I’m using a plugin that provides for writing them in Markdown.
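The static-rendering idea is simple enough to sketch in a few lines of Python. This is not PyBlosxom’s actual code, just an illustration, with made-up names, of the file-per-entry, render-once approach:

```python
import pathlib

# Each post is a plain text file whose first line is the title; rendering
# runs once and emits one static HTML page per post.
TEMPLATE = """<html><head><title>{title}</title></head>
<body><h1>{title}</h1><pre>{body}</pre></body></html>"""

def render_all(src_dir, out_dir):
    """Render every *.txt post in src_dir to a static HTML page in out_dir."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for post in sorted(pathlib.Path(src_dir).glob("*.txt")):
        text = post.read_text()
        title, _, body = text.partition("\n")    # first line is the title
        page = TEMPLATE.format(title=title, body=body)
        (out / (post.stem + ".html")).write_text(page)
```

Backing up such a blog is just copying files, and serving it is just serving static pages; there is no database to dump, and nothing runs per-request no matter how much traffic arrives.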

### Welcome

There are a number of things which I’ve accumulated, as being good to write, but which I either haven’t written or have written for a very limited audience. They cover a wide variety of topics, and range in scope from technical details to the largest of questions. Here they come…