Date: Wed, 3 May 89 23:15:27 -0400
Subject: Re: Use of "Standard" on sensitive applications
Generally rather similar to the risks involved in using other "standard"
tools like compilers, assemblers, text editors, front-panel switches :-),
operating systems, etc.: there is always a chance that the tool has not
been fully tested and will do the wrong thing silently, or that it will
not catch user errors that it is supposed to catch.

>Is it reasonable to set some criteria ...

Lengthy use tells you something about the average density of bugs in the code,
but won't necessarily tell you about the one bug that's in precisely the wrong
place. Thorough validation suites are better, although rarer.
Better yet are validation suites for the *application*, ones which do their
best to stress its components. (Note: this is not the same as "black box"
validation suites written with no knowledge of said components.) The fact is,
even well-proven tools can have obscure bugs lurking in them. Case in point:
the C compiler in V7 Unix, an unambitious compiler written by a very good
programmer and exhaustively shaken down by widespread use, had a bug in its
32-bit-divide routine that was not found until people -- specifically, some of
my users -- stumbled over it. The code made some assumptions about the
hardware that were true, at least most of the time, of older pdp11 processors
but not of the new 11/44 we had. The most interesting part was that my fix for
the problem appears to have also cured some much rarer misbehavior found even
on older processors. The values returned by that routine may have been wrong,
occasionally, all along.
One simply cannot afford to place implicit trust in *any* of the tools used to
build a sensitive application. As with "end to end" arguments in networking,
to be sure that the final product is right, one must test it directly and not
rely on trusted tools.
Henry Spencer at U of Toronto Zoology