From: Dennis Ritchie <dmr@bell-labs.com>
Newsgroups: comp.arch
Subject: Re: bit counting
Date: Wed, 27 Oct 1999 19:22:40 +0100
Mike Albaugh wrote:
...
> More recently, some claim that C's "++" and "--" operators
> were based on PDP-11 auto-[inc/dec]crement address-modes. This claim
> has been debunked several times, but I thought I'd mention it as a
> "pre-emptive strike". The 509-character "minimum, maximum line" limit
> in C seems to correlate pretty well with a 512-byte disk-sector, minus
> some overhead, so I have my suspicions... :-)
Prize to the person who digs up my earliest posting on this.
(BTW, the browsable version of the paper mentioned by Seebach is
http://cm.bell-labs.com/cm/cs/who/dmr/chist.html ).
On the second, the line limit, that was an invention of
the standards committee. Our compilers had no line-length
limit at all.
However, the float-to-double promotion in C was mostly
because of the way the PDP-11 FP instructions worked.
(f vs. d was a CPU mode).
Dennis
From: jgregor@engr.sgi.com (John A. Gregor, Jr.)
Newsgroups: comp.arch
Subject: Re: bit counting
Date: 27 Oct 1999 22:38:03 GMT
In article <381742F0.BA285176@bell-labs.com>,
Dennis Ritchie <dmr@bell-labs.com> wrote:
> Mike Albaugh wrote:
>> More recently, some claim that C's "++" and "--" operators
>> were based on PDP-11 auto-[inc/dec]crement address-modes. This claim
>> has been debunked several times, but I thought I'd mention it as a
>> "pre-emptive strike".
> Prize to the person who digs up my earliest posting on this.
> (BTW, the browsable version of the paper mentioned by Seebach is
> http://cm.bell-labs.com/cm/cs/who/dmr/chist.html ).
Ok, what do I win? :-)
-JohnG
| From: dmr@dutoit.UUCP
| Newsgroups: net.lang.c
| Subject: Where did ++ come from?
| Message-ID: <2140@dutoit.UUCP>
| Date: Sat, 21-Jun-86 04:22:22 EDT
| Article-I.D.: dutoit.2140
| Posted: Sat Jun 21 04:22:22 1986
| Lines: 36
|
| Phaedrus@eneevax guessed that a lot of notation in C came from PDP-11
| assembly language, and Chris Torek's reply did indeed drag me out
| of my torpor.
|
| Nothing in the C syntax came from the PDP-11, because all the relevant
| parts were imported from B, which was in use before the PDP-11
| existed. In fact things are somewhat the other way around; the reason
| the Unix PDP-11 assembler resembles B (and C) more than does DEC's, is
| that I wrote the first Unix PDP-11 assembler, in B, before we had a DEC
| assembler. It was written from the machine description. It used * and
| $ rather than @ and # because the former were analogous respectively to
| the B notation and to other assembly languages I knew, and (equally)
| because @ and # were the kill and erase characters.
|
| As to ++ and --: these were Thompson inventions as far as I know,
| or at least the idea of using them in both prefix and postfix form.
| No doubt the autoincrement cells in the PDP-7 contributed to the idea,
| but there was a significant generalization, or rather isolation of the
| significant operations into ++ -- and *, as Chris pointed out.
|
| If you haven't heard of autoincrement cells, here is the idea: certain
| locations (010-017) in low memory in the PDP-7 (and also the -8, but
| just one cell, probably 010), acted like ordinary memory locations,
| unless indirection was applied through them. In that case, after the
| indirect reference, 1 was automatically added to them. It was useful
| for stepping through arrays, especially because these machines lacked
| index registers.
|
| * came from the version of BCPL we were using. (Pure BCPL used "rv"
| for "*" and "lv" for "&").
|
| By the way, B had assignment versions of all the binary operators,
| including === and =!=. Since it didn't have &&, the question of =&&
| did not arise. The ones missing from C were dropped for lack of interest.
|
| Dennis Ritchie
From: Dennis Ritchie <dmr@bell-labs.com>
Newsgroups: comp.std.c
Subject: Re: preprocessor history
Date: Wed, 03 May 2000 02:44:53 +0000
Jonathan Thornburg asked:
> * Who first suggested that C should have a preprocessor?
> * At the time it first appeared, what problems was it intended to address?
We were already used to both PL/I (with a preprocessor that was
notionally distinct as a macro-processor) and BCPL, which had
an include facility, but one more a part of the language,
in that
get 'HEADING'
was in the syntax. A C preprocessor of a sort went in fairly
early. I suspect that it may have been in two stages; first
just "#include", then simple substitution (#define and #ifdef).
I think that Alan Snyder may have argued for expansionism here.
In Fifth and Sixth edition, the preprocessor was built into the
cc command, and it seems to have done only #include and #define
of fixed strings (no macro arguments), plus #ifdef and #ifndef.
Between Sixth and Seventh, Mike Lesk added macro arguments and
#if (and thus preprocessor expression evaluation). I don't have
the source for this--it might be on the "Typesetter C" distribution.
I'm also not sure whether this was still integrated into the
cc command or whether it was a separate program; I suppose there
is a lacuna here.
After this, Reiser did the "modern" version, which did not
change the syntax too much, but did have some more subtle
differences of the sort that ANSI and ISO committees need
to lose sleep over. E.g. in the README about differences,
it says
* Macros with parameters are legal in preprocessor '#if' statements.
* Recursion in macro definitions is strictly obeyed (to the extent that
space is available).
This last was changed even by us, post-Reiser, pre-ANSI: John, a purist,
really wanted
#define a a
a
to cause a loop in the preprocessor. After all, wasn't that what
the programmer asked for? Our fix was to impose some sort of count
limit, which Reiser added with an option to turn it off;
ANSI's later solution was reexpansion-prevention by
ways more complicated to explain.
> * Who designed the cpp "language" (syntax/semantics)? (I know Reiser
> did the first *implementation*, but I'm asking about *design*.)
> * What other languages' (if any) compile-time facilities influenced the
> cpp design? Was there any influence from the (baroque) PL/I facilities?
> * Did all the K&R-classic preprocessor commands (#include, #define, #ifdef)
> appear in the first design, or were some of them added later?
As you see, it just grew. And as you can see from this note (and also
the C history paper), I was dubious about putting too much gadgetry
into the preprocessor language. This may have been a mistake, since
it appeared anyhow.
> * What were the perceived uses of #ifdef back before "portability" was
> a major issue?
As of 7th Ed, about the only thing that uses #ifdef much is Reiser's cpp!
It is full of
#ifdef gcos
#ifdef unix
A few other programs, however, use
#ifdef DEBUG
liberally. The file-system checking routines use
#ifdef STANDALONE
to generate versions suitable for running on bare hardware, not
under the OS. The OS itself is free of them except that
#ifdef DISKMON
is there to turn on statistics-keeping for the disk drivers.
> * When did the ability to define preprocessor macros on the compiler
> command line ('cc -DFOO ...') appear?
This too was between 6th and 7th editions; I don't remember whether
it was the Reiser version that did it first.
Dennis
From: Dennis Ritchie <dmr@bell-labs.com>
Newsgroups: comp.std.c
Subject: Empty objects
Date: Wed, 16 Sep 1998 05:59:12 +0100
I just slipped a copy of the August draft to
Doug McIlroy, a long-time believer in handling
the end-cases properly. He writes me (and with
permission I post) his observations. We know
this is just putting the chicken into the fox-house
and it's not the time to rethink the issue from the start;
but this is usenet, after all. Observe that in referring
to malloc he is not talking about an always-failing
version; rather, he would like malloc(0) to
succeed. Likewise int a[0].
I'll point out (to his content) that the draft is more
explicit on the edge-case for the string functions
than the current standard.
Dennis
Doug's note:
Sigh. The new draft not only fails to fix the
mistake of allowing malloc(0) to fail unconditionally,
it compounds the error by forbidding VLA's to take
on length zero. For consistency (and efficiency)
the draft might well also emulate Fortran instead
of Algol in specifying the semantics of the for
statement, since it is useless to have the loop
work when the declaration doesn't in places like
this:
int a[n];
for(i=0; i<n; i++)
a[i] = 1;
To further discourage the bootless proliferation
of vacuous data structures, it would be well also to
forbid zero as an initializer for pointer variables.
The greatly reduced likelihood of empty lists and
trees would also argue for Fortran semantics in
loops like
for(q=p; *q; q=q->next)
...
From: Dennis Ritchie <dmr@bell-labs.com>
Newsgroups: comp.std.c
Subject: Re: Initializing Automatic Arrays
Date: Tue, 29 Sep 1998 18:27:02 +0100
Initializing automatic arrays was explicitly not in K&R 1;
it was added by the current standard. It was such an obvious
extension that it was probably added as an extension to
some compilers before the standard.
Another change was a bit stranger, namely the one discussed in
another thread; if fp is a function pointer then all
of
fp(), (*fp)(), (**fp)(), (***fp)() ...
are allowed and all the same. In K&R 1 function names
did not decay to pointers in function call position, and
the thing in function call position had to be a function,
not a pointer; the only correct form was
(*fp)()
This was enforced in the PDP-11 compiler. For reasons known
only to himself, Johnson in PCC decided to accept both fp()
and (*fp)(). The rest were an accidental effect. The committee
(presumably because it had become practice) decided to change
the rules to regularize it.
Something similar (but with different result) happened with
arrays. In K&R 1, given an array A, &A was simply not allowed.
Johnson allowed &A, with the same meaning as A. Here, however,
the committee decided to extend K&R 1 consistently, with &A
being a pointer to A. This was the right thing to do because
there are (rare) occasions when it's needed.
The problem with the PCC interpretations of these is that
as a "favor" to the user, it complicated further an already somewhat
difficult type notation. (I plead guilty to somewhat the same
thing in allowing [] in function arguments.)
Dennis
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Newsgroups: comp.std.c
Subject: Re: __STDC__ (was: gcc -pedantic)
Date: Sat, 11 Sep 1999 00:51:46 GMT
David R Tribble wrote:
> Well, the problem I specifically wanted addressed was for conforming
> implementations *that provide extensions* to have a way of indicating
> that they are still conforming but provide extensions. I assumed
> that __STDC__ was the standard-approved way that an implementation
> would do this, since there is no other standard mechanism for it.
No; if they don't define __STDC__ to expand to the
decimal constant 1, they wouldn't be conforming!
> How exactly does a compiler that supplies, for example, an open()
> function in its <stdio.h> header file, but which is conforming in
> all other respects, indicate this fact?
It is not a conforming implementation! One major
goal of the C standard is to get implementors to
stop putting random crap in the standard headers!
Such implementations don't *deserve* any help from
the C standard, as they are interfering with source
code portability. (I have been stung by such stuff
many times in the past; it got a *lot* better when
vendors started trying to conform to the standard.)
> Current practice typically is to define __STDC__,
> but to define it with the value 0.
That is only "current practice" in some environments.
I went around and around with AT&T's C implementors
on this issue, but they thought it would be "helpful"
to their customers to indicate ... what exactly,
I don't know.
> Hence my point that the current definition of
>__STDC__ is flawed, or at least inadequate, for this task ...
Since __STDC__ was not intended by the C standards
committee to be used for that task, the problem lies
with the abusers of __STDC__. What the committee
intended was for usage like
#if __STDC__
#include <stdlib.h>
extern size_t func(const char *);
#else
#define NULL 0
typedef unsigned size_t;
extern size_t func();
#endif
Some of us also intended that
#ifdef __STDC__
could be used in such cases, but that is unworkable
now that the symbol has been abused by various vendors.
From: torvalds@old-penguin.transmeta.com (Linus Torvalds)
Newsgroups: alt.sys.pdp10,alt.folklore.computers,comp.arch,comp.lang.asm370
Subject: Re: A Dark Day...
Message-ID: <bc2i64$c7g$1@old-penguin.transmeta.com>
Date: Mon, 9 Jun 2003 18:03:48 +0000 (UTC)
In article <3EE3D589.F3EACF27@yahoo.com>,
Peter Flass <peter_flass@yahoo.com> wrote:
>
>Yes, but there are languages that encourage bad programming and
>languages that discourage it. It *is* possible to write PL/I code to
>cause buffer overflows, but you really have to work at it. In C, you
>have to work at least as hard to avoid them. That's not just my
>thinking, the Multics retrospective paper published last year says this
>flat out.
Ehh.. You might as well phrase that as "It *is* possible to
write PL/I code that does something half-way portably useful, but you
have to really work at it.".
Sure, you can write bad programs in C, and C allows you to do that.
But C allows you to do that exactly because C allows you to do pretty
much _anything_. There are no barriers. You can, as a programmer, say
"this is what I want to do", and C will just do it. It won't question
your sanity, and it also won't say "yeah, you can do it, but you need to
jump through hoops to show you really want to first".
C says "you're the boss", and just does it. Without any politics,
without any hidden agenda by silly language designers.
You appear to want programming languages that enforce things, to avoid
buffer overflows etc. But think about _why_ C is a popular language for
a second first. I claim it is exactly because it doesn't get in your
way - sure it allows you to shoot yourself in the head, but it does so
because it allows _anything_.
And I'll take a language that allows me anything, over a language that
thinks the programmer needs hand-holding and limiting. Is it a bit
chaotic? Sure, but the power means that you can write libraries and
complex infrastructure in the same language, without having strange
barriers.
Try to do the same in just about any language, and you'll most likely
fail. Look at how many languages have C bindings just because the
languages themselves are not able to do some things reasonably.
When you try to restrict what people can easily do, you will FAIL.
Linus