Newsgroups: comp.arch,sgi.competition
From: mash@mash.wpd.sgi.com (John R. Mashey)
Subject: Re: Multicpu's ready for the masses yet? [really 64-bit stuff]
Lines: 199

In article <2aqp8s$rob@mail.fwi.uva.nl>, casper@fwi.uva.nl (Casper H.S. Dik) writes:
|> rpeck@pure.com (Ray Peck) writes:
|> 
|> >In article <mck4dfINN8jl@exodus.Eng.Sun.COM> ram@shukra.Eng.Sun.COM (Renu Raman) writes:
|> >> - just as MIPS folks chose to
|> >>jump into 64-bitness and ISDN with us and many others.

Hmm, that reads pretty strange, at least the first part.

|> 
|> >Uh, which company had 64 bit machines shipping before the other had
|> >its 64-bit architecture defined?  Who chose to jump into 64-bit with whom? 
|> 
|> You mean Sun said: look others have a 64 bit architecture defined
|> lets make one? And published theirs a month later? C'mon, it takes
|> a lot more time than that to define a 64 bit architecture from a
|> 32 bit one.

Just to be clear on the history:

1) The R4000's 64-bittedness and details thereof became public
in late January/early February 1991, see Microprocessor Report, Feb 6, 1991,
for example: 
	"The surprise is that it also extends the architecture to 64 bits..."


2) SPARC V9's 64-bittedness and details thereof became public
in late January/early February 1993, see Microprocessor Report, Feb 15, 1993,
for example, i.e., 2 years later.

3) Somewhere [that I can't find now], I recall seeing that SPARC International's
efforts on this started in real earnest in March 1991.  Certainly, Sun (and
especially HaL) must have been thinking about it beforehand... 
but it was *2 years later*
before an equivalent level of public disclosure occurred, not the next month.

4) Re: "surprise" noted by M-R (which is fairly hard to surprise :-)],
	a) The reason there was surprise is that we told *almost* no one,
	   and we never wrote it down on nondisclosure papers that went
	   outside the company.  I.e., there was a nice, succinct 20-page
	   NDA presentation on the R4000 ... which said nothing about 64-bit.
	b) As it happens, a prospect who got this presentation (under NDA)
	   provided a copy to Sun (in mid-to-late 1990).
(item c is somewhat indirect, but was thru reliable sources):	   
	c) When the R4000's 64-bittedness started to come out, we heard things
	   back like "XXX (of Sun) says it isn't really a 64-bit chip";
	   in fact, if you'd read the NDA presentation, this is exactly what
	   you'd think: a 32-bit chip with a 64-bit bus...

BTW: I've seen people rant and rave at vendors for not being willing
to provide all details of future chips under NDA ...  it is experiences like
b) that cause vendors to be careful...  This is also the reason that you are
 sometimes willing to tell people things under NDA that you won't write down
for them under NDA... :-)


Finally, people have wondered about doing 64-bits as early as we did, given that
there wasn't any software that made use of it at the time.
Let me try one more time [this has been explained lots of times over the
years, but it keeps coming up, so here it is again]:

1) The timing for this was derived from looking at:
	a) DRAM trends: 4X bigger every 3 years.
	b) Maximum physical memory on microprocessor-server machines ...
	   which also gets 4X larger every 3 years.
	c) And observing that virtual memory can often be 4X larger than
	    physical memory and still be useful [Hennessy says I'm being
	    conservative, in the face of increased use of memory-mapping,
	    but I've personally seen  4X, so I use it.]
I've drawn charts that used Sun/MIPS/SGI servers ... and the conclusion
was that 1993/1994 would be when the leading-edge pressure started hitting for
64-bit addressing, and that since software wasn't going to materialize
overnight (surprise!), one needed to have:
	1991 - first chips
	1992 - chips in systems
	1993 - get OS and 64-bit tools ready (internally)
	1993/1994 - start getting applications, so they'd be ready for
		leading-edge folks (1994), and others (1995).
Also, unlike some of the past new-CPU cases, where it was relatively
easy to get high-performance emulations of a new CPU on an old CPU [i.e.,
like when MIPS was using VAXen to simulate R2000s via object code
conversion from R2000-object code to VAX object code, getting about
50% of the VAX 11/780 performance], it is *not* so fast to simulate
full 64-bit machines on 32-bit machines, and there is a lot of OS work to do.
Hence it is good to have real hardware to try things out with...
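
To put rough numbers on the extrapolation above: a trivial sketch, where the
1991 starting memory size (512MB) is an assumption for illustration only; the
4X-every-3-years and 4X-virtual-over-physical ratios are the ones given above.

	/* Sketch of the timing extrapolation: grow physical memory 4X every
	 * 3 years from an assumed 1991-ish high-end micro server, and allow
	 * useful virtual memory to be ~4X physical.  Illustrative only.
	 */
	#include <stdio.h>

	int main(void)
	{
	    double phys_mb = 512.0;     /* assumed 1991 high-end physical memory (MB) */
	    double limit_mb = 4096.0;   /* the 32-bit (4GB) addressing ceiling (MB)   */
	    int year;

	    for (year = 1991; year <= 2000; year++) {
	        double virt_mb = 4.0 * phys_mb;   /* virtual usefully ~4X physical */
	        printf("%d: physical ~%6.0f MB, useful virtual ~%7.0f MB%s\n",
	               year, phys_mb, virt_mb,
	               virt_mb > limit_mb ? "   <-- past 32-bit addressing" : "");
	        phys_mb *= 1.5874;      /* 4X every 3 years ~= 1.59X per year */
	    }
	    return 0;
	}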


As it happens, it's taken us a little while longer, due to:
	a) Churning around for 6 months amongst industry to agree on handling
	   (or at least, agree on how to express disagreements on handling)
	    64-bits in C on machines that also handle 32-bit environments.
	b) SGI/MIPS merger.

Compared to DEC's Alpha, we have both the luxury of having plenty of 32-bit
codes that run, and the downside that there is a little less pressure
to make everybody go directly to 64-bit (as there is for OSF/1), i.e., if I
were DEC, I'd do what they did [despite the fact that OSF/1 isn't yet shipping
on the big SMP configurations that are the ones most likely to *really want*
64-bit addressing], and if I were us, I'd do what we're doing, which is to
work the 64-bit OS and compilers into the natural release cycles,
and start at the high end and work down.

Of course, DEC would have been monstrously stupid to have introduced, in 1992,
a brand-new 32-bit architecture. :-)

2) Who wants 64-bit addressing?
In order of likely usage:
	a) Anyone with a FORTRAN program, where they could change one
	   parameter and crunch away.  Note that this is likely to happen
	   (in some cases) the day after appropriate systems are shipped,
	   since such FORTRAN codes often came off 64-bit supercomputers
	   anyway, and are already 64-bit clean.  Note that the typical
	   high cost of supercomputer memories sometimes makes it hard to
	   afford as much memory as people would like ... and there are
	   plenty of codes out there that can instantly expand to fill the
	   memory available.  Such people typically want big SMPs with
	   lots of memory.  If you haven't played in this arena, the
	   numbers are fairly mind-boggling.  For example, some people
	   may have heard about SGI's recent work with University of
	   Minnesota, where we put together an array of 16
	   Challenge SMPs, 20 CPUs and 1.75 GB each, for them to work on
	   a 1024^3 mesh homogeneous turbulence problem.  [Each of the
	   1B elements in such a mesh needs 5 32-bit floating-point values;
	   one does not casually store them as 64-bit values, by the way...],
	   in any case, one needs 20GB just for the 1024^3 cube itself
	   (the arithmetic is sketched just after this list)...
	   For what it's worth, starting in March 1993, there were already
	   (a few) customers who would have purchased 4-6GB of memory, had the
	   64-bit software been ready then.
	   Of course, there are some supercomputer-type customers who don't
	   take 32-bit CPUs seriously :-)

	b) Other codes in MCAD, and a few in ECAD [some chip simulations will
	   start running out of 32-bit addressing soon;  really nasty for
	   them is that they are often integer-based, hence vector FP doesn't
	   help much....]
	c) DBMS [because there is somewhat less pressure, and there is a lot
	   of software that will take a while to convert.]  On the other hand,
	   while disks continue to hold more and more data per square cm,
	   their access times aren't improving much, and it gets more and
	   more cost-effective to spend memory on minimizing dependent
	   chains of disk accesses.
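
As noted above, the 20GB for the 1024^3 cube is just arithmetic; a trivial
sketch, using the 5-values-per-point figure from item a):

	/* Memory for the 1024^3 turbulence mesh described in a) above:
	 * 5 single-precision (32-bit, i.e., 4-byte) values per mesh point.
	 */
	#include <stdio.h>

	int main(void)
	{
	    unsigned long long points = 1024ULL * 1024ULL * 1024ULL; /* ~1.07e9 points */
	    unsigned long long bytes  = points * 5ULL * 4ULL;        /* 5 values x 4 bytes */

	    printf("%llu points -> %llu bytes (~%.0f GB)\n",
	           points, bytes, bytes / (1024.0 * 1024.0 * 1024.0));
	    return 0;
	}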

3) Finally, there is one last, somewhat subtle issue about timing,
which many people do not understand.
Consider the 286->386->486 transition: for a long time, 386s and even 486s were
mostly running 16-bit code.  Hence you could claim that 32-bittedness was a
waste for the 386, and maybe even the 486.  On the other hand, the huge
installed base makes sure that 32-bit OSs and apps have a much bigger
potential market, than if (for example) only Pentium were a 32-bit chip.

Although not quite as strong an effect (16-bittedness is a worse
constraint than 32-bittedness), there is a similar effect for the 32->64
bit transition.  Here are some choices:
	a) In 1993, with a brand-new chip, go straight to 64-bit for UNIX
	  ... even if there are few system configurations
	  ready where people must have it.
	  [DEC Alpha]
	
	b) In 1991-1992, you introduce chips that can run existing
	   32-bit apps, can be used for 64-bit OS and compiler development,
	   and in fact, you convert your entire product line to use such
	   chips.  As OSs become available that continue to run 32-bit apps,
	   but also run 64-bit apps,  they start at the top and can work their
	   way down as far as there is demand.  In particular, wherever it makes
	   sense, the 64-bit OS might be installed on existing machines.
	   For example:
		1) It is always nice to have cheaper machines for software
		   development.
		2) Suppose a new application port is mostly driven by
	  	   64-bit on big machines ...  but can be useful on smaller
		   ones.  It's nice to have the chance to do that.
	  In general, one expects that by the time the 64-bit OS becomes widely
	  available, the overwhelming majority of the active installed base
	  will at least have 64-bit-capable CPUs, and the decision to push the 64-bit
	  OS further and further down the product line can be taken on
	  customer demand, release schedules, etc ... rather than simply being
	  impossible.
	  [MIPS R4000 & up]
	
	c) In 1995, you can introduce systems products with your first
	   64-bit chips, under circumstances where those chips tend to be
	   limited to the higher-performance/lower-volume part of your
	   family.
	   [Sun UltraSPARC, IBM PPC 620, according to published info,
	    i.e., where the low-end/mid-range family members remain 32-bits.]

Anyway, one might claim that a) and b) are premature; in any case, different
vendors have different kinds of customers, and hence perhaps everyone is
doing something reasonable.  Nevertheless, I would still claim that IF
you believe 64-bits is important, then I think you are better off rolling
over your hardware base onto 64-bit-capable machines *EARLIER* than
you'd expect, mostly because software takes a while, and having
64-bit-capable hardware in the field may help pull software later.


	

-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311


Subject: Re: Alpha database performance
From: mash@mash.engr.sgi.com (John R. Mashey) 
Date: Dec 02 1995
Newsgroups: comp.sys.powerpc,comp.sys.intel,comp.benchmarks,comp.unix.solaris

64-bit machines make *serious* sense for DBMS, based on technology
fundamentals, not just on marketing & current benchmarks: 

1) DRAM memory density increases 4X every 3 years.

2) The traditional latency improvement in disk access is 1.3X every
10 years, although lately it's been doing better.

3) Depending on how a DBMS is organized, and what you're doing with it, it may
be *very* advantageous to at least get most of the pointers to data into
memory. It is usually unlikely to get the data itself into memory:
        a) If a transaction involves chasing pointers, B-trees, indices, etc,
        and it takes a dependent sequence of disk operations to obtain the
        actual data, then one of the bounds on performance is the number
        of disk accesses, i.e., it is a latency bottleneck, not necessarily
        a bandwidth bottleneck.  [You can see the same effect in classic
        UNIX file access, i.e., where pointers to the first few blocks of
        a file are direct, and quick, and pointers to the end of a big file
        have multiple indirect pointers, and are not so quick.]

        b) Note that disk striping improves the bandwidth, but does little
        for small-block-dominated latency.  For example, we've seen
        300MB/sec reading/writing a single (striped) UNIX file ...
        but it still takes the same N milliseconds to get the first byte of
        data as it does from a single disk.
        
        c) In the absence of magically-faster disk drives or some equivalent
        technology, DBMS workloads dominated by disk-pointer-chasing can
        get better response time by caching more pointers in memory ...
        and there's not much in the way of other obvious solutions.
        Put another way: the fundamental technology trends of
        "Capacity rising quickly, latency not dropping quickly"
        are very strong influences.
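
A crude model of the point above; the 10ms disk and 1us memory costs are
assumptions for illustration, but the shape of the result is the point:

	/* Crude model of a pointer-chasing transaction: response time is
	 * dominated by the number of *dependent* disk accesses, each paying a
	 * full disk latency.  Caching index/pointer levels in memory removes
	 * dependent accesses; striping does not.  Numbers are illustrative.
	 */
	#include <stdio.h>

	int main(void)
	{
	    double disk_ms = 10.0;   /* assumed latency per disk access (ms)    */
	    double mem_ms  = 0.001;  /* assumed in-memory lookup cost (1 us)    */
	    int levels = 4;          /* index/pointer levels + final data block */
	    int cached;

	    for (cached = 0; cached < levels; cached++) {
	        double ms = (levels - cached) * disk_ms + cached * mem_ms;
	        printf("%d of %d levels cached in memory: ~%.3f ms per lookup\n",
	               cached, levels, ms);
	    }
	    return 0;
	}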

4) As a result, various DBMS vendors have done, or are seriously doing,
64-bit ports.  Note that best use of 64-bit may be accomplished by
rethinking how the memory is used, not just by doing a straight 32->64-bit
conversion.   Mainframes went through this a while ago, sort of,
i.e., the IBM ESA-architecture mainframes added some extra mapping operations
to give access to >31-bit addresses, since 31 bits weren't enough, and the reason
was explicitly for DBMS (DB2).

5) So, regardless of whether or not the existing benchmarks are convincing
to people, all you have to do is graph the memory sizes of refrigerator-sized
microprocessor servers over the last few years, and know that disk latency
isn't getting better all that fast, to know that some DBMS applications
will benefit from large main memories, which, in fact, do not even cost
that much as a fraction of a machine with terabytes (or even 100s of GBs)
of disk.


-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090    FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311


From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: Ultrasparc 1 has no 64 bit pointers?
Date: 19 Apr 1996 01:56:50 GMT

In article <zalman-1804961457410001@198.95.245.190>,
zalman@macromedia.com (Zalman Stern) writes:

|> Maybe John Mashey will comment directly, but he's basicaly said on various
|> occasions that the main purpose of 64-bit addressing on the R4000 was to
|> give the MIPS camp a headstart on 64-bit computing.

1)  "64-bit Computing", BYTE, Sept 1991, 135-142 went through all of this,
a long time ago.  I hate to keep pointing at this, but really, most of
the issues were covered there, a long time ago, about why people
would do 64-bit, and how, and when ... and was fairly close.

2) SGI & other dates, to fix some misapprehensions posted in this sequence:
1Q92	R4000 first system (Crimson), still 32-bit OS, of course.
4Q92?	Alpha 64-bit UNIX systems
1Q93	By then, all SGI systems were 64-bit CPUs ... running 32-bit OS
3Q94	R8000 first system (Power Challenge) SGI, 64/32-bit UNIX
1Q96	R4400/R8000/R10000 IRIX 6.2 (64/32-bit), i.e., all of the bigger
	systems plus desktops with R8K/R10K ship 64/32-bit 6.2.


3) DEC & SGI took slightly different paths, each of which made sense.
With a brand-new architecture, DEC elected to take the leap directly to a
64-bit programming model, not unreasonable, even if it caused some pain
in application acquisition [that is, many ISVs had to port software for
which there was actually no particular benefit for 64-bits], but this was
definitely simpler ... and I probably would have done the same.
Like most other vendors, SGI had an installed base, with lots of 32-bit
binaries floating around, and wanted to maintain a mixed 64/32-bit
environment for a long time, and it took a while to work up to it.
[It takes more work to clean up UNIX to mix 32- and 64-bit code].
Most vendors will take the same path as SGI; that doesn't mean DEC
was wrong or silly.

4) Needless to say, 3Q94 Power Challenge 64-bit would have been much harder
to achieve if R4000s hadn't come out when they did.  As I have 
explained, the whole point was to shift the installed base over to
64-bit hardware so that software transitions could happen as convenient,
BECAUSE THEY TAKE A LONG TIME.
The latter is a subtle point that is often missed & I'll return to it in
a later posting. 

5) In 1976, we had PDP-11/70s with 1MB of memory, with a task limit of
64KI+64KD. Most processes fit OK, but there were a few that wanted more
per process.
Legions of Bell Labs programmers spent years slicing and dicing software
to fit, especially databases.  This went away with the VAX, which wasn't
much faster than an 11/70, but you could throw memory at problems rather
than staff-years.

6) In 1996, we have systems where you can buy GBs of memory, and some
people do.  Some of them even want to allocate most of the memory
in a machine to one process, sometimes, and not be hassled.
Anybody who claims this is:
	- unimportant
	- not likely to happen
	- craziness by weirdo freaks
just doesn't talk to the customers who buy these things, who buy for the same
perfectly good reasons that made us prefer VAXen to 11/70s 20 years ago.

7) The first wave of people to use >32-bit addressability on micros were
typically supercomputer users, with 64-bit clean FORTRAN codes, who just wanted
to recompile their program with bigger array dimensions (sometimes a 1-line
change), and run their code with finer grids for their Fluid Dynamics or
structures, etc. codes. They can, and do, consume *any* amount of
memory the same day.  Some people have supported parallelizing compilers
for a while, so it's quite possible that somebody takes their FORTRAN
program, parallelizes it onto the whole system, and has zero interest
in being told that they need to redesign their code into multiple
processes and address spaces. 
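
The analog in C of the "change one parameter and recompile" case is simply
asking for a single array bigger than a 32-bit address space can hold; a
minimal sketch (the 6GB size is arbitrary, chosen only for illustration):

	/* A single array that cannot exist in a 32-bit address space.  With
	 * 64-bit pointers and size_t this is an ordinary allocation; with
	 * 32-bit ones the size cannot even be expressed.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
	    unsigned long long nbytes = 6ULL * 1024 * 1024 * 1024;   /* 6GB grid */
	    double *grid;

	    if ((size_t)nbytes != nbytes) {
	        printf("size_t too small: can't even ask for %llu bytes\n", nbytes);
	        return 1;
	    }
	    grid = malloc((size_t)nbytes);
	    if (grid == NULL) {
	        printf("could not allocate %llu bytes\n", nbytes);
	        return 1;
	    }
	    printf("allocated %llu bytes in one address space\n", nbytes);
	    free(grid);
	    return 0;
	}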

But, you say: lunatic fringe?  Well, how about:

8) DBMS
	1) CPUs get faster, DRAMs 4X every 3 years, and disks seem to be
	on a 2X capacity increase in 18-24 months.  This is good.
	2) But disk access times improve slowly.  This is very bad.

	So, maybe it would be a good idea if a DBMS were to:
		keep frequently-used relational tables in memory?
		keep more of the pointers to data around in memory?
		try to reduce the number of sequentially-dependent disk
		accesses needed to get to data?
	Maybe performance would improve?
	Maybe a huge chunk of the computer business cares?

	Note: this does *not* mean I think that the main use of this is
	to fit an entire DBMS in memory; occasionally, that's a great idea,
	and it's certainly wonderful for benchmarking :-)

	Note that DEC Alphas *made* some DBMS vendors get 64-bit-clean code,
	even if they didn't rearchitect them at that point. DEC has done
	well with the Very Large Memory versions, and probably had the
	useful fringe benefit of getting people to clean up code earlier
	than they might have otherwise :-)

So: is the above:
	a) Crazed wishful thinking, lunatic-fringe stuff.
	b) Or something mainline, non-wacko enterprise data management folks
	   are sufficiently interested in to induce the leading DBMS companies
	   to not only port to 64-bit, but to do some re-architecting to
	   take advantage of very large memory pools [single address space,
	   since they don't feel like starting from scratch.]

Well, it turns out that b) is right, and this is for 1996.

By the way, it is also the case that some DBMS vendors' arms are being twisted
hard by certain vendors [who do not yet have 64-bit OSs, and who publicly
say that they are irrelevant] to do "hack-jobs" (their term)
on their DBMS to be able to use larger memories than a single process
can access :-)   Shades of 1976...  and also, of some of the addressing
changes done for IBM mainframes over the years.

	
-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311


Subject: Re: Speed, and its need ...
From: mash@mash.engr.sgi.com (John R. Mashey)
Date: May 30 1996
Newsgroups: sci.math.num-analysis,comp.arch

In article <31ac31cc.1632213@nntp.ix.netcom.com>, tewestb@ix.netcom.com
(Thomas Westberg) writes:

|> >I thought I answered that question in the comment quoted, but I guess not:
|> >let me try again: when people who sell/use 64-bit MIPS chips for embedded
|> >applications are willing to tell me what they're doing, and I ask why, they
|> >say NOT for the addressing, but to be conveniently able to push memory
|> >around faster, like doing logical operations on "long long", or of course,
|> >doing things like 64x64 -> 128 bits. 
|> 
|> No. That doesn't answer the question. If you're saying they need 64bit
|> registers just to do block moves efficiently, The bandwidth to the
|> D-cache is high, but if you're really pushing memory around, it's
|> going off-chip (to the graphics engine, in the N64). Then you mention
|> "long long" arithmetic operations, but without an example of what kind
|> of real-world data would be interesting to operate on 64 bits at a
|> time.

Suggestion: go back to BYTE magazine, September 1991 article on
64-bit, where this topic was covered fairly thoroughly, with the exception
that when I wrote it, the mention of cryptography was mostly on the theory
that NSA might buy some; it turns out more people are interested.

In order of apparent popularity, as filtered through several layers of
people, plus occasional email from direct users:
(1) Use of long long for data movement, including uncached load/stores
	on implementations that have 64-bit busses.
	This does not necessarily mean bcopy (which can, after all, be
	done using load-double/store-double FP, although in some environments
	people would rather not dirty the FP registers & incur additional
	context-switch overhead).
	Instructions: load64, store64 integer
(2) Use of long long for bit-vector algorithms, i.e., like many modern
optimizing compilers, but also, sometimes, for various storage-management,
and sometimes, certain image-processing codes. 
	Instructions: and, or, xor, not
(3) Use of long long for integer multiply/divides, i.e. like 256-bit or
	larger operations.  For example, if you need 64x64->128-bit,
	synthesizing that out of 32-bit multiplies takes 4 of them (plus adds).
	instructions: mul64, div64
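
To illustrate (3): a sketch of the standard way to synthesize a 64x64->128-bit
unsigned multiply out of 32x32->64 pieces, i.e., the four multiplies plus adds
mentioned above (not any particular library's routine):

	/* 64x64 -> 128-bit unsigned multiply built from four 32x32->64
	 * multiplies plus adds.  A 64-bit chip with a native 64-bit multiply
	 * does the equivalent in one or two instructions.
	 */
	typedef unsigned long long u64;   /* assumes 64-bit long long */
	typedef unsigned int       u32;   /* assumes 32-bit int       */

	void mul64x64_128(u64 a, u64 b, u64 *hi, u64 *lo)
	{
	    u32 a_lo = (u32)a, a_hi = (u32)(a >> 32);
	    u32 b_lo = (u32)b, b_hi = (u32)(b >> 32);

	    u64 p0 = (u64)a_lo * b_lo;    /* low  x low    */
	    u64 p1 = (u64)a_lo * b_hi;    /* low  x high   */
	    u64 p2 = (u64)a_hi * b_lo;    /* high x low    */
	    u64 p3 = (u64)a_hi * b_hi;    /* high x high   */

	    u64 mid = (p0 >> 32) + (u32)p1 + (u32)p2;   /* bits 32..63 plus carries */

	    *lo = (mid << 32) | (u32)p0;
	    *hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
	}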
 


|> To rephrase the question: if video games don't do 64 bit arithmetic
|> operations (in general), and don't need 64 bit address space, why is
|> that silicon real estate there? (In this case, I'm concentrating on
|> the Nintendo 64 system, but many other embedded applications will have
|> the same questions.)

One more time: in the R4000, a 64-bit integer datapath cost less than
5%; in the R4400, it was more like 2-3%, and in an R4300i,
it's hard to believe it's any more than that, and probably less.
No one in their right mind would create yet another architectural version
(i.e., R4K minus all 64-bit stuff) for a couple % of die space.

|> When the R4000 family was introduced, people asked in this group "who
|> needs 64 bits" and the answer was huge address space. This was

No, the answer was:
	(1) Big addressing
	(2) 64-bit integer operations
with (1) observed to be the more crucial, but (2) relevant to
performance for certain applications.
And if someone else gave another answer, they didn't explain it right,
or they didn't know.

|> I think this is the real reason: "it was already there." The R4200
|> chips are undoubtedly very fast, but if you offered an R3000-family
|> chip at the same clock rates and for a few dollars less, which do you
|> think would win?

It will depend on what you're doing. There are a large number of
variants of both 32- and 64-bit MIPS chips on the market in the embedded
space, and people pick the ones that fit best, and they have varying
different sets of criteria, so this part of the market demands a wide
range of variants. 32-bitters will be around a long time.
However:
	(a) R4X code can be denser than R3K code, even without using 64-bit
	integer operations. 
	(i.e., load-delay-slot nops disappeared, and there are additional instructions).
	(b) R4Xs can generate 64-bit cache transactions and 64-bit
	uncached external bus transactions on 64-bit busses.
	R3Ks can't, as they have no 64-bit integer load/stores.
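
A trivial sketch of the data-movement point in (1) and (b) above: the same
copy needs half the load/store operations when the compiler can map
"long long" onto 64-bit registers (alignment and tail handling omitted):

	/* Word-at-a-time copy loops: with a 64-bit integer type, each
	 * iteration is one 64-bit load plus one 64-bit store; with 32-bit
	 * words it takes twice as many.  Assumes 8-byte-aligned buffers and
	 * a length that is a multiple of 8.
	 */
	#include <stddef.h>

	void copy64(void *dst, const void *src, size_t nbytes)
	{
	    unsigned long long       *d = dst;
	    const unsigned long long *s = src;
	    size_t i;

	    for (i = 0; i < nbytes / 8; i++)
	        d[i] = s[i];                  /* load64 + store64 */
	}

	void copy32(void *dst, const void *src, size_t nbytes)
	{
	    unsigned int       *d = dst;
	    const unsigned int *s = src;
	    size_t i;

	    for (i = 0; i < nbytes / 4; i++)
	        d[i] = s[i];                  /* load32 + store32 */
	}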

As I noted in an earlier posting, I explicitly observed that people
were sometimes using embedded 64-bit chips for reasons other than the fact that
they were 64-bit [i.e., might have higher clock rates, might have more
integration, other features] ... but I continue to assert that some of
them indeed have a choice between 32- and 64-bit chips of the same family,
and that some of them have codes that can get performance advantages from
64-bit integers, and this factors into their decisions.

-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch,comp.benchmarks
Subject: Re: vlm for databases [was: Scalability (?) of new Sun E10000 ?]
Date: 12 Feb 1997 06:58:25 GMT

1) Disks are getting bigger fast, getting faster slowly, and so it is very
easy for disk latency to be the bottleneck.

2) In the history of computing, if you could afford memory, and if memory
solved your problem, it was A Good Thing.  If you could afford it, but
couldn't use it because you couldn't address it, it was A Bad Thing ...
and this happened with:
	S/360's 24-bit, then 31-bit addressing, with ESA being a
		complexification to get around the latter.
	PDP-11's 16-bit, with separate I&D to help, but not enough
	X86's 16-bit, with segments...

3) Data: the RDBMS vendors, some of whom employ some ex-IBM DB2
folks (who lived through complexities of ESA & multiple address spaces),
do not want to do it again.  2 of the RDBMS vendors already ship 64-bit (or are
in Beta) on DEC & SGI, and since very major vendor has 64-bit software
plans in their public roadmaps, the DBMS vendors are *not* interested
in retrofitting.  "Some vendors are here every week trying to twist our
arms to do a hack job on our database to fake 64-bit on 32-bit, and
we're not interested", a quote from a DBMS vendor located in California.

4) Strangely enough, some customers who are building large DBMS would
prefer to just do it with 64-bit so they don't have to change it later,
even if they don't need it yet.  The disk vendors are on a strong
4X/3 years (or better) density roll.



-- 
-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
EMAIL:  mash@sgi.com 
DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics/Cray Research 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94043-1389



From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch.arithmetic,comp.arch,comp.lang.c
Subject: Re: Arithmetic architecture [really: 64-bit; C; UNIX] [long]
Date: 17 Apr 1997 02:55:36 GMT
Keywords: LP64, long long, CPU architectures, UNIX

In article <5ivkph$k5e$1@goanna.cs.rmit.edu.au>,
ok@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

|> mash@mash.engr.sgi.com (John R. Mashey) writes:
|> 
|> >1970s:

|> >	- No 32-bit UNIX system was expected to support 16-bit binaries.

|> Didn't the VAX 11/780 have microcode to run PDP-11 programs?
|> I'm sure I remember reading that one of the games distributed
|> with 4.1bsd UNIX was actually a PDP-11 binary.

VAXen had such microcode, and I believe that was important for DEC
in the transition on VMS, and there was indeed a game or two...
(and I may have spent a cycle or two playing some :-)
Let's try the longer discussion of which this was a terse version:

	- When 32-bit minis appeared, everybody expected that UNIX
	applications would get recompiled for VAX (or 3B, or whatever).
	Nobody expected that running PDP11 UNIX binaries was required.
	(DEC's VMS did this for PDP-11 code, and I understand this was
	important in the conversion, but UNIX-land was mainly worried about
	making source code portable, not in running the binaries).
	In addition, nobody was expecting that, as APIs evolved,
	a VAX would be supporting:
		16-bit PDP-11 binaries, in each new API
		32-bit VAX binaries, in each new API
	(I.e., "support" above means much more than the conversion effort of
	running an existing set of of binaries, it means compiler/linker
	library support that continues....)
	Also "everybody has the source, don't they?" :-)

	Note the contrast with today: people expect that they have current
	compilers and libraries for both 32- and 64-bit models, and they
	can choose to compile one way or the other,
	or (if their code has this kind of portability), both ways
	(and some people do this already...).  They expect that, as
	standards and features evolve, the 32-bit
	environment gets carried along as a supported thing...

	This is a *much* more stringent requirement than the 1970s 16->32,
	and much more of the issue is OS support than whether or not the
	hardware can emulate the instructions, or run a few patched-old
	binaries.  

	Note: from public pronouncements, most major UNIX vendors are
	taking this strategy, with DEC being an exception, as noted earlier,
	due to the non-existence of a 32-bit UNIX-on-Alpha installed base.
	(But note 32 -> 64/32 evolution on VMS).  
	

-- 
-john mashey    DISCLAIMER: <generic disclaimer: I speak for me only...>
EMAIL:  mash@sgi.com  DDD:    415-933-3090	FAX: 415-967-8496
USPS:   Silicon Graphics/Cray Research 6L-005,
2011 N. Shoreline Blvd, Mountain View, CA 94043-1389

From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.std.c
Subject: Re: MS on IA64 has size_t > long
Date: 27 Jul 2000 16:54:48 GMT

In article <397F75E1.CB9C1DE4@tribble.com>, David R Tribble <david@tribble.com> writes:

|> That's the logical conclusion.  But that's not what Microsoft chose.
|> Win64 uses 32-bit ints and longs, and 64-bit pointers.
|>
|> > It would mean that Microsoft have chosen the wrong size for an int.
|>
|> Yep.  They claim that their reason for doing this was to facilitate
|> backward compatibility with existing object code.  Some of us would
|> say that the result just makes Win64 a backwards API.

This is inconsistent with:
	(a) Discussions with Microsoft since 1992.
	(b) The documentation that I have.
So, maybe "they" say something else, but the info I've got says otherwise.
This was discussed before, and I'll quote the same thing I quoted before,
"Win64(TM) Architectural Overview", Oscar Newkirk, Microsoft,
in Spring '99 Intel Developer Forum Conference, which is also quite
consistent with various private conversations.

In 1992, the people I talked to weren't sure what they were going to do
[this was at the 64-bit C working group meetings that tried to get us all
to do something consistent.] Recall that at least some people (like some
from Sun) preferred to keep long at 32 bits as well, but as the trend
developed toward LP64, they went with it as well.

At some point, Microsoft decided on their present course, which may or may not
be optimal, but I was told that they had looked at a lot of Win32 code,
and decided that in that world, unlike the Unix world, leaving long as 32 bits
would break less and make it easier to keep source compatibility.

There was a specific comment, that unlike UNIX, where people had mostly
had 32-bit ints for a long time, in a mixed Win16/Win32 world, people had
had 16-bit ints, and quite logically, had only used 32-bit longs where they
really needed to, and hence unadorned long had a stronger attachment to
32-bittedness than it did in UNIX-originated code.  This, of course,
was isomorphic to the situation in 1978-1979 inside Bell Labs, where code
needed to be source-portable among PDP-11s & VAXen; ints were 16-bits on the
former, 32-bits on the latter.  At that point, in practice, "long" meant
"I really need this to be 32-bits, or big enough to hold a pointer",
because it was serious overhead on the PDP-11s.
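
To make the data-model difference concrete: a tiny sketch, where the table in
the comment is the usual ILP32/LP64/LLP64 convention under discussion, not
output from any particular compiler:

	/* Sizes (bytes) that distinguish the models discussed above:
	 *                    int  long  long long  pointer
	 *   ILP32              4    4       8         4    (traditional 32-bit UNIX)
	 *   LP64               4    8       8         8    (most 64-bit UNIX systems)
	 *   LLP64 (Win64)      4    4       8         8    (long stays 32 bits)
	 */
	#include <stdio.h>

	int main(void)
	{
	    printf("int       : %u bytes\n", (unsigned)sizeof(int));
	    printf("long      : %u bytes\n", (unsigned)sizeof(long));
	    printf("long long : %u bytes\n", (unsigned)sizeof(long long));
	    printf("pointer   : %u bytes\n", (unsigned)sizeof(void *));
	    return 0;
	}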

Anyway, Microsoft may or may not care about portability, and they may or
may not have made the correct decision, but I do believe they thought
about what they were doing, and the decision was a reasoned tradeoff.
--
-John Mashey EMAIL:  mash@sgi.com  DDD: 650-933-3090 FAX: 650-933-2663
USPS:   SGI 1600 Amphitheatre Pkwy., ms. 562, Mountain View, CA 94043-1351
