From: mash@mash.engr.sgi.com (John R. Mashey)
Newsgroups: comp.arch
Subject: Re: RISC and CISC
Date: 13 Oct 1995 19:14:43 GMT
Organization: Silicon Graphics, Inc.

In article <45lv7b$g7b@senator-bedfellow.MIT.EDU>, jfc@mit.edu (John Carr) writes:

|> A comparable instruction on HP's RISC was defended here a few years ago
|> as a good example of the RISC design philosophy.  HP has BCD-correct
|> instructions in hardware.  The instructions are justified because the
|> architecture was designed to support business applications (unlike more
|> modern designs which have been targeted at scientific or general purpose
|> computing).  If the processor spends 20% of its time doing decimal
|> add/subtract/multiply it makes sense to speed up those operations.
|> 
|> Machines are faster now, and the British use boring decimal currency
|> like everyone else, so I doubt that either instruction would make
|> sense in a new design.

I would defend HP's inclusion of the instructions that they did
(this is not to say that everybody should do it).  They have a couple of
register-register instructions that help do decimal arithmetic; they do *not*
have 2-operand, memory-to-memory, variable-length decimal instructions
like S/360, or the 3-operand (+ 3 lengths) instructions of the VAX.
They had plenty of COBOL use, and looked carefully at their statistics,
and included reasonable RISCy instructions to do it.   MIPS looked at the
same things, did not expect as much COBOL, had unaligned load/store
instructions that helped, and consciously omitted such instructions
... at least, over
beers in Cupertino, a table of MIPSers and HPers some years ago compared
notes about why we'd done things differently & concluded that both were
rational, and simply had different priorities and tradeoffs.
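
For concreteness, here is a minimal sketch (mine, in C, not from the
original post) of what a packed-decimal add costs with no decimal help
at all.  The bias-and-correct fixup is the part a register-register
decimal-correct helper like PA-RISC's DCOR collapses; the function name
and the 8-digit width are illustrative assumptions.

	#include <stdint.h>

	/* Add two 8-digit packed-BCD operands, one decimal digit per
	   nibble.  Classic bias-and-correct: adding 6 to every digit makes
	   any digit-sum >= 10 produce a binary carry out of its nibble;
	   digits that did not carry end up 6 too big and are corrected
	   back.  Assumes the true sum fits in 8 digits. */
	uint32_t bcd_add8(uint32_t a, uint32_t b)
	{
	    uint64_t t1 = (uint64_t)a + 0x66666666u;      /* bias each digit by 6 */
	    uint64_t t2 = t1 + b;                         /* one binary add */
	    uint64_t c  = (t1 ^ b ^ t2) & 0x111111110ull; /* per-nibble carry-outs */
	    uint64_t fix = (~c & 0x111111110ull) >> 2;
	    fix |= fix >> 1;           /* 6 in every nibble that did not carry */
	    return (uint32_t)(t2 - fix);
	}

Even the add is a half-dozen operations per word; the memory-to-memory,
variable-length forms of S/360 and VAX sit at the far other end of the
spectrum.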

In my opinion, the most common reasonable reason for including some
instructions is:
	a) There is a datatype deemed worth supporting, i.e., whose use
	burns an interesting number of cycles, either because:
		1) It gets used often enough, OR
		2) It is used less often, but is really expensive.
	b) There is a reasonable hardware cost to support the datatype.
	c) Simulating the use of the datatype with the existing instructions
	   is noticeably expensive.
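
As a back-of-envelope on a): plug in the 20% figure from the quoted
article, plus an assumed (illustrative, not measured) 4x hardware
speedup on the decimal operations themselves; Amdahl's law then caps
the whole-machine gain.

	#include <stdio.h>

	/* Amdahl's law: if fraction f of all cycles goes to the datatype's
	   operations and hardware speeds those operations up by factor s,
	   the machine as a whole speeds up by 1/((1-f) + f/s). */
	static double overall(double f, double s)
	{
	    return 1.0 / ((1.0 - f) + f / s);
	}

	int main(void)
	{
	    printf("%.3f\n", overall(0.20, 4.0));  /* prints 1.176 */
	    return 0;
	}

About an 18% faster machine overall: worth real silicon if COBOL pays
the bills, and worth nothing if f is near zero.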

In practice, b)+c) usually mean that the datatype operations can obtain
useful parallelism in hardware, that cannot be gotten at by sequences of
the existing instructions.  IF it is difficult to beat sequences of
existing instructions in speed, then only if those sequences are really
frequent would one add new instructions, i.e., for code density.

This is why:
	a) Some 32-bit RISCs included some decimal-help instructions,
	   if COBOL was especially important.
	b) But new 64-bit ones (Alpha) don't: 64-bit integer arithmetic is
	   big enough.
	c) If people care about floating-point, they have FP hardware;
	   they do not try simulating it with integer instructions.
	d) People build multi-media instructions, i.e., ones that operate
	   in parallel on several narrow datatypes packed into a register
	   (see the sketch after this list).
BUT
	e) RISCs don't usually have a VAX CALLG (general subroutine call),
	   which is difficult to get parallelism on.
	f) RISCs don't have S/360 Translate-and-test, which is also
	   difficult to get parallelism on.
	g) RISCs don't have hordes of complex addressing modes, as in the
	   MC68020, where you discover that they burn lots of cycles anyway
	   if you read the timing charts.
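
To make d) concrete, here is a sketch (mine, not from the post) of four
byte-wide adds done with ordinary 32-bit integer instructions.  The lane
masking is exactly the overhead a partitioned multi-media add, e.g. HP's
MAX extensions, removes in hardware; that parallelism is what criteria
b)+c) are after.

	#include <stdint.h>

	/* Four independent 8-bit adds packed into one 32-bit word, using
	   only ordinary integer instructions.  The masks keep a carry in
	   one byte lane from spilling into the next; a hardware
	   partitioned add gets the same four results in one operation
	   with no masking at all. */
	uint32_t add4x8(uint32_t a, uint32_t b)
	{
	    uint32_t low7 = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* low 7 bits per lane */
	    return low7 ^ ((a ^ b) & 0x80808080u); /* top bit per lane; carries stay put */
	}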



-john mashey    DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP:    mash@sgi.com 
DDD:    415-390-3090	FAX: 415-967-8496
USPS:   Silicon Graphics 6L-005, 2011 N. Shoreline Blvd, Mountain View, CA 94039-7311
