From: firstname.lastname@example.org (John Mashey)
Subject: Re: The WIZ Processor
Date: 3 Jun 2004 14:48:04 -0700
"Tim Clacy" <email@example.com> wrote in message news:<firstname.lastname@example.org>...
> Stephen Sprunk wrote:
> > Memory latency is the single biggest problem facing computing today
> ? What about Energy efficiency, Scalability, Testability, Reliability,
> Security, Time-to-market
All of these are real problems [and energy (as for battery life) and
heat issues are rearing increasingly ugly heads], but latency seems a
more fundamental issue, as the others are mostly engineering problems,
and latency is (sometimes) set by those pesky laws of physics.
I repeat the old proverb:
"Money can buy bandwidth, but latency is forever."
(Note: in that context, it means first access to a random piece of
data, not the total latency of transferring a given set of data items,
for which better bandwidth is good.)
I would suggest that it isn't just DRAM latency; it's the larger
family of latencies:
a) On-chip memory latency
b) Off-chip (DRAM) latency.
c) Disk latency.
d) (maybe) other mass-storage latency, like tapes.
e) Local network latency.
f) Long-distance network latency.
In a), one might try Google: david patterson vector chip for
interesting work; at least, Moore's Law gives us bigger on-chip
memories for a while, and that helps.
In b) through f), bandwidth has been increasing faster than latency
has.
If at some point other storage media finally displace c) and d), that
part will improve. Note that the terrific increases in disk density
over the last decade have improved capacity, and bandwidth [since the
linear density improves], but the seek times haven't improved so much,
and the rotational delays have improved very little. Ok, maybe one
can go to multiple heads ... but that's been done more than once, and
it generally fails.
For e) and f), the speed of light = pesky irritant.
Even inside the computer room, 1 nanosec ~= 1 foot, at best, and this
started to matter years ago (to people like Seymour Cray). It matters
a lot to people who build big machines.
Outside the computer room:
I did some quick pings of European media sites not likely to be
mirrored here. Not a scientific result, but:
~5600 miles away, ~160ms round trip [~80ms one-way] representative.
Speed of light = 186,000 miles/sec; 5600/186000 = 30ms,
assuming we stay on the surface.
We *will* get 8X better bandwidth; that's economics and engineering.
We will *not* get 8X lower latency, i.e., 10ms.
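The arithmetic above can be checked back-of-envelope (a sketch; the
~5600-mile distance and ~160ms ping are the figures from this post):

```python
# Speed-of-light lower bound on latency, using the figures above.
C_MILES_PER_SEC = 186_000  # speed of light in vacuum, miles/sec
distance_miles = 5_600     # rough great-circle distance from the post

one_way_ms = distance_miles / C_MILES_PER_SEC * 1_000
round_trip_ms = 2 * one_way_ms
measured_ms = 160          # observed round-trip ping from the post

print(f"one-way floor:    {one_way_ms:.0f} ms")
print(f"round-trip floor: {round_trip_ms:.0f} ms")
print(f"measured is {measured_ms / round_trip_ms:.1f}x the physical floor")
```

Even staying on the surface at full vacuum lightspeed, the round trip
can never drop below ~60ms, so an 8X latency improvement (to 10ms) is
physically ruled out.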
I'd expand the old proverb to say:
"Money can buy bandwidth, compute cycles, and capacity ... but latency
is forever."
And that means we'll keep doing the same thing we always do, which is
trade some of what we can buy to avoid use of something we can't.
Hence, at the CPU level, we do caches, pre-fetch, out-of-order
execution, SMT, vector CPUs, etc, etc. At the network level, we use
caching and mirroring techniques as well.
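All of these tricks share one shape: spend something money can buy
(capacity) to avoid paying a latency money can't shrink. A minimal
sketch of the idea (the names and toy "slow store" are hypothetical,
just to make the trade concrete):

```python
# Trade capacity (cheap) for latency (expensive): keep a local cache in
# front of a slow store so repeated accesses skip the long round trip.

SLOW_STORE = {"x": 42, "y": 7}  # stands in for DRAM, disk, or a remote site
slow_accesses = 0               # counts trips over the high-latency path

def slow_fetch(key):
    """Simulates a high-latency access (cache miss, disk seek, WAN ping)."""
    global slow_accesses
    slow_accesses += 1
    return SLOW_STORE[key]

cache = {}  # local, fast, bought with capacity

def cached_fetch(key):
    if key not in cache:              # miss: pay the latency once...
        cache[key] = slow_fetch(key)
    return cache[key]                 # ...hits are served at local speed

for _ in range(1000):
    cached_fetch("x")

print(slow_accesses)  # only the first access took the slow path
```

The same pattern, at different scales, is a hardware cache line, a TLB
entry, or a mirrored web site.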
Contrary to Nicholas Capens' faith that SMT will save the day, I doubt
it.
Moderate threading seems a useful tool in the toolkit [this is being
typed on a machine with it], and is certainly useful for applications
that naturally have numerous independent threads. Parallelizing apps
across multiple CPUs is something the industry has plenty of
experience with, but that experience says it works very well for some
apps, and not at all for others.
From: Terje Mathisen <email@example.com>
Subject: Re: The WIZ Processor
Date: Fri, 04 Jun 2004 11:01:05 +0200
John Mashey wrote:
> Outside the computer room:
> I did some quick pings of European media sites not likely to be
> mirrored here.
> Not a scientific result, but:
> ~5600 miles away, ~160ms round trip [~80ms one-way] representative.
> Speed of light = 186,000 miles/sec; 5600/186000 = 30ms,
> assuming we stay on the surface.
Assuming you have to travel through fibers, instead of using free-space
optics, the speed of light will be roughly 33% less, since glass has a
refractive index of about 1.5.
This means that your speed of light calculation would be about 50 ms,
assuming optimal great circle routing, which you won't get.
I.e. at the very best, you can gain (say) 25% on those 80 ms numbers,
getting down to around 60 ms.
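The fiber correction is just a division by the refractive index (a
sketch reusing Mashey's 5600-mile figure; 1.5 is the approximate index
from this post):

```python
# One-way latency floor through fiber rather than vacuum.
C_MILES_PER_SEC = 186_000   # speed of light in vacuum, miles/sec
REFRACTIVE_INDEX = 1.5      # approximate value for glass fiber
distance_miles = 5_600      # distance figure from the earlier post

fiber_speed = C_MILES_PER_SEC / REFRACTIVE_INDEX   # ~124,000 miles/sec
one_way_ms = distance_miles / fiber_speed * 1_000

print(f"{one_way_ms:.0f} ms one-way through fiber")
```

This lands at roughly 45ms one-way, consistent with the rough ~50ms
round-trip-halved figure above, and well above the 10ms target.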
> We *will* get 8X better bandwidth; that's economics and engineering.
> We will *not* get 8X lower latency, i.e., 10ms.
We won't even get that within Norway, even though one vendor tried to
"almost all programming can be viewed as an exercise in caching"