Date: Sat, 20 Jan 2001 13:01:54 -0800 (PST)
From: Linus Torvalds <torvalds@transmeta.com>
Subject: Re: Is sendfile all that sexy?
Newsgroups: fa.linux.kernel
On 20 Jan 2001, Kai Henningsen wrote:
>
> Then again, I could easily see those I/O devices go the general embedded
> route, which in a decade or two could well mean they run some sort of
> embedded Linux on the controller.
>
> Which would make some features rather easy to implement.
I'm not worried about a certain class of features. I will predict, for
example, that disk subsystems etc will continue to get smarter, to the
point where most people will end up just buying a "file server" whenever
they buy a disk. THOSE kinds of features are the obvious ones when you
have devices that get smarter, and the kinds of features people are
willing to pay for.
The thing I find really doubtful is that somebody would be so silly as to
make the low-level electrical protocol be anything but a simple direct
point-to-point link. Shared buses just do not scale, and they also have
some major problems with true high-performance (GB/s) bandwidth.
Look at where ethernet is today. Ten years ago most people used it as a
bus. These days almost everybody thinks of ethernet as point-to-point,
with switches and hubs to make it look nothing like the bus of yore. You
just don't connect multiple devices to one wire any more.
The advantage of direct point-to-point links is that they're a hell of a
lot faster, and they're also much easier to distribute - the links don't
have to be in lock-step any more, etc. It's perfectly ok to have one
really high-performance link for devices that need it, and a few
low-performance links in the same system won't bog the fast one down.
But point-to-point also means that you don't get any real advantage from
doing things like device-to-device DMA. Because the links are
asynchronous, you need buffers in between them anyway, and there is no
bandwidth advantage to skipping the hub if the topology is a pretty
normal "star" kind of thing. And you _do_ want the star topology, because
in the end you want most of the bandwidth concentrated at the point that
uses it.
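
To make the hub-centric flow concrete: sendfile(2), the system call this
thread is named after, is the software-visible version of exactly this
pattern. Data moves disk -> page cache -> NIC with main memory as the
hub; the win is skipping the extra userspace copy, not skipping memory.
A minimal sketch (Linux; error handling kept short):

/* Stream an open file to a connected socket with sendfile(2): the
 * kernel moves pages disk -> page cache -> socket, so the data goes
 * through main memory (the hub) but never through a userspace buffer. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int send_file_to_socket(int sockfd, const char *path)
{
        struct stat st;
        off_t offset = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
                return -1;
        if (fstat(fd, &st) < 0) {
                close(fd);
                return -1;
        }
        while (offset < st.st_size) {
                ssize_t n = sendfile(sockfd, fd, &offset,
                                     st.st_size - offset);
                if (n <= 0) {           /* error or truncated file */
                        close(fd);
                        return -1;
                }
        }
        close(fd);
        return 0;
}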
The exception to this no-device-to-device rule will be when you have
smart devices that _internally_ also have the same kind of structure -
say, a RAID device with multiple disks in a star around the RAID
controller. Then you'll find the RAID controller doing RAID rebuilds etc
without the data ever coming off that "local star" - but this is not
something that the OS will even get involved in, other than sending the
RAID controller the command to start the rebuild. It's not a
"device-device" transfer in that bigger sense - it's internal to the
RAID unit.
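
To illustrate how little the OS would see of such a rebuild, the entire
kernel-visible transaction could be a single command handed to the
controller. Everything below - the device node, the struct, the ioctl
number - is invented for illustration:

#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical "start rebuild" command: the OS names the array and the
 * spare, and the controller moves every byte on its internal star. */
struct raid_rebuild_cmd {
        unsigned int unit;      /* which array on this controller     */
        unsigned int spare;     /* which member disk to rebuild onto  */
};

#define RAIDCTL_START_REBUILD _IOW('r', 1, struct raid_rebuild_cmd)

static int start_rebuild(unsigned int unit, unsigned int spare)
{
        struct raid_rebuild_cmd cmd = { .unit = unit, .spare = spare };
        int fd = open("/dev/raidctl0", O_RDWR);   /* invented node */
        int ret;

        if (fd < 0)
                return -1;
        ret = ioctl(fd, RAIDCTL_START_REBUILD, &cmd);
        close(fd);
        return ret;     /* no data ever crosses this host's links */
}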
Just wait. My crystal ball is infallible.
Linus
Date: Sat, 20 Jan 2001 16:33:20 -0800 (PST)
From: Linus Torvalds <torvalds@transmeta.com>
Subject: Re: Is sendfile all that sexy?
Newsgroups: fa.linux.kernel
On Sat, 20 Jan 2001, Roman Zippel wrote:
>
> On Sat, 20 Jan 2001, Linus Torvalds wrote:
>
> > But point-to-point also means that you don't get any real advantage from
> > doing things like device-to-device DMA. Because the links are
> > asynchronous, you need buffers in between them anyway, and there is no
> > bandwidth advantage of not going through the hub if the topology is a
> > pretty normal "star" kind of thing. And you _do_ want the star topology,
> > because in the end most of the bandwidth you want concentrated at the
> > point that uses it.
>
> I agree, but who says that the buffer always has to be main memory?
It doesn't _have_ to be.
But think like a good hardware designer.
In 99% of all cases, where do you want the results of a read to end up?
Where do you want the contents of a write to come from?
Right. Memory.
Now, optimize for the common case. Make the common case go as fast as you
can, with as little latency and as high bandwidth as you can.
What kind of hardware would _you_ design for the point-to-point link?
I'm claiming that you'd do a nice DMA engine for each link point. There
wouldn't be any reason to have any other buffers (except, of course,
minimal buffers inside the IO chip itself - not for the whole packet,
but just enough to handle cases where you don't have 100% access to the
memory bus all the time, and for doing things like burst reads and
writes to memory).
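
A sketch of what the software interface to such a per-link engine might
look like, with every name invented for illustration: each link gets its
own descriptor ring, and the only endpoint a descriptor can name is a
main-memory address - there is deliberately no "peer device" field.

#include <stdint.h>

struct link_dma_desc {
        uint64_t mem_addr;   /* main-memory source/target (bus address) */
        uint32_t len;        /* transfer length in bytes                */
        uint32_t flags;      /* direction, interrupt-on-completion, ... */
};

#define LINK_DMA_RING_SIZE 256

struct link_dma_ring {
        struct link_dma_desc desc[LINK_DMA_RING_SIZE];
        uint32_t head;       /* next slot the driver will fill    */
        uint32_t tail;       /* next slot the engine will fetch   */
};

/* Queue one memory<->device transfer on this link's engine. */
static int link_dma_queue(struct link_dma_ring *ring,
                          uint64_t mem_addr, uint32_t len, uint32_t flags)
{
        uint32_t next = (ring->head + 1) % LINK_DMA_RING_SIZE;

        if (next == ring->tail)
                return -1;                      /* ring full */
        ring->desc[ring->head] = (struct link_dma_desc){
                .mem_addr = mem_addr,
                .len      = len,
                .flags    = flags,
        };
        ring->head = next;
        return 0;
}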
I'm _not_ seeing the point of giving a high-performance link a generic
packet buffer.
Linus