Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Page Colouring (was: 2.6.0 Huge pages not working as expected)
Original-Message-ID: <Pine.LNX.4.58.0312261515550.14874@home.osdl.org>
Date: Fri, 26 Dec 2003 23:34:55 GMT
Message-ID: <fa.ifgbhjo.1j0mai8@ifi.uio.no>
On Fri, 26 Dec 2003, Anton Ertl wrote:
> Linus Torvalds <torvalds@osdl.org> writes:
> >
> >And what you are seeing is likely the fact that random placement is
> >guaranteed to not have any worst-case behaviour.
>
> You probably just put the "not" in the wrong place, but just in case
> you meant it: Random replacement does not give such a guarantee.
No, I meant what I said.
Random placement is the _only_ algorithm guaranteed to have no
pathological worst-case behaviour.
> You
> can get the same worst-case behaviour as with page colouring, since
> you can get the same mapping. It's just unlikely.
"pathological worst-case" is something that is repeatable. For example,
the test-case above is a pathological worst-case scenario for a
direct-mapped cache.
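Linus's distinction can be made concrete with a toy model (the cache size and line size here are invented for illustration, not taken from the thread): in a direct-mapped cache, accesses at a power-of-two stride equal to the cache size all map to the same set, so the worst case recurs on every run.

```python
# Toy model of a direct-mapped cache: each address maps to exactly one set.
# Parameters are assumptions for illustration only.
CACHE_SIZE = 1 << 20          # 1 MiB cache (assumed)
LINE_SIZE = 64                # 64-byte lines (assumed)
NUM_SETS = CACHE_SIZE // LINE_SIZE

def cache_set(addr):
    # Direct-mapped: the set index is just the address bits above the line offset.
    return (addr // LINE_SIZE) % NUM_SETS

# Eight buffers laid out exactly CACHE_SIZE apart all collide in one set:
# a repeatable, pathological worst case.
stride_sets = {cache_set(i * CACHE_SIZE) for i in range(8)}
print(stride_sets)            # all eight addresses land in set 0
```

With random placement the same collision can still occur, but only by chance on a given run, which is why it does not count as "pathological" in the repeatable sense used here.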
> Well, even if, on average, it has no performance impact,
> reproducibility is a good reason to like it. Is it good enough to
> implement it? I'll leave that to you.
Well, since random (or, more accurately in this case, "pseudo-random") has
a number of things going for it, and is a lot faster and cheaper to
implement, I don't see the point of cache coloring.
That's doubly true since any competent CPU will have at least four-way
associativity these days.
> However, the main question I want to look at here is: Does it improve
> performance, on average? I think it does, because of spatial
> locality.
Hey, the discussion in this case showed how it _deproves_ performance (at
least if my theory was correct - and it should be easily testable and I
bet it is).
Also, the work has been done to test things, and cache coloring definitely
makes performance _worse_. It does so exactly because it artificially
limits your page choices, causing problems at multiple levels (not just at
the cache, like this example, but also in page allocators and freeing).
So basically, cache coloring results in:
- some nice benchmarks (mainly the kind that walk memory very
predictably, notably FP kernels)
- mostly worse performance in "real life"
- more complex code
- much worse memory pressure
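The "limits your page choices" point can be sketched numerically (the color count and sizes below are illustrative assumptions, not kernel values): because a page's color is fixed by its physical frame number, a request for one specific color can only be served from a small fraction of free memory.

```python
# Sketch: why coloring shrinks the allocator's choices. A page's "color"
# is determined by its physical frame number, so only roughly
# 1/NUM_COLORS of the free pages can satisfy a given request.
# All parameters here are assumptions for illustration.
PAGE_SHIFT = 12                         # 4 KiB pages
CACHE_SIZE = 1 << 20                    # 1 MiB direct-mapped cache (assumed)
NUM_COLORS = CACHE_SIZE >> PAGE_SHIFT   # 256 colors

def page_color(pfn):
    # The color is the page frame number modulo the number of colors.
    return pfn % NUM_COLORS

free_pfns = list(range(10_000))
usable = [p for p in free_pfns if page_color(p) == 7]
print(len(free_pfns), len(usable))      # only ~1/256 of free pages qualify
```

That shrunken pool is exactly what bites the page allocator and the freeing paths, independent of whether the application itself cares about cache layout.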
My strong opinion is that it is worthless except possibly as a performance
tuning tool, but even there the repeatability is a false advantage: if you
do performance tuning using cache coloring, there is nothing that
guarantees that your tuning was _correct_ for the real world case.
In short, you may be doing your performance tuning such that it tunes for
or against one of the (known) pathological cases of the layout, nothing
more.
But hey, some people disagree with me. That's their right. It's not
unconstitutional to be wrong ;)
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Page Colouring (was: 2.6.0 Huge pages not working as expected)
Original-Message-ID: <Pine.LNX.4.58.0312271245370.2274@home.osdl.org>
Date: Sat, 27 Dec 2003 20:58:27 GMT
Message-ID: <fa.j4sarld.1dialgt@ifi.uio.no>
On Sat, 27 Dec 2003, Anton Ertl wrote:
>
> And you probably mean "repeatable every time". Ok, then a random
> scheme has, by your definition, no pathological worst case. I am not
> sure that this is a consolation when I happen upon one of its
> unpredictable and unrepeatable worst cases.
Those "unpredictable" cases are so exceedingly rare that they aren't worth
worrying about.
> The points are:
>
> - repeatability
> - predictability
> - better average performance (you dispute that).
I absolutely dispute that.
And you should realize that I do not dispute it because the applications
themselves would run slower with cache coloring. Most applications don't
much care, they either fit in the cache, or the cache misses have random
enough access patterns that cache layout doesn't much matter.
The test code in question is an anomaly, and doesn't matter. It's exactly
the same kind of strange case that sometimes shows cache coloring making a
huge difference.
The real degradation comes in just the fact that cache coloring itself is
often expensive to implement and causes nasty side effects like bad memory
allocation patterns, and nasty special cases that you have to worry about
(i.e., special fallback code on non-colored pages when required).
That expense is both a run-time expense _and_ a conceptual one (a
conceptual expense is something that complicates the internal workings of
the allocator so much that it becomes harder to think about and more
bug-prone). So far nobody has shown a reasonable way to do it without
either of the two.
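The "special fallback code" mentioned above can be sketched as a toy allocator (entirely hypothetical, nothing like real kernel code): per-color free lists plus a fallback path that hands out the wrong color when the wanted one runs dry.

```python
# Toy sketch of the structural cost of coloring: per-color free lists
# plus a fallback path. Hypothetical illustration, not kernel code.
from collections import defaultdict

class ColoredAllocator:
    def __init__(self, pfns, num_colors=4):
        self.num_colors = num_colors
        self.free = defaultdict(list)
        for pfn in pfns:
            self.free[pfn % num_colors].append(pfn)

    def alloc(self, want_color):
        if self.free[want_color]:
            return self.free[want_color].pop()
        # Fallback: hand out any color at all. This is the special case
        # that complicates every caller and quietly voids the very
        # guarantee that coloring was supposed to provide.
        for c in range(self.num_colors):
            if self.free[c]:
                return self.free[c].pop()
        return None

a = ColoredAllocator(range(8), num_colors=4)
# Drain color 0, then watch the fallback hand out a different color.
print(a.alloc(0), a.alloc(0), a.alloc(0) % 4)   # prints: 4 0 1
```

The run-time expense is the extra bookkeeping; the conceptual expense is that every caller now has to reason about what happens when the fallback fires.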
> Anyway, back to the performance effects of page colouring: Yes, there
> are cases where it is not beneficial, and the huge-2^n-stride cases in
> examples like the one above are one of them, but I don't think that
> this is the kind of "real life" application that you mention
> elsewhere, or is it?
The "real life" application is something like running a normal server or
desktop, and having the cache coloring code _itself_ be the performance
problem.
It's not that "apache" minds very much. Or "mozilla". They simply don't
care. The problem is the algorithm itself.
> >Also, the work has been done to test things, and cache coloring definitely
> >makes performance _worse_. It does so exactly because it artificially
> >limits your page choices, causing problems at multiple levels (not just at
> >the cache, like this example, but also in page allocators and freeing).
>
> Sorry, I am not aware of the work you are referring to. Where can I
> read more about it? Are you sure that these are fundamental problems
> and not just artifacts of particular implementations?
Hey, there have been at least four different major cache coloring trials
for the kernel over the years. This discussion has been going on since the
early nineties. And _none_ of them have worked well in practice.
In other words, "artifacts of the particular implementation" is certainly
right, but the point I have is that the only thing that _matters_ is
implementation. You can argue about theory all you like, I won't care
until you show me an implementation that works and is robustly better.
And it has to be better on average on _everything_ that Linux supports,
not just one particular braindamaged piece of hardware. I'm totally not
interested in something that makes performance on most machines go down,
if it then improves one or two braindead setups with direct-mapped caches.
Basically: prove me wrong. People have tried before. They have failed.
Maybe you'll succeed. I doubt it, but hey, I'm not stopping you.
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Page Colouring (was: 2.6.0 Huge pages not working as expected)
Original-Message-ID: <Pine.LNX.4.58.0312272046400.2274@home.osdl.org>
Date: Sun, 28 Dec 2003 04:56:32 GMT
Message-ID: <fa.j5rorle.1fi0lgu@ifi.uio.no>
On Sat, 27 Dec 2003, Eric W. Biederman wrote:
> Linus Torvalds <torvalds@osdl.org> writes:
> >
> > Basically: prove me wrong. People have tried before. They have failed.
> > Maybe you'll succeed. I doubt it, but hey, I'm not stopping you.
>
> For anyone taking you up on this I'd like to suggest two possible
> directions.
>
> 1) Increasing PAGE_SIZE in the kernel.
Yes. This is something I actually want to do anyway for 2.7.x. Dan
Phillips had some patches for this six months ago.
You have to be careful, since you have to be able to mmap "partial pages",
which is what makes it less than trivial, but there are tons of reasons to
want to do this, and cache coloring is actually very much a secondary
concern.
> 2) Creating zones for the different colors. Zones were not
> implemented last time, this was tried.
Hey, I can tell you that you _will_ fail.
Zones are actually a wonderful example of the kinds of problems you get
into when you have pages of different types aka "colors". We've had
nothing but trouble trying to balance different zones against each other,
and those problems were in fact _the_ reason for 99% of all the VM
problems in 2.4.x.
Trying to use them for cache colors would be "interesting".
Not to mention that it's impossible to coalesce pages across zones.
> Both of those should be minimal impact to the complexity
> of the current kernel.
Minimal? I don't think so. Zones are basically impossible, and page size
changes will hopefully happen during 2.7.x, but not due to page coloring.
> I don't know where we will wind up, but the performance variations
> caused by cache conflicts in today's applications are real and easily
> measurable. Given the growing performance gap between CPUs and
> memory, Amdahl's Law shows this will only grow, so I think this is
> worth looking at.
Absolutely wrong.
Why? Because the fact is that as memory gets further and further away
from CPUs, caches have gotten further and further away from being direct
mapped.
Cache coloring is already a very questionable win for four-way
set-associative caches. I doubt you can even _see_ it for eight-way or
higher associativity caches.
In other words: the pressures you mention clearly do exist, but they are
all driving direct-mapped caches out of the market, and thus making page
coloring _less_ interesting rather than more.
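The associativity argument can be checked with a toy cache simulator (all sizes invented): four hot lines that alias the same index thrash a direct-mapped cache, yet sit side by side in a single 4-way set, leaving coloring nothing to fix.

```python
# Toy set-associative cache with LRU replacement, illustrative sizes only.
def misses(addresses, num_sets, ways):
    sets = [[] for _ in range(num_sets)]   # each set: up to `ways` tags, LRU first
    miss = 0
    for a in addresses:
        s, tag = a % num_sets, a // num_sets
        if tag in sets[s]:
            sets[s].remove(tag)            # hit: move tag to MRU position
        else:
            miss += 1
            if len(sets[s]) == ways:
                sets[s].pop(0)             # evict least recently used
        sets[s].append(tag)
    return miss

NUM_SETS = 256
hot = [i * NUM_SETS for i in range(4)] * 100   # 4 aliasing lines, reused 100x

print(misses(hot, NUM_SETS, ways=1))       # direct-mapped: every access misses
print(misses(hot, NUM_SETS // 4, ways=4))  # 4-way: only the 4 cold misses
```

With the same total capacity, the direct-mapped cache misses on all 400 accesses while the 4-way cache misses only 4 times, so the conflicts that coloring targets simply evaporate as associativity rises.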
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Page Colouring (was: 2.6.0 Huge pages not working as expected)
Original-Message-ID: <Pine.LNX.4.58.0312282000390.11299@home.osdl.org>
Date: Mon, 29 Dec 2003 04:10:37 GMT
Message-ID: <fa.ihg3frn.1n0e8ad@ifi.uio.no>
On Sun, 28 Dec 2003, William Lee Irwin III wrote:
>
> Doubtful. I suspect he may be referring to pgcl (sometimes called
> "subpages"), though Dan Phillips hasn't been involved in it. I guess
> we'll have to wait for Linus to respond to know for sure.
I didn't see the patch itself, but I spent some time talking to Daniel
after your talk at the kernel summit. At least I _think_ it was him I was
talking to - my memory for names and faces is basically zero.
Daniel claimed to have it working back then, and that it actually shrank
the kernel source code. The basic approach is to just make PAGE_SIZE
larger, and handle temporary needs for smaller subpages by just
dynamically allocating "struct page" entries for them. The size reduction
came from getting rid of the "struct buffer_head", because it ends up
being just another "small page".
Daniel, details?
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: 2.6.0 Huge pages not working as expected
Original-Message-ID: <Pine.LNX.4.58.0312261956510.14874@home.osdl.org>
Date: Sat, 27 Dec 2003 04:03:18 GMT
Message-ID: <fa.ifgbijp.1j06bib@ifi.uio.no>
On Sat, 27 Dec 2003, Andrea Arcangeli wrote:
>
> well, at least on the alpha the above mode = 1 is reproducibly a lot
> better (we're talking about a wall time 2/3 times shorter IIRC) than
> random placement. The L2 is huge and one-way set associative,
What kind of strange and misguided hw engineer did that?
I can understand a one-way L1, simply to keep the cycle time low, but
what's the point of a one-way L2? Braindead external cache controller?
> The current patch is for 2.2 with a horrible API (it uses a kernel
> module to set those params instead of a sysctl, even though all the real
> code is linked into the kernel). While developing it I only focused on
> the algorithms and the final behaviour in production. The engine that
> asks the allocator for a page of the right color works in O(1) with the
> number of free pages, and it's from Jason.
Does it keep fragmentation down?
That's the problem that Davem had in one of his cache-coloring patches: it
worked well enough if you had lots of memory, but it _totally_ broke down
when memory was low. You couldn't allocate higher-order pages at all after
a while because of the fragmented memory.
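The fragmentation failure mode described here can be sketched with a toy model (8 colors and 64 page frames are invented numbers): once allocation is steered by color, the frames left free are scattered, and no contiguous 2^order run survives for a higher-order request.

```python
# Toy sketch of color-driven fragmentation. Invented sizes: 8 colors,
# 64 physical page frames. Not a model of any particular patch.
NUM_COLORS = 8

def max_contiguous(pfns):
    """Length of the longest run of physically adjacent free page frames."""
    best, run, prev = 0, 0, None
    for p in sorted(pfns):
        run = run + 1 if prev is not None and p == prev + 1 else 1
        best, prev = max(best, run), p
    return best

# Under memory pressure, suppose every color except color 3 has been
# allocated; only frames with pfn % 8 == 3 remain free.
free = [p for p in range(64) if p % NUM_COLORS == 3]

print(len(free), max_contiguous(free))
# 8 frames are still free, yet the longest contiguous run is 1 page,
# so even an order-1 (two-page) allocation fails despite free memory.
```

That is the breakdown described above: plenty of free pages by count, but none of the contiguity that higher-order allocations depend on.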
Linus