Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Patch 4/6 randomize the stack pointer
Original-Message-ID: <Pine.LNX.4.58.0501271010130.2362@ppc970.osdl.org>
Date: Thu, 27 Jan 2005 18:19:25 GMT
Message-ID: <fa.gstdv66.aikr1i@ifi.uio.no>
On Thu, 27 Jan 2005, John Richard Moser wrote:
>
> What the hell?
John. Stop frothing at the mouth already!
Your suggestion of 256MB of randomization for the stack SIMPLY IS NOT
ACCEPTABLE for a lot of uses. People on 32-bit architectures have issues
with usable virtual memory areas etc.
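A back-of-the-envelope sketch of why the window size matters on 32-bit (illustrative numbers only, not from the thread; real address-space layouts vary by kernel and configuration):

```python
# Rough arithmetic: how much of a 32-bit user address space a stack
# randomization window eats (illustrative; actual layouts vary).
MB = 1024 * 1024
GB = 1024 * MB

user_va = 3 * GB  # typical 3G/1G user/kernel split on 32-bit x86

for window in (8 * MB, 256 * MB):
    pct = 100 * window / user_va
    print(f"{window // MB:4d} MB window = {pct:.2f}% of user address space")
```

On 64-bit, the same 256 MB is a vanishingly small fraction of the address space, which is why the objection is specific to 32-bit.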
> Red Hat is all smoke and mirrors anyway when it comes to security, just
> like Microsoft. This just reaffirms that.
No. This just re-affirms that you are an inflexible person who cannot see
the big picture. You concentrate on your issues to the point where
everybody else's issues don't matter to you at all. That's a bad thing, in
case you haven't realized.
Intelligent people are able to work constructively in a world with many
different (and often contradictory) requirements.
A person who cannot see outside his own sphere of interest can be very
driven, and can be very useful - in the "please keep uncle Fred tinkering
in the basement, but don't show him to any guests" kind of way.
I have a clue for you: until PaX people can work with the rest of the
world, PaX is _never_ going to matter in the real world. Rigidity is a
total failure at all levels.
Real engineering is about doing a good job balancing different issues.
Please remove me from the Cc when you start going off the deep end, btw.
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Patch 4/6 randomize the stack pointer
Original-Message-ID: <Pine.LNX.4.58.0501271018510.2362@ppc970.osdl.org>
Date: Thu, 27 Jan 2005 18:30:05 GMT
Message-ID: <fa.gut9v6e.8igr1q@ifi.uio.no>
On Thu, 27 Jan 2005, Linus Torvalds wrote:
>
> Real engineering is about doing a good job balancing different issues.
Btw, this is true of real security too.
Being too strict "because it's the secure way" just means that people will
disable you altogether, or start doing things that they know are wrong,
because the right way of doing things may be secure, but it is also very
inconvenient.
Thus a security person who doesn't take other aspects into account is
actually HURTING security by insisting on things that may not be practical
for a general vendor.
I've seen companies that had very strict firewalls in place, and didn't
allow people to upload any internal data except by going through approved
sites and having the data approved first too. Secure? No. I was told people
just connected modems to their local machines in their offices instead:
the security measures didn't work for them, so they had to effectively
disable them entirely. Everybody knew what was going on, but the security
people were pig-headed idiots.
It's a classic mistake of doing totally the wrong thing, and I bet the
pig-headed idiots felt very good about themselves: they had the perfect
excuse for doing something stupid. Namely "we only implement the _best_
security we can do, and we refuse to do anything inferior". It's also a
classic example of perfect being the enemy of good.
So John - next time you flame somebody, ask yourself whether maybe they
had other issues. Maybe a vendor might care about not breaking existing
programs, for example? Maybe a vendor knows that their users don't just
use the programs _they_ provide (and test), but also use their own
programs or programs that they got from the outside, and the vendor cannot
test. Maybe such a vendor understands that you have to ease into things,
and you can't just say "this is how it has to be done from now on".
Linus
Newsgroups: fa.linux.kernel
From: Linus Torvalds <torvalds@osdl.org>
Subject: Re: Patch 4/6 randomize the stack pointer
Original-Message-ID: <Pine.LNX.4.58.0501271121020.2362@ppc970.osdl.org>
Date: Thu, 27 Jan 2005 19:34:45 GMT
Message-ID: <fa.gsddve7.a2krpj@ifi.uio.no>
On Thu, 27 Jan 2005, John Richard Moser wrote:
>
> > Your suggestion of 256MB of randomization for the stack SIMPLY IS NOT
> > ACCEPTABLE for a lot of uses. People on 32-bit architectures have issues
> > with usable virtual memory areas etc.
>
> It never bothered me on my Barton core or Thoroughbred, or on the Duron,
> or the Thoroughbred downstairs.
Me, me, me, me! "I don't care about anybody else, if it works for me it
must work for everybody else too".
See a possible logical fallacy there somewhere?
The fact is, different people have different needs. YOU only need to care
about yourself. That's not true for a vendor. A single case that doesn't
work ends up either (a) being ignored or (b) costing them money. See the
problem? They can't win. Except by taking small steps, where the breakage
is hopefully small too - and more importantly, because it's spread out
over time, you hopefully know what broke it.
And when I say RH, I mean "me". That's the reason I personally hate
merging "D-day" things where a lot of things change. I much prefer merging
individual changes in small pieces. When things go wrong - and they will -
you can look at the individual pieces and say "ok, it's definitely not
that one" or "Hmm.. unlikely, but let's ask the reporter to check that
thing anyway" or "ok, that looks suspicious, let's start from there".
So for example, 3GB of virtual space is enough for most things. In fact,
just 1GB is plenty for 99% of all things. But some programs will break,
and they can break in surprising ways. Like "my email indexing stopped
working" - because my combined mailboxes are currently 2.8GB, and it
slurps them all in in one go to speed things up.
(That wasn't a made-up-example, btw. I had to write this stupid email
searcher for the SCO subpoena, and the fastest way was literally to index
everything in memory. Thank gods for 64-bit address spaces, because I
ended up avoiding having to be incredibly careful by just doing it on
another machine instead).
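The mailbox example works out numerically like this (a sketch using the 2.8GB figure from the mail and an assumed 3GB user address space):

```python
# Why a 2.8 GB in-memory index squeaks by (or doesn't) on 32-bit,
# but is a non-issue on 64-bit. Figures are illustrative.
GB = 1024 ** 3

user_va_32 = 3 * GB       # common 32-bit user/kernel split
mailboxes = 2.8 * GB      # data slurped in at once

headroom = user_va_32 - mailboxes
print(f"32-bit headroom: {headroom / GB:.2f} GB")  # barely room left for
                                                   # heap, stacks, libraries
```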
Linus
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Control Kernel
Date: Sun, 30 Sep 2007 19:08:30 UTC
Message-ID: <fa.HN2dlY9YaeqmG+4ImZKxVx/ARc0@ifi.uio.no>
On Sun, Sep 30, 2007 at 07:39:57PM +0200, Andi Kleen wrote:
> > CIPSO also lets systems like SELinux and SMACK talk to other trusted
> > systems (eg., trusted solaris) in a way they understand.
>
> Perhaps, but is the result secure? I have severe doubts.
As always, it depends on your environment. There are people who are
using Linux systems where trusting the network makes sense. For
example, if you're on a battleship, say, or all of the machines are in
a cluster which is inside a Tempest-shielded machine room with a
massive combination lock that you might find on a bank vault, or if
you're in an environment where network cables are protected by pressurized
pipes (so any attempt to tamper with or tap the cables causes a
pressure drop which sets off the alarm which summons the Marines armed
with M-16's...).
(I've been inside classified machine rooms, which is why my laptop has
the 'Unclassified' label on it; it's needed so people can easily
eyeball it and know that I'm not allowed to connect it to any of the
classified networks. And those are just the less serious machine
rooms. In the really serious classified areas with the bank
vault-like doors, and the copper-tinted windows, I'm not allowed to
bring in an unclassified laptop at all --- and yes, Linux is being
used in such environments. And yes, they do sometimes use IPSO and/or
CIPSO.)
> Security that isn't secure is not really useful. You might as well not
> bother.
There are different kinds of security. Not all of them involve
cryptography and IPSEC. Some of them involve armed soldiers and air
gap firewalls. :-)
Yes, normally the network is outside the Trusted Computing Base (TCB),
but a cluster of Linux machines in a rack is roughly the same size as
a huge Unix server ten years ago --- and it's not like Ethernet is any
more secure than the PCI bus. So why do we consider communications
across a PCI bus secure even though they aren't encrypted? Why,
because normally we assume the PCI bus is inside the trust boundary,
and so we don't worry about bad guys tapping communications between
the CPU and the hard drive on the PCI bus.
But these days, it is obviously possible to create clusters where
certain network interfaces are only connected to machines contained
completely inside the trust boundary, just like a PCI bus in a
traditional server. So don't be so quick to dismiss something like
CIPSO out of hand, just because it doesn't use IPSEC.
Regards,
- Ted
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Control Kernel
Date: Sun, 30 Sep 2007 20:23:06 UTC
Message-ID: <fa.xbHA9IRUsDn761KlXJ4r76tLSII@ifi.uio.no>
On Sun, Sep 30, 2007 at 10:05:57PM +0200, Andi Kleen wrote:
> > but a cluster of Linux machines in a rack is roughly the same size as
> > a huge Unix server ten years ago --- and it's not like Ethernet is any
> > more secure than the PCI bus.
>
> PCI busses normally don't have routers to networks outside the box connected
> to them.
The whole *point* is that the routers are interconnecting boxes inside
the cluster, and none of them connect to the outside world. It's no
different than a SCSI cable connecting to JBOD in a separate box, or a
Fiber Channel router connected to a SAN network connecting to a
storage array. The SCSI or FC buses aren't encrypted either, and in
the Fiber Channel case we have a router --- yet people aren't
stressing out that we're not encrypting the traffic over the Storage
Area Network. Why? Because it's understood the network stays inside
the machine room. The same thing can be true for Ethernet --- think
iSCSI, for example.
> > So don't be so quick to dismiss something like
> > CIPSO out of hand, just because it doesn't use IPSEC.
>
> With your argumentation we could also just disable all security
> in these situations (as in null LSM to save some overhead); after all these
> systems are protected by armed guards. If someone gets past the guards
> they could connect their laptop to the network and fake all the "secured"
> packets. If you assume that won't happen why do you need computer security at all?
If you get past all of the guards, you can usually reboot in single
user mode, and get root anyway. If you have physical access to the
computer, you're generally doomed anyway, unless you are willing to
pay the cost of encrypting everything on every single disk platter.
(And yes, in the more paranoid environments, where it's too expensive
to have 7x24 armed guards, maybe that makes sense.)
The point of something like CIPSO is that you want to label the
packets so the other side knows how they should be treated. We don't
encrypt unix permission bits on most on-disk filesystems, either. Yet
I haven't heard people saying that just because someone could break
into a machine room, disconnect the JBOD from the computer, hook up
the JBOD to their laptop, and futz with the Unix permission bits,
hook the JBOD back up, and reboot, that Unix permission bits are useless,
and we should leave all files at mode 777 --- since clearly we're not
secure against someone who can break into the machine room.....
I *hope* that sounds absurd, right?
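The labeling idea can be sketched in a few lines. This is a toy, *not* the real CIPSO on-the-wire option format, and the names and layout here are invented for illustration: the point is only that each packet carries a sensitivity level the receiver can check against its own policy.

```python
import struct

# Toy in-band sensitivity label -- *not* the real CIPSO option format.
# The point: the packet itself says how it should be treated.
TOP_SECRET, SECRET, UNCLASSIFIED = 3, 2, 0

def label_packet(payload: bytes, level: int, doi: int = 1) -> bytes:
    # Prepend [DOI: 4 bytes][level: 1 byte], vaguely like an IP option.
    return struct.pack("!IB", doi, level) + payload

def receiver_allows(packet: bytes, clearance: int) -> bool:
    _doi, level = struct.unpack("!IB", packet[:5])
    return level <= clearance   # "no read up", Bell-LaPadula style

pkt = label_packet(b"report", SECRET)
print(receiver_allows(pkt, clearance=TOP_SECRET))    # cleared
print(receiver_allows(pkt, clearance=UNCLASSIFIED)) # not cleared
```

The trust-boundary argument in the mail is about whether you can believe the label at all; inside the machine room, you can.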
- Ted
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Control Kernel
Date: Mon, 01 Oct 2007 19:01:50 UTC
Message-ID: <fa.CUThHeDW3QmzHujeNIAFYMcIKEE@ifi.uio.no>
On Mon, Oct 01, 2007 at 11:40:39AM -0400, Stephen Smalley wrote:
> You argued against pluggable schedulers, right? Why is security
> different?
>
> Do you really want to encourage people to roll their own security module
> rather than working toward a common security architecture and a single
> balanced solution (which doesn't necessarily mean SELinux, mind you, but
> certainly could draw from parts of it)? As with pluggable schedulers,
> the LSM approach prevents cross pollination and forces users to make
> poor choices.
Something should be pluggable, and some things not. We have multiple
filesystems in the tree; but we only have one scheduler and one TCP/IP
stack.
I'm going to argue that security is more like filesystems than
scheduling. The real problem with security is that there are no "hard
numbers", as Linus puts it. Instead, there are different sets of
requirements in terms of what is a valid threat model --- which will
very depending on the environment and how the servers are deployed,
and what the capabilities are of the adversary trying to penetrate
said system --- and how end users are willing to compromise between
security and convenience. This is much like filesystems, where one of
the reasons why people chose different filesystems is because they
have differing requirements and operational environments, and some
filesystems are better suited for certain requirements and
environments than others.
In some environments, say if you are creating a system that will
handle classified data for the U.S. government, there are formal
requirements that your employer, the NSA, sign off on the solution.
This allows the NSA to force the application programmers and end users
to make the tradeoff tilt very much against convenience in favor of
security. And given the threat models and capabilities of the
adversaries involved, that's probably appropriate.
But that's not necessarily appropriate for all users. SELinux is so
horrible to use that, after wasting a large amount of time enabling it
and then watching all of my applications die a horrible death because
they didn't have the appropriate hand-crafted security policy, I swore
off of it. For me, given my threat model and how much my time is
worth, life is too short for SELinux.
And I can tell you that certain ISVs, when faced with users
complaining that their commercial application which costs ten times
as much as their Linux distribution doesn't work when SELinux is
enabled, simply tell their customers to disable SELinux.
I've often thought that one of the reasons why SELinux folks argue so
strenuously against AppArmor is fear: configuring AppArmor to support
new applications is so much easier than configuring SELinux that, on
an even playing field, in the many commercial environments that dwarf
the NSA-mandated security pain of the federal sector, AppArmor would
be much more popular with customers than SELinux. Yes,
AppArmor protects against fewer threats; but if the choice is to go
without SELinux because it's too painful to configure SELinux policy,
surely some protection is better than none.
> Some have suggested that security modules are no different than
> filesystem implementations, but filesystem implementations at least are
> constrained by their need to present a common API and must conform with
> and leverage the VFS infrastructure. Different security modules present
> very different interfaces and behaviors from one another and LSM doesn't
> provide the same kind of common functionality and well-defined semantics
> as the VFS. The result of merging many wildly different security
> modules will be chaos for application developers and users, likely
> leading them to ignore everything but the least common denominator.
> It almost makes more sense to merge no security modules at all than to
> have LSM and many different security modules.
Look, the reality is that the common interface for applications is
system calls returning EPERM. If you have to rewrite applications to
use a security module, or force application writers to create a
complicated SELinux policy, application writers will simply balk. It
Just Won't Happen. Commercial applications like Oracle, DB2,
Websphere, BEA Application Server, Tivoli Storage Manager, and so on
work on multiple OS's, and not just Linux. If the application
developers are forced to use a Linux-specific API, most of them will
just walk away; it's much simpler and cheaper to tell users to disable
SELinux. And in the commercial world, we don't have the "big stick"
the NSA has in the federal/defense space to force application writers
to pay attention to SELinux.
The situation is just the same with filesystems. The common API is
POSIX. As filesystem writers, we often kvetch about how limiting
certain filesystem interfaces created 30 years ago might be, but the
reality is, very few applications will use extended interfaces, and so
if we broke readdir() and told application writers that they had to
use this wonderful new read_directory() API that was so much better
than the old readdir() interface, they would laugh at us and ignore
us. Why do security people think they have the ability to dictate to
application writers that they use specialized API's or write arcane
security policies?
- Ted
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Mon, 01 Oct 2007 15:08:44 UTC
Message-ID: <fa.qjq0EgdI/u5NagfamPl+u9x4j1A@ifi.uio.no>
On Mon, 1 Oct 2007, James Morris wrote:
>
> Merging Smack, however, would lock the kernel into the LSM API.
> Presently, as SELinux is the only in-tree user, LSM can still be removed.
Hell f*cking NO!
You security people are insane. I'm tired of this "only my version is
correct" crap. The whole and only point of LSM was to get away from that.
And anybody who claims that there is "consensus" on SELinux is just in
denial.
People are arguing against other peoples security on totally bogus points.
First it was AppArmor, now this.
I guess I have to merge AppArmor and SMACK just to get this *disease* off
the table. You're acting like a string theorist, claiming that there is
no other viable theory out there. Stop it. It's been going on for too damn
long.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Mon, 01 Oct 2007 16:07:09 UTC
Message-ID: <fa.rocpTZolsigyqXd7NNR38DIb2hw@ifi.uio.no>
On Mon, 1 Oct 2007, Stephen Smalley wrote:
>
> You argued against pluggable schedulers, right? Why is security
> different?
Schedulers can be objectively tested. There's this thing called
"performance", that can generally be quantified on a load basis.
Yes, you can have crazy ideas in both schedulers and security. Yes, you
can simplify both for a particular load. Yes, you can make mistakes in
both. But the *discussion* on security seems to never get down to real
numbers.
So the difference between them is simple: one is "hard science". The other
one is "people wanking around with their opinions".
If you guys had been able to argue on hard data and be in agreement, LSM
wouldn't have been needed in the first place.
BUT THAT WAS NOT THE CASE.
And perhaps more importantly:
BUT THAT IS *STILL* NOT THE CASE!
Sorry for the shouting, but I'm serious about this.
> Do you really want to encourage people to roll their own security module
> rather than working toward a common security architecture and a single
> balanced solution (which doesn't necessarily mean SELinux, mind you, but
> certainly could draw from parts of it)? As with pluggable schedulers,
> the LSM approach prevents cross pollination and forces users to make
> poor choices.
Another difference is that when it comes to schedulers, I feel like I
actually can make an informed decision. Which means that I'm perfectly
happy to just make that decision, and take the flak that I get for it. And
I do (both decide, and get flak). That's my job.
In contrast, when it comes to security, I see people making IDIOTIC
arguments, and I absolutely *know* that those arguments are pure and utter
crap, and at the same time, I see that those people are supposed to be
"experts".
For example, you security guys still debate "inodes" vs "pathnames", as if
that was an either-or issue.
Quite frankly, I'm not a security person, but I can tell a bad argument
from a good one. And an argument that says "inodes _or_ pathnames" is so
full of shit that it's not even funny. And a person who says that it has
to be one or the other is incompetent.
Yet that is *still* the level of disagreement I see.
So LSM stays in. No ifs, buts, maybes or anything else.
When I see the security people making sane arguments and agreeing on
something, that will change. Quite frankly, I expect hell to freeze over
before that happens, and pigs will be nesting in trees. But hey, I can
hope.
> If Smack is mergeable despite likely being nothing more than a strict
> subset of SELinux (MAC, label-based, should be easily emulated on top of
> SELinux or via fairly simple extension to it to make such emulation
> simpler or more optimal), then what isn't mergeable as a separate
> security module?
I'm simply not interested in this discussion. If you cannot understand the
*meta*discussion above (which has nothing to do with SMACK or SELinux per
se), I cannot help you.
The biggest reason for me to merge SMACK (and AppArmor) would not be those
particular security modules in themselves, but to inject a sense of
reality in people. Right now, I see discussions about removing LSM because
"SELinux is everything". THAT IS A PROBLEM.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Tue, 02 Oct 2007 21:21:17 UTC
Message-ID: <fa.vuLcQkm1LT+krjbbTlqhks/OOdI@ifi.uio.no>
On Tue, 2 Oct 2007, Bill Davidsen wrote:
>
> And yet you can make the exact same case for schedulers as security, you can
> quantify the behavior, but if your only choice is A it doesn't help to know
> that B is better.
You snipped a key part of the argument. Namely:
Another difference is that when it comes to schedulers, I feel like I
actually can make an informed decision. Which means that I'm perfectly
happy to just make that decision, and take the flak that I get for it. And
I do (both decide, and get flak). That's my job.
which you seem to not have read or understood (neither did apparently
anybody on slashdot).
> You say "performance" as if it had universal meaning.
Blah. Bogus and pointless argument removed.
When it comes to schedulers, "performance" *is* pretty damn well-defined,
and has effectively universal meaning.
The arguments that "servers" have a different profile than "desktop" is
pure and utter garbage, and is perpetuated by people who don't know what
they are talking about. The whole notion of "server" and "desktop"
scheduling being different is nothing but crap.
I don't know who came up with it, or why people continue to feed the
insane ideas. Why do people think that servers don't care about latency?
Why do people believe that desktop doesn't have multiple processors or
through-put intensive loads? Why are people continuing this *idiotic*
scheduler discussion?
Really - not only is the whole "desktop scheduler" argument totally bogus
to begin with (and only brought up by people who either don't know
anything about it, or who just want to argue, regardless of whether the
argument is valid or not), quite frankly, when you say that it's the "same
issue" as with security models, you're simply FULL OF SH*T.
The issue with LSM is that security people simply cannot even agree on the
model. It has nothing to do with performance. It's about management, and
it's about totally different models. Have you even *looked* at the
differences between AppArmor and SELinux? Did you look at SMACK? They are
all done by people who are interested in security, but have totally
different notions of what "security" even *IS* ALL ABOUT.
In contrast, anybody who claims that the CPU scheduler doesn't know what
it's all about is just tripping. And anybody who claims that desktop
workloads are so radically different from server workloads (or that the
hardware is so different) is just totally out to lunch.
So next time, think five minutes before you start your argument.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Tue, 02 Oct 2007 23:28:01 UTC
Message-ID: <fa.3Dh2TtlSzM/YwX8eV7Emu7EnKh8@ifi.uio.no>
On Tue, 2 Oct 2007, Linus Torvalds wrote:
>
> I don't know who came up with it, or why people continue to feed the
> insane ideas. Why do people think that servers don't care about latency?
> Why do people believe that desktop doesn't have multiple processors or
> through-put intensive loads? Why are people continuing this *idiotic*
> scheduler discussion?
Btw, one thing that is true: while both servers and desktops care about
latency, it's often easier to *see* the issues on the desktop (or hear
them: audio skipping).
But that doesn't mean that the server people wouldn't care, and it doesn't
mean that scheduling would be "fundamentally different" on servers or the
desktop.
In contrast, security really *is* fundamentally different in different
situations. For example, I find SELinux to be so irrelevant to my usage
that I don't use it at all. I just don't have any other users on my
machine, so the security I care about is in firewalls etc. And that really
*is* fundamentally different from a system that has shell access to its
users. Which in turn is fundamentally different from one that has some
legal reasons why it needs to have a particular kind of security. Which in
turn is fundamentally different from ....
You get the idea.
It boils down to: "scheduling is scheduling", and doesn't really change
apart from the kind of decisions that are required by any scheduler (ie RT
vs non-RT etc). Everybody wants the same thing in the end: low latency for
loads where that matters, high bandwidth for loads where that matters.
It's not a "one user has only one kind of load". Not at all.
Security, on the other hand, very much does depend on the circumstances
and the wishes of the users (or policy-makers). And if we had one module
that everybody would be happy with, I'd not make it pluggable either. But
as it is, we _know_ that's not the case.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Wed, 03 Oct 2007 04:53:12 UTC
Message-ID: <fa.Q1YuFWdHZjGwsx78wjlaa2wJ9+k@ifi.uio.no>
On Tue, 2 Oct 2007, Bill Davidsen wrote:
>
> Unfortunately not so, I've been looking at schedulers since MULTICS, and
> desktops since the 70s (MP/M), and networked servers since I was the ARPAnet
> technical administrator at GE's Corporate R&D Center. And on desktops response
> is (and should be king), while on a server, like nntp or mail, I will happily
> go from 1ms to 10sec for a message to pass through the system if only I can
> pass 30% more messages per hour, because in virtually all cases transit time
> in that range is not an issue. Same thing for DNS, LDAP, etc, only smaller
> time range. If my goal is <10ms, I will not sacrifice capacity to do it.
Bill, that's a *tuning* issue, not a scheduler logic issue.
You can do that today. The scheduler has always (well, *almost*
always: I think the really really original one didn't) had tuning knobs.
It in no way excuses any "pluggable scheduler", because IT DOES NOT CHANGE
THE PROBLEM.
[ Side note: not only doesn't it change the problem, but a good scheduler
tunes itself rather naturally for most things. In particular, for things
that really are CPU-limited, the scheduler should be able to notice
that, and will not aim for latency to the same degree.
In fact, what is really important is that the scheduler notice that some
programs are latency-critical AT THE SAME TIME as other programs sharing
that CPU are not, which very much implies that you absolutely MUST NOT
have a scheduler that does one or the other: it needs to know about
*both* behaviors at the same time.
IOW, it is very much *not* about multiple different "pluggable modules",
because the scheduler must be able to work *across* these kinds of
barriers. ]
So for example, with the current scheduler, you can actually set things
like scheduler latency. Exactly so you can tune things. However, I
actually would argue that you generally shouldn't need to, and if you
really do need to, and it's a huge deal for a real load (and not just a
few percent for a benchmark), we should consider that a scheduler problem.
So your "argument" is nonsense. You're arguing for something else than
what you _claim_ to be arguing for. What you state that you want actually
has nothing what-so-ever to do with pluggable schedulers, quite the
reverse!
It's also totally incorrect to state that this is somehow intrinsically a
feature of a "server load". Many server loads have very real latency
constraints. No, not the traditional UNIX loads of SMTP and NNTP, but in
many loads the latency guarantees are a rather important part of it, and
you'll have benchmarks that literally test how high the load can be until
latency reaches some intolerable value - ie latency ends up being the
critical part.
There's also a meta-development issue here: I can state with total
conviction that historically, if we had had a "server scheduler" and a
"desktop scheduler", we'd have been in much worse shape than we are now.
Not only are a lot of the loads the same or at least similar (and aiming
for _one_ scheduler - especially one that auto-tunes itself at least to
some degree - gets you lots of good testing), but the hardware situation
changes.
For example, even just five years ago, there would have been people who
thought that multiprocessing is a server load - and they'd have been
largely right at the time. Would you have wanted a "server" (SMP, screw
latency) scheduler, a "workstation" (SMP but low-latency) scheduler and a
"desktop" (UP) scheduler for the different cases?
Because yes, SMP does impact the scheduler a lot... The locking, the
migration between CPU's, the CPU affinity.. Things that gamers five years
ago would have felt was just totally screwing them over and making the
scheduler slower and more complex "for no gain".
See? Pluggable things are generally a *bad* thing. You should generally
aim for *never* being pluggable if you can at all avoid it, because it not
only fragments the developer base over totally different code bases, it
generates unmaintainable decisions as the problem space evolves.
To get back to security: I didn't want pluggable security because I
thought that was a technically good solution. No, the reason Linux has LSM
(and yes, I was the one who pushed hard for the whole thing, even if I
didn't actually write any of it) was because the problem wasn't technical
to begin with.
It was social/political and administrative.
See? Another fundamental difference between schedulers and security
modules.
> > I don't know who came up with it, or why people continue to feed the insane
> > ideas. Why do people think that servers don't care about latency?
>
> Because people who run servers for a living, and have to live with limited
> hardware capacity realize that latency isn't the only issue to be addressed,
> and that the policy for degradation of latency vs. throughput may be very
> different on one server than another or a desktop.
Quite frankly a lot of other people run servers for a living too, and
their main issue is often latency. And no, they don't do NNTP or SMTP,
they do strange java things around databases with thousands of threads.
Should they use a "desktop" scheduler? Because clearly their loads have
nothing what-so-ever in common with yours?
Or can you possibly admit that it's really the exact same problem?
Really: tell me what the difference is between "desktop" and "server"
scheduling. There is absolutely *none*.
Yes, there are differences in tuning, but those have nothing to do with
the basic algorithm. They have to do with goals and trade-offs, and most
of the time we should aim for those things to auto-tune (we do have the
things in /proc/sys/kernel/, but I really hope very few people use them
other than for testing or for some extreme benchmarking - at least I
don't personally consider them meant primarily for "production" use).
> > Why do people believe that desktop doesn't have multiple processors or
> > throughput intensive loads? Why are people continuing this *idiotic*
> > scheduler discussion?
>
> Because people can't get you to understand that one size doesn't fit all (and
> I doubt I've broken through).
I understand the "one size" argument, I just disagree vehemently about it
having anything to do with a pluggable scheduler. The scheduler does have
tuning, most of it 100% automatic (that's what the "fairness" thing is all
about!), and none of it needs - or would even remotely be helped by -
pluggability.
Take a really simple example: you have fifty programs all wanting to run
on the same machine at the same time. There simply *needs* to be some
single scheduler that picks which one to run. At some point, you have to
make the decision. And no, they are not all "throughput" or all
"latency", and you cannot make your decision based on a "global pluggable
scheduler policy".
Some of the processes may be purely about throughput, some may be purely
about latency, and some may change over their lifetime.
Not very amenable to "pluggable" things, is it? Especially since the
thing that eventually needs to give the CPU time to *somebody* simply
needs to understand all these different needs at some level anyway. It
always ends up having to be *something* that decides, and it can
absolutely never ignore the other "somethings". So a set of independent
pluggable modules simply wouldn't work.
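The "one arbiter must weigh every need" point can be made concrete with a toy model. This is purely illustrative Python, not kernel code; the `Task` class, the latency boost, and the scoring are all invented here to show why the final decision point has to understand both latency and throughput tasks at once:

```python
# Toy illustration: one scheduler must arbitrate among tasks with
# different goals. All names and weights here are invented.

class Task:
    def __init__(self, name, latency_sensitive, waited, ran):
        self.name = name
        self.latency_sensitive = latency_sensitive  # cares about wakeup latency
        self.waited = waited  # time spent runnable but not running
        self.ran = ran        # CPU time already consumed

def pick_next(tasks):
    """A single decision point: every task's needs are weighed together.
    Latency-sensitive tasks get a boost, but throughput tasks still
    gain priority as they wait (a crude 'fairness' notion)."""
    def score(t):
        boost = 2.0 if t.latency_sensitive else 1.0
        return boost * t.waited - t.ran
    return max(tasks, key=score)

tasks = [
    Task("batch-job", latency_sensitive=False, waited=30, ran=100),
    Task("audio",     latency_sensitive=True,  waited=20, ran=5),
    Task("compile",   latency_sensitive=False, waited=50, ran=60),
]
print(pick_next(tasks).name)  # -> audio
```

Note that splitting `score` into independent "latency plugins" and "throughput plugins" buys nothing: whatever compares the final numbers still has to understand both kinds of need, which is exactly the argument above.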
See?
(Sure you could make a multi-level scheduler with different pluggable ones
for different levels, but that really doesn't work very well, since even
in a multi-level one, you do want to have some generic notion of "this one
cares about latency" and "this process is about throughput", so then the
pluggable stuff wouldn't add any advantage _anyway_ - the top-level
decision would have all the complexities of the one "perfect" scheduler
you want in the first place!)
In contrast, look at fifty programs that all want to run on the same
machine at the same time, but we have security issues. Now, security
pretty much by definition cuts _across_ those programs, with the whole
point generally being to make one machine secure, so you'd almost always
want to have "a security model" for the whole machine (or at
least virtual machine) - it's just that the policies may be totally
different in different circumstances and across different machines.
But even if you were to *not* want to have one single policy, but have
different policies for different processes, it at least still makes
some conceptual sense, in ways it does not to try to have independent
schedulers. For schedulers, at some point, it just hits the hardware
resource: the CPU needs to be given to *one* of them. For a security
policy, it's all software choices - you don't need to limit yourself to
one choice at any time.
So a pluggable module makes more sense there anyway.
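The asymmetry can be sketched directly: the CPU must go to exactly one task, but a security decision is a pure software predicate, so several policies can all be consulted and composed. This is an invented miniature, not the actual LSM hook API; the policy names and checks are stand-ins:

```python
# Invented illustration: security decisions compose in software.
# Each policy is just a predicate; all of them get consulted,
# and access is granted only if every policy agrees.

def dac_policy(subject, obj, op):
    # Crude owner check standing in for classic Unix permissions.
    return subject == obj.get("owner") or op == "read"

def mac_policy(subject, obj, op):
    # A label check standing in for a mandatory policy like Smack.
    return obj.get("label") != "secret" or subject == "trusted"

POLICIES = [dac_policy, mac_policy]

def allowed(subject, obj, op):
    # Unlike a CPU, which must be handed to exactly one task,
    # every policy gets a say: the most restrictive answer wins.
    return all(p(subject, obj, op) for p in POLICIES)

f = {"owner": "alice", "label": "secret"}
print(allowed("alice", f, "write"))   # owner, but the label check refuses
print(allowed("trusted", f, "read"))  # both policies agree
```

There is no shared hardware resource forcing a single winner here, which is why stacking or swapping policies at least makes conceptual sense in a way independent schedulers do not.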
But no, that's not really why we have LSM. I'd have *much* preferred to
have one unified security module setup that we could all agree on, and no
pluggable security modules. It was not to be - and the reason we have LSM
is not because "it makes more sense than a CPU scheduler", but simply
because "people didn't actually get anything done at all, because they
just argued about what to do".
In the CPU schedulers, Ingo still gets work done, even though people argue
about it. So we haven't needed to go to the extreme of an "LSM for CPU
schedulers", because the arguments don't actually hold up the work.
And THAT is what matters in the end.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH] Version 3 (2.6.23-rc8) Smack: Simplified Mandatory Access
Date: Wed, 03 Oct 2007 00:19:56 UTC
Message-ID: <fa.ctSCygkpMdVAuxwQFzHU3KTcaRw@ifi.uio.no>
On Wed, 3 Oct 2007, Alan Cox wrote:
>
> Smack seems a perfectly good simple LSM module, its clean, its based upon
> credible security models and sound theory (unlike AppArmor).
The problem with SELinux isn't the theory. It's the practice.
IOW, it's too hard to use.
Apparently Ubuntu is giving up on it too, for that reason.
And what some people seem to have trouble admitting is that theory counts
for nothing, if the practice isn't there.
So quite frankly, the SELinux people would look a whole lot smarter if
they didn't blather on about "theory".
Linus