From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 02:28:23 UTC
Message-ID: <fa.FocnvcnLqG7kPaYdjYdPJfSdhjc@ifi.uio.no>
On Tue, 15 Jul 2008, pageexec@freemail.hu wrote:
>
> so guys (meaning not only Greg but Andrew, Linus, et al.), when will you
> publicly explain why you're covering up security impact of bugs? and even
> more importantly, when will you change your policy or bring your process
> in line with what you declared?
We went through this discussion a couple of weeks ago, and I have
absolutely zero interest in explaining it again.
I personally don't like embargoes. I don't think they work. That means
that I want to fix things asap. But that also means that there is never a
time when you can "let people know", except when it's not an issue any
more, at which point there is no _point_ in letting people know any more.
So I personally consider security bugs to be just "normal bugs". I don't
cover them up, but I also don't have any reason what-so-ever to think it's
a good idea to track them and announce them as something special.
So there is no "policy". Nor is it likely to change.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 16:08:36 UTC
Message-ID: <fa.R6aYKk1rrhy4oBR5Dm7vIGw2AWk@ifi.uio.no>
On Tue, 15 Jul 2008, pageexec@freemail.hu wrote:
>
> by 'cover up' i meant that even when you know better, you quite
> consciously do *not* report the security impact of said bugs
Yes. Because the only place I consider appropriate is the kernel
changelogs, and since those get published with the sources, there is no
way I can convince myself that it's a good idea to say "Hey script
kiddies, try this" unless it's already very public indeed.
> see my comment about reality above. heck, even linux vendors do track
> and announce them, it's part of the support they provide to paying
> customers (and even non-paying users).
Umm. And they mostly do a crap job at it, only focusing on a small
percentage (the ones that were considered to be "big issues"), and because
they do the reporting they also feel they have to have embargoes in place.
That's why I don't do reporting - it almost inevitably leads to embargoes.
So as far as I'm concerned, "disclosing" is the fixing of the bug. It's
the "look at the source" approach.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 16:14:00 UTC
Message-ID: <fa.a8PYBfKwFq0Fl6ls/kFjBfKbV44@ifi.uio.no>
On Tue, 15 Jul 2008, Linus Torvalds wrote:
>
> So as far as I'm concerned, "disclosing" is the fixing of the bug. It's
> the "look at the source" approach.
Btw, and you may not like this, since you are so focused on security, one
reason I refuse to bother with the whole security circus is that I think
it glorifies - and thus encourages - the wrong behavior.
It makes "heroes" out of security people, as if the people who don't just
fix normal bugs aren't as important.
In fact, all the boring normal bugs are _way_ more important, just because
there's a lot more of them. I don't think some spectacular security hole
should be glorified or cared about as being any more "special" than a
random spectacular crash due to bad locking.
Security people are often the black-and-white kind of people that I can't
stand. I think the OpenBSD crowd is a bunch of masturbating monkeys, in
that they make such a big deal about concentrating on security to the
point where they pretty much admit that nothing else matters to them.
To me, security is important. But it's no less important than everything
*else* that is also important!
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 19:16:46 UTC
Message-ID: <fa.rdcOoBM7quHjitCB7xgr8LCp5vw@ifi.uio.no>
On Tue, 15 Jul 2008, pageexec@freemail.hu wrote:
>
> in other words, try a better argument, possibly without bogeymen.
You know what - when nobody does embargoes, I'll consider your argument to
have a point.
In the meantime, I'm not in the least interested in your idiotic
arguments. Especially as you can't even read what I wrote:
> i don't see you embargo normal bug fixes, why would you embargo security
> bug fixes? they're just normal bugs, aren't they?
Exactly. I don't embargo them. I refuse to have anything to even _do_ with
organizations like vendor-sec, which I think is a corrupt cluster-fuck of
people who just want to cover their own ass.
They're just normal bugs.
But that also doesn't mean that I see any reason to make it obvious what
to do to trigger them, and cause problems at universities and such. So I
don't do "here's how to exploit it" commit logs, for example.
(If you haven't been at a university, you don't know how many smart young
people want to "try it to see". And if you have been there, and don't
think it's a problem when they do and wouldn't be happier if they didn't,
you probably don't know what the word "empathy" means).
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 20:19:00 UTC
Message-ID: <fa./37b++jtNZ9aPN3Dgr9N3YXwIrw@ifi.uio.no>
On Tue, 15 Jul 2008, pageexec@freemail.hu wrote:
>
> in any case, i don't see why you can't put keywords into the commit
> that say the bug being fixed is 'security related' or 'potentially
> exploitable', etc. people can then decide how to prioritize them.
Because I see no point. Quite often, we don't even realize some random bug
could have been a security issue.
It's not worth my energy, in other words.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 20:44:04 UTC
Message-ID: <fa.Gw/FjNbaQ1TgSdk8VMT71P26CUQ@ifi.uio.no>
On Tue, 15 Jul 2008, pageexec@freemail.hu wrote:
>
> i understand and i think no one expects that. in fact, i know how much
> expertise and time it takes to determine that. but what happens when
> you do figure out the security relevance of a bug during bug submission
The issue is that I think it's then _misleading_ to mark that kind of
commit specially, when I actually believe that it's in the minority.
If people think that they are safer for only applying (or upgrading to)
certain patches that are marked as being security-specific, they are
missing all the ones that weren't marked as such. Making them even
_believe_ that the magic security marking is meaningful is simply a lie.
It's not going to be.
So why would I add some marking that I most emphatically do not believe in
myself, and think is just mostly security theater?
I generally do not remove peoples changelog entries, although I _will_
do even that if I think it's just too much of an actual exploit
description (of course - the patch itself can make the exploit fairly
clear). So you'll find CVE entries etc in the logs if you look.
But I do hope that anybody who looks for them is _aware_ that it's just a
small minority of possible problems.
Don't get me wrong - I'm not saying that security bugs are _common_, but
especially some local DoS thing for a specific driver or filesystem or
whatever can be a big security problem for _somebody_.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 00:17:24 UTC
Message-ID: <fa.76kRO/7yZpMpHidLpZvBZBIkQFY@ifi.uio.no>
On Tue, 15 Jul 2008, Tiago Assumpcao wrote:
>
> However, as I previously explained [http://lkml.org/lkml/2008/7/15/654],
> security issues are identified and communicated through what can be a long and
complicated (due to NDAs, etc.) process. If it culminates at implementation,
> without proper information forwarding from the development team, it will never
> reach the "upper layers" -- vendors, distributors, end users, et al.
Umm. That shouldn't be our worry. If others had a long and involved (and
broken) process, they should be the ones that track the fixes too. We
weren't involved, we didn't see that, we simply _cannot_ care.
> Therefore, yes, it is of major importance that you people, too, buy the
> problem and support the process as a whole. Otherwise... well, otherwise,
> we're back to where we started, 20 years ago. Good luck Linux users.
Umm. What was wrong with 20 years ago exactly?
Are you talking about all the wonderful work that the DNS people did for
that new bug, and how they are heroes for synchronizing a fix and keeping
it all under wraps?
And isn't that the same bug that djb talked about and fixed in djbdns from
the start? Which he did about ten YEARS ago?
Excuse me for not exactly being a huge fan of "security lists" and best
practices. They seem to be _entirely_ based on PR and how much you can
talk up a specific bug. No thank you,
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 01:42:13 UTC
Message-ID: <fa.G28tkcepXbO/cqBctU/MMvzjY5c@ifi.uio.no>
On Tue, 15 Jul 2008, Tiago Assumpcao wrote:
>
> How can I expect one to treat the unknown? If you are not aware of it, you do
> nothing.
Well, some people keep it secret and track it on vendor-sec or similar,
hidden from us.
But then when they are ready to announce it, they want our help to glorify
their corrupt process when they finally deign to let us know. And that
really irritates me.
> All I ask for is to receive the "There are updates available." message as soon
> as one security problem is reported, understood and treated by your
> development part. And that is, the sooner possible, if you please.
Umm. You're talking to _entirely_ the wrong person.
The people who want to track security issues don't run my development
kernels. They usually don't even run the _stable_ kernels. They tend to
run the kernels from some commercial distribution, and usually one that is
more than six months old as far as I - and other kernel developers - are
concerned.
IOW, when we fix security issues, it's simply not even appropriate or
relevant to you. More importantly, when we fix them, your vendor probably
won't have the fix for at least another week or two in most cases anyway.
So ask yourself - what would happen if I actually made a big deal out of
every bug we find that could possibly be a security issue? HONESTLY now!
We'd basically be announcing a bug that (a) may not be relevant to you,
but (b) _if_ it is relevant to you, you almost certainly won't have fixed
packages available to you until a week or two later anyway!
Do you see?
I would not actually be helping you. I'd be helping the people you want to
protect against!
Linus
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 01:09:02 UTC
Message-ID: <fa.y7BaNR9eVPRaidSK+CxtZmqlLHs@ifi.uio.no>
On Tue, Jul 15, 2008 at 09:00:19PM -0300, Tiago Assumpcao wrote:
> For all the above: no. And this is the point of divergence.
> For you, as a person who "writes software", every bug is equivalent. You
> need to resolve problems, not classify them.
>
> However, as I previously explained [http://lkml.org/lkml/2008/7/15/654],
> security issues are identified and communicated through what can be a
> long and complicated (due to NDAs, etc.) process. If it culminates at
> implementation, without proper information forwarding from the
> development team, it will never reach the "upper layers" -- vendors,
> distributors, end users, et al.
Look, if you want this, pay $$$ to a distribution and get their
supported distribution. It costs time and effort to classify bugs as
security related (or not), and the people who care about this the most
also want to freeze a kernel version to simplify their application
testing, *but* get new drivers and bus support code back-ported so
they can use the latest hardware (while still keeping their
applications and 3rd party proprietary kernel modules from Nvidia and
Veritas stable and working) *and* they want the latest security fixes
(and only security fixes, since other fixes might destabilize their
application). People who want this can get it, today. Just pick up
the phone and give a call to your favorite enterprise Linux
distribution. It will cost you money, but hey, the people who want
this sort of thing typically are willing to pay for the service.
I'll note that trying to classify bugs as being "security-related" at
the kernel.org level often doesn't help the distros, since many of
these bugs won't even apply to whatever version of the kernel the
distros snapshotted 9-18 months ago. So if the distro snapshotted
2.6.18 in Fall 2006, and their next snapshot will be sometime two
years later in the fall of this year, they will have no use for some
potential local denial of service attack that was introduced by
accident in 2.6.24-rc3, and fixed in 2.6.25-rc1. It just doesn't
matter to them.
So basically, if there are enough kernel.org users who care, they can
pay someone to classify and issue CVE numbers for each and every
potential "security bug" that might appear and then disappear. Or
they can volunteer and do it themselves. Of course, this will provide
aid and comfort to Microsoft shills masquerading as analysts who
misuse CVE numbers to generate reports "proving" that Microsoft is
more secure (because they don't do their development in the open, so
issues that appear and disappear in development snapshots don't get
CVE numbers assigned), but hopefully most users are sophisticated
enough not to get taken in by that kind of bogus study. :-)
The one thing which is really pointless to do is to ask kernel
developers to do all of this classification work to get CVE numbers,
etc., for free. In free software, people do what they (or their
company) deem to be valuable for them. Flaming and complaining that
the kernel git logs aren't providing free marketing for PaX/grsecurity
isn't going to do much good.
- Ted
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 02:03:36 UTC
Message-ID: <fa.EF4NtdLw8sZHV8Y/hssWKJT/8RI@ifi.uio.no>
On Tue, 15 Jul 2008, Tiago Assumpcao wrote:
>
> So, only those willing to pay have the right of respect?
You keep using that word. I do not think it means what you think it means.
How about respecting my opinion instead?
But no, you claim I must respect you, because you have some other notion
of what should be done, even though I've explained why I don't agree.
It cuts both ways.
Linus
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 13:21:41 UTC
Message-ID: <fa.e7ooHgIVQ0S7rkamO1CI2mO4qfE@ifi.uio.no>
On Wed, Jul 16, 2008 at 11:33:12AM +0200, pageexec@freemail.hu wrote:
> > > That's fallacious. Assuming that you have good programmers, and you
> > > do, it's of very low cost the act of identifying what *is likely to
> > > be* a security bug.
> >
> > That is based on lots and lots of assumptions that are just not true.
> > Ted Tso, Stephen Smalley and I are all recognized as security experts
>
> not so quick. security is a big field, no one really can claim to be
> a general expert. Ted knows kerberos but he would be unable to exploit
> the task refcount leak bug fixed in 2.6.25.10. Stephen and you know
> MAC systems inside out but you too would be unable to exploit that bug.
> different domains, different expertise, despite all being 'security'.
As far as I am concerned, knowing how to exploit a task refcount leak
bug is a technician's job. Sure, I can write code that, given an
intercepted or stolen Kerberos srvtab/keytab file, forges Kerberos
tickets. But at the end of the day, that's perhaps the least
important part of what a "Security Expert" has to do. Bruce Schneier
has written about this extensively.
The important thing to recognize about security is that it is all
about tradeoffs. How do you protect the System (which may consist of
one computer or multiple computers) given a particular threat
environment, given adversaries with different levels of capability,
given the consequences of a security breach, and how do you do it
within given parameters of cost and usability?
What a security expert might do is laugh at someone who is spending
all of their time and energy worrying about local escalation attacks,
when they discover that all of the network exchanges are unprotected
and on an insecure network. Or, they might point out that you are
spending 10 times as much money and effort on securing a system as the
cost of a security breach, and that might not make sense either.
This is why there are so many arguments over security. There are
disagreements over what deserves more focus and attention, and what is
and isn't worthwhile trading off against other things. For example,
last I looked, PaX significantly reduces the chance of buffer overrun
attacks, but at the cost of cutting your address space in half and
forcing you to use a custom-built kernel since modules are not
supported either (which means no tools like Systemtap, as well). For
some people, particularly on 32-bit systems, this is unacceptable.
But some people might consider that a valid tradeoff.
As another example, take some bug that might be able to
cause a local privilege escalation. If the machine running that
particular kernel is part of a computing cluster which is totally
disconnected from the Internet, that bug is from a security point of
view totally irrelevant.
So to do a true security analysis about the seriousness of a bug
*always* requires some amount of context about what capabilities the
adversary might have (or might have acquired). Given that
most systems these days are single user systems, a local privilege
escalation attack may not be as big of a deal in this day and age.
Many people draw their trust boundary around the entire computer.
At the end of the day, it is an engineering discipline, and it is all
about tradeoffs. So while it is useful to have people who focus on
the security of a single box against adversaries who have local shell
access, it is very easy to lose perspective of the greater security
picture. And for someone like Linus, who is worried about the overall
system, it's even easier to lose perspective. Consider that there was
only one computer system that, to my knowledge, ever managed to get
evaluated as passing the Orange Book A1 security requirements; and
that system was a commercial failure. It took too long to bring to
market, it was too slow, and was too expensive. It would be like
people assuming that you could always build a better tank by putting more
armor on it, and that there is no such thing as "too much armor".
Same principle.
I have a theory which is that people who are focused on local system
security to the exclusion of all else have a high risk of becoming
unbalanced; they end up like Theo de Rant, frustrated because people
aren't paying attention to them and others aren't worried about
the same problems that they think are important. But, the good news
of open source is that if you *do* care about local system security to
the exclusion of all else, including high SMP scalability, and wide
hardware support, etc., you can go use OpenBSD! They may be much more
your type of people. Or, you can pay for support for an enterprise
Linux distribution, where they do have people who will help you out
for it. Hopefully their idea of security and priorities matches up
with your own, although I will note that some of the companies that
have focused on security to the exclusion of all else
(e.g. Trustix AS, Immunix) haven't necessarily done very well
commercially.
Regards,
- Ted
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 00:27:28 UTC
Message-ID: <fa.PDATW5UI/7YRASq63oxwMMzN3Kk@ifi.uio.no>
On Wed, 16 Jul 2008, pageexec@freemail.hu wrote:
>
> we went through this and you yourself said that security bugs are *not*
> treated as normal bugs because you do omit relevant information from such
> commits
Actually, we disagree on one fundamental thing. We disagree on
that single word: "relevant".
I do not think it's helpful _or_ relevant to explicitly point out how to
trigger a bug. It's very helpful and relevant when we're trying to chase
the bug down, but once it is fixed, it becomes irrelevant.
You think that explicitly pointing something out as a security issue is
really important, so you think it's always "relevant". And I take mostly
the opposite view. I think pointing it out is actually likely to be
counter-productive.
For example, the way I prefer to work is to have people send me and the
kernel list a patch for a fix, and then in the very next email send (in
private) an example exploit of the problem to the security mailing list
(and that one goes to the private security list just because we don't want
all the people at universities rushing in to test it). THAT is how things
should work.
Should I document the exploit in the commit message? Hell no. It's
private for a reason, even if it's real information. It was real
information for the developers to explain why a patch is needed, but once
explained, it shouldn't be spread around unnecessarily.
Linus
From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Wed, 16 Jul 2008 01:08:43 UTC
Message-ID: <fa.i+uciaq95ValKpsYLLD14JhhCmI@ifi.uio.no>
On Wed, 16 Jul 2008, pageexec@freemail.hu wrote:
>
> > And I take mostly the opposite view. I think pointing it out is
> > actually likely to be counter-productive.
>
> you keep saying that, but you don't explain *why*.
>
> > For example, the way I prefer to work is to have people send me and the
> > kernel list a patch for a fix, and then in the very next email send (in
> > private) an example exploit of the problem to the security mailing list
> > (and that one goes to the private security list just because we don't want
> > all the people at universities rushing in to test it). THAT is how things
> > should work.
>
> fine with me, i wasn't talking about that at all though ;).
Oh, so now you're suddenly fine with not doing "full disclosure"?
Just a few emails ago you berated me for not doing full disclosure, but
now you're saying it is fine?
Can you now admit that it's a gray line, and that we just have very
different opinions of where the line is drawn?
> 1. simple words/phrases that one can grep for (mentally or automated)
> examples: 'security', 'exploitable', 'DoS', 'buffer overflow', etc
I literally draw the line at anything that is simply greppable for. If
it's not a very public security issue already, I don't want a simple "git
log + grep" to help find it.
That said, I don't _plan_ messages or obfuscate them, so "overflow" might
well be part of the message just because it simply describes the fix. So
I'm not claiming that the messages can never help somebody pinpoint
interesting commits to look at, I'm just also not at all interested in
doing so reliably.
> i believe 3-5 are definitely not commit message material. 1 or 2 are.
> 5 should never be published or disseminated, 3 and 4 may be distributed
> to interested parties.
And I believe you now at least understand the difference. I draw the line
between 0 and 1, where 0 is "explain the fix" - which is something that
any - and every - commit message should do.
Linus
From: Theodore Tso <tytso@mit.edu>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10
Date: Tue, 15 Jul 2008 18:34:36 UTC
Message-ID: <fa.jxEud6S3bDMa2ud4ecvmN1khnJA@ifi.uio.no>
On Tue, Jul 15, 2008 at 05:31:09PM +0200, pageexec@freemail.hu wrote:
> obviously there *is* a policy, it's just not what you guys declared
> earlier in Documentation/SecurityBugs. would you care to update it
> or, more properly, remove it altogether as it currently says:
Hi, so I'm guessing you're new to the Linux kernel. What you are
missing is that while *Linus* is unwilling to play the disclosure game,
there are kernel developers (many of whom work for distributions, and
who *do* want some extra time to prepare a package for release to
their customers) who do. So what Linus has expressed is his personal
opinion, and he simply is not on any of the various mailing lists
that receive limited-disclosure information, such as the general
vendor-sec@lst.de mailing list, or the security@kernel.org list
mentioned in Documentation/SecurityBugs.
Both vendor-sec and security@kernel.org are not formal organizations,
so they cannot sign NDAs, but they will honor non-disclosure
requests, and the subscription list for both lists is carefully
controlled.
People like Linus who have a strong, principled stand for Full
Disclosure simply choose not to request to be placed on those mailing
lists. And if Linus finds out about a security bug, he will fix it
and check it into the public git repository right away. But he's very
honest in telling you that is what he will do --- so you can choose
whether or not to include him in any disclosures that you might choose
to make.
The arguments about whether or not Full Disclosure is a good idea or
not, and whether or not the "black hat" and "grey hat" and "white hat"
security research firms are unalloyed forces for good, or whether they
have downsides (and some might say very serious downsides) have been
arguments that I have personally witnessed for over two decades
(Speaking as someone who helped to dissect the Robert T. Morris
Internet Worm in 1988, led the Kerberos development team at MIT for
many years, and chaired the IP SEC Working Group for the IETF, I have
more than my fair share of experience). It is clear that we're not
going to settle this debate now, and certainly not on the Linux Kernel
Mailing List.
Suffice it to say, though, that there are people whose views on these
matters span the entire gamut, and I know many reasonable people who
hold very different positions along the entire continuum --- and this
is true both in the Internet community at large, and in the Linux
Kernel development community specifically.
Best regards,
- Ted
From: Al Viro <viro@ZenIV.linux.org.uk>
Newsgroups: fa.linux.kernel
Subject: Re: [stable] Linux 2.6.25.10 (resume)
Date: Sun, 20 Jul 2008 17:29:20 UTC
Message-ID: <fa.614ruwFCfuiFtpNSmrEu3xwxSTo@ifi.uio.no>
On Sat, Jul 19, 2008 at 03:13:43PM -0700, Greg KH wrote:
> I disagree with this and feel that our current policy of fixing bugs and
> releasing full code is pretty much the same thing as we are doing today,
> although I can understand the confusion. How about this rewording of
> the sentence instead:
>
> We prefer to fix and provide an update for the bug as soon as
> possible.
>
> So a simple 1 line change should be enough to stem this kind of argument
> in the future, right?
Not quite... OK, here's a story that might serve as a model
of all that crap - it certainly runs afoul of a bunch of arguments on
all sides of that.
We all know that POSIX locks suck by design, in particular where they
deal with close(2) semantics. "$FOO is associated with process P having
a descriptor referring to opened file F; $FOO disappears when any such
descriptor gets removed" is bloody inconvenient in a lot of respects. It
also turns out to invite a very similar kind of wrong assumption in all
implementations that have to deal with descriptor tables being possibly
shared. So far the victims include:
* FreeBSD POSIX locks; used to be vulnerable, fixed.
* OpenBSD POSIX locks; vulnerable.
* Linux POSIX locks and dnotify entries; used to be vulnerable, fixed.
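To make those close(2) semantics concrete, here's a minimal userspace
sketch (plain POSIX, error handling omitted): take a lock through one
descriptor, close a *second* descriptor for the same file, and the lock
is gone.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd1 = open("/tmp/lockdemo", O_RDWR | O_CREAT, 0600);
        int fd2 = open("/tmp/lockdemo", O_RDWR);  /* same file, second descriptor */

        fcntl(fd1, F_SETLK, &fl);  /* lock taken through fd1... */
        close(fd2);                /* ...and released by closing fd2 */

        if (fork() == 0) {         /* probe from a separate process */
            struct flock probe = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
            int fd = open("/tmp/lockdemo", O_RDWR);
            fcntl(fd, F_GETLK, &probe);
            /* prints "unlocked"; drop the close(fd2) above and it prints "locked" */
            printf("%s\n", probe.l_type == F_UNLCK ? "unlocked" : "locked");
            _exit(0);
        }
        wait(NULL);
        return 0;
    }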
Plan9 happily avoids having these turds in the first place and IIRC NetBSD
simply doesn't have means for sharing descriptor tables. Should such means
appear it would be vulnerable as well. Dnotify is a Linux-only, er, entity
(as in "non sunt multiplicanda"). I haven't looked at Solaris and I couldn't
care less about GNU Turd.
In all cases the vulnerabilities are local, with impact ranging from
user-triggered panic to rather unpleasant privilege escalations (e.g.
"any user can send an arbitrary signal to arbitrary process" in case of
dnotify, etc.)
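For readers who haven't met dnotify: the kernel-side entry in question
gets created by a registration along these lines (a sketch of the
legitimate API - real fcntl flags, Linux-only, error handling omitted);
the eviction of that entry on close() is exactly what raced:

    #define _GNU_SOURCE  /* for F_SETSIG / F_NOTIFY */
    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static void on_change(int sig)
    {
        /* something was created in the watched directory */
    }

    int main(void)
    {
        int fd = open("/tmp", O_RDONLY);  /* a directory descriptor */

        signal(SIGRTMIN, on_change);
        fcntl(fd, F_SETSIG, SIGRTMIN);                  /* signal to deliver */
        fcntl(fd, F_NOTIFY, DN_CREATE | DN_MULTISHOT);  /* the dnotify entry */
        pause();                                        /* wait for a notification */
        return 0;
    }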
The nature of the mistaken assumption is exactly the same in all cases.
An object is associated with a vnode/dentry/inode/whatnot, and since it's
destroyed on any close(), it is assumed that such objects are guaranteed
not to outlive the opened file they are associated with or the process
creating them. That leads to the following nastiness:
A and B share a descriptor table.
A: fcntl(fd, ...) trying to create such an object; it resolves the
descriptor to an opened file, pins it down for the duration of the
operation and blocks somewhere in the course of creating the object.
B: close(fd) evicts the opened file from the descriptor table. It finds
no objects to be destroyed.
A: completes creation of the object, associates it with the filesystem
object and releases the temporary hold it had on the opened file.
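The shape of that trigger from userspace looks roughly like this (a
sketch only - threads share the descriptor table, so it's pthreads; a
conflicting lock held by some other process is what makes A block inside
fcntl() and widens the window, and actually hitting the race is a timing
game):

    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>

    static int fd;

    static void *thread_a(void *unused)  /* "A" in the scenario above */
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        (void)unused;
        /* resolves fd, pins the file, may sleep waiting for the lock */
        fcntl(fd, F_SETLKW, &fl);
        return NULL;
    }

    int main(void)  /* build with -pthread */
    {
        pthread_t a;

        fd = open("/tmp/racedemo", O_RDWR | O_CREAT, 0600);
        pthread_create(&a, NULL, thread_a, NULL);
        close(fd);               /* "B": evict the descriptor meanwhile */
        pthread_join(a, NULL);
        return 0;
    }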
At that point we have an obvious leak and slightly less obvious attack vector.
If no other descriptors in the descriptor table of A and B used to refer to
the same file, the object will not be destroyed since there will be nothing
that could decide to destroy it. Unfortunately, it's worse than just a leak.
These objects are supposed to be destroyed before the end of life of the
opened file. As a result, nobody bothers to have them affect refcounts on the
file/vnode/dentry/inode/whatever. That's perfectly fine - the proper fix is
to have fcntl() verify that descriptor still resolves to the same file before
completing its work and destroy the object if it doesn't. You don't need to
play with refcounts for that. However, without that fix we have a leak that
leads to more than just an undead object - it's an undead object containing
references to a filesystem object that might be reused and to a
task_struct/proc/whatever you call it, again with the possibility of reuse.
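The re-check that the fix boils down to can be modeled in plain C (all
names below are made up for illustration, none of this is kernel API):
after the possibly-blocking creation step, look the descriptor up again,
and if it no longer resolves to the file we worked on, tear the object
down ourselves.

    #include <pthread.h>
    #include <stddef.h>

    struct file { int dummy; };
    struct lockobj { struct file *owner; };

    #define MAX_FDS 64
    static struct file *fd_table[MAX_FDS];  /* toy descriptor table */
    static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

    static struct file *resolve(int fd)     /* fd -> opened file, or NULL */
    {
        pthread_mutex_lock(&table_lock);
        struct file *f = fd_table[fd];
        pthread_mutex_unlock(&table_lock);
        return f;
    }

    static struct lockobj the_obj;          /* one static object keeps the toy small */

    static struct lockobj *create_object(struct file *f)  /* may block for a while */
    {
        the_obj.owner = f;
        return &the_obj;
    }

    static void destroy_object(struct lockobj *obj) { obj->owner = NULL; }

    /* model of fcntl(fd, F_SETLK, ...) with the race plugged */
    static int set_lock(int fd, struct file *filp)
    {
        struct lockobj *obj = create_object(filp);  /* we may have slept here */

        /* the fix: did close() evict the descriptor while we worked? */
        if (resolve(fd) != filp) {
            destroy_object(obj);  /* don't leave an undead object behind */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct file f;

        fd_table[3] = &f;
        set_lock(3, &f);    /* normal path: re-check passes, object kept */

        fd_table[3] = NULL; /* simulate B's close(fd) mid-operation */
        set_lock(3, &f);    /* racy path: re-check fails, object destroyed */
        return 0;
    }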
Getting from that point to the attack is a matter of simple (really simple)
RTFS. Details obviously depend on what the kernel in question is doing to
these objects, but with that kind of broken assertions it's really not hard
to come up with exploitable holes.
Now, let us look at the history:
* POSIX locks support predates shared descriptor tables; the holes
in question opened as soon as clone(2)/rfork(2) had been merged into a kernel
and grown support for shared descriptor tables. For Linux it's 1.3.22 (Sep
1995), for OpenBSD it's a bit before 2.0 (Jan 1996), for FreeBSD - 2.2 (Feb
1996, from OpenBSD).
* In 2002 dnotify had grown the same semantics (Linux-only thing,
2.5.15, soon backported to 2.4.19). Same kind of race.
* In 2003 FreeBSD folks had found and fixed their instance of that bug;
commit message:
"Avoid file lock leakage when linuxthreads port or rfork is used:
- Mark the process leader as having an advisory lock
- Check if process leader is marked as having advisory lock when
closing file
- Check that file is still open after lock has been obtained
- Don't allow file descriptor table sharing between processes
with different leaders"
"Check that file is still open" bit refers to that race. I have no idea
whether they'd realized that what they'd closed had been locally exploitable
or not.
* In 2005 Peter Staubach had noticed the Linux analog of that sucker.
The fix had been along the same lines as the FreeBSD one, but in case of Linux
we had extra fun with SMP ordering. Peter had missed that and his patch
left a hard-to-hit remnant of the race. His commit message is rather long;
it starts with
[PATCH] stale POSIX lock handling
I believe that there is a problem with the handling of POSIX locks, which
the attached patch should address.
The problem appears to be a race between fcntl(2) and close(2). A
multithreaded application could close a file descriptor at the same time as
it is trying to acquire a lock using the same file descriptor. I would
suggest that that multithreaded application is not providing the proper
synchronization for itself, but the OS should still behave correctly.
...
I'm 100% sure that in this case the vulnerability had _not_ been realized.
Bogus behaviour had been noticed and (mostly) fixed, but implications had
been missed, along with the fact that the same scenario had played out in
dnotify.
	* This April I'd caught the dnotify hole during a code audit. The fix
had been trivial enough and, seeing that the impact had been fairly nasty (any
user could send any signal to any process, among other things) I'd decided
to play along with "proper mechanisms". Meaning vendor-sec. _Bad_ error
in judgement; the damn thing had not improved since I'd unsubscribed from
that abortion. A trivial patch, obviously local to one function and obviously
not modifying behaviour other than in case when existing tree would've screwed
itself. Not affecting any headers. Not affecting any data structures.
_Obviously_ not affecting any 3rd-party code - not even on binary level.
IOW, as safe as it ever gets.
Alas. The usual shite had played out and we had a *MONTH-LONG*
embargo. I would like to use this opportunity to offer a whole-hearted
"fuck you" to that lovely practice and to vendor-sec in general.
* Very soon after writing the first version of a fix I started
wondering if POSIX locks had the same hole - the same kind of semantics
had invited the same kind of race there (eviction of dnotify entries and
eviction of POSIX locks are called in the same place in close(2)). The current
tree appeared to be OK; checking the history had shown Peter's patch.
A bit after that I'd noticed insufficient locking in the dnotify patch and
fixed that. Checking for similar problems in the POSIX locks counterpart of
that crap had found the SMP ordering bug that remained there.
	* The 2.6 -> 2.4 backport had uncovered another interesting thing -
Peter's patch did _not_ make it to 2.4 three years ago.
* Checking OpenBSD (only now - I didn't get around to that back in
April) shows that the same hole is alive and well there.
Note that
	* OpenBSD folks hadn't picked up the fix from FreeBSD, even though
the FreeBSD version had been derived from the OpenBSD one. Why? At a guess,
the commit message had been less than noticeable. Feel free to toss yourself off
screaming "coverup" if you are so inclined; I don't swing that way...
	* The initial Linux fix _definitely_ had missed the security implications
*and* realization that somebody else might suffer from the same problem.
FVO "somebody" including Linux itself, not to mention *BSD.
* Even when the problem had been realized for what it had been in
Linux, the potential *BSD issues hadn't registered. Again, the same wunch
of bankers is welcome to scream "coverup", but in this case we even have
the bleeding CVEs.
* CVEs or no CVEs, OpenBSD folks hadn't picked that one.
	* Going to vendor-sec is a mistake I won't repeat any time soon, and
I would strongly recommend everybody else to stay the hell away from that
morass. It creates inexcusable delays, binds you to confidentiality and,
let's face it, happens to be the prime infiltration target for zero-day
exploit traders. In _this_ case the exploit had been local-only. Anything more
interesting...
So where does that leave us? Bugger if I know... FWIW, I would rather see
implications thought about *and* mentioned in the changelogs. OTOH, the
above shows real-world cases where breakage hadn't even been realized to
be security-significant. Obviously broken behaviour (a leak, for example)
gets spotted and fixed. The fix looks obviously sane, the bug it deals with -
obviously real and worth fixing, so into a tree it goes... IOW, one _can't_
rely on having patches that close security holes marked as such. For that
the authors have to notice that themselves in the first place. OTTH, nothing
is going to convince the target audience of grsec, er, gentlemen that we are
not a massive conspiracy anyway, the tinfoil brigade being what it is...