From: tytso@mit.edu (Theodore Y. Ts'o)
Newsgroups: alt.os.linux,comp.os.linux.advocacy,
comp.os.linux.development.system,comp.os.misc,comp.unix.advocacy
Subject: Re: TAO: the ultimate OS
Date: 02 Sep 1999 09:41:45 -0400
peter@zeus.niar.twsu.edu (Peter Samuelson) writes:
> > I just pointed out in a recent message..its obvious you can have some
> > files that are read only to some applications, and write to others.
> > what you are telling me is that it could become slightly more
> > complicated than an existing system. but this is quite different
> > than the claim of impossibility.
>
> He didn't say "impossible", he said there are tradeoffs. Security
> versus ability to do anything useful. If you don't want your system to
> do anything useful, you can secure it real tight.
There's another tradeoff: between security and ease of use.  The
question is, how do you configure your sandbox?  Either you ask the
user to configure it, in which case you lose, because the user won't
know how to answer the questions correctly and the configuration will
be horribly complex; or you ask the program which is running, which
you can't do, because it's not trustworthy.
You could posit that the OS could look over the code and figure out
how the sandbox should be configured based on what the applet's
requirements are, but how does the OS determine which requirements
are legitimate and which are bogus?  Sure, you can block out the
really obvious ones, like "I need raw access to the disk so I can
format it", but what about the more subtle ones, like the application
claiming "I need write access to the spreadsheet so I can update some
data" --- is it really updating data, or is it destroying data?  Or
is it installing an application macro virus?  Having the OS figure it
out without asking the user is tantamount to trying to solve the
halting problem.
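(To make that concrete, here is a minimal illustrative sketch in C;
the function names and the file name "budget.csv" are mine, invented
purely for this example.  Both programs would declare the identical
requirement to the OS, namely "I need write access to the spreadsheet
file", and both issue the same kind of calls; telling the benign one
from the hostile one means predicting what each will actually do with
that access.)

    #include <stdio.h>

    /* Applet 1: the legitimate one.  It appends a new row of data
     * to the spreadsheet file. */
    void update_spreadsheet(const char *path)
    {
        FILE *f = fopen(path, "a");
        if (f) {
            fprintf(f, "Q3 revenue,1200000\n");
            fclose(f);
        }
    }

    /* Applet 2: the hostile one.  Same declared requirement, same
     * library call, but opening with "w" silently truncates the
     * file, destroying its contents. */
    void destroy_spreadsheet(const char *path)
    {
        FILE *f = fopen(path, "w");
        if (f)
            fclose(f);
    }

    int main(void)
    {
        update_spreadsheet("budget.csv");   /* looks harmless to the OS */
        destroy_spreadsheet("budget.csv");  /* ...and so does this      */
        return 0;
    }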
The other approach which has been tried is code signing --- but that
assumes that the code signers are trustworthy.  And more recently,
there have been a number of cases where Microsoft, Compaq, and HP
signed code which could itself be used improperly: with the right
data sets, the signed code could destroy your system.  Sure, these
were "bugs", but assuming (by assertion) that your system will be
BugFree(tm) shows an incredible amount of naivete --- it's not unlike
Bill Gates claiming that Microsoft software has no bugs.  (According
to him, they just have "issues" :-)
The final observation I will make is that Vladimir's "we'll just make
some files read-only" also shows an amazing amount of naivete.  A
read-only compromise (of a company's strategic planning data, for
example) can often do just as much damage as outright destruction ---
after all, even if a virus destroys your data, assuming you have
competent sysadmins, you'll have backups.  But with a read-only
compromise of sensitive data, you may never know what hit you until
it's too late.
But, as I started this whole posting with --- this still leaves open
the question of how you configure the sandbox in the first place.
How do you know which files (or objects) an application should be
given write access to, and which it should only be given read access
to?  What (if any) network connections should the application be
allowed to open?  Is the application allowed to throw up a window?
Is the application allowed to grab keyboard focus?  So many questions
--- and no way to answer them securely.
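(For what it's worth, here is a sketch in C, entirely of my own
invention and not any real OS interface, of the kind of
per-application policy those questions add up to.  Every field is
something that somebody, whether the user, the OS, or the untrusted
program itself, has to fill in before the sandbox means anything.)

    /* A hypothetical per-application sandbox policy.  The structure
     * and the example values below are invented for illustration. */
    struct sandbox_policy {
        const char **read_only_paths;   /* files the applet may read       */
        const char **read_write_paths;  /* files the applet may modify     */
        const char **allowed_hosts;     /* network connections it may open */
        int          may_open_window;   /* allowed to throw up a window?   */
        int          may_grab_focus;    /* allowed to grab keyboard focus? */
    };

    static const char *ro_paths[] = { "/home/user/templates/", NULL };
    static const char *rw_paths[] = { "/home/user/budget.csv", NULL };
    static const char *no_hosts[] = { NULL };   /* no network access */

    /* One possible set of answers for a hypothetical spreadsheet
     * applet; who supplies these answers, and how, is exactly the
     * open question. */
    static struct sandbox_policy spreadsheet_applet_policy = {
        ro_paths, rw_paths, no_hosts,
        1,  /* may open a window  */
        0,  /* may not grab focus */
    };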
Not specifying such troubling, niggling little details is part of the
hand-waving and arm-waving which is going on here.
- Ted