Michal Zalewski recently talked about the reasons why security engineering is hard, and why it hasn’t had a great deal of success in making computer users safer. (To be fair, processes have certainly improved; if things nonetheless seem to be getting worse, that is probably because overall exposure to whatever vulnerabilities remain has grown so rapidly as to overwhelm any reduction in their number.)
Mr. Zalewski’s article has attracted a considerable amount of interest and a fair bit of comment, including this response by Charles Smutz. I was struck by this comment in particular: “While our users need to improve their resilience to social engineering, in many cases, targeted attacks are done so well that I couldn’t fault a user for being duped.”
Now, what I’m about to say is so obvious that it probably didn’t occur to either Mr. Zalewski or Mr. Smutz, experts in their fields, to mention it: if we can’t stop users from being duped, it might be a good idea to limit the damage that a duped, or otherwise dangerous, user can do.
So if this is so obvious, why am I mentioning it? Well, mostly because it leads directly to the more interesting question: can we limit the damage that a duped user can do?
The interesting thing about this question is that while it seems obvious that it should be possible, at least to some extent, it is often assumed, without much further clarification or justification, that it isn’t. As an example, let’s consider the simplest possible case – a user opens a malicious executable file sent by email. This is Law Number One of Microsoft’s 10 Immutable Laws of Security; see also this more recent discussion, along with part 2 and part 3.
The key part of Microsoft’s First Law is this sentence: “Once a program is running, it can do anything, up to the limits of what you yourself can do on the computer.”
The only problem with this is that it isn’t actually true – or, more precisely, it doesn’t need to be true. In fact, to a limited extent, Microsoft changed this with Windows Vista’s much-maligned User Account Control system. If you are an administrator on the system, and you run a program without elevation, it can’t do everything you can do. So while the First Law is (more or less) true for existing operating systems, it is hardly immutable – it’s just a design feature of the OS.
It’s entirely possible to design an operating system in which each open document is associated with a process that can do nothing but modify that particular document. Oh, there’s a lot more you would need to do before you could really say that the OS was resistant to user error, and the OS would have to provide facilities for the other things such a process would need to do (reading the user’s preferences, for one example), but the basic idea is simple enough. I don’t doubt that there are alternative approaches of the same general class.
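The idea can be sketched in miniature using the principle of least authority: instead of letting the document-editing code reach the whole filesystem, a trusted broker hands it only an open handle to its own document. This is a toy illustration of the design, not a real OS mechanism – the `DocumentEditor` class and its methods are invented for the example, and a real system would enforce the boundary with process isolation rather than language discipline.

```python
# Toy sketch of per-document authority: the editor receives only an open
# handle, never a path and never filesystem access, so the worst it can do
# is modify that one document. (Illustrative only; class names invented.)
import io

class DocumentEditor:
    def __init__(self, handle):
        # The handle is the editor's entire world: no paths, no os module.
        self._handle = handle

    def read(self):
        self._handle.seek(0)
        return self._handle.read()

    def replace_text(self, old, new):
        # Even "malicious" editing code could only damage this one document.
        text = self.read().replace(old, new)
        self._handle.seek(0)
        self._handle.truncate()
        self._handle.write(text)

# A trusted broker (standing in for the OS) opens the document and passes
# only the handle to the untrusted editing code.
doc = io.StringIO("hello world")   # in-memory stand-in for a real file
editor = DocumentEditor(doc)
editor.replace_text("world", "reader")
print(doc.getvalue())  # -> hello reader
```

The point of the sketch is the shape of the interface: the dangerous code never names a file, so there is nothing else for it to touch even if it is duped into misbehaving.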
It should be noted that, in addition to resisting user error, such an OS would be largely resistant to security problems caused by application bugs. Only certain classes of application (mainly network-oriented apps) would be exposed to attack, and in most cases the infection wouldn’t survive a system reboot.
So why isn’t this class of approach taken seriously? Mainly, I think, because it breaks the programming model of pretty much every piece of application software ever written. Porting existing code to such an OS would be a significantly greater challenge than a normal port, although no doubt emulation techniques could be developed that would mitigate this somewhat. Still, developing such an operating system along with software to run on it would be a gargantuan effort which nobody seems likely to want to make.
On the other hand … if, as a society, we decided that we really did want safer computing systems and were willing to spend the money, I’m personally convinced that this would be the best way to go about it. Of course, it isn’t just operating systems that would need to be redesigned. Over the next few weeks I plan to take a break from reality and describe some of the ways in which computers could (perhaps) be made safer, if only financial constraints and backwards compatibility weren’t an issue.