Fending Off Future Attacks by Reducing Attack Surface
Michael Howard
Secure Windows Initiative
February 4, 2003
Summary: Michael Howard discusses why you should reduce the amount of code that is open to future attack by installing only the needed features of a product. (5 printed pages)
Before I get started, I'd like to apologize for taking so long writing this article. My group just wrapped up the final security audit for Microsoft Windows® Server 2003. As part of the multi-pronged process improvements we're making, a security audit is simply a well-defined process to answer one simple question, "Are we there yet?" In coming months, I will explain in detail some of the process changes we're making to help deliver Trustworthy Computing.
I would also like to share some good news: the second edition of Writing Secure Code is now available from Microsoft Press. The new edition outlines many of the things we have learned in the last 12 months, as Microsoft has spent a massive amount of time securing existing and future products and has continued to spend more time and money on security research. The book has swelled from roughly 500 pages to 800 pages. Every chapter has been updated, and we've added 12 new chapters and appendices. We're really happy with the update and we trust you'll learn a great deal from it. You can get more info on the book, as well as CD materials and a sample chapter, at https://www.microsoft.com/mspress/books/5957.asp. Okay, enough advertising, on with this month's Code Secure column.
Building secure software, software that withstands attacks, isn't easy. Now, I'm not going to go on and on about code quality and security patches and make it sound as though we can never improve the security of software. As an industry, we can and must create more secure products. What I mean by "isn't easy" is that you can only create solutions that reflect the current threat environment. You cannot easily build software that anticipates tomorrow's attacks. Or can you?
I think you can, and it's all about reducing the amount of code that is open to attack by default. We all know we cannot rely on administrators and end users applying patches to maintain a secure environment: patching simply isn't a pleasant experience for most people, and frankly, many people don't even know they need a patch unless they are well informed.
Really nasty security issues occur for the following reasons:
- The product had a security flaw.
- The product is popular or is running by default.
We know security problems will exist because of the point I raised earlier: there are attacks and attack methods we do not know about yet. Of course, there is also the fact that people simply make mistakes. As an example, here's part of an e-mail discussion I had in December 2002 (names have been changed to protect the guilty!).
Michael: "It's all about the input. Don't trust the input. Simply swapping strcpy
with strncpy
isn't gonna make your code secure. Check the input!"
John:" Howard, get off your soapbox. Of course you can write secure code with strncpy."
Michael: "Yes you can, sometimes, but the correct way to build secure software is to check the data. You cannot replace strcpy
with strncpy
without regard for the data and expect to ship quality code."
John: "Yeah, but look at this perfectly secure code that does not check the input."
char *p = "Hello, Michael! "
"This is a quick test to see if I "
"can write secure code with 'n' functions";
char *buff = new char[sizeof(p) + 1];
if (buff)
strncpy(buff,p,strlen(p));
delete [] buff;
Michael: "You calculated the buffer size incorrectly. sizeof(p)
is the size of a pointer, which, in this case is only 4 bytes. sizeof(p)
does not return the length of the string, which is what you intended."
John: "Oops! Typo!"
Michael: "That's how buffer overruns happen!"
Okay, 'nuff said. Now you know how security issues can occur. Of course, a good code review should catch this kind of thing, but a developer shouldn't make such a simple error in the first place. Back to the article we go.
The second bullet point, about products being popular or running by default, also matters. If a product is not used by many people, the chance of the bad guys going to the effort of writing real exploits is somewhat slim; they'd prefer to attack a bigger, juicier target instead. The issue becomes more serious if the feature is running by default. Then it doesn't really matter whether the feature is popular: if it is on by default, it will be attacked, because there is a larger population of machines to attack.
All right, where am I going with this? I am a firm believer that you should install products with just enough features enabled to be useful; the product should not bristle with unused or less commonly used features. I say this because if a feature has a future security vulnerability (remember, you can't predict them), and the feature is not running by default, it cannot be attacked. It really is as simple as that. Now, don't get me wrong, turning a feature off is a defense-in-depth mechanism. You still perform security code reviews and security tests, but you take the next step and reduce the attack profile of the product too.
Calculating the Relative Attack Surface
As my group started working with various Microsoft product groups on reducing their products' attack profiles, an interesting question arose: how much more secure is the product currently in development than the product currently shipping? After much thought, we realized this is a difficult question to answer. In fact, after putting the question to a good number of respected security academics, it became obvious that nobody knows the answer; the industry doesn't even know how to define it.
Then one day, I realized you could calculate the "attackability" of a product, or its exposure to attack, but not necessarily its vulnerability. That is, you can count how many features there are to attack, not necessarily to exploit. If this sounds confusing, think of it this way: banks will always be subject to attack, but that doesn't mean all banks are flawed in such a way that they can be compromised.
Here's how you do it. First, all products are attacked in certain ways: most products are attacked through open ports, Windows has its services attacked, and weak ACLs are an attack point too. Many Linux and Unix operating systems are attacked through setuid root applications and symbolic links. So, the first step is to look over old vulnerabilities and determine the root attack vectors. For Windows, we came up with this list:
- Open sockets
- Open RPC endpoints
- Open named pipes
- Services
- Services running by default
- Services running as SYSTEM
- Active Web handlers (ASP files, HTR files, and so on)
- Active ISAPI filters
- Dynamic Web pages (ASP and such)
- Executable virtual directories
- Enabled accounts
- Enabled accounts in admin group
- Null sessions to pipes and shares
- Guest account enabled
- Weak ACLs in the file system
- Weak ACLs in the registry
- Weak ACLs on shares
Think of this as the list of features an attacker will attempt to compromise. Each attack point has a weight or a bias. For example, a service that runs by default under the SYSTEM account and opens a socket to the world is a prime candidate for attack. It may be very secure code, but the fact that it is running by default, and is running with such elevated privileges, makes it high on the list of points for an attacker to probe. And if the code is vulnerable, the resulting damage could be disastrous. However, a weak ACL in the registry, while still an issue, is nowhere near as bad as the prior scenario, and this is reflected in the bias: we used 0.1 for a low threat up to 1.0 for a severe threat.
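To make the arithmetic concrete, here is a minimal sketch of how such a score could be computed. The attack points, counts, and bias values below are purely illustrative; they are not the actual weights we used.

#include <iostream>
#include <string>
#include <vector>

// One row of the attack-surface worksheet: how many instances of an
// attack point exist, and how severe that class of attack point is.
struct AttackPoint {
    std::string name;
    int         count;   // for example, the number of services running as SYSTEM
    double      bias;    // 0.1 (low threat) through 1.0 (severe threat)
};

// Relative attack surface = sum of (count * bias) over all attack points.
double RelativeAttackSurface(const std::vector<AttackPoint>& points) {
    double total = 0.0;
    for (const AttackPoint& point : points)
        total += point.count * point.bias;
    return total;
}

int main() {
    // Hypothetical figures for a fictitious server build.
    std::vector<AttackPoint> build = {
        { "Open sockets",               12, 1.0 },
        { "Services running as SYSTEM",  8, 0.9 },
        { "Executable virtual dirs",     2, 0.6 },
        { "Weak ACLs in the registry",   5, 0.1 },
    };
    std::cout << "Relative attack surface: "
              << RelativeAttackSurface(build) << std::endl;
    return 0;
}

The point of the sketch is simply that the score is a weighted count of exposed features, not a measure of how buggy the code behind them is.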
Note that it is inaccurate to compare these figures across different products; the numbers themselves mean nothing until you compare similar products, such as different versions of the same one.
Also, think of relative attack surface as Cyclomatic Complexity for security. You can learn more about Cyclomatic Complexity at https://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html.
On analyzing various versions of Windows, we end up with the following relative attack surface figures:
Version of Microsoft Windows | Relative attack surface |
---|---|
Windows NT® 4 SP6a | 316.9 |
Windows NT 4 SP6a with Option Pack | 597.9 |
Windows 2000 | 348.9 |
Windows Server 2003 | 173.9 |
Windows Server 2003 with Internet Information Services (IIS) 6.0 | 188.8 |
Windows XP | 189.5 |
Windows XP with Internet Connection Firewall enabled | 124.4 |
Figure 1. Relative attack surface for various versions of Microsoft Windows
The most telling figures are between Windows 2000 and Windows Server 2003, and between Windows Server 2003 and Windows Server 2003 with IIS 6.0 installed. As you can see, the attack surface of a default Windows Server 2003 computer is approximately half that of a Windows 2000 computer. This speaks loudly to the security work performed by the Windows product group. The other interesting figure is between Windows Server 2003 and Windows Server 2003 with IIS 6.0 installed. By default, IIS 6.0 is not installed, and even when it is installed, it creates only a marginal increase in the attack profile of the server. This is simply a result of the tireless effort of the IIS 6.0 team as they re-architected IIS into a more robust and secure Web application server.
There are a couple of caveats to this attack surface calculation. First, it does not measure code quality; rather, it determines the relative "attackability" of the software. A high figure does not mean that there are security flaws in the code.
Second, it is possible to create a product that manipulates these figures. For example, you could multiplex functionality behind a single analysis point, which would skew the figures. That said, I would hope the software engineering community is beyond such silly games.
The core message of this article is simple. Do whatever it takes to reduce the attack surface of your product because you, and most importantly your clients, have a big problem on your collective hands if a security flaw is found in the code and the feature is installed and functioning by default.
Spot the Security Flaw
A huge number of people found the flaw in my previous article. The return value from the impersonation function is not checked, so if the function fails, the thread is still running in the process context (for example, SYSTEM) rather than in the context of the user. As SYSTEM, the thread can perform tasks the user cannot. I'm going to talk more about this issue in a future Code Secure column.
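The fix is to treat a failed impersonation as a hard error. Here is a minimal sketch; the function name and the assumption that a valid client token has already been obtained are mine, not code from the previous article.

#include <windows.h>

// Do work on behalf of a client, but only if impersonation succeeds.
// hToken is assumed to be a valid client token obtained elsewhere.
bool DoWorkAsClient(HANDLE hToken) {
    if (!ImpersonateLoggedOnUser(hToken)) {
        // Impersonation failed: this thread is still running as the
        // process identity (for example, SYSTEM), so do not touch
        // any client resources. Fail the request instead.
        return false;
    }

    // ... access resources as the client here ...

    // Drop back to the process identity when finished.
    RevertToSelf();
    return true;
}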
Ok, here's a good one that caught me a while back.
void funkyfunc(char c) {
    char *buff = new char(250);
    if (buff)
        memset(buff, c, 250);
    delete [] buff;
}
Michael Howard is a Senior Security Program Manager in the Secure Windows Initiative group at Microsoft and is the coauthor of Writing Secure Code and the main author of Designing Secure Web-based Applications for Windows 2000. His main focus in life is making sure people design, build, test, and document nothing short of a secure system. His favorite line is "One person's feature is another's exploit."