David A. Wheeler's Blog

Sat, 23 Sep 2017

Who decides when you need to update vulnerable software? (Equifax)

I have a trick question: Who decides when you need to update vulnerable software (presuming that if it’s unpatched it might lead to bad consequences)? In a company, is that the information technology (IT) department? The chief information officer (CIO)? A policy? The user of the computer? At home, is it the user of the computer? Perhaps the family’s “tech support” person?

Remember, it’s a trick question. What’s the answer? The answer is…

The attacker decides.

The attacker is the person who decides when you get attacked, and how. Not the computer user. Not a document. Not support. Not an executive. The attacker decides. And that means the attacker decides when you need to update your vulnerable software. If that statement makes you uncomfortable, then you need to change your thinking. This is reality.

So let’s look at Equifax, and see what we can learn from it.

Let’s start with the first revelation in 2017: A security vulnerability in Apache Struts (a widely-used software component) was fixed in March 2017, but Equifax failed to update it for two whole months, leading to the loss of sensitive information on about 143 million US consumers. The update was available for free, for two months, and it was well-known that attackers were exploiting this vulnerability in other organizations. Can we excuse Equifax? Is it “too hard” to update vulnerable software (aka “patch”) in a timely way? Is it acceptable that organizations fail to update vulnerable components when those vulnerabilities allow unauthorized access to lots of sensitive high-value data?

Nonsense. Equifax may choose to fail to update known vulnerable components. Clearly it did so! But Equifax needed to update rapidly, because the need to update was decided by the attackers, not by Equifax. In fact, two months is an absurdly long time, because again, the timeframe is determined by the attacker.

Now it’s true that if you don’t plan to rapidly update, it’s hard to update. Too bad. Nobody cares. Vulnerabilities are routinely found in software components, and have been for decades. Since it is 100% predictable that there will be vulnerabilities found in the software you use (including third-party software components you reuse), you need to plan ahead. I don’t know when it will rain, but I know it will, so I plan ahead by paying for a roof and buying umbrellas. When something is certain to happen, you need to plan for it. For example, make sure you rapidly learn about vulnerabilities in third party software you depend on, and that you have a process in place (with tools and automated testing) so that you can update and ship in minutes, not months. Days, not decades.
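To make that concrete, here is a minimal sketch (in Python) of the kind of automated check that can tell you quickly when a component you depend on has a published advisory. The feed URL, JSON fields, and package names below are hypothetical placeholders; in practice you would point something like this at your ecosystem’s advisory database, or simply use an existing dependency-scanning tool.

```python
# Minimal sketch: alert when a pinned dependency appears in an advisory feed.
# The feed URL and its JSON format are hypothetical placeholders; real
# projects would use their ecosystem's advisory database or a scanning tool.
import json
import urllib.request

PINNED = {"examplelib": "1.2.3", "otherlib": "4.5.6"}   # name -> version in use
FEED_URL = "https://advisories.example.org/feed.json"   # hypothetical feed

def fetch_advisories(url):
    """Download the advisory feed and return it as a list of dicts."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def affected(advisory, pinned):
    """Return True if an advisory names a dependency we actually use."""
    return advisory.get("package") in pinned  # a real check would compare version ranges too

def main():
    alerts = [a for a in fetch_advisories(FEED_URL) if affected(a, PINNED)]
    for adv in alerts:
        print(f"ALERT: {adv['package']} has advisory {adv.get('id', '?')}")
    # Exit nonzero so a cron job or CI pipeline can page someone immediately.
    raise SystemExit(1 if alerts else 0)

if __name__ == "__main__":
    main()
```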

The Apache Struts Statement on Equifax Security Breach has some great points about how to properly handle reused software components (no matter where they come from). The Apache Struts team notes that you should (1) understand the software you use, (2) establish a rapid update process, (3) remember that all complex software has flaws, (4) establish security layers, and (5) establish monitoring. Their statement has more details; in particular, for #2 they say, “establish a process to quickly roll out a security fix release… [when reused software] needs to be updated for security reasons. Best is to think in terms of hours or a few days, not weeks or months.”

Many militaries refer to the “OODA loop”, which is the decision cycle of observe, orient, decide, and act. The idea was developed by military strategist and United States Air Force Colonel John Boyd. Boyd noted that, “In order to win, we should operate at a faster tempo or rhythm than our adversaries…”. Of course, if you want to lose, then you simply need to operate more slowly than your adversary. You need to get comfortable with this adversarial terminology, because if you’re running a computer system today, you are in an adversarial situation, and the attackers are your adversaries.

In short, you must update your software when vulnerabilities are found, before attackers can exploit them (if they can be exploited). If you can’t do that, then you need to change how you manage your software so you can. Again, the attacker decides how fast you need to react.

We’re only beginning to learn about the Equifax disaster of 2017, but it’s clear that Equifax “security” is just one failure after another. The more we learn, the worse it gets. Here is some of the information we have so far. Equifax used the ridiculous pair username “admin”, password “admin” for a database with personal employee information. Security Now! #628 showed that Equifax recommended using Netscape Navigator in their website discussion on security, a ridiculously obsolete suggestion (Netscape shut down in 2003, 14 years ago). Equifax provided customers with PINs that were simply the date and time, making the PINs predictable and thus insecure. Equifax set up a “checker” site which makes false statements: “In what is an unconscionable move by the credit report company, the checker site, hosted by Equifax product TrustID, seems to be telling people at random they may have been affected by the data breach… It’s clear Equifax’s goal isn’t to protect the consumer or bring them vital information. It’s to get you to sign up for its revenue-generating product TrustID… [and] TrustID’s Terms of Service [say] that anyone signing up for the product is barred from suing the company after.” Equifax’s credit report monitoring site was found to be vulnerable to hacking (specifically, an XSS vulnerability that was quickly found by others). Equifax failed to use its own domain name for all its sites (as is standard), making it easy for others to spoof them. Indeed, NPR reported that “After Massive Data Breach, Equifax Directed Customers To Fake Site”. There are now suggestions that there were break-ins even earlier which Equifax never detected. In short: The more we learn, the worse it gets.

Most obviously, Equifax failed to responsibly update a known vulnerable component in a timely way. Rapid updating matters less when no valuable information is at stake, but in this case extremely sensitive personal data was involved. This was especially sensitive data, Equifax was using a component version with a publicly-known vulnerability, and it was known that attackers were exploiting that vulnerability. It was completely foreseeable that attackers would use this vulnerable component to extract sensitive data. In short, Equifax had a duty of care that they failed to perform. Sometimes attackers perform an unprecedented kind of sneaky attack and get around a host of prudent defenses; that would be different. But there is no excuse for failing to promptly respond when you know that a component is vulnerable. That is negligence.

But how can you quickly update software components? Does this require magic? Not at all, it just requires accepting that this will happen and so you must be ready. This is not an unpredictable event; I may not know exactly when it will happen, but I can be certain that it will happen. Once you accept that it will happen, you can easily get ready for it. There are tools that can help you monitor when your components publicly report a vulnerability or security update, so that you quickly find out when you have a problem. Package managers let you rapidly download, review, and update a component. You need to have an automated checking system that uses a variety of static tools, automated test suites, and other dynamic tools so that you can be confident that the system (with updated component) works correctly. You need to be confident that you can ship to production immediately with acceptable risk after you’ve updated your component and run your automated checking system. If you’re not confident, then your checking system is unacceptable and needs to be fixed. You also need to quickly ship that to production (and this must be automated), because again, you have to address vulnerabilities faster than the attacker.
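As a rough illustration of what “update, check, and ship in minutes” can look like once it is automated, here is a hedged Python sketch of a pipeline gate. Every command and script name in it (the package-manager invocation, run_static_analysis.sh, run_test_suite.sh, deploy_to_production.sh) is a placeholder for whatever your project actually uses; the point is the shape of the process, not the specific tools.

```python
# Rough sketch of an automated "update, verify, ship" gate.
# Every command name below is a placeholder for your project's real tooling.
import subprocess
import sys

STEPS = [
    ["package-manager", "update", "vulnerable-component"],  # pull the fixed version
    ["./run_static_analysis.sh"],                           # static tools must pass
    ["./run_test_suite.sh"],                                # automated tests must pass
    ["./deploy_to_production.sh"],                          # ship only if everything passed
]

def run(step):
    """Run one pipeline step; return True only if it succeeded."""
    print("Running:", " ".join(step))
    return subprocess.run(step).returncode == 0

def main():
    for step in STEPS:
        if not run(step):
            print("FAILED:", " ".join(step), "- not shipping.", file=sys.stderr)
            sys.exit(1)
    print("Updated component shipped to production.")

if __name__ == "__main__":
    main()
```

If any step fails, the gate stops and nothing ships; if everything passes, the fixed component goes out immediately, which is what makes an hours-long response time realistic.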

Of course, your risks go down much further if you think about security the whole time you’re developing software. For example, you can design your system so that a defect is (1) less likely to lead to a system vulnerability or (2) likely to have less of an impact if it does. When you do that, a component vulnerability will often not lead to a system vulnerability at all. A single vulnerability in a front-end component should not have allowed such a disastrous outcome in the first place, given how sensitive this data was, so the Equifax design also appears to have been negligent. They also failed to detect the problem for a long time; you should be monitoring high-value systems to help reduce the impact of a vulnerability. The failure to notice this is also hard to justify. Developing secure software is quite possible, and you don’t need to break the bank to do it. It’s impossible in the real world to be perfect, but it’s very possible to be adequately secure.

Sadly, very few software developers know how to develop secure software. So I’ve created a video that’s on YouTube that should help: “How to Develop Secure Applications: The BadgeApp Example” (by David A. Wheeler). This walks through a real-world program (BadgeApp) as an example, to show approaches for developing far more secure software. If you’re involved in software development in any way, I encourage you to take a look at that video. Your software will almost certainly look different, but if you think about security throughout development, the results will almost certainly be much better. Perfection is impossible, but you can manage your risks, that is, reduce the probability and impact of attacks. There are a wide variety of countermeasures that can often prevent attacks, and they work well when combined with monitoring and response mechanisms for the relatively few attacks that get through.

The contrast between Equifax and BadgeApp is stark. Full disclosure: I am the technical lead of the BadgeApp project… but it is clear we did a better job than Equifax. Earlier this week a vulnerability was announced in one of the components (nokogiri) that is used by the BadgeApp. This vulnerability was announced on ruby-advisory-db, a database of vulnerable Ruby gems (software library components) used to report to users about component vulnerabilities. Within two hours of that announcement the BadgeApp project had downloaded the security update, run the BadgeApp application through a variety of tools and its automated test suite (with 100% statement coverage) to make sure everything was okay, and pushed the fixed version to the production site. The BadgeApp application is a simpler program, sure, but it also manages much less sensitive data than Equifax’s systems. We should expect Equifax to do at least as well, because they handle much more sensitive data. Instead, Equifax failed to update reused components with known vulnerabilities in a timely fashion.

Remember, the attacker decides.

The attacker decides how fast you need to react, what you need to defend against, and what you need to counter. More generally, the attacker decides how much you need to do to counter attacks. You do not get to decide what the attacker will choose to do. But you can plan ahead to make your software secure.

path: /security | Current Weblog | permanent link to this entry

Tue, 25 Oct 2016

Creating Laws for Computer Security

In 2016 the website KrebsonSecurity was taken down by a large distributed denial-of-service (DDoS) attack. More recently, many large sites became inaccessible due to a massive DDoS attack (see, e.g., “Hackers Used New Weapons to Disrupt Major Websites Across U.S.” by Nicole Perlroth, Oct. 21, 2016, NY Times).

Sadly, the “Internet of Things” is really the “Internet of painfully insecure things”. This is fundamentally an externalities problem (the buyers and sellers are not actually bearing the full cost of the exchange), and in these cases mechanisms like law and regulation are often used.

So, what laws or regulations should be created to improve computer security? Are there any? Obviously there are risks to creating laws and regulations. These need to be targeted at countering widespread problems, without interfering with experimentation, without hindering free expression or the development of open source software, and so on. It’s easy to create bad laws and regulations - but I believe it is possible to create good laws and regulations that will help.

My article Creating Laws for Computer Security lists some potential items that could be turned into laws that I think could help computer security. No doubt some could be improved, and there are probably things I’ve missed. But I think it’s important that people start discussing how to create narrowly-tailored laws that counter the more serious problems without causing too many negative side-effects. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Mon, 01 Feb 2016

Address Sanitizer on an entire Linux distribution!

Big news in computer security: Hanno Böck has recently managed to get Address Sanitizer running on an entire Linux distribution (Gentoo) as an experimental edition. For those who don’t know, Address Sanitizer is an amazing compile-time option that detects a huge range of memory errors in memory-unsafe languages (in particular C and C++). These kinds of errors often lead to disastrous security vulnerabilities, such as Heartbleed.

This kind of distribution option is absolutely not for everyone. Address Sanitizer on average increases processing time by about 73%, and memory usage by 340%. What’s more, this work is currently very experimental, and you have to disable some other security mechanisms to make it work. That said, this effort has already borne a lot of valuable fruit. Turning on these mechanisms across an entire Linux distribution has revealed a large number of memory errors that are getting fixed. I can easily imagine this being directly useful in the future, too. Computers are very fast and have lots of memory, even when compared to computers of just a few years earlier. There are definitely situations where it’s okay to effectively halve performance and reduce useful memory, and in exchange, significantly increase the system’s resistance to novel attack. My congrats!!

path: /security | Current Weblog | permanent link to this entry

Mon, 23 Nov 2015

Ransomware coming to medical devices?

Forrester Research has an interesting cybersecurity prediction for 2016: We’ll see ransomware for a medical device or wearable.

This is, unfortunately, plausible. I don’t know if it will happen in 2016, but it’s pretty reasonable. Indeed, I can see attackers making extortion threats even if we can’t be sure the ransomware is actually installed.

After all, Dick Cheney had his pacemaker’s Wi-Fi disabled because of this concern (see also here). People have already noted that terrorists might use this, since medical devices are often poorly secured. The additional observation is that this may be a better way to (criminally) make money. We already have ransomware, including organizations that are getting better at extorting with it. Traditional ransomware is foiled by good backups; in this case backups won’t help, and victims will (understandably) be willing to pay much, much more. And I think that medical devices are actually a softer target.

With luck, this won’t come true in 2016. The question is, is that because it doesn’t show up until 2017 or 2018… or because the first ones were in 2015? DHS is funding work in this area, and that’s good… but while research can help, the real problem is that we have too many software developers who do not have a clue how to develop secure software… and too many people (software developers or not) who think that’s acceptable.

In short, we still have way too many people building safety-critical devices who don’t understand that security is necessary for safety. I hope that this changes - and quickly.

path: /security | Current Weblog | permanent link to this entry

Tue, 07 Apr 2015

Heartbleed found with american fuzzy lop (afl) and Address Sanitizer (ASan)

Big news in security vulnerability research: Hanno Böck found Heartbleed using american fuzzy lop (afl) and Address Sanitizer (ASan) - and in only 6 hours of execution time.

This means that software developers should seriously consider using a more-advanced fuzzer, such as american fuzzy lop (afl), along with Address Sanitizer (ASan) (an option in both the LLVM/clang and gcc compilers), whenever you write in C, C++, Objective-C, or in other circumstances that are not memory-safe. In particular, seriously consider doing this if your program is exposed to the internet or processes data sent via the internet (practically all programs meet these criteria nowadays). I had speculated that this combination could have found Heartbleed in my essay on Heartbleed, but this confirmation is really important. Here I will summarize what’s going on (using the capitalization conventions of the various tool developers).

The american fuzzy lop (afl) program created by Michal Zalewski is a surprisingly effective fuzzer. A fuzzer is simply a tool that sends lots of semi-random inputs into a program to detect gross problems (typically a crash). Fuzzers do not know what the exact correct answers are, but because they do not, they can try out far more inputs than systems that do. But afl is smarter than most fuzzers; instead of just sending random inputs, afl tracks which branches are taken in a program. Even more interestingly, afl tracks how often different branches are taken when running a program (that is especially unusual). Then, when afl creates new inputs, it prefers to create them based on inputs that have produced different counts on at least some branches. This evolutionary approach, using both branch coverage and the number of times a branch is taken, is remarkably effective. Simple dumb random fuzzers can only perform relatively shallow tests; getting any depth has required more complex approaches such as detailed descriptions of the required format (the approach used by so-called “smart” fuzzers) and/or white-box constraint solving (such as fuzzgrind or Microsoft’s SAGE). It’s not at all clear that afl eliminates the value of these other fuzzing approaches; I can see combining them. However, afl is clearly getting far better results than simple dumb fuzzers that just send random values. Indeed, the afl of today is getting remarkably deep coverage for a fuzzer. For example, the post Pulling JPEGs out of thin air shows how afl was able to start with only the text “hello” (a hideously bad starting point) and still automatically figure out how to create valid JPEG files.
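To illustrate the core idea (this is a toy, not afl’s actual algorithm or code), here is a small coverage-guided fuzzing loop in Python: it keeps any mutated input that produces a new pattern of branch hit counts, and it mutates from that growing corpus. The target function here is just a stand-in for an instrumented program.

```python
# Toy coverage-guided fuzzer: keep inputs that exercise new branch behavior.
# This only illustrates the idea behind afl; it is not afl's real algorithm.
import random

def target(data):
    """Stand-in for an instrumented program: returns a map of branch hit counts."""
    counts = {}
    for i, byte in enumerate(data):
        branch = ("gt127", i % 4) if byte > 127 else ("le127", i % 4)
        counts[branch] = counts.get(branch, 0) + 1
    if data.startswith(b"FUZZ"):            # a "deep" branch a dumb fuzzer rarely reaches
        counts[("magic", 0)] = 1
    return counts

def signature(counts):
    """Bucket hit counts so 'same branches, different counts' still looks new."""
    return frozenset((branch, min(count, 8)) for branch, count in counts.items())

def mutate(data):
    """Flip a bit, insert a byte, or delete a byte at a random position."""
    data = bytearray(data or b"\x00")
    choice, pos = random.randrange(3), random.randrange(len(data))
    if choice == 0:
        data[pos] ^= 1 << random.randrange(8)
    elif choice == 1:
        data.insert(pos, random.randrange(256))
    elif len(data) > 1:
        del data[pos]
    return bytes(data)

corpus = [b"hello"]                          # afl can start from almost nothing
seen = {signature(target(corpus[0]))}
for _ in range(20000):
    candidate = mutate(random.choice(corpus))
    sig = signature(target(candidate))
    if sig not in seen:                      # new branch behavior: keep this input
        seen.add(sig)
        corpus.append(candidate)
print(f"Corpus grew to {len(corpus)} inputs covering {len(seen)} behaviors.")
```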

However, while afl is really good at creating inputs, it can only detect problems if they lead to a crash; vulnerabilities like Heartbleed do not normally cause a crash. That’s where Address Sanitizer (ASan) comes in. Address Sanitizer turns many memory access errors, including nearly all out-of-bounds accesses, double-frees, and use-after-frees, into a crash. ASan was originally created by Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. ASan is amazing all by itself, and the combination is even better. The fuzzer afl is good at creating inputs, and ASan is good at turning problems into something that afl can detect. Both are available at no cost as Free/libre/open source software (FLOSS), so anyone can try them out, see how they work, and even make improvements.
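As a greatly simplified illustration of the redzone idea behind ASan (the real tool instruments compiled code and tracks addressability in shadow memory, as the AddressSanitizer paper mentioned below explains), here is a toy Python model: each allocation is surrounded by “poisoned” regions, and any access that touches poisoned memory aborts immediately instead of silently reading adjacent data. That immediate abort is exactly the kind of crash that gives afl something to detect.

```python
# Toy model of Address Sanitizer's redzone idea: surround each allocation
# with poisoned bytes and abort on any access that touches them.
# Real ASan instruments compiled code and uses shadow memory; this is only
# an illustration of why out-of-bounds accesses become immediate crashes.
REDZONE = 8          # poisoned bytes placed before and after each allocation

class ToyHeap:
    def __init__(self, size=1024):
        self.memory = bytearray(size)
        self.poisoned = [True] * size    # everything starts unaddressable
        self.next_free = 0

    def malloc(self, size):
        """Reserve size bytes, leaving poisoned redzones on both sides."""
        start = self.next_free + REDZONE
        for addr in range(start, start + size):
            self.poisoned[addr] = False
        self.next_free = start + size + REDZONE
        return start

    def free(self, start, size):
        """Poison freed memory so use-after-free is detected, not ignored."""
        for addr in range(start, start + size):
            self.poisoned[addr] = True

    def load(self, addr):
        """Read one byte, crashing immediately if the address is poisoned."""
        if self.poisoned[addr]:
            raise RuntimeError(f"invalid memory access at address {addr}")
        return self.memory[addr]

heap = ToyHeap()
buf = heap.malloc(16)
heap.load(buf + 15)   # last valid byte: fine
heap.load(buf + 16)   # one byte past the end: hits the redzone and "crashes"
```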

Normally afl can only fuzz file inputs, but Heartbleed could only be triggered by network access. This is no big deal; Hanno describes in his article how to wrap up network programs so they can be fuzzed by file fuzzers.

Sometimes afl and ASan do not work well together today on 64-bit systems. This has to do with some technical limitations involving memory use; on 64-bit systems ASan reserves (but does not use) a lot of memory. This is not necessarily a killer; in many cases you can use them together anyway (as Hanno did). More importantly, this problem is about to go away. Recently I co-authored (along with Sam Hakim) a tool we call afl-limit-memory; it uses Linux cgroups to eliminate the problem so that you can always combine afl and ASan (at least on Linux). We have already submitted the code to the afl project leader, and we hope it will become part of afl soon. So this is already a disappearing problem.

There are lots of interesting related resources. If you want to learn about fuzzing more generally, some books you might want to read are Fuzzing: Brute Force Vulnerability Discovery by Sutton, Greene, and Amini and Fuzzing for Software Security Testing and Quality Assurance (Artech House Information Security and Privacy) by Takanen, DeMott, and Miller. My class materials for secure software design and programming, #9 (analysis tools), also cover fuzzing (and are freely available). The Fuzzing Project led by Hanno is an effort to encourage the use of fuzzing to improve the state of free software security, and it includes some tutorials on how to do it. The paper AddressSanitizer: A Fast Address Sanity Checker is an excellent explanation of how ASan works. My essay How to Prevent the next Heartbleed discusses many different approaches that would, or would not, have detected Heartbleed.

I do not think that fuzzers (or any dynamic technique) completely replace static analysis approaches such as source code weakness analyzers. Various tools, including dynamic tools like fuzzers and static tools like source code weakness analyzers, are valuable complements for finding vulnerabilities before the attackers do.

path: /security | Current Weblog | permanent link to this entry

Sat, 04 Apr 2015

Security presentation updates

I’ve updated my presentations on how to design and implement secure software. In particular, I’ve added much about analysis tools and formal methods. There is a lot going on in those fields, and no matter what I do I am only scratching the surface. On the other hand, if you have not been following these closely, then there’s a lot you probably don’t know about. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Sat, 14 Feb 2015

Learning from Disaster

Learning from Disaster is a collection of essays that examines computer security disasters, and what we can learn from those disasters. This includes Heartbleed, Shellshock, POODLE, the Apple goto fail, and Sony Pictures. If you’re interested in computer security I think you’ll find this collection interesting.

So: please enjoy Learning from Disaster.

path: /security | Current Weblog | permanent link to this entry

Tue, 06 Jan 2015

Cloud security

There seems to be a lot of confusion about security fundamentals of cloud computing (and other utility-based approaches). For example, many people erroneously think hardware virtualization is required for clouds (it is not), or that hardware virtualization and containerization are the same (they are not).

My essay Cloud Security: Virtualization, Containers, and Related Issues is my effort to counteract some of this confusion. It has a quick introduction to clouds, a contrast of various security isolation mechanisms used to implement them, and a discussion of some related issues.

So please check out (and enjoy) Cloud Security: Virtualization, Containers, and Related Issues.

path: /security | Current Weblog | permanent link to this entry

Wed, 31 Dec 2014

I hope we learn from the computer security problems of 2014

As 2014 draws to a close, I hope anyone involved with computers will resolve to learn from the legion of security problems of 2014.

We had way too many serious vulnerabilities in widely-used software revealed in 2014. In each case, there are lessons that people could learn from them. Please take a look at the lessons that can be learned from Heartbleed, Shellshock, the POODLE attack on SSLv3, and the Apple goto fail vulnerability. More generally, a lot of information is available on how to develop secure software - even though most software developers still do not know how to develop secure software. Similarly, there are a host of lessons that organizations could learn from Sony Pictures.

Will people actually learn anything? Georg Wilhelm Friedrich Hegel reportedly said that, “We learn from history that we do not learn from history”.

Yet I think there are reasons to hope. There are a lot of efforts to improve the security of Free/Libre/Open Source Software (FLOSS) programs that are important yet inadequately secure. The Linux Foundation (LF) Core Infrastructure Initiative (CII) was established to “fund open source projects that are in the critical path for core computing functions” to improve their security. The most recent European Union (EU) budget includes €1 million for auditing free-software programs to identify and fix vulnerabilities. The US DHS HOST project is also working to improve security using open source software (OSS). The Google Application Security Patch Reward Program is also working to improve security. And to be fair, these problems were found by people who were examining the software or protocols so that the problems could be fixed - exactly what you want to happen. At an organizational level, I think Sony was unusually lax in its security posture. I am already seeing evidence that other organizations have suddenly become much more serious about security, now that they see what has been done to Sony Pictures. In short, they are finally starting to see that security problems are not theoretical; they are real.

Here’s hoping that 2015 will be known as the year where people took computer security more seriously, and as a result, software and our computer systems became much harder to attack. If that happens, that would make 2015 an awesome year.

path: /security | Current Weblog | permanent link to this entry

Sat, 20 Dec 2014

Sony Pictures, Lax Security, and Passwords

Sony Pictures, Lax Security, and Passwords is a new essay about the devastating attack on Sony Pictures. We now have new information about how Sony Pictures was attacked; from that, and public information about Sony Pictures, we can see why the attack was so devastating. Even more importantly, we can learn from it. So please, take a look at Sony Pictures, Lax Security, and Passwords.

path: /security | Current Weblog | permanent link to this entry

Sun, 23 Nov 2014

Lessons learned from Apple goto fail

The year 2014 has not been a good year for the SSL/TLS protocol. SSL/TLS is the fundamental protocol for securing web applications. Yet every major implementation has had at least one disastrous vulnerability, including Apple (goto fail), GnuTLS, OpenSSL (Heartbleed), and Microsoft. Separately, a nasty attack has been found in the underlying SSLv3 protocol (POODLE). But instead of just noting those depressing statistics, we need to figure out why those vulnerabilities happened, and change how we develop software to prevent them from happening again.

To help, I just released The Apple goto fail vulnerability: lessons learned, a paper that, like my previous papers, focuses on how to counter these kinds of vulnerabilities in the future. In many ways the Apple goto fail vulnerability was much more embarrassing than Heartbleed; the goto fail vulnerability was easy to detect, and it was in a key part of the code’s functionality. This vulnerability was reported back in February 2014, but there does not seem to be a single place where you can find a more complete list of approaches to counter it. I also note some information that doesn’t seem to be available elsewhere.

So if you develop software - or manage people who do - take a look at The Apple goto fail vulnerability: lessons learned.

path: /security | Current Weblog | permanent link to this entry

Tue, 14 Oct 2014

POODLE attack against SSLv3

There is a new POODLE attack against SSLv3. See my page for more info.

path: /security | Current Weblog | permanent link to this entry

Sun, 05 Oct 2014

Shellshock

I have posted a new paper about Shellshock. In particular, it includes a detailed timeline about shellshock, which counters a number of myths and misunderstandings. It also shows a correct way to detect if your system is vulnerable to shellshock (many postings get it wrong and only detect part of the problem).

I also briefly discuss how to detect or prevent future shellshock-like attacks. At the moment this list is short, because these kinds of vulnerabilities are known to be difficult to detect ahead of time. Still, I think it is worth trying to do this. My goal is to eventually end up with something similar to the list of countermeasures for Heartbleed-like attacks that I developed earlier.

path: /security | Current Weblog | permanent link to this entry

Tue, 19 Aug 2014

Software SOAR released!!

The Software SOAR (which I co-authored) has finally been released to the public! This document - whose full name is State-of-the-Art Resources (SOAR) for Software Vulnerability Detection, Test, and Evaluation (Institute for Defense Analyses Paper P-5061, July 2014) - is now available to everyone. It defines and describes an overall process for selecting and using appropriate analysis tools and techniques for evaluating software for software (security) assurance. In particular, it identifies types of tools and techniques available for evaluating software, as well as the technical objectives those tools and techniques can meet. A key thing that it does is make clear that in most cases you need to use a variety of different tools if you are trying to evaluate software (e.g., to find vulnerabilities).

The easy way to get the document is via the Program Protection and System Security Engineering web page; scroll to the bottom to find it (it is co-authored by David A. Wheeler and Rama S. Moorthy). You can jump directly to the Main report of the software SOAR and Appendix E (Software State-of-the-Art Resources (SOAR) Matrix). You can also get the software SOAR report via IDA.

I don’t normally mention things I’ve done at work, but this is publicly available, some people have been waiting for it, and I’ve found that some people have had trouble finding it. For example, the article “Pentagon rates software assurance tools” by David Perera (Politico, 2014-08-19) is about this paper, but it does not tell people how to actually get it. I’m hoping that this announcement will give people a hand.

path: /security | Current Weblog | permanent link to this entry

Sun, 13 Jul 2014

Flawfinder version 1.28 released!

I’ve released yet another new version of flawfinder - now it’s version 1.28. Flawfinder is a simple program that examines C/C++ source code and reports on likely security flaws in the program, ranked by risk level.

This new version has some new capabilities. Common Weakness Enumeration (CWE) references are now included in most hits (this makes it easier to use in conjunction with other tools, and it also makes it easier to find general information about a weakness). The new version of flawfinder also has a new option to only produce reports that match a regular expression (e.g., you can report only hits with specific CWE values). This version also adds support for the git diff format.

This new version also has a number of bug fixes. For example, it handles files not ending in a newline, and it more gracefully handles unbalanced double-quotes in sprintf. A bug in reporting the time executed has also been fixed.

For more information, or a copy, just go to my original flawfinder home page or the flawfinder project page on SourceForge.net. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Tue, 10 Jun 2014

Interview on Application Security

A new interview of me is available: David A. Wheeler on the Current State of Application Security (by the Trusted Software Alliance) (alternate link). In this interview I discuss a variety of topics with Mark Miller, including the need for education in developing secure software, the need to consider security throughout the lifecycle, and the impact of componentization. I warn that many people do not include security (including software assurance) when they ask for quality; while I agree in principle that security is generally part of quality, in practice you have to specifically ask for security or you won’t get it.

This interview is part of their 50 in 50 interviews series, along with Joe Jarzombek (Department of Homeland Security), Steve Lipner (Microsoft), Bruce Schneier, Jeff Williams (Aspect Security and OWASP), and many others. It was an honor and pleasure to participate, and I hope you enjoy the results.

path: /security | Current Weblog | permanent link to this entry

Sat, 03 May 2014

How to Prevent the next Heartbleed

My new article How to Prevent the next Heartbleed describes why the Heartbleed vulnerability in OpenSSL was so hard to find… and what could be done to prevent something like it next time.

path: /security | Current Weblog | permanent link to this entry

Sat, 16 Nov 2013

Vulnerability bidding wars and vulnerability economics

I worry that the economics of software vulnerability reporting is seriously increasing the risks to society. The problem is the rising bidding wars for vulnerability information, leading to a rapidly-growing number of vulnerabilities known only to attackers. These kinds of vulnerabilities, when exploited, are sometimes called “zero-days” because users and suppliers had zero days of warning. I suspect we should create laws limiting the sale of vulnerability information, similar to the limits we place on organ donation, to change the economics of vulnerability reporting. To see why, let me go over some background first.

A big part of the insecure software problem today is that relatively few of today’s software developers know how to develop software that resists attack (e.g., via the Internet). Many schools don’t teach it at all. I think that’s ridiculous; you’d think people would have heard about the Internet by now. I do have some hope that this will get better. I teach a graduate course on how to develop secure software at George Mason University (GMU), and attendance has increased over time. But today, most software developers do not know how to create secure software.

In contrast, there is an increasing bidding war for vulnerability information by organizations who intend to exploit those vulnerabilities. This incentivizes people to search for vulnerabilities, but not report them to the suppliers (who could fix them) and not alert the public. As Bruce Schneier reports in “The Vulnerabilities Market and the Future of Security” (June 1, 2012), “This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.” Forbes ran an article about this in 2012, Meet The Hackers Who Sell Spies The Tools To Crack Your PC (And Get Paid Six-Figure Fees). The Forbes article describes what happened when French security firm Vupen broke the security of the Chrome web browser. Vupen would not tell Google how they broke in, because the $60,000 award from Google was not enough. Chaouki Bekrar, Vupen’s chief executive, said that they “wouldn’t share this [information] with Google for even $1 million… We want to keep this for our customers.” These customers do not plan to fix security bugs; they purchase exploits or techniques with the “explicit intention of invading or disrupting”. Vupen even “hawks each trick to multiple government agencies, a business model that often plays its customers against one another as they try to keep up in an espionage arms race.” Just one part of the Flame espionage software (exploiting Microsoft Update) has been estimated as being worth $1 million when it was not publicly known.

This imbalance in economic incentives creates a dangerous and growing mercenary subculture. You now have a growing number of people looking for vulnerabilities, keeping them secret, and selling them to the highest bidder… which will encourage more to look for, and keep secret, these vulnerabilities. After all, they are incentivized to do it. In contrast, the original developer typically does not know how to develop secure software, and there are fewer economic incentives to develop secure software anyway. This is a volatile combination.

Some think the solution is for suppliers to pay people when they report security vulnerabilities to suppliers (“bug bounties”). I do not think bug bounty systems (by themselves) will be enough, though suppliers are trying.

There has been a lot of discussion about Yahoo and bug bounties. On September 30, 2013, the article What’s your email security worth? 12 dollars and 50 cents according to Yahoo reported that Yahoo paid for each vulnerability only $12.50 USD. Even worse, this was not actual money, it was “a discount code that can only be used in the Yahoo Company Store, which sell Yahoo’s corporate t-shirts, cups, pens and other accessories”. Ilia Kolochenko, High-Tech Bridge CEO, says: “Paying several dollars per vulnerability is a bad joke and won’t motivate people to report security vulnerabilities to them, especially when such vulnerabilities can be easily sold on the black market for a much higher price. Nevertheless, money is not the only motivation of security researchers. This is why companies like Google efficiently play the ego card in parallel with [much higher] financial rewards and maintain a ‘Hall of Fame’ where all security researchers who have ever reported security vulnerabilities are publicly listed. If Yahoo cannot afford to spend money on its corporate security, it should at least try to attract security researchers by other means. Otherwise, none of Yahoo’s customers can ever feel safe.” Brian Martin, President of Open Security Foundation, said: “Vendor bug bounties are not a new thing. Recently, more vendors have begun to adopt and appreciate the value it brings their organization, and more importantly their customers. Even Microsoft, who was the most notorious hold-out on bug bounty programs realized the value and jumped ahead of the rest, offering up to $100,000 for exploits that bypass their security mechanisms. Other companies should follow their example and realize that a simple “hall of fame”, credit to buy the vendor’s products, or a pittance in cash is not conducive to researcher cooperation. Some of these companies pay their janitors more money to clean their offices, than they do security researchers finding vulnerabilities that may put thousands of their customers at risk.” Yahoo has since decided to establish a bug bounty system with larger rewards.

More recently, the Internet Bug Bounty Panel (founded by Microsoft and Facebook) will award bounties for public research into vulnerabilities with the potential for severe security implications to the public. It has a minimum bounty of $5,000. However, it certainly does not cover everything; they only intend to pay out for widespread vulnerabilities (affecting a wide range of products or end users), and they plan to limit bounties to severe vulnerabilities that are novel (new or unusual in an interesting way). I think this could help, but it is no panacea.

Bug bounty systems are typically drastically outbid by attackers, and I see no reason to believe this will change.

Indeed, I do not think we should mandate, or even expect, that suppliers will pay people when people report security vulnerabilities to suppliers (aka bug bounties). Such a mandate or expectation could kill small businesses and open source software development, and it would almost certainly chill software development in general. Such payments also would not deal with what I see as a key problem: the people who sell vulnerabilities to the highest bidder. Mandating payment by suppliers would get most people to send them problem reports… if the bug bounty payments were required to be larger than payments to those who would exploit the vulnerability. That would be absurd, because given current prices, such a requirement would almost certainly prevent a lot of software development.

I think people who find a vulnerability in software should normally be free to tell the software’s supplier, so that the supplier can rapidly repair the software (and thus fix it before it is exploited). Some people call this “responsible disclosure”, though some suppliers misuse this term. Some suppliers say they want “responsible disclosure”, but they instead appear to irresponsibly abuse the term to stifle warning those at risk (including customers and the public), as well as irresponsibly delay the repair of critical vulnerabilities (if they repair the vulnerabilities at all). After all, if a supplier convinces the researcher to not alert users, potential users, and the public about serious security defects in their product, then these irresponsible suppliers may believe they don’t need to fix it quickly. People who are suspicious about “responsible disclosure” have, unfortunately, excellent reasons to be suspicious. Many suppliers have shown themselves untrustworthy, and even trustworthy suppliers need to have a reason to stay that way. For that and other reasons, I also think people should be free to alert the public in detail, at no charge, about a software vulnerability (so-called “full disclosure”). Although it’s not ideal for users, full disclosure is sometimes necessary; it can be especially justifiable when a supplier has demonstrated (through past or current actions) that he will not rapidly fix the problem that he created. In fact, I think it’d be an inappropriate constraint of free speech to prevent people from revealing serious problems in software products to the public.

But if we don’t want to mandate bug bounties, or so-called “responsible disclosure”, then where does that leave us? We need to find some way to change the rules so that economics works more closely with and not against computer security.

Well, here is an idea… at least one to start with. Perhaps we should criminalize selling vulnerability information to anyone other than the supplier or the reporter’s government. Basically, treat vulnerability information like organ donation: intentionally eliminate economic incentives in a specific area for a greater social good.

That would mean that suppliers can set up bug bounty programs, and researchers can publish information about vulnerabilities to the public, but this would sharply limit who else can legally buy the vulnerability information. In particular, it would be illegal to sell the information to organized crime, terrorist groups, and so on. Yes, governments can do bad things with the information; this particular proposal does nothing directly to address it. But I think it’s impossible to prevent a citizen from telling his country’s government about a software vulnerability; a citizen could easily see it as his duty. I also think no government would forbid buying such information for itself. However, by limiting sales to that particular citizen’s government, it becomes harder to create bidding wars between governments and other groups for vulnerability information. Without the bidding wars, there’s less incentive for others to find the information and sell it to them. Without the incentives, there would be fewer people working to find vulnerabilities that they would intentionally hide from suppliers and the public.

I believe this would not impinge on freedom of speech. You can tell no one, everyone, or anyone you want about the vulnerability. What you cannot do is receive financial benefit from selling vulnerability information to anyone other than the supplier (who can then fix it) or your own government (and that at least reduces bidding wars).

Of course, you always have to worry about unexpected consequences or easy workarounds for any new proposed law. An organization could set itself up specifically to find vulnerabilities and then exploit them itself… but that’s already illegal, so I don’t see a problem there. A trickier problem is that a malicious organization (say, the mob) could create a “supplier” (e.g., a reseller of proprietary software, or a downstream open source software package) that vulnerability researchers could sell their information to, working around the law. This could probably be handled by requiring, in law, that suppliers report (in a timely manner) any vulnerability information they receive to their relevant suppliers.

Obviously there are some people who will do illegal things, but some people will avoid doing illegal things on principle, and others will avoid illegal activities because they fear getting caught. You don’t need to stop all possible cases, just enough to change the economics.

I fear that the current “vulnerability bidding wars” - left unchecked - will create an overwhelming tsunami of zero-days available to a wide variety of malicious actors. The current situation might impede the peer review of open source software (OSS), since currently people can make more money selling an exploit than in helping the OSS project fix the problem. Thankfully, OSS projects are still widely viewed as public goods, so there are still many people who are willing to take the pay cut and help OSS projects find and fix vulnerabilities. I think proprietary and custom software are actually in much more danger than OSS; in those cases it’s a lot easier for people to think “well, they wrote this code for their financial gain, so I may as well sell my vulnerability information for my financial gain”. The problem for society is that this attitude completely ignores the users and those impacted by the software, who can get hurt by the later exploitation of the vulnerability.

Maybe there’s a better way. If so, great… please propose it! My concern is that economics currently makes it hard - not easy - to have computer security. We need to figure out ways to get Adam Smith’s invisible hand to work for us, not against us.

Standard disclaimer: As always, these are my personal opinions, not those of employer, government, or (deceased) guinea pig.

path: /security | Current Weblog | permanent link to this entry

Thu, 26 Sep 2013

Welcome, those interested in Diverse Double-Compiling (DDC)!

A number of people have recently been discussing or referring to my PhD work, “Fully Countering Trusting Trust through Diverse Double-Compiling (DDC)”, which counters Trojan Horse attacks on compilers. Last week’s discussion on reddit, based on a short slide show, discussed it directly, for example. There have also been related discussions, such as Tor’s work on creating deterministic builds.

For everyone who’s interested in DDC… welcome! I intentionally posted my dissertation, and a video about it, directly on the Internet with no paywall. That way, anyone who wants the information can immediately get it. Enjoy!

I even include enough background material so other people can independently repeat my experiments and verify my claims. I believe that if you cannot reproduce the results, it is not science… and a lot of computational research has stopped being a science. This is not a new observation; “Reproducible Research: Addressing the Need for Data and Code Sharing in Computational Science” by Victoria C. Stodden (Computing in Science & Engineering, 2010) summarizes a roundtable on this very problem. The roundtable found that “Progress in computational science is often hampered by researchers’ inability to independently reproduce or verify published results” and, along with a number of specific steps, that “reproducibility must be embraced at the cultural level within the computational science community.” “Does computation threaten the scientific method” (by Leslie Hatton and Adrian Giordani) and “The case for open computer programs” in Nature (by Darrel C. Ince, Leslie Hatton, and John Graham-Cumming) make similar points. For one of many examples, the paper “The Evolution from LIMMAT to NANOSAT” by Armin Biere (Technical Report #444, 15 April 2004) reported that they could not reproduce results because “From the publications alone, without access to the source code, various details were still unclear.” In the end they realized that “making source code… available is as important to the advancement of the field as publications”. I think we should not pay researchers, or their institutions, if they fail to provide the materials necessary to reproduce the work.

I do have a request, though. There is no patent on DDC, nor is there a legal requirement to report using it. Still, if you apply my approach, please let me know; I’d like to hear about it. Alternatively, if you are seriously trying to use DDC but are having some problems, let me know.

Again - enjoy!

path: /security | Current Weblog | permanent link to this entry

Thu, 20 Jun 2013

Industry-wide Misunderstandings of HTTPS (SSL/TLS)

Industry-wide Misunderstandings of HTTPS describes a nasty security problem involving HTTPS (HTTP over SSL/TLS) and caching. The basic problem is that developers of web applications do not know or understand web standards. The result: 70% of sites tested expose private data on users’ machines by recording data that is supposed to be destroyed.

Here’s the abstract: “Most web browsers, historically, were cautious about caching content delivered over an HTTPS connection to disk - to a greater degree than required by the HTTP standard. In recent years, in response to the increased use of HTTPS for non-sensitive data, and the proliferation of bandwidth-hungry AJAX and Web 2.0 sites, some browsers have been changed to strictly follow the standard, and cache HTTPS content far more aggressively than before. HTTPS web servers must explicitly include a response header to block standards-compliant browsers from caching the response to disk - and not all web developers have caught up to the new browser behavior. ISE identified 21 (70% of sites tested) financial, healthcare, insurance and utility account sites that failed to forbid browsers from storing cached content on disk, and as a result, after visiting these sites, unencrypted sensitive content is left behind on end-users’ machines.”

This vulnerability isn’t as easy to exploit as some other problems; it just means that data that should have been destroyed is hanging around on users’ disks. But that still sets up serious problems, because sensitive information that should have been destroyed can later be recovered from those machines.

This is really just yet another example of the security problems that can happen when people assume, “the only web browser is Internet Explorer 6”. That was never true, and by ignoring standards, they set themselves up for disaster. This isn’t even a new standard; HTTP version 1.1 was released in 1999, so there’s been plenty of time to fix things. Today, many modern systems use AJAX, and SSL/TLS encryption is far more widely used as well, and given these changing conditions, web browsers are changing in standards-compliant ways. Web application developers who followed the standard are doing just fine. The web application developers who ignored the standards are, once again, putting their users at risk.
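For web developers wondering what the fix looks like in practice, here is a minimal sketch using Python’s standard-library WSGI support. The application and middleware names are made up for illustration, but the key point is standard: a Cache-Control: no-store response header tells standards-compliant caches, including browser disk caches, not to store the response. Most real frameworks provide a configuration setting or per-view decorator that does the same thing.

```python
# Minimal sketch: add "Cache-Control: no-store" to every response so that
# standards-compliant browsers do not write sensitive HTTPS content to disk.
# The app and middleware here are illustrative; real frameworks usually offer
# a configuration setting or per-view decorator for the same header.
from wsgiref.simple_server import make_server

def sensitive_app(environ, start_response):
    """A stand-in application that serves sensitive account data."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"account balance: 123.45"]

def no_store(app):
    """Wrap a WSGI app so every response forbids caching to disk."""
    def wrapper(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers = [(k, v) for k, v in headers if k.lower() != "cache-control"]
            headers.append(("Cache-Control", "no-store"))
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return wrapper

if __name__ == "__main__":
    with make_server("", 8000, no_store(sensitive_app)) as httpd:
        print("Serving on port 8000 (demo only; a real deployment would use HTTPS).")
        httpd.handle_request()   # serve a single request, then exit
```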

path: /security | Current Weblog | permanent link to this entry