David A. Wheeler's Blog

Sat, 16 Nov 2013

Vulnerability bidding wars and vulnerability economics

I worry that the economics of software vulnerability reporting is seriously increasing the risks to society. The problem is the rising bidding wars for vulnerability information, leading to a rapidly-growing number of vulnerabilities known only to attackers. These kinds of vulnerabilities, when exploited, are sometimes called “zero-days” because users and suppliers had zero days of warning. I suspect we should create laws limiting the sale of vulnerability information, similar to the limits we place on organ donation, to change the economics of vulnerability reporting. To see why, let me go over some background first.

A big part of the insecure software problem today is that relatively few of today’s software developers know how to develop software that resists attack (e.g., via the Internet). Many schools don’t teach it at all. I think that’s ridiculous; you’d think people would have heard about the Internet by now. I do have some hope that this will get better. I teach a graduate course on how to develop secure software at George Mason University (GMU), and attendance has increased over time. But today, most software developers do not know how to create secure software.

In contrast, there is an increasing bidding war for vulnerability information by organizations that intend to exploit those vulnerabilities. This incentivizes people to search for vulnerabilities, but not to report them to the suppliers (who could fix them) and not to alert the public. As Bruce Schneier reports in “The Vulnerabilities Market and the Future of Security” (June 1, 2012), “This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.” Forbes ran an article about this in 2012, Meet The Hackers Who Sell Spies The Tools To Crack Your PC (And Get Paid Six-Figure Fees). The Forbes article describes what happened when French security firm Vupen broke the security of the Chrome web browser. Vupen would not tell Google how they broke in, because the $60,000 award from Google was not enough. Chaouki Bekrar, Vupen’s chief executive, said that they “wouldn’t share this [information] with Google for even $1 million… We want to keep this for our customers.” These customers do not plan to fix security bugs; they purchase exploits or techniques with the “explicit intention of invading or disrupting”. Vupen even “hawks each trick to multiple government agencies, a business model that often plays its customers against one another as they try to keep up in an espionage arms race.” The exploit of Microsoft Update used in just one part of the Flame espionage software has been estimated to have been worth $1 million while it was unknown.

This imbalance in economic incentives creates a dangerous and growing mercenary subculture. You now have a growing number of people looking for vulnerabilities, keeping them secret, and selling them to the highest bidder… which will encourage still more people to look for, and keep secret, these vulnerabilities. After all, that is what they are being paid to do. In contrast, the original developer typically does not know how to develop secure software, and there are few economic incentives to develop secure software anyway. This is a volatile combination.

Some think the solution is for suppliers to pay people when they report security vulnerabilities to suppliers (“bug bounties”). I do not think bug bounty systems (by themselves) will be enough, though suppliers are trying.

There has been a lot of discussion about Yahoo and bug bounties. On September 30, 2013, the article What’s your email security worth? 12 dollars and 50 cents according to Yahoo reported that Yahoo paid only $12.50 (USD) per vulnerability. Even worse, this was not actual money; it was “a discount code that can only be used in the Yahoo Company Store, which sells Yahoo’s corporate t-shirts, cups, pens and other accessories”. Ilia Kolochenko, High-Tech Bridge CEO, says: “Paying several dollars per vulnerability is a bad joke and won’t motivate people to report security vulnerabilities to them, especially when such vulnerabilities can be easily sold on the black market for a much higher price. Nevertheless, money is not the only motivation of security researchers. This is why companies like Google efficiently play the ego card in parallel with [much higher] financial rewards and maintain a ‘Hall of Fame’ where all security researchers who have ever reported security vulnerabilities are publicly listed. If Yahoo cannot afford to spend money on its corporate security, it should at least try to attract security researchers by other means. Otherwise, none of Yahoo’s customers can ever feel safe.” Brian Martin, President of the Open Security Foundation, said: “Vendor bug bounties are not a new thing. Recently, more vendors have begun to adopt and appreciate the value it brings their organization, and more importantly their customers. Even Microsoft, who was the most notorious hold-out on bug bounty programs, realized the value and jumped ahead of the rest, offering up to $100,000 for exploits that bypass their security mechanisms. Other companies should follow their example and realize that a simple ‘hall of fame’, credit to buy the vendor’s products, or a pittance in cash is not conducive to researcher cooperation. Some of these companies pay their janitors more money to clean their offices than they do security researchers finding vulnerabilities that may put thousands of their customers at risk.” Yahoo has since decided to establish a bug bounty system with larger rewards.

More recently, the Internet Bug Bounty Panel (founded by Microsoft and Facebook) will award bounties for publicly disclosed research into vulnerabilities with potentially severe security implications. It has a minimum bounty of $5,000. However, it certainly does not cover everything: they only intend to pay for vulnerabilities that are widespread (affecting a wide range of products or end users), and they plan to limit bounties to severe vulnerabilities that are novel (new or unusual in an interesting way). I think this could help, but it is no panacea.

Bug bounty systems are typically drastically outbid by attackers, and I see no reason to believe this will change.
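To make that gap concrete, here is a minimal illustrative sketch (in Python) that simply compares the defender-side bounties with the attacker-side prices quoted earlier in this post (Yahoo’s $12.50 store credit, the Internet Bug Bounty’s $5,000 minimum, Google’s $60,000 Chrome award, Microsoft’s up-to-$100,000 bounty, and the roughly $1 million figures attached to the Vupen Chrome exploit and the Flame/Microsoft Update exploit). These are anecdotes rather than a market survey, and the arithmetic is only meant to show the rough order of magnitude of the imbalance, not precise market prices.

    # Illustrative comparison only; the figures are the examples quoted above,
    # not a survey of vulnerability-market prices.
    defender_bounties = {
        "Yahoo (store credit, 2013)": 12.50,
        "Internet Bug Bounty (minimum)": 5_000,
        "Google (Chrome award declined by Vupen)": 60_000,
        "Microsoft (mitigation-bypass bounty, up to)": 100_000,
    }
    attacker_side_prices = {
        "Vupen's stated floor for its Chrome exploit": 1_000_000,
        "Estimated value of the Flame / Microsoft Update exploit": 1_000_000,
    }

    best_bounty = max(defender_bounties.values())
    lowest_attacker_price = min(attacker_side_prices.values())

    print(f"Best defender bounty cited:       ${best_bounty:,.2f}")
    print(f"Lowest attacker-side price cited: ${lowest_attacker_price:,.2f}")
    print(f"Gap: roughly {lowest_attacker_price / best_bounty:.0f}x in the attackers' favor")

Even taking the most generous bounty cited here, the attacker-side prices are about an order of magnitude higher, which is the point of the paragraph above: a supplier-funded bounty program competes in a market it cannot realistically win on price alone.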

Indeed, I do not think we should mandate, or even expect, that suppliers will pay people when people report security vulnerabilities to suppliers (aka bug bounties). Such a mandate or expectation could kill small businesses and open source software development, and it would almost certainly chill software development in general. Such payments also would not deal with what I see as a key problem: the people who sell vulnerabilities to the highest bidder. Mandating payment by suppliers would only get most people to send them problem reports if the bug bounty payments were required to be larger than the payments offered by those who would exploit the vulnerability. That would be absurd; given current prices, such a requirement would almost certainly prevent a lot of software development.

I think people who find a vulnerability in software should normally be free to tell the software’s supplier, so that the supplier can rapidly repair the software (and thus fix it before it is exploited). Some people call this “responsible disclosure”, though some suppliers misuse the term. Some suppliers say they want “responsible disclosure”, but they instead appear to irresponsibly abuse the term to stifle warnings to those at risk (including customers and the public), as well as to irresponsibly delay the repair of critical vulnerabilities (if they repair the vulnerabilities at all). After all, if a supplier convinces the researcher not to alert users, potential users, and the public about serious security defects in its product, then these irresponsible suppliers may believe they don’t need to fix it quickly. People who are suspicious about “responsible disclosure” have, unfortunately, excellent reasons to be suspicious. Many suppliers have shown themselves untrustworthy, and even trustworthy suppliers need to have a reason to stay that way. For that and other reasons, I also think people should be free to alert the public in detail, at no charge, about a software vulnerability (so-called “full disclosure”). Although it is not ideal for users, full disclosure is sometimes necessary; it can be especially justifiable when a supplier has demonstrated (through past or current actions) that it will not rapidly fix the problem it created. In fact, I think it would be an inappropriate constraint on free speech to prevent people from revealing serious problems in software products to the public.

But if we don’t want to mandate bug bounties, or so-called “responsible disclosure”, then where does that leave us? We need to find some way to change the rules so that economics works with, rather than against, computer security.

Well, here is an idea… at least one to start with. Perhaps we should criminalize selling vulnerability information to anyone other than the supplier or the reporter’s government. Basically, treat vulnerability information like organ donation: intentionally eliminate economic incentives in a specific area for a greater social good.

That would mean that suppliers can set up bug bounty programs, and researchers can publish information about vulnerabilities to the public, but this would sharply limit who else can legally buy the vulnerability information. In particular, it would be illegal to sell the information to organized crime, terrorist groups, and so on. Yes, governments can do bad things with the information; this particular proposal does nothing directly to address it. But I think it’s impossible to prevent a citizen from telling his country’s government about a software vulnerability; a citizen could easily see it as his duty. I also think no government would forbid buying such information for itself. However, by limiting sales to that particular citizen’s government, it becomes harder to create bidding wars between governments and other groups for vulnerability information. Without the bidding wars, there’s less incentive for others to find the information and sell it to them. Without the incentives, there would be fewer people working to find vulnerabilities that they would intentionally hide from suppliers and the public.

I believe this would not impinge on freedom of speech. You can tell no one, everyone, or anyone you want about the vulnerability. What you cannot do is receive financial benefit from selling vulnerability information to anyone other than the supplier (who can then fix it) or your own government (and that at least reduces bidding wars).

Of course, you always have to worry about unexpected consequences of, or easy workarounds for, any new proposed law. An organization could set itself up specifically to find vulnerabilities and then exploit them itself… but that is already illegal, so I don’t see a problem there. A trickier problem is that a malicious organization (say, the mob) could create a “supplier” (e.g., a reseller of proprietary software, or a downstream open source software package) that vulnerability researchers could sell their information to, working around the law. This could probably be handled by requiring, in law, that suppliers report (in a timely manner) any vulnerability information they receive to their relevant upstream suppliers.

Obviously some people will do illegal things anyway, but some will avoid doing illegal things on principle, and others will avoid illegal activities because they fear getting caught. You don’t need to stop all possible cases, just enough to change the economics.

I fear that the current “vulnerability bidding wars” - left unchecked - will create an overwhelming tsunami of zero-days available to a wide variety of malicious actors. The current situation might impede the peer review of open source software (OSS), since currently people can make more money selling an exploit than by helping the OSS project fix the problem. Thankfully, OSS projects are still widely viewed as public goods, so there are still many people who are willing to take the pay cut and help OSS projects find and fix vulnerabilities. I think proprietary and custom software are actually in much more danger than OSS; in those cases it’s a lot easier for people to think “well, they wrote this code for their financial gain, so I may as well sell my vulnerability information for my financial gain”. The problem for society is that this attitude completely ignores the users and those impacted by the software, who can get hurt by the later exploitation of the vulnerability.

Maybe there’s a better way. If so, great… please propose it! My concern is that economics currently makes it hard - not easy - to have computer security. We need to figure out ways to get Adam Smith’s invisible hand to work for us, not against us.

Standard disclaimer: As always, these are my personal opinions, not those of employer, government, or (deceased) guinea pig.
