David A. Wheeler's Blog

Sun, 13 Dec 2020

FLOSS Weekly #609!

I’m currently scheduled to be a guest on FLOSS Weekly on Wednesday, 2020-12-16, at 12:30pm Eastern Time (9:30am Pacific, 17:30 UTC). The general topic will be about Linux Foundation work on improving Open Source Software security.

Please join the live audience or listen later. I expect it will be interesting. I expect that we’ll discuss the Open Source Security Foundation (OpenSSF), the Report on the 2020 FOSS Contributor Survey, the free edX trio of courses on Secure Software Development Fundamentals, and the CII Best Practices Badge program.

path: /security | Current Weblog | permanent link to this entry

Report on the 2020 FOSS Contributor Survey

It’s here! You can now see the Report on the 2020 Free and Open Source Software (FOSS) Contributor Survey! This work was done by the Linux Foundation under the Core Infrastructure Initiative (CII) and later the Open Source Security Foundation (OpenSSF), along with Harvard University.

path: /security | Current Weblog | permanent link to this entry

Secure Software Development Fundamentals

If you develop software, please consider taking the free trio of courses Secure Software Development Fundamentals on edX that I recently created for the Linux Foundation’s Open Source Security Foundation (OpenSSF). The trio of courses is free; if you want to get a certificate to prove you learned it, you can pay to take some tests to earn the certificate (this is how many edX courses work).

Here’s a brief summary:

Almost all software is under attack today, and many organizations are unprepared in their defense. This professional certificate program, developed by the Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, is geared towards software developers, DevOps professionals, software engineers, web application developers, and others interested in learning how to develop secure software, focusing on practical steps that can be taken, even with limited resources to improve information security. The program enables software developers to create and maintain systems that are much harder to successfully attack, reduce the damage when attacks are successful, and speed the response so that any latent vulnerabilities can be rapidly repaired. The best practices covered in the course apply to all software developers, and it includes information especially useful to those who use or develop open source software.

The program discusses risks and requirements, design principles, and evaluating code (such as packages) for reuse. It then focuses on key implementation issues: input validation (such as why allowlists and not denylists should be used), processing data securely, calling out to other programs, sending output, cryptography, error handling, and incident response. This is followed by a discussion of various kinds of verification issues, including testing (such as security testing and penetration testing) and security tools. It ends with a discussion of deployment and handling vulnerability reports.
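
To make the allowlist point concrete, here is a minimal sketch in Python (the ID format and names are made up for illustration) of allowlist-style input validation: accept only inputs that match a known-good pattern, instead of trying to enumerate every bad input.

    import re

    # Allowlist: accept only what is known to be good (here, a hypothetical
    # product ID format: two capital letters, a dash, and four digits).
    VALID_ID = re.compile(r"\A[A-Z]{2}-[0-9]{4}\Z")

    def is_valid_product_id(text: str) -> bool:
        """Return True only if the input exactly matches the allowed pattern."""
        return bool(VALID_ID.match(text))

    # A denylist (e.g., rejecting anything containing "<script>") is far weaker:
    # the attacker only needs one bad input you forgot to list.
    print(is_valid_product_id("AB-1234"))                         # True
    print(is_valid_product_id("AB-1234'; DROP TABLE users;--"))   # False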

The training courses included in this program focus on practical steps that you (as a developer) can take to counter the most common kinds of attacks. The program does not focus on how to attack systems, how attacks work, or longer-term research.

Modern software development depends on open source software, with open source now being pervasive in data centers, consumer devices, and services. It is important that those responsible for cybersecurity are able to understand and verify the security of the open source chain of contributors and dependencies. Thanks to the involvement of the OpenSSF, a cross-industry collaboration that brings together leaders to improve the security of open source software by building a broader community, targeted initiatives, and best practices, this program provides specific tips on how to use and develop open source software securely.

I also teach a graduate course on the design and implementation of secure software. As you might expect, a graduate course isn’t the same thing. But please, if you’re a software developer, take the free edX courses, my class, or in some other way learn how to develop secure software. The software that society depends on needs to be more secure than it is today. Having software developers know how to develop secure software is a necessary step towards creating that secure software we all need.

path: /security | Current Weblog | permanent link to this entry

Sat, 23 May 2020

Verizon still failing to support RPKI

On 2019-06-24 parts of the Internet became inaccessible because Verizon failed to implement a key security measure called Resource Public Key Infrastructure (RPKI). Here’s a brief story about the 2019 failure by Verizon, with follow-on details.

What’s shocking is that Verizon is still failing to implement RPKI. This continuing failure makes it trivial for both accidents and malicious actors (including governments) to shut down large swathes of the Internet, including networks around the US capital. That’s especially absurd because during the COVID-19 pandemic we have become more dependent on the Internet. There have been many routine failures, by accident or on purpose; it’s past time to deploy the basic countermeasure (RPKI) to deal with them. Verizon needs to implement RPKI, as many other operators already have.

The fundamental problem is that the Internet depends on a routing system called Border Gateway Protocol (BGP), which never included a (useful) security mechanism. Resource Public Key Infrastructure (RPKI) provides an important security mechanism to counter certain kinds of BGP problems (whether by accident or on purpose). “Why it’s time to deploy RPKI” (RIPE NCC, 2019-05-17) is a short 2-minute video that explains why it’s past time to deploy RPKI.

Verizon already knows that they’re failing to support RPKI; here’s a complaint posted on 2020-04-19 7:16AM that Verizon wasn’t supporting RPKI. It’s clear RPKI is useful; “Visualizing the Benefits of RPKI” by Kemal Sanjta (2019-07-19) shows how RPKI really does help.

If you’re a Verizon customer, you can easily verify Verizon’s status via Is BGP safe yet?. The answer for Verizon users is “no”.
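
If you want to check route origin validation yourself, here is a small sketch in Python (it assumes the RIPEstat “rpki-validation” data call and its response fields behave as documented; the AS number and prefix are just well-known examples):

    import json
    import urllib.request

    # Ask the RIPEstat "rpki-validation" data call (assumed endpoint/fields)
    # whether a route announcement matches a published Route Origin
    # Authorization (ROA).
    ASN = "AS3333"             # example origin AS (RIPE NCC)
    PREFIX = "193.0.0.0/21"    # example prefix announced by that AS

    url = (f"https://stat.ripe.net/data/rpki-validation/data.json"
           f"?resource={ASN}&prefix={PREFIX}")

    with urllib.request.urlopen(url, timeout=10) as resp:
        result = json.load(resp)

    # Expected statuses are "valid", "invalid", or "unknown" (no ROA published).
    print(result.get("data", {}).get("status", "unexpected response"))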

If your Internet Service Provider (ISP) doesn’t support RPKI, please nag them to do so. If you’re a government, and your ISPs won’t yet support RPKI, ask when they’re going to secure their networks with this basic security measure. It will take work, and it won’t solve every problem, but those objections are merely excuses for inaction; they apply to anything worth doing. RPKI is an important minimum part of securing the Internet, and it’s time to ensure that every Internet Service Provider (ISP) supports it.

path: /security | Current Weblog | permanent link to this entry

Tue, 19 May 2020

Software Bill of Materials (SBOM) work at NTIA

Modern software systems contain many components, which themselves contain components, which themselves contain components. This raises some important questions. For example, when a vulnerability is publicly identified, how do you know whether your system is affected? Another issue involves licensing: how can you be confident that you are meeting all your legal obligations? This is getting harder to do as systems get bigger, and also because software development is a global activity.

On July 19, 2018, the US National Telecommunications and Information Administration (NTIA) “convened a meeting of stakeholders from across multiple sectors to begin a discussion about software transparency and the proposal being considered for a common structure for describing the software components in a product containing software.” [Framing Software Component Transparency: Establishing a Common Software Bill of Material (SBOM)]

A key part of this is to make it much easier to define and exchange a “Software Bill of Materials” (SBOM). You can see a lot of their information at the Community-Drafted Documents on Software Bill of Materials. If you’re interested in this topic, that’s a decent place to start.
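
As a toy illustration of the idea (not one of the standard formats), here is a Python sketch that lists the exact components installed in a Python environment as a JSON document; a real SBOM would normally use a standard format such as SPDX or CycloneDX, which also captures hashes, suppliers, licenses, and component relationships.

    import json
    from importlib import metadata

    # A toy "bill of materials": every Python package installed in this
    # environment, with its exact version.
    components = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in metadata.distributions()
    ]
    components.sort(key=lambda c: (c["name"] or "").lower())

    print(json.dumps({"components": components}, indent=2))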

path: /security | Current Weblog | permanent link to this entry

Fri, 15 May 2020

Initial Analysis of Underhanded Source Code

Announcing - a newly-available security paper I wrote! It’s titled “Initial Analysis of Underhanded Source Code” (by David A. Wheeler, IDA Document D-13166, April 2020). Here’s what it’s about, from its executive summary:

“It is possible to develop software source code that appears benign to human review but is actually malicious. In various competitions, such as the Obfuscated C Contest and Underhanded C Contest, software developers have demonstrated that it is possible to solve a data processing problem “with covert malicious behavior [in the] source code [that] easily passes visual inspection.” This is not merely an academic concern; in 2003, an attacker attempted to subvert the widely used Linux kernel by inserting underhanded software (this attack inserted code that used = instead of ==, an easily missed, one-character difference). This paper provides a brief initial look at underhanded source code, with the intent to eventually help develop countermeasures against it. …

This initial work suggests that countering underhanded code is not an impossible task; it appears that a relatively small set of simple countermeasures can significantly reduce the risk from underhanded code. I recommend examining more samples, identifying a recommended set of underhanded code countermeasures, and applying countermeasures in situations where countering underhanded code is important and the benefits exceed their costs.”
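
As a tiny illustration of the kind of cheap countermeasure I have in mind (this is a crude sketch, not a real static analyzer), here is a Python script that flags C “if” conditions containing a single “=” rather than “==”, the one-character trick used in the 2003 Linux kernel attack:

    import re
    import sys

    # Crude reviewer's aid: flag C "if" conditions that appear to contain an
    # assignment (=) rather than a comparison (==). Expect some false positives
    # and false negatives; the point is that even simple checks raise the bar.
    SUSPICIOUS = re.compile(r'\bif\s*\([^)]*[^=!<>]=[^=][^)]*\)')

    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                if SUSPICIOUS.search(line):
                    print(f"{path}:{lineno}: possible assignment in condition: {line.strip()}")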

In my experience there are usually ways to reduce security risks, once you know about them. This is another case in point; once you know that this is a potential attack, there are a variety of ways to reduce its effectiveness. I don’t think this is the last word on this topic, but I hope it can be immediately applied and that others can build on it.

This was the last paper I wrote when I worked at IDA (I now work at the Linux Foundation). My thanks to IDA for releasing it! My special thanks go to Margaret Myers, Torrance Gloss, and Reginald N. Meeson, Jr., who all worked to make this paper possible.

So if you’re interested in the topic, you can view the Landing page for IDA Document D-13166 or go directly to the PDF for IDA Document D-13166, “Initial Analysis of Underhanded Source Code”. (If that doesn’t work, use this Perma.cc link to paper D-13166.) Enjoy!

path: /security | Current Weblog | permanent link to this entry

Tue, 07 Apr 2020

COVID-19/Coronavirus and Computer Attacks

Sadly, attackers have been exploiting the COVID-19 pandemic (caused by Coronavirus SARS-CoV-2) to cause problems via computers around the world. Modern Healthcare notes that hospitals are seeing active attacks: emails where a sender (pretending to be from the Centers for Disease Control and Prevention) asks the receiver to open a link (which is actually malware), and other scams that claim to track COVID-19 cases but actually steal personal information. Many official government COVID-19 mobile applications have threats (ranging from malware to incredibly basic security problems). For example, in Colombia the government released a mobile app called CoronApp-Colombia to help people track potential COVID-19 symptoms; the intention is great, but as of March 25 it failed to use HTTPS (secure communication), and instead used HTTP (insecure) to relay personal data (including health data).
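
For developers, the fix for that particular failure is simple; here is a minimal sketch in Python (the endpoint and fields are hypothetical) showing health data sent only over HTTPS, with certificate verification left enabled:

    import requests

    # Hypothetical endpoint; the point is the https:// scheme and verify=True,
    # which encrypt the data in transit and check the server's certificate.
    ENDPOINT = "https://example.gov/api/symptom-report"  # hypothetical URL

    payload = {"age_range": "30-39", "symptoms": ["cough", "fever"]}  # illustrative data

    resp = requests.post(ENDPOINT, json=payload, timeout=10, verify=True)
    resp.raise_for_status()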

In the long term, the solution is for software developers and operators to do a much better job in creating and deploying secure applications. In the short term, we need to take extra care about our computer security.

path: /security | Current Weblog | permanent link to this entry

Tue, 17 Sep 2019

CWE Top 25 for 2019

In case you weren’t aware of it, there is now a 2019 version of the CWE Top 25 list. This list attempts to rank the most important kinds of software vulnerabilities (what they call “weaknesses”).

Their new approach is to directly use the National Vulnerability Database (NVD) to score various kinds of vulnerabilities. There are a number of limitations with this approach, and they discuss many of them in the cited page.

Their approach does have some oddities. For example, their #1 worst problem (CWE-119, Improper restriction of operations within the bounds of a memory buffer) is itself the parent of items #5 (CWE-125, out-of-bounds read) and #12 (CWE-787, out-of-bounds write).

Another oddity: they rank Cross-Site Request Forgery (CSRF) quite high (#9). CSRF doesn’t even appear in the 2017 (latest) OWASP Top 10 list, even though the OWASP top 10 list focuses on websites (where CSRF can occur). I think this happens because the CWE folks are using a large dataset from 2017-2018, where there were still a large number of CSRF vulnerabilities. But the impact of those remaining vulnerabilities has been going down, due to changes to frameworks, standards, and web browsers. Most sites use a pre-existing framework, and frameworks have been increasingly adding on-by-default CSRF countermeasures. The “SameSite” cookie attribute that provides an easy countermeasure against CSRF was implemented in most browsers around 2016-2018 (depending on the browser), but having it take effect required that websites make changes, and during that 2017-2018 timeframe websites were only starting to deploy those changes. As of late 2019 several browsers are in the process of switching their SameSite defaults so that they counter CSRF by default, without requiring sites to do anything. (In particular, see the announcement for Chrome and the change log for Mozilla Firefox.) These changes to the SameSite defaults implement the security improvements proposed in Incrementally Better Cookies by M. West in May 2019. This change in the security default could not have been realistically done before 2019 because of a bug in the Apple Safari browser that was only fixed in 2019. As more browsers self-protect against CSRF by default, without requiring sites or developers to do anything, CSRF vulnerabilities will become dramatically less likely. This once again shows the power of defaults; systems should be designed to be secure by default whenever possible, because normally people simply accept the defaults.
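
For the curious, here is a small sketch using Python’s standard http.cookies module (the cookie name and value are illustrative) of what the SameSite countermeasure looks like at the cookie level:

    from http.cookies import SimpleCookie

    # Build a Set-Cookie header for a session cookie that browsers will not send
    # on cross-site requests (SameSite), over plain HTTP (Secure), or expose to
    # scripts (HttpOnly).
    cookie = SimpleCookie()
    cookie["session_id"] = "opaque-random-value"   # illustrative session token
    cookie["session_id"]["samesite"] = "Lax"       # requires Python 3.8+
    cookie["session_id"]["secure"] = True
    cookie["session_id"]["httponly"] = True

    print(cookie["session_id"].OutputString())
    # e.g.: session_id=opaque-random-value; HttpOnly; SameSite=Lax; Secure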

That said, having a top 25 list based on quantitative analysis is probably for the best long-term, and the results appear to be (mostly) very reasonable. I’m glad to see it!

path: /security | Current Weblog | permanent link to this entry

Tue, 26 Mar 2019

Assurance cases

No one thing creates secure software, so you need to do a set of things to make adequately secure software. But no one has infinite resources; how can you have confidence that you are doing the right set? Many experts (including me) have recommended creating an assurance case to connect the various approaches together into an efficient, cohesive whole. It can be hard to start an assurance case, though, because there are few public examples.

So I am pleased to report that you can now freely get my paper A Sample Security Assurance Case Pattern by David A. Wheeler, December 2018. This paper discusses how to create secure software by applying an assurance case, and uses the Badge Application’s assurance case as an example. If you are trying to create a secure application, I hope you will find it useful.

path: /security | Current Weblog | permanent link to this entry

Thu, 16 Aug 2018

Verified voting still necessary, paperless voting still untrustworthy

In 2006 I wrote “Direct Recording Electronic (DRE) Voting: Why Your Vote Doesn’t Matter”. Over a decade later, voting systems are still being used that are fundamentally insecure - though things are better in some places.

First, the basics. If a voting system uses anything other than voter-verified paper to vote, then that voting system is not secure. Paper does not automatically make a voting system secure, but a system that does not use voter-verified paper cannot be secure. Verified voting using paper ballots is a minimum requirement for a trustworthy voting system. Direct recording electronic (DRE) and mobile phone voting systems cannot be adequately secure for elections for government positions. These insecure systems are simply invitations for vote tampering.

The article “Why US elections remain ‘dangerously vulnerable’ to cyber-attacks” discusses some of the reasons why many of the US voting systems are fundamentally untrustworthy. One quote: “Georgia’s election officials continue to defend the state’s electronic voting system that is demonstrably unreliable and insecure, and have repeatedly refused to take administrative, regulatory or legislative action to address the election security failures.” Another quote: “there is little mystery about the safest available voting technology - optically scanned paper ballots, now used by about 80% of US voters. Some of the states that don’t have this technology, like Louisiana, would like it but don’t have the funds to switch. Others, like Georgia and South Carolina, simply aren’t interested in ditching their all-electronic systems despite the compelling reasons to do so.”

“West Virginia to introduce mobile phone voting for midterm elections” by Donie O’Sullivan discusses West Virginia’s introduction of mobile phone voting. Does this require a paper ballot? No. Therefore, West Virginia’s proposed voting system is horrifically insecure, and its results will be completely untrustworthy if implemented.

XKCD’s “Voting Software” is a funny summary. In short: experts on computer security agree that computers must not be directly used for voting when there are important stakes (such as a vote for a political office). When experts say “you cannot adequately trust the systems we build” you should believe the experts.

As I noted earlier, “I used to do magic tricks, and all magic tricks work the same way - misdirect the viewer, so that what they think they see is not the same as reality. Many magic tricks depend on rigged props, where what you see is NOT the whole story. DREs are the ultimate illusion - the naive think they know what’s happening, but in fact they have no way to know what’s really going on.”

I am sure that some election officials will bristle when told that we cannot trust the legitimacy of their results. Too bad. If your election system uses technology that is widely known to be easily subverted, such as voting machines that do not use voter-verified paper ballots, then your results should be viewed with deep suspicion. Without voter-verified paper ballots there is no way to independently verify vote counts, so there is no reason to trust the results. This is old information; those who have not replaced insecure systems are those who have failed to act. Some states certify or approve the use of voting machines without voter-verified paper ballots, but that just shows that their certification or approval processes fail to provide even a minimum level of security.

There is more to protecting the legitimacy of votes, of course. For example, it is critical to ensure that only eligible voters can vote, that voters can vote at most once, and that paper votes cannot be added or removed. But currently many districts are not doing the minimum necessary to have trustworthy election results, and we need to get systems up to minimal standards.

There is an old phrase: “It’s not the people who vote that count. It’s the people who count the votes.” Stalin did not say that exactly, but he did say something like it. The point is that if we do not adequately protect the process of counting votes, then the vote counts are vulnerable to manipulation.

The Voting system principles from Verified Voting provides a useful starting list of requirements; there are other guides too. Voting systems that fail to meet those principles are untrustworthy toys that should not be used for real elections. It is fine to use direct recording equipment, mobile phone voting, or other insecure systems when you are voting for homecoming queen or deciding where to go to lunch. But it is time to stop using fundamentally flawed voting systems like these for elections that matter.

path: /security | Current Weblog | permanent link to this entry

Tue, 24 Jul 2018

Email encryption is here! Use STARTTLS everywhere!

Historically most email has been unencrypted, and that has a serious flaw: unencrypted email can be read and modified by anyone between the sender and final receiver. Tools to do “end-to-end” encryption of email (to prevent reading and/or modifying it) have been available for decades, but they are often hard to use by “normal” users.

Thankfully, there’s been work to significantly improve email security. In particular, STARTTLS email encryption is now widely supported, and the Electronic Frontier Foundation “STARTTLS Everywhere” initiative is working to get everyone to support STARTTLS in their email systems. Therefore: please use the STARTTLS Everywhere tools to check whether your email systems properly support STARTTLS on incoming email, and if they don’t, complain until that gets fixed.

STARTTLS is not perfect, as I’ll discuss below. My point is that it’s way more secure than most email without it, because it improves security without requiring end-users to do anything. Below is additional information that I think you’ll find interesting.

First, here’s how STARTTLS works. Email is transmitted by a series of “hops”; if the hop recipient supports STARTTLS, email is automatically encrypted on that hop as it goes through the infrastructure, without requiring email users to do anything special. That ease-of-use is a big deal - users normally do whatever is the default, so if the default is secure, then users will normally do the secure thing.
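
Here is a small sketch using Python’s smtplib (the server name and credentials are hypothetical) showing one hop being upgraded with STARTTLS; mail servers do the same kind of upgrade between themselves when relaying mail:

    import smtplib
    import ssl

    # One "hop" of mail delivery, upgraded to TLS: connect over plain SMTP, then
    # issue STARTTLS so the rest of the session (including the message) is encrypted.
    context = ssl.create_default_context()  # verifies the server's certificate

    with smtplib.SMTP("mail.example.org", 587) as server:   # hypothetical mail server
        server.starttls(context=context)
        server.login("alice@example.org", "app-password")   # hypothetical credentials
        server.sendmail("alice@example.org", ["bob@example.com"],
                        "Subject: Hello\r\n\r\nEncrypted in transit on this hop.")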

Lots of organizations now support STARTTLS. Google reports that by 2018-07-24 90% of its incoming email, and 90% of outgoing email, was encrypted using STARTTLS (“Email encryption in transit”). Many email services support STARTTLS, including Gmail, Yahoo.com, Outlook.com, and runbox.com. (This includes the top email services.) Many other organizations support STARTTLS, including Google, Microsoft, Bank of America, The American Red Cross, The Salvation Army, The Software Engineering Institute (SEI), Carnegie Mellon University (CMU), and University of California, Berkeley. I give this list to show that there are many different kinds of organizations that support STARTTLS. The STARTTLS Policy List has an incomplete list of organizations known to be supporting STARTTLS.

The Electronic Frontier Foundation “STARTTLS Everywhere” initiative is an effort to get lagging organizations to support STARTTLS. As I noted earlier, you should use their tools to see if your organization properly supports STARTTLS on its incoming emails, and if not, complain to get that fixed.

There are some historical problems that the STARTTLS Everywhere project is working to fix, such as downgrade attacks (where an active attacker strips the STARTTLS offer so mail silently falls back to unencrypted transfer) and weak or missing certificate validation between mail servers; the project’s policy list helps counter these problems.

STARTTLS is not an end-to-end encryption system. STARTTLS only encrypts while the email is being sent between systems (“hops”). That’s not all bad. For example, it means that receiving organizations can continue to examine the emails to check for viruses/malware, counter spam, and so on. But of course, there are downsides.

STARTTLS is, in general, not as strong as an end-to-end encryption system (from the point-of-view of providing confidentiality and integrity). For example, receiving organizations (and anyone who subverts their email system) can see and modify the email. Users who do not trust their email service providers should not depend on STARTTLS; they must use end-to-end encryption. In general, end-to-end encryption is stronger, so we should still work to make end-to-end email encryption easier to use and deploy. But for various reasons it’s hard to deploy end-to-end email encryption, and we’ve spent decades trying. Also, STARTTLS works just fine with end-to-end encryption.

Please indulge me: I think a small rant is appropriate here. There are some security specialists who think that only the perfect is acceptable. Nonsense! Requiring perfection is crazy. I think it is important, when creating and maintaining systems, to have an engineering mindset. In particular, you must always remember that choices have trade-offs. It is not possible to have no risk; an asteroid might land on your head tomorrow. It is not reasonable to demand that systems be used regardless of their difficulty or expense; we all have limited time and money. Security issues are real, and we do need to address them, but time, money, and ease-of-use also matter greatly.

Unlike most other systems, STARTTLS is completely automatic (end-users don’t have to do anything once it is set up), it is not hard to set up, and it counters a large class of attacks. For almost all users, email encryption with STARTTLS is a major improvement over what they had before. Let’s keep working to deploy even better systems, but let’s take partial victories where we can get them.

path: /security | Current Weblog | permanent link to this entry

Sat, 23 Sep 2017

Who decides when you need to update vulnerable software? (Equifax)

I have a trick question: Who decides when you need to update vulnerable software (presuming that if it’s unpatched it might lead to bad consequences)? In a company, is that the information technology (IT) department? The chief information officer (CIO)? A policy? The user of the computer? At home, is it the user of the computer? Perhaps the family’s “tech support” person?

Remember, it’s a trick question. What’s the answer? The answer is…

The attacker decides.

The attacker is the person who decides when you get attacked, and how. Not the computer user. Not a document. Not support. Not an executive. The attacker decides. And that means the attacker decides when you need to update your vulnerable software. If that statement makes you uncomfortable, then you need to change your thinking. This is reality.

So let’s look at Equifax, and see what we can learn from it.

Let’s start with the first revelation in 2017: A security vulnerability in Apache Struts (a widely-used software component) was fixed in March 2017, but Equifax failed to update it for two whole months, leading to the loss of sensitive information on about 143 million US consumers. The update was available for free, for two months, and it was well-known that attackers were exploiting this vulnerability in other organizations. Can we excuse Equifax? Is it “too hard” to update vulnerable software (aka “patch”) in a timely way? Is it acceptable that organizations fail to update vulnerable components when those vulnerabilities allow unauthorized access to lots of sensitive high-value data?

Nonsense. Equifax may choose to fail to update known vulnerable components. Clearly it did so! But Equifax needed to update rapidly, because the need to update was decided by the attackers, not by Equifax. In fact, two months is an absurdly long time, because again, the timeframe is determined by the attacker.

Now it’s true that if you don’t plan to rapidly update, it’s hard to update. Too bad. Nobody cares. Vulnerabilities are routinely found in software components, and have been for decades. Since it is 100% predictable that there will be vulnerabilities found in the software you use (including third-party software components you reuse), you need to plan ahead. I don’t know when it will rain, but I know it will, so I plan ahead by paying for a roof and buying umbrellas. When something is certain to happen, you need to plan for it. For example, make sure you rapidly learn about vulnerabilities in third party software you depend on, and that you have a process in place (with tools and automated testing) so that you can update and ship in minutes, not months. Days, not decades.

The Apache Struts Statement on Equifax Security Breach has some great points about how to properly handle reused software components (no matter where it’s from). The Apache Struts team notes that you should (1) understand the software you use, (2) establish a rapid update process, (3) remember that all complex software has flaws, (4) establish security layers, and (5) establish monitoring. Their statement has more details, in particular for #2 they say, “establish a process to quickly roll out a security fix release… [when reused software] needs to be updated for security reasons. Best is to think in terms of hours or a few days, not weeks or months.”

Many militaries refer to the “OODA loop”, which is the decision cycle of observe, orient, decide, and act. The idea was developed by military strategist and United States Air Force Colonel John Boyd. Boyd noted that, “In order to win, we should operate at a faster tempo or rhythm than our adversaries…”. Of course, if you want to lose, then you simply need to operate more slowly than your adversary. You need to get comfortable with this adversarial terminology, because if you’re running a computer system today, you are in an adversarial situation, and the attackers are your adversaries.

In short, you must update your software when vulnerabilities are found, before attackers can exploit them (if they can be exploited). If you can’t do that, then you need to change how you manage your software so you can. Again, the attacker decides how fast you need to react.

We’re only beginning to learn about the Equifax disaster of 2017, but it’s clear that Equifax “security” is just one failure after another. The more we learn, the worse it gets. Here is some of the information we have so far. Equifax used the ridiculous pair Username “admin”, password “admin” for a database with personal employee information. Security Now! #628 showed that Equifax recommended using Netscape Navigator in their website discussion on security, a ridiculously obsolete suggestion (Netscape shut down in 2003, 14 years ago). Equifax provided customers with PINs that were simply the date and time, making the PINs predictable and thus insecure. Equifax set up a “checker” site which makes false statements: “In what is an unconscionable move by the credit report company, the checker site, hosted by Equifax product TrustID, seems to be telling people at random they may have been affected by the data breach… It’s clear Equifax’s goal isn’t to protect the consumer or bring them vital information. It’s to get you to sign up for its revenue-generating product TrustID… [and] TrustID’s Terms of Service [say] that anyone signing up for the product is barred from suing the company after.” Equifax’s credit report monitoring site was found to be vulnerable to hacking (specifically, an XSS vulnerability that was quickly found by others). Equifax failed to use its own domain name for all its sites (as is standard), making it easy for others to spoof them. Indeed, NPR reported that “After Massive Data Breach, Equifax Directed Customers To Fake Site”. There are now suggestions that there were break-ins even earlier which Equifax never detected. In short: The more we learn, the worse it gets.

Most obviously, Equifax failed to responsibly update a known vulnerable component in a timely way. Updating software doesn’t matter when there’s no valuable information, but in this case extremely sensitive personal data was involved. This was especially sensitive data, Equifax was using a component version with a publicly-known vulnerability, and it was known that attackers were exploiting that vulnerability. It was completely foreseeable that attackers would use this vulnerable component to extract sensitive data. In short, Equifax had a duty of care that they failed to perform. Sometimes attackers perform an unprecedented kind of sneaky attack, and get around a host of prudent defenses; that would be different. But there is no excuse for failing to promptly respond when you know that a component is vulnerable. That is negligence.

But how can you quickly update software components? Does this require magic? Not at all, it just requires accepting that this will happen and so you must be ready. This is not an unpredictable event; I may not know exactly when it will happen, but I can be certain that it will happen. Once you accept that it will happen, you can easily get ready for it. There are tools that can help you monitor when your components publicly report a vulnerability or security update, so that you quickly find out when you have a problem. Package managers let you rapidly download, review, and update a component. You need to have an automated checking system that uses a variety of static tools, automated test suites, and other dynamic tools so that you can be confident that the system (with updated component) works correctly. You need to be confident that you can ship to production immediately with acceptable risk after you’ve updated your component and run your automated checking system. If you’re not confident, then your checking system is unacceptable and needs to be fixed. You also need to quickly ship that to production (and this must be automated), because again, you have to address vulnerabilities faster than the attacker.
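
As a sketch of what such automation can look like (it assumes a Python project and the pip-audit tool; your stack will use different tools, such as bundler-audit for Ruby), here is a tiny CI gate that fails the build whenever a dependency has a known vulnerability:

    import subprocess
    import sys

    # Check the project's Python dependencies against known-vulnerability
    # databases. pip-audit exits non-zero when it finds vulnerable packages,
    # so the CI job fails and the fix can ship quickly.
    result = subprocess.run(
        ["pip-audit", "--requirement", "requirements.txt"],
        capture_output=True, text=True,
    )
    print(result.stdout)

    if result.returncode != 0:
        print("Vulnerable dependencies found; update before shipping.", file=sys.stderr)
        sys.exit(1)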

Of course, your risks go down much further if you think about security the whole time you’re developing software. For example, you can design your system so that a defect is (1) less likely to lead to a system vulnerability or (2) has less of an impact. When you do that, then a component vulnerability will often not lead to a system vulnerability anyway. A single vulnerability in a front-end component should not have allowed such a disastrous outcome in the first place, since this was especially sensitive data, so the Equifax design also appears to have been negligent. They also failed to detect the problem for a long time; you should be monitoring high-value systems, to help reduce the impact of a vulnerability. The failure to notice this is also hard to justify. Developing secure software is quite possible, and you don’t need to break the bank to do it. It’s impossible in the real world to be perfect, but it’s very possible to be adequately secure.

Sadly, very few software developers know how to develop secure software. So I’ve created a video that’s on YouTube that should help: “How to Develop Secure Applications: The BadgeApp Example” (by David A. Wheeler). This walks through a real-world program (BadgeApp) as an example, to show approaches for developing far more secure software. If you’re involved in software development in any way, I encourage you to take a look at that video. Your software will almost certainly look different, but if you think about security throughout development, the results will almost certainly be much better. Perfection is impossible, but you can manage your risks, that is, reduce the probability and impact of attacks. There are a wide variety of countermeasures that can often prevent attacks, and they work well when combined with monitoring and response mechanisms for the relatively few attacks that get through.

The contrast between Equifax and BadgeApp is stark. Full disclosure: I am the technical lead of the BadgeApp project… but it is clear we did a better job than Equifax. Earlier this week a vulnerability was announced in one of the components (nokogiri) that is used by the BadgeApp. This vulnerability was announced on ruby-advisory-db, a database of vulnerable Ruby gems (software library components) used to report to users about component vulnerabilities. Within two hours of that announcement the BadgeApp project had downloaded the security update, run the BadgeApp application through a variety of tools and its automated test suite (with 100% statement coverage) to make sure everything was okay, and pushed the fixed version to the production site. The BadgeApp application is a simpler program, sure, but it also manages much less sensitive data than Equifax’s systems. We should expect Equifax to do at least as well, because they handle much more sensitive data. Instead, Equifax failed to update reused components with known vulnerabilities in a timely fashion.

Remember, the attacker decides.

The attacker decides how fast you need to react, what you need to defend against, and what you need to counter. More generally, the attacker decides how much you need to do to counter attacks. You do not get to decide what the attacker will choose to do. But you can plan ahead to make your software secure.

path: /security | Current Weblog | permanent link to this entry

Tue, 25 Oct 2016

Creating Laws for Computer Security

In 2016 the website KrebsonSecurity was taken down by a large distributed denial-of-service (DDoS) attack. More recently, many large sites became inaccessible due to a massive DDoS attack (see, e.g., “Hackers Used New Weapons to Disrupt Major Websites Across U.S.” by Nicole Perlroth, Oct. 21, 2016, NY Times).

Sadly, the “Internet of Things” is really the “Internet of painfully insecure things”. This is fundamentally an externalities problem (the buyers and sellers are not actually bearing the full cost of the exchange), and in these cases mechanisms like law and regulation are often used.

So, what laws or regulations should be created to improve computer security? Are there any? Obviously there are risks to creating laws and regulations. These need to be targeted at countering widespread problems, without interfering with experimentation, without hindering free expression or the development of open source software, and so on. It’s easy to create bad laws and regulations - but I believe it is possible to create good laws and regulations that will help.

My article Creating Laws for Computer Security lists some potential items that could be turned into laws that I think could help computer security. No doubt some could be improved, and there are probably things I’ve missed. But I think it’s important that people start discussing how to create narrowly-tailored laws that counter the more serious problems without causing too many negative side-effects. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Mon, 01 Feb 2016

Address Sanitizer on an entire Linux distribution!

Big news in computer security: Hanno Boeck has recently managed to get Address Sanitizer running on an entire Linux distribution (Gentoo) as an experimental edition. For those who don’t know, Address Sanitizer is an amazing compile-time option that detects a huge range of memory errors in memory-unsafe languages (in particular C and C++). These kinds of errors often lead to disastrous security vulnerabilities, such as Heartbleed.

This kind of distribution option is absolutely not for everyone. Address Sanitizer on average increases processing time by about 73%, and memory usage by 340%. What’s more, this work is currently very experimental, and you have to disable some other security mechanisms to make it work. That said, this effort has already borne a lot of valuable fruit. Turning on these mechanisms across an entire Linux distribution has revealed a large number of memory errors that are getting fixed. I can easily imagine this being directly useful in the future, too. Computers are very fast and have lots of memory, even when compared to computers of just a few years earlier. There are definitely situations where it’s okay to effectively halve performance and reduce useful memory, and in exchange, significantly increase the system’s resistance to novel attack. My congrats!!

path: /security | Current Weblog | permanent link to this entry

Mon, 23 Nov 2015

Ransomware coming to medical devices?

Forrester Research has an interesting cybersecurity prediction for 2016: We’ll see ransomware for a medical device or wearable.

This is, unfortunately, plausible. I don’t know if it will happen in 2016, but it’s pretty reasonable. Indeed, I can see attackers making ransom threats even if we can’t be sure that the ransomware is actually installed.

After all, Dick Cheney had his pacemaker’s Wifi disabled because of this concern (see also here). People have already noted that terrorists might use this, since medical devices are often poorly secured. The additional observation is that this may be a better way to (criminally) make money. We already have ransomware, including organizations that are getting better at extorting with it. Traditional ransomware is foiled by good backups; in this case backups won’t help, and victims will (understandably) be willing to pay much, much more. And I think that medical devices are actually a softer target.

With luck, this won’t come true in 2016. The question is, is that because it doesn’t show up until 2017 or 2018… or because the first ones were in 2015? DHS is funding work in this area, and that’s good… but while research can help, the real problem is that we have too many software developers who do not have a clue how to develop secure software… and too many people (software developers or not) who think that’s acceptable.

In short, we still have way too many people building safety-critical devices who don’t understand that security is necessary for safety. I hope that this changes - and quickly.

path: /security | Current Weblog | permanent link to this entry

Tue, 07 Apr 2015

Heartbleed found with american fuzzy lop (afl) and Address Sanitizer (ASan)

Big news in security vulnerability research: Hanno Böck found Heartbleed using american fuzzy lop (afl) and Address Sanitizer (ASan) - and in only 6 hours of execution time.

This means that software developers should seriously consider using a more-advanced fuzzer, such as american fuzzy lop (afl), along with Address Sanitizer (ASan) (an option in both the LLVM/clang and gcc compilers), whenever you write in C, C++, Objective-C, or in other circumstances that are not memory-safe. In particular, seriously consider doing this if your program is exposed to the internet or it processes data sent via the internet (practically all programs meet this criterion nowadays). I had speculated that this combination could have found Heartbleed in my essay on Heartbleed, but this confirmation is really important. Here I will summarize what’s going on (using the capitalization conventions of the various tool developers).

The american fuzzy lop (afl) program created by Michal Zalewski is a surprisingly effective fuzzer. A fuzzer is simply a tool that sends lots of semi-random inputs into a program to detect gross problems (typically a crash). Fuzzers do not know what the exact correct answers are, but because they do not, they can try out more inputs than systems that know the exact correct answers. But afl is smarter than most fuzzers; instead of just sending random inputs, afl tracks which branches are taken in a program. Even more interestingly, afl even tracks how often different branches are taken when running a program (that is especially unusual). Then, when afl creates new inputs, it prefers to create them based on inputs that have produced different counts on at least some branches. This evolutionary approach, using both branch coverage and the number of times a branch is used, is remarkably effective. Simple dumb random fuzzers can only perform relatively shallow tests; getting any depth has required more complex approaches such as detailed descriptions of the required format (the approach used by so-called “smart” fuzzers) and/or white-box constraint solving (such as fuzzgrind or Microsoft’s SAGE). It’s not at all clear that afl eliminates the value of these other fuzzing approaches; I can see combining their approaches. However, afl is clearly getting far better results than simple dumb fuzzers that just send random values. Indeed, the afl of today is getting remarkably deep coverage for a fuzzer. For example, the post Pulling JPEGs out of thin air shows how afl was able to start with only the text “hello” (a hideously bad starting point) and still automatically figure out how to create valid JPEG files.

However, while afl is really good at creating inputs, it can only detect problems if they lead to a crash; vulnerabilities like Heartbleed do not normally cause a crash. That’s where Address Sanitizer (ASan) comes in. Address Sanitizer turns many memory access errors, including nearly all out-of-bounds accesses, double-free, and use-after-free, into a crash. ASan was originally created by Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. ASan is amazing all by itself, and the combination is even better. The fuzzer afl is good at creating inputs, and ASan is good at turning problems into something that afl can detect. Both are available at no cost as Free/ libre/ open source software (FLOSS), so anyone can try them out, see how they work, and even make improvements.
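
Here is a sketch of how the combination is typically set up (the harness, file names, and directories are hypothetical), driving afl and ASan from a small Python script:

    import os
    import subprocess

    # Build a C parsing harness with afl's instrumenting compiler and
    # AddressSanitizer, then fuzz it. AFL_USE_ASAN asks afl's compiler wrapper
    # to add -fsanitize=address during the build.
    env = dict(os.environ, AFL_USE_ASAN="1")

    subprocess.run(["afl-gcc", "-g", "-o", "harness", "harness.c"], env=env, check=True)

    # "-m none" removes afl's memory limit, which ASan's large virtual memory
    # reservations need; "@@" is replaced by afl with each generated input file.
    subprocess.run(["afl-fuzz", "-i", "testcases", "-o", "findings",
                    "-m", "none", "--", "./harness", "@@"], check=True)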

Normally afl can only fuzz file inputs, but Heartbleed could only be triggered by network access. This is no big deal; Hanno describes in his article how to wrap up network programs so they can be fuzzed by file fuzzers.

Sometimes afl and ASan do not work well together today on 64-bit systems. This has to do with some technical limitations involving memory use; on 64-bit systems ASan reserves (but does not use) a lot of memory. This is not necessarily a killer; in many cases you can use them together anyway (as Hanno did). More importantly, this problem is about to go away. Recently I co-authored (along with Sam Hakim) a tool we call afl-limit-memory; it uses Linux cgroups to eliminate the problem so that you can always combine afl and ASan (at least on Linux). We have already submitted the code to the afl project leader, and we hope it will become part of afl soon. So this is already a disappearing problem.

There are lots of interesting related resources. If you want to learn about fuzzing more generally, some books you might want to read are Fuzzing: Brute Force Vulnerability Discovery by Sutton, Greene, and Amini and Fuzzing for Software Security Testing and Quality Assurance (Artech House Information Security and Privacy) by Takanen, DeMott, and Miller. My class materials for secure software design and programming, #9 (analysis tools), also cover fuzzing (and are freely available). The Fuzzing Project led by Hanno is an effort to encourage the use of fuzzing to improve the state of free software security, and includes some tutorials on how to do it. The paper AddressSanitizer: A Fast Address Sanity Checker is an excellent explanation of how ASan works. My essay How to Prevent the next Heartbleed discusses many different approaches that would, or would not, have detected Heartbleed.

I do not think that fuzzers (or any dynamic technique) completely replace static analysis approaches such as source code weakness analyzers. Various tools, including dynamic tools like fuzzers and static tools like source code weakness analyzers, are valuable complements for finding vulnerabilities before the attackers do.

path: /security | Current Weblog | permanent link to this entry

Sat, 04 Apr 2015

Security presentation updates

I’ve updated my presentations on how to design and implement secure software. In particular, I’ve added much about analysis tools and formal methods. There is a lot going on in those fields, and no matter what I do I am only scratching the surface. On the other hand, if you have not been following these closely, then there’s a lot you probably don’t know about. Enjoy!

path: /security | Current Weblog | permanent link to this entry

Sat, 14 Feb 2015

Learning from Disaster

Learning from Disaster is a collection of essays that examines computer security disasters, and what we can learn from those disasters. This includes Heartbleed, Shellshock, POODLE, the Apple goto fail, and Sony Pictures. If you’re interested in computer security I think you’ll find this collection interesting.

So: please enjoy Learning from Disaster.

path: /security | Current Weblog | permanent link to this entry

Tue, 06 Jan 2015

Cloud security

There seems to be a lot of confusion about security fundamentals of cloud computing (and other utility-based approaches). For example, many people erroneously think hardware virtualization is required for clouds (it is not), or that hardware virtualization and containerization are the same (they are not).

My essay Cloud Security: Virtualization, Containers, and Related Issues is my effort to counteract some of this confusion. It has a quick introduction to clouds, a contrast of various security isolation mechanisms used to implement them, and a discussion of some related issues.

So please check out (and enjoy) Cloud Security: Virtualization, Containers, and Related Issues.

path: /security | Current Weblog | permanent link to this entry

Wed, 31 Dec 2014

I hope we learn from the computer security problems of 2014

As 2014 draws to a close, I hope anyone involved with computers will resolve to learn from the legion of security problems of 2014.

We had way too many serious vulnerabilities in widely-used software revealed in 2014. In each case, there are lessons that people could learn from them. Please take a look at the lessons that can be learned from Heartbleed, Shellshock, the POODLE attack on SSLv3, and the Apple goto fail vulnerability. More generally, a lot of information is available on how to develop secure software - even though most software developers still do not know how to develop secure software. Similarly, there are a host of lessons that organizations could learn from Sony Pictures.

Will people actually learn anything? Georg Wilhelm Friedrich Hegel reportedly said that, “We learn from history that we do not learn from history”.

Yet I think there are reasons to hope. There are a lot of efforts to improve the security of Free/Libre/Open Source Software (FLOSS) that is important yet inadequately secure. The Linux Foundation (LF) Core Infrastructure Initiative (CII) was established to “fund open source projects that are in the critical path for core computing functions” to improve their security. The most recent European Union (EU) budget includes €1 million for auditing free-software programs to identify and fix vulnerabilities. The US DHS HOST project is also working to improve security using open source software (OSS). The Google Application Security Patch Reward Program is also working to improve security. And to be fair, these problems were found by people who were examining the software or protocols so that the problems could be fixed - exactly what you want to happen. At an organizational level, I think Sony was unusually lax in its security posture. I am already seeing evidence that other organizations have suddenly become much more serious about security, now that they see what has been done to Sony Pictures. In short, they are finally starting to see that security problems are not theoretical; they are real.

Here’s hoping that 2015 will be known as the year where people took computer security more seriously, and as a result, software and our computer systems became much harder to attack. If that happens, that would make 2015 an awesome year.

path: /security | Current Weblog | permanent link to this entry