This is the main web site for flawfinder, a simple program that examines C/C++ source code and reports possible security weaknesses (“flaws”) sorted by risk level. It’s very useful for quickly finding and removing at least some potential security problems before a program is widely released to the public. See “how does Flawfinder work?”, below, for more information on how it works. Flawfinder is officially Common Weakness Enumeration (CWE)-compatible.
Flawfinder is specifically designed to be easy to install and use. After installing it, at a command line just type “flawfinder”, followed by the name of a directory containing source code, and it will report the potential security flaws it finds.
Flawfinder works on Unix-like systems (it’s been tested on GNU/Linux), and on Windows by using Cygwin. It requires Python 2 to run.
Please take a look at other static analysis tools for security, too. One reason I wrote flawfinder was to encourage using static analysis tools to find security vulnerabilities.
If you’re curious what the results look like, here are some sample outputs:
Flawfinder is released under the General Public License (GPL) version 2 or later, and thus is open source software (as defined by the Open Source Definition) and Free Software (as defined by the Free Software Foundation’s GNU project). Feel free to see Open Source Software / Free Software (OSS/FS) References or Why OSS/FS? Look at the Numbers! for more information about OSS/FS.
Others have found it useful. Here are a few testimonials that it’s received over time:
Flawfinder comes with a simple manual describing how to use it. If you’re not sure you want to take the plunge to install the program, you can just look at the documentation first, which discusses how to use it and how it supports the CWE. The documentation is available in the following formats:
Many Unix-like systems have a package already available to them, including Fedora, Debian, and Ubuntu. Debian and Ubuntu users can install flawfinder using apt-get install flawfinder (as usual); Fedora users can use yum install flawfinder. Cygwin also includes flawfinder. It’s also available in many other distributions. Flawfinder is available via FreeBSD’s Ports system (see this FreeBSD ports query for flawfinder and flawfinder info for security-related ports). OpenBSD includes flawfinder in its “ports”. NetBSD users can simply use NetBSD’s pkgsrc to install flawfinder (my thanks to Thomas Klausner for doing the flawfinder NetBSD packaging). The Fink project, which packages OSS/FS for Darwin and Mac OS X, has a Fink flawfinder package, so users of those systems may find that an easy way to get flawfinder.
If there’s no package available to you, or it’s old, you can download and install flawfinder directly. The current version of flawfinder is 1.29. If you want to see how it’s changed, view its ChangeLog. You can even go look at the flawfinder source code directly. We assume you have a Unix-like system such as a Linux-based system. If you use Windows, install Cygwin first, and install it on top of Cygwin.
First, download it. You can get the current released version of flawfinder in .tar.gz (tarball) format here. You can also get flawfinder by visiting the SourceForge flawfinder project page, in particular its Files section. You definitely need to go to the SourceForge project site if you want to get on the mailing list, submit a bug report or feature request, or see/get the latest drafts.
On Unix-like systems, you install it in the usual manner. First, uncompress the file and become root to install:
gunzip flawfinder-*.tar.gz
tar xvf flawfinder-*.tar
cd flawfinder-*
su

Then install. You can install to the default installation directory, /usr/local, which will put the program in /usr/local/bin and the manual inside /usr/local/man, by invoking:
make install

You can override these defaults using standard GNU conventions by setting, on the make command line, INSTALL_DIR (normally /usr/local), INSTALL_DIR_BIN (usually INSTALL_DIR/bin), and/or INSTALL_DIR_MAN (usually INSTALL_DIR/man). For example, to install the binary in /usr/bin, and the manual pages inside /usr/share/man (as a Red Hat Linux system would tend to be configured), do this:
make INSTALL_DIR=/usr INSTALL_DIR_MAN=/usr/share/man install
Cygwin systems (for Microsoft Windows) need to set “PYTHONEXT=.py” in the make command, like this:
make PYTHONEXT=.py install
See the installation instructions for more information.
Flawfinder is now hosted on SourceForge. You can discuss how to use or improve the tool on its mailing list, and you can see the latest drafts on the Subversion version control system.
If you have a general question or issue, use the mailing list. If you have a specific bug, especially if you have a patch, use git or the issue tracker.
Flawfinder works by using a built-in database of C/C++ functions with well-known problems, such as buffer overflow risks (e.g., strcpy(), strcat(), gets(), sprintf(), and the scanf() family), format string problems ([v][f]printf(), [v]snprintf(), and syslog()), race conditions (such as access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), and mktemp()), potential shell metacharacter dangers (most of the exec() family, system(), popen()), and poor random number acquisition (such as random()). The good thing is that you don’t have to create this database - it comes with the tool.
Flawfinder then takes the source code text, and matches the source code text against those names, while ignoring text inside comments and strings (except for flawfinder directives). Flawfinder also knows about gettext (a common library for internationalized programs), and will treat a constant string passed through a gettext call as though it were still a constant string; this reduces the number of false hits in internationalized programs.
Flawfinder produces a list of “hits” (potential security flaws), sorted by risk; by default the riskiest hits are shown first. This risk level depends not only on the function, but on the values of the parameters of the function. For example, constant strings are often less risky than fully variable strings in many contexts. In some cases, flawfinder may be able to determine that the construct isn’t risky at all, reducing false positives.
Flawfinder gives better information - and better prioritization - than simply running “grep” on the source code. After all, it knows to ignore comments and the insides of strings, and it will also examine parameters to estimate risk levels. Nevertheless, flawfinder is fundamentally a naive program; it doesn’t even know about the data types of function parameters, and it certainly doesn’t do control flow or data flow analysis (see the references below to other tools, like SPLINT, which do deeper analysis). I know how to do that, but doing that is far more work; sometimes all you need is a simple tool. Also, because it’s simple, it doesn’t get as confused by macro definitions and other oddities that more sophisticated tools have trouble with. Flawfinder can analyze software that you can't build; in some cases it can analyze files you can't even locally compile.
Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found. As noted above, flawfinder doesn’t really understand the semantics of the code at all - it primarily does simple text pattern matching (ignoring comments and strings). Nevertheless, flawfinder can be a very useful aid in finding and removing security vulnerabilities.
The documentation points out various security issues when using the tool. In general, you should analyze a copy of the source code you’re evaluating. Also, do not load or diff hitlists from untrusted sources. Hitlists are implemented using Python’s pickle facility, which is not intended for untrusted input.
First, create a “unified diff” file comparing the older version to the current version (using git diff, GNU diff with the -u option, or Subversion’s diff). Then run flawfinder on the newer version, and give it the --patch (-P) option pointing to that unified diff.
This works because flawfinder will do its job, but it will only report hits that relate to lines that changed in the unified diff (the patch file). Flawfinder will read the unified diff file, which tells flawfinder what files changed and what lines in those files were changed. More specifically, it uses “Index:” or “+++” lines to determine the files that changed, it uses the line numbers in “@@” regions to get the chunk line number ranges, and it then uses the initial +, -, and space after that to determine which lines really changed.
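You can see exactly these markers by generating a small unified diff yourself (the file names here are throwaway examples):

```shell
# Two tiny versions of a file; the names are illustrative.
printf 'int x;\n' > old.c
printf 'int x;\nint y;\n' > new.c

# -u produces the unified format that --patch expects.
# "---"/"+++" name the files; "@@ -a,b +c,d @@" gives the changed line
# ranges; a leading "+", "-", or space marks added, removed, and context
# lines respectively.
diff -u old.c new.c > changes.patch || true   # diff exits 1 when files differ

cat changes.patch
# Then: flawfinder --patch changes.patch .   (reports only hits on changed lines)
```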
One challenge is statements that span lines; a statement might start on one line, yet have a change that adds a vulnerability in a later line, and depending on how the vulnerability is reported it might get chopped off. Currently flawfinder handles this by showing vulnerabilities that are reported one line before or after any changed line - which seems to be a reasonable compromise.
Note that the problem with this approach is that it won’t notice if you remove code that enforces security requirements. Flawfinder doesn’t have that kind of knowledge anyway, so that’s not a big deal in this case.
An example of horrific tool misuse is disabling vulnerability reports without (1) fixing the vulnerability, or (2) ensuring that it is not a vulnerability. It’s publicly known that RealNetworks did this with flawfinder; I suspect others have misused tools this way. I don’t mean to beat on RealNetworks particularly, but it’s important to apply lessons learned from others, and unlike many projects, the details of their vulnerable source code are publicly available (and I applaud them for that!). As noted in iDEFENSE Security Advisory 03.01.05 on RealNetworks RealPlayer (CVE-2005-0455), a security vulnerability was in this pair of lines:
char tmp; /* Flawfinder: ignore */
strcpy(tmp, pScreenSize); /* Flawfinder: ignore */
This means that flawfinder did find this vulnerability, but instead of fixing it, someone added the “ignore” directive to the code so that flawfinder would stop reporting the vulnerability. But an “ignore” directive simply stops flawfinder from reporting the vulnerability - it doesn’t fix the vulnerability! The intended use of this directive is to add it once a reviewer has determined that the report is definitely a false positive, but in this case the tool was reporting a real vulnerability. The same thing happened again in iDefense Security Advisory 06.23.05, aka CVE-2005-1766, where the vulnerable line was:
sprintf(pTmp, /* Flawfinder: ignore */

And a third vulnerability with the same issue was reported still later in iDefense Security Advisory 06.26.07, RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability, aka CVE-2007-3410, where the vulnerable line was:
strncpy(buf, pos, len); /* Flawfinder: ignore */
This is not to say that RealNetworks is a fool or set of fools. Indeed, I believe many organizations, not just RealNetworks, have misused tools this way. My thanks to RealNetworks for publicly admitting their mistake - it allows others to learn from their mistake! My specific point is that you can’t just add comments with “ignore” directives and expect that the software is suddenly more secure. Do not add “ignore” directives until you are certain that the report is a false positive.
This kind of problem can easily happen in organizations that say “run scanning tools until there are no more warnings” but don’t later review the changes that were made to eliminate the warnings. If warnings are eliminated because code is changed to eliminate vulnerabilities, that’s great! General-purpose scanning tools like flawfinder will have false positive reports, though; it’s easy to create a tool without false positives, but only by failing to report many possible vulnerabilities (some of which really will be vulnerabilities). The obvious answer if you want a broader tool is to allow developers to examine the code, and if they can truly justify that it’s a false positive, document why it is a false positive (say in a comment near the report) and then add a “Flawfinder: ignore” directive. But you need to really justify that the report is a false positive; just adding an “ignore” directive doesn’t fix anything! Sometimes it’s easier to fix a problem that may or may not be a vulnerability, instead of ensuring that it’s a false positive - the OpenBSD developers have been doing this successfully for years, since if complicated code isn’t an exploitable vulnerability yet, a tiny change can often turn such fragile code into a vulnerability.
If you’re in an organization using a scanning tool like this, make sure you review every change caused by a vulnerability report. Every change should be either (1) truly fixed or (2) correctly and completely justified as a false positive. I think organizations should require any such justification to be in comments next to the “ignore” directive. If the justification isn’t complete, don’t mark it with an “ignore” directive. And before developers even start writing code, get them trained on how to write secure code and what the common mistakes are; this material is not typically covered in university classes or even on the job.
The “ignore” directives are a very useful mechanism - once you have done the analysis, having to re-do the analysis for no reason could use up so much time that it would prevent you from resolving real vulnerabilities. Indeed, many people wouldn’t use source scanning tools at all if they couldn’t insert “ignore” directives when they are done. The result would be code with vulnerabilities that would be found by such tools. But any mechanism can be misused, and clearly this one has been.
Flawfinder does include a weapon against useless “ignore” directives - the --neverignore (-n) option. This option is the “ignore the ignores” option - any “ignore” directives are ignored. But in the end, you still need to fix vulnerabilities or ensure that reported vulnerabilities aren’t really vulnerabilities at all.
Another caution: even if a tool tells you there’s a problem, never “fix” a bug you don’t understand. For example, the Debian folks ran a tool that found a purported problem in OpenSSL; it wasn’t really a problem, and their “fix” actually created a security problem.
More generally, I am not of the opinion that analysis tools are always “better” than any other method for creating secure software. I don’t really believe in a silver bullet, but if I had to pick one, “developer education” would be my silver bullet, not analysis tools. Again, a “fool with a tool is still a fool”. I believe that when you need secure software, you need to use a set of methods, including education, languages/tools where vulnerabilities are less likely, good designs (e.g., ones with limited privilege), human review, fuzz testing, and so on; a source scanning tool is just a part of it. Gary McGraw similarly notes that simply passing a scanning tool does not mean perfect security, e.g., tools can’t normally find “didn’t ask for authorization when it should have”.
That said, I think tools that search source or binaries for vulnerabilities usually need to be part of the answer if you’re trying to create secure software in today’s world. Customers/users are generally unwilling to reduce the amount of functionality they want to something we can easily prove correct, and formally proving programs correct has not scaled well yet (though I commend the work to overcome this). No programming language can prevent all vulnerabilities from being written in the first place, even though selecting the right programming language can be helpful. Human review is great, but it’s costly in many circumstances and it often misses things that tools can pick up. Execution testing (like fuzz testing) only checks a minuscule part of the input space. So we often end up needing source or binary scanning tools as part of the process, even though current tools have a HUGE list of problems.... because NOT using them is often worse. Other methods may find the vulnerability, but other methods typically don’t scale well.
When you think about it, that makes sense. If a program has a high hit density, it suggests that its developers often use very dangerous constructs that are hard to use correctly and often lead to vulnerabilities. Even if the hits themselves aren’t vulnerabilities, developers who repeatedly use dangerous constructs will sooner or later make the final mistake and allow a vulnerability. It’s like a high-wire act -- even talented people will eventually fall if they walk on it long enough.
This appeared to break down on very small programs (less than 10K); a program much smaller than its competition might have a larger hit density yet still be secure. I speculate that because density is a fraction, when a program is much smaller than its rivals, density is dramatically forced up (because size is in the denominator). Yet programs that are made dramatically smaller are much easier to evaluate directly, so direct review is more likely to counter vulnerabilities in this case.
Until we figure out how to merge these dissimilar projects, I recommend that distributions and software development websites include both programs. Each has advantages that the other doesn’t. For example, at the time of this writing Flawfinder is easier to use - just give flawfinder a directory name, and flawfinder will enter the directory recursively, figure out what needs analyzing, and analyze it. Other advantages of flawfinder are that it can handle internationalized programs (it knows about special calls like gettext(), unlike RATS), flawfinder can report column numbers (as well as line numbers) of hits, and flawfinder can produce HTML-formatted results. The automated recursion and HTML-formatted results make flawfinder especially nice for source code hosting systems. The flawfinder database includes a number of entries not in RATS, so flawfinder will find things RATS won’t. In contrast, RATS can handle other programming languages and runs faster. Both projects are essentially automated advisors, and having two advisors look at your program is likely to be better than using only one (it’s somewhat analogous to having two people review your code for security).
Many articles have reviewed or mentioned flawfinder, as well as related tools. Examples include:
A small article describing the results of examining Windows 2000, Linux and OpenBSD source code using Flawfinder. “OpenBSD is ahead, Flawfinder finds a surprisingly small number of potentially dangerous constructs. The source code audit by the OpenBSD team seems to pay out. Additionally, OpenBSD uses the secure strlcpy/strlcat by Todd C. Miller instead of strcpy etc.”
Practical Code Auditing by Lurene Grenier (December 13, 2002) briefly discusses simple approaches that can be performed for manual auditing (she works on the OpenBSD project). It does note that you can “grep” for certain kinds of problems; flawfinder is essentially a smart grep that already knows what to look for, so it could easily fit into the process at those points. The paper also specifically notes some of the things that are hard to grep for (which are the kinds of things that flawfinder would miss).
Secure Programming with Static Analysis (Addison-Wesley Software Security Series) by Brian Chess and Jacob West discusses static analysis tools in great detail.
Of course, there are many programs that analyze programs, particularly those that work like “lint”. There is a set of papers about the Stanford checker which you may find interesting.
There are many other static analysis tools, and many of them look for security vulnerabilities. NIST’s Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a general list of static analysis tools focused on finding security vulnerabilities. John Carmack (founder and former technical director of Id Software)’s post “Static Code Analysis” discusses static analysis in general: “Automation is necessary... I feel the success that we have had with code analysis has been clear enough that I will say plainly it is irresponsible to not use it.” Carmack also quotes Dave Revell, “the more I push code through static analysis, the more I’m amazed that computers boot at all.”
As noted above, RATS is the project most similar to flawfinder; it uses the same basic technique, and is released under the GPL. If you’re looking for another FLOSS tool to help you find security problems in your C programs, for now I particularly suggest that you look at SPLINT.
I’m a big fan of using multiple tools to find security vulnerabilities. Flawfinder is intentionally simple, easy-to-use, and easy-to-understand. It is certainly not the be-all of tools, but that is not the point. My hope is that flawfinder will encourage people to start looking into the various tools available, and trying some out. Software is complex; we need tools to help us find vulnerabilities ahead-of-time in software we develop.
Other OSS/FS tools/projects that statically analyze programs for security issues (besides flawfinder) include:
cppcheck -a -f ./ 2> cpperr.txt &
There is a similar program, ITS4 (from Cigital), but it isn’t open source software or Free Software (OSS/FS) as defined above, and as far as I know it isn’t maintained.
Of course, you could go the other way: Instead of looking for specific common weaknesses, you could prove that the program actually meets (or does not meet) certain requirements. If you’re interested in open source software tools related to proving programs correct, see High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods / Software Verification and the Open Proofs website.
There are various suppliers that sell proprietary programs that do this kind of static analysis. These include:
There are of course many companies that sell the service of performing security reviews of source code for a fee; they generally use a combination of tools and expertise. These include Secure Software (developer of RATS) and Aspect Security, backers of the Open Web Application Security Project (OWASP).
Arian Evans has announced that he’s working on a list of such tools, and intends to post that list at OWASP; by the time you read this, it may already be available. NIST’s Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a list of static analysis tools, along with a list of related papers and projects. Common Weakness Enumeration (CWE) is developing a standard set of definitions of common weaknesses and their interrelationships.
Other places list security tools, but not really static analysis tools; these include the Talisker Security Wizardry Portal and insecure.org’s survey of the top 75 tools.
Java2s has a list of Java-related tools for source analysis which may be of interest. They make the common mistake of saying “commercial” when they mean “proprietary” (OSS is commercial software too).
There are a vast number of static analysis tools that check for style or for possible errors, which might happen to catch security problems. They’re usually not focused on security issues, though, and there are too many to list anyway, so I don’t try to list them all here.
This list can’t possibly be exhaustive, sorry. My goal here isn’t to provide all possible alternatives, merely to provide useful information pointing to at least some other tools and services. My goal is mainly so you can have an idea of what’s going on in the field.
Many people have developed “language subsets” in an effort to reduce the risk of errors. In concept, these can be really helpful, especially for languages like C which are easy to abuse. Such language subsets should be automated by static analysis tools; then, it’s easy to check if you’ve met the rules. But these only have value if the subset is well-designed. In particular, the subset should be designed to minimize cases where perfectly acceptable constructs are forbidden (essentially false positives), and should maximize detection of actual failures (best done through analysis of real-world failures).
One of the better-known subsets for C is “MISRA C”. Les Hatton has published a detailed and devastating critique of MISRA C (of both MISRA C 1998 and the later MISRA C 2004). Fundamentally, MISRA C’s development was not based on real data on failures, but instead on ad hoc rule creation; some of the rules are absolutely full of false positives, and many have no value. See Les Hatton’s papers, including those showing why MISRA C is badly flawed. His paper Language subsetting in an industrial context: a comparison of MISRA C 1998 and MISRA C 2004 is “A comparison of real to false positive ratios between the 1998 and 2004 versions of the MISRA C guidelines on a common population of 7 commercial software packages”, and it has devastating conclusions: “On these results, MISRA C 2004 seems a step backwards and attempts at compliance with either document are essentially pointless until something is done about improving the wording of the standard and its match with existing experimental data. In its current form, the complexity and noisiness of the rules suggest that only the tool vendors are likely to benefit.”
An additional problem with MISRA C is that it is not open access (aka Internet-published). That is, you can’t just use Google to find it and then immediately view its contents (without registering or paying for the contents). That makes it hard to apply. Purported standards that aren’t open access are becoming increasingly pointless; the IETF, OASIS, W3C, Ecma, and many other bodies already publish their standards openly.
I’m a fan of Les Hatton’s work, and I particularly like his paper on his EC-- ruleset. The EC-- ruleset is Internet-published, and is much smaller, so it’s actually easier to apply than MISRA C. More importantly, though, the EC-- ruleset appears to be much better matched to the real world for finding failures, so I strongly prefer EC-- over MISRA C. Here were his rules for creating the EC-- ruleset; once you look at this list, I think you’ll see why:
Additional rules specific to security would be a good idea, too, if they’re well-crafted. The CERT C Secure Coding Standard is an effort to craft rules for developing secure C programs. I haven’t had time to evaluate it in-depth, though, so I don’t know what its quality is. Another document you might examine is Microsoft’s Security Development Lifecycle (SDL) Banned Function Calls.
Static analysis tools are unlikely to catch all problems in practice; they’re best complemented with other approaches. Certainly, having humans look at code is wonderful (this is a manual static analysis approach).
Dynamic analysis tools send data to executing programs as a way to possibly find problems. Many tools are based on the idea of sending random or partly random data for testing; some “randomize” but try to concentrate on patterns most likely to reveal security problems. Dynamic analysis tools include:
There are lots of scanning tools for checking for already known specific vulnerabilities, and sometimes they help. Nessus is a widely-used vulnerability assessment tool. Nikto scans web servers for common problems.
There are many, many other tools and techniques available; I can’t list all of them. You can find a few leads from the Top 75 Security Tools survey at insecure.org, and from ISP Planet’s article Web Vulnerability Assessment Tools.
You might want to look at my Secure Programming HOWTO web page, or some of my other writings such as Open Standards and Security, Open Source Software and Software Assurance (Security), and High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).
You can also view my home page.