This is the main web site for flawfinder, a simple program that examines C/C++ source code and reports possible security weaknesses (“flaws”) sorted by risk level. It’s very useful for quickly finding and removing at least some potential security problems before a program is widely released to the public. See “how does Flawfinder work?”, below, for more information on how it works. Flawfinder is officially Common Weakness Enumeration (CWE)-compatible.

Flawfinder is specifically designed to be easy to install and use. After installing it, at a command line just type:

    flawfinder directory_with_source_code

Flawfinder works on Unix-like systems (it’s been tested on GNU/Linux), and on Windows by using Cygwin. It requires Python 2 to run (it should work on version 2.5 or later, but it is tested using version 2.7).

Please take a look at other static analysis tools for security, too. One reason I wrote flawfinder was to encourage using static analysis tools to find security vulnerabilities.

Sample Output

If you’re curious what the results look like, here are some sample outputs:

  1. The actual text output (when allowing all potential vulnerabilities to be displayed)
  2. The actual HTML output, with context information. This output uses the “--context” option; the text of the risky line is included in the output, which some people find useful. Note that you can use your own web browser to display the results!
All of these results came from analyzing this test C program.


Flawfinder is released under the GNU General Public License (GPL) version 2 or later, and thus is open source software (as defined by the Open Source Definition) and Free Software (as defined by the Free Software Foundation’s GNU project). Feel free to see Open Source Software / Free Software (OSS/FS) References or Why OSS/FS? Look at the Numbers! for more information about OSS/FS.


Others have found it useful. Here are a few testimonials that it’s received over time:

  1. I just installed the 0.21 version of Flawfinder. I tried a few different code checking tools and it’s by far the friendliest to use. - Darryl Luff
  2. I just sent tons of C/C++ source through flawfinder 1.0. Thanks for the tool, it found several places that I have now fixed. - Joerg Beyer
  3. Thank you for flawfinder! It has helped me in many ways over the last year, for which I am truly [grateful]! - Elfyn McBratney
  4. The other day I was about to clean some old code. After receiving 17K lines of mixed C/C++ I realized that running some kind of source code analyser would be a good idea. I downloaded a whole bunch’of’em (tm), but the only tool that just plain worked on the first run was Flawfinder. Easy to use, no hazzles with strange parameters or configfiles! Instead of learning new software I could concentrate on what I wanted, namely to get down’n’dirty with the code. Thanks! - Jon Björkebäck, developer, Sweden
  5. flawfinder is a good tool for finding potential security issues, and I’ve been happily using it for a few months now. - Steve Kemp, Debian Security Audit Project, 2004-05-22
  6. I would like to thank you for this awesome piece of software. We are using it in our project Scribus (scribus.net) for a few days. It’s very helpful for us. cheers! - Petr Vanek, developer, 2005-12-10
  7. thanx for this great tool. It’s working da*n good. I’m using it against wireshark [previously named ethereal] and it is really useful to track potential misuse of C functions. - Sebastien Tandel, developer, 2007-01-10
  8. “Hurra FlawFinder ! FlawFinder is the greatest software of the World. We are fans ! With FlawFinder we never have buffer overflow... With FlawFinder we always find FlawFlaw to make 300 000 getinfos in 300 seconds ! FlawFinder is the Kikipédia of the day.” - Christophe JUILLET, 2008-10-08
  9. “flawfinder is fast and stable” - cameronhansen (2012-10-28)
  10. “works great” - nicolascook (2013-02-14)
  11. “Great tool” - jonahbishop (2013-01-20)


Flawfinder comes with a simple manual describing how to use it. If you’re not sure you want to take the plunge to install the program, you can just look at the documentation first, which discusses how to use it and how it supports the CWE. The documentation is available in the following formats:

Using a pre-packaged version of flawfinder

Many Unix-like systems have a package already available to them, including Fedora, Debian, and Ubuntu. Debian and Ubuntu users can install flawfinder using apt-get install flawfinder (as usual); Fedora users can use yum install flawfinder. Cygwin also includes flawfinder. It’s also available in many other distributions. Flawfinder is available via FreeBSD’s Ports system (see this FreeBSD ports query for flawfinder and flawfinder info for security-related ports). OpenBSD includes flawfinder in its “ports”. NetBSD users can simply use NetBSD’s pkgsrc to install flawfinder (my thanks to Thomas Klausner for doing the flawfinder NetBSD packaging). The Fink project, which packages OSS/FS for Darwin and Mac OS X, has a Fink flawfinder package, so users of those systems may find that an easy way to get flawfinder.

Downloading and Installing

If there’s no package available to you, or it’s old, you can download and install flawfinder directly. The current version of flawfinder is 1.31. If you want to see how it’s changed, view its ChangeLog. You can even go look at the flawfinder source code directly. We assume you have a Unix-like system such as a Linux-based system. If you use Windows, install Cygwin first, then install flawfinder on top of Cygwin.

First, download it. You can get the current released version of flawfinder in .tar.gz (tarball) format here. You can also get flawfinder by visiting the SourceForge flawfinder project page, in particular its Files section. You definitely need to go to the SourceForge project site if you want to get on the mailing list, submit a bug report or feature request, or see/get the latest drafts.

On Unix-like systems, you install it in the usual manner. First, uncompress the file and change into the resulting directory:

  tar xvzf flawfinder-*.tar.gz
  cd flawfinder-*
Then install. Typically you would do this (omit "sudo" if you are root):
  sudo make prefix=/usr install
You can override these defaults using standard GNU conventions. If you omit the "prefix=/usr" setting, it will install under the default prefix /usr/local. You can set bindir and mandir to set their specific locations. Cygwin systems (for Microsoft Windows) need to set “PYTHONEXT=.py” in the make command, like this:
  sudo make PYTHONEXT=.py install

See the installation instructions for more information.

Joining the flawfinder community

Flawfinder is now hosted on SourceForge. You can discuss how to use or improve the tool on its mailing list, and you can see the latest drafts on the Subversion version control system.

If you have a general question or issue, use the mailing list. If you have a specific bug, especially if you have a patch, use git or the issue tracker.


Flawfinder is written in Python, to simplify the task of writing and extending it. Python code is not as fast as C code, but for the task I believe it’s just fine. Flawfinder version 1.31 averaged an analysis speed of 45,126 lines/second when it examined the Linux kernel version 3.16 (released 2014-08-03) on an Intel Core2 Duo CPU E8400 @ 3.00GHz (each CPU running at 2GHz) running Fedora Linux version 20. That is because it examined 17,135,214 lines in 36,859 files in approximately 379.72 seconds (less than 6.5 minutes). The physical Source Lines of Code (SLOC) count was lower: 12,237,248. The Linux kernel is not the best test case for utility, since flawfinder is designed for examining application code, but it is a great test for showing that flawfinder can examine larger programs in a relatively short time. In another test, flawfinder 1.28 averaged 24,475 lines/second on a 2.8GHz laptop under Cygwin; Cygwin on Windows tends to be much slower than Linux, but even on Cygwin flawfinder has a reasonable speed. Flawfinder 1.20 and later normally report their speed (in analyzed lines/second) if you’re curious. The reported speed is measured from when the program starts running, and does not include the fixed Python start-up time.
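The quoted throughput is simply total lines divided by elapsed time; a quick check of the arithmetic reported above:

```python
# Sanity check of the analysis-speed figures quoted above.
lines_examined = 17_135_214   # lines examined in the Linux kernel 3.16 test
elapsed_seconds = 379.72      # reported analysis time

lines_per_second = lines_examined / elapsed_seconds
print(round(lines_per_second))      # -> 45126 lines/second

minutes = elapsed_seconds / 60
print(round(minutes, 1))            # -> 6.3 (i.e., less than 6.5 minutes)
```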

How does Flawfinder Work?

Flawfinder works by using a built-in database of C/C++ functions with well-known problems, such as buffer overflow risks (e.g., strcpy(), strcat(), gets(), sprintf(), and the scanf() family), format string problems ([v][f]printf(), [v]snprintf(), and syslog()), race conditions (such as access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), and mktemp()), potential shell metacharacter dangers (most of the exec() family, system(), popen()), and poor random number acquisition (such as random()). The good thing is that you don’t have to create this database - it comes with the tool.

Flawfinder then takes the source code text, and matches the source code text against those names, while ignoring text inside comments and strings (except for flawfinder directives). Flawfinder also knows about gettext (a common library for internationalized programs), and will treat constant strings passed through gettext as though they were constant strings; this reduces the number of false hits in internationalized programs.
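The lexical approach described above can be sketched in a few lines of Python. This is a toy illustration, not flawfinder’s actual implementation: it strips comments and string literals, then matches a tiny assumed database of risky names and sorts hits by risk:

```python
import re

# Tiny stand-in for flawfinder's database: name -> risk level (assumed values).
RISKY = {"strcpy": 4, "sprintf": 4, "system": 4, "random": 3}

def strip_comments_and_strings(code):
    """Remove /* ... */ and // comments, and empty out string literals."""
    code = re.sub(r"/\*.*?\*/", " ", code, flags=re.DOTALL)
    code = re.sub(r"//[^\n]*", " ", code)
    code = re.sub(r'"(?:\\.|[^"\\])*"', '""', code)
    return code

def find_hits(code):
    """Return (line number, name, risk) for each risky call, riskiest first."""
    hits = []
    cleaned = strip_comments_and_strings(code)
    for lineno, line in enumerate(cleaned.splitlines(), 1):
        for name, risk in RISKY.items():
            if re.search(r"\b%s\s*\(" % name, line):
                hits.append((lineno, name, risk))
    return sorted(hits, key=lambda h: -h[2])

sample = '''int main(int argc, char **argv) {
  char buf[16];
  /* strcpy(buf, "commented out"); */
  strcpy(buf, argv[1]);
  return 0;
}'''
print(find_hits(sample))  # -> [(4, 'strcpy', 4)]; the commented-out call is ignored
```

Note how the strcpy() inside the comment produces no hit, which is exactly the behavior described above.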

Flawfinder produces a list of “hits” (potential security flaws), sorted by risk; by default the riskiest hits are shown first. This risk level depends not only on the function, but on the values of the parameters of the function. For example, constant strings are often less risky than fully variable strings in many contexts. In some cases, flawfinder may be able to determine that the construct isn’t risky at all, reducing false positives.

Flawfinder gives better information - and better prioritization - than simply running “grep” on the source code. After all, it knows to ignore comments and the insides of strings, and it will also examine parameters to estimate risk levels. Nevertheless, flawfinder is fundamentally a naive program; it doesn’t even know about the data types of function parameters, and it certainly doesn’t do control flow or data flow analysis (see the references below to other tools, like SPLINT, which do deeper analysis). I know how to do that, but doing that is far more work; sometimes all you need is a simple tool. Also, because it’s simple, it doesn’t get as confused by macro definitions and other oddities that more sophisticated tools have trouble with. Flawfinder can analyze software that you can't build; in some cases it can analyze files you can't even locally compile.

Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found. As noted above, flawfinder doesn’t really understand the semantics of the code at all - it primarily does simple text pattern matching (ignoring comments and strings). Nevertheless, flawfinder can be a very useful aid in finding and removing security vulnerabilities.

The documentation points out various security issues when using the tool. In general, you should analyze a copy of the source code you’re evaluating. Also, do not load or diff hitlists from untrusted sources. Hitlists are implemented using Python’s pickle facility, which is not intended for untrusted input.
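The warning about pickle is worth taking seriously: unpickling can invoke attacker-chosen callables. Here is a minimal, deliberately benign demonstration of the mechanism (illustrative only; this is not flawfinder code, and the names here are made up):

```python
import pickle

log = []

def record(msg):
    # Benign stand-in: an attacker could instead arrange for
    # os.system (or worse) to be called during unpickling.
    log.append(msg)

class Evil:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call record(...)".
        return (record, ("code executed during unpickling",))

payload = pickle.dumps(Evil())   # what a malicious hitlist file could contain
pickle.loads(payload)            # the call happens as a side effect of loading
print(log)  # -> ['code executed during unpickling']
```

This is why hitlists should only be loaded or diffed when they come from a trusted source.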

Reviewing patches

Sometimes you don’t want to review an entire program - you only want to review the set of changes that were made to a program. If the changes are well-localized (e.g., to a particular section of a file), this is trivial to do by hand, but it’s harder otherwise. Flawfinder 1.27 added automated support so that you can review only the changes in a program.

First, create a “unified diff” file comparing the older version to the current version (using git diff, GNU diff with the -u option, or Subversion’s diff). Then run flawfinder on the newer version, and give it the --patch (-P) option pointing to that unified diff.

This works because flawfinder analyzes the newer version as usual, but only reports hits that relate to lines changed in the unified diff (the patch file). Flawfinder reads the unified diff file, which tells it which files changed and which lines in those files changed. More specifically, it uses “Index:” or “+++” lines to determine the files that changed, it uses the line numbers in “@@” regions to get the chunk line number ranges, and it then uses the initial +, -, and space characters after that to determine which lines really changed.

One challenge is statements that span lines; a statement might start on one line, yet have a change that adds a vulnerability in a later line, and depending on how the vulnerability is reported it might get chopped off. Currently flawfinder handles this by showing vulnerabilities that are reported one line before or after any changed line - which seems to be a reasonable compromise.
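The mechanics described above - reading “+++” headers, “@@” hunk line numbers, the leading +/-/space markers, and the one-line fuzz window - can be sketched like this (a simplified, assumed implementation, not flawfinder’s actual parser):

```python
import re

def changed_lines(diff_text):
    """Map each changed file to the set of changed line numbers (new version)."""
    changed = {}
    fname, lineno = None, 0
    for line in diff_text.splitlines():
        if line.startswith("+++ "):
            fname = line[4:].split("\t")[0]
            changed.setdefault(fname, set())
        elif line.startswith("@@"):
            # "@@ -old,count +new,count @@": take the new-file start line.
            m = re.match(r"@@ -\d+(?:,\d+)? \+(\d+)", line)
            lineno = int(m.group(1))
        elif line.startswith("+"):
            changed[fname].add(lineno)
            lineno += 1
        elif line.startswith(" "):
            lineno += 1
        # Removed lines ("-") exist only in the old file: don't advance.
    return changed

def relevant(changed, fname, hit_line, fuzz=1):
    """Report a hit if it is within `fuzz` lines of any changed line."""
    return any(abs(hit_line - n) <= fuzz for n in changed.get(fname, set()))

diff = """--- a/foo.c
+++ b/foo.c
@@ -10,3 +10,4 @@
 int f(void) {
+  strcpy(dst, src);
   return 0;
 }
"""
ch = changed_lines(diff)
print(ch)                           # -> {'b/foo.c': {11}}
print(relevant(ch, "b/foo.c", 12))  # -> True (one line after a change)
```

The `fuzz=1` parameter corresponds to the "one line before or after" compromise for statements that span lines.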

Note that the problem with this approach is that it won’t notice if you remove code that enforces security requirements. Flawfinder doesn’t have that kind of knowledge anyway, so that’s not a big deal in this case.

A Fool with a Tool is still a Fool

Any static analysis tool, such as Flawfinder, is merely a tool. No tool can substitute for human thought! In short, “a fool with a tool is still a fool”. It’s a mistake to think that analysis tools (like flawfinder) are a substitute for security training and knowledge. Developers - please read documents like my Secure Programming book so you’ll understand the vulnerabilities that the tool is trying to find! Organizations - please make sure your developers understand how to develop secure software (including learning about the common mistakes past developers have made), before having them develop software or use static analysis tools.

An example of horrific tool misuse is disabling vulnerability reports without (1) fixing the vulnerability, or (2) ensuring that it is not a vulnerability. It’s publicly known that RealNetworks did this with flawfinder; I suspect others have misused tools this way. I don’t mean to beat on RealNetworks particularly, but it’s important to apply lessons learned from others, and unlike many projects, the details of their vulnerable source code are publicly available (and I applaud them for that!). As noted in iDEFENSE Security Advisory 03.01.05 on RealNetworks RealPlayer (CVE-2005-0455), a security vulnerability was in this pair of lines:

   char tmp[256]; /* Flawfinder: ignore */
   strcpy(tmp, pScreenSize); /* Flawfinder: ignore */

This means that flawfinder did find this vulnerability, but instead of fixing it, someone added the “ignore” directive to the code so that flawfinder would stop reporting the vulnerability. But an “ignore” directive simply stops flawfinder from reporting the vulnerability - it doesn’t fix the vulnerability! The intended use of this directive is to add it once a reviewer has determined that a report is definitely a false positive, but in this case the tool was reporting a real vulnerability. The same thing happened again in iDefense Security Advisory 06.23.05, aka CVE-2005-1766, where the vulnerable line was:

   sprintf(pTmp,  /* Flawfinder: ignore */
And a third vulnerability with the same issue was reported still later in iDefense Security Advisory 06.26.07, RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability, aka CVE-2007-3410, where the vulnerable line was:
   strncpy(buf, pos, len); /* Flawfinder: ignore */

This is not to say that RealNetworks is a fool or set of fools. Indeed, I believe many organizations, not just RealNetworks, have misused tools this way. My thanks to RealNetworks for publicly admitting their mistake - it allows others to learn from it! My specific point is that you can’t just add comments with “ignore” directives and expect that the software is suddenly more secure. Do not add “ignore” directives until you are certain that the report is a false positive.

This kind of problem can easily happen in organizations that say “run scanning tools until there are no more warnings” but don’t later review the changes that were made to eliminate the warnings. If warnings are eliminated because code is changed to eliminate vulnerabilities, that’s great! General-purpose scanning tools like flawfinder will have false positive reports, though; it’s easy to create a tool without false positives, but such a tool does so by failing to report many possible vulnerabilities (some of which will really be vulnerabilities). The obvious answer if you want a broader tool is to allow developers to examine the code, and if they can truly justify that it’s a false positive, document why it is a false positive (say in a comment near the report) and then add a “Flawfinder: ignore” directive. But you need to really justify that the report is a false positive; just adding an “ignore” directive doesn’t fix anything! Sometimes it’s easier to fix a problem that may or may not be a vulnerability than to ensure that it’s a false positive - the OpenBSD developers have been doing this successfully for years, since if complicated code isn’t an exploitable vulnerability yet, a tiny change can often turn such fragile code into a vulnerability.

If you’re in an organization using a scanning tool like this, make sure you review every change caused by a vulnerability report. Every change should be either (1) truly fixed or (2) correctly and completely justified as a false positive. I think organizations should require any such justification to be in comments next to the “ignore” directive. If the justification isn’t complete, don’t mark it with an “ignore” directive. And before developers even start writing code, get them trained on how to write secure code and what the common mistakes are; this material is not typically covered in university classes or even on the job.

The “ignore” directives are a very useful mechanism - once you have done the analysis, having to re-do the analysis for no reason could use up so much time that it would prevent you from resolving real vulnerabilities. Indeed, many people wouldn’t use source scanning tools at all if they couldn’t insert “ignore” directives when they are done. The result would be code with vulnerabilities that would be found by such tools. But any mechanism can be misused, and clearly this one has been.

Flawfinder does include a weapon against useless “ignore” directives - the --neverignore (-n) option. This option is the “ignore the ignores” option - any “ignore” directives are ignored. But in the end, you still need to fix vulnerabilities or ensure that reported vulnerabilities aren’t really vulnerabilities at all.

Another caution: if a tool tells you there’s a problem, never fix a bug you don’t understand. For example, the Debian folks ran a tool that found a purported problem in OpenSSL; it wasn’t really a problem, and their fix actually created a security problem.

More generally, I am not of the opinion that analysis tools are always “better” than any other method for creating secure software. I don’t really believe in a silver bullet, but if I had to pick one, “developer education” would be my silver bullet, not analysis tools. Again, a “fool with a tool is still a fool”. I believe that when you need secure software, you need to use a set of methods, including education, languages/tools where vulnerabilities are less likely, good designs (e.g., ones with limited privilege), human review, fuzz testing, and so on; a source scanning tool is just a part of it. Gary McGraw similarly notes that simply passing a scanning tool does not mean perfect security, e.g., tools can’t normally find “didn’t ask for authorization when it should have”.

That said, I think tools that search source or binaries for vulnerabilities usually need to be part of the answer if you’re trying to create secure software in today’s world. Customers/users are generally unwilling to reduce the amount of functionality they want to something we can easily prove correct, and formally proving programs correct has not scaled well yet (though I commend the work to overcome this). No programming language can prevent all vulnerabilities from being written in the first place, even though selecting the right programming language can be helpful. Human review is great, but it’s costly in many circumstances and it often misses things that tools can pick up. Execution testing (like fuzz testing) only checks a minuscule part of the input space. So we often end up needing source or binary scanning tools as part of the process, even though current tools have a HUGE list of problems.... because NOT using them is often worse. Other methods may find the vulnerability, but other methods typically don’t scale well.

Hit Density (Hits/KSLOC)

One of the metrics that flawfinder reports is hit density, that is, hits per thousand lines of source code. In some unpublished work, a colleague and I found that hit density is a helpful relative indicator of the likelihood of security vulnerabilities in various products. We examined some open source software, such as Sendmail and Postfix, and determined the hit density of each; the ones with higher hit density tended to be the ones with worse security records later on. And that’s even if none or few of the reported hits were clearly security vulnerabilities.
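Hit density is just a ratio. For example (with made-up numbers, purely for illustration):

```python
# Hit density as defined above: hits per thousand source lines (KSLOC).
def hit_density(hits, sloc):
    """hits: number of flawfinder hits; sloc: physical source lines of code."""
    return hits / (sloc / 1000.0)

# A hypothetical program with 120 hits in 80,000 SLOC:
print(hit_density(120, 80_000))  # -> 1.5 hits per KSLOC
```

Since the metric is relative, it is most meaningful when comparing programs of broadly similar size, as the next paragraphs discuss.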

When you think about it, that makes sense. If a program has a high hit density, it suggests that its developers often use very dangerous constructs that are hard to use correctly and often lead to vulnerabilities. Even if the hits themselves aren’t vulnerabilities, developers who repeatedly use dangerous constructs will sooner or later make the final mistake and allow a vulnerability. It’s like a high-wire act -- even talented people will eventually fall if they walk on it long enough.

This appeared to break down on very small programs (less than 10K lines); a program much smaller than its competition might have a larger hit density yet still be secure. I speculate that because density is a fraction, when a program is much smaller than its rivals, density is dramatically forced up (because size is in the denominator). Yet programs that are made dramatically smaller are much easier to evaluate directly, so direct review is more likely to counter vulnerabilities in this case.

Flawfinder and RATS

Unbeknownst to me, while I was developing flawfinder, Secure Software Solutions simultaneously developed RATS, which is also a GPL’ed source code scanner using a similar approach. We agreed to release our programs simultaneously (on May 21, 2001), and we agreed to mention each other’s programs in our announcements (you can even see the original flawfinder announcement). Now that we’ve both released our code, we hope to coordinate in the future so that there will be a single “best of breed” source code scanner that is open source / free software. Exactly how this will happen is not yet clear, so be prepared for future announcements.

Until we’ve figured out how to merge these dissimilar projects, I recommend that distributions and software development websites include both programs. Each has advantages that the other doesn’t. For example, at the time of this writing Flawfinder is easier to use - just give flawfinder a directory name, and flawfinder will enter the directory recursively, figure out what needs analyzing, and analyze it. Other advantages of flawfinder are that it can handle internationalized programs (it knows about special calls like gettext(), unlike RATS), flawfinder can report column numbers (as well as line numbers) of hits, and flawfinder can produce HTML-formatted results. The automated recursion and HTML-formatted results make flawfinder especially nice for source code hosting systems. The flawfinder database includes a number of entries not in RATS, so flawfinder will find things RATS won’t. In contrast, RATS can handle other programming languages and runs faster. Both projects are essentially automated advisors, and having two advisors look at your program is likely to be better than using only one (it’s somewhat analogous to having two people review your code for security).


Many have reviewed flawfinder or mentioned flawfinder in articles, as well as related tools. Examples include:

  1. “Code Injection in C and C++ : A Survey of Vulnerabilities and Countermeasures” by Yves Younan, Wouter Joosen, and Frank Piessens (Report CW386, July 2004, Department of Computer Science, K.U.Leuven) is a comprehensive survey of many different ways to counter vulnerabilities. Its abstract says, “... This report documents possible vulnerabilities in C and C++ applications that could lead to situations that allow for code injection and describes the techniques generally used by attackers to exploit them. A fairly large number of defense techniques have been described in literature. An important goal of this report is to give a comprehensive survey of all available preventive and defensive countermeasures that either attempt to eliminate specific vulnerabilities entirely or attempt to combat their exploitation. Finally, the report presents a synthesis of this survey that allows the reader to weigh the advantages and disadvantages of using a specific countermeasure as opposed to using another more easily.” They list a wide variety of countermeasures and describe their pros and cons. For example, they state that flawfinder, as well as RATS and ITS4, all have the advantages of “very low” comparative cost and “very low” memory cost, and that all can find vulnerabilities in the categories V1 (Stack-based buffer overflow), V2 (Heap-based buffer overflow), and V4 (Format string vulnerabilities). All have the applicability limitations A1 (Source code required) and A10 (Only protects libc string manipulation functions). All have the protection limitation P17 (False negatives are possible). That’s a fair description of the strengths and weaknesses of flawfinder and similar tools.
  2. Source Code Scanners for Better Code in Linux Journal discusses Flawfinder, RATS, and ITS4. The review noted that the version of flawfinder they used had a weakness - it didn’t automatically report static character buffers. That weakness has since been corrected; flawfinder as of version 1.20 can also report static character buffers.
  3. Clean Up Your Code with Flawfinder was one of the first announcements by others about Flawfinder
  4. Flawfinder 1.22, le chasseur de failles
  5. the UC Davis Reducing Software Security Risk through an Integrated Approach project (see the flawfinder entry)
  6. “Apparently insecure, analysis of Windows 2000, Linux and OpenBSD sourcecode” (in German), iX 04/04, p. 14. This is noted in the OpenBSD press area for March, 2004, which states that:
    A small article describing the results of examining Windows 2000, Linux and OpenBSD source code using Flawfinder. “OpenBSD is ahead, Flawfinder finds a surprisingly small number of potentially dangerous constructs. The source code audit by the OpenBSD team seems to pay out. Additionally, OpenBSD uses the secure strlcpy/strlcat by Todd C. Miller instead of strcpy etc.”
  7. “A Comparison of Publicly Available Tools for Static Intrusion Prevention”. You might also want to see ”A Comparison of Publicly Available Tools for Dynamic Buffer Overflow Prevention”)
  8. “A Comparison of Static Analysis and Fault Injection Techniques for Developing Robust System Services” by Pete Broadwell and Emil Ong, Technical Report, Computer Science Division, University of California, Berkeley, May 2002, used static source code analysis (like flawfinder) and software fault injection against some commonly-used applications. They used some static tools (like ITS4, Warnbuf, and Stumoch) and some dynamic tools (like Fuzz Lite and FIG). As with many other papers, they found that static tools found many false positives, but that “When the tool did find an error however, they were extremely useful.” This paper also has references to many other papers.
  9. Methods for the prevention, detection and removal of software security vulnerabilities by Jay-Evan J. Tevis and John A. Hamilton (Auburn University, Auburn, Alabama). This was published in the Proceedings of the 42nd annual Southeast regional conference, Huntsville, Alabama, 2004 (Pages: 197 - 202). ISBN 1-58113-870-9/04/04. The ACM digital library has a copy.
  10. Software Security for Open-Source Systems by Crispin Cowan (IEEE Security and Privacy, 2003) briefly reviews various auditing (static and dynamic) and vulnerability mitigation tools.
  11. “Characterizing the ‘Security Vulnerability Likelihood’ of Software Functions” by DaCosta, Dahn, Mancoridis, and Prevelakis gives evidence that most vulnerabilities are clustered near inputs, a plausible hypothesis. Note that flawfinder includes the ability to highlight input functions, because I expected that myself.
  12. The presentation Lexical analysis in source code scanning by Jose Nazario (Uninet Infosec 2002), 20 April, 2002, discusses his prototype tool, Czech, which uses techniques similar to flawfinder. In it, he says “source code analysis using lexical analysis techniques is worthwhile for development. However, it can only assist the developer, not replace a manual audit” (true enough!).
  13. The paper Static Analysis for Security by Gary McGraw (Cigital) and Brian Chess of Fortify Software gives an overview of static analysis tools (like flawfinder). This is the fifth article in the IEEE Security & Privacy magazine series called “Building Security In.”
  14. Will code check tools yield worm-proof software? by Robert Lemos (CNET News.com), dated May 26, 2004, gives an overview of static analysis tools for a somewhat lay audience.

Practical Code Auditing by Lurene Grenier (December 13, 2002) briefly discusses simple approaches that can be performed for manual auditing (she works on the OpenBSD project). It does note that you can “grep” for certain kinds of problems; flawfinder is essentially a smart grep that already knows what to look for, so it could easily fit into the process at those points. The paper also specifically notes some of the things that are hard to grep for (which are the kinds of things that flawfinder would miss).

Secure Programming with Static Analysis (Addison-Wesley Software Security Series) by Brian Chess and Jacob West discusses static analysis tools in great detail.

Of course, there are many programs that analyze programs, particularly those that work like “lint”. There is a set of papers about the Stanford checker which you may find interesting.

Other static analysis tools for security

There are many other static analysis tools, and many of them look for security vulnerabilities. NIST’s Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a general list of static analysis tools focused on finding security vulnerabilities. John Carmack (founder and former technical director of Id Software)’s post “Static Code Analysis” discusses static analysis in general: “Automation is necessary... I feel the success that we have had with code analysis has been clear enough that I will say plainly it is irresponsible to not use it.” Carmack also quotes Dave Revell, “the more I push code through static analysis, the more I’m amazed that computers boot at all.”

If you’re looking for another FLOSS tool to help you find security problems in your C programs in more depth, for now I particularly suggest that you look at the Clang Static Analyzer and SPLINT. You might also look at cppcheck; it has a naive analysis approach (as does flawfinder), but since it focuses on low false positives it should be easy to examine its reports. As noted above, RATS is the project most similar to flawfinder; it uses the same basic technique, and is also released under the GPL.

I’m a big fan of using multiple tools to find security vulnerabilities. Flawfinder is intentionally simple, easy-to-use, and easy-to-understand. It is certainly not the be-all of tools, but that is not the point. My hope is that flawfinder will encourage people to start looking into the various tools available, and trying some out. Software is complex; we need tools to help us find vulnerabilities ahead-of-time in software we develop.

OSS tools

Other OSS/FS tools/projects that statically analyze programs for security issues (besides flawfinder) include:

  1. Clang Static Analyzer (BSD-like license) can find bugs in C, C++, and Objective-C programs. Here are a few comments about Clang Static Analyzer from a user. It does inter-procedural analysis with constraint modeling, so it can perform far more in-depth analysis of software.
  2. OWASP LAPSE+, a static security analyzer for Java web applications that is a successor to the LAPSE project (GPL).
  3. FindSecurityBugs (LGPL) is a plug-in for FindBugs for finding security-related defects.
  4. SPLINT (GPL license). This works somewhat like lint, searching for probable errors; to really use it, developers need to add annotations to help the tool identify problems. This is a very mature, widely used program, and one you can start using right away on real programs.
  5. (Facebook) Infer (BSD license) is a static analyzer that looks for defects in Java, C, and Objective-C code. It does interprocedural analysis and is based on separation logic (a logic system with additions specifically for reasoning about programs). It focuses primarily on quality issues like resource leaks and null dereferences, rather than security issues, but it seems promising.
  6. Cqual (GPL license). “Cqual is a type-based analysis tool that provides a lightweight, practical mechanism for specifying and checking properties of C programs. Cqual extends the type system of C with extra user-defined type qualifiers. The programmer adds type qualifier annotations to their program in a few key places, and Cqual performs qualifier inference to check whether the annotations are correct. The analysis results are presented with a user interface that lets the programmer browse the inferred qualifiers and their flow paths.”
  7. MOPS (old BSD license) “MOPS is designed to check for violations of rules that can be expressed as temporal safety properties. A temporal safety property dictates the order of a sequence of operations. For example, in Unix systems, we might verify that the C program obeys the following rule: a setuid-root process should not execute an untrusted program without first dropping its root privilege.” It uses a model checking approach.
  8. RIPS does static code analysis on PHP code. It is currently written in PHP itself, though a rewrite is underway.
  9. CIL is a framework for analyzing C programs.
  10. BLAST (Berkeley Lazy Abstraction Software Verification Tool). “BLAST is a software model checker for C programs. The goal of BLAST is to be able to check that software satisfies behavioral properties of the interfaces it uses. BLAST uses counterexample-driven automatic abstraction refinement to construct an abstract model which is model checked for safety properties. The abstraction is constructed on-the-fly, and only to the required precision.” Note: The first version of BLAST was developed at UC Berkeley, but follow-on work is going on at EPFL.
  11. BOON (BSD-like license). BOON stands for “Buffer Overrun detectiON”. “BOON is a tool for automatically finding buffer overrun vulnerabilities in C source code. Buffer overruns are one of the most common types of security holes, and we hope that BOON will enable software developers and code auditors to improve the quality of security-critical programs.”
  12. ggcc is an extension of the gcc compiler suite that will do static checking of various kinds. As of May 2008 it was in early development.
  13. Stanse (GPLv2) is a static analysis framework to find bugs in C code. It’s written in Java, plus some Perl.
  14. The Spike PHP Security Audit Tool is for analyzing PHP programs.
  15. Pixy scans PHP programs for XSS and SQLI vulnerabilities; it is written in Java.
  16. Orizon is a general-purpose code analysis system (though their primary interest is security scanning). Milk is a Java source code security scanner built on top of Orizon. They are connected to OWASP.
  17. PScan (GPL license) is a source code scanner like flawfinder and RATS, but has only a limited capability. It’s really only intended to find format string problems. In contrast, both flawfinder and RATS can find format string problems and many other problems as well.
  18. The Open Source Quality Project at Berkeley is investigating tools and techniques for assuring software quality (not just security) of OSS/FS programs.
  19. Project pedantic’s Czech by Jose Nazario might become interesting, but as of April 2004 it looks like that project has halted, with only a buggy, not-yet-ready prototype so far (which is too bad!).
  20. smatch is a general-purpose tool for statically analyzing programs, and could be used to build vulnerability scanners. Indeed, there are lots of tools for statically analyzing programs in a general way; this is only one example.
  21. Sparse is a specialized static analysis tool that does additional type-checking, including checks related to security. It was originally designed to check the Linux kernel source code. Sparse finally has its own web page. More information on sparse is available from the CE Linux forum, the Quick sparse HOWTO by Randy Dunlap, and the sparse mailing list. You can download older snapshots of sparse’s code from codemonkey.
  22. Oink (including Cqual++) (BSD-like license). (a Collaboration of C++ Static Analysis Tools).
  23. Yasca (BSD license) is a “simple static analysis tool designed to analyze source code for a variety of errors. It is both a framework and an implementation, and leverages other open source code scanners where applicable.” You can also see the Yasca Github site.
  24. Frama-C (LGPL) is a framework for the development of collaborating static analyzers for the C language. Many analyzers are provided in the distribution, including a value analysis plug-in that provides variation domains for the variables of the program, and Jessie, a plug-in for computing Hoare style weakest preconditions. It provides a formal behavioral specification language for C programs named ACSL.
  25. RTL-check “RTL-check is an extensible and powerful abstract interpretation framework for static analysis of programs from a safety and security perspective. It performs analysis on RTL, which is the low-level intermediate representation generated by GCC. See the documentation section for more information.” The code is on SourceForge; a good first start to learning about it is to read Patrice Lacroix master’s thesis.
  26. PMD looks for potential problems in Java code; it is not specific to security (BSD-style license). There are other Java program analyzers too.
  27. FindBugs also looks for potential problems in Java code, and is also not specific to security (LGPL license).
  28. cppcheck searches for defects in C/C++ code. It appears to work by tokenizing source files into sequences of tokens and then matching on the tokens; thus it’s more like flawfinder and RATS, since it does not have deeper analysis available to it (e.g., it cannot do interprocedural analysis). There’s little documentation, unfortunately, but you can invoke it like this (use the force option “-f”, or it will give up on some files, and use “-a” (“all warnings”) to get all details):
      cppcheck -a -f ./ 2> cpperr.txt &
  29. PerlCritic analyzes Perl programs. It’s really a style checker, not so much a vulnerability scanner.
  30. Agnitio is a tool to manage checklists when doing manual reviews. It’s a different kind of tool, but I thought it’d be worth noting. Warning: it needs .NET and doesn’t run on Mono as of 2011-09-15 (though they are working on that).
  31. Treehydra is a GCC plugin that provides a low level JavaScript binding to GCC’s GIMPLE AST representation. Treehydra is intended for precise static analyses. Most of Treehydra is generated by Dehydra. A Dehydra script walks the GCC tree node structure using the GTY attributes present in GCC. Treehydra is included in Dehydra source, and is built when a plugin-enabled CXX is detected.
  32. Coccinelle, also known as spatch, is a source-to-source translator available under GPLv2. Valerie Henson (now Valerie Aurora) has written an article about Coccinelle, and here’s another article about it.
  33. bddbddb / bddshell. bddbddb (aka b5b) is a general-purpose tool for analyzing big programs. It lets you read in a program and then enter queries in a Prolog-like language, and its internals use the BDD datastructure to make all of this work for large programs. bddshell lets you use it interactively. These are more “tools for building analysis tools”, rather than analysis tools themselves.
  34. LLVM. LLVM is really a compiler infrastructure project, but among other things it can be used to create analysis tools. But it’s not a security analysis tool by itself.
  35. shellcheck (GNU Affero General Public License version 3) is a static analysis tool that reports on common mistakes in (Bourne) shell scripts. It is not specific to security, but several of its reports are security-related.
  36. Elsa (BSD license) is a C/C++ parser based on Elkhound. GCC also has a parser.

There is a similar program, ITS4 (from Cigital), but it isn’t open source software or Free Software (OSS/FS) as defined above, and as far as I know it isn’t maintained.

Of course, you could go the other way: Instead of looking for specific common weaknesses, you could prove that the program actually meets (or does not meet) certain requirements. If you’re interested in open source software tools related to proving programs correct, see High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods / Software Verification and the Open Proofs website.

Quasi-open tools

  1. CERT ROSE checkers check C and C++ code against a subset of the rules in the CERT Secure Coding Standards for C and C++. The ROSE checkers are themselves open source, and build on the open source ROSE, but ROSE itself is fundamentally dependent on a proprietary component (Edison Design Group’s C/C++ compiler), so the whole stack is in fact proprietary.
  2. ROSE/Compass (BSD license) is a source-to-source translator that can be used to build analysis programs. It includes Compass, which reports violations of a number of rules that relate to security.
  3. VisualCodeGrepper - this is a code security review tool for C/C++, C#, VB, PHP, Java, and PL/SQL. The EULA says it’s under the GPL, but I can’t find any actual source code. As far as I can tell it’s a lexically-based tool, which means it appears to work the same way as flawfinder, RATS, and ITS4.

Proprietary tools

There are various suppliers that sell proprietary programs that do this kind of static analysis. These include:

  1. HP/Fortify Software. Their Fortify Source Code Analysis tool is briefly described in the PCWorld article Software Searches for Security Flaws. Fortify Software is now owned by HP (as of 2010).
  2. Coverity’s SWAT tool searches for defects in general, including some security issues. It’s based on previous work on the Stanford checker, which was implemented by xgcc and the Metal language (the Stanford site has lots of interesting papers, but no code as far as I can tell -- please let me know if things are otherwise).
  3. GrammaTech develops and sells “static-analysis and program-transformation tools for C/C++ and Ada”. These include CodeSurfer/CodeSonar (R) for static analysis, and CodeSurfer/x86 for analyzing and rewriting binary executables.
  4. Veracode has tools to analyze software for security vulnerabilities (including binary analysis).
  5. Sofcheck Inspector performs static analysis on Java and Ada programs to find defects.
  6. Red Lizard Software is an Australian firm that sells Goanna, a tool that analyzes C/C++ code for software quality bugs (including some security vulnerabilities).
  7. Kestrel Institute works to “make formal methods work in practice”; they have various proprietary tools.
  8. Ounce Labs’s product Prexis. Ounce Labs was recently bought by IBM.
  9. Klocwork sells various products that do static analysis.
  10. @stake, now owned by Symantec Corporation, sells a tool called the SmartRisk (TM) Analyzer; unlike many tools, this one analyzes binary code.
  11. Parasoft sells some static analysis tools.
  12. Microsoft bought the company Intrinsa, and their product (known as PREfix) is used now to do static analysis of many of their own products.
  13. PVS-Studio is “a static analyzer that detects errors in source code of C/C++/C++0x applications.” (It’s not specifically focused on security issues). Here's an article about PVS-Studio being used to find mistakes in the Linux kernel.
  14. Parfait is a Sun research project, which has found some vulnerabilities. An interview discusses Parfait further. At the time of this writing, this is unreleased.
  15. KDM Analytics has developed some prototypes using a standards-based approach. Code is first transformed into KDM (an OMG standard), and rules are defined using SBVR (another OMG standard). Then you can search for matches/violations of rules. One neat thing is that this can analyze (in principle) either binary or source code in arbitrary languages. I know some people are modifying gcc to generate KDM. SBVR (the rule-defining language) is a restricted-English logic language, so the rules are unusually readable. To my knowledge, these are not available on the market yet.

There are of course many companies that sell the service of performing security reviews of source code for a fee; they generally use a combination of tools and expertise. These include Secure Software (developer of RATS) and Aspect Security, backers of the Open Web Application Security Project (OWASP).

Arian Evans has announced that he’s working on a list of such tools, and intends to post that list at OWASP; by the time you read this, it may already be available. NIST’s Software Assurance Metrics and Tool Evaluation (SAMATE) project posts a list of static analysis tools, along with a list of related papers and projects. Common Weakness Enumeration (CWE) is developing a standard set of definitions of common weaknesses and their interrelationships.

Other places list security tools, but not really static analysis tools; these include the Talisker Security Wizardry Portal and insecure.org’s survey of the top 75 tools.

Java2s has a list of Java-related tools for source analysis which may be of interest. They make the common mistake of saying “commercial” when they mean “proprietary” (OSS is commercial software too).

There are a vast number of static analysis tools that check for style or for possible errors, which might happen to catch security problems. They’re usually not focused on security issues, though, and there are too many to list anyway, so I don’t try to list them all here.

This list can’t possibly be exhaustive, sorry. My goal here isn’t to provide all possible alternatives, merely to provide useful information pointing to at least some other tools and services. My goal is mainly so you can have an idea of what’s going on in the field.

Be careful defining language subsets

Many people have developed “language subsets” in an effort to reduce the risk of errors. In concept, these can be really helpful, especially for languages like C which are easy to abuse. Such language subsets should be automated by static analysis tools; then, it’s easy to check if you’ve met the rules. But these only have value if the subset is well-designed. In particular, the subset should be designed to minimize cases where perfectly acceptable constructs are forbidden (essentially false positives), and should maximize detection of actual failures (best done through analysis of real-world failures).

One of the better-known subsets for C is “MISRA C”. Les Hatton has published a detailed and devastating critique of MISRA C (of both MISRA C 1998 and the later MISRA C 2004). Fundamentally, MISRA C’s development was not based on real data about failures, but instead on arbitrary rule creation; some of its rules produce huge numbers of false positives, and many have no value at all. See Les Hatton’s papers, including those showing why MISRA C is badly flawed. His paper Language subsetting in an industrial context: a comparison of MISRA C 1998 and MISRA C 2004 is “A comparison of real to false positive ratios between the 1998 and 2004 versions of the MISRA C guidelines on a common population of 7 commercial software packages”, and it has devastating conclusions: “On these results, MISRA C 2004 seems a step backwards and attempts at compliance with either document are essentially pointless until something is done about improving the wording of the standard and its match with existing experimental data. In its current form, the complexity and noisiness of the rules suggest that only the tool vendors are likely to benefit.”

An additional problem with MISRA C is that it is not open access (i.e., Internet-published). That is, you can’t just use Google to find it and then immediately view its contents without registering or paying. That makes it hard to apply. Purported standards that aren’t open access are becoming increasingly pointless; the IETF, OASIS, W3C, Ecma, and many other bodies already publish their standards openly.

I’m a fan of Les Hatton’s work, and I particularly like his paper on his EC-- ruleset. The EC-- ruleset is Internet-published, and is much smaller, so it’s actually easier to apply than MISRA C. More importantly, though, the EC-- ruleset appears to be much better matched to the real world for finding failures, so I strongly prefer EC-- over MISRA C. His paper lists the rules he used to create the EC-- ruleset; once you look at that list, I think you’ll see why.

Additional rules specific to security would be a good idea, too, if they’re well-crafted. The CERT C Secure Coding Standard is an effort to craft rules for developing secure C programs. I haven’t had time to evaluate it in-depth, though, so I don’t know what its quality is. Another document you might examine is Microsoft’s Security Development Lifecycle (SDL) Banned Function Calls.

Dynamic/Hybrid Analysis Tools

Static analysis tools are unlikely to catch all problems in practice; they’re best complemented with other approaches. Certainly, having humans look at code is wonderful (this is a manual static analysis approach).

Dynamic analysis tools send data to executing programs as a way to possibly find problems. Many tools are based on the idea of sending random or partly random data for testing; some “randomize” but try to concentrate on patterns most likely to reveal security problems. Dynamic analysis tools include:

  1. SPIKE Proxy is an OSS/FS HTTP proxy for finding security flaws in web sites. It is part of the Spike Application Testing Suite and supports automated SQL injection detection, web site crawling, login form brute forcing, overflow detection, and directory traversal detection.
  2. Brute Force Binary Tester (BFBTester) checks for single and multiple argument command line overflows and environment variable overflows, and version 2.0 can also watch for tempfile creation activity.
  3. Michal Zalewski’s mangleme (demo and source code) sends stressing random data for testing web browsers.
  4. iExploder is another tool for testing web browsers by sending random data.
  5. zzuf is a fuzzer (open source, MIT-style license). See the FOSDEM 2007 slides and Joe Barr’s article about zzuf.
  6. OWASP ZAP dynamically probes web applications, looking for indicators of unknown vulnerabilities. There are a lot of these kinds of tools.

There are lots of scanning tools for checking for already known specific vulnerabilities, and sometimes they help. Nessus is a widely-used vulnerability assessment tool. Nikto scans web servers for common problems.

There are many, many other tools and techniques available; I can’t list all of them. You can find a few leads from the Top 75 Security Tools survey at insecure.org, and from ISP Planet’s article “Web Vulnerability Assessment Tools”.

You might want to look at my Secure Programming HOWTO web page, or some of my other writings such as Open Standards and Security, Open Source Software and Software Assurance (Security), and High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).

You can also view my home page.