Flawfinder

This is the main web site for flawfinder, a simple program that examines C/C++ source code and reports possible security weaknesses (“flaws”) sorted by risk level. It’s very useful for quickly finding and removing at least some potential security problems before a program is widely released to the public. See “how does Flawfinder work?”, below, for more information on how it works. Flawfinder is officially Common Weakness Enumeration (CWE)-compatible.

Flawfinder is specifically designed to be easy to install and use. After installing it, at a command line just type:

    flawfinder directory_with_source_code

Flawfinder works on Unix-like systems (it’s been tested on GNU/Linux), and on Windows by using Cygwin. It requires Python 2 to run (it should work on version 2.5 or later, but it is tested using version 2.7).

Please take a look at other static analysis tools for security, too. One reason I wrote flawfinder was to encourage using static analysis tools to find security vulnerabilities.

Sample Output

If you’re curious what the results look like, here are some sample outputs:

  1. The actual text output (when allowing all potential vulnerabilities to be displayed)
  2. The actual HTML output, with context information. This output uses the “--context” option; the text of the risky line is included in the output, which some people find useful. Note that you can use your own web browser to display the results!

All of these results came from analyzing this test C program.

License

Flawfinder is released under the GNU General Public License (GPL) version 2 or later, and is thus open source software (as defined by the Open Source Definition) and Free Software (as defined by the Free Software Foundation’s GNU project). Feel free to see Open Source Software / Free Software (OSS/FS) References or Why OSS/FS? Look at the Numbers! for more information about OSS/FS.

Testimonials

Others have found it useful. Here are a few testimonials it has received over time:

  1. I just installed the 0.21 version of Flawfinder. I tried a few different code checking tools and it’s by far the friendliest to use. - Darryl Luff
  2. I just sent tons of C/C++ source through flawfinder 1.0. Thanks for the tool, it found several places that I have now fixed. - Joerg Beyer
  3. Thank you for flawfinder! It has helped me in many ways over the last year, for which I am truly [grateful]! - Elfyn McBratney
  4. The other day I was about to clean some old code. After receiving 17K lines of mixed C/C++ I realized that running some kind of source code analyser would be a good idea. I downloaded a whole bunch’of’em (tm), but the only tool that just plain worked on the first run was Flawfinder. Easy to use, no hazzles with strange parameters or configfiles! Instead of learning new software I could concentrate on what I wanted, namely to get down’n’dirty with the code. Thanks! - Jon Björkebäck, developer, Sweden
  5. flawfinder is a good tool for finding potential security issues, and I’ve been happily using it for a few months now. - Steve Kemp, Debian Security Audit Project, 2004-05-22
  6. I would like to thank you for this awesome piece of software. We are using it in our project Scribus (scribus.net) for a few days. It’s very helpful for us. cheers! - Petr Vanek, developer, 2005-12-10
  7. thanx for this great tool. It’s working da*n good. I’m using it against wireshark [previously named ethereal] and it is really useful to track potential misuse of C functions. - Sebastien Tandel, developer, 2007-01-10
  8. “Hurra FlawFinder ! FlawFinder is the greatest software of the World. We are fans ! With FlawFinder we never have buffer overflow... With FlawFinder we always find FlawFlaw to make 300 000 getinfos in 300 seconds ! FlawFinder is the Kikipédia of the day.” - Christophe JUILLET, 2008-10-08
  9. “flawfinder is fast and stable” - cameronhansen (2012-10-28)
  10. “works great” - nicolascook (2013-02-14)
  11. “Great tool” - jonahbishop (2013-01-20)

Documentation

Flawfinder comes with a simple manual describing how to use it. If you’re not sure you want to take the plunge and install the program, you can just look at the documentation first, which discusses how to use it and how it supports the CWE. The documentation is available in several formats.

Using a pre-packaged version of flawfinder

Many Unix-like systems already have a flawfinder package available, including Fedora, Debian, and Ubuntu. Debian and Ubuntu users can install flawfinder with apt-get install flawfinder (as usual); Fedora users can use yum install flawfinder. Cygwin also includes flawfinder, and it’s available in many other distributions. Flawfinder is available via FreeBSD’s Ports system (see this FreeBSD ports query for flawfinder and flawfinder info for security-related ports). OpenBSD includes flawfinder in its “ports”. NetBSD users can simply use NetBSD’s pkgsrc to install flawfinder (my thanks to Thomas Klausner for doing the flawfinder NetBSD packaging). The Fink project, which packages OSS/FS for Darwin and Mac OS X, has a Fink flawfinder package, so users of those systems may find that an easy way to get flawfinder.

Downloading and Installing

If there’s no package available to you, or it’s old, you can download and install flawfinder directly. The current version of flawfinder is 1.31. If you want to see how it’s changed, view its ChangeLog. You can even look at the flawfinder source code directly. We assume you have a Unix-like system such as a Linux-based system. If you use Windows, install Cygwin first, then install flawfinder on top of Cygwin.

First, download it. You can get the current released version of flawfinder in .tar.gz (tarball) format here. You can also get flawfinder by visiting the SourceForge flawfinder project page, in particular its Files section. You definitely need to go to the SourceForge project site if you want to get on the mailing list, submit a bug report or feature request, or see/get the latest drafts.

On Unix-like systems, you install it in the usual manner. First, uncompress the file and change into its directory:

  tar xvzf flawfinder-*.tar.gz
  cd flawfinder-*

Then install. Typically you would do this (omit "sudo" if you are root):

  sudo make prefix=/usr install

You can override these defaults using standard GNU conventions. If you omit "prefix=/usr", it will install under the default directory /usr/local. You can set bindir and mandir to control those specific locations. Cygwin systems (for Microsoft Windows) need to set "PYTHONEXT=.py" in the make command, like this:

  sudo make PYTHONEXT=.py install

See the installation instructions for more information.

Joining the flawfinder community

Flawfinder is now hosted on SourceForge. You can discuss how to use or improve the tool on its mailing list, and you can see the latest drafts on the Subversion version control system.

If you have a general question or issue, use the mailing list. If you have a specific bug report, especially if you have a patch, use the issue tracker or the version control repository.

Speed

Flawfinder is written in Python to simplify the task of writing and extending it. Python code is not as fast as C code, but for this task I believe it’s just fine.

Flawfinder version 1.31 averaged an analysis speed of 45,126 lines/second when it examined Linux kernel version 3.16 (released 2014-08-03) on an Intel Core2 Duo CPU E8400 @ 3.00GHz (each CPU running at 2GHz) running Fedora Linux version 20: it examined 17,135,214 lines in 36,859 files in approximately 379.72 seconds (less than 6.5 minutes). The count of physical Source Lines of Code (SLOC) was lower: 12,237,248. The Linux kernel is not the best test case for utility, since flawfinder is designed for examining application code, but it is a great test for showing that flawfinder can examine larger programs in a relatively short time. In another test, flawfinder 1.28 averaged 24,475 lines/second on a 2.8GHz laptop running Cygwin; Cygwin on Windows tends to be much slower than Linux, but even on Cygwin flawfinder has a reasonable speed. Flawfinder 1.20 and later normally report their speed (in analyzed lines/second) if you’re curious. The reported speed is measured from when the program starts running, excluding Python’s fixed start-up time.
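The reported speed is simply analyzed lines divided by elapsed seconds. Checking the Linux-kernel figures quoted above:

```python
# Analysis speed = total analyzed lines / elapsed seconds,
# using the numbers reported for the Linux 3.16 run.
lines = 17135214
seconds = 379.72
speed = lines / seconds
print(round(speed))   # -> 45126 lines/second
```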

How does Flawfinder Work?

Flawfinder works by using a built-in database of C/C++ functions with well-known problems, such as buffer overflow risks (e.g., strcpy(), strcat(), gets(), sprintf(), and the scanf() family), format string problems ([v][f]printf(), [v]snprintf(), and syslog()), race conditions (such as access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), and mktemp()), potential shell metacharacter dangers (most of the exec() family, system(), popen()), and poor random number acquisition (such as random()). The good thing is that you don’t have to create this database - it comes with the tool.

Flawfinder then takes the source code text and matches it against those names, while ignoring text inside comments and strings (except for flawfinder directives). Flawfinder also knows about gettext (a common library for internationalized programs), and will treat a constant string wrapped in a gettext call as still being a constant string; this reduces the number of false hits in internationalized programs.

Flawfinder produces a list of “hits” (potential security flaws), sorted by risk; by default the riskiest hits are shown first. This risk level depends not only on the function, but on the values of the parameters of the function. For example, constant strings are often less risky than fully variable strings in many contexts. In some cases, flawfinder may be able to determine that the construct isn’t risky at all, reducing false positives.
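The match-and-rank idea described above can be sketched in a few lines of Python. This is purely illustrative - the function names, risk levels, and regular expressions here are made up for the example, and flawfinder’s real implementation is considerably more careful:

```python
import re

# Hypothetical sketch: match risky C function names in source text,
# skip comments and string literals, and sort hits by risk level
# (highest first), roughly as flawfinder does by default.
RISKY = {"strcpy": 4, "sprintf": 4, "gets": 5, "access": 4, "random": 3}

def scan(source):
    # Crudely remove comments and string literals first.
    stripped = re.sub(r'/\*.*?\*/|//[^\n]*|"(?:\\.|[^"\\])*"',
                      '', source, flags=re.S)
    hits = []
    for lineno, line in enumerate(stripped.splitlines(), 1):
        for name, level in RISKY.items():
            if re.search(r'\b' + name + r'\s*\(', line):
                hits.append((level, lineno, name))
    return sorted(hits, key=lambda h: -h[0])   # riskiest hits first

code = '''
char buf[64];
/* strcpy(buf, "commented out"); */
gets(buf);
sprintf(buf, "%s", input);
'''
for level, lineno, name in scan(code):
    print(level, lineno, name)
```

Note how the strcpy() inside the comment is not reported - that is the main way this approach improves on a plain “grep”.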

Flawfinder gives better information - and better prioritization - than simply running “grep” on the source code. After all, it knows to ignore comments and the insides of strings, and it will also examine parameters to estimate risk levels. Nevertheless, flawfinder is fundamentally a naive program; it doesn’t even know the data types of function parameters, and it certainly doesn’t do control flow or data flow analysis (see the references below to other tools, like SPLINT, that do deeper analysis). I know how to do that, but it is far more work; sometimes all you need is a simple tool. Also, because it’s simple, flawfinder doesn’t get confused by macro definitions and other oddities that more sophisticated tools have trouble with. Flawfinder can analyze software that you can’t build; in some cases it can analyze files you can’t even locally compile.

Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found. As noted above, flawfinder doesn’t really understand the semantics of the code at all - it primarily does simple text pattern matching (ignoring comments and strings). Nevertheless, flawfinder can be a very useful aid in finding and removing security vulnerabilities.

The documentation points out various security issues when using the tool. In general, you should analyze a copy of the source code you’re evaluating. Also, do not load or diff hitlists from untrusted sources. Hitlists are implemented using Python’s pickle facility, which is not intended for untrusted input.
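The pickle warning deserves emphasis: unpickling untrusted data can execute arbitrary code. A minimal illustration of why (the Evil class here is mine, for demonstration - it is not part of flawfinder):

```python
import pickle

# Demonstration of why untrusted pickle data is dangerous: an attacker
# can craft an object whose unpickling invokes an arbitrary callable.
class Evil:
    def __reduce__(self):
        # Tells pickle: "to reconstruct me, call eval('6*7')".
        # A real attacker would invoke something far worse than eval.
        return (eval, ("6*7",))

payload = pickle.dumps(Evil())     # what a malicious "hitlist" could contain
print(pickle.loads(payload))       # runs the attacker's code; prints 42
```

This is exactly why hitlists should only be loaded or diffed from sources you trust.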

Reviewing patches

Sometimes you don’t want to review an entire program - you only want to review the set of changes that were made to a program. If the changes are well-localized (e.g., to a particular section of a file), this is trivial to do by hand, but it’s harder otherwise. Flawfinder 1.27 added automated support so that you can review only the changes in a program.

First, create a “unified diff” file comparing the older version to the current version (using git diff, GNU diff with the -u option, or Subversion’s diff). Then run flawfinder on the newer version, and give it the --patch (-P) option pointing to that unified diff.

This works because flawfinder still does its full analysis, but it only reports hits that relate to lines changed in the unified diff (the patch file). Flawfinder reads the unified diff file, which tells it which files changed and which lines in those files changed. More specifically, it uses “Index:” or “+++” lines to determine which files changed, it uses the line numbers in “@@” hunk headers to get each chunk’s line-number range, and it then uses the initial +, -, or space on each following line to determine which lines actually changed.

One challenge is statements that span multiple lines: a statement might start on one line, yet a change on a later line might add the vulnerability, so the hit could be reported on a line outside the changed range and get dropped. Currently flawfinder handles this by also showing hits reported one line before or after any changed line - which seems to be a reasonable compromise.
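The diff-handling steps above can be sketched as follows. This is an assumed, simplified reconstruction of the --patch logic, not flawfinder’s actual code:

```python
import re

# Sketch: read a unified diff, record which lines of the *new* file
# changed, then widen each changed line by one in both directions
# (the compromise for statements that span lines).
def changed_lines(diff_text):
    changed = set()
    new_lineno = None
    for line in diff_text.splitlines():
        m = re.match(r'@@ -\d+(?:,\d+)? \+(\d+)', line)
        if m:
            new_lineno = int(m.group(1))    # hunk start in the new file
        elif new_lineno is not None:
            if line.startswith('+'):
                changed.add(new_lineno)     # added/changed line
                new_lineno += 1
            elif line.startswith('-'):
                pass                        # removed line: no new-file number
            elif line.startswith(' '):
                new_lineno += 1             # context line: just advance
    # Widen by one line before and after each change.
    return {n + d for n in changed for d in (-1, 0, 1) if n + d > 0}

diff = """\
--- old.c
+++ new.c
@@ -10,3 +10,4 @@
 int n;
-char buf[8];
+char buf[8];
+strcpy(buf, input);
 return n;
"""
print(sorted(changed_lines(diff)))   # -> [10, 11, 12, 13]
```

Only hits falling on these line numbers would then be reported.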

A limitation of this approach is that it won’t notice if you remove code that enforced security requirements. Flawfinder doesn’t have that kind of knowledge anyway, so that’s not a big deal in this case.

A Fool with a Tool is still a Fool

Any static analysis tool, such as Flawfinder, is merely a tool. No tool can substitute for human thought! In short, “a fool with a tool is still a fool”. It’s a mistake to think that analysis tools (like flawfinder) are a substitute for security training and knowledge. Developers - please read documents like my Secure Programming book so you’ll understand the vulnerabilities that the tool is trying to find! Organizations - please make sure your developers understand how to develop secure software (including learning about the common mistakes past developers have made), before having them develop software or use static analysis tools.

An example of horrific tool misuse is disabling vulnerability reports without either (1) fixing the vulnerability or (2) ensuring that it is not a vulnerability. It’s publicly known that RealNetworks did this with flawfinder; I suspect others have misused tools this way too. I don’t mean to single out RealNetworks, but it’s important to apply lessons learned from others, and unlike many projects, the details of their vulnerable source code are publicly available (and I applaud them for that!). As noted in iDEFENSE Security Advisory 03.01.05 on RealNetworks RealPlayer (CVE-2005-0455), a security vulnerability was in this pair of lines:

   char tmp[256]; /* Flawfinder: ignore */
   strcpy(tmp, pScreenSize); /* Flawfinder: ignore */

This means that flawfinder did find this vulnerability, but instead of fixing it, someone added the “ignore” directive to the code so that flawfinder would stop reporting the vulnerability. An “ignore” directive simply stops flawfinder from reporting the vulnerability - it doesn’t fix it! The intended use of this directive is to add it once a reviewer has determined that the report is definitely a false positive, but in this case the tool was reporting a real vulnerability. The same thing happened again in iDefense Security Advisory 06.23.05, aka CVE-2005-1766, where the vulnerable line was:

   sprintf(pTmp,  /* Flawfinder: ignore */

A third vulnerability with the same issue was reported still later in iDefense Security Advisory 06.26.07, RealNetworks RealPlayer/HelixPlayer SMIL wallclock Stack Overflow Vulnerability, aka CVE-2007-3410, where the vulnerable line was:

   strncpy(buf, pos, len); /* Flawfinder: ignore */

This is not to say that RealNetworks is a fool or a set of fools. Indeed, I believe many organizations, not just RealNetworks, have misused tools this way. My thanks to RealNetworks for publicly admitting their mistake - it allows others to learn from it! My specific point is that you can’t just add comments with “ignore” directives and expect that the software is suddenly more secure. Do not add “ignore” directives until you are certain that the report is a false positive.

This kind of problem can easily happen in organizations that say “run scanning tools until there are no more warnings” but don’t later review the changes that were made to eliminate the warnings. If warnings are eliminated because code was changed to eliminate vulnerabilities, that’s great! However, general-purpose scanning tools like flawfinder will produce false positive reports; it’s easy to create a tool without false positives, but only by failing to report many possible vulnerabilities (some of which really are vulnerabilities). The obvious answer, if you want a broader tool, is to let developers examine the code, and if they can truly justify that a report is a false positive, document why it is a false positive (say, in a comment near the report) and then add a “Flawfinder: ignore” directive. But you need to really justify that the report is a false positive; just adding an “ignore” directive doesn’t fix anything! Sometimes it’s easier to fix a problem that may or may not be a vulnerability than to prove it’s a false positive - the OpenBSD developers have been doing this successfully for years, since if complicated code isn’t an exploitable vulnerability yet, a tiny change can often turn such fragile code into a vulnerability.

If you’re in an organization using a scanning tool like this, make sure you review every change caused by a vulnerability report. Every change should be either (1) truly fixed or (2) correctly and completely justified as a false positive. I think organizations should require any such justification to be in comments next to the “ignore” directive. If the justification isn’t complete, don’t mark it with an “ignore” directive. And before developers even start writing code, get them trained on how to write secure code and what the common mistakes are; this material is not typically covered in university classes or even on the job.

The “ignore” directives are a very useful mechanism - once you have done the analysis, having to re-do the analysis for no reason could use up so much time that it would prevent you from resolving real vulnerabilities. Indeed, many people wouldn’t use source scanning tools at all if they couldn’t insert “ignore” directives when they are done. The result would be code with vulnerabilities that would be found by such tools. But any mechanism can be misused, and clearly this one has been.

Flawfinder does include a weapon against useless “ignore” directives - the --neverignore (-n) option. This option is the “ignore the ignores” option - any “ignore” directives are ignored. But in the end, you still need to fix vulnerabilities or ensure that reported vulnerabilities aren’t really vulnerabilities at all.

Another problem: when a tool tells you there’s a problem, never “fix” a bug you don’t understand. For example, the Debian developers ran a tool that found a purported problem in OpenSSL; it wasn’t really a problem, and their “fix” actually created a security vulnerability (CVE-2008-0166).

More generally, I am not of the opinion that analysis tools are always “better” than any other method for creating secure software. I don’t really believe in a silver bullet, but if I had to pick one, “developer education” would be my silver bullet, not analysis tools. Again, a “fool with a tool is still a fool”. I believe that when you need secure software, you need to use a set of methods, including education, languages/tools where vulnerabilities are less likely, good designs (e.g., ones with limited privilege), human review, fuzz testing, and so on; a source scanning tool is just a part of it. Gary McGraw similarly notes that simply passing a scanning tool does not mean perfect security, e.g., tools can’t normally find “didn’t ask for authorization when it should have”.

That said, I think tools that search source or binaries for vulnerabilities usually need to be part of the answer if you’re trying to create secure software in today’s world. Customers/users are generally unwilling to reduce the amount of functionality they want to something we can easily prove correct, and formally proving programs correct has not scaled well yet (though I commend the work to overcome this). No programming language can prevent all vulnerabilities from being written in the first place, even though selecting the right programming language can be helpful. Human review is great, but it’s costly in many circumstances and it often misses things that tools can pick up. Execution testing (like fuzz testing) only checks a minuscule part of the input space. So we often end up needing source or binary scanning tools as part of the process, even though current tools have a HUGE list of problems... because NOT using them is often worse. Other methods may find the vulnerability, but other methods typically don’t scale well.

Hit Density (Hits/KSLOC)

One of the metrics flawfinder reports is hit density, that is, hits per thousand lines of source code. In some unpublished work, a colleague and I found that hit density is a helpful relative indicator of the likelihood of security vulnerabilities in various products. We examined some open source software, such as Sendmail and Postfix, and determined the hit density of each; the ones with higher hit densities tended to be the ones with worse security records in the future. And that was true even when none or few of the reported hits were clearly security vulnerabilities.

When you think about it, that makes sense. If a program has a high hit density, it suggests that its developers often use very dangerous constructs that are hard to use correctly and often lead to vulnerabilities. Even if the hits themselves aren’t vulnerabilities, developers who repeatedly use dangerous constructs will sooner or later make the final mistake and allow a vulnerability. It’s like a high-wire act -- even talented people will eventually fall if they walk on it long enough.
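The metric itself is trivial to compute - hits divided by thousands of source lines. The numbers below are made up purely for illustration:

```python
# Hit density = hits per thousand source lines (hits/KSLOC).
def hit_density(hits, sloc):
    return hits / (sloc / 1000.0)

# Hypothetical example: 120 hits in an 80,000-line program.
print(hit_density(120, 80000))   # -> 1.5 hits/KSLOC
```

Because it is a ratio, the metric is only meaningful when comparing programs of broadly similar size, as the next paragraph explains.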

This appeared to break down on very small programs (less than 10K lines); a program much smaller than its competition might have a larger hit density yet still be secure. I speculate that because density is a fraction, when a program is much smaller than its rivals, its density is dramatically forced up (because size is in the denominator). Yet programs that are dramatically smaller are much easier to evaluate directly, so direct review is more likely to counter vulnerabilities in this case.

Flawfinder and RATS

Unbeknownst to me, while I was developing flawfinder, Secure Software Solutions was simultaneously developing RATS, which is also a GPL’ed source code scanner using a similar approach. We agreed to release our programs simultaneously (on May 21, 2001), and we agreed to mention each other’s programs in our announcements (you can even see the original flawfinder announcement). Now that we’ve both released our code, we hope to coordinate in the future so that there will be a single “best of breed” source code scanner that is open source / free software. Exactly how this will happen is not yet clear, so be prepared for future announcements.

Until we’ve figured out how to merge these dissimilar projects, I recommend that distributions and software development websites include both programs. Each has advantages the other doesn’t. For example, at the time of this writing flawfinder is easier to use - just give flawfinder a directory name, and it will recursively descend into the directory, figure out what needs analyzing, and analyze it. Other advantages of flawfinder are that it can handle internationalized programs (it knows about special calls like gettext(), unlike RATS), it can report column numbers (as well as line numbers) of hits, and it can produce HTML-formatted results. The automated recursion and HTML-formatted results make flawfinder especially nice for source code hosting systems. The flawfinder database also includes a number of entries not in RATS, so flawfinder will find things RATS won’t. In contrast, RATS can handle other programming languages and runs faster. Both projects are essentially automated advisors, and having two advisors look at your program is likely to be better than using only one (it’s somewhat analogous to having two people review your code for security).

Reviews/Papers

Many have reviewed flawfinder or mentioned flawfinder in articles, as well as related tools. Examples include:

  1. “Code Injection in C and C++: A Survey of Vulnerabilities and Countermeasures” by Yves Younan, Wouter Joosen, and Frank Piessens (Report CW386, July 2004, Department of Computer Science, K.U.Leuven) is a comprehensive survey of many different ways to counter vulnerabilities. Its abstract says, “... This report documents possible vulnerabilities in C and C++ applications that could lead to situations that allow for code injection and describes the techniques generally used by attackers to exploit them. A fairly large number of defense techniques have been described in literature. An important goal of this report is to give a comprehensive survey of all available preventive and defensive countermeasures that either attempt to eliminate specific vulnerabilities entirely or attempt to combat their exploitation. Finally, the report presents a synthesis of this survey that allows the reader to weigh the advantages and disadvantages of using a specific countermeasure as opposed to using another more easily.” They list a wide variety of countermeasures and describe their pros and cons. For example, they state that flawfinder, as well as RATS and ITS4, all have the advantages of “very low” comparative cost and “very low” memory cost, and that all can find vulnerabilities in the categories V1 (stack-based buffer overflow), V2 (heap-based buffer overflow), and V4 (format string vulnerabilities). All have the applicability limitations A1 (source code required) and A10 (only protects libc string manipulation functions). All have the protection limitation P17 (false negatives are possible). That’s a fair description of the strengths and weaknesses of flawfinder and similar tools.
  2. Source Code Scanners for Better Code in Linux Journal discusses Flawfinder, RATS, and ITS4. The review noted that the version of flawfinder they used had a weakness - it didn’t automatically report static character buffers. That weakness has since been corrected; flawfinder, as of version 1.20, can also report static character buffers.
  3. Clean Up Your Code with Flawfinder was one of the first announcements by others about Flawfinder.
  4. Flawfinder 1.22, le chasseur de failles (French: “Flawfinder 1.22, the flaw hunter”)
  5. the UC Davis Reducing Software Security Risk through an Integrated Approach project (see the flawfinder entry)
  6. “Apparently insecure, analysis of Windows 2000, Linux and OpenBSD sourcecode” (in German), iX 04/04, p. 14. This is noted in the OpenBSD press area for March, 2004, which states that:
    A small article describing the results of examining Windows 2000, Linux and OpenBSD source code using Flawfinder. “OpenBSD is ahead, Flawfinder finds a surprisingly small number of potentially dangerous constructs. The source code audit by the OpenBSD team seems to pay out. Additionally, OpenBSD uses the secure strlcpy/strlcat by Todd C. Miller instead of strcpy etc.”
  7. “A Comparison of Publicly Available Tools for Static Intrusion Prevention”. You might also want to see “A Comparison of Publicly Available Tools for Dynamic Buffer Overflow Prevention”.
  8. “A Comparison of Static Analysis and Fault Injection Techniques for Developing Robust System Services” by Pete Broadwell and Emil Ong, Technical Report, Computer Science Division, University of California, Berkeley, May 2002, used static source code analysis (like flawfinder) and software fault injection against some commonly-used applications. They used some static tools (like ITS4, Warnbuf, and Stumoch) and some dynamic tools (like Fuzz Lite and FIG). As with many other papers, they found that static tools found many false positives, but that “When the tool did find an error however, they were extremely useful.” This paper also has references to many other papers.
  9. Methods for the prevention, detection and removal of software security vulnerabilities by Jay-Evan J. Tevis and John A. Hamilton (Auburn University, Auburn, Alabama). This was published in the Proceedings of the 42nd annual Southeast regional conference, Huntsville, Alabama, 2004 (Pages: 197 - 202). ISBN 1-58113-870-9/04/04. The ACM digital library has a copy.
  10. Software Security for Open-Source Systems by Crispin Cowan (IEEE Security and Privacy, 2003) briefly reviews various auditing (static and dynamic) and vulnerability mitigation tools.
  11. “Characterizing the ‘Security Vulnerability Likelihood’ of Software Functions” by DaCosta, Dahn, Mancoridis, and Prevelakis gives evidence that most vulnerabilities are clustered near inputs, a plausible hypothesis. Note that flawfinder includes the ability to highlight input functions, because I expected that myself.
  12. The presentation Lexical analysis in source code scanning by Jose Nazario (Uninet Infosec 2002), 20 April 2002, discusses his prototype tool, Czech, which uses techniques similar to flawfinder’s. In it, he says “source code analysis using lexical analysis techniques is worthwhile for development. However, it can only assist the developer, not replace a manual audit” (true enough!).
  13. The paper Static Analysis for Security by Gary McGraw (Cigital) and Brian Chess of Fortify Software gives an overview of static analysis tools (like flawfinder). This is the fifth article in the IEEE Security & Privacy magazine series called “Building Security In.”
  14. Will code check tools yield worm-proof software? by Robert Lemos (CNET News.com), dated May 26, 2004, gives an overview of static analysis tools for a somewhat lay audience.

Practical Code Auditing by Lurene Grenier (December 13, 2002) briefly discusses simple approaches that can be performed for manual auditing (she works on the OpenBSD project). It does note that you can “grep” for certain kinds of problems; flawfinder is essentially a smart grep that already knows what to look for, so it could easily fit into the process at those points. The paper also specifically notes some of the things that are hard to grep for (which are the kinds of things that flawfinder would miss).

Secure Programming with Static Analysis (Addison-Wesley Software Security Series) by Brian Chess and Jacob West discusses static analysis tools in great detail.

Of course, there are many programs that analyze programs, particularly those that work like “lint”. There is a set of papers about the Stanford checker which you may find interesting.

Other static analysis tools for security

For more information about other static analysis tools for security, see Static analysis tools for security.


You might want to look at my Secure Programming HOWTO web page, or some of my other writings such as Open Standards and Security, Open Source Software and Software Assurance (Security), and High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS).

You can also view my home page.