David A. Wheeler's Blog

Mon, 15 Nov 2010

HTML version of “Fully Countering Trusting Trust through Diverse Double-Compiling”

I’ve now posted an HTML version of my PhD dissertation, “Fully Countering Trusting Trust through Diverse Double-Compiling”. This has been available as a PDF for some time, but in some cases an HTML version is more convenient (e.g., for small devices like cell phones or browsers that don’t have PDF readers). This dissertation describes an effective countermeasure for the nasty trusting trust attack; see my page about countering trusting trust or the dissertation itself for more information.

The HTML is very simple HTML that should be acceptable nearly universally. If you’re curious, I generated the HTML from OpenOffice.org, ran HTML tidy, and cleaned up the results a little further (via a simple script I created for the purpose and by hand). I eliminated all forced font names, for example. The goal was to create simple HTML that pretty much any web browser can display reasonably well, both today and into the future. For example, even browsers that can’t handle CSS or <div> should produce reasonable results. People should even be able to read the HTML directly, if they want to, without too much trouble. This is all part of my effort to make sure that anyone who wants this information can get it, either now or in the future.
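
For the curious, the cleanup looked roughly like this (a sketch only; the actual script was not published, and the tidy options and sed pattern here are illustrative guesses):

  # Tidy the OpenOffice.org export; --drop-font-tags removes <font> markup
  # (an option in the classic HTML Tidy of the time):
  tidy -utf8 --drop-font-tags yes -output clean.html dissertation.html
  # Strip any forced font names that survive in inline styles:
  sed 's/font-family:[^;"]*;\{0,1\}//g' clean.html > final.html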

The HTML almost passes the W3C HTML Validator, but I drew the line at the “absmiddle” value for the “align” attribute. The official HTML4 specification does not include align=absmiddle, but it is widely implemented by all major browsers, so I view its omission as an error in the HTML spec. I do use in-line <center>, <b>, and <i>; some people may whine about that, but it’s completely standard and universally supported, while alternatives are not universally supported.

I tested on a variety of browsers, and it seems to come out well. The Wii web browser (based on Opera 9) doesn’t seem to handle certain entities that are part of the Unicode and ISO 10646 specifications, and shows rectangles instead. But since these are standard characters, I view this as a problem with the browser.

For many of the equations I use embedded graphic images, primarily because many systems do not have the fonts necessary to display them. I do include alt=… values so that blind users will be able to understand them — this is intended to be accessible.

Ideally, I’d be able to write stuff that’s both HTML and XHTML. I can almost pull that off, but not with <br>. It’d be nice to use <br /> because in theory that would work in both HTML and XHTML. But many tools complain. So I just use <br>, which is standard and completely understood by all HTML parsers.
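
If you’re curious how close a file comes to parsing both ways, command-line checkers make the difference easy to see (a sketch; the file name is just an example):

  # Check the file as HTML (tidy's -e option reports problems only):
  tidy -e dissertation.html
  # Try an XML-style parse; a plain <br> (without the slash) will fail here,
  # which is exactly the HTML/XHTML tension described above:
  xmllint --noout dissertation.html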

I now have a few errata, which are posted on the main page about countering the trusting trust attack. They are all trivial typos, and do not affect the fundamentals discussed in the paper.

So, feel free to take a peek at my HTML version of my PhD dissertation or my general page about countering the trusting trust attack.

path: /security | Current Weblog | permanent link to this entry

Tue, 05 Oct 2010

Poor quality patents

Groklaw has some interesting amicus briefs about the horrifically poor quality of many of today’s patents; in particular, they explain why patents that never should have existed are routinely enforced. These briefs make it clear that the current U.S. patent process has become a massive boat anchor on the economy. These briefs show that the current patent process grants boatloads of nonsense patents, and then stacks the deck further by making it nearly impossible to overturn patents that should never have been granted. This is completely unfair; the patent process is supposed to promote the “Progress of Science and useful Arts”, but instead it impedes progress. Even pro-patent companies like Google declare that “Abusive patent suits based on invalid patents have powerful coercive effects and are a scourge of modern business.”

The first brief covered by Groklaw is from Google, Verizon, Dell, HP, HTC, and Wal-Mart. They had lots of interesting things to say, and I’ll quote the brief liberally (below). They point out that overturning a patent in the U.S. currently requires “clear and convincing evidence”, a very high legal standard — bizarrely high, in fact. Rulings are normally overcome by a “preponderance of the evidence” instead (a lower bar).

That higher standard might make sense if the U.S. patent office did a completely fair and exhaustive review of a patent application before granting it. But that’s not what happens. Indeed, the U.S. Federal Trade Commission (“FTC”) has determined that “current law and [United States Patent and Trademark Office (PTO)] procedure stack the deck heavily in favor of issuing patents”.

For example, in most cases, an applicant must prove that he is entitled to a government benefit or privilege, but under the current patent regime, the applicant is granted the monopoly rights of a patent unless the government can present a case against it. All by itself that is just wrong; if you want a special monopoly, you should at least be required to prove your case; why should the government presume that you should be granted a monopoly? As Google et al say, “In other words, the PTO presumes that it should award an exclusive property right to anyone who asks for it.” Giving exclusive rights to the undeserving is an unfair invisible tax.

What’s worse, patent examiners don’t even normally consider “all aspects of patentability”, and “rarely inquire into important non-documentary sources of information, such as the knowledge of skilled artisans, market demands… and public uses or commercial offers for sale… The PTO can require an applicant to disclose such information [but] has not widely required applicants to disclose such information.” In short, the PTO will often not look for the information that any ordinarily-skilled person in that field would know — shameful. This is compounded by the PTO’s well-known lack of resources; “it lacks the resources to review each patent application thoroughly. This Court identified that problem over four decades ago… and the PTO’s resources have been stretched even further since then.”

In short, “The bottom line is that patent applicants receive the benefit of favorable procedures and a resource-constrained review by the PTO and then assert presumptively valid patents that, according to the Federal Circuit, can be defeated only by clear and convincing evidence. That serves only to insulate patents of dubious quality from adequate scrutiny at any stage.”

A different brief shown in Groklaw was filed by the Electronic Frontier Foundation (EFF), Public Knowledge, Computer & Communications Industry Association (CCIA), and Apache. They point out some other unfair aspects of the patent process. In particular, they note that “patent owners assert that accused infringers must use the prior art’s source code to prove invalidity, but that source code is often unavailable years after the fact”.

All of this is made worse because of software patents. Historically, software could not be patented, and software patents are still not permitted in Europe and many other places. Software patents have deluged the PTO with applications, so the PTO can devote very little time to reviewing any one patent. What’s worse, the PTO simply does not have a useful database of prior software to compare against… so many software patents are granted on prior art (ideas that were already publicly known). Software patents need to be abolished, but in the meantime, we need to at least tweak the system so that it is fairer.

path: /oss | Current Weblog | permanent link to this entry

Fri, 24 Sep 2010

Software patents and patent trolls are hurting the country

A new paper has come out, and it debunks some key assumptions made about patents. I think this paper is good evidence that software patents and patent trolls are hurting the United States. The weak patents that software patents and patent trolls create (and litigate) are clogging the courts and stifling innovation. Instead of creating jobs by making and selling great products, our innovators get stuck in court… and stuck behind the rest of the world. If the U.S. legislature wants people to have jobs, we need to make it legal to create jobs.

The paper is “Patent Quality and Settlement among Repeat Patent Litigants” by John R. Allison, Mark A. Lemley, and Joshua Walker (September 16, 2010); a brief on-line summary is available. What’s remarkable is that the authors were not opposed to software patents or patent trolls, at least going in, but their study found results they were not expecting.

Their study examined “repeat patent plaintiffs”, which they defined as those who sue eight or more times on the same patents. These suits have “a disproportionate effect on the patent system [because they] are responsible for a sizeable fraction of all patent lawsuits. Their patents should be among the strongest, according to all economic measures of patent quality… But, to our surprise, we find that when they do go to trial or judgment, overwhelmingly they lose. This result seems to be driven by two parallel findings: both software patents and patents owned by non-practicing entities (so-called “patent trolls”) fare extremely poorly in court.” Basically, software patents and patents from patent trolls are remarkably weak; when people stand up to the extortion, the patent-holders often lose in court. This usually only happens after the victims have spent millions of dollars and years of time defending themselves, draining money and time away from productive pursuits.

Their paper’s introduction shows that the “differences are dramatic. Once-litigated patents win in court almost 50% of the time, while the most-litigated - and putatively most valuable - patents win in court only 10.7% of the time”. The fundamental problems are patent trolls (aka non-practicing entities or NPEs) and software patents: “Software patentees win only 12.9% of their cases, while NPEs win only 9.2%”. In short, “It appears that as a society, we are spending a disproportionate amount of time and money litigating a class of weak patents.”

Delving into the statistics, it is clear that this is not just a slight difference. “One of the most striking findings is the weakness of software and NPE-owned patents in the overall dataset… it seems likely that software patents are dragging down the averages. … There have been numerous complaints about the quality of software patents; our data may give some empirical support to those assertions. If we consider just patent owner wins and defendant wins on the merits, non-software patent owners win 37.1% of their cases across both the most-litigated and once-litigated data sets, while software patentees win only 12.9%. If we include default judgments, non-software patent owners win 51.1% of their cases, while software patentees win only 12.9%. Something similar can be said about suits brought by NPEs. NPE suits, like software suits, are a large percentage of the most-litigated cases… If we consider just patent owner wins and defendant wins on the merits, product owners win 40% of their cases across both the most-litigated and once-litigated data sets, while NPEs win only 8%. If we include default judgments, product-producing companies win 50% of their cases, while NPEs win only 9.2%.”

What’s remarkable is that the authors were not against software patents or NPEs. Indeed, “the authors have elsewhere expressed skepticism over efforts to eliminate particular types of patents, and one has argued that we shouldn’t single out patent trolls for special treatment.”

Yet even with that bias in favor of software patents and patent trolls, they admitted that this data is strong evidence that software patents and patent trolls are a serious problem. They state that “it is important to recognize that software patents and patents asserted by NPEs are both taking disproportionate resources in patent litigation, and that the social benefit from those cases appears to be slight… Society is spending a large chunk of its patent law resources dealing with what are - for whatever reason - the weakest cases. [This gives] substantial ammunition to those who argue against software patents and who want to restrain patent trolls. If software and NPE patents are overwhelmingly bad - either invalid or overclaimed - the social benefit of allowing them may well be outweighed by the harm they cause.”

One claim some make is that “the system is working - that the bad patents are being weeded out of the system and are not stifling innovation.” But as the authors note, this “seems altogether too facile. After all, roughly 90% of those cases settled without judgment. While those settlements are confidential, we expect that the vast majority involved some sort of payment to the patent plaintiff - a payment that the outcomes data suggests might represent not the acquisition of real legal rights but a nuisance settlement over a likely-invalid patent”.

In short, “If software and NPE patents are overwhelmingly bad - either invalid or overclaimed - the social benefit of allowing them may well be outweighed by the harm they cause… The patents and patentees that occupy the most time and attention in court and in public policy debates - the very patents that economists consider the most valuable - are astonishingly weak. Non-practicing entities [NPEs, aka patent trolls] and software patentees almost never win their cases. That may be a good thing, if you believe that most software patents are bad or that NPEs are bad for society. But it certainly means that the patent system is wasting more of its time than expected dealing with weak patents.”

If you are interested in more, see my essay explaining why software patents should be eliminated.

path: /oss | Current Weblog | permanent link to this entry

Sun, 12 Sep 2010

Musopen

Musopen is raising money to record and release music. The result will be a (bigger) library of copyright free music. They “want your help to hire an internationally renowned orchestra to record and release the rights to: the Beethoven, Brahms, Sibelius, and Tchaikovsky symphonies. We have price quotes from several orchestras and are ready to hire one, pending the funds… [please] Donate, and vote on what we should buy with the money. Then we will release that music in lossless quality with a creative commons license”.

The entire Musopen project is a very cool idea. As they say, “We aim to record or obtain recordings that have no copyrights so that our visitors may listen, re-use, or in any way enjoy music. Put simply, our mission is to set music free”.

This particular fund-raiser is also an interesting use of the Kickstarter web site. I can see this kind of fund-raising being done to release other creative works, too. For example, I can easily see projects ransoming software so it can be released as open source software (this happened with Blender). I think it’s easily possible that we will see more of this in the future, where people’s money is combined together to release artistic and creative works in ways that eliminate constant tolls and enable further creativity.

path: /oss | Current Weblog | permanent link to this entry

Sun, 15 Aug 2010

Geek Video Franchises

I have a new web page on a silly game I call Geek Video Franchises. The goal of this game is to interconnect as many geek video franchises as possible via common actors. In this game, you’re only allowed to use video franchises that geeks tend to like.

For example: The Matrix connects to The Lord of the Rings via Hugo Weaving (Agent Smith/Elrond), which connects to Indiana Jones via John Rhys-Davies (Gimli/Sallah), which connects to Star Wars via Harrison Ford (Indiana Jones/Han Solo). The Lord of the Rings directly connects to Star Wars via Christopher Lee (Saruman/Count Dooku). Of course, Lord of the Rings also connects to X-men via Ian McKellen (Gandalf/Magneto), which connects to Star Trek via Patrick Stewart (Professor Xavier / Captain Jean-Luc Picard). Star Trek connects to Dr. Who via Simon Pegg (JJ Abrams’ Montgomery Scott/The Editor), which connects to Harry Potter via David Tennant (Dr. Who #10/Barty Crouch Jr.), which connects to Monty Python via John Cleese (Nearly Headless Nick/Lancelot, etc.).

So if you’re curious, check out Geek Video Franchises.

path: /misc | Current Weblog | permanent link to this entry

Sat, 03 Jul 2010

Opening files and URLs from the command line

Nearly all operating systems have a simple command to open up a file, directory, or URL from the command line. This is useful when you’re using the command line, e.g., xdg-open . will pop up a window in the current directory on most Unix/Linux systems. This capability is also handy when you’re writing a program, because these are easy to invoke from almost any language. You can then pass it a filename (to open that file using the default application for that file type), a directory name to start navigating in that directory (use “.” for the current directory), or a URL like “http://www.dwheeler.com” to open a browser at that URL.

Unfortunately, the command to do this is different on different platforms.

My new essay How to easily open files and URLs from the command line shows how to do this.

For example, on Unix/Linux systems, you should use xdg-open (not gnome-open or kde-open), because that opens the right application given the user’s current environment. On MacOS, the command is “open”. On Windows you should use start (not explorer, because invoking explorer directly will ignore the user’s default browser setting), while on Cygwin, the command is “cygstart”. More details are in the essay, including some gotchas and warnings.
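
To make that concrete, here’s a minimal portable wrapper along these lines (my sketch, not from the essay; it just probes for the commands named above):

  #!/bin/sh
  # open_any: open a file, directory, or URL with the platform's opener.
  open_any () {
    if command -v xdg-open >/dev/null 2>&1 ; then
      xdg-open "$1"        # Unix/Linux: respects the user's desktop settings
    elif command -v cygstart >/dev/null 2>&1 ; then
      cygstart "$1"        # Cygwin
    elif command -v open >/dev/null 2>&1 ; then
      open "$1"            # MacOS
    else
      echo "open_any: no opener found" >&2 ; return 1
    fi
  }
  # Examples:
  open_any .                         # current directory
  open_any http://www.dwheeler.com   # URL in the default browser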

Anyway, take a look at: How to easily open files and URLs from the command line

path: /misc | Current Weblog | permanent link to this entry

Thu, 20 May 2010

Stop Worrying and Love the Internet

Back in 1999 Douglas Adams wrote “How to Stop Worrying and Learn to Love the Internet”. It’s a wonderful essay that is still a good read today. In particular, I think it’s an important article to read if you’re still struggling with understanding where the Internet is going, or if you’re trying to figure out how to address the trustworthiness of group-developed information like Wikipedia, open source software, or the blogosphere. Adams said:

“Because the Internet is so new we still don’t really understand what it is. We mistake it for a type of publishing or broadcasting, because that’s what we’re used to. So people complain that there’s a lot of rubbish online, or that it’s dominated by Americans, or that you can’t necessarily trust what you read on the web. Imagine trying to apply any of those criticisms to what you hear on the telephone. Of course you can’t ‘trust’ what people tell you on the web anymore than you can ‘trust’ what people tell you on megaphones, postcards or in restaurants. Working out the social politics of who you can trust and why is, quite literally, what a very large part of our brain has evolved to do. For some batty reason we turn off this natural scepticism when we see things in any medium which require a lot of work or resources to work in, or in which we can’t easily answer back — like newspapers, television or granite. Hence ‘carved in stone.’ What should concern us is not that we can’t take what we read on the internet on trust — of course you can’t, it’s just people talking — but that we ever got into the dangerous habit of believing what we read in the newspapers or saw on the TV [emphasis mine] — a mistake that no one who has met an actual journalist would ever make… Interactivity. Many-to-many communications. Pervasive networking. These are cumbersome new terms for elements in our lives so fundamental that, before we lost them, we didn’t even know to have names for them.”

My thanks to Andrew Sullivan for reminding me of this important piece.

path: /oss | Current Weblog | permanent link to this entry

Thu, 22 Apr 2010

Filenames and Pathnames in Shell - Doing it Correctly

Traditionally, Unix/Linux/POSIX filenames and pathnames can be almost any sequence of bytes. Unfortunately, most developers and users of Bourne shells (including bash, dash, ash, and ksh) don’t handle filenames and pathnames correctly. Even good textbooks on shell programming get filename and pathname processing completely wrong. Thus, many shell scripts are buggy, leading to surprising failures. In fact, mis-handling of filenames is a significant source of security vulnerabilities.

So I’ve created a short essay on how to correctly process filenames in Bourne shells as used in Unix, Linux, and various POSIX systems. It presumes that you already know how to write Bourne shell scripts.

The essay is: Filenames and Pathnames in Shell: How to do it correctly. Please, take a look!

Frankly, it would be better if filenames weren’t so permissive. In particular, filenames with control characters, leading dash (“-”), and non-UTF-8 encoding cause a lot of grief. To see more about that, please see my essay Fixing Unix/Linux/POSIX Filenames. If filenames weren’t so permissive, correct programs would be much easier to write.
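
As a taste of what “correct” looks like given those permissive rules, here are two defensive idioms of the kind the essay covers (a minimal sketch, assuming a POSIX shell):

  # Prefix globs with ./ so a filename beginning with "-" cannot be
  # mistaken for an option, and always quote expansions:
  for file in ./* ; do
    [ -e "$file" ] || continue     # skip if the glob matched nothing
    printf '%s\n' "$file"          # printf, not echo: echo mangles some names
  done
  # To recurse, let find hand each name to the command as one argument,
  # which copes even with spaces, newlines, and control characters:
  find . -type f -exec wc -c {} \;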

So, Filenames and Pathnames in Shell: How to do it correctly explains how to handle filenames properly in shell programs, given the current situation. Please take a look; I hope you find it useful.

path: /oss | Current Weblog | permanent link to this entry

Fri, 02 Apr 2010

The new face of journalism: PJ, Groklaw, and SCO

The jury in the District Court of Utah trial between SCO Group and Novell has issued a verdict, and SCO lost big. SCO had been threatening and trying to extract money from many innocent people and organizations, including the developers and users of Linux, IBM, Red Hat, and Novell. But the jury found that the copyrights for Unix did not go from Novell to SCO, so many of SCO’s claims against these innocent people have collapsed. It’ll take many years for the rest of the cases to wind down, but SCO’s other claims are even weaker.

Perhaps the happiest part of this sorry tale is the rise of Groklaw, established and run by PJ. Carla Schroder’s “Groklaw: How One Person Can Do Big Deeds. Thanks PJ.” and Brian Proffitt’s “SCO, Novell: Grokking Where Credit is Due” wisely point out the important role that Groklaw has played in this saga.

It’s hard to know if Groklaw changed the outcome of this case, but Groklaw clearly changed what people knew about the case. Traditional journalists completely failed the public in the SCO cases. Even though this had the potential to seriously harm the most important development in information technology (IT) — the rise of open source software — almost no IT journalists looked into it. The few that did tended to spend little time looking at (or for) evidence. If journalists are simply reorganizing press releases, there’s really no need for journalism, is there?

Groklaw was vastly different. Groklaw is more than a website or blog; it is a community of people who gathered evidence, analyzed it, and helped other people get the true picture. Traditional journalists may bemoan the loss of local newspapers, but why should people pay for rehashed press releases when blogs are a more accurate and broader source of information? In short, if you wanted full and accurate public information related to SCO, Groklaw had it; traditional sources didn’t.

While Groklaw is a community, PJ was and is a key part of it. She had the idea of setting up Groklaw, and made it work. In short, she established an environment, and made it possible for the rest of the world to see what was going on.

So hats off to Groklaw, and to PJ in particular. Journalism will never be the same again.

path: /oss | Current Weblog | permanent link to this entry

Sun, 21 Mar 2010

Using Wikipedia for research

Some teachers seem to lose their minds when asked about Wikipedia, and make absurd rules like “I forbid students from using Wikipedia”. A 2008 article states that Wikipedia is the encyclopedia “that most universities forbid students to use”.

But the professors don’t need to be such Luddites; it turns out that college students tend to use Wikipedia quite appropriately. A research paper titled How today’s college students use Wikipedia for course-related research examines Wikipedia use among college students; it found that Wikipedia use was widespread, and that the primary reason they used Wikipedia was to obtain background information or a summary about a topic. Most respondents reported using Wikipedia at the beginning of the research process; very few used Wikipedia near or at the end. In focus group sessions, students described Wikipedia as “the very beginning of the very beginning for me” or “a .5 step in my research process”, and that it helps primarily in the beginning because it provided a “simple narrative that gives you a grasp”. Another focus group participant called Wikipedia “my presearch tool”. Presearch, as the participant defined it, was “the stage of research where students initially figure out a topic, find out about it, and delineate it”.

Now, it’s perfectly reasonable to say that Wikipedia should not be cited as an original source; I have no trouble with professors making that rule. Wikipedia itself has a rule that Wikipedia does not publish original research or original thought. Indeed, the same is true for Encyclopedia Britannica or any other encyclopedia; encyclopedias are supposed to be summaries of knowledge gained elsewhere. You would expect that college work would normally not have many citations of any encyclopedia, be it Wikipedia or Encyclopedia Britannica, simply because encyclopedias are not original sources.

Rather than running in fear from new materials and technologies, teachers should be helping students understand how to use them appropriately, helping them consider the strengths and weaknesses of their information sources. Wikipedia should not be the end of any serious research, but it’s a reasonable place to start. You should supplement it with other material, for the simple reason that you should always examine multiple sources no matter where you start, but that doesn’t make Wikipedia less valuable. For younger students, there are reasonable concerns about inappropriate material (e.g., due to Wikipedia vandalism and because Wikipedia covers topics not appropriate for much younger readers), but the derivative “Wikipedia Selection for Schools” is a good solution for that problem. I’m delighted that so much information is available to people everywhere; we need to help people use these resources instead of ignoring them.

And speaking of which, if you like Wikipedia, please help! With a little effort, you can make it better for everyone. In particular, Wikipedia needs more video; please help the Video on Wikipedia folks get more videos on Wikipedia. This also helps the cause of open video, ensuring that the Internet continues to be open to innovation.

path: /misc | Current Weblog | permanent link to this entry

Sat, 06 Mar 2010

Robocopy

If you use Microsoft Windows (XP or some later version), and don’t have an allergic reaction to the command line, you should know about Robocopy. Robocopy (“robust file copy”) is a command-line program from Microsoft that copies collections of files from one place to another in an efficient way. Robocopy is included in Windows Vista, Windows 7, and Windows Server 2008. Windows XP and Windows Server 2003 users can download Robocopy for free from Microsoft as part of the Windows Server 2003 “Resource Kit Tools”.

Robocopy copies files, like the COPY command, but Robocopy will only copy a file if the source and destination have different time stamps or different file sizes. Robocopy is nowhere near as capable as the Unix/Linux “rsync” command, but for some tasks it suffices. Robocopy will not copy files that are currently open (by default it will repeatedly retry copying them), it can only do one-way mirroring (not bi-directional synchronization), it can only copy mounted filesystems, and it’s foolish about how it copies across a network (it copies the whole file, not just the changed parts). Anyway, you invoke it at the command line like this:

ROBOCOPY Source Destination OPTIONS

So, here’s an example that mirrors everything from “c:\data” to “u:\data” (/MIR mirrors the directory tree, /NDL keeps directory names out of the log, and /R:20 limits retries on in-use files to 20):

 robocopy c:\data u:\data /MIR /NDL /R:20

To do this on an automated schedule in Windows XP, put your commands into a text file with a name ending in “.bat” and select Control Panel-> Scheduled Tasks-> Add Scheduled Task. Select your text file to run, have it run “daily”. You would think that you can’t run it more than once a day this way, but that’s actually not true. Click on “Open advanced properties for this task when I click Finish” and then press Finish. Now select the “Schedule” tab. Set it to start at some time when you’re probably using the computer, click on “Advanced”, and set “repeat task” so it will run (say, every hour with a duration of 2 hours). Then click on “Show multiple schedules”, click “new”, and then select “At system startup”. Now it will make copies on startup AND every hour. You may want to go to the “Settings” tab and tweak it further. You can use Control Panel-> Scheduled tasks to change the schedule or other settings.
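
If you prefer the command line, the schtasks program included with Windows XP Professional can create roughly the same pair of schedules (a sketch; the task names and script path are examples, and schtasks may prompt for account credentials):

  :: Run the mirror script every hour:
  schtasks /create /tn "MirrorData" /tr "c:\scripts\mirror.bat" /sc hourly
  :: ...and also at every system startup:
  schtasks /create /tn "MirrorDataBoot" /tr "c:\scripts\mirror.bat" /sc onstart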

A GUI for Robocopy is available. An alternative to Robocopy is SyncToy; SyncToy has a GUI, but Microsoft won’t support it, I’ve had reliability and speed problems with it, and SyncToy has a nasty bug in its “Echo” mode… so I don’t use it. I suspect the Windows Vista and Windows 7 synchronization tools might make Robocopy less useful there, but I find that the Windows XP synchronization tools are terrible… making Robocopy a better approach. There are a boatload of applications out there that do one-way or two-way mirroring, including ports of rsync, but getting them installed in some security-conscious organizations can be difficult.

Of course, if you’re using Unix/Linux, then use rsync and be happy. Rsync usually comes with Unix/Linux, and rsync is leaps-and-bounds better than Robocopy. But not everyone has that option.

path: /misc | Current Weblog | permanent link to this entry

Sun, 28 Feb 2010

Open government: Default release as OSS and Open Access

U.S. government agencies are soliciting ideas on how to make themselves more transparent, participatory, collaborative, and innovative.

Please support proposals to release government-funded works by default as open access (for research papers) or as open source software (for software). An example is the proposal to the National Science Foundation (NSF) called Public funding = Public viewing. This proposal recommends that publicly funded research be published as open access and that all data and code be shared as open source software. Please vote for this, make helpful comments, and so on. Similarly, please vote for and/or add similar proposals for other agencies where they apply.

If “we the people” pay for research and development, then “we the people” should normally get the results. I can see the need for exceptions — particularly for classified works — but those should be exceptions. In short, I think this kind of proposal makes sense.

As I’ve commented before, Government-developed Unclassified Software should by default be released as Open Source Software, and research papers produced from U.S. government funding should be open access. So please make sure that U.S. agencies know this. Thanks.

path: /oss | Current Weblog | permanent link to this entry

Mon, 22 Feb 2010

Free/Libre/Open Source Software’s big win: Jacobsen/JMRI v. Katzer

There’s been a major legal victory for Free/Libre/Open Source Software (FLOSS): Jacobsen v. Katzer. Articles like Bruce Perens’ “Inside Open Source’s Historic Victory” and “A Big Victory for F/OSS: Jacobsen v. Katzer is Settled” give many of the specifics; here is a quick summary.

Bob Jacobsen is a high-energy physicist who developed (as a hobby) the Java Model Railroad Interface (JMRI) Project. JMRI is a set of FLOSS Java tools for configuring and controlling model railroad trains. Matthew Katzer used loopholes in the law to patent ideas that Jacobsen and others had created and publicly discussed first, domain-squatted, tried to embarrass Jacobsen to Jacobsen’s employer, and used part of Jacobsen’s JMRI software in Katzer’s own product without complying with the JMRI license (by not providing the required credit). The JMRI project has a short summary of this unpleasant fight, as well as lots of details and court papers.

What’s impressive is that Bob Jacobsen stuck it out through a very hard series of events. At first the court didn’t seem to understand FLOSS at all, and Jacobsen was handed some very unpleasant defeats. At one point, Jacobsen had to pay over $30,000 of his own money.

But Jacobsen persevered, and won critical rulings and a final settlement that is really a complete victory for him. In 2008 the United States Court of Appeals for the Federal Circuit vacated the district court’s ruling and held that the terms of the Artistic License (a FLOSS license) are enforceable. The court said, “Open source licensing has become a widely used method of creative collaboration that serves to advance the arts and sciences in a manner and at a pace that few could have imagined just a few decades ago”. On February 18, 2010, the parties finally settled. Among other terms, Jacobsen has won $100,000, Katzer is forbidden to use Jacobsen’s software, and the two patents at issue have been disclaimed. What’s more, the rulings stemming from this case have created a precedent that FLOSS licenses are legally enforceable, eliminating a lot of uncertainty, and because there is a final settlement it is not possible to appeal the case. Strictly speaking, the precedents do not automatically apply everywhere in the U.S., but even where they do not strictly apply, they will still carry strong weight.

This result is critically important to FLOSS. If FLOSS developers could not enforce their licenses, the probable result would be that a lot of such software would never be written. The Amici Curiae brief by Creative Commons Corporation et al. and the Software Freedom Law Center Amicus Brief in Jacobsen v. Katzer both do a nice job explaining why getting this ruling right was so important.

So, my hat’s off to Bob Jacobsen. Through his persistence, he’s made the world better for all of us.

path: /oss | Current Weblog | permanent link to this entry

Sat, 09 Jan 2010

California: Open Source Software is Okay!

The California state government has officially declared that it’s okay to use open source software inside the California state government. On January 7, 2010, the California Office of the State Chief Information Officer (OCIO) released Information Technology Policy Letter (ITPL) 10-01, titled “Open Source Software Policy”. A key purpose of ITPL 10-01 is to “formally establish the use of Open Source Software (OSS) in California state government as an acceptable practice”, and the first sentence of its policy statement is that “The OCIO permits the use of OSS”. It even includes the ten-point open source definition (OSD) as promulgated by the Open Source Initiative, to make sure that there’s no misunderstanding.

I think this is a big deal. Officially saying “it’s okay to use free/libre/open source software (FLOSS)” is really important before FLOSS can get widespread use in governments. Most technologists already understand the potential advantages of FLOSS, but they encounter a lot of resistance when they try to use or develop FLOSS in large organizations like governments. Far too many middle managers are instinctively afraid of change from “the way we’ve always done it”. For example, they may be afraid of unseen problems, or afraid their bosses will rake them over the coals later. Far too often the middle managers have misunderstandings about FLOSS, too. For example, many managers still believe the myth that “you can’t get support” and are unaware of the many companies that do provide such support. Companies that make competing proprietary products are delighted (of course) when governments don’t consider their competition… but in an era of tight budgets, it doesn’t make sense for governments to ignore competing (and often less expensive) products. When top officials give official “top cover” permission to consider FLOSS, technologists and middle managers are far more likely to consider it fairly and honestly.

Also, the fact that it’s California matters. The economy of California is larger than that of most countries (if it were a country, it would rank third through tenth in the world, depending on how you measure it). Anything the state of California does can influence other states and countries; acts like this further legitimize the use of Free/Libre/Open Source Software (FLOSS).

Of course, the state of California isn’t the only government organization to release a memo officially declaring that it’s okay to use free/libre/open source software (FLOSS). Just looking inside the U.S., the U.S. DoD did this in 2003, the Office of Management and Budget (OMB) released a somewhat similar memo in 2004 that applied to the entire U.S. federal government, the U.S. Navy did this in 2007, and the U.S. DoD released clarifying guidance in 2009 re-emphasizing this point. And that’s only a few examples from U.S. government organizations; the examples from around the world are legion. It’s really difficult to get people to change what they do… as you can tell from the number of times that various U.S. federal government organizations have had to state and re-state this policy. Still, such statements really do have an effect. Official policy statements that FLOSS may be used, such as the one California just released, are a necessary first step to changing things from “the way we’ve always done things”.

path: /oss | Current Weblog | permanent link to this entry