The Most Important Software Innovations

David A. Wheeler

Revised 2021-12-09; First version 2001-08-01

Introduction

Too many people confuse software innovations with other factors, such as the increasing speed of computer and network hardware. This paper tries to end the confusion by identifying the most important innovations in software, setting aside hardware advances and products that didn’t embody significant new software innovations. It presents the criteria used to select the most important software innovations, the sources consulted, and the software innovations themselves; it then discusses software patents and what is not an important software innovation, and closes with conclusions.

The results may surprise you.

Criteria

This paper lists the “most important software innovations,” so we first need to clarify what each of those words means:
  1. To be a “most important” innovation, an innovation has to be an idea that is very widely used and is critically important where it applies. Innovations that are only used by a very small proportion of software (or software users) aren’t included.

  2. To be a “software” innovation, it has to be a technological innovation that impacts how computers are programmed (e.g., an approach to programming or an innovative way to use a computer).

    I’m intentionally omitting computer hardware innovations or major hardware events that don’t involve software innovation. People seem to confuse hardware and software, so by intentionally not including hardware, we get a different (and interesting) picture we do not see otherwise. For example, a court case in 1973 determined that John Vincent Atanasoff is the legal inventor of the electronic digital computer, but that’s a hardware innovation and thus not included. I’ve omitted other strictly hardware innovations such as the transistor (1947) and integrated circuits (1958). I’ve also omitted Ethernet, which Bob Metcalfe developed in 1973, for the same reason (Ethernet used early Internet Protocol software, so the software ideas already existed.)

    I’ve omitted inventions that aren’t really technological inventions (e.g., social or legal innovations), even if they are important for software technology and/or are widespread. For example, the concept of a copylefting license is an innovative software licensing approach that permits modification while forbidding the software from becoming proprietary; it is used by a vast array of software via the General Public License (GPL). The first real copylefting license (the Emacs Public License) was developed by Richard Stallman in 1985 - but since copyleft is really a social and legal invention, not a technological one, it’s not included in this list. Also, the “smiley” marker :-) is not included - it’s certainly widespread, but it’s a social invention, not a technological one.

  3. We also have to define “innovation” carefully. For purposes of this paper, an “innovation” is not simply combining two functions into a single product - that’s “integration” and usually doesn’t require any significant innovation (just hard work). In particular, integrating functions to prevent customers from using a competitor’s product is “predation,” not “innovation.” An “innovation” is not a product, either, although a product may embody or contain innovations. Re-implementing a product so that it does the same thing on a different computer or operating system isn’t an innovation, either. An innovation is a new idea. And in this paper, what’s meant is an important new idea in software technology.

As a result, you may be surprised by the number of events in computing history that are not on this list. Most software products are not software innovations by themselves, since most products are simply re-implementations of another idea. For example, WordStar was the first word processor for microcomputers, but it wasn’t the first word processor - WordStar was simply a re-implementation of a previous product on a different computer. Later word processors (such as Word Perfect and Microsoft Word) were re-implementations by other vendors, not innovations themselves. Some major events in computing are simply product announcements of hardware, and have nothing to do with innovations in software. Thus, while the IBM PC and Apple ][’s appearances were important to the computing world, they didn’t represent an innovation in software - they were simply lower-cost hardware, with some software written for them using techniques already well-known at the time.

Occasionally a product is the first appearance of an innovation (e.g., the first spreadsheet program), in which case the date of the product’s release is the date when the idea was announced to the public. Some innovations are innovative techniques, which aren’t directly visible to software users but have an extraordinary effect on software development (e.g., subroutines and object-orientation) - and these are included in this list of software innovations. For the more debatable entries, I’ve tried to discuss why I believe they should be included.

Matt Ridley's How Innovation Works (and Why it Flourishes in Freedom) (2020) is an interesting book on invention and innovation. Ridley defines innovation as "finding new ways to apply energy to create improbable things, and to see them catch on... developing an invention to the point where it catches on because it is sufficiently practical, affordable, reliable and ubiquitous to be worth using... [and] applying new ideas to raising living standards" (page 4). From that point of view, this article focuses on key software inventions that were applied and thus became important innovations. Ridley walks through history noting various innovations (his term), and notes that "most innovation is a gradual process" (page 9 and 240+). He also argues that it is the "people who find ways to drive down the costs and simplify the product who make the biggest difference" (page 246), not just whoever came up with an idea, and that "the main ingredient [for] innovation is freedom" (page 359). His terminology is not identical to this paper's, but I think the connections are clear enough.

I’ve tried to identify and date the earliest public announcement of an idea, rather than its embodiment in some product. The first implementation and first widespread implementation are often noted as well. “Public” in this case means, at least, an announcement to a wide inter-organizational audience. In some cases identifying a specific date or event is difficult; I welcome references to earlier works. For example, sometimes it is difficult to identify a “first” because an idea forms gradually through the actions of many.

Sources

Since I haven’t found some sort of consensus on what the most important computing innovations are, I’ve developed this list by selecting events from many other sources. I used many sources so I wouldn’t miss anything important; in particular, I used IEEE Computer’s historical information (including their 50-year timeline), the Virtual Museum of Computing, Hobbes’ Internet Timeline, Paul E. Ceruzzi’s A History of Modern Computing, and John Naughton’s A Brief History of the Future. I also used Janet Abbate’s Inventing the Internet in a few cases, but I tried to double-check everything in that source because (unfortunately) Abbate makes several errors that make its use as a source suspect. For example, Abbate (page 22) doesn’t realize that although both Strachey and John McCarthy used the same word (“time-sharing”) for their ideas, they didn’t mean the same thing at all. Also, Abbate (page 201) claims Steve Bellovin was at Duke, but this is wrong. I’ve also examined other sources, such as James Durham’s History-Making Components and A History and Future of Computing. Note that, in general, these sources mix computer hardware and software together. Another source is the “Software Pioneers” conference (June 28-29, 2001, Bonn) sponsored by Software Design and Management. Many specific sources such as “OSI and TCP: A History” by Peter H. Salus were checked too. The Association for Computing Machinery (ACM) Software Systems Award was helpful, but this rewards the developers of influential software systems; the recipients are certainly worthy, but in many cases the influential software systems represent good engineering and refinement of already-existing ideas, instead of being the first implementation of a new idea themselves. As is discussed further later, we need to distinguish between innovations and important products; a product can be important or useful without being innovative.

If you find computing history interesting, you might also enjoy the 20 Year Usenet Timeline, a Brief History of Hackerdom, and Landley’s Computer history page, though they aren’t significant sources for the material here.

After I started identifying innovations, many asked me about software patents. I have done what I can to find applicable patents, though the problems are legion. Software patents are often incomprehensible, even by software experts. Search systems cannot find all relevant software patents; unlike drugs, there is no good indexing system, either for software patents or for software ideas in general (different words can be used for the same idea). This inability to find patents causes many other problems. Software patents are often granted for prior art, even though they are not supposed to be. Indeed, someone else can hear of an idea (possibly years later), file a software patent, and the patent office is likely to grant it. The patent office may even grant a software patent on something already patented.

Yet if the real question about software patents is, "do patents provide an incentive to innovate in software", then things can be simplified. If that were true, it is reasonable to presume that (a) the innovator (or his company) would file the patent, (b) it would have a form corresponding to the original innovation, and (c) the innovator would file within the legal grace period (12 months from the date of public knowledge). Also, patents generally were not granted on software before 1980. My thanks to Jim Bessen for these insights. These factors make patent searching far more tractable, e.g., using Google's advanced patent search. My thanks to many, including Jim Bessen, for searching for relevant patents on these key innovations. Where found, this article identifies the US patent number. If no patent has been identified, that means that people have looked but not found a plausibly-valid patent for it. The section on software patents discusses this further.

Since this paper was originally published, I’ve received several additional suggestions which rounded out this paper. My thanks to those who have provided those suggestions. It’s quite possible this paper is still missing some important innovations; please contact me if you have a correction or addition (dwheeler, at dwheeler.com, no spam please).

The Most Important Software Innovations

Here is a list of the most important software innovations:
(Each entry gives the year, the innovation, and comments.)
1837 - Software (Babbage’s Analytical Engine): Charles Babbage was an eminent scientist; he was elected Lucasian Professor of Mathematics at Cambridge in 1828 (the same chair held by Isaac Newton and Stephen Hawking). In 1837 he publicly described an analytical engine, a mechanical device that would take instructions from a program instead of being designed to do only one task. Babbage had apparently been thinking about the problem for some time before this; as with many innovations, pinning down a single date is difficult. This appears to be the first time the concept of software (computing instructions for a mechanical device) is seriously contemplated. Babbage even notes that the instructions can be reused (a key concept in how today’s software works). In 1842 Ada Augusta, Countess of Lovelace, released a translation of “Sketch of the Analytical Engine” with extensive commentary of her own. That commentary has a clear description of computer architecture and programming that is quite recognizable today, and Ada is often credited as being the “first computer programmer”. Unfortunately, due to many factors the Analytical Engine was never built in Babbage’s lifetime, and it would be many years before general-purpose computers were built. No patent identified.
1854 - Boolean Algebra: George Boole published “An Investigation of the Laws of Thought”. His system for symbolic and logical reasoning became the basis of computing. No patent identified.
1936-37 - Turing Machines: Alan Turing wrote his paper “On computable numbers, with an application to the Entscheidungsproblem”, where he first describes Turing Machines. This mathematical construct showed the strengths - and fundamental limitations - of computer software. For example, it showed that there were some kinds of problems that could not be solved. No patent identified.
1945 - Stored program: In the “First Draft of a Report on the EDVAC”, the concept of storing a program in the same memory as data was described by John von Neumann. This is a fundamental concept for software manipulation that all software development is based on. Eckert, Mauchly, and Konrad Zuse have all claimed prior invention, but this is uncertain and this draft document is the one that spurred its use. Alan Turing published his own independent conception, but went further in showing that computers could be used for the logical manipulation of symbols of any kind. The approach was first implemented (in a race) by the prototype Mark I computer at Manchester in 1948. No patent identified.
1945 - Hypertext: Hypertext was first described in Vannevar Bush’s “As we may think”, though of course it was heavily influenced by previous library-related work (e.g., the work of Paul Otlet). The word “hypertext” itself was later coined by Ted Nelson in his 1965 article A File Structure for the Complex, the Changing, and the Indeterminate (20th National Conference, New York, Association for Computing Machinery). No patent identified.
1951 - Subroutines: Maurice Wilkes, Stanley Gill, and David Wheeler (not me) developed the concept of subroutines in programs to create re-usable modules and began formalizing the concept of software development. No patent identified.
1952 - Assemblers: Alick E. Glennie wrote “Autocoder”, which translated symbolic statements into machine language for the Manchester Mark I computer. Autocoding later came to be a generic term for assembly language programming. No patent identified.
1952 - Compilers: Grace Murray Hopper described techniques to select (compile) pre-written code segments in correspondence with codes written in a high level language, i.e., a compiler. Her 1952 paper is titled “The Education of a Computer” (Proc. ACM Conference), and is reprinted in the Annals of the History of Computing (Vol. 9, No. 3-4, pp. 271-281), based on her 1951-1952 effort to develop A-0. She was later instrumental in developing COBOL. A predecessor of the compiler concept was developed by Betty Holberton in 1951, who created a “sort-merge generator”. No patent identified.
1954 - Practically Compiling Human-like Notation (FORTRAN): John Backus proposed the development of a programming language that would allow users to express their programs directly in commonly understood mathematical notation. The result was Fortran. The first Fortran implementation was completed in 1957. There were a few compilers before this point; languages such as A-0, A-1, and A-2 inserted subroutines, the Whirlwind I included a special-purpose program for solving equations (but couldn’t be used for general-purpose programming), and an “interpreter” for the IBM 701 named Speedcoding had been developed. However, Fortran used notation far more similar to human notation, and its developers developed many techniques so that, for the first time, a compiler could create highly optimized code [Ceruzzi 1998, 85]. No patent identified.
1955 - Stack Principle: Friedrich L. Bauer and Klaus Samelson developed the “stack principle” (“the operation postponed last is carried out first”) at the Technische Universität München. This served as the basis for compiler construction, and was naturally extended to all bracketed operation structures and all bracketed data structures. No patent identified.
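
As a rough illustration (a minimal Python sketch of my own, not from Bauer and Samelson's work), the stack principle shows up whenever brackets are checked for proper nesting: the bracket postponed last must be resolved first.

    # Minimal sketch of the "stack principle": the operation postponed last
    # is carried out first. Here, checking that brackets nest correctly.

    def brackets_balanced(text):
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in text:
            if ch in '([{':
                stack.append(ch)          # postpone: remember the open bracket
            elif ch in pairs:
                if not stack or stack.pop() != pairs[ch]:
                    return False          # the last postponed bracket must match first
        return not stack

    if __name__ == '__main__':
        print(brackets_balanced('a * (b + c[0])'))   # True
        print(brackets_balanced('f(x))('))           # False

The same stack discipline underlies expression evaluation in compilers, where postponed operations are carried out once their operands are complete.
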
1957 - Time-sharing: In Fall 1957 John McCarthy (MIT, US) began proposing time-sharing operating systems, where multiple users could share a single computer (with each user appearing to control an entire computer). On January 1, 1959, he wrote a memo to Professor Philip Morse proposing that this be done for an upcoming machine. This idea caused immense excitement in the computing field. It’s worth noting that Christopher Strachey (National Research Development Corporation, UK) published a paper on “time-sharing” in 1959, but his notion of the term was having programs share a computer, not that users would share a computer (programs had already been sharing computers, e.g., in the SAGE project). [Naughton 2000, 73] By November 1961 Fernando Corbató (also at MIT) had a four-terminal system working on an IBM 709 mainframe. Soon afterwards CTSS (Compatible Time Sharing System) was running, the first effective time-sharing system. Even in today’s systems that aren’t shared by different users, these mechanisms are a critical support for computer security. No patent identified.
1958-1960 - List Processing (LISP): McCarthy (then at MIT) developed the LISP programming language for supporting list processing; it continues to be critical for Artificial Intelligence and related work, and is still widely used. List processing was not completely new at this point; at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, Newell, Shaw and Simon described IPL 2, a list processing language for Rand Corporation’s JOHNNIAC computer. However, McCarthy realized that a program could itself be represented as a list, refining the approach into a flexible system fundamentally based on list processing. In 1956-1958 he began thinking about what would be needed for list processing, with significant work beginning in 1958 with hand-simulated compilations. LISP demonstrated other important innovations used in many later languages, including polymorphism and unlimited-extent data structures. No patent identified.
1959-1960 - Automatic memory management (garbage collection): One of the key capabilities originally developed in Lisp is automatic memory management aka automatic garbage collection. This capability is now included in most programming languages, including JavaScript, Java, Python, and others. This was developed in 1959, and is described in John McCarthy's paper "Recursive functions of symbolic expressions and their computation by machine, Part I", Communications of the ACM, Volume 3, Issue 4, April 1960, pp. 184-195.
1959-1960 - Vendor-Independent Exchange Standards for Software (COBOL and ASCII): In the early days of computing every vendor had their own incompatible method for creating programs and storing data. IBM, for example, encoded characters using systems such as BCD and EBCDIC. But this created terrible problems for users, who could not easily exchange information and were kept hostage by the various vendors. The solution was to create vendor-independent exchange standards.

The basic idea of creating standards was not new, even then. But creating standards for something ephemeral like software was new, so vendor-independent exchange standards for software are being counted as an innovation. Such standards are critical; standards finally made it possible for users to choose and change their suppliers, and to work together even when using different suppliers. (Even today, people fail to understand the need for standards and thus fall victim to vendor lock-in.) Two of the first efforts to create such standards were COBOL (for exchanging programs) and ASCII (for exchanging text).

In 1959, an industry-wide team was assembled to formulate a standardized business programming language, Common Business Oriented Language (COBOL). The initial specification was presented in April 1960, and was developed in cooperation with computer manufacturers, users (including the U.S. Department of Defense) and universities. Soon afterwards, in May 1962, a committee began developing a standard for the Fortran language.

American Standard Code for Information Interchange (ASCII) is a way of encoding characters as numbers, so that there is a standard number to represent each character of text. Work on ASCII began in 1960, and it was first published in 1963. For many years ASCII competed with the vendor-specific EBCDIC, but eventually the open vendor-neutral ASCII beat the vendor-specific format (a pattern that often repeated over the years). No patent identified.

1960 - Packet-Switching Networks: In 1960 Paul Baran (RAND) proposed a message switching system that could forward messages over multiple paths. Unlike previous approaches (which required large storage capacities at each node), his approach used higher transmission speeds, so each node could be small, simple, and cheap. Baran’s approach routed messages to their destination instead of broadcasting them to all, and these routing decisions were made locally. In 1961 Leonard Kleinrock (MIT) published “Information Flow in Large Communication Nets,” the first larger work examining and defining packet-switching theory. In 1964 Paul Baran wrote a series of papers titled “On Distributed Communications Networks” that expanded on this idea. This series described how to implement a distributed packet-switching network with no single outage point (so it could be survivable). In 1966 Donald Davies (NPL, UK) publicly presented his ideas, which he termed “packet switching”, and learned that Baran had already invented the idea (though we still use Davies’ term “packet switching”). Davies started the “Mark I” project in 1967 to implement it, and ARPANET planning (the ancestor of the Internet) also began in 1967.

It’s worth noting here that in a similar time period, ARPA was looking for solutions to some of the problems that packet-switching solves. J.C.R. Licklider, head of two ARPA departments for a time, had formed the jokingly named “Intergalactic Computer Group” in the early 1960s. In 1963 he wrote a memo to its members pleading for standardization among the various computer systems so they could easily communicate data between them, a memo that spurred on the search for and implementation of ways to link computers together. In 1965 (after he left ARPA) Licklider wrote the book “Libraries of the future”, which also hinted at the Internet and World Wide Web of the future; Licklider said that “the concept of a ‘desk’ may have changed from passive to active: a desk may be primarily a display-and-control station in a telecommunication-telecomputation system - and its most vital part may be the cable (‘umbilical cord’) that connects it [into the] net [to obtain] everyday business, industrial, government, and professional information, and perhaps, also to news, entertainment, and education.”

On September 2, 1969, UCLA professor Len Kleinrock, along with graduate students Stephen Crocker and Vinton Cerf, sent the first test data between two ARPA computers in a system that would eventually become the Internet. These packet-switching concepts are the fundamental basis of the Internet, defining how the Internet uses packet-switching, though it would be several years before the TCP/IP protocols we now use would be developed. Note that TCP/IP and the Internet were not themselves designed to survive nuclear attack or similar threats. Instead, the later developers of TCP/IP needed their network to have lots of nice properties, and the packet-switching concept created by Baran (which was developed to be survivable) turned out to have the properties they needed. No patent identified.

1960-1961 - Quicksort sorting algorithm: In 1960 C. A. R. (Tony) Hoare developed the Quicksort algorithm, which was eventually published in 1961. Sorting is an extremely common operation, and this algorithm performed significantly better than the algorithms typically used at the time. Perhaps even more importantly, it inspired a great deal of research into improved algorithms, and showed many that recursive algorithms could be extremely useful. Today there are many other useful sorting algorithms, such as heapsort and merge sort, but even now quicksort is often used for sorting data. No patent identified.
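
As a rough illustration of the idea (a minimal Python sketch that favors clarity over Hoare's in-place partitioning): pick a pivot, partition the data around it, and recursively sort each part.

    # Minimal sketch of quicksort: partition around a pivot, then sort each part.
    # (Hoare's original version partitioned in place; this version favors clarity.)

    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        smaller = [x for x in items if x < pivot]
        equal   = [x for x in items if x == pivot]
        larger  = [x for x in items if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    if __name__ == '__main__':
        print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
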
1964 - Word Processing: The first “word processor” was IBM’s MT/ST (Magnetic Tape/Selectric Typewriter), which combined the features of the Selectric typewriter with a magnetic tape drive. For the first time, typed material could be edited without having to retype the whole text or chop up a coded copy. Later, in 1972, this would evolve into a word processing system we would recognize today. No patent identified.
1964 - The Mouse: The Mouse was invented in 1964 by Douglas C. Engelbart at SRI, using funding from the U.S. government’s ARPA program [Naughton 2000, 81]. Although this could be viewed as a hardware innovation, it isn’t much of a hardware innovation (it’s nothing more than an upside-down trackball). The true innovations were in the user interface approaches that use the mouse, and those are entirely software innovations. It was patented (US #3541541), though not until 1967, and this never resulted in much money for the inventor.
1964 - System Virtual Machines: A system virtual machine (VM), aka full virtualization VM, enables a single computer to appear to be multiple real computers and provide the functionality needed to install and execute an entire operating system, including an unmodified "guest" operating system. These are often hardware/software combinations, and the underlying software that implements this is called a virtual machine monitor (VMM) or hypervisor. In 1964 the IBM Cambridge Scientific Center began development of the CP-40 mainframe, an experimental machine designed to implement virtual machines. The CP-40 was only used in labs (never sold to customers), but it later evolved into the CP-67, the first commercial mainframe to support system virtual machines. History of Virtualization has more information. Later products, such as VMware's, exploited the fact that microprocessors had become so powerful that it was useful to share them. Later on, operating-system-level virtualization (aka containers) would be developed, but we count that here as a separate development.
1965 - Semaphores: E. W. Dijkstra defined semaphores for coordinating multiple processes. The term derives from railroad signals, which in a similar way coordinate trains on railroad tracks. No patent identified.
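
A minimal sketch of the idea (using Python's threading module; the worker function and thread names are invented for illustration): a counting semaphore limits how many threads may hold a resource at once.

    # Minimal sketch of semaphore use: at most 2 threads hold the resource at a time.
    import threading, time

    sem = threading.Semaphore(2)

    def worker(name):
        with sem:                      # P operation (acquire)
            print(name, "entered")
            time.sleep(0.1)
            print(name, "leaving")     # V operation (release) happens on exit

    threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
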
1965 - Hierarchical directories, program names as commands (Multics): The Multics project spurred several innovations. Multics was the first operating system to sport hierarchical directories, as described in a 1965 paper by Daley and Neumann. Multics was also the first operating system where, in an innovation developed by Louis Pouzin, what you type at command level is the name of a program to run. This led to related innovations like working directories and a shell. In earlier systems, like CTSS, adding a command required recompiling; to run your own program you had to execute a system command that then loaded and ran the program. Louis Pouzin implemented a very limited form of this idea on CTSS as “RUNCOM”, but the full approach was implemented on Multics with his help. Although fewer ordinary users use a command line interface today, these are still important for many programmers. The Multicians.org site has more information on Multics features. No patent identified.
1965 - Unification: J.A. Robinson developed the concept of “unification”. This concept - and algorithms that implement it - became the basis of logic programming. No patent identified.
1965 - Single-computer Email: As best as can be determined, email between users on a single computer was developed in 1965. MIT’s CTSS computer had a message feature in 1965 [Abbate 1999, page 109], called MAIL. It also had another early program on the same computer called SNDMSG. This is confirmed in "The history of email" by Ian Peter. Email distributed across a network is much more powerful, but single-computer email laid the groundwork.
1966 - Structured Programming: Böhm and Jacopini defined the fundamentals of “structured programming”, showing that programs could be created using a limited set of control structures (looping, conditional, and simple sequence) - and thus that the “goto” statement was actually not essential. Edsger Dijkstra’s 1968 letter “GO TO Statement Considered Harmful” popularized the use of this approach, claiming that the “goto” statement produced code that was difficult to maintain. No patent identified.
1966 - Spelling Checker: Les Earnest of Stanford developed the first spelling checker circa 1966. He later improved it around 1971 and this version quickly spread (via the ARPAnet) throughout the world. Earnest noted that 1970s participants on the ARPAnet “found that both programs and data migrated around the net rather quickly, to the benefit of all” - an early note of the amplifying effect of large networks on OSS/FS development. No patent identified.
1966 - Pseudo-Code (p-Code) Machine (in BCPL): In computer programming, “a pseudo-code or p-code machine is a specification of a CPU whose instructions are expected to be executed in software rather than in hardware (i.e., interpreted).” Basic Combined Programming Language (BCPL) is a computer programming language designed by Martin Richards of the University of Cambridge in 1966. He developed a way to make it unusually portable, by splitting the compiler into two parts: a compiler into an intermediate pseudo-code (which he called O-code), and a back-end that translated that into the actual machine code. Since the intermediate code could be exchanged between arbitrary machines, it enabled portability. Urs Ammann, a student of Wirth’s at the Swiss Federal Institute of Technology Zurich, used the same approach in the 1970s to implement Pascal; this intermediate pseudo-machine code was called p-code, and popularized the technique (see Wirth’s “Recollections about the development of Pascal”). Java is based on this fundamental approach, which enables compiled code to be run unchanged on different computer architectures (see “Java’s Forgotten Forbear”). Even many text adventure games have been built with this approach, most famously the Z-machine used to implement many Infocom games (such as Zork). BCPL also significantly influenced the C programming language, including its use of curly brackets {...}. No patent identified.
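
As a rough illustration of the general approach (a toy Python sketch, not O-code or UCSD p-code), a pseudo-code machine is just a loop that interprets simple instructions, so any machine that can run the interpreter can run the compiled program unchanged.

    # Toy pseudo-code (p-code) interpreter: a stack machine with a few instructions.
    # A real p-code machine (O-code, UCSD p-code, the JVM) works on the same principle.

    def run(program):
        stack = []
        for op, arg in program:
            if op == 'PUSH':
                stack.append(arg)
            elif op == 'ADD':
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == 'MUL':
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == 'PRINT':
                print(stack.pop())

    # Pseudo-code a compiler front end might emit for: print (2 + 3) * 4
    run([('PUSH', 2), ('PUSH', 3), ('ADD', None),
         ('PUSH', 4), ('MUL', None), ('PRINT', None)])
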
1967 - Object Oriented Programming: Object-oriented (OO) programming was introduced to the world by the Norwegian Computing Centre’s Ole-Johan Dahl and Kristen Nygaard when they released Simula 67. Simula 67 introduced constructs that much later became common in computer programming: objects, classes, virtual procedures, and inheritance. OO programming was later popularized in Smalltalk-80, and still later C++, Java, and C#. This approach proved especially invaluable later when graphical user interfaces became widely used. No patent identified.
1967 - Separating Text Content from Format: The first formatted texts manipulated by computer had embedded codes that described how to format the document (“font size X”, “center this text”). In contrast, in the late 1960s, people began to use codes that described the meaning of the text (such as “new paragraph” or “title”), with separate information on how to format it. This had many advantages, such as allowing specialists to devise formats and easing searching, and influenced later technologies such as SGML, HTML, and XML. Although it is difficult to identify a specific time for this idea, many credit the use of this approach (sometimes called “generic coding”) to a presentation made by William Tunnicliffe, chairman of the Graphic Communications Association (GCA) Composition Committee, during a meeting at the Canadian Government Printing Office in September 1967 (his topic was the separation of the information content of documents from their format). No patent identified.
1968 - The Graphical User Interface (GUI): Douglas C. Engelbart gave a 90-minute staged public presentation / demonstration of a networked computer system at the 1968 Fall Joint Computer Conference in San Francisco. His presentation, “A Research Center for Augmenting Human Intellect”, was the first public appearance of the mouse, windows, hypermedia with object linking and addressing, and video teleconferencing (it has sometimes been called the “mother of all demos”). These are the innovations that are fundamental to the graphical user interface (“GUI”). This kind of interface made it much easier to implement a driving idea of J.C.R. “Lick” Licklider, who envisioned a human-computer symbiosis. One of Licklider’s central ideas was that “a close coupling between humans and computers would result in better decision-making. In this novel partnership, computers would do what they excelled at - calculations, routine operations, and the rest - thereby freeing humans to do what they in turn did best. The human-computer system would thus be greater than the sum of its parts.” (Summary from [Naughton, page 71]; see “Man-Computer Symbiosis”, IRE Transactions on Human Factors in Electronics, vol. HFE-1, March 1960, pp 4-11.) No patent identified.
1968 - Regular Expressions: Ken Thompson published in the Communications of the ACM, June 1968, the paper “Regular Expression Search Algorithm,” the first known computational use of regular expressions. Regular expressions had been studied earlier in mathematics, based on work by Stephen Kleene. Thompson later embedded this capability in the text editor ed to implement a simple way to define text search patterns. ed’s command ‘g/regular expression/p’ was so useful that a separate utility, grep, was created to print every line in a file that matched the pattern defined by the regular expression. Later, many libraries included this capability, and the widely-used Perl language makes regular expressions a fundamental underpinning for the language. See Jeffrey E.F. Friedl’s Mastering Regular Expressions, 1998, pp. 60-62, for more about this history. Patent US3568156A (filed 1967) by Ken Thompson covers a particular approach for implementing this. However, I have not found any evidence that turning this into a patent enabled the technology in any way. For one thing, as noted in "A Regular Expression Matcher" by Brian Kernighan (code by Rob Pike), "Ken's original matcher was very fast because it combined two independent ideas... Matching code in later text editors that Ken wrote, like ed, used a simpler algorithm that backtracked when necessary. In theory this is slower but the patterns found in practice rarely involved backtracking, so the ed and grep algorithm and code were good enough for most purposes."
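
As a small illustration (using Python's re module rather than Thompson's implementation; the lines and pattern are invented), the grep-like operation of printing every line that matches a pattern looks like this:

    # grep-like use of regular expressions: print each line that matches a pattern.
    import re

    lines = ["error: disk full", "ok: all good", "error: timeout"]
    pattern = re.compile(r"^error:")   # lines beginning with "error:"

    for line in lines:
        if pattern.search(line):
            print(line)
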
1969-1970 - Generalized Markup Language (GML), the ancestor of SGML, HTML, and XML: In 1969 Charles F. Goldfarb, Ed Mosher, and Ray Lorie developed what they called a “Text Description Language” to enable integrating a text editing application, an information retrieval system, and a page composition program. The documents had to be selectable by query from a repository, revised with the text editor, and returned to the data base or rendered by the composition program. This was an extremely advanced set of capabilities for its time, and one that simple markup approaches did not support well. They solved this problem by creating a general approach to identifying different types of text, supporting formally-defined document types, and creating an explicit nested element structure. Their approach was first mentioned in a 1970 paper, renamed after their initials (GML) in 1971, and use began in 1973. GML became the basis for the Standard Generalized Markup Language (SGML), ISO Standard 8879. While SGML itself is less-used today, descendants of SGML are used all over the world. HTML, the basis of the World Wide Web, is an application of SGML, and the widely-used XML (another critically important technology) is a simplified form of SGML. For more information see The Roots of SGML -- A Personal Recollection and A Brief History of the Development of SGML. These standard markup languages, particularly their descendants HTML and XML, have been critical for supporting standard interchange of data that support a wide variety of display devices and querying from a vast store of documents. No patent identified.
1970 - Relational Model and Algebra (SQL): E.F. Codd introduced the relational model and relational algebra in a famous article in the Communications of the ACM, June 1970. This is the theoretical basis for relational database systems and their query language, SQL. The first commercial relational database, the Multics Relational Data Store (MRDS), was released in June 1976. No patent identified.
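
A minimal sketch of the relational idea (using Python's built-in sqlite3 module; the table and data are invented for illustration): the query states what is wanted, and the database system decides how to compute it.

    # Minimal relational example: a table, some rows, and a declarative SQL query.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
    db.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                   [("Ada", "Eng", 120), ("Grace", "Eng", 130), ("Edsger", "Research", 110)])

    # The query states *what* is wanted; the system decides *how* to compute it.
    for row in db.execute("SELECT dept, AVG(salary) FROM employees GROUP BY dept"):
        print(row)
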
1970 - Backpropagation (deep learning of neural networks): The backpropagation algorithm is a key algorithm that enables deep learning of neural networks (that is, it permits learning even when there are more than 2 or 3 layers). In 1970 Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions. This corresponds to backpropagation. In 1974 Werbos mentioned the possibility of applying this principle to artificial neural networks, and in 1982 he applied Linnainmaa's AD method to neural networks in the way that is used today. In 1986 Rumelhart, Hinton and Williams showed experimentally that this method can generate useful internal representations of incoming data in hidden layers of neural networks. For more information, see Backpropagation (Wikipedia). For some notes about its limitations, see "Is AI Riding a One-Trick Pony?" by James Somers.
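
As a rough numeric illustration (a tiny Python sketch with made-up weights and a single training example, not a realistic network), backpropagation is just the chain rule applied layer by layer, from the output back toward the input:

    # Tiny backpropagation sketch: one input, one hidden unit, one output,
    # trained for a single step on one example via the chain rule.
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x, target = 1.0, 0.0          # one training example (made up)
    w1, w2 = 0.5, -0.3            # weights (biases omitted for brevity)
    lr = 0.1                      # learning rate

    # Forward pass
    h = sigmoid(w1 * x)           # hidden activation
    y = sigmoid(w2 * h)           # output
    loss = 0.5 * (y - target) ** 2

    # Backward pass (chain rule, propagating the error from output to input)
    dy = (y - target) * y * (1 - y)       # dLoss/d(w2*h)
    dw2 = dy * h                          # dLoss/dw2
    dh = dy * w2 * h * (1 - h)            # dLoss/d(w1*x)
    dw1 = dh * x                          # dLoss/dw1

    # Gradient descent step
    w1 -= lr * dw1
    w2 -= lr * dw2
    print(round(loss, 4), round(dw1, 4), round(dw2, 4))

The same bookkeeping, repeated over many layers and many examples, is what lets deep networks learn.
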
1971 - Distributed Network Email: Richard Watson at the Stanford Research Institute suggested that a system be developed for transferring mail from one computer to another via the ARPANET network. Ray Tomlinson of Bolt, Beranek and Newman (BBN) implemented the first email program to send messages across a distributed network (ARPANET), derived from an intra-machine email program and a file-transfer program. This is confirmed by "The history of email" by Ian Peter. This quickly became the ARPANET’s most popular and influential service. Note that Tomlinson defined the “@” convention for email addresses. Email between users on a single computer existed before, but email that can span computers is far more powerful. In 1973 the basic Internet protocol for sending email was formed (though RFC 630 wasn’t released until April 1974), and in 1975 the Internet mail headers were first officially defined [Naughton 2000, 149]. In 1975, John Vittal released the email program MSG, the first email program with an “answer” (reply) command [Naughton 2000, 149]. No patent identified.
1972 - Modularity Criteria: David Parnas published a definition and justification of modularity via information hiding. No patent identified.
1972 - Screen-Oriented Word Processing: Lexitron and Linolex developed the first word processing system that included video display screens and tape cassettes for storage; with the screen, text could be entered and corrected without having to produce a hard copy. Printing could be delayed until the writer was satisfied with the material. It can be argued that this was the first “word processor” of the kind we use today (see a brief history of word processing for more information). Other word processors were developed later. In 1979, Seymour Rubenstein and Rob Barnaby released “WordStar”, the first commercially successful word processing software program produced for microcomputers, but this was simply a re-implementation of a previous concept. In March of 1980, SSI*WP (the predecessor of Word Perfect) was released. No patent identified.
1972 - Pipes: Pipes are “pipelines” of commands, allowing programs to be easily “hooked together”. Pipes were originally developed for Unix and widely implemented on other operating systems (including all Unix-like systems and MS-DOS/Windows). M. D. McIlroy insisted on their original implementation in Unix; after a few months their syntax was changed to today’s syntax. Redirection of information pre-existed this point (Dartmouth’s system supported redirection, as did Multics), but it was only in 1972 that they were implemented in a way that didn’t require programs to specially support them and permitted programs to be rapidly connected together. No patent identified.
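
The same idea can be sketched from a program (here using Python's subprocess module on a Unix-like system; the commands are only examples): the output of one program is connected to the input of another, and neither program has to know about the other.

    # Sketch of a pipeline equivalent to the shell command:  ps aux | grep python
    # Neither program was written to know about the other; the pipe connects them.
    import subprocess

    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "python"], stdin=ps.stdout, stdout=subprocess.PIPE)
    ps.stdout.close()                 # let ps get SIGPIPE if grep exits first
    output, _ = grep.communicate()
    print(output.decode())
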
1972 - B-Tree: Rudolf Bayer and Edward M. McCreight published the seminal paper on B-trees, a critical data structure widely used for handling large datasets. No patent identified.
1972, 1976 - Portable operating systems (OS6, Unix): By this date high-level languages had been used for many years to reduce development time and increase application portability between different computers. But many believed entire operating systems could not be practically ported in the same way, since operating systems needed to control many low-level components. This was a problem, since it was often difficult to port applications to different operating systems. Significant portions of operating systems had been developed using high-level languages (Burroughs wrote much of the B5000’s operating system in a dialect of Algol, and later much of Multics was written in PL/I, but both were tied to specific hardware). In 1972 J.E. Stoy and C. Strachey discussed OS6, an experimental operating system for a small computer that was to be portable. In 1973 the fledgling Unix operating system was rewritten in C, a high-level programming language that had just been developed, though at first the primary goal was not general machine portability of the entire operating system. In 1976-1977 the Unix system was modified further to be portable, and the Unix system did not limit itself to being small - it intentionally included significant capabilities such as a hierarchical filesystem and multiple simultaneous users. This allowed computer hardware to advance more rapidly, since it was no longer necessary to rewrite an operating system when a new hardware idea or approach was developed. No patent identified.
1972 - Internetworking using Datagrams (leading to the Internet’s TCP/IP): The Cyclades project began in 1972 as an experimental network project funded by the French government. It demonstrated that computer networks could be interconnected (“internetworked”) by the simple mechanism of transferring data packets (datagrams), instead of trying to build session connections or trying to create highly reliable “intelligent” networks or “intelligent” systems which connected the networks. Removing the requirement for “intelligence” when trying to hook networks together had great benefits: it made systems less dependent on a specific media or technology, and it also made systems less dependent on central authorities to administer them.

At the time, networks were built and refined for a particular media, making it difficult to make them interoperate. For example, the ARPANET protocols (NCP) depended on highly reliable networks, an assumption that broke down for radio-based systems (which used an incompatible set of protocols). NCP also assumed that it was networking specific computers, not networks of networks. The experience of Xerox PARC’s local system (PARC Universal Packet, or PUP), based on Metcalfe’s 1973 dissertation, also showed that “intelligence” in the network was unnecessary - in their system, “subtracting all the hosts would leave little more than wire.”

In June 1973, Vinton Cerf organized a seminar at Stanford University to discuss the redesign of the Internet, where it was agreed to emphasize host-based approaches to internetworking. In May 1974, Vinton Cerf and Robert E. Kahn published “A Protocol for Packet Network Interconnection,” which put forward their ideas of using gateways between networks and packets that would be encapsulated by the transmitting host. This approach would later be part of the Internet.

In 1977, Xerox PARC’s PUP was designed to support multiple layers of network protocols. This approach resolved a key problem of Vinton Cerf’s Internet design group. Early attempts to design the Internet tried to create a single protocol, but this required too much duplication of effort as both network components and hosts tried to perform many of the same functions. By January 1978, Vint Cerf, Jon Postel, and Danny Cohen developed a design for the Internet, using two layered protocols: a lower-level internetwork protocol (IP) which did not require “intelligence” in the network and a higher-level host-to-host transmission control protocol (TCP) to provide reliability and sequencing where necessary (and not requiring network components to implement TCP). This was combined with the earlier approaches of using gateways to interconnect networks. By 1983, the ARPANET had switched to TCP/IP. This layering concept was later expanded by ISO into the “OSI model,” a model still widely used for describing network protocols. Over the years, TCP/IP was refined to what it is today.

The origins of the Internet are actually quite complex, and I am necessarily omitting some detail. Ian Peter maintains that while there were significant contributions of a number of individuals to claims as “fathers of the Internet”, most of these individuals are at pains to point out the crucial involvement of others; he argues that “the Internet really has no owner and no single place of origin” and that “the history of the Internet is better understood as the history of an era”.

In the early 1980s, DARPA sponsored or encouraged the development of TCP/IP implementations for many systems. BSD implemented TCP/IP as open source software, which led to its being available to many. After TCP/IP had become wildly popular, Microsoft added support for TCP/IP to Windows (originally by licensing TCP/IP code from Spider systems as well as using BSD-developed code; Microsoft later rewrote portions).

As an aside, there are two different misconceptions about the Internet and TCP/IP that should be clarified. Some mistakenly claim that the Internet and TCP/IP were specifically created to resist nuclear attacks; this is absolutely not true, since its parent, the ARPANET, was specifically created to share large systems. Yet it’s also a mistake to claim that there was no connection between the Internet and survivable networks; the Internet TCP/IP technology is an internetwork of data packets, and as noted earlier, packet-switching of data packets was created to be survivable in case of disaster.

In 2005, Vinton Cerf and Robert Kahn were awarded the prestigious Turing award for their role in creating the Internet’s basic components, particularly the TCP/IP protocol. One of the reasons given for the adoption of the TCP/IP protocols was that they were unencumbered by patent claims; “Dr. Cerf said part of the reason their protocols took hold quickly and widely was that he and Dr. Kahn made no intellectual property claims to their invention.” No patent identified.

1973 - Font Generation Algorithms: There have been many efforts to create fonts using mathematical techniques; Felice Feliciano worked on doing so around 1460. However, these older attempts generally produced ugly results. In 1973-1974 Peter Karow developed Ikarus, the first program to digitally generate fonts at arbitrary resolution. In 1978, Donald Knuth revealed his program Metafont, which generated fonts as well (this work went hand-in-hand with his work on the open source typesetting program TeX, which is still widely used for producing typeset papers with significant mathematical content). Algorithmically-generated fonts were fundamental to the Type 1 fonts of PostScript and to TrueType fonts as well. Font generation algorithms made it possible for people to vary their font types and sizes to whatever they wanted, and for displays and printers to achieve the best possible presentation of a font. Today, most fonts displayed on screens and printers are generated by some font generation algorithm. No patent identified.
1974 - Monitor: Hoare (1974) and Brinch Hansen (1975) proposed the monitor, a higher-level synchronization primitive; it’s now built into several programming languages (such as Java and Ada). No patent identified.
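
A minimal sketch of the monitor idea (using Python's threading.Condition; the one-slot buffer is an invented example): the lock and its condition variable are packaged together with the data they protect, so callers never manipulate the lock directly.

    # Sketch of a monitor: one lock plus condition variables, packaged with the data
    # it protects (here, a one-slot buffer). Callers never touch the lock directly.
    import threading

    class OneSlotBuffer:
        def __init__(self):
            self._cond = threading.Condition()
            self._item = None

        def put(self, item):
            with self._cond:                       # enter the monitor
                while self._item is not None:
                    self._cond.wait()              # wait until the slot is empty
                self._item = item
                self._cond.notify_all()

        def get(self):
            with self._cond:
                while self._item is None:
                    self._cond.wait()              # wait until the slot is full
                item, self._item = self._item, None
                self._cond.notify_all()
                return item

    buf = OneSlotBuffer()
    t = threading.Thread(target=lambda: buf.put("hello"))
    t.start()
    print(buf.get())
    t.join()
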
1975 - Communicating Sequential Processes (CSP): C. A. R. Hoare published the concept of Communicating Sequential Processes (CSP) in “Parallel Programming: an Axiomatic Approach” (Computer Languages, vol 1, no 2, June 1975, pp. 151-160). This is a critically important approach for reasoning about parallel processes.
1977 - Diffie-Hellman Security Algorithm: The Diffie-Hellman public key algorithm was created and published so that the public could read about it. According to the United Kingdom’s GCHQ, M. J. Williamson had invented this algorithm (or something very similar to it) in 1974, but it was classified, and I’m only counting those discoveries made available to the public. This algorithm allowed users to create a secure communication channel without meeting. US patent #4200770; I have a report that it was later found to be defective in litigation, so if someone could confirm/deny this, that'd be great.
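
A toy illustration of the key agreement (a Python sketch with tiny made-up numbers; real use needs large, carefully chosen parameters and authentication):

    # Toy Diffie-Hellman key agreement with tiny numbers (insecure; real use needs
    # large, carefully chosen parameters and protection against active attackers).
    p, g = 23, 5                      # public prime modulus and generator

    a = 6                             # Alice's secret
    b = 15                            # Bob's secret

    A = pow(g, a, p)                  # Alice sends A = g^a mod p
    B = pow(g, b, p)                  # Bob sends   B = g^b mod p

    alice_key = pow(B, a, p)          # Alice computes (g^b)^a mod p
    bob_key   = pow(A, b, p)          # Bob computes   (g^a)^b mod p
    assert alice_key == bob_key
    print(alice_key)                  # shared secret, never sent over the wire
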
1977 - Make (automated build system using dependencies): In 1977 Stuart Feldman developed make at Bell Labs. Make allows developers to briefly state how components depend on other components; the make tool can then automatically re-build (or do other operations) in the right order, skipping what does not need to be done. In 2003 Dr. Feldman received the ACM Software System Award for creating this now-widespread tool. No patent identified.
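
A rough sketch of make's core algorithm (in Python, with invented file names; a real make would also run the build commands given in the makefile):

    # Sketch of make's core idea: rebuild a target only when it is missing or older
    # than one of its dependencies, after bringing the dependencies up to date.
    import os

    dependencies = {                       # target -> what it depends on
        "app":    ["main.o", "util.o"],
        "main.o": ["main.c"],
        "util.o": ["util.c"],
    }

    def mtime(path):
        return os.path.getmtime(path) if os.path.exists(path) else 0.0

    def build(target):
        deps = dependencies.get(target)
        if deps is None:
            return                         # a plain source file: nothing to do
        for dep in deps:
            build(dep)                     # dependencies first, in the right order
        if not os.path.exists(target) or mtime(target) < max(mtime(d) for d in deps):
            print("rebuilding", target)    # here a real make runs the recipe

    build("app")
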
1978 - RSA security algorithm: Rivest, Shamir, and Adleman published their seminal paper describing the RSA algorithm, a critical basis for security. The RSA algorithm permits authentication or encryption without having to previously exchange a secret shared key, greatly simplifying security. It’s amusing to note that this paper also introduced “Alice” and “Bob”, fictitious characters who are trying to securely communicate, and Alice and Bob have become a standard part of security notation ever since the RSA paper. According to the United Kingdom’s GCHQ, Clifford Cocks had invented the RSA algorithm in 1973, but it was classified. US patent #4405829.
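
A toy illustration of the algorithm (a Python sketch using the small primes of the usual textbook example; it assumes Python 3.8+ for the modular inverse, and real keys use primes hundreds of digits long):

    # Toy RSA with tiny primes (insecure; shown only to illustrate the idea).
    p, q = 61, 53
    n = p * q                      # 3233, the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

    message = 65
    ciphertext = pow(message, e, n)        # anyone can encrypt with (e, n)
    plaintext = pow(ciphertext, d, n)      # only the holder of d can decrypt
    print(ciphertext, plaintext)           # 2790 65
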
1978 - Spreadsheet: Dan Bricklin and Bob Frankston invented the spreadsheet application (as implemented in their product, VisiCalc). Bricklin and Frankston have made information on VisiCalc’s history available on the web. No patent identified.
1978 - Lamport Clocks: Leslie Lamport published “Time, Clocks, and the Ordering of Events in a Distributed System” (Communications of the ACM, vol 21, no 7, July 1978, pp. 558-565). This is an important approach for ordering events in a distributed system. No patent identified.
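
A minimal sketch of the rules (in Python; the two processes and events are invented for illustration): increment the clock on each local event, stamp outgoing messages, and on receipt take the maximum of the local clock and the message's timestamp, plus one.

    # Sketch of Lamport's logical clocks: each process increments its counter on
    # local events, stamps messages it sends, and on receipt takes the max.

    class Process:
        def __init__(self, name):
            self.name = name
            self.clock = 0

        def local_event(self):
            self.clock += 1
            return self.clock

        def send(self):
            self.clock += 1
            return self.clock                    # timestamp carried by the message

        def receive(self, msg_timestamp):
            self.clock = max(self.clock, msg_timestamp) + 1
            return self.clock

    p, q = Process("P"), Process("Q")
    p.local_event()                  # P: 1
    ts = p.send()                    # P: 2, message stamped 2
    q.local_event()                  # Q: 1
    print(q.receive(ts))             # Q: max(1, 2) + 1 = 3
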
1979 - Distributed Newsgroups (USENET): Tom Truscott and Jim Ellis (Duke University, Durham, NC), along with Steve Bellovin (University of North Carolina, Chapel Hill), set up a system for distributing electronic newsletters, originally between Duke and the University of North Carolina using dial-up lines and the UUCP (Unix-to-Unix copy) program. This was the beginning of the informal network USENET, supporting online forums on a variety of topics, and took off once Usenet was bridged with the ARPANET. ARPANET already had discussion groups (basically mailing lists). However, the owner of ARPANET discussion groups determined who received the information - in contrast, everyone could read USENET postings (a more democratic and scalable approach) [Naughton 2000, 177-179]. No patent identified.
1979 - Operating system level virtualization (containers): In operating-system-level virtualization (aka containers), the underlying operating system kernel (which typically provides filesystem mechanisms and networking primitives) is shared among a set of run-time units called containers. The containers have at least isolated filesystems, and in many cases are isolated in other ways (e.g., they may appear to have separate process lists and/or network addresses). Since there is a shared underlying kernel, these can be much more efficient than full virtualization (though this sharing also creates a larger attack surface that must be secured). The "chroot" system call was introduced to Unix v7 in 1979, which could change the "root" directory visible to a process and its children. This made it possible to create independent isolated views of a system's files. Later refinements were implemented by FreeBSD jails, Solaris Containers, OpenVZ (Open Virtuozzo), LXC, and Docker. See: A Brief History of Containers: From 1970s chroot to Docker 2016
1980 - Model View Controller (MVC): The “Model, View, Controller” (MVC) triad of classes for developing graphical user interfaces (GUIs) was first introduced as part of the Smalltalk-80 language at Xerox PARC. This work was overseen by Alan Kay, but it appears that many people were actually involved in developing the MVC concept, including Trygve Reenskaug, Adele Goldberg, Steve Althoff, Dan Ingalls, and possibly Larry Tesler. Krasner and Pope later documented the approach extensively and described it as a pattern so it could be more easily used elsewhere. This doesn’t mean that all GUIs have been developed using MVC; indeed, in 1997 and 1998, the Alan Kay team moved their Smalltalk graphic development efforts and research to another model based on display trees called Morphic, which they believe obsoletes MVC. However, this design pattern has since been widely used to implement flexible GUIs, and has influenced later thinking about how to develop GUIs. No patent identified.
1981 - Remote Procedure Call (RPC): In 1981 Bruce J. Nelson published Remote Procedure Call, his PhD thesis in Computer Science from Carnegie Mellon University. An RPC (Remote Procedure Call) allows one program to request a service from another program, potentially located in another computer, without having to understand network details. The requestor usually waits until the results are returned, and local calls can be optimized (e.g., by using the same address space). This calling is facilitated through an “interface definition language” (IDL) to define the interface. Most people today instead refer to the slightly later paper “Implementing Remote Procedure Calls”, by A.D. Birrell and B.J. Nelson, ACM Transactions on Computer Systems, Vol. 2, No. 1 1984, pp. 39-59. Sun’s RPC (later an RFC) was derived from this, and later on DCE, CORBA, component programming (COM, DCOM), and web application access (SOAP / WDDI, XML-RPC, and even REST approaches) all derive from this. In 1994 the prestigious Association for Computing Machinery (ACM) Software System Award went to Bruce J. Nelson and Birrell for the Remote Procedure Call. No patent identified.
1982 - Computer Virus: A computer virus is a program that can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself. While not a positive development, this was certainly an innovation. The program “Elk Cloner” is typically identified as the first “in the wild” computer virus. Elk Cloner was written in 1982 by junior high school student Richard Skrenta as a practical joke. It attached to the Apple DOS 3.3 operating system, and spread through floppy disks that were inserted afterwards. The history of computer viruses is more complicated, and some consider earlier programs named Creeper, Rabbit, or Animal as the first virus. In particular, in 1975 John Walker released “Animal” on Univac systems with a PERVADE subroutine that caused copies of Animal to reappear elsewhere. But since it was careful to only use this to update its own software, it’s not clear that it fits the definition above, and few (other than Walker) understood the dangers of the idea. Fred Cohen later wrote academic works studying computer viruses. No patent identified.
1984 - Distributed Naming (DNS): The “domain name system” (DNS) was invented, essentially the first massively distributed database, enabling the Internet to scale while allowing users to use human-readable names of computers. Every time you type in a host name such as “dwheeler.com”, you’re relying on DNS to translate that name to a numeric address. Some theoretical work had been done before on massive database distribution, but not as a practical implementation on this scale, and DNS innovated in several ways to make its implementation practical (e.g., by not demanding complete network-wide synchronicity, by distributing data maintenance as well as storage, and by distributing “reverse lookups” through a clever reflective scheme). No patent identified.
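
The everyday use can be illustrated in a line or two of Python (this simply asks the operating system's resolver, which in turn uses DNS; it requires network access, and the host name is just an example):

    # The everyday use of DNS: translate a host name into a numeric address.
    import socket

    print(socket.gethostbyname("dwheeler.com"))   # prints the site's numeric IP address
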
1986: Lockless version management (CVS). Dick Grune released to the public the Concurrent Versions System (CVS), the first lockless version management system for software development. In 1984-1985, Grune wanted to cooperate with two of his students when working on a C compiler. However, existing version management systems did not support cooperation well, because they all required that files be “locked” before they could be edited, and once locked only one person could edit the file. While standing at the university bus stop, waiting for the bus home in bad autumn weather, he created an approach for supporting distributed software development that did not require project-wide locking. After initial development, CVS was publicly posted by Dick Grune to the newsgroup mod.sources on 1986-07-03 in volume 6 issue 40 (and also to comp.sources.unix) as source code (in shell scripts). CVS has since been re-implemented, but its basic ideas have influenced all later version management systems. CVS was a major step forward for its time, but it was a centralized version control system (VCS). If you're only familiar with distributed VCSs (like git), the article Version Control Before Git with CVS (by Sinclair Target, Two-Bit History) may help you understand CVS better. The initial CVS release did not formally state a license (a common practice at the time), but in keeping with the common understanding of the time, Mr. Grune intended for it to be used, modified, and redistributed; he has specifically stated that he “certainly intended it to be a gift to the international community... for everybody to use at their discretion.” Thus, it appears that the initial implementation of CVS was intended to be open source software / free software (OSS/FS) or something closely akin to it. Certainly CVS has been important to OSS/FS since that time; while OSS/FS development can be performed without it, CVS’s ideas were a key enabler for many OSS/FS projects, and are widely used by proprietary projects as well. CVS’s ideas have been a key enabler in many projects for scaling software development to much larger and more geographically distributed development teams. No patent identified.
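The heart of the lockless approach is “copy, modify, merge”: everyone edits a private copy, and non-conflicting changes are combined afterwards. The toy sketch below is my own simplification of that idea (not CVS’s actual algorithm); it assumes edits only change lines in place, whereas real systems use full diff/merge machinery:

    # Toy "copy, modify, merge" sketch (my simplification, not CVS's algorithm).
    # Assumes edits change lines in place, with no insertions or deletions.
    def merge3(base, alice, bob):
        merged = []
        for b, a, c in zip(base, alice, bob):
            if a == c:       # both unchanged, or both made the same change
                merged.append(a)
            elif a == b:     # only Bob changed this line
                merged.append(c)
            elif c == b:     # only Alice changed this line
                merged.append(a)
            else:            # both changed the same line differently
                merged.append(f"<<< CONFLICT: {a!r} vs {c!r} >>>")
        return merged

    base  = ["one", "two", "three"]
    alice = ["one", "TWO", "three"]      # Alice edits line 2
    bob   = ["one", "two", "THREE"]      # Bob edits line 3 at the same time
    print(merge3(base, alice, bob))      # ['one', 'TWO', 'THREE'] - no lock needed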
1989: Distributed Hypertext via Simple Mechanisms (World Wide Web). The World Wide Web (WWW)’s Internet protocol (HTTP), language (HTML), and addressing scheme (URL/URIs) were created by Tim Berners-Lee. The idea of hypertext had existed before, and Nelson’s Xanadu had tried to implement a distributed scheme, but Berners-Lee developed a new approach for implementing a distributed hypertext system. He combined a simple client-server protocol, markup language, and addressing scheme in a way that was new, powerful, and easy to implement. It was platform-independent, as opposed to some previous hypertext systems. There were no patent restrictions to its implementation, another key to its wide and rapid adoption. Each of the pieces had existed in some form before, and there were some related/similar projects such as Andrew. However, the combination was obvious only in hindsight. Berners-Lee’s original proposal was dated March 1989, and he first implemented the approach in 1990. No patent identified.
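Part of why the combination spread so quickly is that the client-server protocol really is simple: an HTTP GET is just a short text request over a TCP connection. Here is a minimal HTTP/1.0-style fetch, sketched in Python (the host name is simply the example used earlier in this paper):

    # A minimal HTTP GET over a raw TCP socket, to show how simple the protocol is.
    import socket

    host = "dwheeler.com"
    with socket.create_connection((host, 80)) as sock:
        sock.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Print just the status line, e.g. "HTTP/1.1 200 OK" or a redirect.
    print(response.decode("latin-1").split("\r\n")[0])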
1991: Design Patterns. In 1991 Erich Gamma published his PhD thesis, the first serious examination of software design patterns as a subject of study, including a number of specific design patterns. In 1995 Gamma, Helm, Johnson, and Vlissides (the “Gang of Four”) published “Design Patterns,” which widely popularized the idea. The concept of “design patterns” is old in other fields; specific patterns had been in use for some time, and algorithms had already been collected for some time. Some notion of patterns is suggested in earlier works (see the references in both). However, these works crystallized software design patterns in a way that was immediately useful and had not been done before. This has spawned other kinds of thinking, such as trying to identify anti-patterns (“solutions” whose negative consequences exceed their benefits; see the Antipatterns website, including information on development antipatterns). No patent identified. (Erich Gamma is listed as the inventor on US patent #5544301, but that appears to be different.)
1991: Distributed Version Control System (DVCS). In 1991 Sun began developing TeamWare (“The Old Man and the C”, Evan Adams, Sun Microsystems). This appears to be the first distributed version control system (DVCS), aka distributed revision control system, distributed source code control system, or distributed (software) configuration management system. In such systems, instead of using a central repository, each user has a repository, and the system enables users to synchronize their work. Later influential DVCS systems include BitKeeper, Monotone, Mercurial (hg), Bazaar (bzr), and git. No directly relevant patent identified; Sun did get US patent #5313646 on TeamWare for a "Method and apparatus for translucent file system", but this was for the transparent overlay TeamWare provided, not for the distributed version control approach itself.
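The core idea - every user holds a complete repository, and synchronization is just an exchange of missing changes - can be sketched in a few lines. This is my own toy simplification, not TeamWare’s (or git’s) actual design:

    # Toy sketch of the DVCS idea: full repositories everywhere, sync by copying
    # over changesets that are not yet present locally (my simplification).
    class Repo:
        def __init__(self):
            self.changesets = {}                 # changeset id -> description

        def commit(self, cs_id, description):
            self.changesets[cs_id] = description

        def pull(self, other):
            for cs_id, description in other.changesets.items():
                self.changesets.setdefault(cs_id, description)

    alice, bob = Repo(), Repo()
    alice.commit("a1", "add parser")             # both work offline, independently
    bob.commit("b1", "fix lexer bug")
    alice.pull(bob)                              # later, synchronize in either direction
    bob.pull(alice)
    print(sorted(alice.changesets) == sorted(bob.changesets))   # True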
1992: Secure Mobile Code (Java and Safe-Tcl). A system supporting secure mobile code can automatically download potentially malicious code from a remote site and safely run it on a local computer. Sun built its new programming language, Oak (later called Java), in 1990-1992 and demonstrated it in September 1992 as part of the Green project’s demonstration of its *7 PDA. Oak combined an interpreter (preventing certain illegal actions at run-time) and a bytecode verifier (which examines the mobile code for certain properties before running the program, speeding later execution). Originally intended for the “set-top” market, Oak was modified to work with the World Wide Web and re-launched (with much fanfare) as Java in 1995. Nathaniel Borenstein and Marshall Rose implemented a prototype of Safe-Tcl in 1992; it was first used to implement “active email messages.” An expanded version of Safe-Tcl was incorporated into regular Tcl in April 1996 (Tcl 7.5). US patents #5748964 and #5668999 may be related, though they were filed in 1994.
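The “verify before you run” idea can be illustrated with a toy stack machine (my own simplification; Java’s real bytecode verifier checks far more, such as types and branch targets, and its sandbox restricts what verified code may do):

    # Toy "verify, then interpret" sketch for untrusted code (illustrative only).
    ALLOWED = {"PUSH", "ADD", "PRINT"}

    def verify(program):
        """Reject programs with unknown instructions or stack underflows."""
        depth = 0
        for op, *args in program:
            if op not in ALLOWED:
                raise ValueError(f"illegal instruction: {op}")
            if op == "PUSH":
                depth += 1
            elif op == "ADD":
                if depth < 2:
                    raise ValueError("stack underflow")
                depth -= 1
            elif op == "PRINT":
                if depth < 1:
                    raise ValueError("stack underflow")
                depth -= 1

    def run(program):
        verify(program)                  # refuse to run code that fails verification
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
            elif op == "PRINT":
                print(stack.pop())

    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])   # prints 5
    # run([("FORMAT_DISK",)])   # rejected by verify() before anything executes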
1993: Refactoring. Refactoring is the process of changing a software system in a way that does not alter its external behavior but improves its internal structure. It’s sometimes described as “improving the design after it’s written”, and could be viewed as design patterns in the small. Specific refactorings and the general notion of restructuring programs were known much, much earlier, of course. However, the idea of creating a list of specific source code refactoring processes, so they could be discussed and studied, was essentially a new idea. This date is based on William F. Opdyke’s PhD dissertation, the first lengthy discussion of it (including a set of standard refactorings) I’ve found. Martin Fowler later published his book “Refactoring”, which popularized this idea. No patent identified.
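A tiny example of one standard refactoring (“Extract Method”, in Fowler’s terminology) shows the defining property: the external behavior is unchanged, only the internal structure improves. The invoice example itself is mine:

    # Before: calculation and presentation are tangled together.
    def print_invoice(items):
        total = 0
        for name, price, qty in items:
            total += price * qty
        print(f"Total due: {total:.2f}")

    # After "Extract Method": the calculation gets its own well-named function.
    def invoice_total(items):
        return sum(price * qty for _, price, qty in items)

    def print_invoice_refactored(items):
        print(f"Total due: {invoice_total(items):.2f}")

    items = [("widget", 2.50, 4), ("gadget", 10.00, 1)]
    print_invoice(items)              # Total due: 20.00
    print_invoice_refactored(items)   # Total due: 20.00  (same external behavior)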
1994: Web-Crawling Search Engines. The World Wide Web Worm (WWWW) indexed 110,000 web pages by crawling along hypertext links and providing a central place to make search requests; this is one of the first (if not the first) web search engines. Text search engines far precede this, of course, so it can be easily argued that this is simply the reapplication of an old idea. However, text search engines before this time assumed that they had all the information locally available and would know when any content changed. In contrast, web crawlers have to locate new pages by crawling through links (selectively finding the “important” ones). No patent identified.
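The crawl-and-index loop itself is simple to sketch (this is my own minimal illustration in Python, not the WWWW’s actual code): fetch a page, record its words in an index, then follow its hyperlinks to find more pages:

    # Minimal crawl-and-index sketch (illustrative only; no politeness, robots.txt, etc.)
    from collections import defaultdict
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.words = [], []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

        def handle_data(self, data):
            self.words.extend(data.lower().split())

    def crawl(start_url, max_pages=5):
        index = defaultdict(set)              # word -> set of URLs containing it
        queue, seen = [start_url], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue                      # unreachable page; skip it
            parser = LinkParser()
            parser.feed(html)
            for word in parser.words:
                index[word].add(url)
            queue.extend(urljoin(url, link) for link in parser.links)
        return index

    # index = crawl("http://dwheeler.com/")
    # print(sorted(index.get("software", set())))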
1996: Content-Based Addressing (rsync). Content-based addressing (aka content-based storage) calculates cryptographic hashes of data (usually whole files) and uses that value as the “address” of the data. This can dramatically save network bandwidth - applications can exchange short hashes to determine what content the other side already has, instead of sending all the content itself. The application “rsync” by Andrew Tridgell used it in 1996, and later applications such as BitTorrent and git use it as well. Although bandwidth has dramatically increased over the years, the amount of data we want to send has grown as well - and this makes many large data transfers much more efficient. A more formal analysis is in “Content-Based Addressing and Routing: A General Model and its Application” by Antonio Carzaniga, David S. Rosenblum, and Alexander L. Wolf. No patent identified.
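In miniature, the idea looks like this (a general illustration of content-based addressing, not rsync’s actual rolling-checksum algorithm): the hash of the bytes is the address, so identical content gets one address, and a peer can ask “do you already have this?” by sending only the short hash:

    # Content-based addressing in miniature: the hash of the content is its address.
    import hashlib

    store = {}                                    # hash -> content

    def put(content: bytes) -> str:
        address = hashlib.sha256(content).hexdigest()
        store.setdefault(address, content)        # duplicates cost nothing extra
        return address

    def have(address: str) -> bool:
        return address in store                   # answerable from the hash alone

    addr = put(b"the quick brown fox")
    print(addr[:16], have(addr))
    print(put(b"the quick brown fox") == addr)    # True: same content, same address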
2004: Massively-parallel MapReduce. In 2004, Google’s Jeffrey Dean and Sanjay Ghemawat revealed massively-parallel MapReduce, a programming model that enables processing and generating large data sets with huge clusters of parallel machines, yet is remarkably easy to program. In this approach, developers “specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.” The developers also specify an input reader, a partition function, a compare function, and an output writer. These are then fed to the MapReduce framework, which executes those definitions on a potentially large distributed computer cluster and handles complications such as computer and network failure. They note that “many real world tasks are expressible in this model”. Programmers without any experience with parallel or distributed systems can, using this model, use large distributed systems to handle large data sets. The basic MapReduce approach has since been implemented in other tools such as Hadoop and Qt Concurrent; Eugene Ciurana has an article demonstrating how to use MapReduce approaches (using Mule). Google’s reworked search engine no longer uses MapReduce, but it is still widely applicable to other projects.

It could easily be argued that this is obvious. Sequential map and reduce operations have been around for many decades, the idea of parallelizing a common sequential algorithm is obvious, and the idea of automatically handling failed computations is also obvious. In particular, it could be argued that the real change here is that parallel computers have become so inexpensive that such algorithms are now more useful. But I must make a decision, and so I have decided to add it to the list. US patents #7650331 and #7756919.
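The programming model is easy to show in miniature. The classic word-count example below (a toy, single-machine stand-in for the distributed framework, written in Python) has the user supply only the map and reduce functions; the “framework” does the splitting and shuffling, and in the real system it would also distribute the work across a cluster and handle failures:

    # Word count in the MapReduce style (toy, single-machine stand-in).
    from collections import defaultdict
    from itertools import chain

    def mapper(document):                     # emit intermediate key/value pairs
        return [(word, 1) for word in document.split()]

    def reducer(word, counts):                # merge all values for one key
        return word, sum(counts)

    def map_reduce(inputs, mapper, reducer):
        groups = defaultdict(list)            # the "shuffle" phase
        for key, value in chain.from_iterable(map(mapper, inputs)):
            groups[key].append(value)
        return dict(reducer(k, v) for k, v in groups.items())

    docs = ["the cat sat", "the cat ran", "a dog ran"]
    print(map_reduce(docs, mapper, reducer))
    # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 2, 'a': 1, 'dog': 1}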

Software Patents

One source that was not helpful for this analysis was software patents. The reason? Software patents are actually harmful, not helpful, to software innovation, as a myriad of data confirms. Those unfamiliar with software patents may find that shocking.

There are several basic problems with software patents, compared to actual innovation:

  1. almost all truly important innovations in software were never covered by patents, so using patents as a primary source would omit almost all of the most important software innovations;
  2. as software patentability has increased, the number of key software innovations has decreased; and
  3. software patents are often granted to cover ideas that are obvious to practitioners of the art or have prior art (even though these aren’t supposed to be patented).

There are many reasons most of the most important software innovations were never patented. Historically, software was not patentable, and it’s still not patentable in a vast number of countries (including the EU). Many believe software should never be patentable, and many of them oppose software patents on ethical or moral grounds as well as on pragmatic grounds (and many of them will not apply for patents for these reasons). For more about the many who oppose software patents, you can see the ffii.org site and the League for Programming Freedom, including statements by software vendor Oracle and a list of software luminaries opposed to software patents (including Donald Knuth). Dan Bricklin (inventor of the spreadsheet) explains why introducing patents to the software industry, about 50 years after the industry began (and after it had already been flourishing without them), is a mistake and hardship. AutoCAD’s co-author and Autodesk founder John Walker wrote “Patent Nonsense”, where he states that “Ever since Autodesk had to pay $25,000 to ‘license’ a patent which claimed the invention of XOR-draw for screen cursors (the patent was filed years after everybody in computer graphics was already using that trick), I’ve been convinced that software patents are not only a terrible idea, but one of the principal threats to the software industry... the multimedia industry is shuddering at the prospect of paying royalties on every product they make, because a small company in California has obtained an absurdly broad patent on concepts that were widely discussed and implemented experimentally more than 20 years earlier.” Forbes’ article “Patently Absurd” also notes the problems of patents, as does eWeek. One survey of professional programmers found that by a margin of 79.6% to 8.2%, computer programmers said that granting patents on computer software impedes, rather than promotes, software development (the remaining 12.2% were undecided). By 59.2% to 26.5% (2:1), most went even further, saying that software patents should be abolished outright. Professors Bessen and Maskin, two economists at the Massachusetts Institute of Technology (MIT), have demonstrated in a report that introducing patenting into the software economy only has economic usefulness if a monopoly is the most useful form of software production. This is concerning, because few believe that a monopoly is truly the most useful (or desirable) form of software production.

Paul Vick, lead architect for Visual Basic .Net at Microsoft, was required by his employer to file for a patent on an obvious pre-existing idea (the IsNot operator), which the patent office nevertheless granted -- Paul’s posting on Software patents states, “I don’t believe software patents are a good idea... software patents generally do much more harm than good. As such, I’d like to see them go away and the US patent office focus on more productive tasks... One of the most unfortunate aspect of the software patent system is that there is a distinct advantage, should you have the money to do so, to try and patent everything under the sun in the hopes that something will stick.... [overwhelming the patent system.] Microsoft has been as much a victim of this as anyone else, and yet we’re right there in there with everyone else, playing the game. It’s become a Mexican standoff, and there’s no good way out at the moment short of a broad consensus to end the game at the legislative level. as far as the specific IsNot patent goes, I will say that at a personal level, I do not feel particularly proud of my involvement in the patent process in this case.”

As patentability has increased, there’s good evidence that the number of software innovations has decreased. Bessen and Maskin also demonstrated a statistical correlation between the spread of patentability in the United States and a decline in innovation in software. In particular, between 1987 and 1994, software patent issuance rose 195%, yet real company-funded R&D fell by 21% in these (software) industries while rising by 25% in industries in general. This paper gives additional evidence that software patents are inversely related to innovation; it’s hard not to notice that as patenting became more common (e.g., 1987 and later), the number of major innovations slowed down - and those that did appear are almost always not patented anyway. Although these only show correlation and not causality, other data suggest that there is a causal relation. Their more recent book, “Patent Failure: How Judges, Bureaucrats, and Lawyers Put Innovators at Risk” by James Bessen and Michael J. Meurer (Princeton University Press, March 2008), provides more information about the failures of software patents. Chapter 9 notes, “In Chapter 7, we noticed that patents on software and especially patents on business methods (which are largely software patents) stood out as being particularly problematic. These patents had high rates of litigation and high rates of claim construction review on appeal. This chapter [argues] that there is, in fact, something crucially different about software: software is an abstract technology. This is a problem because at least since the 18th century, patent law has had difficulty dealing with patents that claimed abstract ideas or principles... Such patents often have unclear boundaries and give rise to opportunistic litigation... Software also seems to be an area with large numbers of relatively obvious patents. For these reasons, it is not surprising that a substantial share of current patent litigation involves software patents... no other technology has experienced anything like the broad industry opposition to software patents that arose beginning during the 1960s. Major computer companies opposed patents on software in their input to a report by a presidential commission in 1966 and in amici briefs to the Supreme Court in Gottschalk v. Benson in 1972. Major software firms opposed software patents through the mid-1990s (for example in USPTO hearings in 1994). Perhaps more surprising, software inventors themselves have mostly been opposed to patents on software. Surveys of software developers in 1992 and 1996 reported that most were opposed to patents... Software patents... play a central role in the failure of the patent system as a whole. Any serious effort at patent reform must address these problems and failure to deal with the problems of software patents... will likely doom any reform effort.” Thus, not only do software patents fail to help encourage innovation - they actually inhibit innovation.

The book “Against Intellectual Monopoly” by Michele Boldrin and David K. Levine presents a great deal of evidence that software patents are harmful to the software industry and its users. Actually, it goes much further, presenting evidence against patents and copyrights in general, but it’s the evidence against software patents that I find especially compelling. Patents are an economic absurdity argues against patents in general, but has some additional words on the specific problems of software patents.

L. Gordon Crovitz’s “Patent Gridlock Suppresses Innovation” (Wall Street Journal, July 14, 2008, Page A15) states that “for most industries [including software], today’s patent system causes more harm than good... Our patent system for most innovations has become patently absurd. It’s a disincentive at a time when we expect software and other technology companies to be the growth engine of the economy. Imagine how much more productive our information-driven economy would be if the patent system lived up to the intention of the Founders, by encouraging progress instead of suppressing it.”

Bruce Perens explains why patents cause serious problems in creating and implementing standards. Since patents retard the creation and use of standards, they also retard the industry as a whole (since relevant, widely-implemented standards are a key need in the software industry).

The patented European webshop is an excellent illustration of the problem - it shows a few of the many obvious, widely-used ideas covered by granted European patents. In short, it demonstrates why patents are a poor match for software.

There are also many reasons why software patents are often granted that cover obvious ideas and prior art (which can give the illusion of innovation without actually having any). As noted by an FTC analysis of patents, in the U.S. about 1,000 patent applications now arrive each day, so patent examiners have from eight to 25 hours to read and understand each application, search for prior art, evaluate patentability, communicate with the applicant, work out necessary revisions, and reach and write up conclusions. (The article also notes -- somehow without irony -- that most granted patents are in fact obvious to practitioners, even though that is illegal.) Many other studies have noted that patent examiners have a poor database of prior art in software, so it’s hard for them to find prior art. But the biggest problem is that there are no incentives for anyone in the patent process to reject bogus patents. The patent applicant has every incentive to ignore prior art, the patent examiner has little time or resources to do this search, and a patent examiner who doesn’t commit enough resources to the search is rewarded (in contrast, a patent examiner who spends too much time on each patent will be punished). And it’s difficult for a patent examiner to declare something is “obvious”; after all, the people who are paying money say that their patent request isn’t obvious, and there’s little downside for an examiner to agree with the petitioner. Also, other areas of the software industry generally pay more than a patent examiner’s salary, decreasing the likelihood that a software patent examiner has the best software experience. The entire software patent examination process favors granting software patents for obvious ideas and prior art. The patent “review” process has become so much of a rubber stamp that Steven Olson managed to obtain a patent on swinging sideways on a swing, an absurd patent that was granted by the U.S. patent process.

It’s really difficult to figure out if something is really innovative. Several of the “key innovations” listed above are actually quite debatable. For example, is massively-parallel MapReduce obvious? It’s easy to argue that it is, and at the least, I am certain that someone else would have created it within a year or two, and I am very doubtful that the patent system incentivized its creation at all. Giving a monopoly on an idea that would have been created anyway, for no societal gain, is bad policy.

Frankly, I think permitting software patents in the U.S. was a tremendous mistake, and a misuse of the original patent laws. Very few of the innovations listed here were patented, and of the few that were (e.g., the mouse and RSA), there’s little evidence that granting the patents encouraged innovation. The mouse patent never made much money for its inventor, and although the developers of RSA did make money, there’s no evidence that they would not have developed RSA without the offer of a patent. All evidence seems to show that these ideas would have occurred without the patents! In short, patents impeded deployment and increased customer costs without encouraging innovation. The patent laws were originally written to specifically prevent patenting mathematical algorithms, and lower courts have basically rewritten the laws to re-permit patenting of mathematical algorithms (which is fundamentally what any software patent is). Permitting software patents has done almost nothing to encourage innovation or reward innovators, and the harm that it’s done far, far exceeds any claimed good. Most key software technology innovations were never patented, so tracking patents is certain to miss most of the most important innovations. Conversely, since patent examiners have a poor database of prior art in software and there are no incentives for anyone in the patent process to seriously search for prior art, software patents are routinely granted for obvious inventions and for inventions with prior art. Basically, the number of patents granted for software primarily shows how much money an organization is willing to spend to submit patent applications - it has nothing to do with innovation. The W3C has noted that its policy of ensuring that all W3C standards were royalty free has been key to universal web access; anything else would cause dangerously harmful balkanization. Vint Cerf stated that part of the reason the Internet protocols took hold so quickly and widely was that he and Dr. Kahn made no intellectual property (patent) claims to their invention. “It was an open standard that we would allow anyone to have access to without any constraints.”

“The Software Patent Experiment” by James Bessen (Research on Innovation and Boston University) and Robert M. Hunt (Federal Reserve Bank of Philadelphia) is a sobering, less-technical summary of important research they did on software patents. They found that in the 1990s, the firms that were increasingly patenting software were the ones that were decreasing their research and development -- that is, patents are replacing research and development, not encouraging it. They found strong statistical evidence that patents in the software field do not provide an incentive for research and development -- the vast majority of software patents are obtained by firms outside the software industry which have little investment in the software developers required to develop software inventions. They don’t say it directly, but their research results seem to clearly show that software patents have become legalized extortion, instead of a means to encourage innovation.

The software industry’s solution has been to cross-license patents between companies, creating a sort of software patent detente. More recently, this has included cross-licensing patents with the open source community (through mechanisms like the Open Invention Network). Of course, such mechanisms tend to inhibit newcomers, so software patents’ primary impact is to prevent new ideas from becoming available to end-users, subverting the official justification for them. The only group that seems to be unambiguously aided by software patents is patent lawyers - and since they make the rules, they are happy to have them.

Of course, this fails when someone decides to sue. So-called "patent trolls" do not make products, only lawsuits, and thus have no reason to acknowledge detente. In the mobile space, Apple has decided to try to prevent the sale of all competing products by filing patent lawsuits. The result has been more lawsuits, and less innovation.

Any statistic based on software patents is irrelevant when examining software innovation -- because today’s software patents have nothing to do with innovation. End Software Patents is an organization that is trying to eliminate the nonsense of software patents; I hope they succeed, since software patents harm innovation instead of helping it.

What’s Not an Important Software Innovation?

As I noted earlier, many important events in computing aren’t software innovations, such as the announcements of new hardware platforms. Indeed, sometimes the importance isn’t in the technology at all; when IBM announced their first IBM PC, neither the hardware nor software was innovative - the announcement was important primarily because IBM’s imprimatur made many people feel confident that it was “safe” to buy a personal computer.

An obvious example is that smartphones are not a software innovation. In the mid-2000s, smartphones rapidly became more common. By "smartphone" I mean a phone that can stay connected to the Internet, access the Internet with a web browser capable of running programs (e.g., in JavaScript), and install local applications. There's no doubt that widespread smartphone availability has had a profound impact on society. But while smartphones have had an important social impact, smartphones do not represent any significant software innovation. Smartphones typically run operating systems and middleware that are merely minor variants of software that was already running on other systems, and their software is developed in traditional ways.

Note that there are few software innovations identified in recent times. I believe that part of the reason is that over the last several years some key software markets have been controlled by monopolies. Monopolies typically inhibit innovation; a monopoly has a strong financial incentive to keep things more or less the way they are. Also, it’s difficult to identify the “most important” innovations within the last few years. Usually what is most important is not clear until years after its development. Software technology, like many other areas, is subject to fads. Most “exciting new technologies” are simply fashions that will turn out to be impractical (or only useful in a narrow niche), or are simply rehashes of old ideas with new names. It is even possible that the emergence of software patents has impeded, instead of promoted, innovation in software, since many innovations occurred when software patents were not permitted.

Standards are extremely important in computing (just as they are in many other fields). In earlier versions of this document I noted that standards long preceded computing, and did not note them as an innovation. However, I’ve since added an entry for vendor-independent standards. The notion of having computing standards was not something that immediately came to mind in the computing industry, so the notion of having computer-related standards is now included above as an innovation. There are many important events in computing history involving standards, but very few standards are listed above as innovations... and for good reason. Standards themselves generally do not try to create significant new innovations, and rarely work well when they do. Instead, standards usually attempt to create agreements based on well-understood technology, where the innovations have already been demonstrated as being useful. Any significant innovation embodied in a standard was usually developed and tested many years before the standard’s development.

Other Technologies that are not innovative

Here are a few technologies that, while important, aren’t really innovative:

  1. XML. XML is simply a simplified version of SGML, which has been around for decades.
  2. SOAP. SOAP is yet another remote procedure call system, employing XML (and often HTTP).

It’s okay to not be innovative

There’s nothing wrong with a technology or product not being innovative. Indeed, a technology or product should primarily be measured by whether or not it solves real-world problems (without causing more problems than it solves). Linus Torvalds, creator of the Linux kernel, has stated that a pet peeve of his is that “there is a great deal of talk about ‘innovation’ and ‘vision.’ People want to hear about the one big idea that changes the world, but that’s not how the world works. It’s not about visionary ideas; it’s about lots of good ideas which do not seem world-changing at the time, but which turn out to be great after lots of sweat and work have been applied.” Instead, the Linux kernel (which has been wildly successful) is the result of lots of small ideas contributed by lots of people over a long time.

The focus of this paper is innovation, not utility. Do not confuse innovation with utility.

Conclusions

Clearly, humankind has been impacted by major new innovations in software technology. But the number of major new innovations is smaller than you might expect, especially given the many who declare that software technology “changes rapidly.” If you only consider major new innovations in software technology, instead of various updates to software products, fundamental software technology is not changing as rapidly as claimed by some.

I believe that this list is evidence that people are far more affected by other issues in computing than by major new software innovations. In particular, I believe that there are at least three reasons for the illusion of rapid changes in major software technology:

  1. People have been able to apply computing technology to more and more areas due to rapidly decreasing costs. Computer hardware performance has improved exponentially, its size has dropped significantly, and its cost has decreased exponentially, making it possible to apply computing technology in more and more situations. The increasing hardware performance has also allowed developers to use techniques that decrease development time by increasing computing time; this trade reduces the development cost and time for software, again making it less costly to apply computing technology (by reducing the cost of software development). The idea of automating actions is not, by itself, innovative, but automation can certainly change an environment.
  2. Increasing use begets increasing use. And when a technology is widespread or ubiquitous, it often enables many widespread uses and social changes. In those cases, people are feeling multiple rapid social changes, caused by the widespread availability of a technology, rather than multiple rapid changes and innovations in the technology itself. When the web browser was first introduced, comparatively few people used it because there was relatively little information or services of interest to them available through it. But once some content was available, providers of other information and services saw potential users and customers there, enticing them to use the WWW as well and causing an exponential increase in use. A service can become particularly influential if it becomes a standard (either because it’s formally specified as a “de jure” standard, or simply through widespread use as a “de facto” standard). The idea of creating standards is not new, but once something becomes a standard, the idea’s very ubiquity can mean that it will be widely used in places that it wouldn’t be used before.
  3. Software functionality can be changed over time, adding new functionality and generalizing a particular program’s capabilities. However, when functionality is changed over time, this often requires that the human interfaces change as well. As a result, people constantly have to learn how to handle changes in a given program’s interface. This gives some people the illusion of constant innovative change in software technology, when what is actually changing is a particular implementation.

Intriguingly, one of the richest and most powerful software companies, Microsoft, did not create any major software innovation as identified in this list. Microsoft did not even create the first useful or widely-used implementation of any major software innovation. Others have come to the same conclusions; for example, see the Microsoft “Hall of Innovation”. This could be considered odd, since in the early 2000s Microsoft was claiming to be an innovative company. At the least this shows that innovations, on their own, are often not the most important part of creating software. For more (dated) information about this, see Microsoft, the Innovator?.

In contrast, several major innovations were first implemented as open source software / Free Software (OSS/FS) projects, especially for those involving networks. Examples of innovations initially released as OSS/FS or first widely distributed as OSS/FS include DNS, web servers, the TCP/IP implementations on BSD systems to create internetworks using datagrams, the first spell checker, and the initial implementation of lockless version management. Tim Berners-Lee, inventor of the World Wide Web, stated in December 2001 that “A very significant factor [in widening the Web’s use beyond scientific research] was that the software was all (what we now call) open source. It spread fast, and could be improved fast - and it could be installed within government and large industry without having to go through a procurement process.” This may be because the ideas of open source software are quite similar to research approaches in general, e.g., in both systems publications are available to all and can be used as the basis of further work (as long as credit is given). The paper Altruistic individuals, selfish firms? The structure of motivation in Open Source Software found in a 2002 survey of 146 Italian firms that their primary reason for supplying OSS/FS programs was that “Open Source software allows small enterprises to afford innovation”. For more information, see the information on OSS/FS innovation from my paper, “Why OSS/FS? Look at the Numbers!”

It’s important to not overstate the value of innovation. As noted in Shapin’s article What Else Is New?, just doing something radically different does not make it important. Indeed, we are surrounded by “old” technology that still serves us well. A useful innovation has to actually be useful, not just be a new idea. Even more importantly, it has to be implemented well in a form that can be easily used. That said, sometimes new ideas truly are useful, and when they are, they can improve our world.

In addition, it’s important to note that innovation is primarily a matter of incremental improvement and hard work. While I think it’s useful to note dates (where possible) for new innovations, it can give the illusion that innovation is primarily a matter of “Eureka!” moments that change everything. As noted in Eureka! It Really Takes Years of Hard Work (by Janet Rae-Dupree, New York Times, February 3, 2008), innovation is (primarily) “a slow process of accretion, building small insight upon interesting fact upon tried-and-true process.” Jonathan Sachs (co-developer of Lotus 1-2-3) similarly stated, “The rate of innovation is rather slow. There are only a few really new ideas every decade.” See Scott Berkun’s 2007 book “The Myths of Innovation” for more.

Similarly, just writing a paper usually doesn't capture what's important about an idea. The Business of Extracting Knowledge from Academic Publications (2021) explains that "Close to nothing of what makes science actually work is published as text on the web... Systems, teams and researchers matter much more than ideas... the complexity threshold kept rising and now we need to grow companies around inventions to actually make them happen... incumbents increasingly acqui-hire instead of just buying the IP and most successful companies that spin out of labs have someone who did the research as a cofounder. Technological utopians and ideologists... underrate how important context and tacit knowledge is."

Software technology changes, but that is not software’s primary impact on us. Software primarily impacts us because of its ubiquity and changeability, as the computers that software controls become ubiquitous and the software is adapted to changing needs.

Appendix: Software Innovations Being Considered

No list of “software innovations” can be complete. At the least, there is always the hope that there will be new innovations, ones that we have not even heard of yet. But innovations must be given time to see if they are truly important, or simply a fad that will quickly fade away. Another problem is that it is sometimes difficult to track backwards to find out when an idea was created, or by whom. There is also the challenge of determining if an idea was really innovative, and what its impact was; in some cases I’m not sure if it should be in the list or not.

Some ideas have been identified that may be added to future versions of this document. Here are ones that aren’t currently on the list, but I may add someday:

  1. Algorithms - Euclid (GCD), and before.
  2. Backus-Naur Form (BNF), a format for defining language syntax.
  3. Abstract Data Types (ADTs) - this preceded object-orientation, but a “first” use seems to be very hard to find.
  4. Big O / Complexity theory
  5. Hashes
  6. Transactions (esp. database transactions) - it can be a little difficult to determine the origins of the idea of transactions. David Lomet (who did key work in this area) has helpfully pointed out to me three papers, along with useful recollections of his:
  7. Recursion (though this was a mathematical concept predating computers, and might not really be a “software” innovation at all)
  8. Database management systems (it is very difficult to trace back to the “first” ones)
  9. Operating systems (again, it’s very difficult to trace back to “first” ones)
  10. Aspect-oriented programming
  11. The “Page Rank” algorithm used by Google
  12. Directories (X.500 and before) - these are read-mostly databases, implemented later by LDAP (and still later by Active Directory)
  13. Bazaar-style programming/development (e.g., Linux; noted by Raymond)
  14. Open-source software / Free Software (see above)
  15. Fourth-generation programming languages (4GLs)
  16. Wiki+P2P (they change Knowledge distribution and coordination!).
  17. Data compression. It’s difficult for me to determine if this was “obvious” or not; I suspect it was obvious, which is why it’s not currently in the list above. My thanks to Aaron Brick for this interesting idea, as well as his pointing out the Shannon-Fano (1949ish) lossless encoding approach, as well as the lossy compression landmarks in the 1928 analog vocoder and the 1981 block truncation coding.
  18. Sketchpad
  19. Flowcharts. Flowcharts are actually older than the computer, were used for process improvement, and are related to Scientific management. A key person in this field was Frank Bunker Gilbreth, Sr. (described in the delightful book Cheaper by the Dozen). However, flowcharts are essentially unused today (unless you count XKCD cartoons), and I think the other material already cited is more foundational, which is why they’re not currently on the list.
  20. Chomsky’s context-free grammars. I did include BNF (above). Technically these aren’t about computing, but are more general concepts that can be applied to computing as well.
  21. Leibniz’s work on Symbolic thought and Computation. I do include Boole and Babbage, and Leibniz is a towering figure in math, and his idea that human disagreements could be resolved someday by exclaiming “Calculemus” (“let us calculate”) has been inspirational. His work on binary numbers, and on developing machines to do arithmetic, make him a plausible addition.
  22. REST. In some sense, this is a pattern that pre-existed for years, and has more recently been identified as a pattern for use. In some ways, this is already covered by the Design Patterns item.
  23. Bulletin board systems (BBSs). I do include Usenet above, because Usenet created an interesting new approach for distributing data across a larger network. Bulletin board systems started off by more-or-less implementing an electronic equivalent of paper-based bulletin boards, which is why I haven’t included BBSs.
  24. Software Plugin (suggested by Ciaran Carthy)

The Association for Computing Machinery (ACM) Software Systems Award is another good place to look for a list of software innovations. However, while it is a fantastic list of important software systems and the worthy people who created them, many of these systems are not themselves fundamentally innovative. For example, James Gosling rightly received an award for Java in 2002; Java is an important programming language, but it is basically a well-engineered design that combined already-existing concepts, and was not fundamentally innovative in the way intended by this list. This is not to slight the important work by Gosling and others; it’s just that good engineering results are not the focus of this list.


My thanks to Robert Steinke for noting CSP and Lamport Clocks as key software innovations.

You can get a copy of this document at http://dwheeler.com/innovation. Feel free to examine my home page at http://dwheeler.com.