Encryption: A brief history

 

Encryption is not new, nor are the disputes around how it should be used and regulated. Public policy discussions around the use of encryption now commonly focus on the challenges it poses to identifying and preventing online sexual exploitation and abuse of children, but this is only the latest development in a long-running debate. Understanding the history of encryption is essential to understanding the tensions and disagreements that exist today.


The history

State desires to control cryptography, the techniques for secure communication in the presence of unintended recipients, have a long history. Among the best-known examples are the cipher systems used during World War II, when each side sought both to protect its own state secrets and to break the systems of the other, turning knowledge of the adversary’s plans into a weapon of information warfare.

After World War II, the United States (US), United Kingdom (UK), Australia, Canada, and New Zealand formed an alliance (Five Eyes) based on a series of bilateral agreements on surveillance and intelligence-sharing. These agreements enabled the states to share, by default, intelligence gathered and decrypted by each of their intelligence agencies. While the agreements underlying Five Eyes are not in the public domain, the concern among critics is that the involvement of foreign intelligence agencies in intelligence sharing allows domestic agencies to gain information they could not access themselves without violating domestic legal restrictions on state surveillance.1

As technological development picked up pace, the so-called crypto-wars began in the 1970s, when the US government moved to classify encryption as a munition: a technology recognised and regulated as a weapon of war. States’ securitisation of, and desire to control, online space has thus been present from the beginning, and it is an important part of understanding why proposals that undermine or seek to ban the use of encryption attract such widespread criticism today.

In the early days of the expanding commercial Internet, encryption was a technology that companies in the US could choose to use in the products they built and exported. But the US government passed legislation to limit the use of cryptography in two ways: through export controls, which prevented the export of physical products and software incorporating strong encryption by design to markets outside the US, and through domestic requirements designed to enable state access to the digital content of communications.

As debate grew in the US over how to charge for telephony2 in the early days of the Internet, a wider range of politicians and governments became involved, drawn in by the economic implications and by concerns about domestic sovereignty.

Writing in 1997, author Wendy Grossman foresaw that the “Silicon Valley” hype driving the claim that the new medium of communication was going to “remake the world, undermine the status quo and kill off national governments and multinational corporations” would inevitably lead to the imposition of state governance and controls. Even then, nearly thirty years ago, when the majority of people were not yet online, those controls were being talked about as governance that would shape the Internet to fit politicians’ idea of “something that’s safe.”3

This definition of “safety” online was contentious even then. Safe for whom and from what?

In the 1980s, as the Crypto Museum website explains,4 computers were moving out of exclusively military environments and into commercial companies. Wireless and wired links increasingly had to carry not just the data of a single computer but complete data bundles from multiple devices simultaneously, often including speech and facsimile (fax) traffic. The devices that encrypted these aggregated links are commonly known as bulk encryptors. The equipment was bulky and required manual actions such as turning encryption on and off, or using an electronic device to distribute cryptographic variables such as crypto keys.

During the 1990s, the World Wide Web led to a huge increase in the amount of information available to “non-technical” people via the Internet. This decade also saw the rise of e-commerce and the advent of “easy” encryption at scale for the technically minded masses, through Pretty Good Privacy (or PGP, as it is known): a tool that enables users to communicate securely by encrypting and decrypting messages, authenticating messages through digital signatures, and encrypting files.
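PGP’s core technique is hybrid encryption combined with digital signatures. The following is a minimal sketch of that pattern, written with the modern Python cryptography library rather than PGP itself; the key names and message are purely illustrative, and real PGP adds key management, packet formats, and a web of trust on top.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Each party holds a long-term asymmetric key pair.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"Meet at noon."

# 1. Encrypt the message with a fresh symmetric session key (fast, any size).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message)

# 2. Encrypt the session key with the recipient's public key (slow, small data).
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# 3. Sign the message with the sender's private key so it can be authenticated.
signature = sender_key.sign(message, pss, hashes.SHA256())

# The recipient reverses the steps: unwrap the session key, decrypt, verify.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
sender_key.public_key().verify(signature, plaintext, pss, hashes.SHA256())
assert plaintext == message  # verify() raises if the signature is invalid
```

The design choice is the one PGP made famous: slow public-key operations are used only on a small session key, while the bulk of the data is protected by fast symmetric encryption.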

With the growing use of the personal computer, the US government then tried to create a physical route that would guarantee it access to a key to encrypted communications: the so-called Clipper Chip, which allowed “back door” access into transmissions from any device built using the chip. A government agency could establish its authority to intercept a particular communication, after which the key held in escrow by a third party would be handed to that agency so that all transmitted data could be decrypted. Law enforcement and agencies seeking access to the contents of a message could approach the third party without notifying the key’s owner.
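A toy model can make the escrow idea concrete. The sketch below is a deliberate simplification, not the real Clipper design: the actual chip used the classified Skipjack cipher and a “Law Enforcement Access Field” (LEAF), whereas here Fernet stands in for the ciphers, plain dictionaries stand in for the two escrow agents, and all names are invented for illustration.

```python
import secrets
from cryptography.fernet import Fernet

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# At manufacture, each device's unit key is split between two escrow agents,
# so that neither agent alone can reconstruct it.
device_id = "chip-0001"
unit_key = Fernet.generate_key()
share_1 = secrets.token_bytes(len(unit_key))
share_2 = xor(unit_key, share_1)
escrow_agent_1 = {device_id: share_1}
escrow_agent_2 = {device_id: share_2}

# Each message uses a fresh session key. The device transmits the ciphertext
# plus an access field: the session key wrapped under the escrowed unit key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"hello")
access_field = (device_id, Fernet(unit_key).encrypt(session_key))

# An agency that establishes its authority collects both shares, rebuilds the
# unit key, unwraps the session key, and decrypts the intercepted traffic.
dev, wrapped_session_key = access_field
recovered_unit_key = xor(escrow_agent_1[dev], escrow_agent_2[dev])
recovered_session_key = Fernet(recovered_unit_key).decrypt(wrapped_session_key)
print(Fernet(recovered_session_key).decrypt(ciphertext))  # b'hello'
```

The structural weakness is visible even in this sketch: whoever can assemble the shares, lawfully or otherwise, holds the keys to every device.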

But in response to the threat of the US government passing legislation to ban encryption, a number of very strong public encryption packages were released, including Nautilus, PGP, and PGPfone, on the theory that if strong cryptography was widely available to the public, the government would be unable to stop its use. The approach appeared to be effective, and the life of the Clipper Chip was in any case short: a flaw in its design was exposed in 1994, compromising the scheme.

These government policy proposals created repeated contention. Wendy Grossman said in an interview for this report, “Is it any wonder that the Net feels under siege? Is it surprising that feeling threatened further bonds the community together, and that some elements unite in a determination to see that attempts at regulation fail? Regulating cyberspace is a lot like shooting the messenger.”5

By 1999, there was consensus among technologists, as well as politicians who championed free market libertarian principles, that the imposition of export controls meant the US had exported devices that were not as secure as they should have been. Matt Blaze, who exposed the failings of the Clipper Chip6 among his extensive work in cryptography, describes this period as one in which “‘crypto’ [was] misguidedly derided as some kind of criminal tool during the very time when we needed to be integrating strong security into the Internet’s infrastructure,” and that it set back Internet security “by at least a decade, and we’re still paying the price in the form of regular data breaches, many of which could have been prevented had better security been built in across the stack in the first place.”7

After the attempt to create these keys giving “back door” access into exported technology had failed, the security services tried another method: getting into the commercial companies that create secure SIM cards for mobile devices. Documents leaked by whistleblower Edward Snowden, a former contractor for the US National Security Agency (NSA), and published in 2015 allegedly show that the NSA and its British counterpart GCHQ hacked the French-Dutch smart-card company Gemalto to acquire the cryptographic keys of millions of mobile phone SIM cards.8 It is unknown how many keys were stolen or how effective their use would have been, but it was claimed that they allowed access to SIM card users in predominantly 2G environments such as Pakistan.

However, encryption experts, digital rights advocates, and tech companies all agree that there is no safe backdoor to encryption.


“Any backdoor would create more security risks, including for individual users, than it would solve. Any friction in the message transmission chain, or security vulnerabilities in the encryption protocol, risks being exploited by adversarial (state and non-state) actors.”9

If a backdoor is created to give law enforcement “exceptional access”, it is a backdoor through which any third party can access the contents of communications. Exceptional access might sound harmless, but the results can be disastrous, according to the Internet Society.10

The Electronic Frontier Foundation has pointed out that the US government has not been shy about seeking access to encrypted communications, pressuring companies to make it easier to obtain data with warrants and to voluntarily turn over data. However, the US would face serious constitutional issues if it wanted to pass a law that required warrantless screening and reporting of content.11

Only a decade ago, Lavabit, an open-source encrypted webmail service founded in 2004, suspended its operations on 8 August 2013 after the US federal government ordered it to turn over its Secure Sockets Layer (SSL) private keys in order to allow the government to spy on Edward Snowden’s email.
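Why a server’s SSL private key mattered so much is easiest to see in the RSA-key-exchange style of TLS that was common at the time. The sketch below is a toy illustration of that pattern, not Lavabit’s actual stack; the names and message are invented.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The server's long-term key pair; the public half sits in its TLS certificate.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Classic RSA key exchange: the client picks a session secret, sends it
# encrypted under the server's public key, then encrypts traffic with it.
session_secret = Fernet.generate_key()
key_exchange = server_key.public_key().encrypt(session_secret, oaep)
recorded_traffic = Fernet(session_secret).encrypt(b"private email contents")

# An eavesdropper who recorded the exchange and later obtains server_key can
# recover the session secret and read every such session, past and future.
stolen_secret = server_key.decrypt(key_exchange, oaep)
print(Fernet(stolen_secret).decrypt(recorded_traffic))
```

Forward-secret key exchanges (ephemeral Diffie-Hellman) later became the norm in TLS partly to blunt exactly this risk of retrospective decryption.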

More recent proposals from agencies have been discussed somewhat more transparently, and have shifted to new points in the communications process at which to eavesdrop or to report recognised content. But the principle remains the same: they enable eavesdropping, or report the user to a third party.

The proposal that followed from GCHQ was to permit law enforcement and intelligence agencies access to private messaging systems by adding a silent participant, a “ghost” user from law enforcement or the security services, to online chats and calls, including those conducted via encrypted messaging tools like WhatsApp, iMessage, or Signal. The “ghost proposal”12 was widely condemned in 2019, including by the Internet Society,13 as the latest attempt by a government to circumvent and/or “backdoor” encrypted communications, reminiscent of the aims of the Clipper Chip. A coalition of more than fifty civil society organisations, technology companies, and cybersecurity experts, including Apple, Microsoft, Human Rights Watch, and Privacy International, wrote in objection that the proposals would “open the door to surveillance abuses that are not possible today.”14 Not only would it create an immediate risk, it would require companies to keep that weakness open rather than “patch” it, leaving it exploitable by others.15 By inserting tools for surveillance into products, states would effectively limit security innovation, just as happened as a result of the US government export controls of the 1990s.
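The reason a ghost user defeats end-to-end encryption without “breaking” the cryptography is easiest to see in a fan-out model, where each message is encrypted once per recipient key. The sketch below is a toy model under that assumption, not any real messenger’s protocol; Fernet keys stand in for the participants’ device keys and all names are invented.

```python
from cryptography.fernet import Fernet

# Each chat member holds a key (a stand-in for their device's public key).
member_keys = {name: Fernet.generate_key() for name in ("alice", "bob")}
ghost_key = Fernet.generate_key()  # held by the eavesdropping agency

def fan_out(message: bytes, members: dict) -> dict:
    """Encrypt one copy of the message per recipient key."""
    recipients = dict(members)
    recipients["ghost"] = ghost_key  # silently injected; the UI never shows it
    return {name: Fernet(key).encrypt(message)
            for name, key in recipients.items()}

envelopes = fan_out(b"see you at 6", member_keys)

# Bob reads his copy as normal...
print(Fernet(member_keys["bob"]).decrypt(envelopes["bob"]))
# ...and so does the ghost, with neither participant being notified.
print(Fernet(ghost_key).decrypt(envelopes["ghost"]))
```

This is why critics argued the change could not be confined: once clients must silently accept added recipients, the same mechanism is available to anyone able to compel or compromise the service.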


Where that leaves us today

Understanding the origins of the crypto-wars and the long-running debates over state surveillance, with their consequences for individuals and for populations at scale, goes some way towards explaining today’s tensions, and why the technology solutions in use and the proposed policy approaches have reached something of a stalemate.

The faultline between those who argue that communications should be encrypted to protect content from prying, and those who argue they should not be, so that state agencies can access the content of any exchange of information, rests on one key issue: the harm that governments and their state bodies have perpetrated against populations at scale, and the resulting lack of trust in government agencies and law enforcement that has developed over time.

It is also important to recognise that the push by states to restrict encryption has been, and continues to be, pursued for a number of different purposes globally, and that these have changed over time. The current focus of emerging EU regulation is child sexual abuse material, while in the US counter-terrorism has been a driving force for reform since 9/11. In Brazil, the government has claimed16 that access is essential to fight crime, bribery, and corruption,17 while in India, mob violence and its connection with misinformation are the current policy drivers.

Privacy in the digital environment is no doubt one of the most important factors in how we enable and control individuals and societies. Today’s children may be the first generation to grow up in the perfect storm in which ubiquitous digital information and ubiquitous state and commercial surveillance combine. This is the context in which the debate about regulating encryption takes place, as does the analysis of how to do so in a way that respects children’s rights.

 
[Illustration: many doors, each with a keyhole, all opening at once]
 

***

 

Footnotes

1 See https://privacyinternational.org/learn/five-eyes


2 Telephony is technology associated with interactive communication between two or more physically distant parties via the electronic transmission of speech or other data: https://www.techtarget.com/searchunifiedcommunications/definition/Telephony


3 Grossman, W., net.wars, 1997, New York University Press, p. 196, https://nyupress.org/9780814731031/net-wars


4 See https://www.cryptomuseum.com/crypto/index.htm


5 CRIN and ddm interview with Wendy M. Grossman, 28 September 2022.


6 Callas, J., The Recent Ploy to Break Encryption Is An Old Idea Proven Wrong, 23 July 2019, here.


7 Blaze, M., Exhaustive Search has Moved, 7 July 2018, here.


8 See the Crypto Museum website on Gemalto, here.


9 Tech Against Terrorism, Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies, 2021, here.


10 ISOC, Breaking Encryption Myths: What the European Commission’s leaked report got wrong about online security, 2020, here.


11 EFF, If You Build It, They Will Come: Apple Has Opened the Backdoor to Increased Surveillance and Censorship Around the World, 2021, here.


12 Levy, I. and Robinson, C., Principles for a More Informed Exceptional Access Debate, 2018, here.


13 ISOC, Ghost Protocol Fact Sheet, 2020, here.


14 Clayton Rice, K.C., The Ghost Key Proposal, here.


15 Green, M., On Ghost Users and Messaging Backdoors, 2018, here.


16 See Riana Pfefferkorn regarding Operation Car Wash in this event organised by the Stanford Cyber Policy Center: here.


17 Fishman, A. et al., The Secret History of US Involvement in Brazil’s Scandal-Wracked Operation Car Wash, 12 March 2020, here.