Understanding encryption and its place in the digital environment

 

Developing a children’s rights approach to encryption requires a thorough understanding of the technology: how it works, how it is used and how it is integrated into the digital environment. Decisions about whether and how to apply encryption have consequences at the individual, community, institutional, State and international levels.

Some regulators have recognised that each provider is different, with different architectures, business models and user bases. This means that an intervention, or use of specific tools on one platform, may not be proportionate on another.18 This is why it is important to set out the technology and explore the differences and nuances in technical discussions.

There has often been talk of “strong encryption” or “breaking encryption” or “workarounds” of encryption in recent debate. What do these really mean in everyday language? Why does it matter in the current debate around children in the digital environment?


Encryption and the Internet

To understand what encryption is and why it matters, one should first understand some basic workings of the Internet and the World Wide Web.

Becky Hogge’s guide, Internet Policy and Governance for Human Rights Defenders,19 offers a useful explainer of the construction of the Internet and how the World Wide Web runs on it, described in seven separate layers, each one “stacked” upon the last.20 The model helps to give a sense of place to everyday users, the actors and stakeholders involved in each part of its design, development, maintenance and existing governance models.

Almost all of the recent debate in the Anglo- and Euro-centric spaces on “encryption”, “platforms” and children only considers the content, users, and their interactions in the one superficial layer where content can be viewed. But secure methods of managing different parts of the Internet rely on encryption throughout the full set of layers, or the “stack” of its construction. This is why some people say, for example, that if you ban encryption online you prevent secure banking or commerce.

As Becky Hogge described in her guide Internet Policy and Governance for Human Rights Defenders:21


“Network operators can censor and monitor content at the physical layer. At the code layer, the IETF and ICANN set standards and maintain the key functions of the internet. The application layer is host to huge technology companies such as Google and Facebook, whose market dominance has conspired to make their services the ‘town squares’ of the digital age.”

How does communicating on the World Wide Web work?

Simplistically speaking, data is sent across the Internet in “packets” from one digital device to another, broken up into manageable parcels that flow in a stream of electronic traffic. But just like anything sent through the postal service, the sender cannot control what happens to the parcel once it is sent. There are therefore switches and agreements that instruct each part of the system how to handle and distribute the packets. Those instructions need to be readable and understood across the whole World Wide Web, so the administrative functions and tasks are coded in broadly accessible instructions across the Internet. These standards are constantly being refined and improved, and new standards designed where needed.

Each packet of information can be sent in a variety of ways, and can be sent “in the clear” so that anyone with access to the packet at any point in its journey can also see its contents, in effect distributing an open letter without an envelope.

Alternatively, the sender and recipient may encode the data through encryption, which is commonly thought of as a method used to preserve confidentiality between parties who want to send, share or store information without it all being visible from the outside. In this sense, encryption is used to protect the contents of the transmitted data, but it is also possible to protect the transport tool, not only what is inside it.

This is where the term “metadata” matters: metadata is in effect the labelling and descriptive information added to the outside of the packets, including the addresses of the sender and recipient, that enables the packets to all arrive in the same place and be put back together in the correct order for the recipient to receive and read as the sender intended.

 

When encryption is used “in transit” to prevent third parties who might intercept the data packets from reading their content while it is moved from one place to another, so that the content can only be read by the sender before it is sent or by the recipient after it is received, it may be called “end-to-end” encryption.


“The internet has been called a “world of ends” and an “end-to-end network”, because on the internet the stuff that matters, the smart stuff, happens at the end points, at the computers that connect to it. The computers that connect to the internet are constantly generating, storing and sharing information.”

— Becky Hogge, Internet Policy and Governance for Human Rights Defenders


An important caveat should be remembered when defining what end-to-end encryption means in practice. If the servers that send, store and receive data control the encryption keys (the keys used to decode the data), rather than the end users themselves, the server operator will have access to the data. The environment is therefore not controlled by the users’ choices about encryption, and the server operator will be able to access its content and provide it to law enforcement upon request.


What does encryption do for me in the World Wide Web?

Encryption is a fundamental part of creating secure websites. However, recent advances in webpage security have led some to argue that end-to-end encryption is detrimental to protecting children online.

When users visit a web page, they see data that is hosted on that website because electronic information is transferred from where it is stored to the user’s “browser” (e.g. Google Chrome, Microsoft Edge, Mozilla Firefox). But how does a computer find which site is the one that you want among the billions of webpages in the world?

The Domain Name System (“DNS”) is a system for naming and identifying computers reachable through the Internet or other Internet Protocol networks. It is the system that enables humans to look up a web address and get what we know as domain names (e.g. https://home.crin.org/) “resolved” into numerical IP addresses that the computer can find (i.e. 198.185.159.144) and back again.

This naming system exists to make finding websites easier for people, who generally find it difficult to remember long strings of numbers. DNS acts as an address book that humans and computers can both understand.
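As an illustration, the lookup that happens behind the scenes can be reproduced in a few lines. The sketch below, in Python, simply asks the operating system’s resolver for the address of the domain used in the example above; the address returned may differ from the one quoted as hosting arrangements change.

```python
import socket

# Ask the operating system's resolver to turn a human-readable domain
# name into the numerical IP address that computers use to route traffic.
hostname = "home.crin.org"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```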

Various browser companies have upgraded user security in recent years by adopting DNS over HTTPS (DoH), which encrypts the DNS lookups that a browser makes. Alongside this, websites increasingly encrypt the data transferred between the computer where it is stored and the browser of the person viewing the website; websites that use this kind of protection (called SSL/TLS) start with “https” rather than “http”. These developments are intended to make accessing websites more secure, including by preventing false authentication through “man-in-the-middle” attacks.

Man-in-the-middle attacks refer to situations where a stranger interferes with data that is being transferred, for example by pretending to show users the website they are trying to visit while changing important details. The attacker could redirect the credit card details a user enters to a different endpoint as a way of stealing (or “phishing”) personal and financial information.
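From a programmer’s point of view, this protection is largely invisible. The sketch below, using Python’s requests library, shows that certificate verification (on by default) is the step that authenticates the server, and it is exactly the step a man-in-the-middle must defeat.

```python
import requests

# With verify=True (the default), the TLS certificate presented by the
# server is checked against trusted certificate authorities. A
# man-in-the-middle presenting a forged certificate causes this call to
# fail rather than silently handing the user's data to an impostor.
response = requests.get("https://home.crin.org/", timeout=10, verify=True)
print(response.status_code)

# verify=False would still encrypt the traffic, but would no longer prove
# who is on the other end - the gap a man-in-the-middle attack exploits.
```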

Cloudflare, a global cloud services provider, explains it like this:22


"SSL ensures that anyone who intercepts the data can only see a scrambled mess of characters. The consumer’s credit card number is now safe, only visible to the shopping website where they entered it.”

“SSL also stops certain kinds of cyber attacks: It authenticates web servers, which is important because attackers will often try to set up fake websites to trick users and steal data. It also prevents attackers from tampering with data in transit, like a tamper-proof seal on a medicine container.”

Website encryption and the challenges of identifying illegal and harmful content

The shift to greater security of websites and browsing through DNS over HTTPS (“DoH”) has created challenges for some organisations responsible for creating lists of websites to be blocked or monitored. For example, the UK’s Internet Watch Foundation (IWF) scans webpages to create lists of websites containing content that is illegal or harmful for children, including content related to terrorism or pornography. This makes it possible to block sites containing this content and to create watchlists so that others can monitor when their users access this kind of material.

In February 2020, Firefox switched to DNS over HTTPS by default for users in the US, making their default browsing experience more secure. According to John Dunn writing for Sophos,23


“[T]o privacy enthusiasts, this change was good because neither [Internet Service Providers] nor governments have any business knowing which domains users visit. For ISPs, by contrast, DoH hands them several headaches, including how to fulfil their legal obligation in the UK to store a year’s worth of each subscriber’s internet visits in case the government wants to study them later for evidence of criminal activity.”

The UK is already recognised as having one of the more intrusive approaches to state demands made of Internet Service Providers. Companies that want to promote the more secure web architecture, DNS over HTTPS, include DNS providers that offer filtering and parental controls. However, the Internet Service Providers Association (ISPA)24, a trade association representing British ISPs, and the British Internet Watch Foundation have both criticised Mozilla, the not-for-profit organisation behind the Firefox browser, for supporting DoH, saying that it will undermine web blocking programs, including ISP default filtering of adult content and mandatory court-ordered filtering of copyright violations, which rely on less secure architectures to be effective.25 Mozilla subsequently said that DoH will not be used by default in the British market until further discussion with relevant stakeholders, but stated that were it implemented, it “would offer real security benefits to UK citizens”.26

In fact, the man-in-the-middle technique is exploited by some companies, for example those that sell web filtering (and user monitoring) systems and services to educational settings: they in effect sit between the user and the real website and impersonate it.

Filtering out content means first having access to it. Filters essentially come in one of three types, according to Professor Ross Anderson,27 depending on the level at which they operate: packet filtering, circuit gateways (where DNS filtering happens), and application proxies (such as mail filters that try to weed out spam). Since the adoption of more secure transport routes via https, the tools that perform such jobs have been pushed to the endpoints of systems and networks.
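A minimal sketch of the “circuit gateway” level described above: a DNS-based filter simply refuses to resolve names that appear on a blocklist. The domains below are invented placeholders, not entries from any real list.

```python
import socket
from typing import Optional

# Illustrative blocklist; real deployments use curated lists maintained
# by bodies such as the IWF or commercial filtering providers.
BLOCKLIST = {"blocked.example", "unsafe.example"}

def resolve_if_allowed(hostname: str) -> Optional[str]:
    """Resolve a hostname only if it is not on the blocklist."""
    if hostname in BLOCKLIST:
        return None  # refusing to resolve means the site cannot be reached
    return socket.gethostbyname(hostname)

print(resolve_if_allowed("home.crin.org"))    # resolves normally
print(resolve_if_allowed("blocked.example"))  # returns None (filtered)
```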

Encryption alone does not fully protect confidentiality, commercial practice or the contents of communications. It only protects against unwanted third-party observers; it does not govern what the individuals or institutions, the “endpoints”, then do afterwards with the (now decrypted) data.

This is especially important to remember when considering whether one method of encryption is more “privacy-preserving” than another, or in evaluating whether a particular technological intervention at one point in the process does or does not “interfere” with privacy. Encryption is not a single tool at a single point of a physical process, but multiple types may be involved in any one online communication. The principle and practice at stake are whether there is any interference by any third party at all.

Why does understanding this matter? Because encryption is necessary for keeping users safe online and discussions that paint ‘encryption’ only as a threat make finding workable solutions to address the real problems more difficult. As described by Dr. Ian Levy and Crispin Robinson of GCHQ in 2018 in an article on Lawfare:28


“Collectively, we’ve defined the various different service and device problems as a single entity called ‘encryption.’ That’s unhelpful as the details of each device and each service will constrain and drive particular solutions.”

Encryption and metadata

Metadata is information about other data. In the conversation about digital communications, metadata can include information about where data came from, its structure, storage and how it is shared. For example, if data originated from a mobile phone, the metadata might include the name, model, firmware, type of device, configuration and capacity of that phone.

Metadata usually includes information useful to the providers of services used in the communications process, such as how well it is performing, how fast information is being written or read and how quickly systems are responding. For example, if the information that is transmitted includes audio or video, it is important for service providers to optimise the speed and order in how packets of data are sent, arrive and are reconstructed to improve the experience of users. Metadata will also include information about the servers, computers and other devices where data came from, has gone to and is stored.

Encryption that protects the contents of a communication does not protect the metadata, which the sender and recipient did not create but without which the packet cannot pass through different parts of the system: the routing information needs to be readable for the message to reach the right destination.
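The distinction can be pictured as an envelope. In the hedged sketch below, the message body stands in for ciphertext (random bytes, not a real cipher) while the routing metadata stays readable; the addresses are invented for illustration.

```python
import json
from base64 import b64encode
from secrets import token_bytes

# The "envelope" (metadata) must remain readable so the network can route
# the message; only the body is protected. The body here is random bytes
# standing in for real ciphertext.
packet = {
    "from": "alice@example.org",                  # readable metadata
    "to": "bob@example.org",                      # readable metadata
    "timestamp": "2023-01-01T12:00:00Z",          # readable metadata
    "body": b64encode(token_bytes(32)).decode(),  # unreadable content
}
print(json.dumps(packet, indent=2))
```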

On WhatsApp, content and metadata29 are both encrypted, which means artificial intelligence systems cannot scan all chats, images and videos automatically as they do on Facebook and Instagram. However, the metadata is still visible to the parent company Meta so that it can direct the messages to the right user. Meta can also access content information if users back up their WhatsApp messages or interact with a business account on the platform.30 Since 2020, content moderation reviewers can gain access to communications when users engage the “report” button on the app and claim a message violates the platform’s terms of service, including in cases of sextortion.31

Metadata is intended to be read by machines, but because it is very detailed it can also tell people a lot about the relationships and behaviours of the parties involved in any digital activity or communications, even without seeing what is contained in the content.

When digital publishing companies add metadata to academic papers or educational materials to catalogue the attributes of the contents of libraries, it is used by automated search engines to identify, profile and find materials that match search criteria across billions of Internet page searches. In similar ways, metadata about communications may be used to identify, profile and find individual people talking to each other among the billions of people online in the world.

David Cole, the National Legal Director of the ACLU and the Honorable George J. Mitchell Professor in Law and Public Policy at the Georgetown University Law Center, memorably quoted the NSA General Counsel Stewart Baker in a debate in 2014, saying, “metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content”, to explain how metadata alone can provide an extremely detailed picture of a person’s most intimate associations and interests. As a purely technological task, it is much easier to search huge amounts of metadata than to listen to millions of phone calls. His co-panellist in the debate, General Michael Hayden, former director of the NSA and the CIA, called Baker’s comment “absolutely correct” and added, “We kill people based on metadata.”32

The UN High Commissioner for Human Rights described in 2018 why the question of confidentiality applies to both the contents of communications and the metadata: “The protection of the right to privacy is broad, extending not only to the substantive information contained in communications but equally to metadata as, when analysed and aggregated, such data ‘may give an insight into an individual’s behaviour, social relationship, private preference and identity that go beyond even that conveyed by accessing the content of a communication’”.33


Using metadata to identify online child sexual exploitation and abuse

The potency of metadata is one reason that technologists argue that it is not necessary or proportionate to access the content of everybody’s communications through mass surveillance, because metadata can indicate patterns of contact or behaviour that gives away a great deal of information about illegal activities. Some argue that metadata should be used to identify and justify where targeted interventions can be made to access content, based on suspicion, rather than mass surveillance or interception, subject to judicial oversight.

This process can also work in reverse. According to Dr. Ian Levy and Crispin Robinson of GCHQ, in all cases, once an image is determined to be child sexual abuse imagery, the service provider knows from the service metadata the identities of the accounts that shared the content, those that received it and those that re-shared it. This knowledge means that educational messages could be targeted at the relevant users and, if necessary, search warrants taken against users who offend in this way.34 The power of potential uses of metadata has led some actors engaged in online regulatory reform to suggest a higher level principle around using reasonable efforts to identify child sexual abuse material:


“Every platform comprises various different kinds of metadata, collects it, assesses it in particular ways. Metadata can only ever suggest that something is illegal or harmful, it cannot tell you with any certainty. [...] All it can do is say that there are factors which indicate that there might be something illegal or harmful, and then you have to do a human review. And those factors and the weighting they have vary massively from platform to platform. So it is a very difficult thing to regulate on an industry-wide level, and I’m not sure regulation needs to be so specific around the use of metadata. [...] [But it could] require platforms to use reasonable efforts to identify [child sexual abuse material] and then a regulator can make an assessment as to whether a company is doing that, whether it’s using the metadata that it does collect in the most effective way, and require the company to take further steps if it’s not doing so.”35

“It’s a very powerful tool, looking at metadata, it’s potentially very intrusive. We definitely are strongly against the bulk collection or scanning of metadata. Metadata would need to be used in a very targeted fashion, which means that other techniques would first need to be used to identify the suspects. This is, I think, not the way that a lot of people see metadata as solving this problem, because they want to use it in bulk and do big analyses and pattern matching to try to find potentially suspicious individuals.”36

“Lots of metadata is generated [...] I think it’s one of a number of approaches that companies should be working on improving [...] Even if it still leaves significant gaps, I think it’s important to have a process with governments to think about how companies might make more effective use of it without compromising people’s rights.”37

Uses of encryption beyond confidentiality

Encryption has value and uses that go beyond protecting confidential information. Understanding how the applications of encryption go beyond keeping things confidential is vital to the analysis of risks and benefits, according to UK-based technology lawyer Neil Brown:


“[I]f you are solely focussed on providing a ‘good enough’ solution to confidentiality, and you ignore the other facets of encryption, your solution is likely to be inadequate.”38

He identifies twelve areas where encryption plays a role, in addition to confidentiality, including:

  • Anonymity: keeping the identity of a party unknown to the other party or parties, or to one or more service providers;
  • Asynchronicity: the ability for someone to send a message, even though their intended recipient is offline, or for someone to receive a message, even though the sender of that message is offline; and
  • Authentication: checking that the encrypted information was encrypted correctly, using the chosen encryption algorithm.

Most importantly, “encryption” is not a single technology or even a collection of different tools. Brown describes “encryption” as a concept, or set of processes within a system. In practice, the process of encryption is carried out through algorithms, and not all algorithms are the same, or attempt to do the same things. Some algorithms have different capabilities and are better suited to one task than another. Some algorithms demand more from the users than others, e.g. more computational resources, or more technical skill to apply.39
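As one concrete illustration of a single algorithm choice among many, the sketch below uses the Fernet construction (built on AES) from the widely used Python cryptography library. It is offered only as an example of symmetric encryption in practice, not as a recommendation of any particular scheme.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Fernet is one concrete, symmetric algorithm choice built on AES; other
# schemes trade off speed, key handling and capabilities differently.
key = Fernet.generate_key()           # whoever holds this key can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"a confidential message")
print(token)                          # unreadable without the key
print(cipher.decrypt(token))          # b'a confidential message'
```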


Encryption in children’s everyday lives

Children benefit from the use of encryption in their everyday lives, for cyber security and privacy, just as adults do. A common thread across the domains of child safety and privacy might be considered a question of interference. Who may interfere with a child and their full and free development, their everyday activities and communications, how, with what effect, and for what purposes?

The domains in which security may protect children and keep them safe where they are active online include not only communications and social media, but also access to finance, health, education, politics, and participation in culture, community and play. Across these environments, insecure technology has had a significant impact on children.

In 2011, 77 million Sony PlayStation user details were reported stolen. The “illegal and unauthorised person” obtained people’s names, addresses, email addresses, birth dates, usernames, passwords, logins, security questions and more, Sony reported, and children with accounts established by their parents also might have had their data exposed.40

In 2015, children’s technology and toy firm Vtech suspended trading on the Hong Kong stock exchange after admitting a hack that allegedly saw 5 million customer details stolen, including sensitive information and unencrypted chat logs between children and their parents.41

In 2016, the Norwegian Consumer Council (NCC) identified problems in Internet-connected toys that are emblematic of the increased spread of connected devices. The NCC said that in a growing market, it is essential that consumers, and especially children, are not being used as subjects for products that have not been sufficiently tested.42

In 2017, together with the security firm Mnemonic, the NCC also tested several smartwatches for children. The researchers discovered significant security flaws, unreliable safety features and a lack of consumer protection. Finn Myrstad, the Director of Digital Policy at the Norwegian Consumer Council, said at the time that,


“It’s very serious when products that claim to make children safer instead put them at risk because of poor security and features that do not work properly.”43

In the educational environment, the US Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) acknowledged in 2022 that educational settings are at high risk of ransomware attacks, where limited cybersecurity capabilities and constrained resources increase their vulnerability, and “K-12 institutions may be seen as particularly lucrative targets due to the amount of sensitive student data accessible through school systems or their managed service providers.”44

In the family environment, a child may experience a conflict between their own agency and the rights and responsibilities of the parent, particularly in culturally conservative households. These considerations are most relevant when considering parental monitoring or control services on children’s phones and other devices. In order to offer parents surveillance or monitoring services over their children’s mobile devices, parental control apps require privileged access to system resources and access to sensitive data.

According to Feal, “this may significantly reduce the dangers associated with kids’ online activities, but it raises important privacy concerns. These concerns have so far been overlooked by organizations providing recommendations regarding the use of parental control applications to the public.”45

In a review of 3,264 parental control apps conducted in 2021, researchers Wang et al. found that such apps were being increasingly adopted by parents as a means of safeguarding their children’s online safety.46 However, it was not clear whether these apps are always beneficial or effective in what they aim to do; for instance, the overuse of restriction and surveillance has been found to undermine the parent-child relationship and children’s sense of autonomy. Ghosh et al. had found in 2018 that, overall, increased parental control was associated with more (not fewer) online risks.47

Dr Ian Levy and Crispin Robinson also point out in their most recently published paper:


"[T]his kind of mechanism may place some children at additional risk from abusive or manipulative parents, even when the parents themselves don’t have access to content, and whilst the technique would be technically relatively straightforward to scale, research would be necessary to determine how well it would be likely to cover the users most at-risk and how at-risk children could be effectively protected.”

Security is a process, not a product. Encryption may turn trust into machine-readable code so that machines can verify and trust each other, but human trust still relies on people trusting one another. Using tools to replace that trust has consequences.


Scanning unencrypted content to match known images

Technological developments have enabled new routes to access and abuse children at scale both in real-time and through repeated distribution of content and so, in turn, new technology is being developed and applied to respond to these challenges.

When it comes to detection and content moderation, to identify and remove images of child sexual abuse, the best known technology is PhotoDNA, created by Professor Hany Farid and owned by Microsoft.

PhotoDNA48 works by creating a unique digital signature (known as a “hash”) of an image, which is then compared against the hashes of other photos to find matching copies of the same image. Facebook adopted the use of PhotoDNA across its entire network in 2010, Twitter in 2011 and Google in 2016.49 The software operates only in unencrypted environments, such as the open web without https, non-end-to-end encrypted channels, or points where content is stored in unencrypted form (i.e. at the level of Internet Service Providers).
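PhotoDNA itself is proprietary, so the sketch below uses the open-source imagehash library purely to illustrate the general hash-and-compare idea; the filenames are placeholders and the distance threshold is illustrative, not PhotoDNA’s.

```python
from PIL import Image
import imagehash  # pip install imagehash; an analogy for, not a copy of, PhotoDNA

# Perceptual hashes of near-identical images differ by only a few bits,
# so matching uses a small distance threshold rather than exact equality.
known_hashes = {imagehash.phash(Image.open("known_reported_image.jpg"))}

candidate = imagehash.phash(Image.open("uploaded_image.jpg"))
is_match = any(candidate - known <= 5 for known in known_hashes)
print("match" if is_match else "no match")
```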

In 2018, when deploying PhotoDNA, and to avoid the complexity of classifying content whose legality might be disputed, Facebook policy was to “only add content to the database that contains images of children under the age of 12 involved in an explicit sexual act”.50 In 2019, Facebook moved to a different hashing algorithm, PDQ, which they developed themselves and a version of which will also hash video.51

The most common criticism of PhotoDNA is that it only knows what it knows. PhotoDNA will not detect previously unreported or new images. Despite this, former President and CEO of the National Center for Missing and Exploited Children Ernie Allen says it is an important tool in content removal to reduce victimisation, and can identify photos that have been in circulation for many years, or that are new but have been identified and turned into a hash only recently. “Using PhotoDNA, we will be able to match those images, working with online service providers around the country, so we can stop the redistribution of the photos.”52

Professor Farid has also built a modified version of PhotoDNA, called eGlyph, for the identification of material for counter-terrorism purposes. It is worth drawing attention to his own comments in his 2018 paper that the technology’s application to target any particular kind of image, or person, is limited not by safeguards built into the technology, but by policy:


“[A]ny technology such as that which we have developed and deployed can be misused. The underlying technology is agnostic as to what it searches for and removes. When deploying photoDNA and eGlyph, we have been exceedingly cautious to control its distribution through strict licensing arrangements. It is my hope and expectation that this technology will not be used to impinge on an open and free internet but to eliminate some of the worst and most heinous content online.”

The concern is widespread among privacy experts that policy safeguards provide insufficient protection against the increased scope for usage of the technology beyond the identification of child sexual abuse images.


“A lot of the voluntary work around detection of CSAM is based on these databases, and happening in a relatively limited way. However, the technology that’s being deployed to do that is already being deployed also to look for terrorist content. It’s even potentially being deployed to look for misinformation and disinformation.”53

There is less information in the public domain about the technology that operates in live and real-time digital environments. A 2021 Council of Europe independent experts report54 stated that Microsoft had for several years been leveraging tools built on artificial intelligence (AI) and aimed at targeting grooming behaviours in programs on its Xbox platform, and was exploring their use in chat services, including Skype. However, that may now be out of date,55 as the terms and conditions at the time of writing state56 that, “we do not monitor the Services and make no attempt to do so.”

The hopes of some of the people we spoke to who support victims and survivors rest on further emerging technologies that would grant access to live conversations and behaviours to a wider range of people, such as safeguarding professionals, including for example the UK project DRAGON-S (Developing Resistance Against Grooming Online – Spot and Shield). The proposal to triage conversations that human operators believe should be inspected in more detail will still need to respect human rights principles like necessity and proportionality.57

One area of risk and harm that deserves attention in the context of the broader child protection system is the identification of images shared consensually between peers, commonly known as “sexting”. The child’s actions constitute a criminal offence in the UK and many other jurisdictions, but the intent of most adults supporting young people is not to criminalise them, and formal sanction against a child or young person would be considered exceptional.58


Workarounds for encryption and exploits in security in the context of child protection

The difficulties posed in identifying illegal behaviour in end-to-end encrypted environments, including child sexual abuse and exploitation, have led to a number of proposals for how to overcome this challenge.

Client-side scanning

Client-side scanning is a means of monitoring the content and behavioural data generated on a device, as opposed to in transit. This means that outgoing communication from a device is scanned and checked against a list of known images or words before it is sent. If there is a match, the system can refuse to send the message or may report it to law enforcement or watchdog organisations. Client-side scanning has been proposed in particular as a means of identifying child sexual abuse material that is shared across encrypted channels by scanning messages before they are encrypted and sent, but there is nothing about the technique or technology that limits it to identifying any particular type of image or content.
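In outline, the idea can be sketched as a check performed on the device before encryption and sending. The hash list and transmit function below are hypothetical stand-ins, and real proposals use perceptual rather than exact hashes.

```python
import hashlib

# Hypothetical list of hashes of known prohibited content (placeholder value).
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def send_message(payload: bytes, transmit) -> bool:
    """Scan outgoing content on the device before it is encrypted and sent."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in KNOWN_HASHES:
        # Depending on the proposal, the client refuses to send and/or
        # files a report; here it simply refuses.
        return False
    transmit(payload)  # encryption and sending happen only after the check
    return True
```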

There are also similar “hybrid” style scanning measures, such as those proposed by Apple in 2021. Facing criticism, the company decided to change some of its plans and pause others,59 but the proposals were that where users were backing up photos by copying them to Apple servers, this would initiate a scanning process. This method of detecting child sexual abuse material is not strictly “client-side” but a “hybrid on-device/server pipeline”. While the first phase of the hash matching process60 runs on the device, its output is only interpreted through the second phase, run on Apple’s iCloud Photos servers. Apple announced a change of its plans in December 2022 to refocus its efforts on growing its Communication Safety feature.61

The intended plan was that if already known child sexual abuse images were uploaded to Apple’s iCloud servers in numbers that exceeded the review threshold, Apple would detect a match against a database of hashes of images provided by the National Center for Missing and Exploited Children. Although the system uses machine learning to detect minor alterations, for example if the images were cropped or compressed differently, it would not detect an unknown image.

Several experts interviewed during research for this report saw advantages to the use of client-side scanning as a less intrusive means of identifying content transferred through encrypted channels, since the technology does not seek to have access to the entirety of the user’s communications, but operates before the encryption or after the decryption of the communications, and it does not actually “read” the messages:


“One myth is the idea of looking at pictures or scanning your photos. That’s not what happens. [...] They’re 1s and 0s. No one’s looking at anything. It’ll just be a string of numbers compared to another string of numbers. And if they match, take action.”62

However, many people and organisations who work with technology are concerned about proposals that support client-side scanning because any access for people who were not intended to be part of a particular communication requires a way in. Any “back-door” access “increases the ‘attack surface’ for encrypted communications by creating additional ways to interfere with communications by manipulating the database of prohibited content”, and it can’t be guaranteed to be accessed by only “the good guys” according to the Internet Society in their response to the leaked 2020 working copy of an EU Commission paper.63

Fourteen experts in computer science, from institutions ranging from Cambridge University and the Royal Society to MIT, and including a fellow of the IEEE, are the authors of the paper Bugs in our Pockets: The Risks of Client-Side Scanning (2021). They remain unconvinced and believe that the promise of client-side scanning is an illusion.

They explain that, “moving content scanning capabilities from the server to the client opens new vantage points for the adversary”, and argue that if the client-side scanning technologies and practice were to become pervasive, there would be “an enormous incentive for nation-states to subvert the organisations that curated the target list, especially if this list were secret”.

Similar criticism has been made that client-side scanning breaks end-to-end encryption in principle if not in practice by creating the route for interference by a third-party, because “fundamentally it’s very targeted at finding the content of the end-to-end encrypted communication: understanding what’s about to be sent, or what has been sent and received on the device itself. So that breaks the expectation that this is supposed to be a private communication only between the known participants. And more broadly, it is likely to be incredibly disproportionate because of the ability to scan for all types of content and potentially heavily censor that content - not only indicate that certain content is about to be sent or has been sent, but also potentially even block that content from being sent.”64

Critics of the use of the measure have also raised concerns about the risk of “mission creep”, whereby measures are introduced exclusively to identify child sexual abuse images, but are then expanded in a way that leads to much greater intrusion and reporting of individuals’ activities - legal or otherwise - to authorities. Researchers at Princeton in 2021 stopped their own scanning program when they realised how easily their system could be repurposed for surveillance and censorship. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”65 That China and India have repurposed such technology for these aims66 makes this a very real, not theoretical, risk.

Some proposals to implement scanning the device respond in some respects to this concern, warning the user when a match is identified and blocking the content, but not notifying the authorities. Even in this case, some interviewees were wary of mission creep: “If people are used to a system like that on their phone running in the background, then how hard would it be to flip the switch and start reporting back to the authorities? [...] It’s such a powerful tool and many governments around the world that are more repressive are going to want to have access to it and expand it beyond [child sexual abuse material].”67


“From a policy perspective you could try to put controls and limits in place, but of course the next government might have different views and get rid of those controls. [...] Once the tech is in place, people will come up with all sorts of ideas about how this technology could be used to deal with new societal problems. [...] [It has been widely said that] ‘Code is law’, that technology has legal impact. I actually think it goes further than that. I think in some ways technology is like constitutional law, where it puts things in place that are very difficult to change later. Once every iPhone and every Android has this kind of CSAM scanning capability, well, why shouldn’t governments ask it to start looking for bomb-making instruction manuals, extremist images, and insults to religious figures?”68

Dr Ian Levy, former Technical Director of the National Cyber Security Centre (NCSC), and Crispin Robinson, Technical Director for Cryptanalysis at GCHQ, promoted a more overt client-side scanning approach in their July 2022 paper as a way of achieving the same aim as mass surveillance, and claim that it does so without endangering user privacy. Others disagree. Because this privacy interference is performed at the scale of entire populations, a group of leading security and encryption experts describe it as a bulk surveillance technology in their paper “Bugs in our Pockets”, published in 2021.69 They explained why it makes what was formerly private on a user’s device potentially available to law enforcement and intelligence agencies, even in the absence of a warrant.


“[Client-side-scanning] neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.”

This risk of scope creep raises the very real question of what would prevent the technology from becoming an all-purpose facial recognition and reporting tool for the state. Since some aspects of proposed legislation in the EU would make reporting mandatory, and in the US the reporting of child sexual abuse material to NCMEC is already mandatory,70 the question arises of whether organisations involved in monitoring reporting, such as NCMEC, are wholly private or, in the legal context, a “state actor”. This question in turn raises questions about appropriate scrutiny and oversight.

In the August 2022 Report, The right to privacy in the digital age, the Office of the UN High Commissioner for Human Rights made several comments around Client-Side Scanning, namely that:


“Client-side scanning also opens up new security challenges, making security breaches more likely.”71

“Imposing general client-side scanning would constitute a paradigm shift that raises a host of serious problems with potentially dire consequences for the enjoyment of the right to privacy and other rights. Unlike other interventions, mandating general client-side scanning would inevitably affect everyone using modern means of communication, not only people involved in crime and serious security threats.”72

“Given the possibility of such impacts, indiscriminate surveillance is likely to have a significant chilling effect on free expression and association, with people limiting the ways they communicate and interact with others and engaging in self-censorship.”73

Homomorphic encryption and emerging technologies

On-device homomorphic encryption - a form of encryption which allows computations to be carried out on encrypted data without decrypting it - combined with server-side hashing and matching has been suggested as a technology with potential. Using this method, images are encrypted using a carefully chosen partially homomorphic encryption scheme, which enables an encrypted version of the hash to be computed from the encrypted image. The encrypted images are sent to the online service provider’s server for hashing and matching against an encrypted version of the hash list. The server does not hold the homomorphic encryption keys, so it cannot access the contents of the image; it can only identify whether or not there is a match in the database of images. If the database contains only one kind of image content, the service provider can therefore infer what was identified as being on the user’s device, but it cannot access the image itself.
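The pipeline described above is not publicly available to reproduce, but the underlying property, computing on data without decrypting it, can be illustrated with the python-paillier library, a partially (additively) homomorphic scheme; the numbers used are arbitrary.

```python
from phe import paillier  # pip install phe (python-paillier)

# Paillier is additively homomorphic: a party holding only the public key
# can add encrypted values without ever seeing the plaintexts.
public_key, private_key = paillier.generate_paillier_keypair()

enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# The "server" combines ciphertexts without the secret key.
enc_sum = enc_a + enc_b

# Only the key holder (the "client") can read the result.
print(private_key.decrypt(enc_sum))  # 42
```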

Some interviewees thought that investing in privacy-enhancing technologies like homomorphic encryption would be a way to move the debate forward.


“We’ve had conversations with industry partners and said, ‘Have you been looking at this?’, but one of the comments that came back was ‘It’s too expensive’. [...] But actually that would be a really positive way of using encryption. I can match something without ever knowing what I’m matching and what I’m actually matching it against. [...] It’s a way of using an encryption method to expose as little information as possible.”74

“My general perception is of technology that just keeps getting better and faster. [...] There have been theoretical conversations about homomorphic encryption or quantum computing and how it may lead to the ability to break encryption, but I think we’re just so far away from those solutions. And by the time we get to that point, we’ll have much more powerful encryption, too.”75

However, easier access to systems, networks and devices all increases the risk of misuse, which the European Union Agency for Cybersecurity (ENISA)76 observes cannot be fought with technology.

Research by Tech Against Terrorism found that the technology is not yet fully developed, and developing such solutions is expensive. Further, it presents security risks, raises jurisdictional questions, and breaches privacy.77

The UK Information Commissioner’s Office describes how important it is to choose the right algorithm, and to ensure that the key size is large enough to defend against attack over the full life-cycle of the data.78 As computing processing power increases or new mathematical attack methods are discovered, a key must remain sufficiently large to ensure that an attack remains a practical impossibility. Quantum computing creates new risks for every previous form of cryptography.
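Before quantum computing enters the picture, a rough, illustrative calculation shows why a sufficiently large key defeats classical brute-force attacks; the guess rate below is an assumption chosen for illustration only.

```python
# Back-of-the-envelope estimate with an assumed attacker speed.
keyspace = 2 ** 128                # possible 128-bit keys
guesses_per_second = 10 ** 12      # assumed, generous guessing rate
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years_to_exhaust:.2e} years to try every key")  # roughly 1e19 years
```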

According to ENISA, quantum technology will, “enable a huge leap forward in many branches of industry, as it can efficiently resolve problems technologies of today are not able to provide a solution for. However, this technology will be highly disruptive for our current security equipment and systems. Scientists commonly agree that quantum computers will be able to break widely used public-key cryptographic schemes.”79

Covert access to live content via wiretapping

Another approach to accessing data that is encrypted is through covert monitoring. Various terms are used interchangeably to describe this kind of activity, including “lawful exceptional access” and “legal hacking”, but the most well known proposal was the so-called “ghost protocol”. What all of these measures have in common is that they seek to gain covert access to encrypted communications.

The “ghost protocol”80, from GCHQ, proposed adding a silent third-party to encrypted conversations. In its simplest terms, this would mean that law enforcement or national security actors would be able to access content discussed in encrypted environments, without undermining the encryption itself as they would be part of the conversation. The measure has been widely condemned by technology and privacy groups, including the Internet Society.81 “While optimism and cooperation are nice in principle, it seems unlikely that communication providers are going to voluntarily insert a powerful eavesdropping capability into their encrypted services, if only because it represents a huge and risky modification.”82


“It has been said that the ghost proposal does not break encryption - it does not require the removal of encryption because you’re just adding a silent invisible user. [...] But from our perspective, that creates a huge security vulnerability. [...] This ghost is obviously intended to be law enforcement, but [...] criminals might be able to get access to that technology, states that don’t respect human rights might force service providers to use it to gain access to encrypted communications without the knowledge of the participants, and ultimately that breaks encryption.”83

“Legal hacking” presents another means of gaining access to encrypted environments. These measures try to exploit security vulnerabilities to gain access to end-to-end encrypted communications, whether by intentionally creating a weakness that authorities know how to access or taking advantage of an unintended defect in the security.


“Ultimately what they have in common is that they either mandate or try to exploit vulnerabilities. So to my mind they undermine the very essence of encryption, which is that no person can have access to the communication other than the sender and the receiver. So it’s essentially creating a vulnerability in the system which then law enforcement are able to have access to. Now, that’s kind of like building a house and saying you can have a lock on the front door, but you need to have a back door that the police can enter when they have a court order and all that really does is it creates an opportunity for someone else to break in.”84

Some interviewees argued that legal hacking should be acceptable if it complies with extremely stringent safeguards, for example ensuring that it does not undermine the security of the device as a whole. In any case, as one interviewee put it, “Governments already gain access to encrypted communications content by launching brute force attacks or employing other technical means to circumvent encryption. Such measures need to be regulated, and cabined with procedural and substantive safeguards governing such access on a case-by-case basis.”85

Other commentators have been more sceptical about the possibility of achieving this kind of access safely without fundamentally undermining encrypted communications:


“We know that hackers and those who would want to access people’s encrypted communications are as technologically savvy and in often cases more so than security and law enforcement agencies, so it would only be a matter of time before any vulnerability that was mandated became identified by others, so you’d constantly be playing a cat-and-mouse game of fixing a vulnerability and then having to create a new one. So I don’t think that ultimately that’s a sustainable solution. You might as well not have encryption in the first place if you’re going to have a vulnerability in that case.”84

In practice, law enforcement have a number of tools at their disposal that function as “lawful hacking”. GrayKey enables law enforcement to recover data from iOS and leading Android devices, including encrypted or inaccessible data. Cellebrite’s Universal Forensic Extraction Device, software that extracts the data from a mobile phone and generates a report summarising it, can even detect and report on deleted data. Other tools include IMSI catchers, essentially “fake” mobile towers acting between the target mobile phone and the service provider’s real towers, which are considered a man-in-the-middle (MITM) attack. These high-cost technology solutions are increasingly procured and used by States. Where States have found these measures politically unpalatable or legally not possible, some have ignored the principles of the rule of law, democracy and human rights and instead procured third-party services to do the spying on their behalf.87

Researchers found in Nigeria that the government has increased spending in the last decade on acquiring various surveillance technologies and has approved a supplementary budget to purchase tools capable of monitoring encrypted WhatsApp communications.88

Covert access to live content via malware and interception

Access to encrypted communications can also be achieved through installing “malware” (malicious software) on a device to allow access. The most high-profile example of this has been the use of “Pegasus,” software developed by the NSO Group. The software can be installed on a phone remotely without the owner knowing and turns it into a surveillance device. The software can copy messages that are sent or received, access photos, turn on the microphone to record conversations, turn on the camera and access location data.

Research by the Citizen Lab in 2020 found what they called “a bleak picture of the human rights risks of NSO’s global proliferation.” Countries with significant Pegasus spyware operations had previously been linked to abusive use of spyware to target civil society, including Bahrain, Kazakhstan, Mexico, Morocco, Saudi Arabia, and the United Arab Emirates. In August 2016, the award-winning UAE activist Ahmed Mansoor was targeted with NSO Group’s Pegasus spyware.89

Whether on request, through approved “lawful interference” or by indirect government interference through hacking,90 as soon as it is possible to open up the contents of communications to a company, by extension the government and law enforcement gain access in ways that they would not otherwise have.

These threats are supplementary to insider threats. In 2019, the US Department of Justice charged two former Twitter employees with accessing the personal information of more than 6,000 Twitter accounts in 2015 on behalf of Saudi Arabia.91 Secure enclave technologies, which are in effect “secure settings” inside businesses where not all employees have full access, are designed to mitigate but cannot solve this problem.

In many shapes and forms, there is a lack of trust that governments around the world will not misuse the communications data of their opponents. Individuals rely on universal fundamental human rights in law as a deterrent and as a route for redress where they cannot rely on a government to be trustworthy.


Breaking encryption myths

“Breaking” encryption for data in transit, the content of communications, involves being able to read the contents “in the clear” and joined up in the way the sender intended for the recipient to read them. That means either obtaining the key (by being given it, finding it, guessing it or compelling it from the sender) or bypassing the key by exploiting a flaw to access the plain contents in use or to locate a copy of them. Which method is used depends on who is looking for what access to what content, and why. The EU assessment of the effectiveness, feasibility, risks and outcomes of various workarounds can be read in a leaked 2020 working copy of an EU Commission paper.92

As more and more content has been made secure in transit and with the increasing use of peer-to-peer systems, the points at which any third party can most easily access communications data are the endpoints. The debate around end-to-end encryption has therefore become more fraught as time has gone on, as states, security services and law enforcement argue that more effective security for users makes it harder for them to break into communications. The push is therefore towards services that do not need to “break encryption” where it is used, and instead operate on the device or server that is the endpoint of the process. While these techniques may not compromise the technical architecture of end-to-end encrypted systems as a whole, they compromise its purpose and aims in practice.

Generally, the concept of “breaking encryption” in the context of detection and law enforcement for child protection has been superseded by the widespread use of an alternative approach: the encryption workaround.93 The technology itself does not need to be broken if the aims of end-to-end encryption can be defeated instead.


User reporting

Effective user reporting is widely recognised as a vital part of any policy and practice, whether by company to bodies responsible for identification and takedown or at individual levels.

For example, WhatsApp reports all apparent instances of child exploitation appearing on their service from anywhere in the world to NCMEC, according to their published policy,94 including via government requests.


“I think user reporting is actually something that should be encouraged as much as possible, assuming that what’s being reported is legitimately illegal behaviour or material. [...] Regulation is needed to make it easier for users to be able to report material and behaviour that violates platforms’ terms of service and to have a clearer and more transparent process by which that assessment is then made by the company, and then look into appeal mechanisms etc. It should be as easy as possible, particularly for children and other vulnerable users, to report something potentially harmful, and to understand the rules of the platforms they use to be able to report harmful or illegal activity and behaviour by other people. [...] Children should be better equipped, as they are growing up using technologies, to know how to use them safely and securely, whether that’s through schools, or by initiatives of the platforms or design choices by the platforms themselves.”95

User reporting by individuals and organised collectives may of course be used against individuals in unexpected ways or weaponised at scale as well.

WhatsApp users have used the reporting system to attack other users according to moderators interviewed by ProPublica, who said in 2021, “we had a couple of months where AI was banning groups left and right” because users in Brazil and Mexico would change the name of a messaging group to something problematic and then report the message. “At the worst of it,” recalled the moderator, “we were probably getting tens of thousands of those. They figured out some words that the algorithm did not like.”96

However, user reporting is the one approach that does not create tensions with privacy and security in encrypted environments, and poses little to no technical challenge. Interviewees, especially those involved in victim and survivor support, frequently highlighted that user reporting was inadequately supported and took too long, sometimes with weeks between reporting and takedown. That will likely become increasingly politically and publicly unacceptable with mounting pressure on social media companies from new legislation around the world.

 

Summary: Technology discussed in debate around combatting child violence and sexual exploitation

The technologies discussed fall into two broad groups: those that work alongside E2EE in transit, with content hash extraction and matching at the point of upload to the service provider or on providers’ servers; and workarounds of encryption or exploits in security.

Tools discussed:
  • PhotoDNA (only operates in unencrypted environments, e.g. websites without https and non-e2ee messaging, and at points where the content is unencrypted at rest, i.e. at the service provider or on the device)
  • On-device homomorphic encryption with server-side image hashing and matching (i.e. Apple 2022)
  • Text-based scanning tools (only operate in unencrypted environments, e.g. in the open web and at unencrypted points in communications channels)
  • On-device client-side detection with cloud-based second-stage image or text-based moderation
  • Secure enclaves in the service provider’s server with matching via homomorphic encryption
  • On-device overt access with information sent to another device (e.g. “parental control” style products)
  • Key escrow chips installed in devices at mass scale (e.g. the Clipper Chip)
  • Spyware (remote covert access to a mobile device not authorised by the device owner, e.g. Pegasus)
  • On-device hacking (physical device access not authorised by the device owner, e.g. Cellebrite)
  • Server-side access to all content by design (e.g. man-in-the-middle style tools, including “Child Safety Tech” products)
  • Ghost protocol (adding a third party to a communication while in progress, unknown to the device owner, e.g. state intelligence services)

Characteristics of the tools assessed:
  • Targeted only at individuals (can also be employed at scale)
  • Untargeted
  • Identifies content in an encrypted environment
  • Enables mass surveillance of content by companies
  • Enables mass surveillance of content by law enforcement / security services
  • State security services exceptional access possible (its legality depends on jurisdiction)
  • Compliant with a ban on general monitoring

Applications of the tools assessed:
  • Previously identified (recirculating) CSAM images of children aged under 13
  • Previously identified (recirculating) CSAM images of children aged 13-18
  • Previously unknown CSAM images of children aged under 13
  • Previously unknown CSAM images of children aged 13-18
  • Real-time grooming via camera (video)
  • Real-time sextortion via camera (video)
  • Illegal content exchanged in e2ee messaging between adult and child (text or image based)
 

***

 

Footnotes

18 See the Australian eSafety Commissioner, Basic Online Safety Expectations. Responses to transparency notices, 2022. See here.


19 See Hogge, B., Travel Guide to the Digital World: Internet Policy and Governance for Human Rights Defenders, 2014. See here.


20 See here.


21 Hogge, B., Travel Guide to the Digital World: Internet Policy and Governance for Human Rights Defenders, 2014, p. 46. See here.


22 See here.


23 Dunn, J., ISPs call Mozilla ‘Internet Villain’ for promoting DNS privacy, 2019. See here.


24 ISPA, ISPA withdraws Mozilla Internet Villain Nomination, 2019. See here.


25 See here.


26 The Guardian, Firefox: ‘no UK plans’ to make encrypted browser tool its default, 2019. See here.


27 Anderson, R., Security Engineering—A Guide to Building Dependable Distributed Systems, 2020, Chapter 21. See here.


28 Levy, I. and Robinson, C., Principles for a More Informed Exceptional Access Debate, 2018.


29 WhatsApp Encryption Overview Version 6 Updated November 15, 2021. Communication between WhatsApp clients and WhatsApp chat servers is layered within a separate encrypted channel using Noise Pipes with Curve25519, AES-GCM, and SHA256 from the Noise Protocol Framework. See here. See Mooney, N., An Introduction to the Noise Protocol Framework, 2020. See here.


30 Cloud API, operated by Meta, acts as the intermediary between WhatsApp and the Cloud API businesses. In other words, those businesses have given Cloud API the power to operate on their behalf. Because of this, WhatsApp forwards all message traffic destined for those businesses to Cloud API. WhatsApp also expects to receive from Cloud API all message traffic from those businesses. See here.


31 ProPublica, How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users, 2021. See here.


32 Cole, D., ‘We Kill People Based on Metadata’, 2014. See here. The full comments can be heard in the context of the debate: see here.


33 UN High Commissioner for Human Rights, The right to privacy in the digital age, A/HRC/39/29, 3 August 2018, para. 6. See here.


34 Levy, I. and Robinson, C., Thoughts on child safety on commodity platforms, 2022, p. 64. See here.


35 CRIN and ddm interview with Richard Wingfield, 6 September 2022.


36 CRIN and ddm interview with Privacy International, 26 September 2022.


37 CRIN and ddm interview with Ian Brown, 6 October 2022.


38 Brown, N., The end-to-end encryption debate: 1: the (very) basics of “encryption”, 2022. See here.


39 For more information on the breadth of technical and policy concepts regarding encryption, see: UK Information Commissioner’s Office, What is Encryption?, 2022. See here.


40 Reuters, Sony PlayStation suffers massive data breach, 27 April 2011. See here.


41 VICE, One of the Largest Hacks Yet Exposes Data on Hundreds of Thousands of Kids, 27 November 2015. See here. The breach of the popular kids’ gadgets company VTech also exposed children’s pictures and recordings, and chats with their parents: VICE, Hacker Obtained Children’s Headshots and Chatlogs From Toymaker VTech, 30 November 2015. See here.


42 See here.


43 See here.


44 US Cybersecurity and Infrastructure Security Agency, Alert (AA22-249A) #StopRansomware: Vice Society, 2022. See here.


45 Feal, Á. et al., Angel or Devil? A Privacy Study of Mobile Parental Control Apps, 2020, Proceedings on Privacy Enhancing Technologies 2020 (2): 314 - 335. See here.


46 Wang, G. et al., Protection or punishment? Relating the design space of parental control apps and perceptions about them to support parenting for online safety, 2021, Proceedings of the Conference on Computer Supported Cooperative Work Conference, 5(CSCW2). See here.


47 Ghosh, A. et al., A Matter of Control or Safety?: Examining Parental Use of Technical Monitoring Apps on Teens’ Mobile Devices, 2018, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. See here.


48 See Microsoft on PhotoDNA: here.


49 Farid, H., Reining in Online Abuses, 2018, Technology & Innovation, 19(3) 593–599.


50 Ibid.


51 Meta, Open-Sourcing Photo- and Video-Matching Technology to Make the Internet Safer, 1 August 2019. See here.


52 See here.


53 CRIN and ddm interview with Privacy International, 26 September 2022.


54 Council of Europe, Independent Experts’ Report: Respecting human rights and the rule of law when using automated technology to detect online child sexual exploitation and abuse, 2021, p. 24. See here.


55 See here.


56 See Microsoft Services Agreement from August 2022: here.


57 This platform has been collaboratively developed with Legal Innovation Lab Wales, supported by the European Regional Development Fund through the Welsh Government: here.


58 See CRIN, Discrimination and Disenfranchisement: A global report on status offences, 2016, pp. 38-41. See here. See also: here.


59 EFF, Apple Has Listened And Will Retract Some Harmful Phone-Scanning, 12 November 2021. See here.


60 Apple, Security Threat Model Review of Apple’s Child Safety Features, August 2021. See here.


61 CNN Business, Apple abandons controversial plan to check iOS devices and iCloud photos for child abuse imagery, 8 December 2022. See here.


62 CRIN and ddm interview with IWF, 3 November 2022.


63 Leaked EU Commission working document: Technical solutions to detect child sexual abuse in end-to-end encrypted communications, 2020. See here.


64 CRIN and ddm interview with Privacy International, 26 September 2022.


65 9to5Mac, Princeton University says it knows Apple’s CSAM system is dangerous – because it built one, 20 August 2021. See here.


66 EFF, India’s Draconian Rules for Internet Platforms Threaten User Privacy and Undermine Encryption, 20 July 2021. See here.


67 CRIN and ddm interview with Privacy International, 26 September 2022.


68 CRIN and ddm interview with Ian Brown, 6 October 2022.


69 Abelson, H. et al., Bugs in our Pockets: The Risks of Client-Side Scanning, 2021. See here.


70 Rosenzweig, The Law and Policy of Client-Side Scanning, 20 August 2020. See here.


71 UN High Commissioner for Human Rights, The right to privacy in the digital age, A/HRC/51/17, 4 August 2022, para. 28. See here.


72 Id., para. 27.


73 Ibid.


74 CRIN and ddm interview with IWF, 3 November 2022.


75 CRIN and ddm interview with Privacy International, 26 September 2022.


76 ENISA, Solving the Cryptography Riddle: Post-quantum Computing & Crypto-assets Blockchain Puzzles, 2021. See here.


77 Tech Against Terrorism, Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies, 2021, p. 62.


78 UK Information Commissioner’s Office, Encryption, 2022. See here.


79 ENISA, Solving the Cryptography Riddle: Post-quantum Computing & Crypto-assets Blockchain Puzzles, 2021.


80 Levy, I. and Robinson, C., Principles for a More Informed Exceptional Access Debate, 2018.


81 Internet Society (ISOC), Ghost Protocol Fact Sheet, 2020.


82 Ibid.


83 CRIN and ddm interview with Privacy International, 26 September 2022.


84 CRIN and ddm interview with Richard Wingfield, 6 September 2022.


85 CRIN and ddm interview with the Centre for Democracy and Technology (Europe Office), 13 October 2022.


86 CRIN and ddm interview with Richard Wingfield, 6 September 2022.


87 For example, the Israeli NSO Group’s Pegasus spyware, which was implicated in the murder of Saudi journalist Jamal Khashoggi.


88 Oloyede, R. and Robinson, S., Surveillance laws are failing to protect privacy rights: What we found in six African countries, 26 October 2021, Institute of Development Studies. See here. Premium Times, Nigerian govt moves to control media, allocates N4.8bn to monitor WhatsApp, phone calls, 12 July 2021. See here.


89 Marczak, B. et al., Hide and Seek: Tracking NSO Group’s Pegasus Spyware to Operations in 45 Countries, 2018, Citizen Lab Research Report No. 113, University of Toronto. See here.


90 UK Government, National Cyber Force Transforms country’s cyber capabilities to protect UK, 19 November 2020. See here.


91 Washington Post, Former Twitter employees charged with spying for Saudi Arabia by digging into the accounts of kingdom critics, 6 November 2019. See here.


92 Leaked EU Commission working document: Technical solutions to detect child sexual abuse in end-to-end encrypted communications, 2020.


93 Kerr, O. S. and Schneier, B., Encryption Workarounds, 2017, 106 Georgetown Law Journal 989 (2018). See here.


94 See here.


95 CRIN and ddm interview with Richard Wingfield, 6 September 2022.


96 Ars Technica, WhatsApp “end-to-end encrypted” messages aren’t that private after all, 8 September 2021. See here.