Surveillance and digital rights: Tate Exchange 2020


In March 2020, CRIN joined the Digital Maker Collective at Tate Modern’s Tate Exchange to explore the intersection between art schools, technology and social good. CRIN previously took part in Tate Exchange last year with Beta Utopia, which pictured what a rights-respecting future might look like. For this year’s event, we developed a series of workshops and talks introducing the general public to children’s rights and the digital environment, including ‘Welcome to Happyland’, an installation exploring surveillance; a panel on the ethics of children’s data; and a panel on surveillance, facial recognition and its dangers.


Starting in January, Maria (CRIN’s Design and Outreach Officer) spent three months co-organising meetups between CRIN, industry partners and UAL students and alumni at the DMC’s HQ at Camberwell College of Arts. There, participants discussed both the dangers and the endless creative possibilities of facial recognition and other forms of creative surveillance, such as deepfakes. Our aim was to explore the intersection of art and activism and to find creative, alternative ways to introduce these technologies to a general audience: first explaining the technical ideas behind them, then raising awareness of their possible dangers, and finally exploring their creative possibilities.

* * * * *

Welcome to Happyland

“Welcome to Happyland: Get your passports ready” is an immersive installation which aims to introduce the general public to surveillance, facial and body recognition, and children’s rights. The installation was split into a series of activities, each exploring one topic, with each step acting as a ‘security measure’ that audiences had to complete in order to receive a passport visa to Happyland.


1. The Emotion Recognition Tool

We introduced facial recognition to the audience by having the user sit in front of a webcam and wait until the machine scanned their emotion, which it classified as either ‘Happy’ or ‘No Emotion detected’. If they were classified as Happy, they would get to pass to the next security gate. If not, they were labeled as having ‘No Emotion’ and thus denied entry until they were classified as Happy.

The idea behind this interactive artwork is to explore how human emotion is used to determine one’s humanity, or lack thereof, and to expose the bias that can be built into an artificial intelligence machine. By building a machine which classifies people showing any emotion other than happiness as ‘non-human’, it shows how easily a technologist can shape an extremely important message and outcome by curating the database and the machine’s classification features. What if the machine classified skin colour instead of emotions? What if the technologist were biased against specific races? What bias would result, and how would it affect the outcome?
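The exhibit ran on a real emotion-recognition model, but the point about curated databases can be sketched with a toy nearest-centroid classifier. Every feature name and number below is invented for illustration; the takeaway is that whoever curates the training examples decides who gets labelled ‘Happy’.

```python
# Toy sketch: a nearest-centroid "emotion gate". The verdict is entirely
# determined by the curated training data - change the examples, change
# who passes.
def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, training):
    """Return the label whose class centroid is closest to the sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(vs) for label, vs in training.items()}
    return min(centroids, key=lambda label: sq_dist(sample, centroids[label]))

# Hypothetical features: [mouth_curve, eye_openness], both in [0, 1].
training = {
    "Happy": [[0.9, 0.7], [0.8, 0.6]],
    "No Emotion": [[0.1, 0.5], [0.2, 0.4]],
}

visitor = [0.85, 0.65]
print(classify(visitor, training))  # -> "Happy": this visitor passes the gate
```

Swapping in a differently curated `training` dictionary is all it would take to deny the same visitor entry, which is exactly the curation risk the installation dramatises.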


2. Real Cat or Cat Drawing: Object Recognition

Once visitors had passed the emotion recognition stage, we introduced them to machine learning and object recognition through an activity where participants either showed an image of a cat or drew their own, and had Google’s Teachable Machine try to differentiate a real cat from a drawing. Some people managed to fool the machine by drawing realistic cats using the same colour palette found in the “real cat” database. Google’s Teachable Machine is a free and accessible online tool that lets users create machine learning models with no coding required, introducing everyday audiences to databases and machine learning in a fun and effective way.

Inspired by the pix2pix online tool, which generates cat images from line drawings, one of the DMC team members, Blossom, reached out to a primary school whose class had used pix2pix to generate AI cats. We then used this tool to create a database of ‘real cats’ and ‘drawings of cats’, and asked each audience member to show an image of a cat for the machine to classify. This was a play on the ‘Captcha’ security measure found on websites, where users are asked to pick out specific images, such as lamp posts, from a pool of images in order to prove that they are not a robot. In the Real Cat or Drawing activity, however, the roles are reversed: the machine has to classify the images correctly in order to recognize the human.
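As a rough illustration of why matching the colour palette worked, here is a hypothetical classifier that relies only on average colour. This is nothing like Teachable Machine’s actual model, but it is enough to show the failure mode: a drawing made with “real cat” colours lands closer to the real-cat statistics than to the drawing statistics.

```python
# Toy sketch: classify "real cat" vs "drawing" using only the image's
# average colour - a crude stand-in for the palette statistics a small
# image model can latch onto.
def palette_feature(pixels):
    """Average (r, g, b) over a list of pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def classify(pixels, real_palette, drawing_palette):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    f = palette_feature(pixels)
    if sq_dist(f, real_palette) < sq_dist(f, drawing_palette):
        return "real cat"
    return "drawing"

# Made-up reference palettes: tabby browns vs. bright felt-tip colours.
REAL = (140.0, 110.0, 80.0)
DRAWING = (230.0, 80.0, 200.0)

felt_tip_cat = [(255, 0, 255), (200, 150, 150)]
realistic_drawing = [(141, 111, 79), (139, 109, 81)]  # drawn in tabby browns

print(classify(felt_tip_cat, REAL, DRAWING))       # -> "drawing"
print(classify(realistic_drawing, REAL, DRAWING))  # fooled: -> "real cat"
```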


3. The Body Recognition game

We introduced the public to body recognition through an interactive game in which participants had to choose three body stances and recreate the positions using PoseNet, an open-source pose estimation tool.

We wanted to introduce this fun piece of technology through a simple game of mimicry. Each body is shown as a basic skeleton on the webpage, and we created six specific poses, of which each audience member had to recreate three in order to pass the final security measure. As audience members tried the various poses, some harder than others, they came to understand how the machine recognized and redrew their digital skeleton in real time, and how the machine can be fooled into detecting specific positions.
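The pass/fail check behind such a game can be sketched as a keypoint-by-keypoint comparison. The sketch below assumes PoseNet-style keypoints normalised to the frame; the joint names and tolerance are illustrative, not taken from our actual build.

```python
# Minimal sketch: does a detected skeleton match a target pose?
# Each pose maps joint names to (x, y) coordinates normalised to [0, 1].
def pose_matches(detected, target, tolerance=0.1):
    """True if every target joint is detected within `tolerance` of its goal."""
    for joint, (tx, ty) in target.items():
        if joint not in detected:
            return False  # the model lost track of this joint
        dx, dy = detected[joint]
        if abs(dx - tx) > tolerance or abs(dy - ty) > tolerance:
            return False
    return True

# Hypothetical "arms out" pose and a player's detected keypoints.
target_pose = {"left_wrist": (0.2, 0.3), "right_wrist": (0.8, 0.3)}
player = {"left_wrist": (0.22, 0.28), "right_wrist": (0.79, 0.33)}

print(pose_matches(player, target_pose))  # True: close enough to pass
```

Loosening or tightening `tolerance` is also where the “fooling the machine” observation comes in: a generous tolerance accepts poses that only roughly resemble the target.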


4. Deepfake Baby!

Once audience members had gone through each step, they received a stamped visa in their passport granting them access to Happyland, where they met an AI-generated citizen of Happyland: a baby.

The baby, whose face is AI-generated, was loaded into an app called MugLife, where participants could manipulate her facial features to make her do things such as lick her lips or close her eyes. This crucial step confronted users with a possible, realistic and sometimes horrific outcome of unregulated data collection (specifically facial and body recognition), one which could also affect them or their own children.

Each of the preceding steps introduced audience members to data collection and privacy issues: their face and body were recognized and retained by the machine. Once collected, such data can be manipulated using deep learning to generate new, fictional outcomes. One example is a deepfake: a synthetic yet realistic piece of media generated through deep learning by merging databases. Well-known cases include politicians whose lips have been AI-generated and voices recreated so that they appear to say things they never said in real life.


Human or Non-Human: A Storytelling Workshop

We organised ‘Human or Non-Human: A Storytelling Workshop’, an interactive exploration of artificial intelligence (AI) and human emotion through gameplay and storytelling.

In this workshop, we downloaded AI-generated faces of all ages, races and genders from thispersondoesnotexist.com, and animated them to move in an uncanny fashion using the deepfake app MugLife, without telling the audience members.

Each person in the audience wore a lanyard with an image of a random person and had to give them a name, an age and a profession. The workshop facilitators, Farrukh and Sherry, then outlined a basic premise: a human-looking robot is causing mayhem in a hospital. Together, the audience members had to guess who amongst them was the robot.

This workshop pushed participants to explore their inner biases when confronted with the non-human, or with what they perceive to be ‘artificial intelligence’. With no clues as to who the robot was, participants ended up drawing on their own personal biases to make uneducated guesses; as the workshop advanced, they defended their positions and came to understand what they perceive and consider as human, and what they consider artificial.

Panels


1. Ethics of Children’s data

Panelists: DefendDigitalMe, Open Data Institute, Young Coders Meet-Up & CRIN. 

On Saturday 6th March, Jen Persson from DefendDigitalMe, Renata Samson from the Open Data Institute, Leo Ratledge from CRIN and under-18s from the Young Coders group took part in a panel on the ethics of children’s data. The panel explored how children in the digital age face a high risk of having their personal information exploited by the State and by commercial agencies: they are less likely than adults to be aware of their legal right to privacy, or of the many ways their data is collected, used or misused in their daily lives. The panelists discussed new and emerging threats to children in the digital age, how we can ensure that the data collected is handled more ethically, and how to equip children with the tools to defend their human rights.


2. Surveillance and facial recognition: what are the dangers?

Panelists: Privacy International, Big Brother Watch, Digital Maker Collective & CRIN. 

On Saturday 6th March, we invited Griff Ferris, Legal and Policy Officer at Big Brother Watch, and Ioannis Kouvakas, Legal and Policy Officer at Privacy International, to talk about surveillance, specifically facial recognition, and how it can be abused by those in power against citizens.

From monitoring people on the streets of London to building private surveillance networks, the growing use of surveillance poses serious threats to our fundamental rights. What are those threats and which tools should we be careful of?

In addition to the panel, Privacy International delivered two workshops on “Surveillance Incorporated”, where the audience learned tactics to exercise their human rights and protect their data at the “Rights Clinic”. The public could also get involved in a “Choose Your Own Adventure” workshop exploring a dystopian story set in a 2025 London where facial recognition cameras, operated by both police and private companies, have become a reality.

What’s next?

The event was a success, and CRIN will be focusing more on children’s digital rights as part of our wider work. If you wish to read more on the topic, read our article on surveillance in times of coronavirus and how the pandemic has affected children’s right to privacy. To find out more about the event, visit the Uni To Unicorns page here.


Credits:

  • Welcome to HappyLand credits:

    Direction / Project Manager / Tech lead: Maria Than 
    Graphics: Mathilde Rougier & Maria Than
    Set Design & Concepts: Mathilde Rougier, Georgia Hughes, Blossom Carrasco, Maria Than, Tera Cho, Farrukh Akbar, Philip Spooner 
    Invigilators/Facilitators: all of the above except Philip Spooner, plus Salam Shamki, Rebecca Thomas, Diana Gheorghiu & Jo Collier
    Illustration: Miriam Sugranyes

  • Human / Non-Human workshop:

    Lead: Farrukh Akbar & Sherry Wei (DMC)
    Concept: Sherry Wei, Farrukh Akbar, Blossom Carrasco & Maria Than
    Design: Maria Than, Farrukh Akbar, Salam Shamki
    Workshop leaders: Sherry Wei, Farrukh Akbar, Victor Sande-Aneiros  

  • Interactive facial & motion detection: 

    Lead and Concept: Youngjun Chang
    Design: Tera Cho

  • Ethical Use of Children’s Data:

    Organizers: Maria Than, Lianne Minasian, Leo Ratledge, Diana Gheorghiu
    Contributors: Jen Persson (DefendDigitalMe), Renata Samson (Open Data Institute), Young Coders

  • Surveillance & Facial Recognition:

    Organizers: Maria Than, Lianne Minasian, Leo Ratledge, Diana Gheorghiu
    Contributors: Caitlin Bishop, Ioannis Kouvakas (Privacy International), Geoff White (journalist) & Griff Ferris (Big Brother Watch)