“The people who get to impose their metaphors on the culture get to define what we consider to be true.”
— George Lakoff and Mark Johnson, Metaphors We Live By
Are policies that deny children access to social media just "restrictions" or outright bans? And is letting children on social media more like sending them to Mars or to a nightclub?
Language and metaphors have immense power in policymaking - they shape how we understand and respond to issues in our societies.
One of the biggest - if not the biggest - current issues in children’s rights and the digital environment is children’s access to social media.
Countries across the world
are rapidly clutching at what they see as a fast-track panacea: social media bans to keep children safe and protect them from a
damaging business model that prioritises
profit over users’ wellbeing. While CRIN
previously argued that “to ban or not to ban” is the wrong
debate from a children’s rights perspective, how then have bans captured people’s imagination so quickly and effectively?
To begin to understand this we have to look more closely at the language and metaphors commonly used in this discussion. What do they reveal about the issue and our politics around it?
What do they obscure? And what should we do once we become aware of their power?
What’s in a word? Delays, restrictions, bans
When bans are reframed to seem less harsh
Proponents of social media bans for children use a variety of words to describe these new policies, which seem to skirt around the word “ban” and the nature of these policies as plain prohibitions.
Gentler (re)framings like “delays” or “restrictions” are common; however, they obscure the fundamental flaw in these policies: that they regulate the users instead of the platforms’ business model.
Australia, the first country to pass a law excluding under-16s from some social media, is a telling example. Some policymakers, including the
eSafety Commissioner,
are pushing the framing of the exclusion as a “
delay” instead of a “ban”. The argument is that delaying access gives children “breathing space to build digital literacy, critical reasoning,
impulse control and greater resilience”. The idea, according to eSafety, is that “parents and carers don’t have to say ‘yes’ or ‘no’ to social media accounts, instead they can say ‘not yet’”.
Delaying seems gentler than banning and suggests an approach apparently rooted in age-appropriateness and children’s evolving capacities. The softer wording gives the impression that social media bans aren’t that drastic. However, as we
have argued before, there are serious concerns that the Australian measure infringes children’s rights.
The attempt to reframe the issue as a delay sidesteps a serious discussion on how children’s rights are - or are not - being respected now, instead of later.
Another choice by eSafety is to frame the ban as an “age restriction”, again seeming to suggest that the measures are not that severe. The current
UK national consultation sometimes uses “age restrictions” or “minimum age restrictions” in the context of access to whole services, similar to Australia.
But other times the phrases are used in the context of specific risky or addictive features and functionalities like personalisation or infinite scrolling. There is a hint towards the problematic attention-monetising business model of social media platforms - but no real questioning of the model itself.
The focus remains on regulating the users through blanket age-based exclusions instead of the model.
Then there is the language of a minimum age to access social media, used, for example, in the UK national consultation.
Minimum ages prohibit children under a certain age from activities like working, voting, getting married or making decisions about their own healthcare.
Minimum ages pursue two aims: first, to act as a benchmark for when children have capacity to make certain decisions; and second, to protect them from harm they are vulnerable to because of their age.
When it comes to social media, setting minimum ages is framed as a measure to protect children under a certain age from accessing platforms on which they presumably lack the capacity to make informed choices as users or where they face harm that cannot be prevented otherwise.
Again, it’s a move to regulate children’s access instead of the business model.
Another term commonly used, including to criticise bans, is “age-gating”.
It suggests placing social media behind a “gate” and that some age verification is required by a “gatekeeper”. The measure can be understood as protective - similar to denying children access to adult-only venues, which are unsuitable for them.
The focus is more on the responsibility of platforms, though not to reform their business model, but to deny access to those who cannot prove they are over a certain age.
Calling bans by their name
With all this in mind, reframing the debate as a delay or a restriction doesn’t change the fact that what’s being discussed is unequivocally a ban. From a children’s rights perspective, the word’s harsh connotations capture the blanket and blunt nature of the measures, which close the door on any ifs or buts.
“Ban” as a word is also useful to emphasise the “one-size-fits-all” approach currently being taken. And it helps convey that it’s not the core business model of the platforms that’s being changed, but rather that this unhealthy model is being allowed to remain.
We argued in our previous commentary that excluding a whole swathe of users from social media based purely on their age very likely infringes their rights.
This is because social media presents both benefits and risks to children’s rights, so a ban does not appear necessary and proportionate under human rights law - that is, it is not a way of protecting children online that is both effective and the least restrictive of their rights.
Certainly, regulating the damaging business model first would help address the concern and be less limiting of children’s rights. The punitive undertones of the word “ban” reveal the very real possibility that these measures also amount to age discrimination against children.
What the word “ban” does even better is reveal something deeper about the kind of thinking that lies at the core of these measures. We previously
called this “ban-solutionism” - the belief that a prohibition amounts to a solution, without any serious attempt to define the problem first.
And it's also important to situate social media bans in the long and unfortunate tradition of paternalistic prohibitionism against children that continues to justify
youth curfews or
prohibiting children from working.
What we’re banning vs who we’re banning
What’s being banned exactly: social media or children? There are those who argue that some social media is being banned for some children, but children are not being banned from social media. In Australia for example, it’s some platforms that are being regulated,
so the argument goes,
because the duty is on them to manage the age-based exclusions, and it’s these platforms that face sanctions, not children, for not complying with their responsibilities.
However, from a children’s rights perspective, this attempt to take a soft view of the consequences of the ban for children is a distinction without a difference. It’s sophistry to argue that platforms are being regulated if their harmful business model is left intact while some users are denied access. In practice, it certainly feels to
many children that they are being regulated. Indeed, children are being excluded or obliged to disconnect from online spaces and communities they previously belonged to or could have belonged to.
So it’s very fair to think both that social media is being banned for children and that children are being banned from it.
Some propose a slightly different framing that emphasises the damaging business model,
saying that “children should not be banned from accessing the digital world, but companies that exploit them should be banned from accessing them”.
All that said, while we can agree or disagree on the nature of the action being taken - delay, restrict, ban - and can discuss what or who is actually being banned, this is not the only framing that shapes people’s thinking in the debate.
A much more insidious influence comes from the analogies people make (intentionally or absent-mindedly) around what social media is like for children.
(Ab)Using metaphors: From Mars and nightclubs to town squares and cliff edges
Mars
Credited with having sparked the conversation around banning social media for under-16s, the book The Anxious Generation starts by describing an imagined situation where a “visionary billionaire” recruits children to start the first human settlement on Mars.
The scenario warns that radiation would place the children at increased risk of cancer due to cellular damage, that low gravity would raise the risk of organ deformities, and that those children may never be able to return to Earth.
The planners don’t seem to have done any research on child safety. Would any parent allow their child to go? “Of course not,” the author answers his own rhetorical question.
While using such an extravagant analogy to bring home the concerns around social media may seem over the top, global political trends sometimes start in unexpected ways. As the author of the book
told one interviewer, the wife of the politician who helped design Australia’s social media ban,
the world’s first, was reading the book in bed “and she turns to him and says: ‘You’ve got to read this book, and then you’ve got to [expletive] do something about it.’”
In a similarly alarmed vein, in the UK, a prominent political party
called for cigarette-style health warnings on social media, and parent forum Mumsnet
launched a campaign in support of a social media ban for under-16s by featuring this same style of health warnings on billboards.
Seen through the public health lens, then, one researcher asks:
“Why not ban everything potentially dangerous for kids?” The safety-first concern hasn’t led to banning children from riding in cars, going to playgrounds or swimming in pools;
the problem has instead been approached with a mindset of increasing safety through design and education, where “parents have the primary responsibility of keeping their children safe”. These analogies suggest that we don’t ban children from riding in vehicles - vehicles come with seat belts and children under a certain age sit in specially designed seats.
We don’t exclude children from pools - we teach them to swim. We don’t prohibit them from playing risky sports - we provide them with helmets, knee and elbow pads, mouthguards and gloves.
Nightclubs or town squares?
Other metaphors around social media aren’t as harsh as blasting children off to Mars, though they’re still bizarre. The UK Conservative Party leader resolutely
declared: “We don’t ask nightclubs to serve orange squash to kids so they can have something to drink. We say no kids in nightclubs. Social media is exactly the same.”
Metaphors go both ways, however. In the quaint old days of Twitter, its chief executive had
framed it as “the global town square”. One of its founders had
said people used it as a “digital public square”. And Elon Musk
claimed to have acquired it precisely because “it is important for the future of civilisation to have a common digital town square”.
Critics like Ezra Klein had
foretold that this metaphor was dangerous. It implied at least three things.
First, that there was just one global town square, when in fact communities need many town squares. Second, that a billionaire could own the town square and subject it to their whims, when in fact town squares are public spaces, meant to be governed by the public.
(Little wonder then that platforms have often been compared to “walled gardens” with private gatekeepers instead.) And third, that the mere existence of the town square was enough, when what actually matters is what happens when people gather.
As Klein put it: “Town squares can host debates. They can host craft fairs. They can host brawls. They can host lynchings.”
But the metaphor of a town square does point to something we are starting to forget. Although soon overtaken by profit-maximising concerns, there was once an aspiration that social media would foster healthy connection instead of destructive exploitation.
The digital experience would be something normal, common to everyone, including children, and the online environment would be a “space” of possibility - to gather, meet, communicate with others.
Ultimately, though, forcing analogies onto social media - social media as a square or a park - exposes how frustratingly little imagination there is around any solutions other than bans.
The dangers of relying on metaphors uncritically
It’s normal to use metaphors to try to understand social media, or indeed anything else. After all, “metaphor” (from the Greek for “to carry beyond”) is a mode of transferring meaning. And in the case of the digital environment - which is complex, technical and ever-evolving - metaphors can help us to make better sense of it.
However, we need to be acutely aware of what else is being transferred alongside meaning in order to avoid knee-jerk reactions. The metaphors we use in policymaking not only shape how we make sense of the problem, but also how we respond to it. For example,
research by Defend Digital Me into the most common metaphors around data policy reveals three patterns.
When data is seen as something liquid or fluid (data can “flow” or “leak”), the tendency is to fear it, contain it and control it. When data is seen as a resource (data is “the new oil”, data can be “mined”), the attitude is to prize it, extract it and consume it.
And when data is associated with the body (data leaves a “trace” like a footprint, data is your “digital double”), the individual is seen as the one responsible for hiding and protecting their data.
Similarly, the metaphors used in the social media bans debate reflect its inherent tensions. It’s only natural that those more focused on protection and public health would choose analogies with smoking and drinking alcohol, while those more concerned about civil liberties,
agency and building children’s skills would emphasise free action and open physical spaces.
But protection versus freedom is a false dilemma. As we have
argued before, it fails to account for the interdependence between all children’s rights - the idea that realising one right depends on realising the others.
Within policy debates, however, virtually all metaphors are clumsy in some way. It’s alright to use them, but only if we are aware of their limitations - and their power.
Relying on metaphors uncritically oversimplifies the complexity of children’s lives - whether the issue is social media, data policy or something else - and this is how those who wield metaphors sway people’s views and push them into very black-and-white thinking.
Metaphors can give the different participants in a dialogue a unifying but undefined signifier - a sort of placeholder, left open so that diverse voices can pour their aspirations into a common understanding, even a common regulation.
But this very openness limits the understanding of complex technical, social and political subjects.
If everyone remains stuck in their own framing, we fail to see others’ perspectives and simply talk past each other. And as cognitive linguist George Lakoff and philosopher Mark Johnson emphasised as
early as 1980, “the people who get to impose their metaphors on the culture get to define what we consider to be true.”
One metaphor to rule them all
There is one metaphor, though, which we could support. As children’s rights and online safety organisations, as well as experts and bereaved families
have warned, ban-solutionism takes children to… the edge of a cliff.
When they are deemed old enough to join social media - happy birthday! - the gift is to push them off the edge and into a pretty toxic environment: an engagement-maximising, addiction-inducing, profit-over-everything-else business model.
Alas… it could have been much healthier if only someone had wanted to do the hard work of regulating it properly.
OK, assuming we are careful with language and metaphors, then what?
Language matters. The particular words we choose to describe a reality unveil our understanding and influence our thinking. There are subtle, but important, differences between saying that there’s a delay or a restriction or a ban on social media for children.
Metaphors matter even more. Perceived as truths in short form, they can reveal important aspects of an issue, but they also close our mind to its nuances and take us further away from an effective solution.
Social media isn’t Mars, a nightclub or a town square, so we cannot reduce the complexity around social media and children’s rights to a catchy shorthand.
Memorable analogies suggest quick fixes, but no single policy response can protect children online or secure their rights, however rooted in good intent it may be.
We should sit with this difficult process: start with the societal problem, examine a wide range of options, and see which combinations are likely to help us achieve our goal.
The first step is to draw the contours of and agree on what it is we are speaking about. Going back to the basics, what is “social media”? And depending on how we define it, does a ban make sense?
This is what we’ll try to understand in our next debate.
This piece is co-published with Defend Digital Me.