It took 13-year-old Isobel less than five minutes to outsmart Australia’s “world-leading” social media ban for children.
A notification from Snapchat, one of the ten platforms affected, had lit up her screen, warning she’d be booted off when the law kicked in this week – if she couldn’t prove she was at least 16.
“I got a photo of my mum, and I stuck it in front of the camera and it just let me through. It said thanks for verifying your age,” Isobel claims. “I’ve heard someone used Beyoncé’s face,” she adds.
“I texted her,” she gestures to her mum Mel, “and I was like, ‘Hey Mummy, I got past the social media ban’ and she was just like, ‘Oh, you monkey’.”
It made her laugh, Mel explains: “This is exactly what I thought was going to happen.”
Though she had let Isobel use TikTok and Snapchat with tight supervision – preferring that to the teenager using them covertly – she had been hoping the ban, as promised, would help parents like her protect their children from the dangers of the online world.
That hope has now wavered, as a series of experts – and kids themselves – sound the alarm on the viability and safety of the landmark policy, which is being closely watched around the globe and eyed with trepidation by some of the world’s most influential tech companies.
There’s concern about the reliability of the technology enforcing the ban, along with fears it could isolate vulnerable children and push others into darker, less-regulated corners of the web.
The question nervously being asked in the halls of Canberra, in households across the nation, and in tech boardrooms around the world: is this actually going to work?
‘Parents are worried sick about the safety of kids online’
You’d struggle to find anyone in Australia who feels social media companies are doing enough to shield users – particularly children – from harm on their platforms. Their protestations to the contrary fall on unsympathetic ears.
“We have zero faith the tech companies will do anything other than protect their profits,” Dany Elachi, a father of five and anti-smartphone campaigner, tells the BBC.
“They’ve had ample opportunity to show they take children’s wellbeing seriously and they’ve failed at every turn.”
Detailing how bullying – inescapable due to social media – had culminated in her 15-year-old daughter’s suicide, Emma Mason asked world leaders at the UN last month: “How many more Tillys must die?”
The pair were among the parents and public figures who lent their voices to a headline-grabbing national campaign calling for a new minimum age for social media.
Some experts, including Australia’s online safety tsar, warned banning children from social media was not the solution, but their concerns were swept away by a tide of parental passion and political pressure.
In November 2024, the prime minister announced the legislation, promising parents and children wouldn’t be penalised. It’d be up to the platforms to take “reasonable steps” to ensure account-holders were at least 16 years old, or face fines of up to A$49.5m (US$33m, £25m) for the most serious breaches.
“This one is for the mums and dads… They, like me, are worried sick about the safety of our kids online,” Anthony Albanese said.
The policy – limited versions of which have been attempted with little success by other jurisdictions around the world – would help free children from addictive algorithms exposing them to harmful content like violence, pornography and misinformation, proponents said. It would also reduce cyber-bullying and online child exploitation. It would force kids outside, help them sleep better and improve their physical and mental health, it was suggested.
Noticeably absent from Albanese’s announcement was a plan on exactly how the government was going to do this – it gave itself a year to work that out.
Within weeks, it had rushed a skeletal bill through parliament, after allowing less than 48 hours for the public to make submissions on the law.
How will it be enforced?
A year later, and days out from the official start of the law, questions remain.
A government-funded, industry-run trial looked at the main methods of age assurance and earlier this year reported all were technically possible – but none were foolproof and all carried risks.
Verification using IDs was the most accurate approach, but it requires users to hand over sensitive documents – and polling shows most Australians don’t trust social media firms with them.
Age inference, which draws conclusions based on users’ online activity, and facial assessment technology both lacked the precision to be reliably applied to teenagers.
For example, the accuracy of face scans – already rolled out by Meta and Snapchat for suspected underage users – falters for people two to three years either side of 16, the intended target.
Still, the report found age assurance technologies can be “private, robust and effective”, especially when layered.
“When you go to a bottle shop and they look you up and down and go, ‘Mmm not really sure’, they ask you for some ID… It’s the same principle,” says Tony Allen, who heads the UK-based Age Check Certification Scheme and ran the trial.
Its findings weren’t without controversy. Two former advisory board members levelled accusations of bias and “privacy-washing”. And though the trial considered ways teenagers might bypass barriers, it was not tasked with testing them.
Tips have flooded social media: everything from signing up with a parent’s email and moving to platforms not explicitly named on the government’s hit list, to using VPNs, which can disguise a user’s location. There was a temporary surge in VPN use in the UK after it introduced tougher age checks for pornography earlier this year, and experts are expecting the same here.
Polling conducted for the government in May indicated a third of parents intended to help their kids circumvent the ban, and an experiment by the University of Melbourne showed that a $22 Halloween mask was enough to defeat facial assessment technology in some cases.
Proponents of age assurance contend that the technology to thwart circumvention exists. A photo, like Isobel says she used, is not supposed to fool these checks.
The BBC asked Snapchat about this, and a spokesperson said the firm had consistently expressed concern about the “technical challenges” of enforcing the ban: “This is one such challenge.”
“It’s a constant running battle to ensure that the mitigations are improving, literally on a daily basis,” added Luc Delany, an executive for K-ID, which performs age assessments on Snapchat’s behalf.
Isobel, buoyed by her experience, says she’s pretty sure the ban won’t work.
“I’m not a screen addict… but I think Anthony Albanese’s idea of us touching grass is stupid,” she says, referring to the prime minister’s comment about getting kids outside.
“If I do end up getting banned, I’ll just find another app to use.”
That’s up for discussion, Mel clarifies. But she and a host of others fear the platforms and the regulator are set for a relentless game of “whack-a-mole” – identifying and shutting down loophole after loophole, and adding emerging platforms to the list only for kids to flock to the next one.
The social media firms also have a motive to subtly undermine the policy, lest other countries follow suit, analysts say, and the vague “reasonable steps” outlined by the government leave the door ajar.
“They’re going to try to drive a truck through [it],” says Stephen Scheeler, who led Facebook in Australia and New Zealand between 2013 and 2017.
“It’s like getting your kids to do something like load the dishwasher – they’ll do it, but they won’t do it well, and they won’t do it with a smile on their face.”
The fines offer little incentive to behave, he says. Facebook, for example, earns that amount globally in under two hours. “It’s a parking ticket.”
Then there are the inevitable legal challenges. Two teenagers have already filed a case in the nation’s highest court, alleging the law is unconstitutional and Orwellian. Alphabet, which owns YouTube and Google, is reportedly considering its own challenge. Human rights groups and a smattering of legal experts have raised objections too.
While the government has insisted the social media companies have the money and the technology to make the ban happen, it has at the same time sought to manage expectations.
“It’s going to look a bit untidy on the way through. Big reforms always do,” Communications Minister Anika Wells has said.
The key question, Mr Allen says, is not whether children can get around it – the answer is yes – but whether enough of them will be bothered to.
“To be a successful policy, it doesn’t have to get to a point where 100% of children aren’t on social media,” he says. “It only really has to get about 80% of them and the rest will follow.”
Some exasperated parents just want to be able to say it is illegal. They don’t want their kids to feel entitled – or pressured – to access social media.
“We’ve always said, whether the law was enforceable or not, our main aim in all of this was to establish a new social norm,” Mr Elachi says.
Will it reduce harm?
Putting aside the question of whether it can be done, many are still asking: should it?
First, there is concern that this policy pushes children into darker parts of the web.
Will it be gaming site chatrooms, which the Australian Federal Police have warned are hotbeds for radicalisation but are excluded from the ban?
Will it be sites like Omegle, which previous generations turned to when told they were too young for mainstream social media? It allowed users to video chat with randomly selected strangers and was shut down two years ago over its failure to protect minors from predators. Copycats quickly replaced it.
Children can also still browse several of the apps, like TikTok and YouTube, without accounts – a potentially riskier minefield of unfiltered content and advertisements, which several platforms currently limit on minors’ accounts. “This law will not fulfil its promise to make kids safer online, and will, in fact, make Australian kids less safe on YouTube,” a spokesperson for the company said this week.
There’s plenty of criticism of big tech over moderation, but few dispute that these large platforms do it better than their smaller peers. Facebook, for example, has systems which set off alarm bells if an adult is messaging a child frequently.
“You’re not stopping behaviour, you’re just moving that behaviour to other platforms,” says Tim Levy, head of online safety company Qoria and one of the trial advisers who quit. “Telling the concerned Australian parents that it’s all good now is a very dangerous message.”
The science on social media and health is complex and still evolving too. Alongside the studies linking it to poor outcomes, there is also evidence it can be a lifeline for some children, particularly those from LGBTQ+, neurodivergent or rural communities.
“We’ve heard very little officially about what is being done to address the unmet needs of these more vulnerable children who for positive reasons have sought help or a sense of belonging and connection online,” former children’s commissioner Anne Hollonds tells the BBC.
Her term finished just a few weeks ago, but Ms Hollonds has spent years lobbying the government for greater online guard-rails for kids and was surprised, frustrated even, to learn this “blunt” tool was the one it chose.
What could be achieved if this regulatory effort and attention were used to pull other levers, she wonders.
Many have suggested the focus should be on forcing the platforms to better police harmful content and limit the power of algorithms, while also preparing children for the reality of life on the web.
More than 140 leading Australian and international experts signed an open letter raising these concerns, among others, before the legislation passed.
“There’s nothing magical about the age of 16,” Ms Hollonds says. “[This] really does nothing on its own.”
Before her agency was officially charged with implementing this policy, eSafety Commissioner Julie Inman-Grant made similar arguments.
“We do not fence the ocean or keep children entirely out of the water, but we do create protected swimming environments that provide safeguards and teach important lessons from a young age,” she said in June last year.
To that, Minister Wells counters: “We can police the sharks” – a biting reference to social media firms.
Many of the critics are right about the challenges ahead, she told the BBC. But this is just a starting point – a digital duty of care, a legal obligation that firms prevent foreseeable harm to their users, is next on her list.
“For every person that says to me, ‘Why haven’t you included these big elements?’, someone else is saying to me, ‘It is impossible for you to do what you’ve even set out to do now’.”
“This isn’t a cure. It’s a treatment plan, and treatment plans will always evolve.”
“This is, at the end of the day, work to try and save a generation. It’s worth doing.”
