Several social media platforms have been served legal notices over their child safety practices

Australia issues legal notices to Twitter, TikTok, Google, Discord, and Twitch over child safety practices, seeking information to combat online child sexual abuse.

Australia’s eSafety Commissioner has sent legal notices to Twitter, TikTok, Google, Discord, and Twitch, requiring them to provide information about their policies to tackle online child sexual abuse. It comes after an earlier round of notices to companies including Apple, Meta, and Microsoft found many were failing to take even simple steps to protect children.
Under online safety laws passed in 2021, the eSafety Commissioner can issue legally binding notices to online platforms at any time to assess whether they meet Australia’s Basic Online Safety Expectations. The expectations include being proactive about blocking unlawful material (e.g. child abuse material), protecting children from age-inappropriate content, and ensuring anonymous or encrypted services are not used to do harm. Platforms face heavy fines if they do not comply with these legal notices.
The presence of child sexual exploitation material online is a global problem. Tech companies report tens of millions of instances a year, but eSafety Commissioner Julie Inman Grant says this is likely to be “the tip of a very large iceberg”. In Australia, there have been 76,000 investigations relating to child sexual exploitation material since 2017. It is illegal to create, disseminate or view online child sexual abuse material. Inman Grant says it also “inflicts incalculable trauma and ruins lives”.

The first notices:

The first batch of legal notices was issued last August to Apple, Meta, WhatsApp, Microsoft, Skype, Omegle, and Snapchat. The eSafety Commissioner found there was “no common baseline” in steps being taken to protect users. Many companies were not using ‘digital fingerprinting’ techniques which can track down all copies of an image or video with extreme accuracy. “Most” providers were not taking specific steps to identify grooming or potential abuse in video calls or streams, and some were making minimal effort to track repeat offenders. There were also instances of companies taking weeks to respond to user reports of child exploitation.

The new notices:

In a statement today announcing the second round of notices, Inman Grant indicated particular concern about Twitter in the wake of recent job cuts. “Twitter boss Elon Musk tweeted that addressing child exploitation was ‘Priority #1’, but we have not seen detail on how Twitter is delivering on that commitment,” Inman Grant said. “We’ve also seen extensive job cuts to key trust and safety personnel across the company – the very people whose job it is to protect children – and we want to know how Twitter will tackle this problem going forward.”

For information on how to report online harm, head to esafety.gov.au/report.
