AI-driven chatbots are exposing Australian children to interactive content on topics ranging from sex and drug-taking to self-harm, suicide and serious illnesses such as eating disorders.
In its first Online Safety Advisory, eSafety has warned that children are spending hours each day with the chatbots, which expose them to unmoderated conversations that could encourage or reinforce harmful thoughts and behaviours.
eSafety Commissioner Julie Inman Grant said most AI-driven chatbots were not designed with safety in mind, and that children were not developmentally ready for the risks.
“AI companions can share harmful content, distort reality and give advice that is dangerous,” Ms Inman Grant said.
“In addition, they are often designed to encourage ongoing interaction, which can feel ‘addictive’ and lead to overuse and even dependency.”
She said it was time for big tech to move on from the era of “moving fast and breaking things”, especially when it came to children.
“The industry must embrace Safety by Design as an immediate priority to anticipate risks and ensure products are safe from the outset.
“In the meantime, eSafety offers information, support and advice through eSafety.gov.au, including our new Online Safety Advisories,” she said.
Ms Inman Grant said the Online Safety Advisories would provide fast and expert-driven insights into emerging online risks.
She said they would offer clear and practical support to help Australians, especially parents, carers, educators and policymakers, with the challenges of digital wellbeing.
“We need a holistic approach to online safety, one that doesn’t just rely on parents to monitor every digital interaction.
“The companies profiting from these technologies must do more to build safety into their platforms from the start, rather than applying fixes after harm has occurred,” Ms Inman Grant said.