Australia’s eSafety Commissioner has flagged an almost $50 million penalty for a UK-based company responsible for AI-generated “nudify” services used to create deepfake pornography of local school children.
Commissioner Julie Inman Grant said the technology company, which was not named to avoid promoting it and its services, operated two of the world’s most-visited online AI-generated nude image websites, which allowed users to upload photos of real people, including children.
Ms Inman Grant said a formal warning had been issued to the company for enabling the creation of child sexual exploitation material, in breach of an industry standard under the Online Safety Act.
She said the formal warning was the first step in the enforcement process.
“Further action will be considered should the company continue to fail to comply with Australian online safety standards.
“We will not hesitate to use the full extent of our powers, including seeking civil penalties of up to $49.5 million, if non-compliance continues.”
Ms Inman Grant said the company’s two services had been attracting about 100,000 Australian visitors per month and had been identified by eSafety as being used to generate explicit deepfake images of students in Australian schools.
“Following reports to eSafety, we found both online services were used nefariously by Australian school children to create deepfake image-based abuse of their peers,” she said.
“These services failed to provide appropriate safeguards to prevent the creation of child sexual exploitation material and therefore pose a serious risk to our children.
“Shockingly, we found these services did little to deter the generation of synthetic child sexual abuse material by marketing alarming features such as undressing ‘any girl,’ with options for ‘schoolgirl’ and ‘sex mode’ to name a few.
“And while these platforms can be accessed for free, the cost to the children targeted is incredibly high, if not incalculable.”