Noise shield to protect content from AI learning models

A new tool can protect content from unauthorised AI learning. | Photo: Phonlamai Photo (iStock)

Australian researchers are developing a “noise” shield which can prevent unauthorised artificial intelligence (AI) systems from learning from photos, artwork and other image-based content.

CSIRO scientist Derui Wang said the technique offered a new level of certainty for anyone uploading content online.

Dr Wang said the tool, developed in partnership with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, subtly altered content, adding cyber “noise” which made it unreadable to AI models while appearing unchanged to the human eye.
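As a rough illustration of the general idea of an imperceptible, bounded perturbation, consider the toy sketch below. This is not the CSIRO technique (which provides a certified bound on what a model can learn, as described in the study); it only shows how a small, magnitude-capped change can be applied to an image so that pixel values barely shift. The function name, the epsilon bound and the use of uniform noise are all illustrative assumptions.

```python
import numpy as np

def add_protective_noise(image: np.ndarray, epsilon: float = 8.0,
                         seed: int = 0) -> np.ndarray:
    """Toy sketch only: add a small, bounded perturbation to an 8-bit image.

    Not the CSIRO method -- it merely illustrates a perturbation whose
    magnitude is capped (here at +/- epsilon intensity levels out of 255),
    keeping the change imperceptible to a human viewer.
    """
    rng = np.random.default_rng(seed)
    # Uniform noise bounded by epsilon, clipped back to the valid pixel range.
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    protected = np.clip(image.astype(np.float64) + noise, 0, 255)
    return protected.astype(np.uint8)

# A synthetic 64x64 grayscale "photo": every pixel moves by at most epsilon.
img = np.full((64, 64), 128, dtype=np.uint8)
out = add_protective_noise(img)
```

The real system goes much further than this sketch: rather than random noise, it optimises the perturbation so that learning from the protected content is provably limited.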

He said the breakthrough could help artists, organisations and social media users protect their work and personal data from being used to train AI systems or create deepfakes.

“For example, a social media user could automatically apply a protective layer to their photos before posting, preventing AI systems from learning facial features for deepfake creation.

“Similarly, defence organisations could shield sensitive satellite imagery or cyber threat data from being absorbed into AI models.”

Dr Wang said the technique set a limit on what an AI system could learn from protected content.

He said it provided a mathematical guarantee that the protection would hold, even against adaptive attacks or retraining attempts.

“Existing methods rely on trial and error or assumptions about how AI models behave.

“Our approach is different. We can mathematically guarantee that unauthorised machine learning models can’t learn from the content beyond a certain threshold. That’s a powerful safeguard for social media users, content creators, and organisations.”

He said the technique could be applied automatically at scale.

“A social media platform or website could embed this protective layer into every image uploaded.

“This could curb the rise of deepfakes, reduce intellectual property theft, and help users retain control over their content.”

Dr Wang said that while the method currently applied only to images, there were plans to expand it to text, music and videos.

Read the full study: Provably Unlearnable Data Examples

'Noise' protection can be added to content before it's uploaded online. | Photo: Supplied by CSIRO