New police tool poisons data to hamper cyber crime

PhD candidate Elizabeth Perry led the development of Silverer. | Photo: Supplied by AFP

Australian police are using “digital poison” in an attempt to thwart cyber criminals.

In collaboration with Monash University, the Australian Federal Police (AFP) have set up the AI for Law Enforcement and Community Safety (AiLECS) Lab, which is developing a new disruption tool.

AFP Commander Rob Nelson said that among its broad applications was the ability to slow down and stop criminals producing AI-generated child abuse material, extremist propaganda, and deepfake images and videos.

Commander Nelson said the process, known as “data poisoning”, involved the subtle alteration of data to make it significantly more difficult to produce, manipulate and misuse images or videos using AI programs.

He said AI and machine learning (ML) tools required significant amounts of online data to produce AI-generated content, so poisoning this data led AI models to create inaccurate, skewed or corrupted results.
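The effect Commander Nelson describes can be seen in miniature. The toy sketch below (not AFP code; every name and number in it is illustrative) fits the same simple model to clean data and to data in which a small fraction of samples has been subtly corrupted, showing how poisoned inputs skew what a model learns.

```python
# Toy illustration of data poisoning, assuming nothing about Silverer itself:
# the same model fit on clean versus subtly corrupted data learns a
# noticeably different relationship.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + rng.normal(0, 0.5, 200)  # true relationship: y = 2x

# Poison 10% of samples: shift their inputs so the fit is pulled off course.
poisoned_x = x.copy()
idx = rng.choice(len(x), size=20, replace=False)
poisoned_x[idx] += rng.normal(8, 1, 20)

clean_slope = np.polyfit(x, y, 1)[0]
poisoned_slope = np.polyfit(poisoned_x, y, 1)[0]
print(f"slope learned from clean data:    {clean_slope:.2f}")   # ~2.0
print(f"slope learned from poisoned data: {poisoned_slope:.2f}")  # skewed
```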

“This also makes it easier to spot a doctored image or video created by criminals.”

PhD candidate Elizabeth Perry led the development of the AI-disrupter, called ‘Silverer’, over 12 months at the AiLECS Lab.

Ms Perry said the name was a nod to the silver used to make mirrors, because the tool would similarly reflect back a distorted version of an original image.

“In this case, it’s like slipping silver behind the glass, so when someone tries to look through it, they just end up with a completely useless reflection,” she said.

“Before a person uploads images on social media or the internet, they can modify them using Silverer. This will alter the pixels to trick AI models and the resulting generations will be very low-quality, covered in blurry patterns, or completely unrecognisable.

“Offenders making deepfakes often try to use a victim’s data to fine-tune an AI of their own; Silverer modifies the image by adding a subtle pattern that tricks the AI into learning to reproduce the pattern, rather than generate images of the victim.”
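Silverer’s actual algorithm has not been published, so the following Python sketch only illustrates the general idea Ms Perry describes: blending a faint, structured pattern into an image’s pixels before upload, so that a model fine-tuned on such images tends to learn the pattern rather than the subject. The function name, file names and strength value are all hypothetical.

```python
# Illustrative sketch only, not Silverer's implementation: overlay a
# low-amplitude, high-frequency pattern on an image before sharing it.
import numpy as np
from PIL import Image

def add_poison_pattern(path_in: str, path_out: str, strength: float = 4.0) -> None:
    """Blend a faint sinusoidal pattern into the image's pixels."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    h, w, _ = img.shape

    # High-frequency pattern: barely visible to people, but statistically
    # prominent to a model trained on many images carrying it.
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = np.sin(xx * 0.9) * np.cos(yy * 0.9)  # values in [-1, 1]

    poisoned = img + strength * pattern[..., None]  # same shift on R, G, B
    poisoned = np.clip(poisoned, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

add_poison_pattern("original.jpg", "protected.png", strength=4.0)
```

In a real tool the perturbation would be computed adversarially against target models rather than being a fixed pattern; the fixed sinusoid here simply keeps the sketch self-contained.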

Commander Nelson said data-poisoning technologies were still in their infancy and being tested, but showed promising early results as a law-enforcement capability.

“Where we see strong applications is in the misuse of AI technology for malicious purposes,” he said.

“For example, if a criminal attempts to generate AI-based imagery using the poisoned data, the output image will be distorted or completely different from the original. By poisoning the data, we are actually protecting it from being generated into malicious content.”