Artificial intelligence has been shown to be vulnerable to cyber attacks that manipulate what it sees, with serious implications for technologies such as self-driving cars.
Researchers at NC State University, in the United States, demonstrated a new way of attacking artificial intelligence computer vision systems.
Study author Tianfu Wu said the new technique, called RisingAttacK, was effective at manipulating all of the most widely used AI computer vision systems.
Associate Professor Wu said the technique could be used in “adversarial attacks,” where someone manipulated the data fed into an AI system to control what the system saw, or did not see, in an image.
“Someone might manipulate an AI’s ability to detect traffic signals, pedestrians or other cars – which would cause problems for autonomous vehicles,” he said.
“Or a hacker could install code on an X-ray machine that causes an AI system to make inaccurate diagnoses.”
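To make the idea concrete, the sketch below shows a classic adversarial perturbation technique, the fast gradient sign method (FGSM) of Goodfellow et al. It is a standard textbook illustration of this family of attacks, not the RisingAttacK algorithm itself, and `model`, `image` and `true_label` are hypothetical placeholders for a PyTorch classifier, a normalised image batch and its correct labels.

```python
# Minimal FGSM sketch (a classic adversarial-attack technique, NOT RisingAttacK).
# Assumes PyTorch, a differentiable classifier `model`, an image batch `image`
# with pixel values in [0, 1], and integer class labels `true_label`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel by +/- epsilon in the direction that raises the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixels in the valid range
```

Even a perturbation too small for a human to notice can flip the model’s prediction, which is why attacks of this kind are dangerous for safety-critical vision systems.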
Associate Professor Wu said the team wanted to find an effective way of hacking AI vision systems because these systems were often used in contexts that could affect human health and safety.
“That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it.”
Read the full study: Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian.
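Reading the title literally, the perturbation is built up iteratively from linear combinations of the right singular vectors of the “adversarial Jacobian,” the Jacobian of the model’s outputs with respect to the input pixels. The sketch below illustrates that general idea only; it is reconstructed from the title under stated assumptions, not the authors’ published implementation, and `model`, `image`, `true_label` and the hyperparameter values are placeholders.

```python
# Illustrative sketch of the idea named in the study title: iteratively learn
# a linear combination of the right singular vectors of the Jacobian of the
# model's logits with respect to the input. Reconstructed from the title as
# an assumption; NOT the authors' code. `model`, `image`, `true_label` and
# the hyperparameters are hypothetical placeholders.
import torch
import torch.nn.functional as F

def singular_vector_attack(model, image, true_label, k=8, lr=0.5, iters=10):
    x = image.clone().detach()
    flat = x.reshape(-1)

    def logits_of(flat_x):
        return model(flat_x.reshape(x.shape)).squeeze(0)

    # "Adversarial Jacobian": logits w.r.t. input pixels, shape (classes, pixels).
    jac = torch.autograd.functional.jacobian(logits_of, flat)
    # Rows of Vh are the right singular vectors: input directions ordered by
    # how strongly they move the model's outputs.
    _, _, Vh = torch.linalg.svd(jac, full_matrices=False)
    basis = Vh[:k].detach()  # the k most output-sensitive directions

    # Iteratively learn the combination coefficients by gradient ascent on the
    # classification loss, restricted to the span of those singular vectors.
    coeffs = torch.zeros(k, requires_grad=True)
    opt = torch.optim.SGD([coeffs], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        perturbed = (flat + coeffs @ basis).reshape(x.shape)
        loss = -F.cross_entropy(model(perturbed), true_label)  # ascend the loss
        loss.backward()
        opt.step()

    return (flat + coeffs.detach() @ basis).reshape(x.shape).clamp(0, 1)
```

Restricting the search to the few directions the model is most sensitive to is what would let an attack of this shape make tiny, targeted changes rather than perturbing every pixel blindly.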