Warning of ‘swathe of risks’ from generative AI

A paper from McCullough Robertson Lawyers warns that information acquired from generative AI may be full of potential pitfalls. | Photo: LightField (iStock)

Businesses and individuals have been warned about a “swathe of serious risks and ethical dilemmas” that have emerged from the rapid onset of generative artificial intelligence (AI).

A paper by McCullough Robertson Lawyers Partner Belinda Breakspear and Senior Associate Jacob Bartels says these risks are emerging ahead of any domestic regulatory framework to directly address generative AI.

The paper says one of the risks is AI producing “hallucinations” – instances where the AI tool generates output that is not backed by data or is otherwise “plainly wrong or misleading”.

Most current generative AI systems are also not secure, meaning information entered into them can become public and may be used to train the model.

“Generative AI is a powerful tool that is developing at astonishing pace,” the paper says.

“Unlike previous public-facing iterations of AI (like Siri or Autocorrect), generative AI models now produce their own sophisticated content and can respond to an ever-expanding range of stimuli including text, images, audio, and video.”

“Businesses, individuals, and governments of all levels are increasingly leveraging generative AI to streamline processes and increase efficiency.”

“However, this rapid adoption brings with it a swathe of serious risks and ethical dilemmas that deserve careful attention.”

Ms Breakspear and Mr Bartels said generative AI models were trained on massive data sets to generate predictive outputs.

As a result, the AI models could only know what they were provided through their training data.

Among the “headline risks” were copyright breaches, with copyright holders increasingly focused on infringements of their IP rights by AI models trained on copyright materials.

Privacy and confidentiality risks included training or input data used by an AI model containing confidential information or information about identifiable individuals.

The use of this data raised “major privacy and confidentiality concerns”, and organisations had to be mindful of their obligations under the Privacy Act 1988 (Cth) and other relevant privacy legislation.

Training data for AI tools could also introduce biases and perpetuate unfair outcomes. Common examples included biases based on gender, race or socio-economic conditions.

“The key ethical risk of generative AI relates to automated decision making (and one needs only to remember the Robodebt issue to understand the potential risks of this),” the paper says.

“Wherever possible, (organisations) should avoid using generative AI in decision making processes.”

“Where this is not feasible, (they need) to ensure that any use of generative AI is transparent and explainable, and any outputs are independently validated by a person before being relied upon.”

The authors said that, while there was no current domestic regulatory framework directly relevant to generative AI, there were clear signs that this was a priority for policymakers.
