Mandatory AI ‘guardrails’ out for review

The Federal Government has released draft regulations for the use of Artificial Intelligence. | Photo: Goroden Koff (iStock)

Proposed Federal Government regulations will require users of Artificial Intelligence to disclose its use and establish systems for people impacted by AI systems to challenge outcomes.

Under draft “mandatory guardrails”, released today, developers and deployers of AI would also need to ensure “meaningful human oversight”.

They are among 10 proposed mandatory guardrails for high-risk AI, underpinned by a set of principles which would be used to define “high risk”.

Federal Industry and Science Minister Ed Husic said the Government had consulted with the public and industry about AI and was told they wanted to see stronger regulation.

Minister Husic said business asked for clarity on AI regulation so they could confidently seize the opportunities that AI presented.

“The Tech Council estimates Generative AI alone could contribute $45 billion to $115 billion per year to the Australian economy by 2030,” he said.

Minister Husic said an AI expert group had informed the Government’s Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, which included:

  • A proposed definition of high-risk AI.
  • 10 proposed mandatory guardrails.
  • Three regulatory options to mandate these guardrails.

He said the three regulatory approaches could be:

  • Adopting the guardrails within existing regulatory frameworks as needed.
  • Introducing new framework legislation to adapt existing regulatory frameworks across the economy.
  • Introducing a new cross-economy AI-specific law (for example, an Australian AI Act).

In the paper, the following mandatory guardrails are suggested for organisations developing or deploying high-risk AI systems:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks.
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance.
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight.
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
  7. Establish processes for people impacted by AI systems to challenge use or outcomes.
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
  9. Keep and maintain records to allow third parties to assess compliance with guardrails.
  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

The Government has also released a new Voluntary AI Safety Standard with immediate effect.

Minister Husic said it provided practical guidance for businesses whose use of AI was high risk, so they could start implementing best practice in their use of AI.

“The Standard gives businesses certainty ahead of implementing mandatory guardrails.

“In step with similar actions in other jurisdictions, including the EU, Japan, Singapore and the US, the Standard will be updated over time to conform with changes in best practice.”

He said this new guidance would help domestic businesses grow and attract investment, while managing the risks.

View the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings.

View the Voluntary AI Safety Standard.