Most Australian public servants have a new rule book for working with Artificial Intelligence (AI).
From this week, Australian Public Service (APS) agencies have begun implementing the new Policy for the responsible use of AI in government.
A statement from the federal Digital Transformation Agency said it would continue to assist agencies through support for training, AI assurance, capability development and technical standards.
The policy, which does not apply to the “national intelligence community” (including the likes of the Office of National Intelligence, the Australian Signals Directorate and ASIO), sets out a number of principles, including:
- Safely engage with AI to enhance productivity, decision-making, policy outcomes and government service delivery for the benefit of Australians.
- APS officers need to be able to explain, justify and take ownership of advice and decisions when utilising AI.
- Have clear accountabilities for the adoption of AI and understand its use.
- Build AI capability for the long term.
Under the policy, agencies need to complete publicly available and regularly updated AI transparency statements that include information such as:
- The agency’s reasons for using AI or for considering its adoption.
- Classification of AI use according to usage patterns and domains.
- Classification of use where the public may directly interact with, or be significantly impacted by, AI without a human intermediary or intervention.
- Measures to monitor the effectiveness of deployed AI systems, such as governance or processes.
- Compliance with applicable legislation and regulation.
- Efforts to identify and protect the public against negative impacts.
- Compliance with each requirement under the Policy for responsible use of AI in government.
- When the statement was most recently updated.
Learn more about the Policy for the responsible use of AI in government.