Australian Agencies Get AI Governance Framework
The Digital Transformation Agency has released a technical standard to help government agencies embed transparency, accountability and safety measures across the artificial intelligence system lifecycle.
The Australian Government AI Technical Standard establishes requirements for AI systems from initial design through to decommissioning, covering in-house systems, vendor solutions, pre-trained AI models and managed services.
“The standard is designed with public trust front of mind,” said Lucy Poole, General Manager of Digital Strategy, Policy and Performance at the DTA.
“The AI technical standard isn’t about adding more processes to its users. It’s designed to integrate with what agencies already do,” said Ms Poole.
“It allows agencies to embed responsible AI practices into existing governance, risk and delivery frameworks.”
The framework follows three phases: Discover, Operate and Retire. During the Discover phase, agencies must define system purpose, assess ethical risks and biases, ensure data quality and privacy measures, and evaluate accuracy and robustness through adversarial testing.
The Operate phase requires integration safeguards, secure launches with documentation, and continuous performance monitoring to detect biases and data drift. The Retire phase mandates controlled decommissioning with data retention compliance.
The DTA says agencies are “encouraged” to begin applying the AI technical standard to guide their development and use of current and future AI systems.
“Our technical standard was developed with extensive research of international and domestic practices – and comprehensive consultation with the APS,” said Ms Poole.
“At every stage of the AI lifecycle, the Standard helps agencies keep people at the forefront, whether that’s through human oversight, transparent decision-making or inclusive design.”
https://architecture.digital.gov.au/standard/government-use-ai