Government AI Survey Reveals Adoption Gaps

The Australian Government released a set of AI Technical Standards last month, but significant capability gaps persist across public sector agencies, with some yet to begin adoption despite mounting risks from technological inaction.

Digital Transformation Agency Deputy CEO Lucy Poole told the Tech in Gov 2025 conference that the AI standards establish critical frameworks for data interoperability, algorithmic transparency and model validation across government AI systems.

"In government, trust is everything. One misstep can stall progress and erode the social license we rely on. And once it's lost, it's a long road back," Poole said during Tuesday's keynote address.

The standards emerge as the DTA's AI Accountable Officials Survey of 80 agencies revealed adoption disparities. While most agencies are actively trialling AI tools and building staff capability, some have yet to begin implementation in what Poole described as a high-risk position.

"In the rapidly changing landscape of technology, standing still is not a neutral position; it exposes organisations and the communities they serve to significant risk," she warned.

Poole identified legacy systems as the primary impediment to AI adoption.

"Legacy systems hinder AI adoption due to outdated infrastructure, poor data management, and compatibility issues, making integration challenging. Modernising these systems is costly and time-consuming," she said.

The resource drain affects capability development, with legacy system maintenance "diverting resources from AI initiatives, slowing down the adoption process."

Smaller agencies face particular challenges, reporting gaps in skills, resources and funding alongside concerns about data privacy and legal complexity. The survey findings highlight sector-wide capability issues that could undermine progress and public trust.

Standards Enable Platform-Scale Implementation

The AI Technical Standards represent a shift from experimental pilots toward enterprise-scale deployment, Poole explained. Drawing on a CSIRO Data61 metaphor, she emphasised that governance frameworks enable rather than constrain innovation.

"Brakes don't slow us down - they help us go faster," she said, quoting Dr Liming Zhu. "The safeguards we build - our standards, governance, and assurance tools - aren't barriers. They're enablers."

The standards were developed through cross-government collaboration and embed transparency, accountability and safety requirements. They aim to support agencies moving "from pilots to platforms - from isolated experiments to whole-of-government capability."

Poole showcased the National Library of Australia's oral history digitisation as exemplifying scalable AI deployment, with systems transcribing one hour of audio in 90 seconds. "That means decades of voices—stories, lived experiences—are now searchable and accessible to researchers, educators, and the public," Poole said.

Risk Management and Public Trust

The presentation acknowledged that weak implementation by individual agencies could compromise sector-wide progress and public confidence.

"One weak link in our chain can have serious consequences - compromising not only the effectiveness of our systems, but also the trust of the public," Poole warned.

The standards framework aims to maintain public trust while enabling innovation, with governance mechanisms providing "the guardrails, the map, and the brakes we need to navigate complexity with confidence."

Poole concluded by referencing Mo Gawdat's observation that AI adoption represents more than technological change: "This isn't just a story about technology. It's about us—human nature, ethics, and how we choose to handle this powerful tool."

The DTA serves as the Australian Government's adviser for whole-of-government digital and ICT strategies, policies and standards, including procurement oversight.