ATO's AI Governance Falls Short, Audit Reveals
The Australian National Audit Office (ANAO) has found that the Australian Taxation Office (ATO) has only "partly effective arrangements" in place to support its adoption of artificial intelligence, despite having 43 AI models in production.
The audit, part of the ANAO's new focus on providing assurance over governance of emerging technologies, highlights significant gaps in the ATO's approach to AI implementation, risk management, and ethical oversight.
Auditor-General Dr. Caralee McLiesh notes in her foreword that while AI promises "better services, enhanced productivity and efficiency," it also brings "potential for increased risk and unintended consequences," potential the audit suggests is inadequately addressed at the ATO.
The report comes at a critical time for AI governance in Australia's public sector, with 56 government entities reporting AI adoption in their operations and two parliamentary inquiries examining AI use in government services.
Strategic Framework Lacking
The audit revealed that while the ATO developed an automation and AI strategy in October 2022, it "has not established fit-for-purpose implementation arrangements for this strategy." The tax office is still developing an AI policy and risk management guidance, with completion not expected until December 2025.
Although the ATO has established a policy for staff using publicly available generative AI tools, auditors found the organisation "does not have sufficient centralised visibility and oversight of its use of AI," undermining its ability to effectively govern AI use across the agency.
Perhaps most concerning, the ANAO found that 74 percent of the ATO's AI models in production did not have completed data ethics assessments, despite the agency having a data ethics framework in place. This oversight "undermines the ATO's ability to deliver and to assure the delivery of AI that aligns with ethical principles," the report states.
The audit also highlighted that the ATO "has not sufficiently integrated ethical and legal considerations into its design and development of AI models," limiting its ability to demonstrate that its AI systems are fair, reliable, privacy-protecting, transparent, and contestable.
Risk Management and Monitoring Deficiencies
According to the ANAO, enterprise risks related to AI at the ATO are "above tolerance," and the organization has identified that its current risk assessment processes "are not sufficient for AI-specific risks."
Additionally, auditors found "no evidence of structured and regular monitoring of ATO-built AI models in production," though the tax office is reportedly developing a monitoring framework to address this issue.
The ATO has begun taking steps to address these shortcomings. In September 2024, it established a Data and Analytics Governance Committee "in recognition that stronger governance arrangements were needed." By November 2024, the ATO had appointed its Chief Data Officer as its accountable official under the government's Policy for the responsible use of AI.
The agency is also working to introduce an enterprise-wide approach to monitoring AI model performance, though this project isn't expected to be completed until December 2026.
This audit represents the ANAO's first step in a new line of work focused on the governance of emerging technologies in public administration. Dr. McLiesh indicated that the ANAO will "continue to focus on governance of AI while it develops the capability to undertake more technical auditing of the AI tools and processes used in the public sector."
As AI adoption continues to accelerate across government services, the findings from this audit suggest that governance frameworks are still catching up to technological implementation, potentially leaving agencies exposed to unmanaged risks.
With the Australian Government positioning itself as "an exemplar in the safe and responsible use of AI," this audit provides valuable insights into the challenges facing public sector entities as they navigate the rapidly evolving AI landscape.