Most AI Initiatives Miss Expected ROI - ISACA
Artificial intelligence is now embedded almost everywhere in business, but the returns and the guardrails are not keeping pace, according to new research from professional association ISACA. Ninety percent of digital trust professionals say employees in their organisation are using AI, yet only 22 percent report that AI return on investment has met or exceeded expectations.
The 2026 AI Pulse Poll surveyed more than 3,400 digital trust professionals globally across IT audit, governance, cybersecurity, privacy and emerging technology roles. It examines AI use, policies and standards, workforce impact, incident response readiness and security across the sector.
Incident response capability is the standout concern. More than half of respondents - 56 percent - are unsure how long it would take to halt an AI system in response to a security incident. Thirty-nine percent do not know whether their organisation has a documented process for shutting down or overriding AI systems.
Governance frameworks are tightening but remain incomplete. Only 38 percent of organisations have a formal, comprehensive AI policy, up from 28 percent in 2025. A further 30 percent have a limited policy and 25 percent have no active policy at all.
ROI uncertainty is widespread. Twenty-three percent of respondents say it is too early to tell, 22 percent do not know their AI ROI, 20 percent report limited ROI so far, and just 22 percent say ROI has met or exceeded expectations.
Jamie Norton, Vice Chair of the ISACA Board, said the findings show AI has shifted from an information technology issue to a governance and leadership issue.
"What we're seeing now is a shift from experimentation to accountability," said Mr Norton. "Organisations are moving quickly to embed AI into operations, but many are still developing the policies, governance structures and skills needed to ensure those systems deliver long-term value safely and responsibly."
AI use cases are concentrated on productivity rather than transformation. Sixty-two percent of respondents use AI to increase productivity, the same proportion use it to create written content, 50 percent automate repetitive tasks and 49 percent analyse large data sets.
The skills picture is also shifting. Seventy-eight percent of respondents say AI skills are very or extremely important to their profession, up from 72 percent in 2025. A third of organisations train all employees on AI, up from 22 percent the previous year, and 36 percent plan to increase AI-related jobs in the next 12 months.
Risk priorities have hardened. Forty-five percent rate AI risks as an immediate priority, and 38 percent are confident in their board's understanding of, and action on, AI risks. The most-cited AI risks are misinformation and disinformation at 82 percent, privacy violations at 74 percent, social engineering at 60 percent, loss of intellectual property at 58 percent and job displacement at 42 percent.
Detection capability is improving but trust remains fragile. Forty-one percent are confident in their own ability to detect AI-powered misinformation, up from 30 percent in 2025, while only 36 percent are confident in their organisation's ability to detect it.
The poll notes that half of Oceania respondents say boards and executive leadership are ultimately accountable if AI systems cause harm or serious error. The finding aligns with the direction of Australian government policy, which has signalled mandatory guardrails for high-risk AI systems while a voluntary AI Safety Standard is already in place.
