New Study Shows 63% in ANZ Actively Use AI at Work

A new poll of global digital trust professionals reveals high employee usage of generative Artificial Intelligence (AI) in Australia and New Zealand (63%). Yet few organisations have policies around its use (only 11% have a formal policy), training is lacking (80% report no or limited training), and fears persist about its exploitation by bad actors (97% report being at least somewhat worried), according to Generative AI 2023: An ISACA Pulse Poll.

Employees in ANZ are commonly using AI to create written content (51%), increase productivity (37%), automate repetitive tasks (37%), improve decision-making (29%) and provide customer service (20%).

The poll found that many employees at respondents’ organisations are using generative AI, even without policies in place for its use. Only 36 percent of ANZ organisations say their companies expressly permit the use of generative AI (compared to 28 percent globally), only 11 percent say a formal comprehensive policy is in place, and 21 percent say no policy exists and there is no plan for one.

Despite this, 63 percent say employees are using it regardless—and the percentage is likely much higher given that an additional 26 percent aren’t sure.

However, despite employees quickly moving forward with use of the technology, only four percent of respondents’ organisations in ANZ are providing training to all staff on AI, and more than half (57 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 32 percent of respondents indicated they have a high degree of familiarity with generative AI.

The poll explored the ethical concerns and risks associated with AI as well, with 38 percent of ANZ respondents saying that not enough attention is being paid to ethical standards for AI implementation.

Just over one-third of respondents’ organisations consider managing AI risk to be an immediate priority, 32 percent say it is a longer-term priority, and 17 percent say their organisation has no plans to consider AI risk at the moment, even though respondents note the following as top risks of the technology:

  1. Misinformation/disinformation (90 percent vs 77 percent globally)
  2. Loss of intellectual property (IP) (68 percent vs 58 percent globally)
  3. Social engineering (65 percent vs 63 percent globally)
  4. Privacy violations (64 percent vs 68 percent globally)
  5. Job displacement and widening of the skills gap (tied at 35 percent)

More than half (54 percent) of respondents in Australia and New Zealand indicated they are very or extremely worried about generative AI being exploited by bad actors. Seventy percent say that adversaries are using AI as successfully or more successfully than digital trust professionals.

“Even digital trust professionals report a low familiarity with AI—a concern as the technology iterates at a pace faster than anything we’ve seen before, with use spreading rampantly in organisations,” said John De Santis, ISACA board chair. “Without good governance, employees can easily share critical intellectual property on these tools without the correct controls in place. It is essential for leaders to get up to speed quickly on the technology’s benefits and risks, and to equip their team members with that knowledge as well.”

Read more in the infographic outlining these findings, along with other AI resources, including the AI Fundamentals Certificate, the complimentary white paper The Promise and Peril of the AI Revolution: Managing Risk, and a free guide to AI policy considerations, at https://www.isaca.org/resources/artificial-intelligence.