Shadow AI Economy Bypasses IT Governance: MIT

Workers at 90 per cent of companies are using personal AI chatbots for daily tasks without IT approval, creating significant governance and security risks, according to a new MIT study.

The research reveals a stark disconnect between formal enterprise AI investments and actual usage patterns. While companies have invested $US30-40 billion globally in generative AI initiatives, 95 per cent of organisations report no measurable profit-and-loss impact from their official AI projects.

Meanwhile, employees routinely use personal ChatGPT accounts, Claude subscriptions and other consumer-grade AI tools to automate work tasks. This "shadow AI economy" remains largely invisible to IT departments and executive leadership, raising concerns about data security, compliance and intellectual property protection.

The MIT Project NANDA study, titled "State of AI in Business 2025," analysed over 300 publicly disclosed AI initiatives, interviewed representatives from 52 organisations and surveyed 153 senior leaders.

Only 40 per cent of companies have purchased official large language model subscriptions, yet employees in more than 90 per cent of organisations regularly use personal AI tools for work. Nearly every survey respondent reported using LLMs as part of their regular workflow.

Shadow AI usage creates potential compliance risks under privacy regulations and bypasses established security protocols and data governance frameworks.

Workers demonstrated clear preferences between AI and human involvement depending on the task. Seventy per cent prefer AI for drafting emails and 65 per cent for basic analysis, while 90 per cent said they prefer humans for mission-critical work.

The study identifies key reasons why personal AI tools outpace enterprise solutions: immediate utility, workflow flexibility and low barriers to adoption. Consumer tools like ChatGPT offer adaptability and persistent memory that many custom enterprise solutions lack.

Enterprise AI projects face a "chasm between pilots and production" due to complex integrations, inflexible interfaces and limited learning capability. Internal "build" projects fail twice as often as externally sourced "buy" solutions.

For IT governance professionals, the research suggests organisations must recognise shadow AI patterns and develop frameworks to harness employee innovation while maintaining security and compliance standards.

The study challenges five common assumptions about enterprise AI, finding that few jobs have been replaced, generative AI isn't transforming business operations as predicted, and problems stem from tool limitations rather than regulations or model performance.