Ewa Zborowska (Research Director, AI Europe)

Remember the good old days when “Shadow IT” was just about rogue Excel spreadsheets and unauthorized Dropbox accounts? The times they are a-changin’! Now we’re dealing with something far more insidious: Shadow AI. And it’s no longer just lurking in the corners of your organization. It’s driving productivity gains while simultaneously creating security nightmares that, hopefully, keep CISOs wide awake at night.

From private drives to GPT instances

Shadow IT has been the bane of enterprise administrators’ existence for decades. We’ve all seen it: marketing teams building their own CRM systems, sales departments hoarding customer data in personal cloud drives, and finance teams creating elaborate Excel macros that have quietly become mission-critical applications. But now we have Shadow IT on AI steroids.

Because instead of innocent unauthorized OneDrive instances, we have unauthorized ChatGPT accounts, private Perplexity subscriptions, custom Copilots, and Excel automation scripts integrated with GPT APIs. And they ALL operate completely outside of IT oversight.
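
To make this concrete, here is a hypothetical sketch of the kind of “shadow” automation an employee might wire up in an afternoon: a few lines of Python pushing internal spreadsheet data to the OpenAI API from a personal account. The file name, model choice, and prompt are illustrative assumptions, not a real deployment; the only thing it assumes from the outside world is the standard openai Python SDK (v1+).

    # A hypothetical shadow-AI helper: summarizes an internal sales sheet
    # using a *personal* OpenAI account. Nothing here touches corporate IT:
    # no proxy, no audit log, no data-loss-prevention hook.
    import csv
    from openai import OpenAI  # personal key read from OPENAI_API_KEY

    client = OpenAI()  # the employee's private credentials, not the company's

    # Corporate data leaves the building right here.
    with open("q3_customer_pipeline.csv", newline="") as f:
        rows = "\n".join(",".join(row) for row in csv.reader(f))

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a sales analyst."},
            {"role": "user", "content": f"Summarize the risks in this pipeline:\n{rows}"},
        ],
    )
    print(response.choices[0].message.content)

Twenty-odd lines, no IT ticket, and a quarter’s worth of customer data now sits in an external model’s context window.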

As many experts keep repeating: shadow IT hasn’t disappeared – it has evolved. And artificial intelligence has given it a turbocharged engine.

The staggering scale of unauthorized AI adoption

IDC’s Global Employee Survey from April 2025 reveals that 39% of EMEA employees are using free AI tools at work, and another 17% use AI tools they pay for privately. Only 23% of employees say they use AI tools provided by their organization – and even that does not mean they aren’t using private tools on the side. Another survey I’ve come across shows that 52% of workers won’t admit to using AI in their jobs. And the share of sensitive corporate data being fed into AI tools has skyrocketed from a not insignificant 10% to over 25% in just one year.

Why are these numbers so high? The answer is frustratingly simple: on a basic level, AI can be ridiculously easy to use. You need a browser, a prompt, and you’re done. No coding, no server configuration, no IT tickets that sit in queues. Just pure, immediate productivity enhancement. Maybe with a bit of compliance catastrophe on the side, but who’s looking?

March or die

However, let’s be brutally honest about why else these numbers are so high. Employees aren’t just using AI tools to work smarter – they’re often using them to survive increasingly unreasonable workplace expectations. In an era where headlines scream about companies replacing entire departments with AI, workers are fighting hard to prove their relevance.

The pressure is palpable and justified. When employees read about firms cutting 30% of their workforce while boasting about AI-driven efficiency gains, the message is clear: march or die. Shadow AI adoption isn’t just about productivity enhancement – more than anything, it can be about professional self-preservation.

This creates a weird dynamic where the very people organizations depend on feel compelled to hide the tools that make them valuable. Are they being rebellious or just rational? When your job security depends on meeting targets that seem designed for superhuman capabilities, you’ll probably use whatever tools are necessary to hit them, authorized or not.

Most AI tools don’t require dedicated client applications. They operate seamlessly through web browsers or as mobile apps, making them almost invisible to traditional IT monitoring systems. The vast majority of ChatGPT, Google Gemini, and similar tool usage at work happens through non-corporate accounts, meaning corporate data and IP are being processed by AI models that organizations have zero visibility into, zero control over, and zero ability to audit.
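
A deliberately simplified, hypothetical sketch shows why the traditional approach fails: a static domain blocklist can only stop the AI tools it already knows about, and a new tool (or a vendor’s new domain) sails straight through. The host names below are illustrative, not an actual corporate policy.

    # A naive egress blocklist - the "traditional IT monitoring" approach.
    # It fails open the moment an AI tool appears that isn't on the list.
    BLOCKED_HOSTS = {"chat.openai.com", "gemini.google.com", "www.perplexity.ai"}

    def is_allowed(host: str) -> bool:
        """Allow any host that is not on the static blocklist."""
        return host not in BLOCKED_HOSTS

    print(is_allowed("chat.openai.com"))   # False - a known tool is blocked
    print(is_allowed("brand-new-ai.app"))  # True - an unknown tool slips through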

How the pursuit of productivity kills strategic AI adoption

Many organizations, in their relentless pursuit of productivity metrics and efficiency gains, are creating an environment where employees feel compelled to hide their AI usage just to meet impossible expectations. The result is a vicious circle: leadership demands productivity improvements while threatening job cuts; employees discover AI tools that help them meet those unrealistic expectations; IT blocks access; and employees turn to unauthorized tools to avoid becoming the next layoff statistic.

The result? Organizations end up with lower overall AI adoption rates than they could achieve, precisely because they created a fear-based environment where survival instinct eats strategy for breakfast. Define irony: companies that publicly celebrate AI’s potential to replace human workers are simultaneously frustrated by their inability to achieve coordinated, strategic AI implementation.

The education paradox, or why one-time training fails you

And here’s where most organizations spectacularly miss the mark. They roll out a single “AI Awareness” training session, check the compliance box, and wonder why employees still go rogue.

Basic communication theory tells us that people need to hear a message seven times before it truly registers. Yet organizations treat AI education like a software update: deploy once, assume adoption. The learning curve for responsible AI usage isn’t a gentle slope – or maybe the slope is gentle, but the road is long and winding. Employees need ongoing, contextual education that evolves with the changing AI landscape. They need to understand not just the “what” and “how,” but the “why” behind AI governance policies. (And you need AI governance – do we even need to say that?) When people understand the reasoning behind restrictions, compliance rates soar. When they don’t, Shadow Everything thrives.

Smart organizations recognize that AI literacy requires sustained and strategically planned education programs. They build comprehensive learning pathways that revisit core concepts with increasing depth over time, ensuring employees develop genuine understanding rather than superficial compliance. This investment isn’t just about risk mitigation – it’s about creating a workforce capable of strategic, responsible AI adoption.

Hope for a transparency solution? BYOAI!

The IEEE Computer Society proposes a solution that might make traditional IT nervous: BYOAI (Bring Your Own AI). This approach emphasizes transparency, risk assessment, and responsibility while allowing employees to work with their preferred AI tools.

The concept acknowledges a fundamental truth that many organizations refuse to face, although they should have learned it by now: you can’t stop Shadow AI adoption – or anything else, for that matter – through prohibition. A ban only drives it deeper underground, where it becomes even more dangerous. Think Chicago, Valentine’s Day, circa 1929. So if a ban is not the answer, then what? The easiest, yet most reliable, way to mitigate the risk is good old, albeit boring, education…

Embrace reality, manage risk

Shadow AI isn’t going away. The productivity gains are too compelling, the tools are too accessible, and the competitive pressure too intense. Organizations have two choices: build frameworks for managing Shadow AI or watch it manage them.

What will smart companies do?

  • Invest heavily in ongoing employee AI education (not one-shot training)
  • Create transparent AI governance frameworks
  • Design security policies that enable rather than restrict innovation
  • Build trust through collaboration rather than control
  • Measure success by strategic AI adoption, not just productivity metrics

The question isn’t whether Shadow AI is a threat or an opportunity – it’s whether your organization will respond with wisdom or wishful thinking. Choose wisely!

Listen back to Ewa on the following webcast: AI in 2025: Deliver or Wither

To learn more about how International Data Corporation (IDC) can support your technology market data needs, please contact us.
