
Shadow AI Is Already in Your Business — Here's How to Take Control

Sophie Marchetti · 4 min read

Let’s start with a scenario you’ve almost certainly encountered, even if you don’t know it yet.

It’s Tuesday morning. Your sales manager pastes a client contract into ChatGPT to get a quick summary. Your finance lead uploads a spreadsheet of customer payment data for analysis. Your marketing coordinator feeds a confidential pricing strategy into a free AI writing assistant.

None of these people are doing anything malicious. They’re trying to work faster. But every one of those interactions just sent your business data to a third-party server, where it may be stored, analysed, or even used to train future AI models.

This is shadow AI. And it’s almost certainly happening in your business right now.

The Scale of the Problem

Netskope’s 2025 Cloud and Threat Report found that 72% of enterprise generative AI usage is shadow IT — employees using personal accounts to access tools the company hasn’t sanctioned. They tracked over 1,550 distinct generative AI applications in use across enterprises. The average organisation now detects 223 monthly attempts by employees to include sensitive data in AI prompts.

Trustmarque’s AI Governance Index puts the gap in stark terms: 93% of UK organisations now use AI, but only 7% have fully embedded governance frameworks.

Real Incidents, Real Consequences

The CISA Incident (January 2026): The acting director of CISA — the United States’ own cybersecurity agency — uploaded documents marked “For Official Use Only” to public ChatGPT, triggering internal security alerts and making international headlines. If the head of a cybersecurity agency can accidentally leak data through a public AI tool, anyone can. (CSO Online)

The OmniGPT Breach (February 2025): OmniGPT, a platform aggregating multiple AI models, was breached — exposing over 34 million lines of private conversations, email addresses, phone numbers, and uploaded files containing credentials and billing details. A threat actor posted the entire dataset on a hacking forum. (Cyber Insider)

The UK Regulatory Picture

The Data (Use and Access) Act 2025 is now in force. The ICO’s enforcement posture has shifted dramatically: 2025 saw approximately £19.6 million in enforcement actions — eight times the fine yield of 2024. The average ICO fine jumped from £150,000 to over £2.8 million. The maximum penalty under UK GDPR remains £17.5 million or 4% of global annual turnover, whichever is higher.

The direction is clear: the ICO is issuing fewer, larger fines and targeting systemic failures. An organisation that knows its employees use unvetted AI tools and hasn’t taken steps to govern that usage exhibits exactly the kind of systemic failure that draws attention.

A Practical Response Framework

Level 1 — Policy and Awareness (do this week)

Create a clear “traffic light” AI use policy. Green: public information, brainstorming. Amber: internal processes, non-sensitive docs (sanctioned tools only). Red: PII, financial data, contracts (never share with public AI). Keep it one page — jargon invites eye-rolls.
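The traffic-light policy can even be enforced mechanically. As an illustrative sketch (not a complete DLP ruleset — the patterns and keywords below are invented examples), a pre-submission check might look like this:

```python
import re

# Hypothetical "red" patterns: PII, financial, or contractual markers.
RED_PATTERNS = [
    r"\b\d{2}-\d{2}-\d{2}\b",            # UK sort-code-shaped number
    r"[\w.+-]+@[\w-]+\.[\w.]+",          # email address
    r"(?i)\b(salary|invoice|contract)\b" # contractual / financial terms
]
# Hypothetical "amber" keywords: internal but non-sensitive material.
AMBER_KEYWORDS = {"internal", "process", "draft", "roadmap"}

def classify_prompt(text: str) -> str:
    """Return 'red', 'amber', or 'green' for a prompt under the policy sketch."""
    if any(re.search(p, text) for p in RED_PATTERNS):
        return "red"    # never share with public AI
    words = {w.lower().strip(".,") for w in text.split()}
    if words & AMBER_KEYWORDS:
        return "amber"  # sanctioned tools only
    return "green"      # public information, brainstorming
```

A check like this can sit in front of a sanctioned AI gateway, warning employees before a risky prompt leaves the building rather than after.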

Level 2 — Approved Tool Stack

Evaluate and approve specific AI tools with data processing guarantees (no training on your data, UK/EU data residency). Provide sanctioned alternatives so employees don’t need to go rogue.

Level 3 — Private AI

Deploy AI tools on your own infrastructure using open-source models. No data leaves your premises, no per-query costs, no third-party access.

Tiraverse Take: We build governed AI workflows that keep sensitive data on UK soil — sanctioned prompt libraries, masked logging for compliance, and human review steps when an agent behaves oddly.
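“Masked logging” simply means redacting likely PII before a prompt ever reaches the audit trail. A minimal sketch using Python’s standard logging module — the patterns and the `ai-gateway` logger name are illustrative assumptions, not a production ruleset:

```python
import logging
import re

# Example redaction patterns; a real deployment would use a vetted PII library.
MASKS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\+44|0)7\d{9}\b"), "<UK_MOBILE>"),
    (re.compile(r"\b\d{2}-\d{2}-\d{2}\b"), "<SORT_CODE>"),
]

def mask(text: str) -> str:
    """Replace each PII match with a placeholder token."""
    for pattern, token in MASKS:
        text = pattern.sub(token, text)
    return text

class MaskingFilter(logging.Filter):
    """Redacts PII from every record before it is written to the log."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = mask(str(record.msg))
        return True

logger = logging.getLogger("ai-gateway")
logger.addFilter(MaskingFilter())
```

The compliance log then records that a prompt contained an email address or a sort code without ever storing the value itself.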

What to Do This Week

  1. Ask your team. Send a five-question anonymous survey: Which AI tools do you use? What data goes in? You’ll learn more from honesty than from assumptions.
  2. Draft a one-page AI use policy. Start with the traffic light system above.
  3. Investigate private AI options. If your team already uses AI daily, the question isn’t whether to provide tools — it’s whether to provide safe ones.

Need your team to keep the productivity boost and stay inside ICO guardrails? We can roll out a private AI gateway in four weeks. Get in touch.

Next read: Build vs. Buy: Custom Software · Agentic AI for Small Businesses

Sources: Kingsley Napley — UK Data Governance 2026 · ICO Enforcement 2025 · Mole Valley Chamber — SME AI Adoption