Shadow AI: What it is, Why it’s a Risk, and How to Detect it
Your employees are using AI tools right now. The question is whether you know which tools they’re using, what data they’re putting into them, and what’s happening to that data once it leaves your network.
According to Microsoft’s research, 71% of UK employees have used unapproved consumer AI tools at work, with more than half doing this every week. That’s today’s reality, in organisations just like yours, across every sector. And when a data breach is linked to shadow AI, the average additional cost is $670,000 above the cost of a standard breach.
Shadow AI isn’t about malicious employees trying to bypass your controls. It’s about people using tools that make them more productive, without understanding the compliance and security risk they’re creating in the process. Client contracts pasted into ChatGPT. Customer records uploaded to an AI summarisation tool. Internal financial data run through an unapproved model that retains everything it processes. In each case, your data has left your control, your GDPR obligations may have been breached, and you have no audit trail.
The good news is that the visibility gap can be closed.
In this guide, we’ll cover what shadow AI is, why it creates real UK GDPR and security risks, where it hides in your environment, and how to build the detection and monitoring capability to start closing the gap, using tools you almost certainly already have.
Key Takeaways
- Shadow AI is the use of AI tools by employees without IT or security team knowledge or approval, and it’s happening in most UK organisations right now
- 71% of UK employees use unapproved AI tools at work; shadow AI breaches cost an average of $670,000 more than standard data breaches
- The risk isn’t limited to public LLMs like ChatGPT: AI features embedded in CRM systems, help desks, and productivity tools create the same exposure
- A CASB, DLP, IAM, and SIEM form the four pillars of effective shadow AI detection, and most organisations already have at least some of these in place
- Shadow AI is a direct GDPR risk: any AI tool processing personal data without a lawful basis or data processing agreement creates regulatory exposure for your organisation
- Start with discovery, not blocking: understanding what’s in use is the essential first step before any controls can be effective
What is Shadow AI?
Shadow AI is the use of artificial intelligence tools, applications, and services by employees without the knowledge, approval, or oversight of IT and security teams. It’s the AI equivalent of shadow IT, but with a significantly higher risk profile.
Unlike an unsanctioned cloud storage app or messaging platform, AI tools don’t just store data: they actively process, analyse, and in many cases permanently retain the information that employees put into them.
When an employee uses an unapproved AI tool to summarise a meeting, analyse a document, or debug code, they’re not just introducing an unmanaged tool into your environment. They’re creating an unmanaged data processing relationship with an external model that you don’t control, haven’t assessed, and may have no visibility into.
How Shadow AI Differs from Shadow IT
Shadow IT created visibility gaps. Shadow AI creates additional data sovereignty gaps, and that’s a materially different risk.
When an employee used Dropbox without IT approval, the risk was that you didn’t know where the file was stored. When an employee pastes a client contract into ChatGPT, the risk is that the data may now be part of a training dataset accessible to millions of users globally.
When your workforce uses a tool operated in a jurisdiction with no equivalent data protection regulation, that data may never come back under your control.
Shadow IT was about unmanaged tools; shadow AI is about unmanaged data flows to external models, and the difference between the two in compliance exposure and business risk is significant.
Where Shadow AI Hides in Your Organisation
One of the most challenging aspects of shadow AI is its breadth. It’s not a single problem you can solve by blocking one website.
It appears across multiple layers of your organisation’s technology environment, often in places you’d least expect.
Public LLMs and Browser-Based AI Tools
The most visible layer is the direct use of public large language models: ChatGPT, Claude, Gemini, Perplexity, DeepSeek, and similar tools. Employees access these through web browsers, often using personal accounts that aren’t associated with your organisation and therefore bypass any corporate monitoring you may have in place.
Browser-based AI extensions compound the problem. These tools integrate AI capabilities directly into the browser, often requesting permissions to access everything open across multiple tabs.
An extension that ‘helps you write emails’ may simultaneously have access to your CRM and email, your document management system, and your finance platform, all through a single, unvetted browser plugin.
According to research by Netskope, 47% of employees accessing generative AI platforms are doing so through personal accounts their organisations aren’t overseeing. You can’t monitor what you can’t see.
AI Features Embedded in Sanctioned SaaS Tools
Here’s where the problem becomes particularly complex. You’ve probably already approved your CRM platform, your help desk software, your project management tools, and your Microsoft 365 environment.
What you may not have assessed is the AI features those tools are quietly rolling out by default.
The Copilot button that now appears across every Microsoft application. AI writing assistance in your CRM. Automated summarisation in your customer support platform. AI-driven analytics built into tools you approved years ago.
In many cases, these features are enabled by default, without any notification to your IT team and without any opportunity to assess the data processing implications.
Employees aren’t bypassing your controls when they use these features. They’re using tools you’ve approved, but the AI capabilities within them haven’t been subject to the same scrutiny.
Personal Devices and BYOD
For organisations operating BYOD policies or hybrid working arrangements, the challenge extends further still.
Employees using personal devices have an entirely separate AI usage footprint that corporate monitoring can’t reach. Data extracted from business systems can be processed through AI tools on a personal phone or laptop without leaving any trace in your corporate environment.
DigitalXRAID’s Head of SOC Engineering highlighted a specific concern in this area on a recent webinar: some of the tools being used in this way belong to organisations in jurisdictions with no meaningful data protection regulation, and no obligation to secure the data they process.
For UK organisations with GDPR and data protection obligations, this creates real exposure.
Why Shadow AI is a UK GDPR and Security Risk
Understanding the risk in practical terms is the starting point for any effective response. Shadow AI creates two distinct but related categories of exposure: regulatory and security.
The GDPR Dimension
Under UK GDPR, any processing of personal data requires a lawful basis, a data processing agreement where a third party is involved, and compliance with the rules around international data transfers. When an employee submits personal data to an unapproved AI tool (customer records, employee information, health data, financial details), none of those conditions are typically met.
The tool hasn’t been assessed. There’s no DPA in place. If the tool operates outside the UK or EU, there may be no equivalent data protection framework at all.
The ICO’s position is clear: your GDPR obligations don’t stop at the boundary of your approved tool stack. If your employees are processing personal data through external AI tools using your organisation’s data, you’re potentially liable for that processing.
The training data dimension adds another layer. Many AI tools use the data submitted to them to train or improve their models unless users actively opt out; most employees won’t know to do this, or won’t have the option through a free consumer account.
A client contract or customer dataset uploaded today could become part of a model that surfaces that information to another user in a different context entirely.
For a fuller understanding of how AI intersects with your compliance obligations, read the guide to building an AI governance framework and an overview of the EU AI Act.
The Security Risk Dimension
Beyond GDPR, shadow AI creates a range of direct security risks that your cyber risk programme needs to account for.
Data leakage through unvetted models is the most immediate concern. When sensitive business information enters an external AI system, you lose control of where it goes, who can access it, and how it’s used.
Around 60% of AI-related data exposure incidents are linked to shadow AI.
Browser-based AI extensions deserve particular attention from a security perspective. These tools often request broad permissions across everything processed in the browser, effectively creating a data exfiltration channel that sits entirely outside your perimeter defences.
An extension that appears benign may be capturing data from every tab an employee has open, including authenticated sessions on business-critical systems.
There’s also the risk of AI outputs based on unreliable or deliberately compromised models. DeepSeek, for example, represents a specific data sovereignty concern for UK organisations: as a Chinese-operated model with no UK or EU data protection obligations, data submitted to DeepSeek may be subject to Chinese national security laws with no equivalent transparency or redress rights.
Using it to process any sensitive business or personal data creates a risk that most organisations haven’t explicitly assessed.
The absence of an audit trail compounds all of these risks. Without visibility into what’s being submitted to AI tools, you can’t detect incidents, you can’t investigate them, and you can’t demonstrate to a regulator that you exercised reasonable care.
How to Detect Shadow AI in Your Organisation
If you don’t have any discovery capability running, you’re flying blind. The detection framework below is built around four capability pillars, approached in a vendor-agnostic way: the right tools depend on your existing stack, not on any single vendor relationship.
Cloud Access Security Broker (CASB)
A CASB is the primary shadow AI discovery engine. It catalogues cloud application usage across your environment by ingesting firewall logs, proxy data, and network traffic, then risk-scores every application it identifies, including AI tools.
Most modern CASB platforms now include dedicated AI tool catalogues and classification capabilities.
For Microsoft-heavy environments, Microsoft Defender for Cloud Apps provides strong AI discovery capability with a built-in catalogue of AI applications and the ability to apply session controls or block unapproved tools.
Zscaler is a strong option for organisations not operating a full Microsoft stack. The key principle, as DigitalXRAID’s Head of SOC Engineering consistently emphasises, is to focus on capability rather than vendor: identify the CASB that fits your environment, not simply the one you already have licensed.
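To make the discovery layer concrete, here’s a minimal vendor-agnostic sketch of the logic a CASB automates at scale: parse proxy or firewall logs and count traffic to known AI domains. The CSV schema (`user`, `dest_host` columns) and the domain list are illustrative assumptions, not a definitive catalogue.

```python
# Vendor-agnostic sketch of the discovery logic a CASB automates at scale:
# parse proxy/firewall logs and count traffic to known AI domains.
# The CSV schema ('user', 'dest_host') and the domain list are assumptions.
import csv
from collections import Counter

KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "www.perplexity.ai", "chat.deepseek.com",
}

def discover_ai_usage(log_path: str) -> Counter:
    """Count hits per (user, AI domain) from a CSV proxy log."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            # Match the domain itself or any subdomain of it
            if any(host == d or host.endswith("." + d) for d in KNOWN_AI_DOMAINS):
                usage[(row["user"], host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), hits in discover_ai_usage("proxy.csv").most_common(20):
        print(f"{user:<30} {host:<30} {hits}")
```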
Data Loss Prevention (DLP)
DLP policies monitor and control what data employees are sending to external services, including AI tools. Microsoft Purview’s Data Security Posture Management capability addresses this directly. Sensitivity labels applied to content can trigger alerts when that content is submitted to a known AI endpoint.
In audit mode, this gives you immediate visibility into what data is leaving your environment through AI tools, without disrupting legitimate use. More advanced DLP configurations can also inspect at the prompt level, flagging or blocking sensitive content as it’s typed into an AI interface rather than waiting for a file upload or data transfer to trigger an alert.
In block mode, it prevents sensitive data from reaching unapproved tools entirely.
The practical starting point that DigitalXRAID’s Head of SOC Engineering recommends: in Microsoft Purview, deploying a DLP rule that matches your existing sensitivity labels against known AI destinations is close to a single-click action.
You can have initial visibility within hours, not weeks.
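As an illustration of the matching logic such a rule encodes, here’s a minimal sketch that flags content carrying a sensitivity label or matching sensitive-data patterns before it reaches an AI endpoint. Purview implements this natively; the labels and deliberately simplified regexes below are assumptions for the example.

```python
# Minimal sketch of the matching logic behind a DLP rule: flag outbound
# content that carries a sensitivity label or matches sensitive-data
# patterns before it reaches an AI endpoint. Purview implements this
# natively; the labels and simplified regexes here are illustrative.
import re

SENSITIVE_LABELS = {"Confidential", "Highly Confidential"}
PATTERNS = {
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # simplified
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def dlp_verdict(content: str, label: str | None) -> list[str]:
    """Return the reasons (if any) this content should be flagged."""
    reasons = []
    if label in SENSITIVE_LABELS:
        reasons.append(f"sensitivity label: {label}")
    reasons += [name for name, rx in PATTERNS.items() if rx.search(content)]
    return reasons

# A prompt like this would be flagged on three counts
print(dlp_verdict("Summarise the dispute with jane@example.com, NI AB123456C",
                  "Confidential"))
```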
Identity and Access Management (IAM)
Many employees access AI tools by signing in with their corporate identity through OAuth. Conditional Access policies can detect these sign-ins to unapproved services and block them, while OAuth consent monitoring provides a log of every external service that employees have authenticated against using corporate credentials.
This layer is particularly valuable because it catches AI usage that bypasses network-level monitoring. A browser-based AI tool accessed over a personal hotspot won’t appear in your proxy logs, but it will generate an identity event if the employee authenticates using their corporate account.
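Here’s a hedged sketch of what OAuth consent monitoring can look like against Microsoft Graph: list every `oauth2PermissionGrants` entry, resolve each client service principal, and flag display names that look AI-related. It assumes you already hold a token with `Directory.Read.All`, and the keyword match is a crude heuristic rather than a definitive classification.

```python
# Sketch of an OAuth consent inventory via Microsoft Graph, flagging
# service principals whose display names look AI-related. Assumes a
# token with Directory.Read.All; the one-lookup-per-grant pattern keeps
# the sketch simple but would be batched in production.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
AI_KEYWORDS = ("gpt", "openai", "copilot", "claude", "gemini", " ai")
# ' ai' (with a leading space) reduces substring false positives

def list_ai_consents(token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {token}"}
    flagged = []
    url = f"{GRAPH}/oauth2PermissionGrants"
    while url:  # follow @odata.nextLink paging
        page = requests.get(url, headers=headers).json()
        for grant in page.get("value", []):
            sp = requests.get(f"{GRAPH}/servicePrincipals/{grant['clientId']}",
                              headers=headers).json()
            name = sp.get("displayName", "").lower()
            if any(k in name for k in AI_KEYWORDS):
                flagged.append({"app": sp.get("displayName"),
                                "scopes": grant.get("scope"),
                                "consent_type": grant.get("consentType")})
        url = page.get("@odata.nextLink")
    return flagged
```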
SIEM and SOC Analytics
Traditional SIEM platforms weren’t built with AI usage monitoring in mind, but they can be extended with custom detection rules.
DNS requests to known AI domains, anomalous data transfer patterns to AI API endpoints, and OAuth consent events for AI services all provide detection signals that a well-configured SIEM can surface.
User and Entity Behaviour Analytics (UEBA) adds another dimension: once you’ve established a baseline of normal AI usage across your organisation, anomalous spikes become visible. An employee who typically generates ten Copilot interactions per day suddenly generating a hundred warrants investigation.
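A toy version of that baseline logic, with illustrative thresholds, looks like this:

```python
# Toy UEBA-style baseline: flag a user whose daily AI interaction count
# jumps well beyond their own historical norm. Real UEBA engines model
# far more context; the thresholds here are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int,
                 sigma: float = 3.0, floor: int = 20) -> bool:
    """history: one user's daily AI interaction counts (>= 7 days)."""
    if len(history) < 7:
        return False  # not enough data to establish a baseline
    mu, sd = mean(history), stdev(history)
    # Flag only spikes that are both statistically unusual and material
    return today > max(mu + sigma * sd, floor)

# The example from the text: ~10 Copilot interactions a day, then 100
print(is_anomalous([9, 11, 10, 12, 8, 10, 11], 100))  # True
```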
This is the kind of contextual detection that sits naturally within a Security Operations Centre (SOC), which is exactly why DigitalXRAID’s managed SOC service integrates AI visibility monitoring as a core component of its detection capability.
Building a Three-Phase Shadow AI Detection Programme
You don’t need to solve everything at once. Shadow AI visibility can be built iteratively, with each phase delivering immediate value while laying the foundation for the next.
This is the practical, phased approach that DigitalXRAID’s Head of SOC Engineering recommends:
Phase 1: Discover
Enable cloud app discovery in your CASB and configure it to ingest firewall and proxy logs. Query your identity provider for OAuth grants to AI services; looking back 90 days is a practical starting point that will reveal the majority of current AI tool usage.
Deploy a basic DLP alert rule covering sensitivity-labelled content and known AI destinations. At this stage, you’re in audit mode: the goal is visibility, not restriction.
By the end of Phase 1, you’ll have your first honest picture of what’s being used, by whom, and at what frequency.
Phase 2: Assess Risk
Risk-score the AI tools your discovery has identified. Most CASB platforms include risk ratings; use them as your starting framework.
A tool like Microsoft Copilot with Enterprise restrictions enabled sits at one end of the risk spectrum. DeepSeek, with its Chinese data sovereignty implications and absence of EU/UK data protection obligations, sits at the other.
Map data flows to understand which departments are using which tools, and what categories of data they’re submitting. Speak to business unit leads: not everyone using an AI tool knows it hasn’t been approved, and understanding what employees believe is sanctioned versus what isn’t gives you important context for policy development.
This phase connects directly to your existing third-party risk management processes and to the risk assessment work that underpins your AI governance framework.
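To make Phase 2 concrete, here’s an illustrative risk-scoring sketch that combines a few discoverable factors into a simple per-tool score. The factors and weights are assumptions chosen for the example; your CASB’s own ratings are a better starting framework.

```python
# Illustrative Phase 2 risk scoring: combine a few discoverable factors
# into a simple score per tool. Factors and weights are assumptions to
# make the exercise concrete, not a definitive methodology.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    uk_eu_jurisdiction: bool     # operated under UK/EU data protection law
    dpa_available: bool          # will sign a data processing agreement
    training_opt_out: bool       # submitted data excluded from training
    handles_personal_data: bool  # observed in your data-flow mapping

def risk_score(t: AITool) -> int:
    """0 (low) to 10 (high); thresholds are illustrative."""
    score = 0
    score += 0 if t.uk_eu_jurisdiction else 3
    score += 0 if t.dpa_available else 2
    score += 0 if t.training_opt_out else 2
    score += 3 if t.handles_personal_data else 0
    return score

tools = [
    AITool("Microsoft Copilot (enterprise)", True, True, True, True),
    AITool("DeepSeek (consumer)", False, False, False, True),
]
for t in sorted(tools, key=risk_score, reverse=True):
    print(f"{t.name:<35} {risk_score(t)}/10")
```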
Phase 3: Control
Apply Conditional Access policies to block unapproved AI applications while explicitly permitting approved tools. Session controls through your CASB can enforce data handling rules on sanctioned AI.
Set up automated alerting for new AI app OAuth registrations so that newly introduced tools are caught before they become embedded. Communicate your AI acceptable use policy to all employees, even if it’s an initial draft.
Employees who understand what’s monitored, what’s permitted, and why, comply at dramatically higher rates than those who face unexplained restrictions.
Establish a quarterly shadow AI audit cadence.
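The automated alerting step above can be sketched against Microsoft Graph by polling Entra ID audit logs for new application consent events. This assumes a token with `AuditLog.Read.All` and that the events fall within your tenant’s retention window; in production you’d route this signal through your SIEM rather than a polling script.

```python
# Sketch of the Phase 3 alerting step: poll Entra ID audit logs for new
# application consent events via Microsoft Graph. Assumes a token with
# AuditLog.Read.All and that events fall within your retention window.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def new_consent_events(token: str, since_iso: str) -> list[dict]:
    filt = ("activityDisplayName eq 'Consent to application' "
            f"and activityDateTime ge {since_iso}")
    resp = requests.get(f"{GRAPH}/auditLogs/directoryAudits",
                        headers={"Authorization": f"Bearer {token}"},
                        params={"$filter": filt})
    resp.raise_for_status()
    return resp.json().get("value", [])

# Each event's targetResources shows which app was consented to, and by
# whom, e.g. new_consent_events(token, "2025-01-01T00:00:00Z")
```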
Why Blocking Everything Doesn’t Work
The instinct to block all AI tools is understandable. It’s also counterproductive.
Blanket AI bans don’t eliminate shadow AI; they drive it underground. Employees who’ve discovered genuine productivity value in AI tools don’t stop using them because IT has blocked them on the corporate network; they switch to personal hotspots, personal devices, and personal accounts.
The result is the same data leaving your control, but with even less visibility than before.
The more effective approach is to understand what’s being used, risk-score each tool, provide approved and compliant alternatives for the highest-risk use cases, and make the boundaries clear.
There are two ways to approach AI access control: you can block specific unapproved applications while allowing everything else, or you can block all AI applications by default and explicitly permit only approved tools.
DigitalXRAID’s SOC specialists recommend that most organisations take the second approach, but the right choice depends entirely on how your business operates and what AI tools your teams genuinely need.
The goal isn’t to stop AI adoption. It’s to ensure it happens in a way you can see, govern, and protect.
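To make the two postures concrete, here’s a toy sketch of how each evaluates an AI destination; the allow and block lists are illustrative, not recommendations.

```python
# Toy comparison of the two postures. Default-deny: only the allow list
# passes; blocklist: everything passes except the block list.
ALLOWED_AI = {"copilot.microsoft.com"}
BLOCKED_AI = {"chat.deepseek.com"}

def allowed(host: str, default_deny: bool = True) -> bool:
    if default_deny:
        return host in ALLOWED_AI       # explicit allow list only
    return host not in BLOCKED_AI       # block known-bad, allow the rest

print(allowed("claude.ai"))                      # False under default-deny
print(allowed("claude.ai", default_deny=False))  # True under a blocklist
```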
What to Do This Week: Immediate Actions for UK Security Leaders
You don’t need a major programme to start. Most of the following actions leverage tools you already have, and several can be completed within a working day:
- Enable cloud app discovery in your CASB. Minimal configuration is required, and AI visibility begins immediately
- Query your identity provider for AI service OAuth grants in the last 90 days
- Create a DLP alert rule in Microsoft Purview for sensitivity-labelled content submitted to known AI domains
- Audit AI Copilot licence assignments across your estate, including who has access and what they’re doing with it (see the sketch after this list)
- Deploy a SIEM detection rule for new AI app OAuth consent events
- Communicate an AI acceptable use policy to all staff, even if it’s an initial draft that will be developed further
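The Copilot licence audit referenced above can be sketched against Microsoft Graph: find subscribed SKUs whose part number looks Copilot-related, then list the users holding them. It assumes a token with `User.Read.All` and `Organization.Read.All`, and the substring match is a heuristic, since exact SKU part numbers vary by product and tenant.

```python
# Sketch of a Copilot licence audit via Microsoft Graph: find SKUs whose
# part number looks Copilot-related, then list users holding each one.
# Paging is omitted for brevity; the substring match is a heuristic.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def copilot_licence_holders(token: str) -> dict[str, list[str]]:
    headers = {"Authorization": f"Bearer {token}"}
    skus = requests.get(f"{GRAPH}/subscribedSkus", headers=headers).json()
    holders = {}
    for sku in skus.get("value", []):
        if "COPILOT" not in sku["skuPartNumber"].upper():
            continue
        filt = f"assignedLicenses/any(l:l/skuId eq {sku['skuId']})"
        users = requests.get(f"{GRAPH}/users", headers=headers,
                             params={"$filter": filt,
                                     "$select": "displayName"}).json()
        holders[sku["skuPartNumber"]] = [
            u["displayName"] for u in users.get("value", [])]
    return holders
```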
None of this requires a major investment or a new procurement. The first step is simply deciding to start.
How DigitalXRAID Can Help You Get Visibility of Shadow AI
Building AI visibility capability is achievable, but it requires the right expertise to configure effectively, interpret correctly, and maintain as the AI landscape evolves.
DigitalXRAID works with UK organisations across all sectors to deliver exactly this capability.
Managed SOC with AI Visibility
DigitalXRAID’s managed SOC service uses Microsoft’s security suite, including Defender for Cloud Apps, Microsoft Purview, and Microsoft Sentinel, to provide continuous visibility into how AI tools are being used across your organisation.
DigitalXRAID’s CREST and NCSC accredited SOC analysts build and maintain the detection rules, monitor the alerts, and provide the context that turns raw telemetry into actionable intelligence. You get AI visibility without having to build or maintain the capability in-house.
AI Governance and Compliance Consultancy
Getting the policy and governance framework right around shadow AI requires expertise across compliance, data protection, and security operations.
DigitalXRAID’s consultancy team supports organisations from initial AI inventory and gap assessment through to acceptable use policy development, ISO 42001 alignment, and ongoing governance programme management.
Our vCISO and DPO-as-a-Service offerings provide senior-level expertise on a flexible basis for organisations that don’t have those capabilities in-house.
AI Security Testing
Once you have visibility into what’s in use, the next step is understanding whether your AI systems are secure. DigitalXRAID’s LLM and GenAI penetration testing service identifies the vulnerabilities that shadow AI and unsanctioned tools create in your environment, and provides the evidence your team needs to prioritise and remediate them.
If you’d like to discuss your AI visibility posture and understand what’s currently happening in your environment, get in touch and we’ll start with a straightforward conversation about where you are and what you need.
Final Thoughts: You Can’t Protect What You Can’t See
The phrase that DigitalXRAID’s Head of SOC Engineering uses to close every conversation about AI visibility is the most accurate summary of where most UK organisations are today: “you can’t protect what you can’t see”.
Right now, for the majority of organisations, shadow AI is completely invisible and the data flowing through it is completely unprotected.
The financial and regulatory consequences of that invisibility are real and measurable. A $670,000 additional breach cost. GDPR exposure from unassessed data processing. Intellectual property entering training datasets you didn’t consent to. Reputational damage from incidents you had no warning of.
But this is a solvable problem. The tools exist. The methodology is clear. And the organisations that build AI visibility now, before an incident forces their hand, will be in a fundamentally stronger position than those that wait.
Get in touch with DigitalXRAID’s team to discuss your AI visibility posture and take the first step towards understanding what’s really happening in your environment.
Frequently Asked Questions: Shadow AI
What is shadow AI?
Shadow AI is the use of artificial intelligence tools, applications, and services by employees without the knowledge, approval, or oversight of IT and security teams. It includes the direct use of public LLMs like ChatGPT or Claude, AI features embedded within sanctioned SaaS tools, and AI tools accessed through personal accounts or personal devices. It’s called shadow AI because it’s invisible to the organisation, and invisible means unprotected.
How is shadow AI different from shadow IT?
Shadow IT refers to the use of unapproved software, services, or devices within an organisation. Shadow AI goes further: rather than simply introducing an unmanaged tool, it creates an unmanaged data processing relationship with an external AI model. Data submitted to shadow AI tools can be used for model training, processed in jurisdictions without equivalent data protection laws, and retained indefinitely, risks that are categorically different from a file stored in an unapproved cloud drive.
Is shadow AI a GDPR risk for UK businesses?
Yes. Under UK GDPR, any processing of personal data requires a lawful basis, a data processing agreement with third-party processors, and compliance with rules on international data transfers. When employees submit personal data to unapproved AI tools, none of these conditions are typically in place. The ICO’s guidance is clear that GDPR obligations apply regardless of which tools employees use to process personal data, making shadow AI a direct and enforceable compliance risk.
What tools can detect shadow AI in my organisation?
The four core detection capabilities are a Cloud Access Security Broker (CASB) for AI app discovery and risk scoring, Data Loss Prevention (DLP) for monitoring sensitive data submitted to AI tools, Identity and Access Management (IAM) for detecting OAuth authentications to AI services, and SIEM for custom detection rules targeting AI usage patterns. Most organisations already have some of these in place; the gap is typically in configuration and AI-specific detection logic, not in the tools themselves.
Should I block all AI tools to prevent shadow AI?
Blocking all AI tools tends to drive usage underground rather than eliminate it. Employees switch to personal devices and personal accounts, reducing your visibility further rather than removing the risk. The more effective approach is to establish what’s being used, risk-score each tool, provide approved alternatives for high-risk use cases, and set clear policies with enforcement. Blanket blocking works only when combined with genuine approved alternatives that meet employees’ productivity needs.
What is a CASB and how does it help with shadow AI detection?
A Cloud Access Security Broker (CASB) sits between your users and cloud services, providing visibility and control over cloud application usage. For shadow AI detection, a CASB catalogues every AI tool in use across your environment, assigns risk scores based on factors like data handling practices and compliance status, and can block or apply session controls to unapproved tools. Microsoft Defender for Cloud Apps and Zscaler are commonly used CASB platforms in UK enterprise environments.
Is DeepSeek safe to use in a UK organisation?
DeepSeek presents specific data sovereignty concerns for UK organisations. As a Chinese-operated AI model, it may be subject to Chinese national security laws that allow government access to data it processes. It doesn’t carry EU or UK data protection obligations, meaning personal or sensitive business data submitted to it has no equivalent protection to that required under UK GDPR. Most UK organisations should treat DeepSeek as high-risk and restrict its use for any data classified as sensitive or above.
How do I build an AI monitoring programme for my organisation?
Start with three phases: discover, assess, and control. In Phase 1, enable CASB cloud app discovery, query your identity provider for AI service OAuth grants, and deploy a basic DLP alert rule. In Phase 2, risk-score what you’ve found, map data flows, and speak to business units about what they’re using and why. In Phase 3, apply Conditional Access to block high-risk tools, enable session controls on sanctioned AI, set up automated alerting for new AI app registrations, and communicate an AI acceptable use policy. Establish a quarterly review cadence; this isn’t a one-off exercise.