
Shadow AI: Your Employees Are Using AI Without Your Knowledge. And It Could Cost You Dearly.

March 4, 2026
Mario Bouchard, M. Adm., CISSP

Twenty years ago, your employees were installing Dropbox and plugging USB drives into their workstations to work around IT department delays. We called it "shadow IT." It was annoying, but manageable.

Today, the same instinct is back, but with consequences of an entirely different magnitude. Your employees are using ChatGPT, Gemini, Copilot, and other artificial intelligence tools to draft reports, analyze data, and speed up their work. Often with the best of intentions. Almost always without oversight. And sometimes, with data you'd never want leaving your organization.

This is what we call shadow AI.

The Real Problem Is the Blind Spot

Let's be clear: AI is an extraordinary productivity tool. The real issue is that you don't know how your teams are using it.

The numbers speak for themselves. According to a BlackFog study (Sapio Research, November 2025, published January 2026), 86% of workers use AI tools at least once a week for work. Among those using unapproved tools, 58% report relying on free versions, the ones that offer no guarantees around security, data governance, or privacy. Netskope reports that 47% of generative AI platform users access them through personal accounts, completely outside the enterprise's security perimeter.

And these aren't fringe behaviors. BlackFog reveals that 49% of workers use tools not approved by their employer. Worse still: 63% believe it's acceptable to do so if the company doesn't offer an approved alternative.

What It Actually Costs

IBM's Cost of a Data Breach Report 2025 puts a number on the impact: organizations with high levels of shadow AI face an average breach cost of approximately $4.74M USD, $670,000 more than those with low or no shadow AI ($4.07M). Twenty percent of surveyed organizations reported suffering a breach directly attributable to shadow AI.

In these incidents, the most frequently compromised data types are customer personal information (65% of cases) and intellectual property (40%). And the most troubling finding: 97% of AI-related breaches involved inadequate access controls.

This is a governance problem, not a technology problem.

Why Your Employees Are Working Around the Rules

Before pointing fingers at your teams, ask yourself: why are they doing this?

The answer is almost always the same. There's a gap between how fast employees need solutions and how fast the organization provides them. Generative AI is three clicks away. A technology approval process often takes weeks, sometimes months. Faced with tight deadlines, people find their own solutions.

BlackFog confirms it: 60% of employees are willing to take risks to meet their deadlines. That's resourcefulness in a system that can't keep up with reality.

The Regulatory Risk: Privacy Laws Change the Equation

For organizations operating in regulated environments, shadow AI creates direct compliance risk. Privacy legislation across North America, including Quebec's Law 25 (whose final phase took effect in September 2024), Canada's PIPEDA, and evolving U.S. state privacy laws, imposes strict obligations around personal information: privacy impact assessments, informed consent, breach notification, and governance of automated decision-making.

When an employee copies and pastes customer data into an external AI tool without the organization's knowledge, it's potentially a regulatory violation. The organization can't report an incident it hasn't detected, nor demonstrate compliance for a data processing activity it doesn't know about.

The regulatory reality: if an incident presents a risk of serious harm, organizations must promptly notify the relevant privacy authority and affected individuals, and maintain an incident register. Penalties can be severe. Quebec's Law 25 carries administrative sanctions of up to $10 million or 2% of worldwide revenue, and criminal penalties of up to $25 million or 4% of worldwide revenue. In a shadow AI context, the organization can't even claim good faith. It simply didn't have visibility.

The Scenario No One Wants to Live Through

A financial analyst is preparing a quarterly report. To save time, they copy-paste an Excel file (unpublished financial results, 12,000 customer names) into ChatGPT. Free version. Personal account.

The report goes to the executive team within the hour. Flawless result.

No one saw a thing. No alert. No log.

Now, the consequences:

  • 12,000 personal records transmitted to an external service without consent.
  • OpenAI's policy: for consumer services like ChatGPT, content may be used to improve and train models unless the user opts out via data controls. Without that explicit opt-out, submitted data may feed model training.
  • If the incident presents a risk of serious harm under applicable privacy law: mandatory notification to the regulator and affected individuals.
  • Order of magnitude (IBM): 12,000 records × $166/record ≈ $2.0M. That per-record figure is IBM's study-wide average for compromised customer data, not a precise rate, but it gives a sense of the exposure (see the quick calculation below).
  • The board's question: "What controls were in place to prevent this?"

This scenario plays out on an ordinary Tuesday morning in any organization that lacks visibility into AI usage.
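
If you want to adapt this arithmetic to your own record counts, here is a minimal back-of-envelope sketch in Python. The per-record figures are the study-wide averages from IBM's 2025 report ($166 for customer personal information, $178 for intellectual property); treat the result as an order of magnitude, not a quote.

```python
# Back-of-envelope breach exposure estimate. Per-record costs are
# IBM 2025 study-wide averages: rough orders of magnitude only.
COST_PER_RECORD_USD = {
    "customer_pii": 166,           # IBM 2025 average, customer personal info
    "intellectual_property": 178,  # IBM 2025 average, intellectual property
}

def estimate_exposure(records_by_type):
    """Sum record count x average cost per record for each data type."""
    return sum(
        count * COST_PER_RECORD_USD[data_type]
        for data_type, count in records_by_type.items()
    )

# The scenario above: 12,000 customer records in a single paste.
print(f"${estimate_exposure({'customer_pii': 12_000}):,}")  # $1,992,000
```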

Where to Start: Concrete Steps You Can Take Monday Morning

There's no shortage of articles about shadow AI. What's missing is concrete action. Managing shadow AI comes down to four continuous actions: detect what's being used, govern with a clear policy, equip your teams with secure alternatives, and measure regularly to adjust course. Here's how to put them into practice right now, without a massive budget or a transformation project.

Ask Your IT Team Five Questions

You don't need a full audit to get an initial picture. At your next meeting with your IT lead or CISO, ask these questions:

  1. "Do we know which AI tools are actually being used in the organization?" Not just the ones we've purchased. The ones people are actually using, including free browser-based versions.
  2. "Can we see network traffic to AI platforms?" Firewalls and proxies can identify connections to openai.com, anthropic.com, gemini.google.com, and others. If no one's looking, it's a blind spot.
  3. "What happens when an employee pastes sensitive data into ChatGPT?" If the answer is "we don't know" or "nothing," you have your answer.
  4. "Do we have an AI usage policy?" And if so, do employees know about it? A policy that no one reads protects no one.
  5. "Is our AI usage documented in our privacy impact assessments, as required by applicable privacy law?" If AI doesn't appear anywhere in your PIAs, there's a gap between your operational reality and your compliance posture.

If the answer to more than two of these questions is "I don't know," you have a blind spot that could cost you dearly the day an incident occurs.
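
To make question 2 concrete, here is a minimal sketch of the kind of check a network team can run against a proxy or firewall log export. The CSV file name, column name, and domain list are illustrative assumptions; adapt them to your own log format and the AI services relevant to your environment.

```python
import csv
from collections import Counter

# Illustrative list of AI platform domains; extend as needed.
AI_DOMAINS = (
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
)

def count_ai_traffic(log_path, host_column="dest_host"):
    """Tally log rows whose destination matches a known AI domain."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get(host_column) or "").lower()
            for domain in AI_DOMAINS:
                if host == domain or host.endswith("." + domain):
                    hits[domain] += 1
    return hits

# Hypothetical proxy export; adjust the path and column name.
for domain, count in count_ai_traffic("proxy_export.csv").most_common():
    print(f"{domain}: {count} connections")
```

Even a crude tally like this turns "we don't know" into a first data point you can bring to the next management meeting.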

Run a Quick AI Usage Audit

You don't need a six-month engagement. Start with three simple actions:

  • Analyze your network traffic to known AI platforms. Most firewalls can generate this report in minutes.
  • Run an anonymous internal survey. Simply ask your employees which AI tools they use and for what types of tasks. You'll be surprised by how candid people are when sanctions aren't on the table.
  • Check the browser extensions installed on your workstations. Several AI tools integrate directly into Chrome or Edge without any visible installation. (A starting-point script follows this list.)
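
As a starting point for the extension check, here is a rough sketch that inventories Chrome extensions on a single Windows workstation by reading each installed extension's manifest. The paths assume Chrome's default profile location on Windows; Edge and other profiles use similar layouts, and localized extension names may appear as __MSG_ placeholders that need manual review.

```python
import json
import os
from pathlib import Path

# Default Chrome extensions folder on Windows (an assumption; adjust
# for Edge, other browser profiles, or macOS/Linux layouts).
EXT_ROOT = (Path(os.environ.get("LOCALAPPDATA", ""))
            / "Google" / "Chrome" / "User Data" / "Default" / "Extensions")

def list_extensions(root):
    """Yield (extension_id, name) pairs from each version's manifest.json."""
    for manifest in root.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        # Localized names show up as "__MSG_xxx__" placeholders; flag
        # those for manual review instead of resolving locale files here.
        yield manifest.parent.parent.name, data.get("name", "?")

for ext_id, name in sorted(set(list_extensions(EXT_ROOT))):
    print(f"{ext_id}  {name}")
```

Run at scale through your endpoint management tooling, a listing like this quickly surfaces AI assistants that never went through procurement.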

The good news: according to IBM's 2025 report, organizations detected shadow AI incidents internally in 50% of cases, up from 42% in 2024 and 33% in 2023. The trend is moving in the right direction.

Write a Policy, Even an Imperfect One

An AI usage policy doesn't need to be a 40-page legal document. It needs to answer five questions:

  • Which tools are authorized?
  • What types of data can be submitted?
  • Which uses are prohibited?
  • Who is responsible for governance?
  • What do you do when in doubt?

What matters is that the policy is readable, realistic, and known. A 40-page document that no one consults isn't a policy. It's an alibi.

And above all: treat your policy as a living document. AI evolves at breakneck speed. New tools appear every week, use cases change, and risks shift. A policy written in February will be incomplete by June. That's normal. The winning approach is iterative: publish a first version now, gather feedback from your teams, adjust quarterly based on new realities. A policy that evolves with your organization will always be more effective than a "final" policy gathering dust.

Offer Alternatives Before You Ban

Banning AI without offering an alternative is like taking away the cell phone without providing a landline. Your employees will work around the ban. They always have. Enterprise solutions exist (ChatGPT Enterprise, Azure OpenAI, locally hosted solutions) that let you maintain control over data while delivering the productivity gains your teams are looking for.
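
What does a governed alternative look like in practice? As a minimal sketch, assuming an Azure OpenAI deployment (the endpoint, deployment name, and API version below are placeholders for your own tenant's values), requests go to an endpoint your organization provisions, logs, and controls rather than to a personal account:

```python
import os
from openai import AzureOpenAI  # pip install openai

# All values below are placeholders: use your own tenant's endpoint,
# key management, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://your-tenant.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="your-gpt-deployment",  # the deployment your IT team provisioned
    messages=[{"role": "user",
               "content": "Summarize the variances in this quarter's report."}],
)
print(response.choices[0].message.content)
```

The productivity experience is nearly identical for the employee; the difference is that the traffic, authentication, and data handling all sit inside your security perimeter.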

What's Coming Next: AI Agents

Today's shadow AI primarily involves individual users copying and pasting data into AI tools. But the next wave is already on its way. According to Gartner (August 2025), 40% of enterprise applications will integrate AI agents by the end of 2026, compared to less than 5% in 2025. These agents don't just answer questions. They execute tasks, access systems, and make decisions autonomously.

For organizations that don't yet have visibility into their employees' current AI usage, the arrival of AI agents will multiply the blind spot exponentially.

The Question Is No Longer "If," But "How"

Shadow AI isn't a problem that will resolve itself. Netskope has measured a tripling of generative AI users in the enterprise and a sixfold increase in query volume (from 3,000 to 18,000 per month on average). Organizations upload an average of 8.2 GB of data per month to AI tools, a volume that keeps growing.

Twenty years ago, the question was whether your employees were using unauthorized USB drives. Today, the question is the same, but the data walking out the door is worth infinitely more, and the door is open 24 hours a day.

The question for leaders is no longer whether your employees are using AI. They already are. The question is: do you have the visibility and governance needed to make that usage a performance advantage rather than a hidden risk?


Need Help Taking Back Control?

InfoSec helps organizations with AI governance and privacy compliance. If you don't have a dedicated CISO, our vCISO services provide the strategic leadership to govern AI usage, without a full-time hire.

Assess your cybersecurity maturity for free


Mario Bouchard is President of InfoSec Sécurité de l'information Inc., a strategic cybersecurity consulting firm based in Quebec City. With over 30 years of experience, he helps CISOs and IT leaders turn cybersecurity into a delivery accelerator rather than a roadblock. infosecurite.com


Sources

  1. BlackFog. The State of AI Usage in the Workplace (Sapio Research, November 2025, published January 2026). Statistics on AI adoption by workers (86% weekly usage, 49% unapproved tools, 58% free versions among unsanctioned users, 60% willing to take risks, 63% find usage acceptable without alternatives). blackfog.com

  2. Netskope. Cloud & Threat Report 2026 and Shadow AI and Agentic AI 2025. Data on personal account usage (47%), tripling of genAI users, sixfold increase in query volume, and data upload volumes (8.2 GB/month). netskope.com

  3. IBM. Cost of a Data Breach Report 2025 (PDF). Key data: organizations with high shadow AI levels at $4.74M vs $4.07M for low/none (+$670,000), 20% of organizations affected, 97% inadequate access controls, average cost of $166 per record (customer PII), $178 per record (intellectual property), internal detection at 50% (vs 42% in 2024, 33% in 2023). ibm.com/reports/data-breach

  4. Gartner. Press release, August 26, 2025. Projection: 40% of enterprise applications will integrate AI agents by end of 2026 (vs less than 5% in 2025). gartner.com

  5. OpenAI. Data usage policy: for services intended for individuals (e.g., ChatGPT), content may be used to improve or train models, with an opt-out option via data controls / privacy portal. openai.com/policies

  6. Privacy legislation. Quebec's Law 25 (phased in, final phase effective September 2024), Canada's PIPEDA, and evolving U.S. state privacy laws. Quebec administrative sanctions: up to $10M or 2% of worldwide revenue. Criminal penalties: up to $25M or 4% of worldwide revenue. Notification required to regulator and affected individuals if risk of serious harm, with incident register requirement. legisquebec.gouv.qc.ca