Somewhere between the promise of productivity and the reality of deployment, Microsoft’s AI assistant did something it was never supposed to do. It read the mail.
Confidential mail, specifically. Drafts users never sent. Messages flagged with sensitivity labels, protected by data loss prevention policies, sealed behind the kind of enterprise-grade controls that IT departments spend considerable budget maintaining. None of it mattered. Microsoft 365 Copilot Chat swept through those folders anyway, summarised what it found, and surfaced it to users who had every reason to believe those contents were protected.
Microsoft has since pushed a fix and offered the standard assurance that no one accessed anything they weren’t already authorised to see. That framing, while technically careful, quietly sidesteps the more uncomfortable question: why was the AI reading it at all?

Speed Has a Cost, and Enterprises Are Paying It
The pressure to ship AI features fast is no longer a Silicon Valley cliché — it is a boardroom mandate. Microsoft, Google, and their rivals are locked in a race where the prize is enterprise adoption at scale, and falling even one quarter behind feels existential. In that environment, governance does not lead to development. It chases it.
Gartner analyst Nader Henein put it plainly: organisations want to switch off questionable features and wait for oversight to catch up, but the weight of competitive hype makes that politically untenable inside most companies. Disabling Copilot means explaining to leadership why your organisation is sitting out the AI revolution. Few executives are willing to have that conversation.
The result is predictable. Features land in production environments before edge cases are fully understood, and the edge cases turn out to involve confidential NHS communications and corporate emails that were explicitly marked off-limits.
The Default Problem
At the core of this incident sits a design philosophy that the industry has resisted changing for years: new capabilities default to on. Users and administrators must actively discover, understand, and disable features they never asked for — often without clear documentation that those features exist.
Professor Alan Woodward of the University of Surrey has argued for inverting that logic entirely: make AI tools private-by-default, require deliberate opt-in, and let organisations expand access on their own terms rather than scrambling to contain access they never granted. It is a reasonable position. It is also, notably, one that would slow adoption metrics, which may explain why it remains a minority view inside the companies building these tools.
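The difference between the two postures is small in code but large in consequence. The sketch below is purely illustrative: the names (Tenant, is_enabled) are hypothetical and do not correspond to any real Microsoft configuration surface. It simply shows that under a default-on policy, an organisation that takes no action at all is already exposed, whereas under opt-in, inaction is the safe state.

```python
from dataclasses import dataclass, field

# Hypothetical model of feature rollout policy. "Tenant" and the feature
# name below are illustrative, not any real Microsoft 365 API or setting.

@dataclass
class Tenant:
    name: str
    opted_in: set[str] = field(default_factory=set)
    opted_out: set[str] = field(default_factory=set)

def is_enabled(tenant: Tenant, feature: str, default_on: bool) -> bool:
    """Resolve whether a feature is active for a tenant.

    default_on=True models the prevailing ship-it-enabled approach: the
    tenant must discover the feature and explicitly opt out.
    default_on=False models the private-by-default, opt-in posture.
    """
    if default_on:
        return feature not in tenant.opted_out
    return feature in tenant.opted_in

# A tenant that has taken no action at all:
t = Tenant("acme")

# Under default-on, the new summarisation feature is already running:
assert is_enabled(t, "ai_summaries", default_on=True)

# Under opt-in, it stays off until the organisation deliberately enables it:
assert not is_enabled(t, "ai_summaries", default_on=False)
t.opted_in.add("ai_summaries")
assert is_enabled(t, "ai_summaries", default_on=False)
```

The point of the sketch is that the risk sits entirely in the default argument, not in the mechanism: both policies are trivial to implement, and the industry's preference for the first is a business choice, not a technical constraint.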
What This Actually Tells Us
One misconfiguration does not indict an entire technology. But the pattern it represents deserves scrutiny. This is not the first time an enterprise AI tool has accessed data outside its intended scope, and given the complexity of these systems and the pace of their development, it will not be the last.
The more significant issue is institutional. Organisations are deploying tools whose full behaviour they do not yet understand, inside regulatory environments — healthcare, finance, legal — where the consequences of data exposure are not abstract. Microsoft’s fix arrived. The question worth sitting with is how many similar issues are currently running in production, undetected, in environments where the stakes are considerably higher than a leaked draft email.
The AI read the mail. This time, someone noticed.

