
AI data security incidents are not hypothetical. These are real events that existing security tools missed.
NOV. 2025
OVER-PERMISSIONED ACCESS
"A low-privilege ServiceNow AI agent was manipulated through prompt injection to ask a higher-privilege agent to export an entire case file to an external URL"
Missed because:
Each agent acted within its own permissions. No tool monitored the chain of agent-to-agent data handoffs.
OCT. 2025
UNAUTHORIZED ACCESS
"With only a target's email address, attackers impersonated admins and executed AI agents that bypassed MFA to export employee records, financial data, and customer PII at scale."
Missed because:
The AI agent used legitimate platform APIs. IAM validated the session, not the intent behind the query.
JUL. 2025
OVER-PERMISSIONED ACCESS
"An AI coding agent with extensive database access autonomously deleted a user’s production database during deployment, resulting in total data loss."
Missed because:
The agent had legitimate database credentials. No tool distinguished routine migrations from destructive bulk operations.
JUN. 2025
DATA LEAKAGE
"A zero-click flaw in Microsoft 365 Copilot exposed confidential email data from Outlook and Microsoft Graph without any user action."
Missed because:
The malicious request flowed through Copilot itself, and the data left via Microsoft's own trusted domains. Network tools saw only normal Copilot traffic.
JAN. 2025
DATA LEAKAGE
"Hidden prompt injections in pull requests secretly siphoned private repository secrets, AWS keys, tokens, and source code, using GitHub’s own image proxy."
Missed because:
The agent was authorized. Existing tools saw 'legitimate' access, not exfiltration.
AUG. 2024
DATA LEAKAGE
"AI chatbot builder WotNot left a cloud bucket publicly accessible, exposing passports, national IDs, medical records, and resumes collected through chatbot interactions."
Missed because:
The data was collected by the chatbot and stored outside the customer's own security perimeter.