89% of businesses are adopting AI workloads but failing to manage exposures
According to the ‘State of Cloud and AI Security 2025’ report by Tenable, developed in collaboration with the Cloud Security Alliance, 89% of organisations today are either running (55%) or piloting (34%) AI workloads.
This rapid adoption of artificial intelligence (AI) has dramatically expanded the attack surface. More than one-third of organisations with AI workloads (34%) have already experienced an AI-related breach, driven by exploited vulnerabilities, model flaws, and insider threats. Yet, most security programmes remain focused on futuristic scenarios rather than today’s exposures.
The report reveals that the root cause of the breaches stems from foundational security failures, not complex model manipulation. The top causes were exploited software vulnerabilities (21%), AI model flaws (19%), and insider threats (18%). By contrast, organisations reported being most concerned about novel, futuristic risks such as model manipulation (18%) or unauthorised AI models (15%), showing a clear disconnect between real-world AI exposures and perceived threats.
The study also revealed a compliance-heavy but technically shallow approach to AI exposure. More than half of organisations (51%) rely on frameworks such as the NIST AI Risk Management Framework or the EU AI Act to guide their strategies. Yet, when it comes to classifying and encrypting AI data, 78% of organisations have not implemented both practices – indicating that most are missing at least one of these foundational safeguards. Only 22% classify and encrypt AI data, and just 26% conduct AI-specific security testing, such as red-teaming.
“The data shows us that AI breaches are already here and confirms what we’ve been warning about: most organisations are looking in the wrong direction,” said Liat Hayun, VP of Product and Research, Tenable. “The real risks come from familiar exposures – identity, misconfigurations, vulnerabilities – not science-fiction scenarios. Without addressing these fundamentals, AI environments will remain exposed.”
Regulatory frameworks are a necessary foundation, but they cannot substitute for technical depth; the speed of AI adoption requires organisations to go beyond compliance and integrate AI-specific exposures into their broader security strategies.
The research indicates that organisations should:
- Treat compliance as the starting point, not the finish line.
- Prioritise foundational controls – identity governance, misconfiguration monitoring, workload hardening, and access management – within AI environments.
- Embed AI-specific exposures into unified risk strategies across hybrid and multi-cloud infrastructures.