AI-Focused Data Security Report Finds Thousands of Risky AWS Policies Per Account
A new data security report from Varonis uncovers a major cloud governance gap: the average Amazon Web Services (AWS) cloud environment contains more than 3,000 over-permissive access policies, creating a massive and largely invisible attack surface for bad actors to exploit.
While the report surveyed data risks across a range of cloud platforms -- including Microsoft 365 and Salesforce -- AWS was singled out for its volume and sprawl of access policies, with tens of thousands of permission rules per account and thousands flagged as overly broad or risky.
Varonis, a data security and analytics specialist, released its 2025 State of Data Security Report on May 20, highlighting how excessive permissions and AI-driven risks are leaving cloud environments dangerously exposed.
The report, based on an analysis of 1,000 real-world IT environments, paints a troubling picture of enterprise cloud security in the age of AI. Among its most alarming findings: 99% of organizations had sensitive data exposed to AI tools, 98% used unverified or unsanctioned apps -- including shadow AI -- and 88% had stale but still-enabled user accounts that could provide entry points for attackers. Across platforms, weak identity controls, poor policy hygiene, and insufficient enforcement of security baselines like multifactor authentication (MFA) were widespread.
As part of the wide-ranging report, the company specifically analyzed organizations using AWS.
"AWS alone has over 18,000 possible identity and access management permissions to manage," the report noted, underscoring the daunting complexity of securing cloud environments at scale. With such a sprawling permission set, organizations may struggle to enforce least-privilege access. The report cites this complexity as a factor in widespread access misconfigurations, but does not directly link it to specific over-permissive policy criteria.
AWS Analysis (source: Varonis).
Varonis identified problematic AWS policies through its analysis of real-world environments, but the report does not specify how those policies were classified as over-permissive. The analysis suggests that many organizations still lack the controls or visibility needed to effectively audit and tighten cloud access across their environments.
Beyond policy sprawl, the report surfaced several other risks unique to or especially prevalent in AWS environments:
- Massive policy count: The average AWS account had more than 20,000 managed policies, complicating access oversight and increasing the chance of misconfiguration.
- Over-permissive configurations: The report flagged thousands of AWS policies as overly permissive, though it does not specify the exact traits (such as wildcard actions or broad scopes) used in its classification.
- Non-human identities under-secured: Varonis flagged poor credential management and excessive privileges associated with APIs and service accounts.
- Public exposure risks: Some AWS identities were configured to share public links that could expose internal data to unauthorized users or AI tools.
- Unmasked training data: Sensitive cloud data used in AI model training was frequently left unencrypted or exposed to anonymous users, increasing the risk of model poisoning or data leakage.
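On the non-human identity point, a frequent hygiene check, though not one the report attributes to Varonis specifically, is flagging service-account credentials that have sat idle past a rotation window. A hedged sketch with hypothetical inventory data (in practice the input would come from an IAM credential report, not a hard-coded list):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical access-key inventory for service accounts (names are invented)
ACCESS_KEYS = [
    {"user": "ci-deploy-bot",
     "last_used": datetime(2023, 2, 1, tzinfo=timezone.utc)},
    {"user": "billing-export",
     "last_used": datetime(2025, 5, 1, tzinfo=timezone.utc)},
]

def stale_keys(keys, now, max_idle_days=90):
    """Flag keys whose last recorded use is older than the idle window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [k["user"] for k in keys if k["last_used"] < cutoff]

now = datetime(2025, 5, 20, tzinfo=timezone.utc)
print(stale_keys(ACCESS_KEYS, now))  # prints ['ci-deploy-bot']
```

Keys flagged this way are candidates for rotation or deactivation; the report's finding is that such cleanup rarely happens at scale.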
The report surfaces a range of trends across all major cloud platforms, many of which point to systemic weaknesses in access control, data hygiene, and AI governance.

While this article focuses on AWS for readers of AWS Insider, the report itself centers heavily on AI. In an accompanying blog post, the company said:
AI is everywhere. Copilots help employees boost productivity and agents provide front-line customer support. LLMs enable businesses to extract deep insights from their data.
Once unleashed, however, AI acts like a hungry Pac-Man, scanning and analyzing all the data it can grab. If AI surfaces critical data where it doesn't belong, it's game over. Data can't be unbreached.
And AI isn't alone -- sprawling cloud complexities, unsanctioned apps, missing MFA, and more risks are creating a ticking time bomb for enterprise data. Organizations that lack proper data security measures risk a catastrophic breach of their sensitive information.
Key findings include:
- 99% of organizations have sensitive data exposed to AI tools: The report found that nearly all organizations had data accessible to generative AI systems, with 90% of sensitive cloud data, including AI training data, left open to AI access.
- 98% of organizations have unverified apps, including shadow AI: Employees are using unsanctioned AI tools that bypass security controls and increase the risk of data leaks.
- 88% of organizations have stale but enabled ghost users: These dormant accounts often retain access to systems and data, posing risks for lateral movement and undetected access.
- 66% have cloud data exposed to anonymous users: Buckets and repositories are frequently left unprotected, making them easy targets for threat actors.
- 1 in 7 organizations do not enforce multifactor authentication (MFA): The lack of MFA enforcement spans both SaaS and multi-cloud environments and was linked to the largest breach of 2024.
- Only 1 in 10 organizations had labeled files: Poor file classification undermines data governance, making it difficult to apply access controls, encryption, or compliance policies.
- 52% of employees use high-risk OAuth apps: These apps, often unverified or stale, can retain access to sensitive resources long after their last use.
- 92% of companies allow users to create public sharing links: These links can be exploited to expose internal data to AI tools or unauthorized third parties.
- Stale OAuth applications remain active in many environments: These apps may continue accessing data months after being abandoned, often without triggering alerts.
- Model poisoning remains a major threat: Poorly secured training data and unencrypted storage can allow attackers to inject malicious data into AI models.
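The labeling finding is worth dwelling on: without labels, downstream controls like encryption and access policies have nothing to key off. As a toy illustration only (real classification engines use validators, ML models, and proximity rules, not two regexes), pattern-based labeling might look like this:

```python
import re

# Illustrative detectors only; patterns and label names are invented for this sketch
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def label_text(text):
    """Return the set of sensitivity labels whose pattern matches the text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

print(label_text("Contact jane.doe@example.com, SSN 123-45-6789"))
```

Once files carry labels like these, access rules and AI-tool exclusions can be enforced mechanically rather than by guesswork.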
The report offers a sobering assessment of how AI adoption is magnifying long-standing issues in cloud security. From excessive access permissions in AWS to shadow AI, stale user accounts, and exposed training data, the findings make clear that many organizations are not prepared for the speed and scale of today's risks. The report urges organizations to reduce their data exposure, implement strong access controls, and treat data security as foundational to responsible AI use.
About the Author
David Ramel is an editor and writer at Converge 360.