
Unauthorized System Access

AI systems can present a new vector for cyberattacks when authentication is weak, access controls are lax, or permissions are misconfigured.

📋 Description

AI systems often expose endpoints, model APIs, data pipelines, or training environments that, if inadequately protected, can become attack vectors. Unauthorized access may occur due to weak authentication, over-permissioned accounts, hardcoded credentials, or misconfigured cloud resources. This can lead to model theft, manipulation of outputs, exposure of training data, or unauthorized deployment of AI agents. Because many AI components operate outside traditional IT monitoring, attackers may exploit these gaps to persist undetected.
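One of the patterns above, hardcoded credentials, has a simple mitigation: load secrets from the environment and compare presented tokens in constant time. The sketch below is illustrative only (the function and variable names are hypothetical, not any particular framework's API):

```python
import hmac
import os


def authorize(header_value, expected_key):
    """Check an Authorization header against the expected API key.

    Returns True only for a well-formed "Bearer <token>" header whose
    token matches. hmac.compare_digest runs in constant time, avoiding
    the timing side channel that a plain == comparison can leak.
    """
    if not header_value or not header_value.startswith("Bearer "):
        return False
    presented = header_value.removeprefix("Bearer ")
    return hmac.compare_digest(presented.encode(), expected_key.encode())


# The key comes from the environment at deploy time, never from source
# code or a committed config file.
API_KEY = os.environ.get("MODEL_API_KEY", "")
```

A gateway in front of a model API would call `authorize(request_header, API_KEY)` before forwarding any inference request, rejecting the call with a 401 otherwise.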

🔍 Public Examples and Common Patterns

- Incident 898: Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials: Attackers reportedly exploited cloud credentials stolen via a vulnerable Laravel application (CVE-2021-3129) to abuse AI cloud services, including Anthropic’s Claude and AWS Bedrock, in a scheme dubbed “LLMjacking.” The attackers are said to have monetized the access through reverse proxies, reportedly inflating victims’ costs by as much as $100,000 per day.
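Runaway spend like the incident above is detectable from billing data alone: abuse of stolen credentials tends to show up as a sudden, large jump over a victim's normal daily cost. A minimal anomaly-detection sketch (hypothetical function name and thresholds, not any cloud vendor's API):

```python
def flag_cost_spikes(daily_costs, baseline_days=7, multiplier=5.0):
    """Flag days whose spend far exceeds the trailing average.

    daily_costs: list of (date, cost) pairs in chronological order,
    e.g. exported from a cloud billing report.
    Returns the dates where cost > multiplier x the mean of the
    preceding `baseline_days` days -- the signature of sudden abuse.
    """
    flagged = []
    for i, (day, cost) in enumerate(daily_costs):
        window = [c for _, c in daily_costs[max(0, i - baseline_days):i]]
        if not window:
            continue  # no baseline yet for the first day
        baseline = sum(window) / len(window)
        if baseline > 0 and cost > multiplier * baseline:
            flagged.append(day)
    return flagged
```

In practice such a check would feed an alerting pipeline; the threshold and window are tuning choices, and a per-model or per-account breakdown narrows the blast radius faster than aggregate spend.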

📐 External Framework Mapping

- Databricks AI Security Framework: 12.4 - Unauthorized privileged access
Cite this page

Trustible. "Unauthorized System Access." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/unauthorized-access/
