AI Risk · Security

Excessive Agency

AI systems may be granted write permissions to other systems, which can result in undesirable actions being taken.

📋 Description

AI systems can be designed to take actions automatically, for example sending an email or issuing a refund. Without proper controls, these actions can be taken in undesirable situations. Multiple root causes are possible, including a routine misclassification or hallucination, or exploitation by an adversarial party.

The risk becomes more complex in agent-based or multi-agent systems, where autonomous decision-making amplifies the effects of errors:

- A single AI agent operating in a workflow or decision loop may act on incorrect classifications without human review, triggering irreversible outcomes.
- In multi-agent systems, even with more stages in the process, one agent’s error may propagate or escalate through downstream interactions with other agents, making failures harder to detect and correct.

Excessive Agency is a vulnerability that enables damaging actions to be performed in response to unexpected, ambiguous, or manipulated outputs from an LLM, regardless of what is causing the LLM to malfunction.

Common triggers include:

- Hallucination or confabulation caused by a poorly engineered but benign prompt, or simply a poorly performing model.
- Direct or indirect prompt injection from a malicious user, an earlier invocation of a malicious/compromised extension, or (in multi-agent/collaborative systems) a malicious/compromised peer agent.
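Because any of these triggers can cause the model to request a damaging action, a common mitigation is to gate the agent's tool calls rather than trust its output. The sketch below illustrates the idea with a minimal allowlist plus a human-approval hold for write actions; the names (`ALLOWED_TOOLS`, `REQUIRES_APPROVAL`, `dispatch`, `approve`) are illustrative assumptions, not part of any specific agent framework.

```python
# Minimal sketch of a permission gate for agent tool calls.
# Assumes a hypothetical agent whose proposed actions arrive as
# (tool_name, args) pairs; all identifiers here are illustrative.

ALLOWED_TOOLS = {"send_email", "issue_refund", "lookup_order"}

# Write/irreversible actions that must pause for human review.
REQUIRES_APPROVAL = {"send_email", "issue_refund"}

def dispatch(tool_name, args, approve=lambda tool, args: False):
    """Execute a tool call only if it is allowlisted, and only after
    explicit human approval for write actions (default: deny)."""
    if tool_name not in ALLOWED_TOOLS:
        # Unknown or unexpected tools are refused outright.
        raise PermissionError(f"Tool not allowlisted: {tool_name}")
    if tool_name in REQUIRES_APPROVAL and not approve(tool_name, args):
        # Hold the action instead of executing it immediately.
        return {"status": "held", "reason": "awaiting human approval"}
    return {"status": "executed", "tool": tool_name, "args": args}
```

The key design choice is fail-closed behavior: a hallucinated or injected tool call either raises an error (not allowlisted) or is held for review (write action), so a malfunctioning model cannot trigger an irreversible outcome on its own.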

🔍 Public Examples and Common Patterns

- AIID Incident 111: Amazon Flex Drivers Allegedly Fired via Automated Employee Evaluations: Amazon Flex contract delivery drivers were allegedly dismissed by an automated performance-evaluation system with minimal human involvement. The evaluations relied on indicators affected by factors outside the drivers' control, and drivers had no opportunity to contest or appeal the decision.

📐 External Framework Mapping

- OWASP LLM Top 10: LLM06:2025 - Excessive Agency
- IBM Risk Atlas: Automating tasks with AI agents
- Databricks AI Security Framework: 9.13 - Excessive Agency

Cite this page

Trustible. "Excessive Agency." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/excessive-agency/
