AI Risks

A comprehensive, expert-curated taxonomy of AI risks for enterprise governance. Each risk includes descriptions, real-world examples, recommended mitigations, and mappings to regulatory frameworks including the EU AI Act, NIST AI RMF, and ISO 42001.

Generative AI

Agent Memory Manipulation

Attackers exploit vulnerabilities in how AI agents store, maintain, and utilize contextual information and memory across sessions.

Generative AI

Agent Orchestration and Multi-Agent Exploitation

Malicious actors can create attacks that target vulnerabilities in how multiple AI agents interact, coordinate, and communicate with each other.

Generative AI

Agent Untraceability

AI Agents can execute complex actions and interact with multiple systems in a manner that is difficult to log and audit.

Generative AI

Anthropomorphizing Conversational Agents

Users may attribute human-like qualities, emotions, or intentions to AI conversational agents, leading to unrealistic expectations and potential misuse.

Security

Asset Theft

Data, models, and other IP can be stolen due to ineffective storage and encryption practices.

Privacy

Confidential Data in Input

Prompts submitted to AI systems may include confidential, sensitive, or proprietary information, potentially leading to privacy violations, data leakage, or IP exposure.

Legal

Copyright and IP Violations

Generative AI models can output content that violates copyright or IP laws.

Legal

Data Legality

Datasets should be reviewed for legal concerns, including commercial license restrictions on public datasets and privacy restrictions on user data.

Security

Data Poisoning

Data poisoning attacks involve the intentional injection of misleading or corrupted data into the training dataset of AI models, aiming to degrade model performance or manipulate model behavior.

Security

Denial of ML Service

Adversaries can overwhelm an AI system with a large volume of requests, resulting in degraded performance or system downtime.
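A common mitigation is request rate limiting in front of the model endpoint. The sketch below is a minimal token-bucket limiter under assumed parameters; the `TokenBucket` class and its interface are illustrative, not a specific product API.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter to shed excess inference requests."""

    def __init__(self, rate_per_s: float, capacity: int):
        self.rate = rate_per_s          # tokens added per second
        self.capacity = capacity        # burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With no refill and capacity 2, only the first two requests pass.
bucket = TokenBucket(rate_per_s=0.0, capacity=2)
results = [bucket.allow() for _ in range(3)]  # → [True, True, False]
```

In production this check would sit at the API gateway, keyed per client, so one abusive caller cannot exhaust capacity for everyone.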

Bias and Fairness

Disparate Outcomes for Individuals and Groups

AI Systems may produce different outcomes across different populations.

System

Environmental Impact

AI Systems can have a negative environmental impact through the training process or through the decisions made during inference.

Security

Excessive Agency

AI Systems may be granted write permissions to other systems that can result in undesirable actions.

System

External Model Deprecation

External models may be removed or change in quality.

Performance

Generalization Failure and Performance Drift

Model performance can be worse than expected in the deployed environment.

Generative AI

Hallucination

LLMs can output information that is factually incorrect but presented as fact.

Generative AI

Harmful and Inappropriate Content

Generative models can output harmful content (e.g., hate speech) that is inappropriate or illegal.

Generative AI

Harmful Code Generation

Code generated by LLMs may contain vulnerabilities.

Privacy

Inadequate Data Collection Practices

Data should be collected in a manner that is compliant with the regulations in the areas where users operate. This includes obtaining user consent.

Privacy

Inadequate Data Retention and Deletion Practices

Data retention policies regulate how long data may be stored, for what purpose, and how it must be secured.

System

Inadequate Monitoring and Logging

If an AI system doesn't have proper monitoring and logging, problems like errors, misuse, or attacks can go unnoticed and be hard to diagnose.
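As a minimal sketch of this practice, each model call can be wrapped so that a structured, per-request audit record is emitted. The `logged_inference` wrapper and field names here are hypothetical, not a specific logging standard.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def logged_inference(model_fn, prompt: str) -> str:
    """Wrap a model call so every request/response pair is auditable."""
    request_id = str(uuid.uuid4())
    start = time.monotonic()
    output = model_fn(prompt)
    # Emit one structured JSON record per request for later audit.
    logger.info(json.dumps({
        "request_id": request_id,
        "latency_s": round(time.monotonic() - start, 3),
        "prompt_chars": len(prompt),
        "output_chars": len(output),
    }))
    return output

# Usage with a stand-in model:
result = logged_inference(lambda p: p.upper(), "hello")  # → "HELLO"
```

Logging sizes and IDs rather than raw prompt text is a deliberate choice here: it keeps the audit trail useful without copying potentially sensitive inputs into logs.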

Generative AI

Indirect Prompt Injection

Indirect prompt injection attacks involve modifying an LLM's behavior through external content accessed by the model.

Users

Insufficient Human Intervention Options

AI systems need to be designed with sufficient human intervention options in mind.

Security

Insufficient Incident Response

The organization lacks processes or capabilities to detect, respond to and recover from incidents involving AI systems.

Legal

Insufficient Record Keeping

Legal and regulatory frameworks may have requirements about data that must be kept from AI system decisions.

Users

Insufficient User Training

Insufficient training can result in users misinterpreting system outputs or misunderstanding system limitations.

Legal

Lack of AI Use Disclosure

Insufficient disclosure of AI use can have negative legal implications and create mistrust among those impacted by the system.

System

Lack of Data Provenance

Training data may come from a variety of sources and may undergo complex transformations. Insufficient tracking may lead to performance, security and legal challenges.

Legal

Lack of Explainability

AI systems produce outputs that lack transparency and cannot be directly explained by humans.

Bias and Fairness

Lack of Representation in Generated Content

Groups of individuals may be over or under represented, or misrepresented, in model-generated content.

Privacy

Leaking Personal Data

A generative model can reveal personal information (i.e. PII) about individuals from the training data or connected systems (e.g. in a RAG set-up).

Privacy

Leaking Proprietary Data

A generative model can reveal proprietary or confidential information from the training data or connected systems (e.g. in a RAG set-up).

Operational

Loss of Revenue

AI system failures or incidents can result in loss of revenue for organizations deploying the systems.

Performance

Low Data Quality

Training data quality has a direct impact on model quality. Quality checks should be applied to the original data sources and to any preprocessing assumptions.

System

Low Model Traceability

AI Systems may be developed in an ad-hoc manner resulting in challenges with reproducibility and accountability.

Security

Malicious Use

Generative AI systems can be used maliciously to the detriment of individuals or society.

Performance

Misclassification and Model Errors

AI Systems may produce an output that is incorrect.

Security

Model Evasion Attack

Model Evasion attacks manipulate inputs to get desired outputs from a model.

Legal

Model Use Restrictions

Publicly available models may have restrictions on their commercial use.

Operational

Negative Morale Impact

The introduction or use of an AI system leads to decreased employee engagement, satisfaction, or motivation.

Operational

Negative Reputational Impact

AI system failures or incidents can result in bad publicity for an organization.

Performance

Output Inconsistency

Models can produce inconsistent results for the same or similar inputs.

Users

Overreliance on AI

Overreliance on AI occurs when excessive trust is placed in an AI system, resulting in reduced human oversight.

Bias and Fairness

Performance Gap Between Populations

Models may exhibit a performance gap between different populations.

Performance

Poor Data Labeling Quality

The accuracy and consistency of labels assigned to training data significantly impact the performance and reliability of AI models.

Performance

Poor Document Retrieval Accuracy

Failures in the retrieval component of retrieval-augmented systems (e.g. RAG) can lead to inaccurate, irrelevant, outdated, or conflicting documents being surfaced.

Generative AI

Prompt Manipulation and Hacking

LLM inputs can be manipulated to get an output different from the system’s intended purpose. This behavior is sometimes referred to as jailbreaking.

Security

Supply Chain Compromise

External datasets, models, software, and hardware may be compromised by bad actors, resulting in adversarial attacks.

Security

System Information Extraction Attack

Malicious actors can extract information about datasets, models, and system prompts from an AI system, and use it to subvert the system or steal sensitive data.

System

System Outage

Technical systems can malfunction due to a variety of hardware, software or vendor issues.

Security

Unauthorized System Access

AI systems can present a new vector of cyberattacks due to weak authentication, poor access controls or misconfigured permissions.

Users

Unauthorized Use

Individuals within the organization can use the system for purposes that are out-of-scope for the system.

Operational

Underutilization

Users may fail to adopt, trust, or effectively leverage an AI system, leading to suboptimal outcomes and unrealized technology investments.

Operational

Unexpected Costs

AI systems may lead to unexpected spending associated with implementation, maintenance, and compliance.

Performance

Unexpected Inputs

AI systems may be exposed to inputs outside of an expected range and need a planned failure mode.
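One way to implement a planned failure mode is to validate inputs against the range the model was trained on and return an explicit out-of-distribution signal instead of a guess. The function name, range bounds, and labels below are assumptions for illustration.

```python
def classify_temperature(celsius: float) -> str:
    """Reject out-of-range inputs instead of silently producing a prediction."""
    EXPECTED_MIN, EXPECTED_MAX = -50.0, 60.0  # assumed training-data range
    if not isinstance(celsius, (int, float)):
        raise TypeError("input must be numeric")
    if not EXPECTED_MIN <= celsius <= EXPECTED_MAX:
        # Planned failure mode: flag the input rather than extrapolate.
        return "out_of_distribution"
    return "hot" if celsius > 30 else "normal"

classify_temperature(25.0)   # → "normal"
classify_temperature(100.0)  # → "out_of_distribution"
```

Downstream callers can then route flagged inputs to a human reviewer or a safe default rather than acting on an unreliable prediction.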

Legal

Use by Minors

Special restrictions may apply to systems that are accessible to minors.

Users

User Frustration

Interacting with AI systems can lead to frustration for the end user, especially when the system does not function as intended.

Users

User Resistance

User resistance occurs when system users do not want to utilize AI Systems due to reasons like distrust or lack of training.

Generative AI

Worker Displacement

Widespread use of generative AI can cause organizations to eliminate jobs.

Cite this taxonomy
Trustible. "AI Risks Taxonomy." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/

Manage AI Risk at Scale

Trustible embeds this risk taxonomy directly into enterprise AI governance workflows, so teams can identify, assess, and mitigate risks without starting from scratch.
