A comprehensive, expert-curated taxonomy of AI risks for enterprise governance. Each risk includes descriptions, real-world examples, recommended mitigations, and mappings to regulatory frameworks including the EU AI Act, NIST AI RMF, and ISO 42001.
Attackers exploit vulnerabilities in how AI agents store, maintain, and utilize contextual information and memory across sessions.
Malicious actors can create attacks that target vulnerabilities in how multiple AI agents interact, coordinate, and communicate with each other.
AI Agents can execute complex actions and interact with multiple systems in a manner that is difficult to log and audit.
Users may attribute human-like qualities, emotions, or intentions to AI conversational agents, leading to unrealistic expectations and potential misuse.
Data, Models and additional IP can be stolen due to ineffective storage and encryption practices.
Prompts submitted to AI systems may include confidential, sensitive, or proprietary information leading potential to privacy violations, data leakage, or IP exposure.
Generative AI models can output content that violates copyright or IP laws.
Dataset should be reviewed for legal concerns including commercial license restrictions on public datasets and privacy restrictions for user data.
Data poisoning attacks involve the intentional injection of misleading or corrupted data into the training dataset of AI models, aiming to degrade the performance or…
Adversaries can overwhelm an AI system with a large requests resulting in degraded performance or system downtime.
AI Systems may produce different outcomes across different populations.
AI Systems can have a negative environmental impact through the training process or through the decisions made during inference.
AI Systems may be granted write permissions to other systems that can result in undesirable actions.
Model performance can be worse than expected in the deployed environment.
LLMs can output information that is factually incorrect but presented as fact.
Generative models can output harmful content (e.g hate speech) that is inappropriate or illegal.
Data should be collected in a manner that is compliant with the regulations in the area that users operate in. This includes both collecting consent…
Data retention policies regulate how long data may be stored, for what purpose, and how it must be secured.
If an AI system doesn't have proper monitoring and logging, problems like errors, misuse, or attacks can go unnoticed, and can be hard to find…
Indirect Prompt Injection Attacks involve modifying an LLMs behavior through external content accessed by the model.
AI systems need to be designed with sufficient human intervention options in mind.
The organization lacks processes or capabilities to detect, respond to and recover from incidents involving AI systems.
Legal and regulatory frameworks may have requirements about data that must be kept from AI system decisions.
Insufficient training can result in users misinterpreting system outputs or misunderstanding system limitations.
Insufficiently disclosures the use of AI can have negative legal implications and create mistrust from those impacted by the system.
Training data may come from a variety of sources and may undergo complex transformations. Insufficient tracking may lead to performance, security and legal challenges.
AI systems produce outputs that lack transparency and can not be directly explained by humans.
Groups of individuals may be over or under represented, or misrepresented, in model-generated content.
A generative model can reveal personal information (i.e. PII) about individuals from the training data or connected systems (e.g. in a RAG set-up).
A generative model can reveal proprietary or confidential information from the training data or connected systems (e.g. in a RAG set-up).
AI system failures or incidents can result in loss of revenue for organization's deploying the systems.
Training data quality has a direct impact on model quality. Quality checks should be applied to the original data sources and to any preprocessing assumptions.
AI Systems may be developed in an ad-hoc manner resulting in challenges with reproducibility and accountability.
Generative AI systems can be used maliciously to the detriment of individuals or society.
Model Evasion attacks manipulate inputs to get desired outputs from a model.
Publicly available models may have restrictions on their commercial use.
The introduction or use of an AI system leads to decreased employee engagement, satisfaction, or motivation.
AI system failures or incidents can result in bad publicity for an organization.
Models can produce inconsistent results for the same or similar inputs.
Overreliance on AI occurs when excessive trust is placed AI system and results in reduced human oversight.
Models may exhibit a performance gap between different populations.
The accuracy and consistency of labels assigned to training data significantly impact the performance and reliability of AI models.
Failures in the retrieval component of retrieval-augmented systems (e.g. RAG) can lead to inaccurate, irrelevant, outdated, or conflicting documents being surfaced.
LLM inputs can be manipulated to get an output different from the system’s intended purpose. This behavior is sometimes referred to as jailbreaking.
External Datasets, Models, Software and Hardware may be compromised by bad actors resulting in adversarial attacks.
Malicious actors can extract information about datasets, models and system prompts from an AI System, and use it to subvert the system or steal sensitive…
Technical systems can malfunction due to a variety of hardware, software or vendor issues.
AI systems can present a new vector of cyberattacks due to weak authentication, poor access controls or misconfigured permissions.
Individuals within the organization can use the system for purposes that are out-of-scope for the system.
Users may fail to adopt, trust, or effectively leverage an AI system, leading to suboptimal outcomes and unrealized technology investments.
Implementing and maintaining AI Systems may lead to unexpected spending associated with maintenance, implementation and compliance.
AI systems may be exposed to inputs outside of an expected range and need to have a planned failure mode.
Interacting with AI systems can lead to frustration for the end user, especially in situations where the system does not functions as intended is required.
User resistance occurs when system users do not want to utilize AI Systems due to reasons like distrust or lack of training.
Widespread use of generative AI cause organizations to eliminate jobs.
Trustible embeds this risk taxonomy directly into enterprise AI governance workflows, so teams can identify, assess, and mitigate risks without starting from scratch.
Explore the Platform