AI Mitigations

Expert-curated AI risk mitigation strategies for enterprise governance. Each mitigation includes implementation guidance, suggested evidence, and links to the specific AI risks it addresses.

Organizational

Access Controls

Implementing measures to ensure that only authorized individuals can access, modify, or utilize AI systems and their data.

Organizational

AI Literacy Training

Educating employees about AI systems, their potential impacts, and ethical and regulatory considerations.

Product

AI Use Disclosure/Disclaimers

Clearly disclosing the use of AI to system users.

Organizational

AI Use Policy

Establishing a documented set of policies and procedures outlining expectations for how to use AI tools.

Technical

Algorithmic Bias Mitigation

Incorporating techniques for mitigating bias into the model pipeline.

Product

Appeal Process for System Subjects

Creating a process that allows individuals to contest decisions made by an AI system.

Technical

Audit Logs

Maintaining detailed records of activities within AI systems.

Technical

Code Version Control

Using a version control system (e.g. Github) to keep track of all code used during development and deployment.

Technical

Collect Diverse Training Data

Gathering data from various sources to ensure AI models are fair, unbiased, and accurate across different scenarios

Technical

Data Anonymization Preprocessing

Removing sensitive information from training data.

Technical

Data Encryption

Encrypting training and inference data to prevent unauthorized access.

Technical

Data Monitoring

Implementing data quality checks on datasets.

Technical

Data Separation

Keeping AI system data separate from other types of data.

Technical

Data Versioning

Maintaining a clear record of the exact data used to train different model versions.

Technical

Differential Privacy

Using techniques to obfuscate private information in datasets.

Organizational

Documentation Standards

Setting rigorous standards and processes for documenting AI models and dataset.

Technical

Ensemble Model Methods

Combining several base models to produce a more robust final model.

Technical

Explainable Models

Using models that are inherently transparent and understandable.

Product

Explanations for System Outputs

Providing clear explanations alongside AI system outputs.

Organizational

Feedback Mechanisms and Stakeholder Participation

Soliciting participation from impacted stakeholders throughout the AI lifecycle.

Technical

Gold-Standard Validation Data

Creating a high-quality dataset for evaluating data labeling and model performance

Technical

Hallucination Detection Guardrails

Implementing a mechanism for detecting hallucinations in the output of models.

Product

Human Override System

Creating systems and tools that allows an individual to modify the output of an AI system.

Product

Human Verification or Approval

Incorporating human review and approval processes in AI systems.

Technical

Identify Verification for Access

Requiring user authentication to access the AI System.

Organizational

Incident Documentation

Maintaining records of incidents and their resolutions.

Organizational

Incident Response Plan

Creating a plan that outlines roles, responsibilities, escalation paths and external communication protocols for AI incidents.

Technical

Input Checks

Implementing tools to filter and validate inputs to AI systems.

Technical

Internally-built models

Building systems from scratch to avoid using vulnerable components from other sources.

Product

Limit Public Release of Information

Limiting the public release of technical information about the system.

Organizational

Manual Data Review

Manually reviewing training data to ensure quality and identify potential biases or errors.

Organizational

Manual QA

Using manual quality assurance tests to verify system accuracy.

Technical

Minimize Access of AI System

Granting AI Systems access to only the minimum set of external systems and resources needed to function effectively.

Product

Model Documentation

Providing users with technical information about the AI system's data, design, performance and capabilities.

Technical

Model Encryption

Encrypting models and other assets during storage and transfer.

Technical

Model Hyperparameter Controls

Adjusting hyperparameters to control the diversity, creativity, and determinism of the model's outputs.

Technical

Model Monitoring System

Implementing a system to track and evaluate the performance of deployed AI systems.

Technical

Model Retraining

Retraining models on new data on a regular schedule.

Technical

Model Tracking System

Implementing a system to track different versions of AI models, data, and code.

Organizational

Multiple Annotators

Using multiple annotators to improve the quality and accuracy of data labeling.

Product

Opt-In System

Limiting AI-supported decisions to cases where users explicitly request it.

Organizational

Organizational Data Policy

Implementing comprehensive policies and procedures to manage data collection and storage activities.

Technical

Output Checks

Implementing tools to flag unsafe content in system outputs.

Technical

Performance Requirements

Defining and enforcing the least acceptable level of accuracy for a model.

Technical

Periodic System Review

Review system performance and conducting new risk assessment on a periodic basis after a system is deployed.

Technical

Prompt Boundary Defenses

Using prompts that create a clear boundary between the instructions and the user input.

Technical

Prompting for Reasoning and Self-Correction

Using prompting techniques, like Chain-of-Thought and Self-Refinement, can reduce the likelihood of LLM hallucinations.

Product

Provide Human Alternative

Including an option for a human to intervene and take over the task.

Technical

Rate-Limit System Inputs

Rate limiting access to AI Systems to prevent malicious use.

Organizational

Red-Team Testing

Adversarially testing a system for potential vulnerabilities.

Product

Require Age Confirmation

Implementing an age verification system for users.

Organizational

Resource Efficiency and Renewable Energy

Assessing the impact of the AI System on natural resources and prioritizing renewable energy sources.

Technical

Restricted Development Environments

Creating restricted development environments that limit access to external resources.

Technical

Retrieval-Augmented Generation

Combining a Large Language Models with an external knowledge bases.

Technical

Sanitize Training Data

Sanitizing training data prior to use to remove both inappropriate and poisoned content.

Product

Seamful Design

Intentionally incorporating and highlighting frictions within the user interface to encourage reflection, critical thinking, and intentional use.

Technical

Secondary Models

Maintaining secondary models that can be deployed in the event that primary models fail.

Technical

Secure Asset Sharing

Using secure and encrypted transfer methods when moving assets, like data and models.

Technical

Self-hosted Models

Hosting externally built models inside of existing architectures or firewalls.

Technical

Structured Inputs and Outputs

Using structured inputs and outputs over free-form text to improve reliability and safety.

Technical

Synthetic Data

Using synthetic data to augment and expand datasets to more completely cover the types of data seen in the deployed setting.

Technical

System Prompt

Providing an instruction to an AI model to guide its responses and behavior according to specific guidelines or objectives.

Technical

Unit Tests

Creating simple tests that validate whether parts of the system function as expected.

Product

User Assessment

Testing the operator's competency in human-in-the-loop systems.

Product

User Consent

Obtaining consent from the user before using their data to train or fine-tune AI systems.

Organizational

Verify Data and Model Sources

Taking measures to ensure that external data and model sources are trustworthy.

Technical

Vulnerability Scanning

Systematically examining AI systems and related assets for potential security weaknesses, threats, and signs of malicious activity.

Cite this taxonomy
Trustible. "AI Mitigations Taxonomy." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-mitigations/

Operationalize AI Mitigations

Trustible recommends specific mitigations based on your AI use cases and risk profiles, with evidence tracking and accountability built in.

Explore the Platform