
Adopt the new, auditable standard for global AI governance

ISO/IEC 42001 is positioned to be the first global, auditable standard that enables organizations to demonstrate the trustworthiness of their AI systems.

What is ISO 42001?

ISO/IEC 42001 is a voluntary standard for organizations to implement an artificial intelligence (AI) management system. Under ISO 42001, an organization’s AI management system sets policies, procedures, and objectives for its AI systems. ISO 42001 is intended to set a baseline governance standard for all AI systems within an organization, rather than focusing on specific types of AI systems (e.g., high-risk AI).


Key Requirements of ISO 42001

  • Requirement: Create an AI Management System (AIMS), and clearly document the internal and external context of your organization related to AI. This includes identifying relevant stakeholders, regulations, and the scope of AI use for your organization.

    How Trustible™ Helps: Trustible helps collect and document all the information related to AI systems, and offers policy templates aligned with ISO 42001 to help organizations bootstrap their AI Management System.
  • Requirement: Identify the relevant executive leaders, and map out roles and responsibilities for AI governance. These roles should be reflected in organizational policies and clearly communicated throughout the organization.

    How Trustible™ Helps: Trustible offers policy templates aligned with ISO 42001 to help organizations accelerate their path to compliance, as well as out-of-the-box workflows, dashboards, and reports to efficiently inform senior leadership about relevant AI systems.
  • Requirement: Implement processes for AI governance to ensure AI risks are properly captured and documented. This includes performing risk assessments and impact assessments, and building risk treatment plans.

    How Trustible™ Helps: Trustible offers out-of-the-box workflows for risk and impact assessments, and provides risk and mitigation recommendations to accelerate building and implementing risk treatment plans. In addition, Trustible can deliver the latest best practices on AI risk management to help organizations stay on top of the fast-moving AI environment.
  • Requirement: Allocate appropriate resources to AI governance, build an internal knowledge base about AI systems, and ensure that allocated staff are appropriately trained and educated on AI risks.

    How Trustible™ Helps: Trustible offers AI compliance training as well as continuously updated AI risk insights, including risk measurement guidance, recommended mitigations, model risk ratings, and updates on AI regulatory compliance practices, to help keep allocated AI resources current on the latest best practices. In addition, Trustible's industry-leading AI inventory integrates across the tech stack to help track all necessary information about AI use cases, models, and vendors.
  • Requirement: Maintain clear paper trails, and have processes in place for maintaining the AI management system over time.

    How Trustible™ Helps: Trustible helps organizations create auditable paper trails of AI system proposals, risk assessments, deployment approvals, and more, and can generate reports to evaluate how efficiently an organization is governing its AI.
  • Requirement: Identify which AI systems require additional monitoring, and evaluate how effective the management system itself is through internal audits and reviews.

    How Trustible™ Helps: Trustible helps organizations identify which AI use cases require regular reviews, and provides guided workflows for conducting internal audits and periodic reviews.
  • Requirement: Continuously improve AI governance processes and structures, and maintain formal plans for identifying and fixing any gaps or instances of non-compliance.

    How Trustible™ Helps: Trustible helps organizations continuously iterate on their AI governance practices, and helps automatically detect non-compliance with internal AI policies or with regulations, based on up-to-date regulatory insights and best practices.

Navigate ISO 42001 with Trustible™

Risk & Impact Assessments

Identify, manage, measure, and mitigate potential risks or harms in your AI systems.


AI Policies

Develop and enforce AI policies that protect your organization, users, and society.


Documentation

Centralize your AI documentation in a single source of truth.


FAQs

  • ISO 42001 was designed for any organization that develops, deploys, and/or uses AI. While its requirements are more prescriptive than those of other existing AI governance and risk management frameworks, ISO 42001 is meant to be scalable to organizations of any size.

  • While ISO 42001 is voluntary, it is not uncommon for components of voluntary standards to become legal requirements. Policymakers, especially in the EU, may gravitate towards ISO 42001 as an enforceable standard for AI governance.

  • ISO 42001 includes annexes that map the requirements to a series of controls and implementing guidance. The additional guidance is meant to provide a more granular roadmap for organizations seeking to comply with the standard.
