Adopt the best-in-class AI framework for US-based companies
The NIST AI Risk Management Framework is highly regarded as an effective playbook for private and public sector organizations to adopt AI responsibly.
What is the NIST AI Risk Management Framework?
The NIST AI RMF is the U.S. federal government’s first comprehensive framework for identifying and managing risks associated with the development and deployment of AI. Released in January 2023, the NIST AI RMF is organized around four core risk management functions: Govern, Map, Measure, and Manage. Each of the four functions has underlying categories and sub-categories of risk management actions and outcomes. The NIST AI RMF is accompanied by a series of companion documents meant to offer a practical roadmap for organizations implementing the framework.
Key Requirements of the AI RMF
Govern
Requirement: Create organizational policies, processes, practices, and roles to govern AI. This includes ensuring that teams are sufficiently diverse and trained so that they can properly identify and recognize AI risks.
How Trustible™ Helps: Trustible offers policy templates aligned with the NIST AI RMF to help organizations bootstrap their AI governance, and then helps them implement those policies efficiently at scale with out-of-the-box guided AI governance workflows.

Map
Requirement: Identify the key risks of AI systems based on information about a system’s intended goals, costs, deployment context, and potential impacts to individuals, groups, and society.
How Trustible™ Helps: Trustible helps organizations build a full inventory of their AI use cases, models, and vendors, and then offers risk recommendations to help track relevant risks and their potential impacts.

Measure
Requirement: Ensure the likelihood and severity of specific AI risks are appropriately measured and tracked over time.
How Trustible™ Helps: Trustible’s risk taxonomy offers best-in-class guidance on how each risk can be measured and helps organizations document the model-level risk testing they conduct. In addition, Trustible’s integrations with MLOps tools help organizations set up and track risk measures over time.

Manage
Requirement: Ensure that identified risks are mitigated to an appropriate degree, and that the organization updates risk treatments over time based on system performance and feedback.
How Trustible™ Helps: Trustible helps organizations identify and build a risk treatment plan, and helps them track formal sign-off on residual risks. In addition, Trustible offers a taxonomy of risk mitigations and up-to-date insights to help organizations implement best-practice mitigations.
Navigate the NIST AI RMF with Trustible™
AI Inventory
Centralize NIST AI RMF documentation in a single source of truth across AI use cases.
AI Policies
Develop and enforce AI policies that protect your organization, users, and society.
Risk Management
Identify, manage, measure, and mitigate potential risks in your AI systems.
FAQs
Is compliance with the NIST AI RMF mandatory?
The NIST AI RMF is a voluntary framework. However, it is being operationalized through President Biden’s October 2023 executive order on AI, as well as through some state laws. As federal lawmakers continue to discuss how to regulate AI, components of the NIST AI RMF may also become enforceable through federal legislation.

Who should use the NIST AI RMF?
Organizations that design, develop, or deploy AI in any context should consider how the NIST AI RMF can help them establish an AI governance structure. AI use cases can also evolve over time, and implementing a risk management framework now can help address potential use case changes in the future.

What does the NIST AI RMF consider trustworthy AI?
The NIST AI RMF identifies the following characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

What resources does NIST provide for implementing the AI RMF?
NIST provides a number of resources to help organizations understand and implement the AI RMF, including the NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various independent perspectives.