Why AI Governance is the Next Generation of Model Risk Management

For decades, Model Risk Management (MRM) has been a cornerstone of financial services risk practices. In banking and insurance, model risk frameworks were designed to control the risks of internally built, rule-based, or statistical models such as credit risk models, actuarial pricing models, or stress testing frameworks. These practices have served regulators and institutions well, providing structured processes for validation, monitoring, and documentation.

But the world has changed.

The arrival of general-purpose AI, GenAI, and AI agents introduces an entirely new class of risk surfaces, one that traditional MRM practices were never designed to handle. Organizations are building and deploying AI layered on top of these models, driving faster computational decision making and semi- to fully autonomous optimization, in an effort to drive efficiencies and maximize the accuracy and performance of the underlying models. The question institutions must now ask themselves is simple but urgent:

Are yesterday’s risk management frameworks enough to keep up with today’s AI?

The answer is increasingly clear: No.

Why Traditional Model Risk Management Falls Short

  1. A New Class of Models – Traditional MRM assumes statistical models are built in-house, with well-understood data pipelines and model logic. With the advent of AI, while the statistical models may be built and maintained internally, the AI models layered on top may be pretrained by third-party vendors and fine-tuned with proprietary data. This introduces “black box” risks, limited transparency into training data, and unclear ownership boundaries. Classic MRM inventories simply don’t account for vendor-driven AI ecosystems that evolve daily. And even where custom or open source AI models come into play, trained on internal data only, the potential for hallucination, model drift, and other well-documented performance behaviors still requires monitoring and mitigation.
  2. Static Controls in a Dynamic Landscape – MRM frameworks rely on periodic reviews, annual validations, and static documentation. Yet AI models change in real time via new prompts, fine-tuning, system integrations, new training data, and more. A once-a-year validation exercise cannot keep pace with models that learn and adapt daily.
  3. Scope Misalignment – Risk tiering in MRM distinguishes between material and non-material models. With AI, where use cases are typically classified as low, medium, or high risk, it isn’t always clear cut where AI risk falls within MRM’s tiering. As a more consumer-friendly example, a chatbot used for internal IT support may seem low risk until an employee shares sensitive customer data, creating hidden compliance and privacy risks. In MRM, a material model that drives insurance underwriting decisions, if powered by an AI model that’s prone to drift or lacks regular bias testing, could spell even more significant liability for the model deployer. Where AI is in use, MRM’s scoping criteria need to evolve to account for modern AI’s fluid use cases.
  4. Human and Organizational Challenges – In traditional MRM, ownership of models is usually concentrated within risk or quant teams. But AI use cases, management, and governance span the organization, from compliance to legal to risk management to finance, and even HR. This creates a new class of “model owners” who may lack the governance literacy that quants take for granted. Without expanding the set of governance stakeholders, institutions risk fragmented accountability.
  5. Hyper-Connected and Customer-Facing Systems – Traditional statistical models in MRM were often siloed, hosted on a server or in the cloud, or even maintained in spreadsheets. By contrast, modern AI systems are deeply integrated into organizational workflows: they have access to internal documents, APIs, and in some cases even “write access” to operational systems through agentic features. This connectivity expands the risk surface, particularly around cybersecurity and data leakage.

Enter AI Governance: The Next Generation MRM

AI Governance extends and modernizes MRM by embedding continuous oversight, automation, and cross-functional accountability into the AI lifecycle. Crucially, governance does not begin at the AI model layer; it starts at the use case layer.

Why the AI Use Case Matters:

Traditional MRM starts by identifying and validating models. But where AI interacts with models, it is also the use case, meaning how and where the AI model is applied, that defines the real risk profile.

Take, for example, an investment firm deploying a heavily customized open-source LLM. It’s unlikely that this LLM, which represents a significant investment of capital as well as technical and actuarial resources, is being leveraged for only a single use case. That same LLM may be powering multiple use cases, from trading activities to risk quantification to performance reporting.

The underlying model may be identical, but the risks are radically different. This is why governance frameworks must prioritize the use case as the primary unit of analysis and control. 
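
To make this concrete, here is a minimal sketch, in Python, of what a use-case-centric inventory record might look like. The field names, owners, tiers, and controls are illustrative assumptions rather than a prescribed schema; the point is simply that the same underlying model appears in several records, each with its own risk profile and controls.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One business application of an AI model; the unit that governance tracks."""
    name: str
    underlying_model: str              # the shared model powering the use case
    business_owner: str                # may sit in front office, risk, finance, etc.
    data_sensitivity: str              # e.g. "public", "internal", "customer PII"
    risk_tier: str                     # e.g. "low", "medium", "high"
    controls: list[str] = field(default_factory=list)

# The same customized open-source LLM can power several use cases,
# each carrying a very different risk profile.
shared_llm = "custom-finetuned-open-source-llm"

use_case_inventory = [
    AIUseCase("Trade idea generation", shared_llm, "Front Office",
              data_sensitivity="internal", risk_tier="high",
              controls=["human-in-the-loop approval", "bias and drift testing"]),
    AIUseCase("Risk quantification support", shared_llm, "Risk Management",
              data_sensitivity="internal", risk_tier="medium",
              controls=["periodic validation", "evidence tracking"]),
    AIUseCase("Performance reporting drafts", shared_llm, "Finance",
              data_sensitivity="internal", risk_tier="low",
              controls=["output review before distribution"]),
]
```

In an inventory like this, validating the LLM once is not enough; each record carries its own tier and control set, which is what makes the use case, rather than the model, the natural unit of governance.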

How AI Governance Relates to the Three Lines of Defense

Beyond focusing on the use case layer, AI governance also maps naturally onto the Three Lines of Defense structure that financial services organizations already use for risk management. The first line (developers and deployers) ensures that technical controls and safeguards are embedded directly into AI systems. The second line (oversight teams or centers of excellence) provides cross-functional governance and escalation for higher-risk use cases. And the third line (internal or external audit) delivers independent assurance. By aligning AI governance with this familiar model, institutions can extend the rigor of MRM while ensuring governance is integrated at every level of organizational defense—transforming AI oversight from a static compliance exercise into a dynamic, enterprise-wide control framework.

Key Shifts in the Governance-Driven Approach:

  • From Annual Review to Continuous Monitoring: AI Governance enables active risk tiering and evidence tracking, ensuring that each use case is continuously evaluated based on its context, data sensitivity, and downstream impacts (a minimal sketch of this kind of context-aware tiering follows this list).
  • From Model-Centric to Use Case-Centric: Instead of only cataloging models, governance systems maintain a centralized inventory of AI use cases—mapping which models, vendors, and datasets are powering which business applications.
  • From Internal Models to External Supply Chains: Traditional MRM focused on in-house models. Today, many AI models are procured from software vendors, open source libraries, or built on foundation models with opaque training data. This makes Third-Party Risk Management (TPRM) and supply chain risk central to AI governance, requiring stronger due diligence, shared accountability with vendors, and continuous monitoring of external providers.
  • From Static Templates to Dynamic Workflows: AI models and systems can behave differently depending on user, data, or context. Machine translation, for example, may be benign in one case but risky in another. This makes static monitoring insufficient; governance must be dynamic, context-aware, and continuously updated.
  • From Siloed Owners to Multi-Stakeholder Accountability: Use case owners may sit in risk, compliance, finance, HR, or elsewhere—not just risk or quant teams. Governance frameworks engage all stakeholders through guided workflows that ensure consistent accountability.
  • From Defensibility to Proactivity: Rather than documenting controls after the fact, AI governance helps institutions design use cases responsibly from inception, embedding compliance, fairness, and safety checks directly into deployment pipelines. This ultimately enables more trusted, safe, and secure AI adoption. 
  • From Isolated Models to Hyper-Connected Systems: AI often integrates across systems and mission-critical workflows, creating risks that extend beyond the AI model itself. Governance must account for these integrations, not just the underlying algorithms.
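
As a rough sketch of how continuous, context-aware evaluation could work in practice, the function below re-derives a use case’s risk tier whenever its context changes, rather than waiting for an annual review. The scoring factors, weights, and thresholds are assumptions made purely for illustration, not a standard methodology.

```python
def evaluate_risk_tier(data_sensitivity: str, customer_facing: bool,
                       has_write_access: bool, drift_detected: bool) -> str:
    """Re-derive a use case's risk tier from its current context.

    Intended to run on every material change (new data source, new
    integration, monitoring alert), not on an annual review cycle.
    """
    score = 0
    score += {"public": 0, "internal": 1, "customer PII": 3}.get(data_sensitivity, 2)
    score += 2 if customer_facing else 0
    score += 2 if has_write_access else 0   # agentic "write access" widens the risk surface
    score += 1 if drift_detected else 0

    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# An internal IT support chatbot looks low risk until sensitive customer
# data enters the conversation; the tier is recalculated, not assumed.
print(evaluate_risk_tier("internal", customer_facing=False,
                         has_write_access=False, drift_detected=False))      # "low"
print(evaluate_risk_tier("customer PII", customer_facing=False,
                         has_write_access=False, drift_detected=False))      # "medium"
```

The chatbot example from earlier illustrates the value of this approach: the moment sensitive customer data enters the picture, the tier is recalculated rather than left at whatever the last periodic review assigned.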

Why This Matters for Financial Services

Financial institutions don’t have the luxury of waiting. In a recent AI Governance Maturity Assessment that Trustible conducted with more than 150 participants, only 19% of financial services organizations believe their AI governance program is prepared to enable their organization’s broader AI strategy.

Too often, governance is framed purely as a compliance or defensive exercise. But in reality, AI governance is the growth enabler that allows institutions to move from experimentation to enterprise-scale adoption. AI governance isn’t just about preventing mistakes; it’s about giving executives the confidence to align AI investments with corporate strategy, prioritize the right use cases, and scale responsibly. Without governance, AI strategies stall in the pilot phase because business leaders and regulators lack confidence in outcomes. With governance, organizations gain the trust, accountability, and structure they need to scale AI responsibly across the enterprise.

The Federal Reserve’s SR 11-7 guidance on model risk remains foundational, but it was written in an era of regression models—not self-learning, multi-modal agentic AI systems. As regulators begin to turn their attention to AI, firms that fail to modernize AI governance will find themselves vulnerable not only to compliance risk, but also to reputational damage and loss of business opportunity. 

A Provocative Outlook: AI Governance is No Longer Optional

In the coming years, AI governance will define competitive advantage in financial services. Just as MRM became a regulatory expectation after the 2008 crisis, governance for AI will become the new baseline for operational resilience.

Firms that treat AI governance as a “check-the-box” exercise will fall behind. By embedding AI governance into the AI lifecycle, financial institutions can shorten time-to-value, reduce costly rework, and ensure that AI initiatives deliver measurable business impact.
