What AI Governance Looks Like After Year One

Key Insights

  • Year 1 builds the foundation. Year 2 is where it gets tested.
  • Periodic reviews should scale with risk level, not apply uniformly to every system.
  • Thin AI inventory records become a liability the moment a regulation changes scope.
  • Manual governance processes weren’t built for agentic AI. Start adapting now.


Where This Conversation Starts

Last week at the IAPP Global Privacy Summit in Washington, D.C., Trustible co-hosted a panel that skipped the Year 1 fundamentals entirely. The panel assumed most organizations in the room already had the following in place: AI policies written, AI literacy training implemented, a use case intake process established, initial risk and impact assessments completed, governance committee roles defined, and secure AI platforms created.

What comes next is where most programs have less clarity: how AI governance holds up over time as models change, regulations evolve, and the AI portfolio keeps growing.

Trustible CTO and Co-Founder Andrew Gamino-Cheong moderated the discussion with Kimberly Zink, Chief Privacy Officer at Korn Ferry, and Derek Han, Partner for AI, Cyber and Privacy at Grant Thornton. Here’s what the conversation surfaced.

Scenario 1: Major Model Changes

Your organization’s key AI system was built on GPT-4o, which is being deprecated. The vendor says to upgrade to GPT-5.3. How much governance review does that upgrade require?

Every use case should have a repeatable set of evaluations documented in advance, not assembled under pressure when a deprecation notice arrives. Reading the vendor’s model system card is necessary but not sufficient. If the replacement model performs worse on your specific task, that’s a governance event: documentation needs to reflect it and affected users need to be notified.

The harder question is what counts as a “substantial modification” in the first place. Deploying to a new jurisdiction, adding a new tool connection, or swapping the underlying model all potentially qualify. Organizations that handle these situations well defined that threshold before the change happened, not after.
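One way to make that threshold concrete is to encode it as data rather than leave it to judgment calls after the fact. A minimal sketch, assuming a simple set of change-type labels (the names below are illustrative, not a Trustible schema):

```python
# Hypothetical sketch: define the "substantial modification" threshold up front,
# so the trigger for re-review exists before a deprecation notice arrives.

# Change types the panel named as potentially qualifying; labels are assumptions.
SUBSTANTIAL_CHANGE_TRIGGERS = {
    "model_swap",           # e.g. the underlying model is deprecated and replaced
    "new_jurisdiction",     # deploying into a new regulatory geography
    "new_tool_connection",  # granting the system a new integration or capability
}

def requires_rereview(change_type: str) -> bool:
    """Return True when a change crosses the pre-defined governance threshold."""
    return change_type in SUBSTANTIAL_CHANGE_TRIGGERS
```

The point of the lookup is that the review trigger is versioned documentation, agreed before the change, rather than an argument held under deadline pressure.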

Scenario 2: Periodic Reviews

It’s been one year since your highest-risk AI system was deployed. Time for its first formal review. What does that process look like?

Not every deployed system gets the same review. Governance intensity should scale with the risk level established at intake. Higher-risk systems warrant structured evaluation and cross-functional involvement. Lower-risk systems may need only a lightweight check. Business owners need to be part of this process, not just the governance committee.
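That tiering can be written down as a playbook keyed by the intake risk level. A sketch under assumed tier names and cadences (none of these values come from the panel; they are placeholders for whatever your program defines):

```python
# Illustrative sketch, not any vendor's implementation: scale review intensity
# with the risk tier assigned at intake.
REVIEW_PLAYBOOK = {
    "high":   {"cadence_months": 6,  "steps": ["structured evaluation",
                                               "cross-functional review",
                                               "business-owner sign-off"]},
    "medium": {"cadence_months": 12, "steps": ["lightweight check",
                                               "business-owner sign-off"]},
    "low":    {"cadence_months": 12, "steps": ["lightweight check"]},
}

def review_plan(risk_tier: str) -> dict:
    # Unknown or unassessed tiers default to the most conservative playbook.
    return REVIEW_PLAYBOOK.get(risk_tier, REVIEW_PLAYBOOK["high"])
```

Note the default: anything without an assigned tier falls to the heaviest review, which keeps unclassified systems from quietly escaping scrutiny.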

One thing the panel flagged: model drift doesn’t announce itself. For a customer support chatbot, it might show up as subtle shifts in tone or rising escalation rates. Catching it requires sampling actual outputs and evaluating them against the guardrails set at deployment. Scheduled reviews also matter less than ad hoc ones triggered by material changes in how a system is used, what data it touches, or who it affects.
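In practice, "sampling actual outputs" can be as simple as comparing an observable proxy, such as escalation rate, against the baseline set at deployment. A hedged sketch, assuming interaction logs with an `escalated` flag (the field name and thresholds are assumptions):

```python
# Sketch of drift monitoring for a support chatbot: sample recent interactions
# and flag when escalations exceed the deployment baseline by a set tolerance.
import random

def escalation_rate(interactions: list[dict], sample_size: int = 200) -> float:
    """Fraction of a random sample of interactions escalated to a human."""
    sample = random.sample(interactions, min(sample_size, len(interactions)))
    return sum(i["escalated"] for i in sample) / len(sample)

def drift_alert(current_rate: float, baseline_rate: float,
                tolerance: float = 0.05) -> bool:
    # A guardrail check, not a root-cause analysis: it tells you to look closer.
    return current_rate - baseline_rate > tolerance
```

Tone shifts are harder to quantify than escalation rates, but the shape is the same: pick a measurable proxy at deployment, record the baseline, and compare against it on every review.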

Andrew Gamino-Cheong, Trustible (Left) | Kimberly Zink, Korn Ferry (Middle) | Derek Han, Grant Thornton (Right)

Scenario 3: Regulatory Updates

The EU AI Act just got amended with new criteria for “high risk” use cases and new compliance obligations. How do you assess its impact on your existing AI systems?

Nearly 7 in 10 businesses report difficulty understanding their obligations under the EU AI Act, per IAPP’s EU Digital Laws Report 2025, and that uncertainty measurably suppresses AI investment. The panel’s honest take: many organizations can’t answer a regulatory scope question quickly because their inventory doesn’t capture the right information. Use case category, PII usage, automated decision-making, and deployment geography are the fields that determine what’s newly in scope.

This is also a cross-functional exercise. Legal, compliance, and business unit owners all need to be involved. Tools with automated framework mappings help, but only if the underlying use case data is structured and current. Solving the inventory problem is a prerequisite for any successful regulatory change management process.
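What "structured and current" means is concrete: each inventory record carries the fields that determine scope, so an amendment becomes a query instead of a manual audit. A minimal sketch, assuming a hypothetical amendment that adds high-risk categories for EU deployments (field names are illustrative):

```python
# Illustrative inventory record holding the scope-determining fields named above,
# plus a filter for a hypothetical regulatory amendment.
from dataclasses import dataclass, field

@dataclass
class UseCaseRecord:
    name: str
    category: str                       # e.g. "hiring", "customer_support"
    uses_pii: bool
    automated_decisions: bool
    geographies: set = field(default_factory=set)  # deployment jurisdictions

def newly_in_scope(inventory: list, high_risk_categories: set) -> list:
    """Records hit by an amendment covering automated decisions in the EU."""
    return [r for r in inventory
            if r.category in high_risk_categories
            and r.automated_decisions
            and "EU" in r.geographies]
```

If the inventory only stores a system name and an owner, this query is impossible, which is exactly the liability the panel described.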

Scenario 4: AI Governance Program Iteration

ISO 42001 requires regular evaluations of your AI Governance program. It’s been a year since launch. What do you do?

Track the metrics that tell you whether the program is actually working: volume of use cases reviewed, how many were flagged high risk, how much risk was mitigated, and how long the governance cycle takes end to end. These are the inputs needed to justify resources, identify bottlenecks, and demonstrate maturity to boards and regulators.
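Those four metrics fall out of the case records the intake process already produces. A sketch, assuming per-case fields like `reviewed`, `risk`, `mitigated`, and `cycle_days` (all names are assumptions about your tracking system):

```python
# Sketch: derive the program metrics named above from per-use-case records.
def program_metrics(cases: list) -> dict:
    reviewed = [c for c in cases if c["reviewed"]]
    high_risk = [c for c in reviewed if c["risk"] == "high"]
    cycle_days = [c["cycle_days"] for c in reviewed]
    return {
        "use_cases_reviewed": len(reviewed),
        "flagged_high_risk": len(high_risk),
        "risks_mitigated": sum(1 for c in high_risk if c["mitigated"]),
        # End-to-end governance cycle time surfaces bottlenecks.
        "avg_cycle_days": sum(cycle_days) / len(cycle_days) if cycle_days else 0.0,
    }
```

The output is the evidence base an ISO 42001 evaluation asks for: throughput, risk posture, and cycle time in one place.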

The harder conversation is agentic AI. Manual governance workflows weren’t designed for systems that act autonomously, chain decisions, and scale faster than any review queue can handle. You can’t govern agentic AI with processes built for static models. Organizations need to start building AI-assisted governance now, before agentic deployment outpaces the function responsible for overseeing it.


Year 1 is about getting structure in place. Year 2 is about making it hold. That’s the work worth doing.

Questions about how Trustible supports Year 2 governance activities? Request a demo.

