Insights

Blog, Insights

Why AI Governance is the Next Generation of Model Risk Management

September 29, 2025 trustible

For decades, Model Risk Management (MRM) has been a cornerstone of financial services risk practices. In banking and insurance, model risk frameworks were designed to control the risks of internally built, rule-based, or statistical models such as credit risk models, actuarial pricing models, or stress testing frameworks. These practices have served regulators and institutions well, providing structured processes for validation, monitoring, and documentation.

Blog, Insights

Should the EU “Stop the Clock” on the AI Act?

September 25, 2025 trustible

The European Union (EU) AI Act became effective in August 2024, after years of negotiations (and some drama). Since entering into force, the AI Act’s implementation has been somewhat bumpy. The initial set of obligations for general-purpose AI (GPAI) providers took effect in August 2025, but the voluntary Code of Practice faced multiple drafting delays. The finalized version was released with less than a month to go before GPAI providers needed to comply with the law.

Blog, Insights

What the Trump Administration’s AI Action Plan Means for Enterprises

August 5, 2025 trustible

The Trump Administration released “Winning the AI Race: America’s AI Action Plan” (AI Action Plan) on July 23, 2025. The AI Action Plan was published in accordance with the January 2025 Removing Barriers to American Leadership in AI Executive Order. The AI Action Plan proposes approximately 90 policy recommendations within three thematic pillars: Pillar I addresses […]

Blog, Insights, Research

FAccT Finding: AI Takeaways from ACM FAccT 2025

July 15, 2025 trustible

Anastassia Kornilova is the Director of Machine Learning at Trustible. Anastassia translates research into actionable insights and uses AI to accelerate compliance with regulations. Her notable projects have included creating the Trustible Model Ratings and AI Policy Analyzer. Previously, she worked at Snorkel AI developing large-scale machine learning systems, and at FiscalNote developing NLP […]

Blog, Insights

Trustible’s Perspective: The AI Moratorium would have been bad for AI adoption

July 2, 2025 trustible

In the early hours of July 1, 2025, the Senate overwhelmingly voted to strip the proposed federal moratorium on state and local AI laws from the Republicans’ reconciliation bill. The moratorium went through several rewrites in an attempt to salvage it, though ultimately 99 Senators supported removing it from the final legislative package. While the political […]

Insights, Whitepaper

AI Governance Triggers: When to Act and Why It Matters

March 25, 2025 trustible

The rapid evolution of artificial intelligence, with continuous advancements in models, policies, and regulations, presents a growing challenge for AI governance teams. Organizations often struggle to determine when governance intervention is necessary to provide adequate risk oversight without imposing excessive compliance burdens. This eBook introduces the concept of “AI Governance Triggers” to provide clarity on the specific AI model events that should prompt governance activities.

Blog, Insights

Understanding the Data in AI

February 11, 2025 trustible

Data governance is a key component of responsible AI governance, and it features prominently in every emerging AI regulation and standard. However, “data” is not a monolithic concept within AI systems. From the massive datasets collected for training large language models (LLMs), to user feedback loops that refine and improve outputs, multiple “data streams” flow through any modern AI application.

Blog, Insights

What is AI Monitoring?

November 12, 2024 trustible

When many technical practitioners hear the term monitoring, they often think of internal monitoring of the AI system.

Blog, Insights

Understanding AI Stakeholders with Trustible’s AI Stakeholder Taxonomy

October 16, 2024 trustible

Trustible developed an AI Stakeholder Taxonomy that can help organizations easily identify stakeholders as part of the impact assessment process for their high-risk use cases.

Blog, Insights

Everything you need to know about the NY DFS Insurance Circular Letter No. 7

July 22, 2024 trustible

On July 11, 2024, the New York Department of Financial Services (NY DFS) released its final circular letter on the use of external consumer data and information sources (ECDIS), AI systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts. A circular letter is not a regulation per se, but rather a formalized interpretation of existing laws and regulations by the NY DFS. The finalized guidance comes after the NY DFS sought input on its proposed circular letter, which was published in January 2024.  




