
Research

Blog, Insights, Research

FAccT Finding: AI Takeaways from ACM FAccT 2025

July 15, 2025 · Trustible

Anastassia Kornilova is the Director of Machine Learning at Trustible. Anastassia translates research into actionable insights and uses AI to accelerate compliance with regulations. Her notable projects include creating the Trustible Model Ratings and the AI Policy Analyzer. Previously, she worked at Snorkel AI developing large-scale machine learning systems, and at FiscalNote developing NLP […]

Blog, Insights, Research

Towards a Standard for Model Cards

May 5, 2023 · Trustible

This blog post is intended for a technical audience. The term “Model Card” was coined by Mitchell et al. in the 2018 paper Model Cards for Model Reporting. At their core, Model Cards are the nutrition labels of the AI world, providing instructions and warnings for a trained model. When used, they […]
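For readers unfamiliar with the format, below is a minimal sketch of what a model card can capture, loosely following the section headings proposed by Mitchell et al. The Python class, field names, and example values are illustrative assumptions only; they are not part of any standard and not Trustible's implementation.

```python
# A minimal, illustrative model card as structured data. Section names loosely
# follow Mitchell et al. (Model Cards for Model Reporting); everything else here
# is an assumption for demonstration purposes.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    model_details: str          # who built the model, version, architecture, license
    intended_use: str           # primary use cases and explicitly out-of-scope uses
    training_data: str          # datasets, preprocessing, known gaps
    evaluation_data: str        # benchmarks and how they differ from training data
    metrics: dict = field(default_factory=dict)   # e.g. {"accuracy": 0.91}
    ethical_considerations: str = ""
    caveats: str = ""

    def to_markdown(self) -> str:
        """Render the card as a human-readable document."""
        lines = ["# Model Card"]
        for name, value in self.__dict__.items():
            lines.append(f"\n## {name.replace('_', ' ').title()}\n{value}")
        return "\n".join(lines)


# Hypothetical example values, for illustration only.
card = ModelCard(
    model_details="Sentiment classifier v1.2, fine-tuned transformer, MIT license.",
    intended_use="Routing English-language support tickets; not for medical or legal triage.",
    training_data="Public product-review corpora; reviews collected before 2020 only.",
    evaluation_data="Held-out reviews plus an out-of-domain support-ticket sample.",
    metrics={"accuracy": 0.91, "f1": 0.89},
    ethical_considerations="Performance drops on non-standard dialects; audit before deployment.",
    caveats="Not evaluated on languages other than English.",
)
print(card.to_markdown())
```

Rendering the card to Markdown is one convenient way to publish the “instructions and warnings” alongside a trained model; the blog post discusses what a shared standard for these sections might look like.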


Categories

  • Blog (49)
  • Corporate News (13)
  • Insights (18)
  • Research (2)
  • Webinar (2)
  • Whitepaper (7)

Recent posts

  • Why AI Governance is the Next Generation of Model Risk Management
  • Should the EU “Stop the Clock” on the AI Act?
  • What is the “Perfect” AI Use Case Intake Process?

Tags

AI Adoption, AI Complexity, AI Compliance, AI Framework, AI Frameworks, AI Governance, AI Governance Committees, AI Inventory, AI Model Ratings, AI Policy, AI Policy Analyzer, AI Privacy, AI Regulation, AI Regulations, AI Risk, AI Risks, AI Safety, AI Value, Benefits of AI, Colorado, Colorado SB 205, Databricks, EU AI Act, EU Regulation, Europe, Frameworks, Francisco Sanchez, Insurance, ISO 42001, Jason Hirsch, Jon Leibowitz, Larry Quinlan, Machine Learning, Model Ratings, Model Risk, Model Risk Management, NIST, NIST AI RMF, Privacy, Regulatory, Responsible AI, Standards, Startup, Taxonomy, Trustible


