


Corporate News

Trustible Welcomes New Advisors to Strengthen Enterprise and Legal Expertise in AI

May 23, 2024 trustible

We are thrilled to announce the addition of three new members to the Trustible Advisory Board: Larry Quinlan, Jason D. Hirsch, and Francisco Sánchez. Their deep expertise in AI, enterprise technology, regulatory strategy, and product counseling will guide Trustible customers and leadership on global challenges at the intersection of technology, law, and government policy. 

Blog

Everything you need to know about Colorado SB 205

May 17, 2024 trustible

On May 17, 2024, Colorado Governor Jared Polis signed SB 205 (Consumer Protection for Artificial Intelligence) into law, the first comprehensive state AI law imposing rules on certain high-risk AI systems. The law requires that AI used to support ‘consequential decisions’ in certain use cases be treated as ‘high risk’ and subjects such systems to a range of risk management and reporting requirements. The new rules come into effect on February 1, 2026.

Whitepaper

Enhancing the Effectiveness of AI Governance Committees

May 13, 2024 trustible

Organizations are increasingly deploying artificial intelligence (AI) systems to drive innovation and gain competitive advantages. Effective AI governance is crucial for ensuring these technologies are used ethically, comply with regulations, and align with organizational values and goals. However, as the use of AI and AI regulations become more pervasive, so does the complexity of managing these technologies responsibly. 

Blog

A Framework for Measuring the Benefits of AI

May 1, 2024 trustible

Significant research has been invested in studying AI risks, a response to the rapid pace of deployment of highly capable AI models across a wide variety of use cases. Over the last year, governments around the world have established AI Safety institutes tasked with developing methodologies to assess the impact and probability of various […]

Blog, Corporate News

Trustible Announces New Model Transparency Ratings to Enhance AI Model Risk Evaluation

April 23, 2024 trustible

Organizational leaders are looking to understand which AI models are the best fit for a given use case. However, limited public transparency about these systems makes this evaluation difficult.

In response to the rapid development and deployment of general-purpose AI (GPAI) models, Trustible is proud to introduce its research on Model Transparency Ratings – offering a comprehensive assessment of transparency disclosures of the top 21 Large Language Models (LLMs).

Blog

Inside Trustible’s Methodology for Model Transparency Ratings

April 22, 2024 trustible

The speed at which new general-purpose AI (GPAI) models are being developed is making it difficult for organizations to select which model to use for a given AI use case. While a model’s performance on task benchmarks, its deployment model, and its cost are the primary selection criteria, other factors, including a model’s data sources, ethical design decisions, and regulatory risks, must be accounted for as well. These considerations cannot be inferred from benchmark performance, but they are necessary to understand whether a specific model is appropriate for a given task, or even legal to use within a jurisdiction.

Blog, Insights

Why AI Governance is going to get a lot harder

April 9, 2024 trustible

AI Governance is hard: it involves collaboration across multiple teams and an understanding of a highly complex technology and its supply chains. It’s about to get even harder. The complexity of AI governance is growing along two dimensions at the same time, and both are poised to accelerate in the coming […]

Whitepaper

Analysis – How Trustible Helps Organizations Comply With The EU AI Act

April 5, 2024 trustible

The EU AI Act sets a global precedent in AI regulation, emphasizing human rights in the development and implementation of AI systems. While the law will directly apply to EU countries, its extraterritorial reach will impact global businesses in profound ways. Businesses producing AI-related applications or services that either impact EU citizens or supply EU-based companies will be responsible for complying with the EU AI Act. Failure to comply can result in fines of up to 7% of global turnover or €35m for major violations, with lower penalties for SMEs and startups.

Whitepaper

Analysis – Mapping the Requirements of NIST AI RMF, ISO 42001, and the EU AI Act

April 1, 2024 trustible

Navigating the evolving and complex landscape of AI governance requirements can be a real challenge for organizations. Previously, Trustible created a comprehensive cheat sheet comparing three important compliance frameworks: the NIST AI Risk Management Framework, ISO 42001, and the EU AI Act. This easy-to-understand visual maps the similarities and differences between these frameworks, […]

Blog, Insights

3 Lines of Defense for AI Governance

March 25, 2024 trustible

AI Governance is a complex task: it involves multiple teams across an organization working to understand and evaluate the risks of dozens of AI use cases while managing highly complex models with deep supply chains. On top of this organizational and technical complexity, AI can be used for a wide range of purposes, some relatively safe (e.g. an email spam filter) and others posing serious risks (e.g. a medical recommendation system). Organizations want to be responsible with their AI use but struggle to balance innovation and adoption for low-risk uses with oversight and risk management for high-risk ones. To manage this, organizations need a multi-tiered governance approach that allows easy, safe experimentation by development teams, with clear escalation points for riskier uses.


    Copyright © 2025 – TRUSTIBLE. All Rights Reserved