
Insights

Blog, Insights

Everything you need to know about the NY DFS Insurance Circular Letter No. 7

July 22, 2024 trustible

On July 11, 2024, the New York Department of Financial Services (NY DFS) released its final circular letter on the use of external consumer data and information sources (ECDIS), AI systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts. A circular letter is not a regulation per se, but rather a formalized interpretation of existing laws and regulations by the NY DFS. The finalized guidance comes after the NY DFS sought input on its proposed circular letter, which was published in January 2024.  

Blog, Insights

AI Policy Series 3: Drafting Your Public AI Principles Policy

July 5, 2024 trustible

In our final blog post of this AI Policy series (see the Comprehensive AI Policy and AI Use Policy guidance posts here), we want to explore what organizations should make available to the public about their use of AI. According to recent research by Pew, 52 percent of Americans feel more concerned than excited by AI. This finding suggests that, while organizations may recognize the value of AI, their users and customers may still harbor some skepticism. Policymakers and large AI companies have sought to address public concerns, albeit in their own ways.

Blog, Insights

AI Policy Series 2: Drafting Your AI Use Policy

June 26, 2024 trustible

In this series’ first blog post, we broke down AI policies into three categories: 1) a comprehensive organizational AI policy that includes organizational principles, roles, and processes; 2) an AI use policy that outlines which kinds of tools and use cases are allowed, as well as what precautions employees must take when using them; and 3) a public-facing AI policy that outlines the core ethical principles the organization adopts, as well as its position on key AI policy issues. In this second blog post on AI policies, we want to explore the critical decisions and factors that organizations should consider as they draft their AI use policy.

Blog, Insights

AI Policy Series 1: Drafting Your Comprehensive AI Policy

June 20, 2024 trustible

As organizations increase their adoption of AI, governance leaders are looking to put policies in place that ensure their AI deployments align with their organization’s principles, comply with regulatory standards, and mitigate potential risks. But knowing where to start in developing those policies can often feel overwhelming.

Blog, Insights

Why AI Governance is going to get a lot harder

April 9, 2024 trustible

AI Governance is hard because it involves collaboration across multiple teams and an understanding of a highly complex technology and its supply chains. It’s about to get even harder. The complexity of AI governance is growing along two dimensions at the same time, and both are poised to accelerate in the coming […]

Blog, Insights

3 Lines of Defense for AI Governance

March 25, 2024 trustible

AI Governance is a complex task: it involves multiple teams across an organization working to understand and evaluate the risks of dozens of AI use cases, and it requires managing highly complex models with deep supply chains. On top of the organizational and technical complexity, AI can be used for a wide range of purposes, some of which are relatively safe (e.g. an email spam filter) while others pose serious risks (e.g. a medical recommendation system). Organizations want to be responsible with their AI use, but they struggle to balance innovation and adoption of AI for low-risk uses with oversight and risk management for high-risk uses. To manage this, organizations need to adopt a multi-tiered governance approach that allows easy, safe experimentation by development teams, with clear escalation points for riskier uses.
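
As a purely illustrative sketch (the tier names, example use cases, and reviewer roles below are assumptions, not a framework prescribed by the post), the idea of clear escalation points can be expressed as a simple mapping from a use case’s assessed risk tier to the level of review it requires:

    # Illustrative sketch only: risk tiers, example use cases, and reviewer
    # roles are assumptions for demonstration, not a prescribed framework.
    RISK_TIERS = {
        "low":    {"examples": ["email spam filter"],             "review": "team self-assessment"},
        "medium": {"examples": ["marketing copy generation"],     "review": "AI governance lead sign-off"},
        "high":   {"examples": ["medical recommendation system"], "review": "risk committee approval"},
    }

    def required_review(risk_tier: str) -> str:
        """Return the escalation point for a given risk tier."""
        if risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {risk_tier}")
        return RISK_TIERS[risk_tier]["review"]

    print(required_review("low"))   # -> team self-assessment
    print(required_review("high"))  # -> risk committee approval

In practice the tiers and reviewers would come from the organization’s own policy rather than code; the point of the mapping is simply to make the escalation logic explicit.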

Blog, Insights

3 Levels of AI Governance – It’s not just about the models!

September 22, 2023 trustible

While AI has been used in enterprise and consumer products for decades, only large tech organizations with sufficient resources were able to implement it at scale. In the past few years, advances in the quality and accessibility of ML systems have led to a rapid proliferation of AI tools in everyday life. The accessibility of these tools means there is a massive need for good AI Governance, both by AI providers (e.g. OpenAI) and by the organizations implementing and deploying AI systems in their own products.

Blog, Insights, Research

Towards a Standard for Model Cards

May 5, 2023 trustible

This blog post is intended for a technical audience. The term “Model Card” was coined by Mitchell et al. in the 2018 paper Model Cards for Model Reporting. At their core, Model Cards are the nutrition labels of the AI world, providing instructions and warnings for a trained model. When used, they […]
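
As an illustration only (the field names below are assumptions, not the standardized schema the post discusses), a Model Card can be thought of as structured metadata attached to a trained model; a minimal sketch in Python:

    from dataclasses import dataclass, field
    from typing import Dict, List

    # Illustrative sketch only: these fields are assumptions, not the
    # standardized model-card schema discussed in the post.
    @dataclass
    class ModelCard:
        model_name: str
        version: str
        intended_use: str                                                    # what the model is meant to do
        out_of_scope_uses: List[str] = field(default_factory=list)          # known misuse cases
        training_data: str = ""                                              # description of training data sources
        evaluation_metrics: Dict[str, float] = field(default_factory=dict)  # metric name -> score
        limitations: List[str] = field(default_factory=list)                # known failure modes and caveats

    card = ModelCard(
        model_name="spam-filter",
        version="1.2.0",
        intended_use="Classify inbound email as spam or not spam.",
        out_of_scope_uses=["moderating social media content"],
        training_data="Internal labeled email corpus, 2019-2023.",
        evaluation_metrics={"precision": 0.97, "recall": 0.94},
        limitations=["Performance degrades on non-English email."],
    )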

Blog, Insights

4 Ways to Prepare for Upcoming AI Regulations

April 4, 2023 trustible

Governments and regulators around the world are taking a closer look at how to manage AI’s potential risks and benefits. While regulations vary by country and industry, it’s important for companies developing and deploying AI to stay ahead of the game and prepare for potential regulatory challenges.


