Blog

3 Types of Risk & Impact Assessments, And When to Use Them

August 8, 2025 trustible

A common requirement in many AI standards, regulations, and best practices is to conduct risk and impact assessments to understand the potential ways AI could malfunction or be misused. By understanding the risks, organizations can prioritize and implement appropriate technical, organizational, and legal mitigation measures. While there are standards for these assessments, such as the […]

Blog, Insights

What the Trump Administration’s AI Action Plan Means for Enterprises

August 5, 2025 trustible

The Trump Administration released “Winning the AI Race: America’s AI Action Plan” (AI Action Plan) on July 23, 2025. The AI Action Plan was published in accordance with the January 2025 Removing Barriers to American Leadership in AI Executive Order. The AI Action Plan proposes approximately 90 policy recommendations within three thematic pillars: Pillar I addresses […]

Blog, Insights, Research

FAccT Finding: AI Takeaways from ACM FAccT 2025

July 15, 2025 trustible

Anastassia Kornilova is the Director of Machine Learning at Trustible. Anastassia translates research into actionable insights and uses AI to accelerate compliance with regulations. Her notable projects have involved creating the Trustible Model Ratings and AI Policy Analyzer. Previously, she has worked at Snorkel AI developing large-scale machine learning systems, and at FiscalNote developing NLP […]

Blog

Navigating The AI Regulatory Minefield: State And Local Themes From Recent Legislation

July 12, 2025 trustible

This article was originally published on Forbes. Click here for the original version. The complex regulatory landscape for artificial intelligence (AI) has become a pressing challenge for businesses. Governments are approaching AI through the same piecemeal lens as other emerging technologies such as autonomous vehicles, ride-sharing, and even data privacy. In the absence of a […]

Blog

Trustible Becomes Official Implementation Partner for the Databricks AI Governance Framework (DAGF)

July 8, 2025 trustible

Despite the explosive growth of AI, most enterprises remain unprepared to manage the very real risks that come with its adoption. While the opportunities are vast—from smarter products to more efficient operations—the path to realizing AI’s full potential is fraught with challenges around performance, cybersecurity, privacy, ethics, and legal compliance. Without a strong AI governance […]

Blog, Insights

Trustible’s Perspective: The AI Moratorium would have been bad for AI adoption

July 2, 2025 trustible

In the early hours of July 1, 2025, the Senate overwhelmingly voted to strip the proposed federal moratorium on state and local AI laws from the Republicans’ reconciliation bill. The moratorium went through several rewrites in an attempt to salvage it, though ultimately 99 Senators supported removing it from the final legislative package. While the political […]

Blog, Insights

Understanding the Data in AI

February 11, 2025 trustible

Data governance is a key component of responsible AI governance, and it features prominently in emerging AI regulations and standards. However, “data” is not a monolithic concept within AI systems. From the massive datasets collected for training large language models (LLMs), to user feedback loops that refine and improve outputs, multiple “data streams” flow through any modern AI application.

Blog

Navigating AI Vendor Risk: 10 Questions for your Vendor Due Diligence Process

January 2, 2025 trustible

AI is everywhere, but the race to add AI from vendors has embedded unknown risks into your supply chain. Knowing what type of AI your suppliers use is difficult enough, let alone knowing how to ensure your due diligence adequately addresses the unique risks it may pose. Yet, customers and regulators are increasingly probing into […]

Blog, Insights

What is AI Monitoring?

November 12, 2024 trustible

When many technical practitioners hear the term monitoring, they often think of internal monitoring of the AI system.

Blog, Insights

Understanding AI Stakeholders with Trustible’s AI Stakeholder Taxonomy

October 16, 2024 trustible

Trustible developed an AI Stakeholder Taxonomy that can help organizations easily identify stakeholders as part of the impact assessment process for their high-risk use cases.

