16 Types of AI Governance Platforms, Explained

A buyer’s guide to what “AI governance” actually means across different tools, and what to look for when it matters.
How Leidos Made AI Governance the Foundation for AI Innovation

“Trustible turbocharged our use case throughput, unlocking greater speed in our AI deployment across the organization.” – Geoff Schaefer, VP of AI Strategy and Governance, Leidos. Leidos builds and deploys AI systems that operate in some of the most regulated, mission‑critical environments in the world. As its AI portfolio expanded across defense, intelligence, civil, and […]
How Thalamus Is Setting a New Standard for Trustworthy AI in Medical Education with Trustible

ARLINGTON, VA | APR 3, 2026 — Thalamus, the leading platform for Graduate Medical Education (GME) recruitment, has selected Trustible as its AI governance platform. Thalamus will use Trustible to structure and operationalize how it governs AI across its products and operations, giving the GME community transparent, verifiable evidence of how AI is governed in […]
Trustible Partners with Coalition for Health AI to Accelerate Responsible AI Adoption in Healthcare

Trustible, a leading provider of AI governance software for enterprises, today announced it has joined the Coalition for Health AI (CHAI) Partner Program, helping to set standards for how AI models can be responsibly governed. CHAI is a provider-led coalition committed to developing industry best practices and frameworks to further innovation, safety, and security for […]
Leidos and Trustible Launch Joint Initiative to Redefine AI Governance with Agents

Collaboration applies proven AI principles to help automate governance, reduce friction, and support AI innovation and adoption across government missions. Arlington, Va. – FEB. 4, 2026 — AI governance is too often a brake on innovation. Trustible and Leidos (NYSE: LDOS) are working to change that. Today, the companies announced a partnership to redefine AI […]
A Pragmatic Blueprint for AI Regulation

An AI startup’s proposal for fair, pro-growth, pro-AI, non-partisan AI regulation. AI is one of the most transformative technologies of the century, with the potential to accelerate scientific research, improve healthcare outcomes, and help small businesses compete with larger enterprises. The United States currently leads the world in AI development. Yet despite this leadership, a […]
Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database

Trustible, a leading provider of AI governance software for enterprises, today announced a partnership with the Responsible AI Collaborative (RAIC), the independent nonprofit behind the AI Incident Database (AIID). Trustible is leading RAIC’s inaugural cohort of corporate sponsors, and will integrate AIID incident data directly into its platform and collaborate with RAIC on research into […]
Everything You Need to Know About New York’s RAISE Act

Last year, New York became the second state to enact a frontier model disclosure law when Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The new law requires frontier model providers to disclose certain safety processes for their models and report certain safety incidents to state regulators, and it bears many similarities to California’s slate of AI laws passed last fall. The RAISE Act will take effect on January 1, 2027. This article covers who must comply with the RAISE Act, what transparency obligations the law creates, and how it will be enforced.
Everything You Need to Know About the Executive Order on a National AI Policy Framework (2025)

On December 11, 2025, President Trump signed an Executive Order directing the federal government to build a “minimally burdensome” national framework for AI and to push back against state AI laws the Administration views as harmful to innovation. The EO takes a novel approach via Executive Branch authority, creating an AI Litigation Task Force and asking the U.S. Department of Commerce to evaluate state AI laws and identify “onerous” ones (explicitly citing laws that require models to “alter their truthful outputs”).