How Leidos Made AI Governance the Foundation for AI Innovation


“Trustible turbocharged our use case throughput, unlocking greater speed in our AI deployment across the organization.” – Geoff Schaefer, VP of AI Strategy and Governance, Leidos. Leidos builds and deploys AI systems that operate in some of the most regulated, mission‑critical environments in the world. As its AI portfolio expanded across defense, intelligence, civil, and […]

Leidos and Trustible Launch Joint Initiative to Redefine AI Governance with Agents


Collaboration applies proven AI principles to help automate governance, reduce friction, and support AI innovation and adoption across government missions. Arlington, Va. – FEB. 4, 2026 — AI governance is too often a brake on innovation. Trustible and Leidos (NYSE: LDOS) are working to change that. Today, the companies announced a partnership to redefine AI […]

A Pragmatic Blueprint for AI Regulation

An AI startup’s proposal for fair, pro-growth, pro-AI, non-partisan AI regulation. AI is one of the most transformative technologies of the century, with the potential to accelerate scientific research, improve healthcare outcomes, and help small businesses compete with larger enterprises. The United States currently leads the world in AI development. Yet despite this leadership, a […]

Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database

Trustible, a leading provider of AI governance software for enterprises, today announced a partnership with the Responsible AI Collaborative (RAIC), the independent nonprofit behind the AI Incident Database (AIID). Trustible is leading RAIC’s inaugural cohort of corporate sponsors, and will integrate AIID incident data directly into its platform and collaborate with RAIC on research into […]

Everything You Need to Know About New York’s RAISE Act

New York became the second state last year to enact a frontier model disclosure law when Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The new law requires frontier model providers to disclose certain safety processes for their models and report certain safety incidents to state regulators, with many similarities to California’s slate of AI laws passed last fall. The RAISE Act will take effect on January 1, 2027. This article covers who must comply with the RAISE Act, what transparency obligations the law creates, and how the law will be enforced.

Everything You Need to Know About the Executive Order on a National AI Policy Framework (2025)

On December 11, 2025, President Trump signed an Executive Order directing the federal government to build a “minimally burdensome” national framework for AI and to push back against state AI laws the Administration views as harmful to innovation. The EO takes a novel approach via Executive Branch authority, creating an AI Litigation Task Force and asking the U.S. Department of Commerce to evaluate state AI laws and identify “onerous” laws (explicitly citing laws that require models to “alter their truthful outputs”).