On December 11, 2025, President Trump signed an Executive Order directing the federal government to build a “minimally burdensome” national framework for AI and to push back against state AI laws the Administration views as harmful to innovation. The EO takes a novel approach to Executive Branch authority, creating an AI Litigation Task Force and directing the U.S. Department of Commerce to evaluate state AI laws and identify “onerous” ones (explicitly citing laws that require models to “alter their truthful outputs”).
The Path to Agentic Governance: Innovations, Lessons Learned, and Our 2025 Milestones
In 2025, Trustible delivered the continuous, scalable programs needed for faster AI adoption, just as AI governance itself shifted from principles and pilots to real production.
Our strengthened intelligence, collaboration, automation, and change management capabilities helped enterprises deploy AI deeper into workflows, decisions, and customer experiences.
5 AI Governance Trends Heading into 2026
AI has moved from experimental pilots to systems that shape real-world decisions, customer interactions, and mission outcomes. Organizations across sectors, including financial services, healthcare, insurance, retail, and the public sector, now depend on AI to run core operations and deliver better experiences. Their enthusiasm for adopting the technology responsibly is growing as well.
Trustible Recognized in the 2025 Gartner® Market Guide for AI Governance Platforms
Trustible, a leading AI governance platform provider, is pleased to be listed as a Representative Vendor in the 2025 Gartner Market Guide for AI Governance Platforms. We believe this milestone signals an inflection point: AI governance is no longer optional, experimental, or theoretical; it’s now a business imperative […]
AI Governance Best Practices for Healthcare Systems and Pharmaceutical Companies
In the rapidly evolving landscape of healthcare, AI promises to revolutionize patient care, but it also brings significant risks. From algorithmic bias to data privacy breaches, the stakes are high. Effective AI governance is essential to harness the benefits of these technologies while safeguarding patient safety and ensuring compliance with regulations. This article delves into the critical challenges healthcare systems and pharmaceutical companies face, offering practical solutions and best practices for implementing trustworthy AI. Discover how to navigate the complexities of AI in healthcare and protect your organization from potential pitfalls.
Introducing the Trustible AI Governance Insights Center
At Trustible, we believe AI can be a powerful force for good, but it must be governed effectively to align with public benefit. Introducing the Trustible AI Governance Insights Center, a public, open-source library designed to equip enterprises, policymakers, and consumers with essential knowledge and tools to navigate AI’s risks and benefits. Our comprehensive taxonomies cover AI Risks, Mitigations, Benefits, and Model Ratings, providing actionable insights that empower organizations to implement robust governance practices. Join us in transforming the conversation around trusted AI into tangible, measurable outcomes. Explore the Insights Center today!
When Zero Trust Meets AI Governance: The Future of Secure and Responsible AI
Artificial intelligence is rapidly reshaping the enterprise security landscape. From predictive analytics to generative assistants, AI now sits inside nearly every workflow that once belonged only to humans. For CIOs, CISOs, and information security leaders, especially in regulated industries and the public sector, this shift has created both an opportunity and a dilemma: how do you innovate with AI at speed while maintaining the same rigorous trust boundaries you’ve built around users, devices, and data?
Why AI Governance is the Next Generation of Model Risk Management
For decades, Model Risk Management (MRM) has been a cornerstone of financial services risk practices. In banking and insurance, model risk frameworks were designed to control the risks of internally built, rule-based, or statistical models such as credit risk models, actuarial pricing models, or stress testing frameworks. These practices have served regulators and institutions well, providing structured processes for validation, monitoring, and documentation.
Should the EU “Stop the Clock” on the AI Act?
The European Union (EU) AI Act became effective in August 2024, after years of negotiations (and some drama). Since entering into force, the AI Act’s implementation has been somewhat bumpy. The initial set of obligations for general-purpose AI (GPAI) providers took effect in August 2025, but the voluntary Code of Practice faced multiple drafting delays. The finalized version was released with less than a month to go before GPAI providers needed to comply with the law.
Trustible and Carahsoft Announce Strategic Partnership to Bring AI Governance Platform to Government Agencies
Collaboration Enables Streamlined Access to AI Governance Solutions for the Public Sector ARLINGTON, Va., and RESTON, Va. – August 2025 – Trustible, a leader in AI governance, risk and compliance, and Carahsoft Technology Corp., The Trusted Government IT Solutions Provider®, today announced a strategic partnership. Under the agreement, Carahsoft will serve as Trustible’s Master Government […]