AI in healthcare isn’t starting from a regulatory vacuum. It’s starting from an environment that already treats digital tools as safety‑critical: medical device rules, clinical trial regulations, GxP controls, HIPAA and GDPR, and payer oversight all assume that failing systems can directly harm patients or distort evidence. That makes healthcare one of the few sectors where AI is being plugged into dense, pre‑existing regulatory schemas rather than waiting for AI‑specific laws to catch up.
Trustible Recognized in the 2025 Gartner® Market Guide for AI Governance Platforms
Trustible, a leading AI governance platform provider, is pleased to be listed as a Representative Vendor in the 2025 Gartner Market Guide for AI Governance Platforms. We believe this is a milestone that signals the start of an inflection point, when AI governance is no longer optional, experimental, or theoretical; it’s now a business imperative […]
AI Governance Best Practices for Healthcare Systems and Pharmaceutical Companies
In the rapidly evolving landscape of healthcare, AI promises to revolutionize patient care, but it also brings significant risks. From algorithmic bias to data privacy breaches, the stakes are high. Effective AI governance is essential to harness the benefits of these technologies while safeguarding patient safety and ensuring compliance with regulations. This article delves into the critical challenges healthcare systems and pharmaceutical companies face, offering practical solutions and best practices for implementing trustworthy AI. Discover how to navigate the complexities of AI in healthcare and protect your organization from potential pitfalls.
Introducing the Trustible AI Governance Insights Center
At Trustible, we believe AI can be a powerful force for good, but it must be governed effectively to align with public benefit. Introducing the Trustible AI Governance Insights Center, a public, open-source library designed to equip enterprises, policymakers, and consumers with essential knowledge and tools to navigate AI’s risks and benefits. Our comprehensive taxonomies cover AI Risks, Mitigations, Benefits, and Model Ratings, providing actionable insights that empower organizations to implement robust governance practices. Join us in transforming the conversation around trusted AI into tangible, measurable outcomes. Explore the Insights Center today!
Everything You Need to Know about California’s New AI Laws
The California legislature has concluded another AI-inspired legislative session, and Governor Gavin Newsom has signed (or vetoed) bills that will have significant impacts on the AI ecosystem. By our analysis, California now leads U.S. states with the most comprehensive set of targeted AI regulations in the country – but now what? The dominant […]
When Zero Trust Meets AI Governance: The Future of Secure and Responsible AI
Artificial intelligence is rapidly reshaping the enterprise security landscape. From predictive analytics to generative assistants, AI now sits inside nearly every workflow that once belonged only to humans. For CIOs, CISOs, and information security leaders, especially in regulated industries and the public sector, this shift has created both an opportunity and a dilemma: how do you innovate with AI at speed while maintaining the same rigorous trust boundaries you’ve built around users, devices, and data?
AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management
As enterprises race to deploy AI across critical operations, especially in highly-regulated sectors like finance, healthcare, telecom, and manufacturing, they face a double-edged sword. AI promises unprecedented efficiency and insights, but it also introduces complex risks and uncertainties. Nearly 59% of large enterprises are already working with AI and planning to increase investment, yet only about 42% have actually deployed AI at scale. At the same time, incidents of AI failures and misuse are mounting; the Stanford AI Index noted a 26-fold increase in AI incidents since 2012, with over 140 AI-related lawsuits already pending in U.S. courts. These statistics underscore a growing reality: while AI’s presence in the enterprise is accelerating, so too are the risks and scrutiny around its use.
Why AI Governance is the Next Generation of Model Risk Management
For decades, Model Risk Management (MRM) has been a cornerstone of financial services risk practices. In banking and insurance, model risk frameworks were designed to control the risks of internally built, rule-based, or statistical models such as credit risk models, actuarial pricing models, or stress testing frameworks. These practices have served regulators and institutions well, providing structured processes for validation, monitoring, and documentation.
Should the EU “Stop the Clock” on the AI Act?
The European Union (EU) AI Act became effective in August 2024, after years of negotiations (and some drama). Since entering into force, the AI Act’s implementation has been somewhat bumpy. The initial set of obligations for general-purpose AI (GPAI) providers took effect in August 2025, but the voluntary Code of Practice faced multiple drafting delays. The finalized version was released with less than a month to go before GPAI providers needed to comply with the law.
What is the “Perfect” AI Use Case Intake Process?
Last week at the IAPP AI Governance Global conference in Boston, Trustible brought together AI governance leaders from Leidos and Nuix to explore a deceptively tactical but mission-critical question: What does the “perfect” AI intake process look like?