Operationalizing AI Governance in Insurance

Federal regulators have largely focused on issuing guidance and initiating inquiries into AI, whereas state regulators have taken a more proactive stance, addressing AI’s unique challenges within sectors such as insurance. 

The New York Department of Financial Services released a draft guidance letter proposing standards for identifying, measuring, and mitigating potential bias arising from the use of 'External Consumer Data and Information Sources' in underwriting and pricing. The proposal closely mirrors a Colorado regulation that was finalized last year and is already in effect. The National Association of Insurance Commissioners also released its model bulletin on AI risk management last year, and we expect more states to announce similar proposed regulations. Because these rules move through the regulatory process rather than the legislature, they are unlikely to face the same level of uncertainty.

This means that operationalizing AI governance in the insurance sector is no longer optional.

Watch our LinkedIn Live hosted on Wednesday, February 28 at 12:00 P.M. EST / 9:00 A.M. PST for a deep dive on this topic.

Speakers:

Andrew Gamino-Cheong – Co-Founder & CTO, Trustible

Tamra Tyree Moore – VP & Corporate Counsel, Data, Privacy, & AI, Prudential

Shontael (Elward) Starry – AI Ethicist; Data Scientist, Nationwide

Ellie Jurado-Nieves – VP & Assistant General Counsel, Strategic Public Policy Initiatives, Guardian Life

Agenda:

▪ Who “owns” AI Governance? 

▪ Current state of laws and regulations for AI in insurance

▪ Challenges with complying with Colorado Reg 10-1-1

▪ How bias and fairness may differ between different types of insurance

▪ Best practices for how technical and non-technical teams can better collaborate on AI governance
