Everything you need to know about the Colorado AI Life Insurance Regulation (Regulation 10-1-1)

What is Colorado Regulation 10-1-1?

In July 2021, Governor Jared Polis signed SB 21-169 into law, which directed the Colorado Division of Insurance (CO DOI) to adopt risk management requirements that prevent algorithmic discrimination in the insurance industry. After two years and several revisions, a final risk management regulation for life insurance providers was officially published in September 2023 and took effect on November 14, 2023.

Under the final regulation, applicable life insurance providers that use ‘external consumer data and information sources’ (ECDIS) as a component of the life insurance process (e.g., setting policy premiums or reviewing claims) must set up a risk management program to ensure that their use of ECDIS does not result in unfair discrimination. Life insurance providers must also submit reports to the CO DOI on their compliance with the regulation.

The CO DOI is also expected to issue quantitative testing requirements for life insurers’ ECDIS models. Comments on the proposed draft were due on October 26, 2023, and are currently under review by the CO DOI.

Who is impacted by it?

The CO life insurance regulation applies to any life insurance provider that does business in Colorado. In addition, any vendors or subcontractors used by a life insurance provider (e.g., organizations handling underwriting or processing claims) will be subject to the regulation.

While the CO life insurance regulation focuses on life insurance providers, the original legislation covered all forms of insurance. For practical purposes, the CO DOI started with life insurance in order to adapt the risk management and testing requirements to a specific line of business; however, it has also initiated the consultation process for auto insurance. Eventually, all forms of insurance products in Colorado will be subject to the regulation.

What exactly is ‘ECDIS’?

ECDIS is defined as any piece or source of data about a customer that is ‘non-traditional’ to the insurance industry or to actuarial science. This may include things such as social media habits, educational attainment, or biometric information. Many publicly available data sources on the internet may be low quality, unrepresentative of the population, or act as proxies for protected attributes. Providers will therefore be required to have a structured risk management process for using such data.
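
One practical concern behind this requirement is proxy discrimination: a facially neutral field can correlate strongly with a protected attribute. Below is a minimal sketch of how a data science team might screen for this, assuming applicant data lives in a pandas DataFrame; the column names and the 0.3 flag threshold are illustrative assumptions, not values taken from the regulation.

```python
# A minimal sketch of a proxy screen: measure how strongly a candidate
# ECDIS feature is associated with a protected attribute.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Cramer's V association between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(a, b)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return (chi2 / (n * min_dim)) ** 0.5

applicants = pd.DataFrame({
    "education_level": ["HS", "BA", "BA", "HS", "MA", "HS"],
    "race":            ["A",  "B",  "B",  "A",  "B",  "A"],
})

score = cramers_v(applicants["education_level"], applicants["race"])
if score > 0.3:  # illustrative flag threshold, not a regulatory value
    print(f"education_level may act as a proxy for race (Cramer's V = {score:.2f})")
```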

The regulation does not spell out precisely what qualifies as ECDIS, but it does provide a non-exhaustive list of examples and excludes ‘traditional underwriting factors.’ While the regulation itself does not define ‘traditional underwriting factors,’ the proposed quantitative testing regulation includes a list of them; if that regulation is adopted, any data element outside the list could qualify as ECDIS.
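
In practice, this makes the data inventory a screening exercise: anything outside the traditional-factors list deserves review. Here is a hedged sketch, assuming the proposed list is maintained as an internal allowlist; the entries below are common examples (medical history, MIB reports, motor vehicle records), not the DOI’s official list.

```python
# Screen data fields against a "traditional underwriting factors" allowlist;
# anything not on the list is flagged as potential ECDIS for review.
TRADITIONAL_FACTORS = {
    "medical_history",
    "family_medical_history",
    "occupation",
    "motor_vehicle_record",
    "mib_report",
}

def flag_potential_ecdis(data_fields: list[str]) -> list[str]:
    """Return fields outside the allowlist that need ECDIS review."""
    return [f for f in data_fields if f not in TRADITIONAL_FACTORS]

fields_in_model = ["medical_history", "social_media_activity", "education_level"]
print(flag_potential_ecdis(fields_in_model))
# -> ['social_media_activity', 'education_level']
```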

What do I need to do to comply with the regulation?

There are three main components of the regulation.

First, life insurers must create and maintain, in perpetuity, a risk management framework to oversee and govern any use of ECDIS. This requires life insurers to define risk management policies and processes wherever ECDIS is currently in use or will be in the foreseeable future. It also involves designating a board-level committee to oversee risk management functions, as well as creating an internal cross-functional oversight team.

Second, life insurance providers need to take inventory of their AI use cases as they pertain to ECDIS. This may involve a large, multi-team effort to establish whether any ‘non-traditional’ data is being used in any insurance process (see the sketch below).

Finally, life insurers must submit regular reports to the CO DOI. The first report, due in June 2024, requires life insurers to disclose their progress in adopting the regulation. Then, starting in December 2024 and every year thereafter, life insurance providers must submit an annual report with a narrative summary of their risk management practices surrounding ECDIS.
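
As an illustration of the inventory component, here is a minimal sketch of what one inventory record might capture, assuming a simple internal process; the field names and example values are illustrative, not a structure prescribed by the regulation.

```python
# One AI use case inventory record, tagging data sources and ECDIS status.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                   # e.g., "accelerated underwriting model"
    insurance_process: str      # e.g., "underwriting", "pricing", "claims"
    data_sources: list[str] = field(default_factory=list)
    uses_ecdis: bool = False    # set during the data source review
    risk_mitigations: list[str] = field(default_factory=list)
    owner: str = ""             # accountable team or individual

inventory = [
    AIUseCase(
        name="premium pricing model",
        insurance_process="pricing",
        data_sources=["medical_history", "credit_attributes"],
        uses_ecdis=True,
        risk_mitigations=["proxy testing", "annual fairness review"],
        owner="actuarial-data-science",
    ),
]

# Use cases flagged as using ECDIS feed the narrative report to the CO DOI.
print([uc.name for uc in inventory if uc.uses_ecdis])
```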

Life insurers should also be on notice that, once the proposed testing regulation is adopted, they will be required to test for unfair algorithmic discrimination. The draft is expected to address specific insurance processes (e.g., determining policy premiums) and to stipulate acceptable disparity thresholds between protected categories. Data scientists should be trained to identify algorithmic discrimination according to those testing requirements, and life insurers should anticipate documenting each step to demonstrate compliance.
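
Until the final testing rules are published, one widely used disparity metric gives a sense of what such testing can look like: the adverse impact ratio, i.e., the ratio of favorable-outcome rates between groups. A hedged sketch follows; the four-fifths (0.8) threshold is a convention borrowed from employment law, shown only as an illustration, since the CO testing regulation will define its own metrics and thresholds.

```python
# Compare favorable-outcome rates between two groups of applicants.
def approval_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = approved at standard rates
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative four-fifths threshold, not the CO DOI's
    print("disparity exceeds the illustrative threshold; investigate the model")
```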

How does this align with NAIC’s draft bulletin on AI risk management?

In October 2023, the National Association of Insurance Commissioners (NAIC) circulated a draft model bulletin about the use of ‘AI Systems’ by insurers. The latest draft calls for a similar risk management framework as the CO life insurance regulation, but applies it to any insurance-related use of AI, not just those uses related to ECDIS. The most recent draft is also more prescriptive about the required risk management practices (e.g., documentation for internal controls at each stage of the AI system’s life cycle) and the scope is significantly broader, such as including requirements for third-party AI vendors. While state DOIs are free to implement their own regulations with regard to AI systems, the NAIC model bulletin, once finalized, could be adopted by state DOIs with few, if any, modifications.

How does it compare to other proposed regulations and AI frameworks like the NIST AI Risk Management Framework, ISO 42001, and the EU AI Act?

Many regulations and guidelines proposed within the US and internationally call for organizations to adopt a risk management framework for their AI system(s). For instance, the EU AI Act requires a risk management framework for ‘high risk’ AI systems, which would include insurance. Additionally, both the NIST AI Risk Management Framework (RMF) and the proposed ISO 42001 standard provide detailed policies, processes, and controls for organizations to manage AI risks.

The emerging landscape of AI risk management frameworks is also showing signs of convergence and overlap, such that compliance with one could satisfy requirements elsewhere. Specifically, the NAIC bulletin highlights the NIST AI RMF as the model framework that insurance providers should consider adopting. By implementing the NIST AI RMF, insurers could satisfy requirements in the CO life insurance regulation, as well as many of the expected provisions in the NAIC bulletin. Continued convergence with, and adoption of, existing frameworks like the NIST AI RMF can help insurance providers hedge against future AI-related regulations.
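
To make the convergence idea concrete, a compliance team might maintain a simple crosswalk tagging each CO requirement with the NIST AI RMF functions (Govern, Map, Measure, Manage) that cover it. The mapping below is a hypothetical example, not an official crosswalk.

```python
# A hypothetical crosswalk from CO regulation requirements to NIST AI RMF functions.
CROSSWALK = {
    "board-level oversight committee":   ["Govern"],
    "ECDIS / AI use case inventory":     ["Map"],
    "testing for unfair discrimination": ["Measure"],
    "risk mitigations per use case":     ["Manage"],
    "annual narrative report to CO DOI": ["Govern", "Manage"],
}

for requirement, functions in CROSSWALK.items():
    print(f"{requirement:40s} -> {', '.join(functions)}")
```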

How can Trustible help me with this regulation?

Defining relevant organizational policies and inventorying AI use cases in compliance with regulations are the Trustible platform’s core functions.

Trustible’s software platform includes the CO life insurance regulation as a preconfigured framework. Trustible’s AI use case inventory can help your organization document its uses of AI, identify which systems leverage ‘ECDIS,’ and determine what risk mitigations to put in place for each use case.

Trustible’s policy center recommends relevant best practices for your risk management policies and helps you set up workflows to automatically enforce them. Internal policy experts have mapped each line of the regulation to attributes and requirements in the platform, so that technical and non-technical team members can work together to log relevant information in a single source of truth. Finally, Trustible monitors all other states for similar AI-related regulatory requirements and can help your organization quickly identify how those requirements align with or differ from the CO life insurance regulation.
