Everything you need to know about Colorado SB 205
May 20
What is Colorado SB 205?
On May 17, 2024, Colorado Governor Jared Polis signed the Consumer Protection for Artificial Intelligence bill (SB 205) into law, the first comprehensive state AI law imposing rules on certain high risk AI systems. The law requires that AI used to support ‘consequential decisions’ in certain use cases be treated as ‘high risk’ and subject to a range of risk management and reporting requirements. The new rules will come into effect on February 1, 2026.
The bill moved quickly through the Colorado Senate and House, having been introduced only on April 10, 2024. While Governor Polis ultimately signed the bill into law, he expressed some concerns about its impact on the AI industry and urged lawmakers to ‘fine tune the provisions’ before it comes into effect.
Who is impacted by it?
The law applies to organizations that develop or deploy a high risk AI system in Colorado, and identifies specific areas where using AI to assist in a ‘consequential decision’ will be considered high risk. These areas include: access to education, employment opportunities, financial approvals, access to public benefits, housing, insurance, and legal services. The bill defines the boundaries for what constitutes AI, consequential decisions, and the ways in which AI may be used as a ‘substantial factor,’ although there is not yet any case law precedent to test these definitions. Deployers with fewer than 50 employees that meet additional criteria are exempt from keeping a risk management program, conducting impact assessments, and providing a public statement.
Colorado residents are also afforded certain rights under the law. Those rights include receiving certain information prior to a high risk system making, or being a substantial factor in making, a consequential decision, as well as contesting adverse consequential decisions from the use of a high risk AI system.
What is in the law?
The law exclusively focuses on ‘high risk AI systems’, and imposes the following:
Duty of Care to avoid algorithmic discrimination
This sets a clear expectation that developers and deployers of high risk systems have an obligation to assess their systems for potential discrimination, harm and impacts.
Requirement for a risk management program
Requires developers or deployers of high risk systems to implement a risk management program; the legislation identifies the NIST AI RMF and ISO 42001 as appropriate standards to satisfy this requirement, as well as allows for compliance with similarly stringent standards (i.e., the EU AI Act). The risk management program requirement includes conducting an impact assessment on a regular basis, including after substantial modifications are made to an AI system.
Consumer transparency requirements
The bill requires that high risk AI system deployers inform consumers when AI will be used, provide high level details about the system and its intended purpose, and grant users the ability to opt out of, or provide corrections to, any personal information about them that will be used in the system. These provisions do not require the AI model itself to be explainable, but rather that the data used as inputs be disclosed. The provisions also align with Colorado’s existing privacy law.
Incident Reporting obligations to the Colorado Attorney General
The bill requires deployers to provide the Colorado Attorney General with a report of any algorithmic discrimination stemming from a high risk AI system within 90 days of its detection. Deployers or developers can avoid enforcement actions if they discover (internally or through red-teaming) and cure violations of the law, and are otherwise in compliance with the NIST AI RMF or another nationally or internationally accepted risk management framework. Currently, the incident obligations are limited to discrimination only, not other forms of harm.
Enforcement
Enforcement rests solely with the Colorado Attorney General’s Office and does not prescribe fines or penalties. The Attorney General is empowered to promulgate rules, which will likely address specific consequences for violating the law. The Attorney General will also have the ability to request documentation to verify that an organization is in compliance.
How similar is this to the EU AI Act?
The core focus of SB 205 is very similar to the EU’s impending AI Act. Both primarily target ‘high risk’ AI systems, define them in a similar way, and set out risk management requirements for their deployment. The EU AI Act also contains additional provisions for general purpose AI models (similar provisions were struck from SB 205 during debate), and imposes a stronger enforcement regime including fines, implements a structure for auditing bodies for certain high risk uses, and creates a central office to oversee and set AI standards. Overall, the EU AI Act is more stringent, with high risk systems also being subject to data governance and quality testing standards that are not specifically addressed under the Colorado law. However, SB 205 has a clearer expectation for what information must be provided to consumers if AI is involved in making a consequential decision, which is similar to information requests that are covered under the EU data protection regulation (GDPR).
How does this compare to other state and federal legislation and regulations?
The Colorado law is similar to legislation proposed in several other states, including California and Connecticut, although progress and debate in those states have been slower. The Colorado law also resembles some proposals at the federal level, although the U.S. Senate has put a larger emphasis on boosting AI innovation and development over AI safety legislation. The risk management requirements echo recent work done by Colorado’s Department of Insurance, which finalized a regulation in 2023 requiring life insurance providers leveraging AI and certain kinds of data to implement a risk management system and report on it to the Insurance Commissioner.
How can I get started preparing for the law’s implementation?
For any organization doing business in Colorado, the first step will be identifying affected AI systems. This involves creating an inventory of AI systems and assessing each one against the high risk definition laid out in the law. In addition, the law specifically points to both the NIST AI Risk Management Framework and ISO 42001 as acceptable standards for the risk management program. Organizations that adopt these standards will satisfy that compliance requirement under the Colorado law, and adoption can also help future-proof organizations against additional impending legislation.
How can Trustible help me with this regulation?
Trustible can help both developers and deployers fully comply with the core requirements of Colorado SB 205. Trustible helps organizations adopt both the NIST AI Risk Management Framework, and ISO 42001, including support for creating compliant policies, identifying relevant risks for each AI system, and assisting with the heavy documentation requirements. In addition, Trustible supports guided workflows for the required impact assessments, annual reviews, and analyzing potential AI incidents and generating the appropriate reports for them.
Contact Us to learn more about how Trustible can help you comply with Colorado SB 205.