Everything you need to know about the NY DFS Insurance Circular Letter No. 7
Jul 22
On July 11, 2024, the New York Department of Financial Services (NY DFS) released its final circular letter on the use of external consumer data and information sources (ECDIS), AI systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts. A circular letter is not a regulation per se, but rather a formalized interpretation of existing laws and regulations by the NY DFS. The finalized guidance comes after the NY DFS sought input on its proposed circular letter, which was published in January 2024.
What is covered by the circular letter?
The circular letter is intended to cover, among other entities, all insurers authorized to write insurance in New York that use ECDIS, AI systems, or predictive models as part of their underwriting and pricing operations. The guidance has a number of key components.
Insurers are expected to implement measures that address unfair or unlawful discrimination. They are expected to support their use of ECDIS with generally accepted actuarial standards of practice, based on actual or reasonably anticipated experience. The circular letter stipulates that insurers should assess whether their ECDIS could serve as a proxy for membership in a protected class, which could result in unfair or unlawful discrimination. Insurers are also expected to conduct and document regularly scheduled qualitative and quantitative assessments of their ECDIS and AI systems for unfair or unlawful discrimination.
The circular letter advises insurers to have governance policies and procedures in place to address their use of ECDIS and AI. Senior managers are expected to provide effective oversight by implementing policies and procedures that include clearly defined roles and responsibilities for those dealing with ECDIS and AI systems, AI-related training programs for new and existing employees, and comprehensive documentation to account for all AI systems, including all ECDIS relied upon for such AI systems.
Insurers should also implement risk management measures at each stage of their AI systems’ lifecycle. These measures include considering the risks posed by AI models, both individually and in the aggregate. Insurers should also ensure that they have qualified employees to oversee risk management activities for their AI systems.
Additionally, the circular letter addresses the appropriate oversight of third-party vendors. Insurers are expected to maintain policies and procedures for the acquisition of, use of, or reliance on ECDIS and AI systems developed or deployed by a third-party vendor. Insurers are also encouraged to include contractual terms that allow the insurer to audit the vendor and that require the vendor’s cooperation with investigations related to the insurer’s use of the vendor’s products or services.
How does this compare to Colorado’s insurance regulation?
There are some key differences between Colorado’s Regulation 10-1-1 and the NY DFS circular letter. As we have previously discussed, Colorado’s regulation pertains only to life insurers, whereas the NY DFS guidance covers all types of insurers. Interestingly, the circular letter is limited to AI systems used for underwriting or pricing, while Colorado’s regulation applies to ECDIS or AI systems used in any insurance practice. Colorado’s rules include annual reporting requirements, which are absent from the NY DFS circular letter; instead, the NY DFS advises insurers to maintain documentation to respond to regulator requests. Colorado also provides some flexibility in how life insurers handle third-party vendor oversight.
In what ways does this align with NAIC’s bulletin on AI risk management?
In December 2023, the National Association of Insurance Commissioners (NAIC) approved a final model bulletin that addresses how insurers should manage their use of AI systems. Eleven states had adopted the model bulletin as of April 2024 (AK, CT, IL, KY, MD, NV, NH, PA, RI, VT, and WA). While New York did not adopt the model bulletin as written, there is some overlap. For instance, both take a similar approach to maintaining documentation, in that neither imposes recurring reporting requirements. Both also suggest including contractual terms requiring third-party vendors to permit audits and cooperate with investigations.
What do I need to do to comply with the circular letter?
There are a few basic steps to help your organization get started with complying with the circular letter. First, ensure that your organization has a comprehensive AI policy. This will help you establish roles and responsibilities, employee expectations and competencies, and ongoing risk management assessment activities. Second, create an inventory of the AI systems used by your organization. Not only is this part of the circular letter’s guidance, but it will also provide valuable insight into how your organization is deploying AI and assist in conducting risk and impact assessments. Finally, establish policies and procedures to manage your third-party vendors. At a minimum, there are some key questions to ask new or existing vendors that can help your organization understand potential risks in your AI supply chain.
How can Trustible help me with this regulation?
Defining relevant organizational policies and inventorying AI use cases in compliance with regulations is the Trustible platform’s core function.
Trustible’s AI use case inventory can help your organization document your uses of AI, as well as identify which systems leverage ‘ECDIS’ and what risk mitigations to put in place per use case. Each use case can be clearly mapped to a model or vendor, ensuring you have full visibility into the supply chain of your AI systems.
Trustible’s policy center recommends relevant best practices for your risk management policies and helps you set up workflows to automatically enforce them. Internal policy experts have mapped each line of the regulation to attributes and requirements in the platform, so that technical and non-technical team members can work together to log relevant information in a single source of truth.
Finally, Trustible monitors all other states for similar AI-related regulatory requirements and can help your organization quickly identify similarities and differences across insurance regulations.