On March 18, 2025, Wisconsin joined twenty-three other jurisdictions that have adopted the NAIC Model Bulletin on the Use of Artificial Intelligence (AI) Systems by Insurers. The Wisconsin Office of the Commissioner of Insurance (OCI) issued the Bulletin without any significant changes from the original version issued by the National Association of Insurance Commissioners (NAIC). Here’s what insurers in Wisconsin need to know.
Why did the OCI issue the Bulletin?
The OCI issued the Bulletin to, in its words, remind all insurers that “decisions or actions impacting consumers that are made or supported by” AI (and other advanced technologies) must still “comply with all applicable insurance laws and regulations.” This includes laws that address unfair trade practices and unfair discrimination. In other words, use of AI doesn’t insulate insurers from their obligation to comply with existing laws and regulations—and the Bulletin’s various requirements are meant to promote compliance with those laws, even when insurers are using emerging AI technologies.
Where else is the Bulletin in effect?
As of this writing, twenty-three other jurisdictions have adopted the NAIC’s Bulletin in full or with little customization: Alaska, Arkansas, Connecticut, Delaware, the District of Columbia, Illinois, Iowa, Kentucky, Maryland, Massachusetts, Michigan, Nebraska, Nevada, New Hampshire, New Jersey, North Carolina, Oklahoma, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, and West Virginia.
What does the Bulletin require?
The Bulletin identifies principles-based policies, procedures, and expectations; it is not intended to prescribe specific practices or specific documentation requirements for insurers that use AI. Rather, such insurers are permitted to demonstrate compliance with applicable law in their use of AI through means other than those described in the Bulletin. The Bulletin identifies policies and procedures relating to governance, risk management, and internal controls that would apply to the life cycle of the relevant insurance product(s) and to the AI systems an insurer uses. These expectations include:
- Formalized Written AI Program: Insurers are expected to develop, implement, monitor, and maintain a written program governing their use of AI wherever AI makes, supports, or aids decision-making relating to regulated insurance practices, including addressing governance, mitigation of adverse consumer outcomes, risk management, internal audit functions, and third-party vendor management. The AI program should (a) consider the insurer’s use of AI across the entire insurance life cycle; (b) include appropriate processes and procedures; (c) be tailored to the insurer’s specific use of and reliance on AI; and (d) consider and be aligned with the potential risk to consumers arising from the insurer’s use of AI.
- Governance: Insurers should ensure that their AI governance accountability structure is clearly defined and composed of members with appropriate authority within the chain of command who are held accountable by the insurer’s board or a board committee.
- Consumer Notice: Insurers should provide notice to consumers that AI systems are in use. Additionally, the program should provide access to appropriate levels of information related to the stage of the insurance life cycle and the use of AI at that stage.
- Risk Management and Internal Controls: Insurers should have documentation that outlines the insurer’s risk identification, mitigation, and management framework and internal controls for AI systems. The internal controls should address: (a) oversight and approval of development, adoption or acquisition of AI systems, including identification of constraints and controls; (b) data practices and accountability procedures; (c) management and oversight of predictive models; (d) validation, testing, and retesting to assess generalization of outputs made by the AI systems; (e) how the insurer addresses the protection of non-public information; (f) data and record retention; and (g) (if using predictive models) a narrative description of the predictive model's intended goals and objectives.
- Third-Party AI Systems and Management: Insurers are expected to conduct due diligence on their vendors, including adopting standards, policies, procedures, and protocols governing how the insurer assesses, acquires, uses, and relies on (a) data obtained from third parties to develop AI systems; and (b) the AI systems of third parties. These procedures and protocols should address contractual requirements with such third parties and how the insurer ensures performance of those contractual requirements. For example, the contracts should include audit rights for the insurer.
- Regulatory Inquiries: The Bulletin notes that, regardless of the existence, size, or scope of the insurer’s AI program, the OCI may request information from the insurer, including information relating to the insurer’s use, deployment, and development of AI systems.
What should Wisconsin insurers do right now?
Insurers operating in Wisconsin should be prepared to demonstrate that they have sufficient safeguards in place to ensure the use of AI does not result in violations of existing laws and regulations. Compliance with the Bulletin’s expectations will place insurers in a strong position to avoid inadvertent violations of insurance law and proactively demonstrate their compliance to the OCI. At a minimum, insurers should implement a written AI program that addresses the Bulletin’s expectations relating to appropriate governance structures, consumer notices, risk- and third-party management programs, and a plan for responding to inquiries from regulators.
Godfrey & Kahn continues to track these state-by-state variations and can help insurers develop an AI program and methods for complying with each of the Bulletin’s expectations. For more information, please contact a member of our Data Privacy & Cybersecurity or Insurance practices.