
State of AI Governance

Reflections on the Status of Regulations and Management of AI Risk

By: Anthony Habayeb, CEO & Co-Founder of Monitaur


At this point, we have all been introduced to or involved in discussions about the benefits and challenges of using artificial intelligence (AI) in insurance. During the past 12 months in particular, we’ve seen new laws, regulations, and guidance from federal and state governments.


But exactly where are we in implementing AI governance and risk management practices? This was a burning question at the NAIC summer meeting in Chicago. As I travel home and reflect on several discussions, here are a few things that we can surmise at this point:


First, there seem to be three stages of maturity we can use to summarize the status of AI governance adoption: Define, Manage, and Automate.


  • “Define” is where regulators and the regulated define the rules, policies, roles, and requirements for AI governance.

  • “Manage” comes next: all stakeholders are clear on what is expected and can work toward conformity with the defined expectations.

  • “Automate” is where organizations show the most progress in their processes, even using software or tools to streamline and manage the pieces established at the Define stage.


Current Industry Status: 80% Define / 20% Manage / 0% Automate


As much as it might feel like AI adoption and regulation have advanced in the past couple of years, I can confidently tell you from the front lines…we’re still just getting started.


Welcome to the marathon…not the sprint (and that’s okay!)


As of this writing, 17 states have adopted the NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (the “NAIC AI Bulletin”). Several other states have established AI-specific rules through new laws (like Colorado SB21-169) and circular letters (NY DFS).


These developments give helpful directional guidance on the types of risk management processes and structures regulators expect; however, much Define-stage work remains to help both industry and regulators move forward.


The following list summarizes some of the definitional gaps that will need greater clarity in the coming months and years before regulators and industry can confidently invest in operationalizing and managing AI risk.


  • What does “good conformity” or compliance with AI risk management regulations and guidance look like?

    • What sort of information or evidence should be provided in an attestation or filing to a regulator regarding an insurer’s AI risk management program?

    • What sorts of questions, tests, or conversations support verifying the actual operational management and execution of AI risk management policies?

    • How much time does a company have to implement these new requirements, and what happens when a gap is found? How is materiality evaluated? What opportunities for remediation exist?


  • How should oversight of non-carrier AI (third parties) be managed?

    • To what extent should, or can, carriers be held fully responsible and accountable for vendor-based AI?

    • Is there a need for regulators, or for independent roles and structures, to evaluate and perhaps even certify these third parties on behalf of all regulators and carriers?


  • Where does AI risk management start and stop relative to ECDIS (external consumer data and information sources) or other non-traditional data usage?

    • AI has made us more aware of the risks and opportunities of new data sources being used at scale to affect consumers and industry, but the data and impact questions are independent of whether AI, underwriters, or rule-based systems use the data.

    • Current regulatory developments either don’t clearly address expectations for ECDIS management or attempt to fold those expectations into broader AI risk guidance:

      • The NAIC AI Bulletin addresses expectations for organizational and system risk management but does not address what needs to be proven regarding data testing and/or impact assessments.

      • Colorado and NY blend AI risk management with testing, but the exact expectations are unclear: which tests are acceptable, and how might a carrier demonstrate the balance between a data element’s predictive value and its impact?

    • How do we chart a path forward, focused squarely on ECDIS and non-traditional data use, that supports responsible betterment of the industry while also protecting consumers, irrespective of whether the data is used by an AI system?


I’ve observed, and been lucky to contribute to, the broader industry coming together over the past several years to make progress on this topic. We are still in the early stages of the journey, but I believe all parties are moving together toward a positive future.


 


Anthony Habayeb is the CEO and Co-Founder of Monitaur, an AI governance company helping insurance companies and vendors build, manage, and automate AI governance programs. His “IRES Featured Member” interview appears in the Spring 2024 issue of The Regulator.
