Two weeks ago, Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s cybersecurity, privacy and artificial intelligence practice group, emailed me a note that said: “Here is where the AI governance laws really start.”
He linked to the Colorado Division of Insurance’s draft Algorithm and Predictive Model Governance Regulation, which was released on February 1. The draft rules impose requirements on Colorado-licensed life insurance companies that use external data and AI systems in insurance practices. Their development was mandated by Colorado Senate Bill 21-169, passed in 2021, which protects consumers from unfair discrimination in insurance practices.
Gesser said that these rules, specific to life insurance (for now), are the first set of AI and big data governance rules in the U.S. that will influence a boatload of other state, federal and international AI regulations coming down the pike.
In a new blog post, Gesser and his colleagues wrote: “Colorado has taken vague principles of AI ethics, such as accountability, fairness, transparency, etc., and turned them into the concrete requirements for policies, governance, and technical controls.”
Experts have been waiting for clear AI governance rules
The Colorado AI rules, Gesser explained, are the equivalent of the game-changing cybersecurity rules that were passed by the New York Department of Financial Services (DFS) in 2017 — that helped create the entire cybersecurity regulatory framework that exists today across industries.
“Everybody who is living and breathing this stuff has been waiting for this in AI,” he said.
With New York’s DFS cyber rules, he explained, “you had a clear set of governance rules around access controls, encryption, multi-factor authentication for remote access, for example — all the things that are now very common.” Those rules, while they applied only to financial institutions licensed by the New York DFS, demonstrated that they could be implemented and pulled the whole market towards them, becoming industry best practices.
While the EU AI Act is being developed and is expected to also be highly influential, it likely won’t go into effect until 2025, Gesser pointed out, adding that the NIST framework released last week is “pretty high level” and lacks clarity and specifics. And as ChatGPT and other generative AI tools explode into the public consciousness, “we can’t really wait until 2025 to get guidance on some of the AI stuff that’s happening now.”
A ‘major leap forward’ in AI governance
As a result, Colorado went through a lengthy process of engaging with stakeholders. “The insurance industry has been saying, we have to make decisions on real models all the time, we want to get this right,” said Gesser. “So if you give us guidelines, assuming they’re not crazy, we’ll adhere to them but we’re not going to invest hundreds of millions of dollars on models and then you say, well, they need to be fair, and we’re going to be like, what does that mean?”
The draft regulations, he explained, are quite detailed, requiring an inventory of AI models, a set of governing principles, reporting and oversight, the creation of a cross-functional committee that looks at mitigating high-risk AI, and documentation of efforts.
“This is a major leap forward,” Gesser said. “You have, for the first time, a set of pretty concrete rules to follow that are a mix of technical controls and governance and policies that you can benchmark against.”
Why is life insurance leading in AI governance?
Life insurance is “the smart place to start” when it comes to AI governance, said Gesser, because “it’s got all the things that you need for having a good sensible regulatory regime.”
For one thing, there’s a lot of data involved in life insurance analysis. “So to the extent you’re worried that there are going to be novel datasets that are going to be used, and maybe those datasets are proxies for things that you shouldn’t consider, life insurance is a pretty good one to be concerned about,” he explained.
Life insurance is also a good example because there are clear winners and losers: you may have to pay more in premiums based on certain factors, or be denied insurance altogether. Other areas, such as hiring, share that dynamic, Gesser pointed out, which is why regulations around AI in employment are being implemented or considered, including in New York City and New Jersey.
In general, “you’re likely to see AI regulation in heavily-regulated industries, where there’s a consumer protection element to it,” he said — so other types of insurance, financial services and healthcare will likely be affected.
Governance will likely be eventually required across all lines of insurance
Colorado began the stakeholder meeting process with a focus on life insurance — and the draft regulation (3 CCR 702-4) applies to “all life insurers authorized to do business in the state of Colorado.”
However, Gesser noted that in a February 7th stakeholder meeting about the draft rules, the Division of Insurance (DOI) signaled that these governance requirements are likely to be consistent eventually across all lines of insurance and practice.
Public comment on the draft rules is due by February 28, and after that comment period, the DOI will begin the formal rulemaking process.
“This is a big deal,” said Gesser. “And I think this is going to be a big deal whether the final rules look exactly like this, or whether they’re slightly different.”