The insurance industry, perhaps reflecting its characteristic conservatism, has been slower than many other industries to adopt AI platforms and solutions in place of established practices. As AI increasingly becomes the norm elsewhere, the industry is ripe for change. Industry leaders are focusing on uses in marketing (making it faster and easier to buy insurance), policy design (with premiums more closely tied to risks such as driver behavior), underwriting analytics (using nontraditional data sets), claims (with more rapid settlements based on historical data), and fraud detection (by identifying patterns that indicate fraud).
Needless to say, implementing AI within the insurance industry presents challenges. As a practical matter, insurance companies use a wide array of programs and methodologies to conduct their operations, so insurance data is inconsistently tracked and was generally collected for purposes other than the new AI uses. Much of the data is collected by error-prone methods, and some comes from people with a financial incentive to slant the results toward insurance reimbursement. This spotty, inconsistent data quality limits the ability of AI to generate accurate results, and the volume of personal data involved raises data privacy and cybersecurity concerns.
Not surprisingly, state insurance regulators, while generally receptive to the benefits that can be derived from increased use of AI by insurance companies, are sensitive to the potential risks and negative effects. For example, regulators have concerns about how policyholder information will be safeguarded. In addition, the use of AI for insurance underwriting raises questions about whether the algorithms may generate results that are unfairly discriminatory. The results might be unfairly discriminatory because the AI was trained on data that reflects unfairly discriminatory decisions, because the AI might make decisions that in fact reduce risk but do so in a way that adversely affects a protected class, or because the AI's algorithms are flawed. The reasons for the AI's decision may be so complex, or involve so many rating factors, as not to be explainable. Other regulatory concerns include:
- How do vicarious liability principles work when an AI platform breaks the law?
- Can an AI act "intentionally," "knowingly," or "recklessly"?
- When is failure to supervise a machine negligent, and how would a "reasonable person" monitor an AI?
- What is bias or discrimination for an AI? (A mathematical algorithm implemented on a computer wouldn't have "intent.")
- How do disparate impact principles apply in the AI context?
- How do you monitor a computer for bias or discrimination? (One illustrative approach is sketched after this list.)
- What records does a financial institution need to retain with respect to an AI system's activities?
- What is an acceptable form for those records?
- How must regulators be able to access them?
- How should regulators supervise AI? To what extent do they need to understand the inner workings of a system?
- What replaces asking a human decision-maker for his or her reasons for a decision? Does the output of AI need to be "explainable" and, if so, to whom?
- How might regulators use AI in RegTech applications?
- Should the consent of the insured or potential insured be required for automated profiling?
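To make the bias-monitoring and disparate impact questions above more concrete, the following is a minimal, purely illustrative Python sketch of one way an insurer or regulator might screen automated underwriting decisions, using the "four-fifths rule" that the EEOC and courts apply in the employment context. The data, group labels, and the 0.8 threshold are assumptions for illustration only; insurance regulators have not adopted this test.

```python
# Illustrative sketch: screening automated decisions for disparate impact
# using the "four-fifths rule" borrowed from U.S. employment law (EEOC).
# The data, group labels, and 0.8 threshold are illustrative assumptions,
# not an insurance-regulatory standard.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's approval rate to the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is commonly treated
    as preliminary evidence of disparate impact.
    """
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical underwriting outcomes: (group, was the applicant approved?)
decisions = [("A", True)] * 90 + [("A", False)] * 10 \
          + [("B", True)] * 60 + [("B", False)] * 40

rates = approval_rates(decisions)
for group, ratio in disparate_impact_ratios(rates).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} [{flag}]")
```

In this hypothetical, group B's ratio (0.60/0.90 ≈ 0.67) falls below 0.8 and would be flagged for further review. Such a metric could be tracked over time as part of routine model monitoring, but the choice of comparison groups and thresholds would ultimately be a legal and regulatory question, not a purely technical one.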
To address these concerns, state insurance regulators will likely seek enhanced oversight of insurers' use of AI. Indeed, the National Association of Insurance Commissioners (NAIC) has established an Innovation and Technology Task Force to monitor the impact of new technological developments, including the increased use of AI, on the insurance industry. As AI use grows, the Task Force is expected to consider methods by which state insurance regulators can regulate that use, potentially including the drafting of model laws or regulations. Although such regulatory developments have yet to materialize, insurance companies would do well to remain attuned to any such initiatives.
We in the Regulatory Corner would be remiss if we did not note that adoption of AI by insurers also involves numerous non-regulatory issues. Those include, for example, intellectual property rights in AI systems, rights in data, contracts with technology providers, technology due diligence in acquisitions and investments, and laws governing privacy and data security.