In a groundbreaking exploration of AI in insurance, the 2024 Ethical AI in Insurance Consortium Survey has uncovered a critical juncture for insurers: while 80% of companies are either already using AI for business decisions or plan to do so within the year, they face substantial challenges around cost, data quality, and bias. The findings underscore an industry on the brink of a technological evolution: only 14% of companies currently leverage AI in operational decisions, yet that figure is set to climb sharply, signaling a significant future impact on the insurance sector, particularly in claims and underwriting.
“The implementation of AI in our organization has transformed the way we approach claims,” says Douglas Benalan, CIO of CURE Insurance. “We’re already observing substantial gains in operational efficiency and accuracy. However, the journey is not without its ethical challenges, making the need for industry-wide collaboration and proper frameworks paramount.”
Some key findings of the EAIC survey include:
– The leading departments in current AI adoption: IT (69%), sales (57%), and marketing (51%).
– Significant reported improvements in operational efficiency (57%), accuracy (37%), and revenue (37%) attributed to AI use.
– A call for more robust processes: 69% of respondents are dissatisfied with current approaches to reporting and addressing AI model biases and inaccuracies.
“AI in insurance relies on high-quality, comprehensive inputs into the models,” comments Abby Hosseini, Chief Digital Officer at Exavalu. “While the benefits of leveraging vast amounts of data to enhance decisions are undeniable, the ramifications of using poor-quality data and the prevalence of biased and selective inputs into insurance AI models cannot be overlooked. It’s encouraging to see insurers calling for comprehensive employee education and ongoing AI model assessments to mitigate these risks.”
Additional highlights include:
– A sense of urgency around educating employees on AI biases, with 57% of respondents stressing its importance.
– A pressing need for regulatory guidance, with only 23% of respondents satisfied with the support provided by regulatory bodies.
– A strong consensus on the necessity of training employees in AI legislation and ethics, with most companies advocating regular AI audits.
“As insurers navigate the complex landscape of responsible AI implementation, it’s clear that regulatory guidance is paramount,” says Paige Waters, Partner at Locke Lord. “Some states are starting to lead the charge by issuing AI regulations and guidance, but insurance regulators are also relying on existing laws to regulate AI practices while attempting to balance innovation with responsible AI use.”
The results are now available for insurers and insurtechs aiming to navigate the complexities of AI integration. Industry leaders can take advantage of these insights by visiting the Consortium’s website, where they can download the full survey results and access the EAIC’s new Ethical AI Code of Ethics and a strategic diagram for using AI without bias.
“In a rapidly evolving landscape, it’s imperative we steer AI adoption towards enhancing rather than compromising the integrity of the insurance industry,” states Robert Clark, Founder and CEO of Cloverleaf Analytics. “This survey is a clarion call to all stakeholders to commit to ethical AI use, ensuring positive outcomes for both the industry and the insured.”
About the Ethical AI in Insurance Consortium:
Established with the mission of resolving existing issues related to bias and AI within the insurance sector, the EAIC serves as a driving force for positive change in the industry. The consortium addresses current challenges faced by insurers, insurtechs, and all stakeholders involved, while also proactively identifying and providing solutions for future concerns that may impact the integrity of the insurance business. For more information, visit http://www.ethicalinsuranceai.org.