EXL Analytics and AI Insurance Leaders Exchange – Executive Summary
The time has come to rethink “reinsurance”
Eleven executives in the insurance industry met virtually to share leading practices and discuss topics of mutual interest based on an agenda created through advance interviews. The discussion centered on data transformation.
Core Data Management Layer
Key Takeaways
“As we evolve our business, now it’s time to think about how we use that foundation. We’re on an Azure stack, so how do we manage that amazing foundational capability and start to think about how to meet the needs of our business and functional teams in a more real time way to give them the kind of dexterity they need to move the data closer to the end user.”
- With many companies in some stage of technology transformation, digital leaders are consolidating myriad legacy data platforms and working to integrate data sets and improve their interoperability. This is a particular challenge for insurance companies, which may run entirely different data systems for back-office functions such as HR and marketing, for customer-facing operations, and for legacy and modern policy administration and claims. Many leaders are still working on this core layer and considering how to rebuild the data architecture in a more strategic, planned way as they transform their systems.
- The members discussed the platforms, vendors and tools they are using to make these large-scale data transformations, including Azure, AWS, Google, Databricks, Snowflake and others. In most cases, companies are moving much of their data into the cloud; however, many are keeping their proprietary data and “secret sauce” information on premises. Data is typically stored in data lakes, but one member suggested using smaller “data ponds” instead, since trying to get all data into one lake can produce a “swamp” rather than a clean repository of usable data.
- The group also discussed the evolution of newer data technologies such as data mesh and data fabric, as well as the concepts of “data for AI” and “AI for data.” The aim was to gauge how far ahead or behind members are in getting the data fabric ready to access data and enable the data movement required by data lakes and other repositories. Many AI efforts falter because foundational data problems remain unresolved.
Structured vs. Unstructured Data
Key Takeaways
“Coming from the broker side, there’s a whole world of unstructured data. How do you take those unstructured data, and then vectorize them in a consistent way, and vectorize them in a way that helps us drive value for our business colleagues?”
- When approaching data, the focus is often on what to do with all the structured data companies hold in order to increase its interoperability and usefulness. However, many insurance companies, like companies in general, also have a great deal of unstructured data that must be put into data sets before it can be used. Generative AI is one possible way to convert that unstructured data into more usable formats, because it can search and process data far more quickly than humans can.
- Many companies are putting a “medallion architecture” in place, which organizes data into three progressively refined layers: bronze, silver and gold. There was some discussion of how unstructured data can be brought into a medallion architecture; one member suggested that perhaps a “platinum” layer could be added (a simplified sketch follows below).
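As background on the medallion pattern referenced above, the sketch below shows one way unstructured documents (e.g., adjuster notes) could be landed and refined through bronze, silver and gold layers on a Spark-based platform such as Databricks, with an embedding-enriched layer playing the “platinum” role one member floated. This is a minimal sketch under assumed conditions: the file paths, column names and pipeline steps are hypothetical illustrations, not a description of any member’s actual environment.

```python
# Minimal sketch, assuming a Spark-based lakehouse; all paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw, unstructured documents as-is, preserving source fidelity.
bronze = (
    spark.read.text("/landing/adjuster_notes/*.txt")   # hypothetical landing path
    .withColumn("source_file", F.input_file_name())
    .withColumn("ingested_at", F.current_timestamp())
)
bronze.write.mode("overwrite").parquet("/lake/bronze/adjuster_notes")

# Silver: cleanse and standardize so downstream consumers see consistent text.
silver = (
    spark.read.parquet("/lake/bronze/adjuster_notes")
    .withColumnRenamed("value", "raw_text")
    .withColumn("text", F.trim(F.regexp_replace("raw_text", r"\s+", " ")))
    .filter(F.length("text") > 0)
    .dropDuplicates(["text"])
)
silver.write.mode("overwrite").parquet("/lake/silver/adjuster_notes")

# Gold (or a "platinum" layer): business- and AI-ready views; here just the curated
# text plus lineage columns. Vector embeddings could be added at this stage via a
# UDF or an external embedding service (not shown).
gold = silver.select("source_file", "ingested_at", "text")
gold.write.mode("overwrite").parquet("/lake/gold/adjuster_notes_curated")
```

The value of the layered approach is that each layer has a clear contract, which is what keeps the lake from turning into the “swamp” described earlier.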
Build vs. Buy
Key Takeaways
“What is the business value? For the organization, is it something we should build? I always tell folks here, we’re not Microsoft. We’re not Google. We are an insurance company, right? And so, we’re not spending $80 million over six years to do something cool; that’s not what we do. We’re an insurance company. We’re here to evaluate risk. To create more resilient futures for our clients.”
- A strong topic of interest among the group was whether to buy an LLM, AI/gen AI tools and advanced data management tools from a large vendor and customize them to the company’s needs, versus building a custom system in-house. There was some agreement that a company must first determine which capabilities it wants to have in-house for whatever reason (e.g., competitive advantage, market differentiation) and then decide how to get that done most effectively (which could still be a vendor solution).
- Several members pointed out that they are using hybrid cloud and on-prem environments for their data, an approach that can grow complicated quickly since interoperability is needed between the different clouds. One positive is that the large hyperscalers are increasingly building every capability, so an all-in-one solution is becoming more realistic. Prices are also changing dramatically, as features that were expensive a few years ago are now included as commodities. The lower prices and increased capabilities make buy options increasingly attractive, so long as they fit the overall strategy.
- While senior leaders often recognize the value of data and want their companies to be data-driven, that desire is not always accompanied by the funding or buy-in to make the necessary upgrades to core data layers, whether that involves buying or building. One participant has had success funding data work tied to other business initiatives, but not securing funds to fix legacy data debt. The result is a team trying to deliver data work for specific business use cases while also fixing the underlying data layers, which is very difficult to do simultaneously. Another leader echoed this challenge, noting that the only way to get people to care is to tie the foundational data architecture to business problems in a seamless way. There was also discussion of business units prioritizing fast delivery, which leads to siloed or duplicative data solutions without full strategic alignment.
AI Adoption and Skill Sets
Key Takeaways
“There's still more work to be done before you talk about vectoring of data or prompt engineering or AI feature embedding, things like that. Still a lot more work to be done.”
- The members briefly discussed how they are moving toward adoption of AI. The value of generative AI, especially for querying and summarizing large amounts of data, is obvious but “not quite there yet” in the view of several members. A main issue is interoperability between different technologies and stacks: with data spread across multiple clouds such as Salesforce, Guidewire and Azure/AWS environments, AI tools are far less effective if that data is not structured and standardized. Some organizations are architecting their data foundations so that LLMs can be used on the back end, but even this has limitations if the data is not in a good state to begin with.
- Many companies are allowing, and even encouraging, their employees to become familiar with generative AI. Companies are looking for business use cases for the new technology and are looking ahead to the employee skill sets that will be needed, including comfort with querying AI, as well as identifying and developing prompt engineering and agentic AI skills. The general emphasis is on the insurance industry investing in talent capable of managing data quality, vectorization and AI readiness (a simplified illustration of the vectorization step follows below).
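To make the “vectoring of data” and embedding skills mentioned above concrete, the sketch below shows one common way unstructured insurance text can be vectorized in a consistent way and then searched. The library, model name and sample documents are assumptions chosen for illustration; they do not reflect any member’s actual tooling.

```python
# Minimal sketch, assuming the open-source sentence-transformers library;
# the model choice and example documents are hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Adjuster note: water damage to basement, sump pump failure suspected.",
    "Broker email: client requests higher liability limits on umbrella policy.",
    "Loss run excerpt: three auto claims in the last policy period, all closed.",
]

# Vectorize every document with the same model so the embeddings are comparable.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

# A natural-language question becomes a vector in the same space; cosine similarity
# (a dot product, since the vectors are normalized) finds the most relevant document.
query_vector = model.encode(["Which notes mention water damage?"], normalize_embeddings=True)[0]
scores = doc_vectors @ query_vector
print(documents[int(np.argmax(scores))])
```

Consistent vectorization of this kind is what the broker-side quote above points at: the same preparation that makes data AI-ready also makes previously unsearchable text usable for business colleagues.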