
How AI and automation are reshaping risk with Bryan O’Neal from Coherent and Alice Boreman from Ernst & Young

In episode 22 of the Unstructured Unlocked podcast, you’ll hear from Tom Wilde, CEO of Indico Data, Bryan O’Neal, Head of Sales Engineering at Coherent, and Alice Boreman, Actuarial Transformation Partner at Ernst & Young, as they dive into how AI, automation, and better data are changing the game in underwriting and pricing for commercial P&C and specialty insurance. Want to boost your efficiency and accuracy? This episode is packed with actionable strategies to streamline your workflows, improve risk assessment, and optimize pricing decisions—helping you stay competitive and increase profitability in today’s AI-driven market. Don’t miss out on the insights that could take your business to the next level. 

Listen to the full podcast here: How AI and automation are reshaping risk with Bryan O’Neal from Coherent and Alice Boreman from Ernst & Young

 

Jeremy Stinson: Good morning. Good afternoon folks, and thank you for joining us for today’s webinar. Still have quite a few folks logging in, so we’ll give them a moment to get settled and then we will go ahead and get started. Thanks once again folks. Thanks for joining. Still have quite a few people logging in, so we’re going to wait another minute or two and then we will go ahead and get started. Thanks.

Okay, we're going to go ahead and get started. So hello everyone, and welcome to today's webinar, How AI and Automation Are Reshaping Risk and Pricing. My name is Jeremy Stinson, head of marketing at Indico Data, and I will be your host for today's session. We brought together a fantastic panel to discuss how technology is transforming underwriting and pricing in commercial insurance, and we're glad you could join us. So to start, we'll go through some housekeeping real quick. Today's session is being recorded, and we'll send out the link within 24 hours. All attendees will remain muted during today's session. We'd love your questions, so feel free to drop them in the Q&A panel at any time, and we'll get to them at the end of today's webinar. Before we dive in, we want to quickly review today's agenda. We'll start with brief introductions, then spend some time setting the stage as to why now is the time for carriers to focus on transforming risk selection and pricing. We'll then get into our roundtable discussion, where we'll dive into the five key challenges that carriers face when it comes to risk selection and pricing in today's markets. And then we'll finish up with some Q&A. To kick us off, Alice, why don't you go ahead and introduce yourself?

Alice Boreman: Hi everyone. My name is Alice Boreman, and I am a partner in EY's actuarial practice here in London. I look after our transformation offering, and over the last few years that has really been focused on pricing and underwriting transformation, particularly as we see a lot of the specialty carriers replatforming their rating solutions.

Jeremy Stinson: Awesome, thanks Alice. Bryan, over to you.

Bryan O'Neal: Great. Hi everybody. I'm Bryan O'Neal. I run sales engineering for a tech company called Coherent, and what our company does is let you take Excel workbooks of any kind and convert them into APIs that you can plug into the rest of your tech stack and govern. We are massively popular in the insurance space, with companies using us for rating and any number of other use cases. One of the benefits of the job is getting to meet people like Tom and Alice, so I'm glad to be here. Thanks for having me join.

Jeremy Stinson: Awesome, thanks Bryan. And Tom, why don't you finish us up?

Tom Wilde: Great. Tom Wilde, CEO of Indico Data. Indico is an agentic automation solution for the insurance enterprise. We focus on automating workflows such as submission intake and claims intake, driving speed to quote and speed to claims adjudication.

Jeremy Stinson: Awesome, thanks Tom. Alright, well with that being said, let's dive in. So as I said in the introduction, before we go into each of the five core challenges that carriers are facing today when it comes to risk selection and pricing, we want to spend a little bit of time setting the stage as to why now is the time for carriers to focus on this topic. For us, the answer to the "why now" question lies at the intersection of technology and industry; there are a few different things happening on each front that are driving the urgency for carriers to act now. So what I'd like to do is toss this over to our speakers to get their perspective on the why-now question. Maybe, Alice, we'll start with you, then turn it over to Bryan, and finally Tom.

Alice Boreman: Yeah, I mean, I think it's a really exciting moment, because I think everyone has probably agreed for a while on the need for change and more robustness around pricing and technology, to make sure we're making decisions with all the data we have available to us. Ultimately, insurance is a data business, but we have managed with paper and giant spreadsheets for a long time. Now regulators are putting on more pressure, and people like Lloyd's in the London market are putting more emphasis on making sure we have the right pricing in place. And we look to be heading into a softening market. I know people have different views on the underwriting cycle, but I think rates are definitely softening in a number of areas, and regulation continues to stack up. We need to do something. But actually, the exciting thing about now is that there are loads of opportunities and technology available for us to pick up, and it doesn't have to be a big, scary technology project. There are lots of tools out there now that allow actuaries, underwriters, and other areas of the business to pick things up and build tools that work for them, without having to write 60-page requirements documents.

So I think bringing together those two things is really exciting.

Bryan O'Neal: I agree. I talk to actuaries all week long, and they always want more and more data out of their business processes, whether it's rating or anything else. The tooling available to actuaries these days is incredible, and so they want to niche down into ever more esoteric granularity of data, I suppose is one way of thinking about it. And they expect all of the systems upstream of them to be able to provide it. Traditionally in tech that's been a very difficult thing, and insurance is uniquely difficult for reasons we can get into, I suppose. But I do see smart companies out in the market being a lot faster now in their ability to ship, especially raters, but other pieces of business logic as well. You see a huge trend toward putting the logic back in the hands of the business to express, and then architectures that allow that data to flow downstream cleanly. As AI approaches and you're able to expose more and more business logic directly to AIs, that's going to become even more important. So the old days of taking months and months to build integrations and to ship rate changes have to come to an end. Let me say it this way: you want your engineers to be spending their time on high-value tasks, not recoding raters and simple business logic. So that's what I see out there right now.

Tom Wilde: Yeah, just to build on that, I think there are two almost simultaneous inflection points here. One is a huge opportunity, and one is sort of an imperative to act. On the opportunity side, we describe this as the dawn of the decision era, where all of this investment in cloud, data, and now AI creates the answer to the "so what" question: why did we spend all this time and money aggregating and trying to organize all of this data? It turns out that the companies who did a good job with that are very well positioned to take advantage of this decision era by applying things like generative AI and agentic AI to drive more automation and higher-fidelity, more effective decisions. On the imperative side, we're at the tail end of a seven-year hard-market run. As the market begins to soften (traditionally we cycle through these periods of hard market and soft market), the ability to react, pay attention to the submission volume, pick the right risks, and price the right risks is going to be the difference between companies who continue to generate growth and profitability in a soft market and those that don't. The hard market hasn't required companies to be as good at these things. So I think those two things are happening simultaneously and really create the need to act right now.

Jeremy Stinson: Awesome. Thanks Tom. Alright, so as we shift into the heart of today's discussion, I wanted to quickly highlight the five core challenges we'll be addressing today. These are real-world obstacles that are keeping carriers from executing on risk selection and pricing strategies with the speed and precision today's market demands. The challenges we'll talk through are data quality, technology gaps, the evolving risk landscape, the regulatory changes and challenges carriers are dealing with, and finally the talent and analytical capabilities that are needed in today's market. So with that, let's dive into our first topic, which is data quality and integrations. Underwriters are often forced to rely on incomplete, inconsistent, or legacy data sources, and according to a recent study, 97% of carriers end up using data that fails to meet their own internal quality standards. First question for you, Alice: what types of data are underwriters missing that would improve risk assessment and pricing accuracy, in your opinion?

Alice Boreman: I think there are two things I think about here. One, there's so much data and there are so many different models available to us; I think about some of the cat models out there. Often, at the moment, what we're having to do is go to multiple systems to get it, so the data is available to us, but it's not easy for us to access, and it's really slowing down time to quote. The other thing for me, and the thing I spend quite a lot of time talking to my clients about when they're embarking on pricing transformation in particular, is that a lot of our raters are still parameterized using underwriter judgment. Now, there's huge value in that judgment, but I think there's also loads of value we're missing in the data that sits in our organizations that we're not able to use.

And whilst specialty large commercial risks are harder to build big statistical models on, there are ways and means of doing it, and I think we need to be making our pricing models more sophisticated. We need to be able to use the IP and competitive advantage we have within our organizations and get that into the hands of the underwriters. So the big push I have with my clients is: let's not just get everything onto a new system, have all of this data nicely structured and ordered, and then not do anything with it. Let's use it to update our models and make pricing transformation a continuous thing, where we are constantly improving and leveraging the latest information to update our tools.

Bryan O'Neal: Right. Yeah, I feel like so much of the problem occurs upstream of people like Alice. I come from the systems world, from a core-systems background, and at most insurance companies the way to look at it is that you've got six or eight big pieces of your tech stack. You've got your policy admin system and your claims system; you might have a CRM, a broker portal of some sort, your GL, and an ingestion tool like Indico. All of these systems need to talk to each other to some degree, but those integrations are brutal. They are really hard. Anybody who's done systems implementations for a living will tell you that the hardest part of any project is in fact the integrations; it's typically not the configuration of the logic itself. So you end up either with your data in silos, where every piece of the tech stack has its own operational database, or with lots of jumps between applications, where data is lost at every jump. It's like lossy compression, if you will. And I tell you, one of the things I lie awake at night thinking about these days is how AI changes that equation and what kinds of architectures we might see on the horizon.

I suspect that before long it will be trivially easy to shred data down from some upstream source to some downstream source. It's not easy today; data migrations are notoriously hard, but I've seen some companies out there that are working on this and showing a lot of promise. And then another architecture that I think might just cut the Gordian knot of insurance core systems is LLMs orchestrating workflows directly: calling distinct APIs up and down the tech stack, in different parts of your application, to accomplish a task, getting only the data they need and only passing it on, with intelligence at the center as the hub of all the spokes, if you will. If you follow other big companies out there, Salesforce for example has made a lot of headway in doing exactly this, standing up MCP servers in front of different services within their stack, and there are others moving down that path as well. Insurance, of course, is going to be very difficult; from a regulatory perspective it's one of the most demanding industries, and the functionality goes deep. Tom, as you know, it's easy to say these things but very hard to implement them in practice, right?
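A minimal sketch of the hub-and-spoke idea Bryan describes: an orchestrator at the center calls narrow service APIs and passes along only the fields each downstream step needs, rather than bulk-copying whole operational databases between systems. Everything here is hypothetical; the tool names, fields, and rate are illustrative stand-ins, not Coherent's or anyone's real API:

```python
# Hypothetical hub-and-spoke orchestration: a hub resolves only the
# fields a task needs, one narrow "spoke" service at a time.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    returns: list[str]            # fields this service can provide
    call: Callable[[dict], dict]  # takes current context, returns new fields

# Stub spokes standing in for ingestion, rating, etc. (made-up data).
TOOLS = [
    Tool("ingestion", ["insured_name", "sov"],
         lambda ctx: {"insured_name": "Acme Co",
                      "sov": [{"loc": 1, "tiv": 2_500_000}]}),
    Tool("rating", ["premium"],
         lambda ctx: {"premium": round(sum(l["tiv"] for l in ctx["sov"]) * 0.004, 2)}),
]

def orchestrate(needed: list[str]) -> dict:
    """Call only the spokes required, carrying a minimal shared context."""
    ctx: dict = {}
    for field in needed:
        tool = next(t for t in TOOLS if field in t.returns)
        ctx.update(tool.call(ctx))
    return {k: ctx[k] for k in needed}

print(orchestrate(["sov", "premium"]))
```

The point of the sketch is the shape, not the logic: each spoke exposes a narrow contract, and the hub (in Bryan's framing, an LLM agent) decides which spokes to call and in what order.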

Tom Wilde: Yeah. I think we almost think of it with a supply chain metaphor, right? Where if you think about the real world supply chain, if you manufacture a physical good, you could choose to completely vertically integrate and do everything yourself. That’s kind of rare. More likely you have to work with a number of suppliers and a number of steps in the process. Maybe the raw materials are assembled somewhere and the finished good is assembled somewhere else. I think data has a lot of similarities and using this supply chain metaphor is really effective in that you’ve got to think about the raw materials. In the case of underwriting, you’re talking about first, second and third party data. First party data being what do you already know, what’s in your CRM systems, et cetera. Second party data being what is your customer giving you? And third party data being what does the market know about this risk that you can assemble?

And I think if you look at it that way, you begin to focus upstream. You kind of touched on this, Bryan: it starts with the ingestion moment. Do that in a rigorous, standardized way, so that from the moment of ingestion, whether it's a first notice of loss or a broker submission, you can drive the schema; that is the data you'll use to make your decision. So work backwards: how do we make great decisions? We make great decisions when we have the following data. Where can we get that data? First party, second party, third party. Okay, now what do we need to do to connect to and assemble that data? And you pointed this out, Bryan: I don't think there's a world where there's a magical centralized data brain. I think there's a world where agentic AI is able to assemble that data for us, kind of on the fly.

Although, like you said, there's a lot of work to do around understanding access to those data repositories, and it goes beyond APIs. With agents, you now have to think about what span of control these agents have and how much data they can pull, because if that's not carefully instructed, they'll pull everything out of the data store. So I think that's the real next wave of opportunity to get at this data quality problem. In some ways, start at the end decision and work backwards to know what you need; on the other side, start at the outer rim where you first encounter that data and make sure it's standardized and high fidelity. If you do those things together, a tremendous opportunity will be unlocked for more accurate underwriting, faster underwriting, and to some degree automated underwriting. Although I would say we're a long way from fully automated underwriting, I think pieces of the underwriting process are ripe for automation.
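Tom's "start at the end decision and work backwards" approach can be sketched as data: declare the schema the underwriting decision needs, tag each field by which party supplies it, and check every submission against it at ingestion. The field names below are hypothetical examples, not a real Indico schema:

```python
# Hypothetical decision-first schema: the decision defines the fields,
# and each field is tagged with the party expected to supply it.
REQUIRED = {
    "prior_loss_ratio": "first_party",   # from our own systems
    "sov_total_tiv":    "second_party",  # from the broker submission
    "flood_zone":       "third_party",   # from an external data vendor
}

def missing_fields(submission: dict) -> dict:
    """Return missing fields grouped by the party that should supply them."""
    gaps: dict = {}
    for field, source in REQUIRED.items():
        if submission.get(field) is None:
            gaps.setdefault(source, []).append(field)
    return gaps

sub = {"sov_total_tiv": 12_000_000}
print(missing_fields(sub))
# -> {'first_party': ['prior_loss_ratio'], 'third_party': ['flood_zone']}
```

At ingestion time, the gap report tells you exactly which internal system, broker follow-up, or external vendor to go to, which is the "know what data you need, and where to get it" discipline the panel describes.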

Jeremy Stinson: Yeah, that's a really good transition into the next challenge carriers are facing, which is all around manual processes and technology gaps. As you can see on the slide here, underwriters are still spending upwards of 40% of their time on manual tasks such as data entry, document review, and repetitive workflows. Not only does that slow them down, it keeps them from prioritizing and focusing on the right things: risk selection, risk pricing, and so on. So, first question for you, Tom: how do you see the broader role of generative and agentic AI reshaping risk selection and pricing decisions?

Tom Wilde: Building on my last point, think of it as lowercase-a automation versus capital-A Automation. And Bryan, you touched on workflows. If you can do a good job laying out the steps in the process that lead to success, you can then isolate those steps and make decisions around where each is really ripe for automation, i.e., where we're not going to add value by doing it manually; we're going to add value by automating it and driving speed, data fidelity, or data completeness. Then there are steps in the process where doing it manually does create competitive advantage, where particular insights or skills in the organization mean it still makes sense to do that manually. So thinking about lowercase-a automation of pieces of the process, rather than trying to boil the ocean and automate the whole thing, is the pattern for success I've seen.

Jeremy Stinson: Now what about you, Alice?

Alice Boreman: Yeah, I mean, I think I still hear endless stories of rekeying and the pain of doing that, right? So I think that's huge; the integrations both Tom and Bryan have talked about are absolutely critical to that. If you've already put information about a risk into the system, you shouldn't then have to be putting it into your pricing model, and then again into your exposure management system, and then into your PAS. You should just put each piece of information in once. That exists and is eminently possible, and it's a really simple thing, but I think it's one of the biggest wins. The second part is that it means you can use more and more different models, because you have the time, and it takes less time anyway to bring those all together, with the underwriters really doing the thinking and the stuff they're really good at rather than administration.

So I think that's huge, even though it doesn't feel hugely exciting. Often, when people want to talk about technology and AI, they want to talk about all these advanced modeling techniques, and actually I just don't think that's where the big wins are initially, particularly in the markets I work in where data is much more sparse. You're talking low-frequency, high-severity risks; you're never going to get some of the machine learning modeling that you might have in certain personal lines sectors, but the administration piece can be massively automated. The other thing that is interesting, and comes with that, is how you triage where underwriters are spending their time. If you can very quickly, with a limited set of information, work out whether a risk is something you'd be interested in or not, and benchmark it against other similar risks before you go and gather the rest of the data, you can quickly decide where to spend time. I've seen some insurers, mainly reinsurers actually, building models that tell underwriters, "We think you should spend X number of hours on this risk." I think that's starting to get quite smart in terms of how you take the output of the machine and use it well.

Bryan O'Neal: Yeah, let me extend the scenario Alice is presenting a little bit, the one where you have underwriters pricing things in Excel off the side of their desk. This is classic London market, right? Let me give you some low-hanging fruit for how to solve this problem. Alright, Coherent and Indico together; here's the pitch. Take those pricing spreadsheets that the underwriters are punching all of their data into right now; you've probably got dozens of lines of business being written that way, or maybe not dozens, but a fair number. The kinds of problems you're seeing are costly and inefficient rekeying, data loss at the rekeying step, and governance questions: how do you even know the underwriter is using the right version of the workbook, and that the data hasn't been tampered with between the time it was saved on a shared drive and the time the auditor got to see it? And how do you more broadly drive some efficiency out of the thing?

How do you stream data from the underwriter's work downstream to the analytics? Well, before Coherent came along, you were really just stuck, because it was prohibitively expensive to take a complicated rater and recode it in a programming language like Java, Python, or .NET. With Coherent, you can take that spreadsheet, upload it to us, and immediately have a working API that you can plug into a workbench, an admin system, or some other process that needs to consume that logic. And that solves all of the pain points except for one, which is: how do you cleanly ingest all of the unstructured data in the first place? Let's say you've got an email that a broker has sent you with a schedule of values that is hundreds of locations long. How do you get that into your system, whether it's a workbench or whatever? Increasingly, our clients come to us and say, "We love the idea of uploading our property rater to Coherent and integrating with it, but how do we get the schedule of values in?"

"What can you tell us?" This is where shops like Indico are really starting to shine, because, like Tom was saying a minute ago, at the very first step of the food chain you need to ingest all the data, sanitize it correctly, and not lose any of it in the very first innings of the game. So I just think this is an obvious win for so many insurance companies out there. I'm kind of heavy on the rating use case today, but there really are a number of other use cases where you can do this exact same process. I think this is just an incredible hack for your business, and it will free up your engineering talent for the kind of higher-value-add tasks they need to be doing.
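The spreadsheet-to-API pattern Bryan is pitching can be caricatured in a few lines: the rater's logic moves from workbook cells into a typed, versioned function that any system can call and audit. The rate table, field names, and version string below are made up for illustration; this is not the actual Coherent product or its API:

```python
# Hypothetical sketch: spreadsheet rater logic behind an API-style
# function, with typed inputs and a version stamp for governance.
from dataclasses import dataclass

@dataclass
class RatingInput:
    tiv: float         # total insured value
    construction: str  # e.g. "frame" or "masonry"

RATE_TABLE = {"frame": 0.006, "masonry": 0.004}  # illustrative rates only

def rate(inp: RatingInput) -> dict:
    """What was once a cell formula becomes a governed, callable function."""
    rate_per_dollar = RATE_TABLE[inp.construction]
    return {"premium": round(inp.tiv * rate_per_dollar, 2),
            "rater_version": "2025.1"}

print(rate(RatingInput(tiv=1_000_000, construction="masonry")))
# -> {'premium': 4000.0, 'rater_version': '2025.1'}
```

Because the logic lives behind one interface, the governance questions Bryan raises (which version is in use, whether outputs were tampered with) become properties of the service rather than of a file on a shared drive.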

Tom Wilde: Two great points, Bryan and Alice; I want to follow up on a couple of them. There are sort of two things at work here. They're both compatible but also a little bit in tension, which is this idea of individual efficiency versus institutional efficiency. The institutional side, and Alice, I think you touched on this, is an insurer saying, "We want to spend this much time on risk analysis"; that's the institution saying it needs a certain profile to drive some bottom-line metric, whether combined ratio or GWP. Then you have the individual underwriters who have been doing this a certain way for decades, and doing it differently means some investment of time to retool and accept a new process in order to get individual efficiency. So it's an interesting paradox that plays out, where change management becomes an important part of expectation setting to gain the benefits of automation. Even though no underwriter would say they like keying in data, it's comfortable; it's something they've done for decades, and they've been able to write policies this way. That's especially true in commercial lines. In personal lines, the advent of mobile and telematics has really transformed things in a way that hasn't happened in commercial lines.

Jeremy Stinson: Really good point. Building on the challenges underwriters face with so much of their time spent on manual workflows and processes, to complicate things further we'll next talk about the evolving risk landscape. Think climate volatility, cyber threats, social inflation; each of these adds a layer of uncertainty and complexity to risk selection and pricing models. And you can see on the slide here that 70% of insurers say emerging risks are outpacing their ability to assess and respond effectively. First question for you, Alice: how are emerging risks like climate change and social inflation reshaping how folks should approach pricing?

Alice Boreman: I think that evolving risk landscape, and frankly just the volatility of the world today (I don't think anyone can say we're living in a particularly predictable time), has two impacts. Clearly, those things change the risk profile of the products we're already selling. But in a softening market, we will also look more and more to new products and product innovation to stay relevant, maintain our premium volumes, and search for more elusive growth. That means we need to be able to update our models quickly, source information from a whole range of sources, and get really granular in what we're thinking about. If you think about some of the risks you mentioned, like climate change: whatever you think about climate-change-adjusted models or not, we are definitely seeing a higher frequency of events in areas that have typically been less well modeled, and we've actually seen a huge amount of work in the cat modeling space.

Somewhat by accident, it hasn't been an area we've tried to play in, but there's a huge amount of change there, partly because some of the big models are pushing people onto the cloud, and that has all sorts of impacts, but also because people are realizing there are perils and regions they just don't have well modeled and that they need to find better ways to cover. So you see people going down an Oasis-framework-style approach of bringing together lots of different cat models, including quite niche models that are specialized in a specific peril in specific regions around the globe, and you need a way of bringing all of that together. Likewise, for some of the more man-made risks like cyber threats, there are emerging players out there and a huge amount of data, but cyber in particular is a risk we're still figuring out.

We don't have a lot of insurance data yet; there haven't been huge numbers of events and claims, particularly for the more catastrophic scenarios. So really being able to get access to all of the publicly available data about some of these organizations, and bring it together effectively, is vital. Likewise, we've all been grappling with social inflation for a long time, and I think we're still seeing that come through in reserves in terms of prior-year deterioration. There's some really interesting analysis out there about the correlation between social inflation and some of the third-party litigation funding that's going on, and some quite frightening statistics depending on where you think that's headed. So staying close to what's going on in the court systems and what filings are happening, and picking up on those trends early, is absolutely critical, so that it's not another horrible round of reserve strengthening in five years' time. But there's not a single answer, and I think that's why some of what we're talking about today is so critical: we need to bring all of these models together into something we can get our heads around.

There's not a one-to-one mapping between these risks and a particular policy type; we're often grappling with the impacts of several of these risks within a single product. And as an underwriter, how do you get your head around all of that information and make a decision about how to underwrite the risk?

Tom Wilde: Yeah, I think we're witnessing a steady transition in underwriting. It used to be done mostly inside the four walls, in the sense that we could know enough from our first- and second-party data, what the customer tells us, to write the risk. Now, and into the future, I think that's in the process of inverting, and in some cases it may invert completely. I've heard suggestions from carriers in small commercial that, hey, in the future we may not ask the insured anything; we may be able to get 100% of what we need to know about that risk from outside the four walls. And again, you both touched on this earlier: the companies that will succeed are the ones who know what data they need, at what time, to write that risk. That's the new imperative. Understanding things like cat risk and what's happening outside is going to be absolutely critical, but importantly, because that data will basically be commoditized, anyone will be able to get access to it. It's the synthesis of that data, and where you apply automation, that will be the difference.

Bryan O'Neal: One other way to approach this question is to ask what your intermediaries want. What do the agents and the brokers and the MGAs want out of their carriers? They get a vote too, and this actually impacts risk selection quite a lot. If you were to survey a hundred agents, brokers, and MGAs out there and ask them which carrier is their favorite to work with and why, the answer you will get (I guarantee you, because I've done this) is almost always some regional or local carrier that picks up the phone and talks to them. They never come back with the big-name carriers with the big tech budgets and whatnot. They say, "Oh, I prefer to talk to Bill up in Atlanta," or something like that, "because he understands the kind of risks that I'm trying to place."

And we always talk about insurance being a relationship business; this is actually a feature of the insurance industry, not a bug, for reasons we need not get into right now. But when you approach it this way, you start to think: alright, what does Bill in Atlanta actually need to do his job better when he picks up the phone? Why can't that person have his or her own dashboard of really good indicators right in front of them, not just to make off-the-cuff discretionary decisions but really informed ones, and to pass that information along to the Main Street agent? Over the next 10 or 20 years, I think that aspect of insurance will not change. The market power that intermediaries and producers have stays constant, and insurance carriers will need to consider them in their tech decisions, because they are the interface to the customer. They're not the end insured, but they are the interface to them. I think about this a lot in terms of delivering rapid answers to those producers. I really think you can build a healthy book that way, just by being super responsive.

Jeremy Stinson: Thanks. Yeah, makes sense. Thanks Bryan. On to our fourth challenge: regulatory and competitive pressures. This makes things even more complex for carriers, obviously, as insurers are walking a tightrope between regulatory pressures on one side and increasing competition on the other. And on the slide here, nearly half of carriers consider regulatory hurdles a major barrier to updating their pricing models. Maybe the first question for you, Bryan: what are the key strategic challenges you think carriers are facing when adapting to tightening regulatory environments?

Bryan O’Neal: Well, I spend most of my time these days thinking about what's ahead, not the current state. What's it going to be like when I try to bring AI into a workflow? You should not assume that the regulators are going to be very forward-thinking about this. Instead, they're going to want to see the same kind of checks and processes at every step of the game that you see now. If you want an AI to make a decision, you're going to have to show checkpoints along the way. You're going to have to answer: how did the AI make that decision? You're going to have to show the ability to rewind or roll back decisions the AI might have made. You have to show human governance along the way. And I really think as insurance companies start to take this seriously and entertain these architectures, they're going to be faced with the question of, okay, how do I implement governance in a way that will look familiar to the regulators of today?

That's backwards compatible, if you will. Maybe one intellectual shortcut to take here is that tasks at your company that are done by code today will continue to be done by code, and tasks that are done by intelligences today, i.e. humans, are candidates for an AI-driven process. That's reasoning by analogy, but it's a good jumping-off point. From there, the challenge is how you avoid repaving the cow paths, as it were: not just taking a simple process that was always done a given way because the humans were constrained in a certain way and replacing it in a similarly dumb, AI-enabled manner. I am very curious about what a process that's designed for AI from the very beginning starts to look like. And anybody out there who's starting to solve those problems, give me a call; that's what I'm interested in these days.

Tom Wilde: Yeah, I think it goes back to the supply chain metaphor I touched on, which is that the regulators are always trailing the market. They have to figure out what people are doing with the technology to know what questions they want to ask. You're seeing regulation come in, and as generative AI and agentic AI are applied to these kinds of use cases and workflows, the regulators are going to show up and say, okay, you automated part of this process; I need you to be able to share with me the provenance of this decision. What data was used, what prompts were used, what model was used? Imagine all of the attributes of a decision that will have to be tracked and reported on into the future. You have to design that in; it's going to be very difficult to retrofit. So designing that into your processes, with regulatory as one of the critical stakeholders, is going to be absolutely mandatory. Sorry, Alice, I think I kind of stepped on you a little bit.

Alice Boreman: Yeah, I was actually really surprised by that statistic, and I think your considerations around the AI piece are really interesting and something I hadn't really thought about. From a pricing actuary's perspective, I suspect the statistic is higher in places like the US, where there are all of the rate filing processes you have to go through. Frankly, in the London market, regulation is still pushing people to have better models and get those updated. But I do think it's a really difficult challenge. There are lots of future reporting requirements and governance coming, and we just don't know what it's going to look like around things like AI. There was a flurry of activity a few years ago when it seemed we were going to have to do all of the sustainability reporting, and in my mind a lot of that was going to have to come through underwriting and pricing models. It feels like that has tempered and doesn't feel on the cards anymore, but it is a bit frightening that we don't really know what we're going to have to produce. So we need to capture the data in as flexible a way as possible so we can respond once we know what the rules of the game are.

Jeremy Stinson: Great, thank you Alice. That brings us to our final challenge of the day: talent and analytical capabilities. We've spent a lot of time today talking about the tooling and technology carriers can use to address these challenges in risk selection and pricing, but it's not just about the tools; it's about the people as well. As you can see on the slide here, 40% of insurers cite recruiting advanced analytics talent as a top barrier to digital transformation. We'll come back to you, Alice, on the first question here. Given everything we've discussed today, what should a pricing team look like going forward?

Alice Boreman: Yeah, I think it's a fascinating question. We host a heads-of-pricing round table here in London, and no matter what question we pose to the group, we always somehow end up back at the shape and skillset of a modern pricing team. I think there are different right answers; it's not one size fits all. But one of the things we talk about a lot is, as technology becomes a more important way of deploying our models, what kind of technology skillset we need embedded in the team. I'm sure lots of people have had the challenge of trying to work with IT teams who report somewhere else and have conflicting objectives and interests. So we increasingly see developers embedded within actuarial teams. Then the debate becomes whether those developers should be actuaries. With lots of tools and technology based on things like Python coding, we see lots of actuaries coming out of uni now with those kinds of development skills, but I'm also a strong advocate of letting the professionals do their jobs. The question is how you get that working smoothly.

And I think if you can get the various skill sets you need together in the same team, it can work really well. But it can be challenging, particularly in big groups with big IT functions, to do that without appearing to run shadow functions. That said, if you've got tools like Coherent and some of the other spreadsheet-to-API tools on the market, you can let the actuary do what the actuary's good at and the tool does the rest. Different teams and types of actuaries will prefer different types of models. But I've definitely experienced the challenge of trying to hire pricing actuaries with Python skills, and it's a good place to get paid well at the moment, I think.

Tom Wilde: I wonder if there'll be, I haven't seen it yet, but I wonder if there'll be someone in the commercial space that'll really focus on the employee experience as a differentiator. A lot of these companies, and you've both touched on this, can be a punishing environment in terms of the number of applications you have to use, the number of screens you have to jump through, and the constant change. It can be a very difficult job to do. Will there be a carrier who decides to really focus on making that employee experience, the underwriter's experience or the claims analyst's experience, first class as a way to attract and retain talent? It's an idea that's been kicking around. I haven't seen anyone try it, but boy, from what I've seen internally at some of these companies, it's a very difficult job: a lot of "see it and key it," multiple screens, and multiple data sets to manage. It requires a lot of concentration and focus, just because of the legacy of these businesses and their systems and the changing nature of the market.

Bryan O’Neal: That's an interesting take, actually, and I feel like one of the differentiators here, to your point, could be which insurers actually give their employees the freedom to experiment. This is such an exciting time to be, let's say, a high-agency person who's willing to go out to the frontiers of these new tools and see what can be done. All of a sudden you might find ways to get 10x the throughput you had before in your job, and nobody above you in the organization is going to see or understand that. You, uniquely, as the person in your seat, are in a position to make major breakthroughs with all these new tools that are coming out. I really feel like most insurance companies don't get this right; they don't allow their employees to do anything. But the ones who foster a spirit of innovation will reap the rewards very quickly.

Jeremy Stinson: Awesome. Great. Thanks Bryan. Alright, that brings us to the end of today's webinar. We want to spend some time answering the questions you've shared with us throughout today's session. First one here, maybe I'll toss this one to you, Tom, and then Bryan and Alice, feel free to chime in: how can insurers ensure that AI-driven risk models remain transparent and explainable?

Tom Wilde: Yeah, I think there are two ends of the spectrum. We're all excited by the promise of generative AI and the vast capabilities it's displayed. Figuring out how to guardrail that and build explainability into each step of the process is mandatory in a big enterprise environment where you're talking about risks that could be nine figures. So it's not just about prompting; it's about the data provenance, about controlling who can write those prompts, who can edit those prompts, and what data is being included as part of those prompts. It's a new enterprise discipline that really hasn't existed before. In fact, I've sat with underwriters and seen the risk memos they write, and I've asked them, where did you come up with this? The answer, credibly, is: I've been doing this for 25 or 30 years; that's how I'm able to do this. That's a strong answer, but into the future, as we want to synthesize more and more data sets, you're going to have to blend automation, explainability, and judgment into these processes.

Bryan O’Neal: So the question, correct me if I'm wrong, was how do you show transparency for AI pricing models? And I've got a little bit of a contrarian take here. I don't know that pricing models are really where people are going to use straight-up AI, data in, pricing out, because man, that's actually a hard one. And, obviously I work here, but I really think that deploying your rating algorithms via Excel, and there are other really good rating platforms out there, but this is the most obvious and cleanest route to market, deploying your logic to Excel and making it available as an API to your AIs, solves most of your transparency problems.
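The pattern Bryan describes, deterministic rating logic behind an API rather than inside an opaque model, can be illustrated with a toy sketch. This is not Coherent's product; the rating formula, factor table, and endpoint below are invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy rating logic standing in for a spreadsheet model: the base rate and
# class factors are made up for illustration, not a real rating plan.
BASE_RATE = 1.25
CLASS_FACTORS = {"office": 1.0, "retail": 1.2, "manufacturing": 1.8}

def rate_quote(payload: dict) -> dict:
    """Evaluate the 'spreadsheet' deterministically: same inputs, same premium."""
    exposure = float(payload["exposure"])
    factor = CLASS_FACTORS[payload["class_code"]]
    premium = round(BASE_RATE * factor * exposure / 1000, 2)
    # Returning the factors alongside the answer keeps the calculation
    # auditable, which is the transparency point being made.
    return {"premium": premium, "base_rate": BASE_RATE,
            "factor": factor, "inputs": payload}

class RatingAPI(BaseHTTPRequestHandler):
    """Expose the rating logic as a JSON-over-HTTP endpoint an AI agent can call."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = rate_quote(json.loads(body))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

def serve(port: int = 8080):
    """Start the API (not called here so the sketch stays importable)."""
    HTTPServer(("localhost", port), RatingAPI).serve_forever()
```

Because the AI only assembles inputs and calls the endpoint, every premium can be reproduced and explained from the rating table itself, which is where Tom's follow-up about inherent explainability lands.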

Tom Wilde: The pricing model would be inherently explainable in that case, that's right. I think it's the data assembly upstream where that transparency is more at risk.

Alice Boreman: The model is just a model; it should be an indicator, something directional for making decisions. If you think about the large commercial spectrum, it's only ever an indicator, and the underwriter is ultimately working out what to charge. And personally, Bryan, I agree with you. I think AI in the short term is much more operational; we don't have the volumes of data in this personalized business to be doing machine learning models for our pricing. But you also need to be able to work out and explain your models regardless of how you've come up with them. And actually, I think there's some opportunity there. Underwriters and actuaries in particular are not necessarily well known for excellent documentation; it's not the bit anyone enjoys. If you can use gen AI-type models to help get you along the way on your documentation, there are some great gains to be had.

Jeremy Stinson: Maybe one final question; I think this might be a good one to wrap us up. We can stick with you, Alice, to answer first. How can insurers overcome internal resistance when adopting new technology or automation solutions?

Alice Boreman: Yeah, so I think a couple of things, which might feel opposing, but I don't think they are. One, you've got to have a vision and be able to articulate it really clearly to everyone. From a pricing technology perspective, we often go in and find the pricing team trying to do a model replatforming project in isolation, and the first thing we say is, I'm not getting involved in this project unless IT and underwriting are also in the room. You've got to bring everyone with you, and you've got to understand what this is trying to achieve. If it's just putting your Excel rater onto new technology because that's what everyone else is doing, stop now. You've got to have a vision for what you're going to do going forward. Is it about avoiding re-keying? Is it about capturing the data in a structured way so you can do better modeling and update your model entirely in a year's time?

You've got to have that longer-term vision: what are the metrics and reporting you want to be able to use at different levels of your organization? If you have that, and you tell people they're not going to get it tomorrow but that's where you're heading, that's really exciting for them. Then, and this could seem opposing, but I don't think it is, you have to break it down into really small tranches. That might mean starting with one model for one line of business, ideally with an underwriter who's enthusiastic and will be a change champion for you going forward, and proving it while being willing to learn. You might do it and find it doesn't work, it's not the right thing, and you have to change how you get to that longer-term vision. But having the big vision and then the piecemeal test beds to get there is the way. These changes are never a smooth or easy path, but bringing everyone together is really important.

Bryan O’Neal: I think insurance companies are uniquely wired to make decision-making difficult. I'm in sales; I see this every day. It's very rare that an insurance company can make any kind of decision quickly, and the bigger they are, the slower it is. I'll actually submit to you that this is by design, a defense mechanism. It's more like an immune system to keep the balance sheet safe. That's the fundamental thing that has to be true for an insurance company, always, and there's typically much more downside risk for an insurance company than there is upside opportunity. Once you understand that as the underpinning of the political landscape at your company, things make a little more sense. Nobody is trying to veto you; nobody's trying to make your life harder. I feel like if you're the sort of person who's trying to drive change at your company, prototype things up as well as you can, float them to the rest of your organization, and let them start to grow on their own.

Let the idea start to become other people's idea; don't try to be the champion of it necessarily, because if it's just you and your own political capital at risk, it doesn't always work out. But if you socialize it well and your idea is fundamentally sound and there's business value, somebody will understand it. You've just got to recognize that there are already a lot of large ships crossing the ocean at your company that need to get where they're going. But a small course change here and there can lead to a big outcome, and if you're in there early, acting as a leader, a technical leader, and a thought leader in your company, then you actually can drive change.

Jeremy Stinson: Tom, why don't you wrap us up?

Tom Wilde: Yeah, I would echo what Bryan and Alice said there. And Bryan, I like your point; I do think insurance companies have to inoculate themselves against doing things rapidly or capriciously, because at the end of the day, the trust customers place in the insurance company's ability to back up the risk it has underwritten is absolutely paramount, right? If you don't have that, you don't have an insurance company. So while, as solution providers, it can be maddening to help them work through the evolution of how to use technology in these environments, I've found that this is a feature, not a bug, in how insurance companies operate, and it's similar in financial services as well.

Jeremy Stinson: Awesome. Well, that does it for today's webinar. Thank you to our panelists for joining us today; I thought this was a really great session. And thank you all for joining us. We'll send the recording out in the next 24 hours or so, and I'll also make sure to get to any of the questions we didn't have time for today. Thanks again, and have a great rest of your day.

Tom Wilde: Thanks everyone. Thank you all.

Jeremy Stinson: Yeah.
