
Harnessing Generative AI for Innovation in Insurance with David Moorhead, Insurance Executive – Consulting at Ernst & Young

Generative AI is reshaping industries, and insurance is no exception. This rapidly evolving technology promises to revolutionize underwriting, claims processing, customer service, and beyond—offering insurers unprecedented efficiency and decision-making power. But how should companies adopt generative AI, and what challenges do they need to address? Recently, during the Unstructured Unlocked podcast, David Moorhead, an information technology executive at Ernst & Young, shared deep insights into generative AI’s game-changing role in the insurance industry.

Listen to the full podcast here: Harnessing Generative AI for Innovation in Insurance with David Moorhead, Insurance Executive – Consulting at Ernst & Young


Michelle Gouveia: Hey everyone, welcome to a new episode of Unstructured Unlocked. I’m your co-host, Michelle Gouveia.

Tom Wilde: And I’m your co-host, Tom Wilde.

Michelle Gouveia: And we are joined today by David Moorhead, an information technology executive at Ernst & Young. David, welcome to the podcast today.

David Moorhead: Thank you. Excited to be here and share some insights, as I’m kind of on the front edge of EY and the services we provide insurers, especially around P&C and life, with gen AI solutions.

Michelle Gouveia: We are very much looking forward to the conversation. Before Tom and I get to our set of questions for you, do you mind giving a little bit of background on yourself, your experiences, what you’ve focused on in your career to date?

David Moorhead: Absolutely. So I am part of our AI and data practice, which includes gen AI. We used to brand that as advanced analytics, but that’s extended now to bring in gen AI, and I focus on insurance. All I’ve done is P&C within EY. I focus not only on our go-to-market efforts around gen AI, but on our strategy and our delivery: how do we essentially innovate, bring gen AI into the umbrella of data and analytics, and operationalize it and make it part of our clients’ day-to-day execution ability.

Tom Wilde: Excellent. Well David, I thought I’d start with a question that’s very current. I was just looking at an interview that Satya Nadella did about gen AI, and it kind of raised some eyebrows because he essentially said, yes, gen AI is transformational, but we really won’t know until we see the impact that we all believe it will have, which is an increase in efficiency. I think that’s what most people would say about generative AI, especially in industry: that it’s going to make knowledge workers especially more efficient at their jobs. And his point was, when we see productivity jump by double digits, like a 10% jump in productivity, we’ll know we’re onto something; until then it’s just really conjecture and possibilities. So maybe to translate that to a question: I don’t know if you saw that comment he made, but I’m curious about your reaction to it and whether you agree, and also maybe how far are we from a metric like that where we all sit up and say, oh yeah, this is actually having the impact we thought it would?

David Moorhead: Well, I think, as with any type of measurement, be careful what you measure and how you measure it. And I think the whole industry is waiting to see kind of a broad-brush statement that gen AI has automated this for the industry, and I feel like a lot of our clients struggle with that same question when bringing in innovation: how do I build the business case to support using a new technology? And I think where we try to support them is use case by use case: break it up and try to keep it simple. I feel like there is this concept that I’m going to use gen AI, put it on my desktop, and use it for all things. And I think that’s hard to measure, and that’s somewhat subjective. Me as a consultant, can I be a better consultant? Can I gain efficiencies?

But by breaking it up by use case, I can almost align it to a project; I can kind of fit it for purpose. And where we’ve had a lot of traction is in the underwriting process: can I automate specific steps I have today by leveraging the gen AI capability, measure it, and do a couple of things: not only gain the efficiencies from the use case, but actually gain effectiveness. And I feel like you’ll hear a lot of us talk about this: as you internalize gen AI and you apply it to a use case, I’m not going to take the human out of the process, but I’m going to center around the human, and I’m going to look for opportunities where I can take manual steps and automate them in a very measurable way. So we look for use cases that scale expertise, that take the complex: I need to understand thousands upon thousands of pages of documents.

That’s a great gen AI use case. Can it take those documents and put them in a way that a human can process and understand, and not only be more efficient and automate, but do it at scale? Can I go through renewals, can I underwrite, can I bind new policies at a faster rate? And can I take simple actions, simple steps, and automate them so an underwriter that has expertise can apply it at scale to a much greater volume, so I can do more policies, more renewals? Same thing on the adjuster side. So I think if you go function by function in the value chain of insurers, there are a lot of use cases where you can really make measurable change that drives that early adoption. And I think the bigger question I get from a lot of our clients is, if I have initial success, how do I scale? And I feel like that gets into Satya’s question: if I’m going to innovate and pilot, we’re waiting to see that answer on full gen AI scale across an enterprise, which I feel like we’re well on our way to, but the proof’s in the pudding; to his point, we’ll believe it when we see it at scale. But there’s been a lot of success applying this on a use case by use case basis.

Michelle Gouveia: David, how much of what leads to a successful AI implementation, for any of the products or use cases you just mentioned, is a detailed process-engineering effort? We talk about how AI is not going to fix a broken process; it’s probably just going to compound the problem. Unstructured data, or a lack of data, is not going to successfully train models to provide the analytics and the insights that you want. So up front, how much work does an enterprise need to think about doing, whether it’s an underwriting workflow, a claims workflow, or anywhere you would apply AI, to really evaluate the current state versus the desired post-AI-implementation state?

David Moorhead: Well, I think that’s a great question, in the sense that one of the core reasons we align to a use case is because we’re solving a problem; we’re not just applying technology to automate a specific step. And I feel like to some degree there’s people and process and technology all aligned in that use case. How can we help the underwriter make a better decision? How can we help a claims adjuster transition to a new claim file and do it in a way where they can understand it quickly, and run alongside and help the claimant make better decisions in their recovery from their claim? And I think, to your point, there are process pieces to that. Obviously there’s technology, and I do feel like a lot of people are like, hey, I’m going to use gen AI, I’m going to solve a technical problem, and there is some support for that.

I can look at my current process and see very manually intense steps. And when I see that, especially when we’re looking at it strategically, at where I put gen AI and what I enable, I always say don’t sacrifice good for great, right? I can wait and do a full enterprise-level solution, or I can fix a pain point that I have in my current process. So we’re seeing a mix. A lot of people are looking at using gen AI to automate, taking very tactical approaches to fixing their process, because even just automating it is a pretty heavy lift right now. More strategic people are looking at this and saying, hey, I can ask better questions, I can make better decisions. And typically when you go into that broader solution, where you’re truly transforming the process, gen AI is a small step. And I think one thing we’ve seen is people kind of overreact with gen AI.

What I mean by that is, I have a new tool and I want to use it for everything. And to me, gen AI is an evolution of more traditional AI and ML, and even doc intelligence, and I think there’s a blend of technologies that are cost effective. I can still use doc intelligence to some degree to ingest documents and content, but through gen AI I can increase my hit rate; I can do it in a way where I have more flexibility across different types of documents, bringing them into a knowledge repository more effectively and more efficiently. And I think gen AI makes a lot of your more traditional solutions more scalable, more flexible, and quite frankly very dynamic, where you can mature and add sophistication and functionality. Back to that good-to-great point: I can fix manual steps and then evolve my process alongside gen AI, applying it to more complex use cases, solving more complex problems with a gen AI platform where I can be dynamic and agile and adjust as my business process adjusts. I can use gen AI to work in more sprint-to-sprint agile processing versus the old waterfall, hey, let’s reinvent the whole process. So there is a little bit of a mix to that answer, but I do feel like finding the starting point and keeping it simple is where we’ve had a lot of momentum, and then looking to transform the entire process in a broader solution obviously has the bigger impact.

Tom Wilde: Broadly, the insurance industry is in the business of probabilities. I mean, that is kind of underpinning almost all insurance: trying to predict the future, predict risk. But when you look at gen AI, and I agree with you, we often describe it more as a programming framework as opposed to sort of a Swiss Army knife application. I think ChatGPT is one of the most killer demos ever built, and it has now advanced significantly from that. But within industry, I think thinking of it as a Swiss Army knife application is a little bit dangerous, as opposed to a programming framework that you can apply to certain kinds of problems. But on that front: while insurance is about probabilities, the processes to get there largely need to be deterministic, right? You’re talking about understanding the actual underlying assets being insured; you’re talking about understanding the actual losses that have been incurred over the last 10 years. So gen AI is largely a stochastic tool being used in workflows that need deterministic outcomes. Do you agree with that, I guess is maybe the first question, and if so, how have you counseled customers to think about that?

David Moorhead: Well, back to that use case example, and I don’t want to keep pulling on that, but I do feel like, to your point, there are some very simple things, and I think when you look at the process, there are very tangible outputs. And I feel like bringing gen AI back to that knowledge repository: there are so many steps within the insurance process that are labor-intensive. Understanding a claim file of thousands of documents, being able to answer detailed questions quickly, which goes back to older doc intelligence and doc management type solutions. But gen AI gives us a way to extend that and keep ingesting data and get bigger, better, and smarter. And back to building these large language models to answer business questions better and provide expertise, I actually feel like gen AI really helps me scale coveted skill positions. Can I hire? Especially in specialty insurance, underwriting requires really sophisticated underwriters.

Can I scale that? Well, I can if I take that deterministic model and answer simple questions, collect documents, create summaries, and gain that efficiency, where I can scale that expertise and increase the volume. And where we look for the initial use case is where we can take expertise that’s coveted, difficult to hire, difficult to develop, and provide scale through gen AI: not removing those experts from the decisioning process, but getting them to the point where simple questions, simple answers, and cited materials to help inform a decision allow us to scale and use that deterministic model, with very binary, good outcomes, in a very safe way. But I say that, and there is this broader lens, back to Michelle’s earlier question, right? Well, how do I know that’s true? If I take my documents and I start combining them with off-the-shelf large language models, how do I know my content is safe?

And when I get that answer, how can I cite it and know that I’m getting an auditable, correct answer? And I feel like hallucination is an industry concern, and how you architect these solutions and control the context of where you’re applying gen AI helps for many reasons, because you can’t have bias, and you can’t bring in decisions and not govern them. And I think a lot of times people lose track of the fact that we’ve always had to govern models, right? With advanced analytics, when we start predicting and we start automating decisions, traditional AI and ML has had the same challenge. So we’ve got to extend our process and our governance, and a lot of these simple models are, to some degree, easy to test, maintain, and govern, right? So start where it’s simple, keep it a little bit deterministic, and evolve. And I think when we use the term keep humans at the center, don’t focus on replacing the human. And when we say insurance, there is some very simple straight-through processing.

I can look at, for example, a fender and determine the severity of the damage and do straight-through processing. I could do that with predictive analytics; I could do that with gen AI. And I think with gen AI it can be much more efficient: it can look at the image, look at other images, and get us to an answer even faster than some of the traditional solutions. But it’s not denying coverage; it’s not making the human judgment calls that are a little bit more subjective and difficult to govern. So when we use this idea of keeping humans at the center, it allows us to better manage risk and governance because it keeps it simple. The other thing it does is it actually helps the human with adoption. And I would highlight that the one thing we’ve seen where organizations have gone from piloting in a center of excellence to gen AI at scale is this concept of humans being comfortable with the new technology, right?

Back to simple decisions: can I trust gen AI to help me with these decisions? And it’s almost like we look to have the underwriter, the adjuster, kind of push the technology: I want it to start taking on more decisions for me, to give me more bandwidth. But we again look to have the human drive the broader footprint for gen AI: start small and have the human pull that automation through. So don’t start with straight-through processing; start with teeing up automation of simple tasks, to the point where the underwriter or the adjuster still controls the decisioning and the direction, until they ask for that automation to streamline the process even more. And I think that pull-through from humans builds trust in the AI and gen AI models, because to some degree, if it gives you an answer and you don’t have context, humans aren’t quick to adopt or trust.

So there is a little bit of an evolution of the process using gen AI: starting simple and evolving to more complex process and automation. If you jump and make that leap, we’re just not seeing a lot of success, right? Because the humans frankly are still at the center, just to manage risk with new technology. And technology in the gen AI space is changing not in yearly intervals but in six-month intervals. So there is, to some degree, constant due diligence to keep the human in the decisioning process until the industry and the technology fully mature, to the point where it’s DevOps, it’s integrated, it’s part of the bread and butter of clients’ cultures, and insurers are able to manage it like a technology versus an innovation. So a little bit of a long-winded answer there, Tom, but I feel like: start simple, leverage where traditional AI and doc intelligence work, and bring in these new capabilities that are a little bit more innovative and a little bit more, I guess, futuristic in the sense of decisioning, but do it at a pace that the human’s comfortable with.

Michelle Gouveia: A lot there, David, and I’m probably going to pick on a piece of it that’s a smaller topic than the large regulatory compliance piece, but something you said in your answer where you were talking about adding more efficiencies or scale: the fender example, where you say, okay, gen AI may be more practical to use there because not only is it reviewing the image, but it can pull from other images and do that analysis faster. I’m just curious, from conversations you’ve had: getting data, whether through submissions, through third-party data sources, et cetera, has always been important across all lines. But for specialty lines or commercial lines, sometimes it’s been more of a challenge. And I’m just curious whether having gen AI as, I’ll call it, a core capability has changed the definition, or the expectation, of what kind of other data insurance carriers can access and incorporate, and at what scale, that may otherwise not have been something they were seeking prior to gen AI being in their toolbox?

David Moorhead: Well, I think there’s always this aspect of trust, and I’ll give an example of a use case that at times sounds really good but where we struggle with execution, and that is using telematics in an accident. You have sensors on a car, external data; through telematics you can, to some degree, rebuild an accident. Now with gen AI, I can ingest that content and combine it with the first notice of loss call from a claimant, and I can really blur the line, where the telematics is feeding me, as an adjuster, content while I might be on a call with a claimant who says, hey, I’m at the corner of Elm and Main, and I can look at the event, the external data coming in from telematics, and I’m like, are you sure? Because that’s the wrong address. Or I can tell from the telematics that there really wasn’t an accident, and this could be fraud.

And this comes back to our ability to keep humans at the center to guide us through this, because there’s a lot of judgment involved in how you would handle that situation. And when you have broader data and you’re starting to get insights, that could potentially foster better service, because on that first notice of loss call I’m getting better information; I’m more focused on the person who just had an accident, making them feel indemnified and supported, than on actually typing in data, right? Because gen AI can now take all that content and put it right into the core system for me; as I’m recording this, it’s populating a lot of the content for me and giving the claimant a great experience. But at some point, to your point, what happens when we cross that trust barrier, where we start challenging the customers themselves on the accuracy of their account, because we probably have better information than they do, where they might not be able to answer some of the questions that we, through gen AI, can now answer quickly at our fingertips?

And I think a lot of this gets into underwriting with hazard data and different pieces, and having that human guide us through the interaction is very supportive and helpful. And I think there are a lot of use cases where you can innovate and actually do too much in the automation, too much in the validation, to the point where you lose some trust: either with a customer, in that case someone who has a claim or is going through the quote or policy system, or conversely with third parties, brokers and agents, where you are turning around information at a rate that they’re not accustomed to. And some of this validation and ability to streamline the process can actually get you into gray areas, not only gray areas of the law and governance, but gray areas of appropriate decisioning and interactions with your customers and third parties.

Tom Wilde: I mean, certainly in the life space you could argue there’s very little the insurer needs the insured to provide at this point. They can know enough about that individual to write a policy. The telematics example you give is a really interesting one, in that even in the case of a claim or an accident, there may be information asymmetry where the carrier knows more about it. Not that the insured is necessarily trying to be deceptive, but they may simply not have access to the same level of data. I was talking to someone in the small commercial space who said we’re not that far away from the underwriting process being: give me the name of the company and the street address. That’s all I need. I don’t need you to provide me anything else. I will know everything there is to know about that business, at least everything needed to underwrite the potential risk. It’s kind of a fascinating tilt, especially for the broker’s role, where the broker had to collect exhaustive data to provide to the carrier to do underwriting. And we may see that change, or we may be at the tipping point there.

David Moorhead: Well, and I think, Tom, it’s interesting too, because I feel like insurance is so broad. You’ve got life, you’ve got property and casualty, and then when you start getting into specialty and brokerage and global insurers, the complexity can be incredible. So insuring a car, in my world, I think of my car and it seems pretty simple. I have a car, it has an engine,

Tom Wilde: It’s sort of atomic,

David Moorhead: But I get into specialty insurance, and now I’m insuring a skyscraper, a commercial property, or a nuclear power plant. And I feel like gen AI has a huge role there, not just straight-through processing. Can you imagine the content and the coverages and the products a broker has to pull together

to insure some form of power plant or a commercial entity? And the ability to ingest that content, put it in a knowledge repository, and allow underwriters and adjusters to ask smart questions when it’s not a commodity, when it’s very complex and there are overlapping carriers, there’s reinsurance, there are excess coverages and limits and liabilities. And I would go to some of the global clients I have where they create a treaty, where they’re trying to break up the risk through reinsurance. The level of complexity of some of those documents or contracts is thousands of pages, and they need to be able to ingest that. So think about an underwriter using gen AI at the renewal of a treaty: take a thousand-page document, compare the two versions, and tell me what are the differences, what are the outliers, what’s changed. Now blend that with advanced analytics, that traditional AI and ML, to say, hey, I’ve changed the rates, I’ve changed the coverages, and is that better or worse?

And I can use existing AI and ML technologies to analyze this and say, hey, this is better risk for me, I have better appetite, let’s renew this quicker. And I think combining some of the existing, more traditional predictive analytics with this gen AI model to gain that throughput and automation, and in that case make a better decision quickly: I’ve seen a lot of insurers like that ability for the human to move faster. I think one of the secret sauces in the insurance industry right now, on some of these more complex policies or treaties or contracts, is that the person who moves the quickest actually gains and grows their business profitably at a much higher scale. So speed to market is always there, and I feel like gen AI, not only through automation and savings, but by being able to better service and better answer questions on some of these more complex contracts, treaties, and reinsurance products, allows people to book the best business faster than their competitors.

So I think you’re seeing, even just taking a use case at a time, some very material, game-changing results. Back to your original question about Satya, I feel like some of the market leaders are really taking specific use cases and seeing material change. And what I mean by that is simple changes that gain very significant value and cost savings, funding that innovation, funding a new process competency, a center of excellence, just on the first use case. I feel like once you do that initial release, that initial use case, you can tangibly measure the value. You can see within your organization where you can apply it for the most impact. And then typically the use cases start falling into place, and you want to scale, you want to operationalize, and now you have energy to better manage the risk, govern things, and extend solutions into a full-on gen AI platform and architecture versus just a simple point use case and starting place.

And I feel like that transition: two years ago when I started these conversations, it was “explain gen AI to me.” Then a year ago, insurers came to me and said, hey, how do I pilot? How do I start? I feel like a lot of the insurers are now at: I’ve bought in, I get the concept of keeping humans at the center, I’ve had a lot of success on point solutions; how do I scale, manage, and govern? And to go back to Michelle’s original question, this is a process. Can I go across the value chain? Can I scale this for claims and underwriting? Can I do better underwriting solutions not just for cyber, but for all my products? Can I do it for P&C? Can I do it for life? And I feel like that ability to scale is where people are now at that inflection point Satya’s pointing to: it’s easy to do on a use case basis, but can I do this at an enterprise level? And I think we’re waiting to hear back from the industry on that ability to scale: what is that new number of impact, and how big and how game-changing is this really? And based on our initial use cases, we’re expecting pretty high numbers. So I’ll leave it there.

Tom Wilde: Go ahead Michelle. I stepped on you. Go ahead.

Michelle Gouveia: No, it’s okay. It’s a question about some of what you said there, David, about being able to extract the information from these large packets of information to be the first mover to make the business decision. You need to know what information you are seeking to be able to extract it, but then there’s also a fine art to how you draft the prompt, right? Making sure you’re asking the right question to get the right answer. And we talk a lot, and it’s a topic we’ve also addressed here, about the big regulatory and compliance environment, and being sure that you’re being transparent about where AI is being used, how you use that data, et cetera. But I’m curious about your thoughts on, internally, when someone is overseeing how prompts are drafted, where AI is used, how it’s used: what are the guardrails that people or enterprises will need to put in place to say, this is when you can and should use gen AI and ask the question, and this is when you should take whatever the alternative action is? So curious there, because it’s a little bit more, it’s

Tom Wilde: It’s funny, building on that guardrail question, I’ve asked this of a few different company profiles: where does prompting live? Is it the business? Where does prompting live as the new programming language? But anyways, go ahead, David.

David Moorhead: There’s a lot to unpack there, but I think prompt engineer, data engineer, there are always these new roles and new capabilities that we put into our development lifecycle. And I think prompt engineering, to your point, is: who owns it? Who drives it, right? It’s a mix. But I feel like, to some degree, that’s what we’re good at. We figured out waterfall to agile, and we keep refining how we go to market and bring in new skills and new capabilities. I think that’s low risk. But then there’s the question of how do I protect the organization, how do I govern this, right? How do I put this into an operational model where I can monitor and secure it? And I feel like one of my biggest lessons learned is that a lot of those capabilities exist today. And I think a lot of clients push back on me.

They’re like, well, I don’t want the model to hallucinate. I don’t want to bring in bias, I don’t want to make a bad decision. And I think if you go back five years, we had the same challenges with predictive analytics and straight-through processing, and traditional AI and ML have the same risk and governance frameworks that we need to apply now. It’s a new tool, it’s a new technology, and we need to refine those processes, but we already have those governance and risk functions and processes and frameworks in place. Do we need to extend them? Probably. We definitely need to bulletproof them and test them with this new technology, because I think the biggest challenge I’ve seen with gen AI is once you go from a pilot or a use case to doing it at scale, and you’re decentralizing that development and really operationalizing it, where it’s part of your development lifecycle, and with agile you’re standing things up at a very rapid pace.

You really have to pressure-test that governance model and make sure you are monitoring this, because it is a new technology, but as with any model you’ve done in the past, you still have the same governance. So I feel like it’s a matter of awareness, communicating it, and making sure, back to this technology, process, and people conversation, especially around the process and around governance, there is some heavy lifting to do there. And I feel like that step from pilot or use case to scale is a little bit tougher for our clients. And I think that’s where a lot of people are looking at how much they can decentralize this. And I feel like a lot of insurers immediately allowed gen AI on the desktop to do things, and they started to see the consequences without governance, without pressure-testing this, without fully understanding who’s using it.

And that was the discipline we usually have in place with a process framework, a risk framework, and monitoring; there is a little bit of an overlay with that governance arm and risk arm just to make sure we know who’s doing what, why, and how, just like any development. So it’s less that it’s gen AI; it’s the scale and the rate at which gen AI sometimes takes off. Are my existing processes and governance prepared for this innovator’s dilemma? Once it starts to get adopted, it starts to scale, and not one function but all functions are using it, and there’s this kind of ramp-up, which is great from a business perspective, entrepreneurially: hey, I can do things bigger, better, faster and really grow materially and profitably. But with that comes risk, and I don’t want to slow that down or minimize that impact; I want to optimize it without putting the organization at risk. And I feel like that goes hand in hand with the risk management function that’s already there and existing, but you need to bring them along with the new capabilities, the applications, the use cases, just so as not to overwhelm that risk function.

Tom Wilde: Maybe bringing the plane in for a landing: three years from now, how do we know what the impact was, measurably? Is it combined ratios? Is it profitability? Is it the rate of growth? What will we look at and say, yeah, that was a moment? Because combined ratios have been kind of stubborn in insurance; they haven’t really moved that much. Insurance has lagged in terms of efficiency gains. I can’t remember, there was a study I saw, maybe in Harvard Business Review or somewhere, that insurance has lagged financial services and a few other sectors in terms of efficiency gains.

David Moorhead: Well, I think there is a concept of a human summit, so there’s only so far you can drive that. But I do feel like you’re going to see, well, I would say profitability is probably a bad measure. I think you can really measure this through service levels. Can I service a client faster, especially if we’re a mutual company, with better outcomes, insure them at a lower price point, gain efficiencies, lower my costs, but then also provide services and indemnify them faster with better results? I feel like there’s a lot of upside for the customer when we apply this and automate it. And I do feel like there will be some adjustment period, kind of the innovator’s dilemma. You can be the first, you can take on more risk, and you’re going to gain market share, you’re going to gain profitability, but at some point it gets commoditized, and that goes back to the internet.

I always kind of go back to this: we knew the internet was game-changing, but we couldn’t predict when and how. So we got all geared up for massive change in business-to-business and everybody’s going to adopt, and that adoption went slower. And I look at gen AI, we know it’s game-changing, and I think a little bit of the opposite is happening. There’s faster adoption than I would expect. And I would also highlight there’s an unprecedented level of investment. You have vendors and hyperscalers investing billions; you have platform software companies using it to make their specific software bigger, better, faster. So it’s very difficult to navigate this, but I do feel like there is that first-mover advantage, and at some point I think the customers of insurers are going to have the biggest benefit from all of this, because we’re just going to become more efficient.

It’s going to become table stakes. And when it’s commoditized, I feel like the people, or the insurers, that have pushed the envelope and driven a little bit of this are going to be in a place where their market share is going to grow, they’re going to stabilize, and I think the customer eventually will have the biggest benefit. And here’s a question I get asked a lot in these conversations: three years ago, would you have predicted we would be here with gen AI? Three years ago I didn’t really fully understand what a large language model was; the concept of that technology was barely formed. So people ask me, in three years, what do you expect? And honestly, I got stumped the first time someone asked me that, because it’s kind of unprecedented. Where are we going to land with all this?

I feel like, in polling some of my peers, they came up with an answer that I think is pretty true. I think one of the tangible things we’ll see is the elimination of software and core systems as we know them. I feel like with gen AI, typing in data is going to get antiquated in the next three years; it’s going to be human conversations. The level of automation is going to get to the point where, in the core applications, gen AI is going to quickly replace a lot of the mechanical capturing of data. The more strategic human pieces, I would like to say, will be slower. But at this level of investment, I feel like humans will never be replaced, but the role of the human I’m not comfortable projecting anymore. I just recognize the level of change that’s coming; the comfort level we’ve seen just in the last three years is a little bit of uncharted territory for me.

So I can kind of predict the efficiencies within the applications, the data entry, the sharing of data, because gen AI takes a lot of that friction out of it: its ability to generate code, migrate code, simplify software engineering. And I feel like a lot of people are looking at taking gen AI into that development cycle and building out these use cases. At some point it even goes beyond simple prompt engineering: where will I be able to generate code and extend things and almost stand up the next use case, where I’m having a human tweak, test, and validate versus actually sit down and type code? Those are the things where I feel like I’m grabbing my popcorn to monitor and watch the movie unfold in front of me. But it’s clearly exciting times, and it’s not often you see or hear pundits scratch their heads on three-to-five-year horizons, because I think if people give you a definitive answer, that’s a little bit of bravado at this point. I feel like at the rate we’re changing and automating, I don’t know where it’s going to end, to be frank. I’d be interested in your view, Tom; you’re kind of on the front end of this too. Where do you see us in three to five years?

Tom Wilde: Yeah, I mean, I think it’s almost a metaphor of the waterline changing. In the short term there’ll be winners and losers. In the longer term, though, I agree with you. Because it’s accessible to everyone, the learning curves will be different in each company and each industry, but that will wring itself out. And I think we’re really just going to set a new waterline, where especially in insurance you’ll see a huge leap in the overall industry’s efficiency, and that’ll set the new waterline in terms of what it is to run a successful insurance company. I think profitability will grow, I think revenue per head will grow if you do it correctly, and I think customer satisfaction should grow. So that’s how I think about it: the waterline will change overall, but it’ll be bumpy along the way, and there’ll be people who get there faster or slower.

David Moorhead: And I’m excited to see where we land. That new waterline is definitely coming. I also struggled with: when are vendors going to commoditize gen AI? When does it go from innovation to table stakes to just the way of COBOL code? Everything eventually goes through a life cycle, where there’s the next round of gen AI. And I think where I get excited: I remember predictive analytics evolving into AI and ML, and then quickly now to large language models. What’s that next level of technology that no one could predict? If I go back to those three years ago, I was just learning what a large language model is and what’s coming. There’s that next round and that next generation of gen AI to that next level, just as AI and ML leapt to gen AI. What’s that next leap? I’m excited to find out, and we’ll see. And I’m looking to folks to kind of commoditize this and bring in that next layer of more sophisticated product.

I think there’s a lot of concepts and energy around small language models and prioritizing and creating products. It’ll be very interesting to see people’s comfort level to build those: the proprietary nature of open-source versus closed proprietary models, the oversharing of information. And I think to me that’s the other thing that we watch in a company and a firm: there is that governance, and where do we start blurring the line between oversharing information and protecting the insured and the customers? Because I do feel like that risk is real as the technology outpaces our ability to govern it. You brought up life, right? There are things that are confidential, and we have to limit some of this innovation to stay supportive of our customers and protect them from oversharing, and to make good, strong insurance decisions. And I do feel like that will be kind of a give and take as we commoditize some of this technology.

Tom Wilde: Well, that’s great. That’s a good place to wrap. We’ve had a great conversation with David Moorhead, a technology and innovation executive at Ernst and Young. You’ve been listening to another episode of Unstructured Unlocked. I’m your co-host, Tom Wilde.

Michelle Gouveia: And I’m your co-host Michelle Gouveia.

Tom Wilde: Thanks so much, David.

David Moorhead: Thank you.

Michelle Gouveia: Thank you, everyone.
