
Unstructured Unlocked episode 35 with Alex Taylor, Global Head of Emerging Technology, QBE Ventures

Alex Taylor, Global Head of Emerging Technology at QBE Ventures, joined the Unstructured Unlocked podcast to discuss how they evaluate potential companies for investment, the challenges of predicting technological advancements in AI and machine learning, and the differences between this generation of AI and previous hype cycles, such as blockchain. Hear how Alex envisions the insurance industry’s potential trajectory for the next five to ten years, guiding capital investments accordingly.

Listen to the full podcast here: Unstructured Unlocked episode 35 with Alex Taylor, Global Head of Emerging Technology, QBE Ventures

 

Michelle Gouveia: Hey everybody. Welcome to another episode of Unstructured Unlocked. I'm Michelle Gouveia.

Christopher Wells: And I'm co-host Chris Wells.

MG: And today we are joined by Alex Taylor, the Global Head of Emerging Technology at QBE Ventures. Alex, welcome.

Alex Taylor: Thank you. Great to be here.

MG: We're very excited to have you. Do you mind starting off by telling everyone what the global head of emerging technology does?

AT: That's a very good question, and it's one that I'm on a lifelong journey to discover myself. It's one of those roles that's somewhat ephemeral, but people in a role like mine come from a lifelong career in technology, and I am a passionate technologist. I've had a series of startups, and I've always had some kind of technology role in development or architecture or even technology communications. In ventures, it means that I translate all of the skills I've built up over my career into looking at the companies that QBE might want to have a stake in. That means I sit down and evaluate: is this company real? Do they do what they say they do? Are they going to grow? Do we have the opportunity to grow alongside them? That might sound simple; it's in fact extremely complex. But it translates, we hope, to the mutual success of both QBE and the companies that we choose to partner with and invest in.

CW: So what's the day-to-day look like in this role, if there is such a thing?

AT: There's a lot of deal scouting, as you might imagine, but there's also a lot of research and discussion. I've always said that our job in ventures in insurance is to look at where the industry might be in five to ten years and invest our capital to be aligned with that. And as we all know, insurance is renowned for its conservatism, particularly in technology. That job used to be simpler than it is now. It meant we could say there's going to be dramatic change in geospatial or in systems maturity or in process maturity, and here are ten companies that might be able to do that. I'm quite openly saying now that it's become a lot more difficult to do what we do, particularly because of the advent of modern AI, and generative AI in particular. The horizon that we could see over has gotten a lot closer. Now I don't know that I could see where the industry, or in fact the world, is going to be in two years.

CW: Yeah, I was going to ask you literally that question. You want to stay ahead of the puck, and then, I don't know, there's a giant rift in the ice. But this has happened before. There was an AI hype cycle five to seven years ago, and then of course there was the blockchain hype cycle. So these industries have been through these things. Are there any words of wisdom you can share, having weathered a few of these?

AT: It is interesting. Of course, I founded a blockchain company in 2012 and exited in 2019, so maybe that's one credit up on the scoreboard there. We got in super early in that particular space. But I'd like to think, to what I said before, that sometimes I can see over the horizon a little bit. What I think is different about this generation of machine learning and AI, and to your point, obviously we have had AI and ML for quite some time and we have been through a hype cycle in this space before; we're in another one again. What I would encourage people to look at very closely is what's different about this particular time, and our hypothesis is that it's generalized models. That's what's different. If you look at ten years ago, clearly we had computer vision, we had language analysis, we had various models that could optimize for classification and other things.

What we didn't have is models that could be arbitrarily instructed to perform different tasks equally well with the same foundation model. The translation of that is that what we're seeing in this generation is a capability much closer to what I would describe as human-like intellect, not human intellect, and I'm not calling AGI just yet, even though some in the industry are. But we've got this emergence of capability where something can equally well look at a picture of a damaged roof on a house, analyze a claim description, look for discrepancies, and interact with a customer at the same time. And it's that democratizing capability, the fact that people can access these models despite not being machine learning engineers, that makes this fundamentally different. Suddenly the best-in-class people to use these models are not developers; they're underwriters or claims adjusters or people working in finance, and I don't think that's ever happened before.

And it's the archetypal engagement with these models that I think is going to define why this is fundamentally different. Frankly, I don't know that we have hit peak hype yet, but at the same time, we're already seeing a lot more consumption of the capability than we have in other hype cycles. Blockchain's the perfect example, of course, because you almost had to prescribe to people that they use that technology; there was no pull, really. With AI, what we're seeing now, in many of the discussions I've had in insurance, is that in 30% of organizations people are actively using ChatGPT and its equivalents on a daily basis, and that's because they see value.

MG: Go ahead. Sorry, Chris. To take a step back for a second: in our world, in the VC world, a big part of, I'll call it, the before-gen-AI and after-gen-AI realities is that when you're diligencing these startup companies and vendors, a big component of the review is their technological IP. What are they bringing to the table that's so unique that it's got a moat around it no one else can build? That's what will make it sticky in the organization, and that's a point in the bucket for them when deciding whether it makes sense to invest. Fast forward to now, where gen AI, not in every business model and every company, but in those companies that are using some type of AI, this generative AI is kind of table stakes.

And so it means it's accessible to everyone, to your point, but it also makes it more difficult to identify the differentiator. I think what really is a differentiator, from an insurance carrier perspective, is the data that you can bring; that's the carrier's IP. But there are tons of questions about data integrity and data protections. How does gen AI change, one, how carriers are looking to partner with these companies, and two, how they need to think about their data and its protections?

AT: Yeah, it's a great question, and I'll go back to the beginning of what you asked. You're absolutely right: I think for the last ten years a lot of companies have said, we are using AI to do X or Y or Z, and a lot of the time the biggest challenge we had in VC was validating that statement. Are they actually using some form of AI, or is it just a marketing term? Is it algorithmic? Is it classical? That was a relatively easy task after a while, once you understood the right questions to ask, and I got to the bottom of those things quite quickly. To your point, what we've seen now is that suddenly everybody can actually use AI, not originate AI but use it. And that makes it really challenging, because suddenly everybody is doing whatever they were doing six months ago, with AI, and that was easy to bolt on at the top because these models are very, very accessible.

It's a hobby of mine at the moment, in fact, to use the Wayback Machine on archive.org to look at any company that says they're using AI and see what they did 12 months ago, because of course we're at peak hype, so whatever they actually did then, they now do that plus gen AI. Which is fine, and people should be exploring that, but it doesn't necessarily mean they have a protectable moat, to your point. And that's how I do my moat discovery: are you just consuming GPT-4 or Claude, or are you running a Llama model that you've modified into something you've created? The former tends to mean that you don't have a moat. In some cases, through partnerships, you might, and to your point, it revolves around the data you have access to.

And this is a really interesting trend right now. Having been in the insurance industry for a while, we've always talked about using our data as our protectable asset, something we might be able to use either first-party or with another organization to train and refine a model to perform better than the rest of the industry. That used to be really hard. It used to be something we talked about doing, or in some cases didn't want another company to do and put legal protections around. Now suddenly it's dramatically easier, and you've got entire communities of people on Reddit who spend their days refining models. It is trivially easy; teenagers are doing it now. It is fantastic, and we are in this brave new age of AI. The ramification for organizations, though, is that they have to get serious about using and protecting their data, particularly in partnerships, and pay very close attention to the legal agreements between them and the organizations that might come into possession of their data.
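To ground what "refining models" now takes, here is a minimal sketch of a parameter-efficient (LoRA) fine-tune of an open-weights model using Hugging Face's transformers and peft libraries. The model name, dataset file, and hyperparameters are illustrative assumptions, not anything QBE or Alex described:

```python
# Minimal LoRA fine-tune of an open-weights model: an illustration of how low
# the barrier to "refining models" has become, not a production recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # placeholder open-weights model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of all 7B parameters,
# which is what makes a single-GPU fine-tune feasible.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"]))

data = load_dataset("json", data_files="claims_notes.jsonl")["train"]  # hypothetical data
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments("lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The point is the scale: a few dozen lines and commodity hardware, where full fine-tuning once required a machine learning engineering team.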

I think we've seen a lot of the lawsuits, and in fact copyright protections guaranteed by Microsoft and now OpenAI, saying that if you get sued for using somebody else's data, we will indemnify you. The reverse is also true. You see a lot of insurers starting to very carefully test the waters on what happens if we create a fine-tuned version of GPT-4, or what happens if we buy a company training Llama models to do X or Y or Z. I think we're going to see a lot of this, and in the next two to three years we're going to see a lot of, let's call it, digital twins of insurance processes run by machines that are replicas of the way it's done classically by humans inside an insurer. So to use a QBE example, and not that we're doing this yet: if a QBE underwriting process in London for commercial property had that secret sauce, and we like to think that it does, we're going to start seeing the evolution of machines doing that without human involvement, or with limited human involvement, but dramatically faster and cheaper.

And the flip side is that companies that choose not to do that might find themselves getting left in the dust; that's the risk versus the reward. I've been saying this a lot recently: choosing not to participate in this AI generation means that you're exposed to others that do, and that's what we need to pay attention to.

CW: Yeah, that's very interesting, talking about the moat and then talking about the risk. Going back to something you said earlier: it's not that there weren't models you could fine-tune for purpose. BERT you could fine-tune, but you had to be able to write TensorFlow or PyTorch code, and there was a built-in mechanism for safety there, which is that you had to really understand what these models did. You probably had some background in statistics, so you could understand when a model is performing correctly and when it's going to behave out of distribution. With ChatGPT, you just type things in or upload a picture and stuff comes out, and if it feels like it's doing your job, you're happy, and otherwise you're not. So as I'm thinking about some of the really critical core business processes in the insurance industry, like underwriting or claims processing, that scares the bejesus out of me. People just sort of willy-nilly letting these things do their jobs, and people that are gen AI experts but not ML experts making the decision to push something to production or not, is a really scary prospect to me.

AT: It really is. And look, I think everybody's doing early experiments in this space. To their credit, regulatory bodies around the world have started to ask the right questions, and they're all asking the same ones: fundamentally, who has oversight over what's being done? How are you ensuring that protected classes are being protected, and where's the evidence for that? I think that over the next 18 months we're going to see a lot of saber-rattling and a few very public mistakes around hallucination effects from generative AI models, from companies that think they've seized on something amazing that just falls short of the mark for a variety of reasons. There are two categories I've been putting things into in this space recently. One of those is extractive, and this is broadly retrieval-augmented generation, as it's called: people looking at very long documents.

This is common in underwriting and insurance, of course, where you might get a submission that has 250 pages and you might want 30 questions answered, broadly to say: can I underwrite this according to my underwriting guidelines? If so, can I point to the evidence in the underwriting guidelines themselves? And furthermore, what decision should I be making? Is this a bind? Is it a decline? Should I refer to a senior underwriter, and why? I think that as long as appropriate oversight is applied in this process, so that a human is responsible for checking it's working and saying, yes, this is in the source documentation, I can see it highlighted on page seven by the model, and here's the reference in the underwriting guidelines saying you can or can't underwrite this risk, as long as that's done diligently, it will limit the exposure to error. The second that people go over the precipice of entirely automated decision-making, I think we'll get to the point where that can be done, but at the moment it's a step too far on risk for a very conservative industry. There's a lot of promise, a lot of promise, but I tend to think we need to be conservative here, and I'm not a conservative guy, purely because of the exposure and the risk of being wrong.
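To make the extractive pattern concrete, here is a minimal retrieval-augmented generation sketch that answers an underwriting question against a long submission and always returns the page the evidence came from, so a human can verify it. The embedding model, page-level retrieval, and prompt wording are our assumptions for illustration:

```python
# Sketch of the extractive, RAG pattern: page-cited answers over a long
# submission, designed so a human reviewer can check the cited evidence.
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

def answer(question, pages):
    """pages: list of (page_number, text), e.g. a 250-page submission."""
    page_vecs = embed([text for _, text in pages])
    q_vec = embed([question])[0]
    ranked = sorted(zip(pages, page_vecs),
                    key=lambda pv: cosine(q_vec, pv[1]), reverse=True)
    context = "\n\n".join(f"[page {n}] {text}" for (n, text), _ in ranked[:5])
    prompt = ("Using ONLY the excerpts below, answer the underwriting question "
              "and cite the page number of your evidence.\n\n"
              f"{context}\n\nQuestion: {question}")
    out = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    # A human reviewer checks the cited page before any bind/decline/refer.
    return out.choices[0].message.content
```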

CW: Yeah, I think it's a step too far, and it's also an unnecessary step to me, because at the moment, given the pressure on staffing in an underwriting office, or on throughput to bind more business, the acceleration you can get from a well-architected AI-plus-human workflow is enough. I've heard that repeatedly.

MG: Absolutely. Oh, sorry, Alex. I was going to say: taking it away from underwriting for a second, onto the claims side, what we've heard a lot is that AI would be great for automating some of those internal operations. So I'm the claims adjuster, I'm in the service center and I'm answering calls, and if I need to look something up or identify information really quickly, I can use ChatGPT or generative AI to get that answer for me. But we are a long way from allowing AI to be the interface with the customer, because at the end of the day, that is where an insurance carrier most typically interacts with the customer. So Chris, to your point about human plus AI: my understanding of the appetite, at least in the claims lane, is that it's never going to be a full AI process end to end. But Alex, I'd welcome your thoughts on that and what you're seeing in that vein.

AT: To Chris's point, I don't think it needs to be fully automated at the moment; we don't need to directly expose these things to customers. But there's a lot of very interesting work happening in what are broadly being referred to as copilots now, thank you, Microsoft, for calling everything copilots. We're seeing an evolution in different products in claims that help people doing customer servicing align to typified responses to customers. I saw a very interesting example, I think from KPMG recently, looking at helping claims staff, and this is going to sound strange, appropriately and emotionally respond to workers' compensation claimants: making sure they exhibit a correct response to what's happened to a customer who might have been terribly injured, but also keep things on track so the claim is appropriately dealt with, and making sure they understand what the next best action should be.

That has typically been something a human being is solely responsible for, but homogenizing responses, for better or for worse, is something these systems are quite good at doing and analyzing in real time. And one thing we often don't think about in claims processes is that when you're directly interacting with a human being, you're under quite a lot of pressure. So having a backup, as it were, a non-human one, dare I say, to make sure that you are following corporate process, that you are responding in an appropriate way, that what you are saying is in line with corporate procedure, is something that reassures people quite a lot. But looking at more objective products, there are some fantastic examples I've seen recently in accelerating claims processes and claims adjusting. Look at catastrophic events on the eastern seaboard of the United States.

A hurricane comes through and 50,000 properties are damaged. Which ones do we need to prioritize to make sure our customers are looked after if their roof has been blown off? This is something that a year ago was really, really expensive, and rightly so: rooftop analysis for damage, when you've got access to gray-sky imagery, used to cost five to ten dollars a roof. Now, with the advent of multimodal models, we can look at a claim description in text and a picture of the before and after of a roof, and a system can tell us who the most impacted customers are within minutes. And it's profound, because not only is it dramatically faster and can take inputs in multiple forms, it's also dramatically cheaper. We've gone from $5 to less than a cent per rooftop, and I think this combination of accuracy and capability, speed, and reduction in cost is going to be the biggest driver of change. In many cases that first phase is done entirely without human input; obviously there's a review phase, but getting to that point of value much more quickly means the way claims are dealt with will be fundamentally different. No longer do you have an adjuster walking around with a clipboard in a disaster area; now you can do a lot of this as a desktop exercise.
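Here is a hedged sketch of that triage loop: pair a claim description with before/after roof photos, ask a vision-capable model for a damage severity score, and rank claims by it. The model name, the 0-10 rubric, and the JSON output contract are assumptions for illustration:

```python
# Sketch of multimodal claims triage: claim text plus before/after roof photos
# go to a vision-capable model, which returns a severity score used to rank
# claims for adjuster attention.
import base64
import json
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()

@dataclass
class Claim:
    text: str    # claim description
    before: str  # path to pre-event roof photo
    after: str   # path to post-event roof photo

def as_image(path):
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

def severity(claim: Claim) -> int:
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # any vision-capable model
        messages=[{"role": "user", "content": [
            {"type": "text", "text":
                "Compare the roof before and after the storm, read the claim "
                "description, and reply with JSON "
                "{\"severity\": 0-10, \"why\": \"...\"}.\n\nClaim: " + claim.text},
            as_image(claim.before), as_image(claim.after)]}])
    return json.loads(resp.choices[0].message.content)["severity"]

def triage(claims: list[Claim]) -> list[Claim]:
    # Worst-hit customers first; a human review phase then works down the list.
    return sorted(claims, key=severity, reverse=True)
```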

MG: Well, I was going to make a snippy comment that I don't know if the claims adjusters would like that or not. Really, it depends on where that goes.

AT: Our experience has been that they do, interestingly. There are obviously two responses we're seeing inside the industry at the moment, inside all industries. One of those is visceral: that's my job, and the machine just did it faster and cheaper than I do, and it does it 24 hours a day. The other side is people thinking: what can I do now that the part of my job that used to be complex and time-consuming can be done for me, and I can focus my human skills on doing the thing the machine can't do yet? And I say yet; obviously this is a moving timeline, machines become more capable every day, and that's scaring everybody a little bit. But at the same time, I think there's a lot of opportunity for growth. If we can serve customers better, if we can bind policies more accurately, if we can create better products as a result of having more human interaction with customers in meaningful ways, that's a win, I think.

And a lot of people across all parts of the insurance value chain are starting to say: hey, it does take a long time to do a read-through of a submission document, or it does take a long time to read through someone's claims history in order to understand what's taken place here. What can we do now that we couldn't do before, as a result of that drudgery being taken away? I think that's really healthy. Clearly we have to keep reevaluating this, but there is a lot of opportunity here to ask what innate skills humans have that machines might not ever have, and how we can take advantage of that to start creating new products, services, and extensions to existing ones that make our customers feel more valued and better serviced, particularly in light of a changing world of more risk: more catastrophic weather risk, more cyber risk, more people risk. There's a lot that is changing. Frankly, I think this age of AI might deliver us from part of the burden we'd otherwise have because of the increase in risk.

MG: A lot of what you just mentioned, new products, new capabilities for individuals within their roles, different mechanisms of communicating with customers or distributing products to them: on the other side of that, to get there, there are years and years of technological investment, whether that's third-party data vendors or providers, other startups, or innovative companies that have been brought in and often pieced together to supplement and help a person complete their role. When you think about all of that and the workflows that have been built, generative AI can come in and support it. But do you think that, because you can query it, because you can ask any question of this large database, a lot of that becomes just legacy technology debt with no place anymore? Or is that always going to be the ground truth, with AI really just helping to automate the process around it?

AT: It is a really interesting question, and I think the last 48 hours have started to emphasize to some people what might be coming over the next two years. If you look at what OpenAI has announced in what they're calling GPTs, what the rest of the world calls agents, my hypothesis is that we're starting to see generative AI systems become tool users, much as humanity became tool users a few hundred thousand years ago. The models themselves can tie together disparate capabilities that previously relied on swivel-chair operations, as we call them, to create an output, a workflow effectively, and it's this combination, generative AI in charge of a workflow for previously disconnected processes, that I think will deliver the most value. Companies such as Pega and UiPath have been operating in this space in insurance and banking for quite some time, and now we're starting to see this generation of AI-focused companies.

Sixfold is a recent example; Cytora is one that we happen to have invested in. They're starting to address all of the things that make insurance and banking inefficient. You've got a mainframe from 1975, you've got a paper-based system where you have to scan documents in and extract fields, you've got a human process where somebody's got a transcript of a call. You can tie all of these things together and not necessarily have to define specifically how it's going to work: just give the task itself to a machine with a vaguely defined output of what needs to take place, and find that it will happen. So I think that over the next couple of years we're going to see, let's call it, collaborative interaction between generative AI agents that are each given one task, as a cog in the machine, and that work collaboratively, much like a human team does, to produce an outcome. This is a huge area of growth; we are going to see tremendous capability evolve here. The risks obviously still have to be taken into account, but that's the biggest opportunity to grow.
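Here is a minimal sketch of that tool-using agent pattern using the OpenAI function-calling API: the model receives a goal plus tools wrapping previously disconnected systems and decides what to call, in what order. The two tools are stubs standing in for a legacy system of record and a document-extraction service; this reflects the general pattern, not any actual Sixfold or Cytora implementation:

```python
# Tool-using agent sketch: the task is given with a vaguely defined output,
# and the model decides which wrapped systems to touch to produce it.
import json
from openai import OpenAI

client = OpenAI()

def mainframe_lookup(policy_id: str) -> dict:
    # Stub for a 1975-era core system of record.
    return {"policy_id": policy_id, "status": "active", "limit": 2_000_000}

def extract_fields(document_text: str) -> dict:
    # Stub for scanned-document field extraction.
    return {"insured": "Acme Marine Ltd", "peril": "hull damage"}

TOOL_IMPLS = {"mainframe_lookup": mainframe_lookup,
              "extract_fields": extract_fields}
TOOL_SPECS = [
    {"type": "function", "function": {
        "name": "mainframe_lookup",
        "description": "Fetch a policy record from the core system.",
        "parameters": {"type": "object",
                       "properties": {"policy_id": {"type": "string"}},
                       "required": ["policy_id"]}}},
    {"type": "function", "function": {
        "name": "extract_fields",
        "description": "Extract structured fields from a scanned document.",
        "parameters": {"type": "object",
                       "properties": {"document_text": {"type": "string"}},
                       "required": ["document_text"]}}},
]

def run(goal: str) -> str:
    messages = [{"role": "user", "content": goal}]
    while True:
        resp = client.chat.completions.create(
            model="gpt-4", messages=messages, tools=TOOL_SPECS)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content  # the vaguely defined output
        messages.append(msg)
        for call in msg.tool_calls:
            result = TOOL_IMPLS[call.function.name](
                **json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})

print(run("Confirm policy P-1182 is active and reconcile it against this "
          "scanned claim form: 'Insured: Acme Marine Ltd, peril: hull damage.'"))
```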

CW: I think one of the major obstacles to getting there is, you talked about UiPath, they have a platform you can use to orchestrate and integrate; that doesn't exist yet for this. Maybe it exists if you can stay narrow and you're entirely within the OpenAI ecosystem or the Anthropic ecosystem or whatever it is, but eventually people are going to need to mix and match these models and agents, and different models have different capabilities. Right now, even just at the level of wrapping the various APIs, LangChain, LiteLLM, it's a gigantic tangle and mess. So what are you seeing emerging in terms of, I don't know, call it the platform that everything can get plugged into, the rack that everything gets hooked together in?

AT: Yeah, the Swiss Army knife of integration.

It's an interesting space, and there are multiple components here, of course. One of those, and it sounds super unsexy, is MLOps. MLOps is very important now, because being able to do regression testing on components of a task or a workflow, to make sure it's performing, means you can be defensible from a regulatory standpoint in terms of what's taking place. I've said this for a while now: I think we're going to start seeing regulatory bodies being prescriptive, with tests for how a machine has to perform in certain scenarios, in order for it to be certified for use in production. And this is something that has actually happened in cinema. If you look at Blade Runner as an example, some people might be familiar with it: putting hypothetical situations to a machine and expecting a certain output in order to classify it in a certain way. This is going to happen.

In fact, it's already starting to happen in vehicle automation. You see the trolley problem in self-driving cars: what would a machine do if it's driving down a road and somebody jumps out in front of it? Making sure that a release of software performs in a predictable way, not necessarily an identical way for each scenario, but in a way that ensures protected classes of people and certain types of operations are handled consistently: this is something that has to happen from an individual component perspective. But then, when you tie all of these things together, the thing that our industry isn't prepared for yet, and I don't think any industry is prepared for, is what actually takes place when you get these combined effects, when you get these systems working together toward a goal.
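A sketch of what such prescriptive, scenario-based certification could look like in practice is ordinary regression testing over a fixed bank of hypothetical situations. The scenarios, required outcomes, and the decide() stub below are entirely hypothetical:

```python
# Hypothetical scenario-bank regression tests: each model release must
# reproduce required outcomes on fixed hypothetical situations (and treat
# protected-class variants consistently) before it is certified for production.
SCENARIO_BANK = [
    # (hypothetical situation put to the machine, required outcome)
    ("Commercial property, compliant electrics, low hazard score", "accept"),
    ("Identical risk, applicant in a protected class",             "accept"),
    ("Flood-zone property with no mitigation in place",            "refer"),
]

def decide(scenario: str) -> str:
    # Placeholder: in practice this calls the model release under test.
    return "refer" if "flood" in scenario.lower() else "accept"

def test_release_reproduces_required_outcomes():
    for scenario, required in SCENARIO_BANK:
        assert decide(scenario) == required, f"regression on: {scenario}"

def test_protected_class_consistency():
    # Scenarios differing only in a protected attribute must get one outcome.
    assert decide(SCENARIO_BANK[0][0]) == decide(SCENARIO_BANK[1][0])
```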

Can you ultimately, and this is a big question, can you get a machine, or do you want a machine, to be responsible for its own decisions? Certainly under most Western legal jurisdictions, a human has to have oversight; a human being has ultimate responsibility for actions taken as part of a portfolio they operate. Is that going to shift? Are we going to say that because this machine was certified, my responsibility has been offloaded onto the regulator that said it was okay? Or are we going to see humans in the loop being the ultimate decision maker, the arbitrator, the responsible party for things that take place? And then, looking at it from a process perspective, are we going to see an evolution away from statically defined workflows, screen-scraping a mainframe, bringing in unstructured documents, text from OCR, to perform a particular task that's very tightly defined, this character at this location on a screen means you do this next? Are we going to start seeing machines making intelligent decisions about what something means at each stage, acting much more human-like and performing a task even where there's some wiggle room, some indecisiveness, in what the actual input is and what the output should be? These are all questions that we have to answer. I certainly don't have the answers now, but I know that in order to use this technology, we're going to have to be able to answer them.

CW: Yeah, it's interesting. Humans have relied on machines for automation for a long time, and it used to be that if your machine in the factory breaks down, you just decommission it or replace a part. But one of the challenges here is that OpenAI owns the means of production, and that's a big weakness, I think, of this framework.

AT: It is, and it's really interesting when you dig down into the weeds of the community that is running and refining and fine-tuning open source models; there are some really fascinating individuals involved. The reason a lot of people are doing this is exactly what you say: dare I say it, not wanting to have the means of production wrested from their control, if the means of production is in fact access to intelligence, machine intelligence specifically. We're seeing this race, and comparative frameworks to look at the performance of GPT-3.5 or Claude or GPT-4, now GPT-4 Turbo, compared to Llama 2, or Llama 3, which is coming sometime toward the end of winter according to Meta. It's an arms race between obviously incredibly capable models, even though GPT-4 is rumored not to be one model but a system of experts, as they call it, six or seven 200-billion-parameter models, versus quantized models that can run on a desktop GPU.

And the fact that they're neck and neck, I think, is very fascinating, because it might mean that OpenAI doesn't have a protectable moat. There's a famous essay that was allegedly leaked from Google about six months ago that said, OpenAI doesn't have a protectable moat, and neither do we. I hope, frankly, that that might be true, because if we centralize these capabilities entirely to the behemoths of industry, to your point, suddenly everybody is entirely reliant on a very large single point of failure, an organization that might fundamentally change its behavior and remove your ability to participate. And that's dangerous.

CW: I’m just going to take a pause there. I need to go buy a bunch of Nvidia stock.

AT: Luckily for me, I did, eight years ago now. But yes, it's my second most successful investment after Bitcoin, and I'll never do another Bitcoin. When we got into Bitcoin it was $52.

CW: Amazing. Can I also just say how angry I am that we have 25 Marvel movies and only two Blade Runner movies? That is an absolute crime.

AT: They did do a good job with the remake, though. I was impressed; they could have destroyed it, but of all of the things in that class, I think they did moderately well.

CW: Alright, back to real life here.

MG: I was going to say, Alex is previewing a Terminator movie: the machines take over the world in five to ten years. But as you were winding through all of that, I just kept going back to one of the biggest challenges historically in the insurance industry: that information, because of all the legacy and technology debt, is siloed, right? You've pieced together these systems, and even with these transformation initiatives you still have a lot of that, because you can't fully shut something off even when you turn something new on. Everything's locked into unstructured documents; it's filed somewhere, you haven't touched it, and half the time no one even knows where it is. Doesn't that infrastructure challenge still need to be solved in order for any of this to amount to anything more than a quick bandaid that fixes one part of the process but doesn't actually get to the crux of what you're really trying to solve for, which is data-driven insights based on the carrier's expertise, risk appetite, and performance?

AT: Potentially, but also maybe not. The interesting thing we're seeing now, and this is going to sound dismissive, is that generative AI can solve most technical problems, at least to a certain extent. I suspect we might actually see legacy systems remaining longer than they otherwise would, because if you create a generative AI interface to, say, bring in unstructured information from a mainframe, which is actually a use case I've seen done in the industry, then you might forget about that legacy and its complexity, purely because you can treat it as a solved problem. You've got a modern interface to bring it somewhere else; you kick the can down the road as a result and focus on the new and shiny thing. Undoubtedly, though, we do need to solve the problem of legacy. I've been involved with several major insurance companies that have mainframes that predate the moon landing.

That's a problem, and it's something that needs to be solved. But the challenge we've always had as an industry is that the figures quoted to move away from these systems are eye-watering. At my former employer, we spent $650 million migrating away from our legacy platforms, and we went through four different CIOs just to achieve that. That seems obscene until you look at the complexity of doing it in a highly regulated environment. Having said that, I hope the opportunity in AI and gen AI might be the impetus we need as an industry to say: yes, that does cost a lot of money, but in order to have access to certain components of what we need to do here, we need to modernize our systems and turn off the legacy. A classic example we use at the moment is geospatial data.

Geospatial data is complex and large; it involves radar imagery from space, it involves photographs. If you're dealing with a mainframe that is purely text and can't save images against a claim, can't save geospatial polygons against the outline of a property, then you're suddenly on an island while everybody else is able to take advantage of these new toys; you're stuck in 1975 where you can't. And that's a very convincing argument to boards: if you don't do this, you have an existential threat on your hands, and we can't do this now because we have to get rid of these systems. That's starting to fall on ears that are slightly less deaf. But there's still a long way to go. It's not easy to spend hundreds of millions of dollars, and getting sign-off to do it is something that shareholders think about very carefully.

CW: Well, we've talked a lot about the tech, and you've just scratched the surface of the change management problem that always comes with these things. Any automation you do changes people's jobs; there's work to do at the human level. I think that just gets exacerbated by these new technologies we've been talking about. What are you seeing out there?

AT: There are multiple ways to look at that. I think there is a dramatic need for people change management as a result of what we're seeing in this generation of AI. I've been keeping my finger very much on the pulse of this particular question, having discussions with the people who might be the most impacted by what we're seeing in automation, not just at the task level but at the program level. There are a lot of skill sets, not just in insurance but more broadly, starting to look like they could be entirely automated. There's a great example, I don't know if you've seen it, in a company called HeyGen (H-E-Y-G-E-N) that has effectively made the film dubbing industry obsolete overnight: it can record a video of anybody and then translate the words they say into any other language, including synchronizing their mouth motions.

We are going to start seeing these incredible progressions where people wake up in the morning and suddenly an entire industry might have been replaced. That has a profound psychological effect on people. Even if it wasn't your job that was replaced, people are smart enough to say: hey, that could happen to me, or that could happen to part of what I do. Now, what I find fascinating is twofold. One is that people are not responding to this with disengaged fear. People are actually using this technology and understanding it; I think they're thankful that they can engage even if they're not technologists. That means there's pull from people: even as they watch a machine do what they do, they're experimenting with it, they're learning it. I've seen people who perform tasks that machines will very likely be able to do completely themselves in two to three years teaching themselves the prompt engineering skills required to get that kind of capability out of the machine.

Having said that, we have both a macroeconomic challenge here and an internal one. Macroeconomically, we have to understand what the impact might be on unemployment, and I'm not sure that modern Western capitalistic systems are entirely ready for that, purely because of the speed of change. But organizationally as well, we have to start understanding what we need to do in skills maturation, in upskilling people to take advantage of the systems we're putting in front of them. Because in the end, you can't just drop a machine learning system, a generative AI system, in front of a team; it has to be built into their processes, and people have to start understanding that they need to treat the machine less like a machine and maybe a bit more like a human being. And that's a profound step. Previously we treated automation, robotic process automation specifically, as a tool that performed a function. Now it's something that might be a bit more collaborative rather than instructive.

MG: I think, Alex, you hit on an interesting point there. The capabilities of gen AI, and the fact that non-technical people can use it, seem to have removed some of that traditional friction inside the insurance carrier between the business owner, the business unit participant, and the innovation group. It used to be: we have this new technology that we can bring in, it'll make your job easier, you'll be able to do X, Y, Z, and the response was: no, no, I want to keep my job, I don't understand the technology, that's a technological solution to what I'm looking at. This seems to remove some of that, because everyone can see the benefit immediately; they're living it once they start leveraging those capabilities.

AT: It absolutely does. We did an experiment in Hong Kong in May this year where we sat down with a team of underwriters that had no technology skill, certainly not any programming skill at all. They were a bunch of marine cargo underwriters, very, very good at their jobs, an exceptionally well-performing portfolio for QBE, and we said to them: today, you are going to use generative AI, GPT-4, to optimize and accelerate parts of the work that you do. And it was quite funny initially, because the first reaction from them was: I have no idea what you just said, and that's ridiculous, I'm certainly not going to be doing that because that's insane. And I said: no, it's fine, sit down. What we're going to start with is describing your job to the machine. Obviously we had to get a bit of prompt engineering skill into their heads.

They came up with: as a marine cargo underwriter at QBE, I look at insurance submissions and I consider the vessel, the qualifications of the crew, the cargo on the ship that we are insuring, the route the ship is taking, all of these things that they do. And then we showed them, with some fabricated marine cargo submissions we'd made, that all of the things they asked for, which sometimes took up to several days to perform, could be done entirely automatically just by injecting the submission along with the initial prompt into the system. Their response to that was not fear or confusion but genuine collaboration, this consciousness-expanding point where suddenly they realized that a machine was not something to be feared, or even scary, but tremendously useful, and that they could create capability from scratch. Previously these teams had to spend months to years creating specific systems alongside solution architects and software engineers to perform a particular function, to the point where most of the time they didn't even bother. Now they could sit down in front of a terminal and, in some cases, do it for themselves, or at least start to. And when they realize that, we get this shift from technical users being the primary consumers of machine learning to business users actually having the highest point of value, and that's profound.
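The shape of that experiment is easy to sketch: the underwriters' plain-language description of their job becomes the system prompt, and a fabricated submission is injected alongside it. The wording, fields, example submission, and model choice below are illustrative, not QBE's actual prompt:

```python
# Sketch of the Hong Kong experiment's shape: job description as system
# prompt, fabricated submission as the user message.
from openai import OpenAI

client = OpenAI()

ROLE_PROMPT = (
    "You are a marine cargo underwriter at QBE. For each submission, assess "
    "the vessel and its condition, the qualifications of the crew, the cargo "
    "being insured, and the route the ship is taking. Flag anything outside "
    "normal appetite and list the further information you would request."
)

def review(submission_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": ROLE_PROMPT},
                  {"role": "user", "content": submission_text}])
    return resp.choices[0].message.content

# Fabricated submission, in the spirit of the ones used in the session.
print(review("Vessel: MV Example (built 1998, class NK). Crew: 18, master "
             "with 12 years' experience. Cargo: refrigerated produce. "
             "Route: Singapore to Rotterdam via Suez."))
```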

CW: How does the next generation of underwriters learn underwriting in this world that we are entering?

AT: It's a really interesting question. I think we're going to start seeing a dividing line, and it's a moving line, between what humans are better at, at least for now, and what machines are better at. Clearly, a lot of the information-consuming components of underwriting, understanding a customer's risk, understanding the underwriting guidelines, reading through risks, is something humans have traditionally done. They'll still be able to do it, but we won't be able to let them do it, because the timeline for submission review needs to be that much dramatically faster. What we're going to see is humans presented with the initial points of value extracted by a machine saying: here are the ten things you need to pay attention to. You've got, let's say, a thousand submissions that have landed in your underwriting inbox today.

Here are the ten you can bind immediately right now, and here's why. And the underwriter might say: yes, I agree with that, tick, tick, tick, tick, tick. That means that meeting the day's target for premium written, which might take an underwriter all day today, might take them between nine o'clock and nine fifteen in the morning. The more complex risks are the ones where the machine might say: this is on the line, and here's why; there's nuance to this. This is where you might need to send a risk control team out to look at the risk, to understand whether it's presented correctly, or where there's a discrepancy somewhere between what's written and what I can see from space, from aerial imagery of this building. A human needs to go and validate that, to make sure the discrepancy is either not real or that there's more detail to it than is being presented here.
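A small sketch of that routing logic: only high-confidence "bind" recommendations go straight to the underwriter's tick list, and everything nuanced is queued for human arbitration. The Assessment shape and the confidence threshold are assumptions for illustration:

```python
# Morning triage sketch: clear-cut binds surface immediately; nuanced
# submissions queue for human review.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    submission_id: str
    decision: str            # "bind" | "decline" | "refer"
    confidence: float        # model-reported, 0..1
    reasons: list[str] = field(default_factory=list)

def route(assessments: list[Assessment], threshold: float = 0.9):
    bind_now, needs_human = [], []
    for a in assessments:
        if a.decision == "bind" and a.confidence >= threshold:
            bind_now.append(a)       # "yes, I agree: tick, tick, tick"
        else:
            needs_human.append(a)    # discrepancies, referrals, declines
    return bind_now, needs_human
```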

You're going to start seeing humans being the arbitrator on information that's less clear-cut than "you can bind" or "you can't bind." I think that's extremely healthy and valuable, because human decision-making is something that will still take quite some time to replicate in a machine. Getting to the point of identifying which decisions need human arbitration will be that much faster, and to the point we made before, the key thing here is that companies that choose not to do this are going to start missing out on business. That's the signal I've been telling people to watch for. If you have a portfolio that's performed very well, where you've been writing the business you want to write, it's well shaped and your loss ratio is good, and you find yourself getting to broker submissions two to three days after they come in, and when you go to the broker and say, we can bind this, they say, sorry, somebody already has bound that: that's a sign that another carrier is starting to move in on your turf using this technology. And that's where all of this automation becomes super important. The reason that pretty much every carrier and broker on earth right now is looking at this closely, on the usual bell curve, is that they're healthily afraid of what happens if this is done to them, not what happens if they do it themselves.

MG: That's a really interesting point. I hadn't thought about it that way, but that's how they are thinking about it, for sure.

AT: It's interesting, right? This is probably one of the first generations of technology where the time you have to take advantage of it, or the risk if you don't, becomes existential. I don't think that the insurance industry, even in the last, let's call it, 40 to 50 years of digitization, has experienced something like this, where the timeline to onboard yourself has been so short. Frankly, that's why we are looking very carefully at investing in this space. The insurance industry is not renowned for being able to do this kind of thing very quickly; insurtechs have a reputation for being able to do it. There will be a survivorship bias toward the ones that do it successfully, but I do believe this is something that's going to be done very well by insurtechs that carriers can partner with, and we're certainly already seeing that. There are a lot of very early movers creating some very impressive capabilities while the carriers are essentially still talking about the regulatory impact or the people impact, which are very important things; the early movers are just knuckling down, doing it, and showing what's possible. This is generally where it comes in from the outside.

CW: That makes sense. I was going to ask one more question, but honestly, Alex, I think that is a fantastic stopping point; you really hit the nail on the head there. So I'm going to wrap us up. This has been another episode of Unstructured Unlocked. Our guest today has been Alex Taylor, Global Head of Emerging Technology at QBE Ventures, and my head is full right now. Alex, thank you so much.

AT: I’ve achieved my goal.

CW: All right, thanks folks. We’ll see you on the next one. Bye.
