Watch Christopher M. Wells, Ph.D., Indico VP of Research and Development, and Michelle Gouveia, VP at Sandbox Insurtech Ventures, in episode 19 of Unstructured Unlocked with guest Louis DiModugno.
Michelle Gouveia: Hey, everybody. Welcome to another episode of Unstructured Unlocked. I’m co-host Michelle Gouveia, joined by co-host Chris Wells, and our special guest today is Louis DiModugno. Louis, great to see you again. Louis and I go way back from our time at Hartford Steam Boiler, so really, really excited to have you on today. For those folks that don’t know you as well as I do, do you want to give a brief introduction on yourself and your background?
Louis DiModugno: Sure. I’ve had the opportunity to be in the data space now for quite a few years. But prior to that, I was active duty military for 12 years, and then another 15 years as a reservist, retired as a full colonel. From there I hung up that uniform and started wearing my new uniform of the bow tie, and that’s really been my moniker in the data space since I joined the civilian side of the world. My work over the last 10 years specifically has been as a chief data officer. I had the fortune of being the first chief data and analytics officer for AXA US, a Paris-based company, but we ran the life and annuity space out of New York City.
I was the first there for over three years, and then I came to Hartford Steam Boiler, now known as HSB, where I’ve been the first chief data and technology officer for the last five-plus years. Being a chief data officer for anything over two years is considered a pretty good success, so having had the opportunity to do it for the last 10 years, I think I’ve earned some stripes there, and I really look forward to talking to you guys about it today.
Christopher Wells: Yeah. Welcome to the podcast. And I’ll just say before we get started, thank you for your long service. Louis, we talk a lot on the podcast these days about how data, technology, and automation play a role, especially in the underwriting and claims space. So maybe you could start off by just giving an overview of how those things all play together.
LD: Well, specifically in the insurance space, we just have so much opportunity to access data, on both the commercial and the personal side. When the policies are written, when that information is being gathered by the agents and the brokers, there’s just so much opportunity to ask questions that could really help give insight into the risk and an understanding of what’s being insured. On the underwriting side, again, there are more opportunities to collect data. And then obviously the claim is the moment of truth, when your policy is being put to work; essentially you have the opportunity to collect more information about what happened and the aspects of what may have caused that claim. And over time, as you collect data across a number of claims, you now have essentially an output variable that you can use as a target for all the information that you collect from the policy up front.
So all of this really turns into a great big math equation: if I have all this information up front from an underwriting standpoint and from an understanding of the risk, maybe I can be better at predicting where claims would, could, or should be applied, and really have the opportunity to utilize the math behind that. And that’s where AI comes in: a lot of the math behind the AI and machine learning tools that are out there now is being utilized to get better understanding around predictive and prescriptive areas.
MG: Louis, the other thing, and I know you and I had conversations about this while we worked together, is that everything you said I totally agree with: that data is there. We’ve talked a lot on this podcast about the challenge of getting that data out and making it usable, right? And so we talk about siloed systems, or even, in the case of a reinsurer model, there’s an extra link in that chain to try and get that information in, to run the calculation and even determine, from their perspective, the payouts and things like that. Are there any differences in your experience in how you have to run a data strategy if you’re at, I’ll call it, a primary insurer versus a reinsurer? I would love to get your take on the nuances there.
LD: Sure. What’s interesting in the insurance space in general is that you have so many parties involved: from the policyholder, who has essentially all the information about their business or their personal insurability; through the agent and broker, who collect some of that information; and then you get it into the policy admin system, which only accepts some of that information. And then as you move down toward the reinsurer, you’re sharing only some of that information with them. So it’s like a funnel of data, where you start out with a huge amount, and at different points of the process you’re losing data. And so now the question is: are you ensuring that you’re capturing the pertinent data that you need for your tasks in that process step?
Or are you losing data along the way that maybe could have been very beneficial to some of the decisions being made by the underwriters and the folks in the new product development space? And so you have to ask yourself: what is the question that you’re trying to answer? A lot of the time that’s really hard, because we don’t know what questions to ask, especially when we’re building these things out for the first time. So you’ve got to have a dynamic environment where you have the ability to expand that data environment and essentially capture or collect more information as necessary to address the questions you’re trying to answer.
MG: I’m calling back to my own experience working on launching a new product, right? When it’s a brand new product in a brand new product line, one that’s generally new to insurance (cyber insurance is the example), you don’t know what the claims are going to be. You don’t know what the real risk is that you’re underwriting against. So how do you, to your point, identify those questions, but also build a system that’s flexible and nimble enough to change as those data points and insights come through? But I think your point on the data funnel is really interesting. You have to ask: is that workflow fundamentally flawed, in that the data is getting removed at each step? Is there a way that data should just free-flow, and if you don’t need it, it becomes something you don’t use? And what kind of data privacy and overarching guidelines do you have to have in place for that? So that’s a really interesting point: the data funnel gets narrower the deeper into the chain you get.
LD: Yeah. I heard you use the words privacy and regulatory as you came through this, and the challenge with a lot of this data really becomes: what is it that you can use to make some of your decisions, and what is it that you can’t? You’ve got to have a very clear legal understanding of what information is available, what you can use, and how you make sure that you’re not discriminating against the commercial or the personal policyholder. And there are so many different ways that you can discriminate against them, which is really unfortunate. This is where you get into the whole artificial intelligence and machine learning piece: there are proxies out there that aren’t directly aligned with “Hey, I’m going to discriminate against you because of your race.”
Well, I don’t have to do that directly, because if I just use a zip code, a zip code could easily identify a lower socioeconomic environment that is essentially held against that person in the risk analysis. And so the challenge there, from the standpoint of the biases and fairness of what we’re learning and trying to do in this AI space, is that as these models are being built, especially on the machine learning side, it’s very, very important that we do not incorporate any biases, even unknowingly, because those biases will continue to influence the outcome over time. What should be a nice, normal distribution of who’s being covered and how they’re being covered will all of a sudden start skewing to one side or another, maybe because of the biases of the programmer.
And again, those may be unintended. Some of it could come from the data: if the data’s not properly prepared, there could be biases associated with that, all the way through the changing of the model over time. These machine learning models specifically are supposed to be learning, so as you put more data, more claims, or more underwriting aspects into them, they should be refining their outcome. What you don’t want is for them to continue refining toward a bias. I hope that’s understandable the way I explained it. But the challenge really is that you have to identify the fairness right up front, as to how you’re going to build these models. And I think there are now more than 30 different ways of mathematically defining fairness.
Yeah. And so once you’ve got those fairness definitions out there, then you can really start to understand what biases exist. I did some research recently, and there are well over a hundred identifiable biases out there, so trying to be an expert in all those areas is very, very hard. One of the things that I’ve been touting recently, in a talk I gave at Iowa State, is the idea that a big piece of the future of this area is going to be what they call the prompt engineer: how do I express the question appropriately without having the biases in there? But there’s also going to be an interpretation engineer, or interpretation scientist, who’s going to have to look at the outcome and determine: is this truthful? Is there no bias associated with it? Is there a hallucination associated with what’s happening in my model? Is it, frankly, trying to lie? Or am I really getting a truthful, fair outcome?
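[Editor’s note: as a rough sketch of one of those mathematical fairness definitions, demographic parity compares approval rates across groups, and a simple proxy check shows how a variable like zip code can stand in for a protected attribute. All data and column names below are invented for illustration.]

```python
import pandas as pd

# Hypothetical scored policies: a model decision, a protected
# attribute, and a potential proxy variable (zip code).
df = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "zip_code": ["10001", "10001", "10002", "10001",
                 "10003", "10003", "10004", "10003"],
})

# Demographic parity: approval rates should be similar across groups.
rates = df.groupby("group")["approved"].mean()
parity_gap = abs(rates["A"] - rates["B"])
print(f"parity gap: {parity_gap:.2f}")  # 0.50 on this toy data: a red flag

# Proxy check: if zip code almost perfectly predicts the protected
# group, a model can discriminate without ever seeing "group".
proxy_strength = (
    df.groupby("zip_code")["group"]
      .agg(lambda s: s.value_counts(normalize=True).max())
      .mean()
)
print(f"group purity within zip code: {proxy_strength:.2f}")
```

On this toy data the zip codes are perfectly segregated by group, which is exactly the situation Louis describes: dropping the protected attribute alone does not make the model fair.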
CW: Yeah, it’s interesting. I’ve been in the analytics and machine learning space now for over a decade, and it used to be that folks would complain nonstop about machine learning being just some black-box voodoo. I’ve seen two things happen with these large language models. On the prompt engineering side, I’ve seen data scientists completely forget rigor altogether: not doing their train/test splits, not thinking about it the right way. And I’ve also seen folks start to build out the skillset of understanding the black box, because you can literally just directly interrogate it. That ability to take the answer to a prompt and chain it to “show me your work” is a really useful skillset, and a really nice way of shining a light inside the black box.
LD: Sure. Sure.
MG: On our side, we’ve seen a lot of companies in what we’re categorizing as AI compliance: companies coming up with solutions that say, we can monitor your models, and if you add in a new variable, we can determine if a bias is being added, or review that. What happens, from a regulatory standpoint (I’ll bring it up again, Louis, since you did), if you have to remove a variable that you’ve used in your pricing in the past? How does that alter the underwriting that goes along with that, and how do you reassess what you’ve already bound, and the renewal process moving forward? So this is very much an area where we’re seeing a lot of entrepreneurial activity, and we think there’s going to be a huge focus, even from the regulator side, coming down the pike over the next couple of years.
LD: And I think what’s interesting, too, is that you’re even starting to see some insurance companies build a product around insuring the models, right? Yeah. There’s risk associated with the accuracy and the capability of these models. So obviously, if there’s an opportunity to mitigate that risk, or at least prepare for it, then your insurance companies are going to see the opportunity and see if there’s a product they can put in place there.
CW: Yeah, that’s fascinating. <Laugh> All of these new things that you and insurance companies have to do, and are doing, bring my mind back to a question we talked about on a recent episode: there’s a projected shortage of labor, top to bottom, in insurance companies. I wonder how that is affecting you, and how it is creating opportunity for you. On the is-it-having-an-effect side of things: are you finding sufficient talent out there for ML, AI, and automation?
LD: Well, this is a great topic in general, because I think there is a good population of data scientists out there. The challenge is: is there a good population of data scientists who know about insurance, who understand the process of insurance and essentially the math behind the underwriting and the actuarial science that props it up? And so the challenge becomes: okay, I’ve got these data scientists that I bring into the company. I’ve got to educate them on insurance, but at the same time I want them working nose to the grindstone, figuring out some of my problems mathematically to determine, hey, is there a product here, is there a profitability opportunity, or can I bring something to market that my customers are really interested in?
And the challenge varies across companies: some have embraced it whole hog, and they’ve got hundreds of data scientists within their organization. Other companies have got two. <Laugh> And this is where I think things like ChatGPT and some of these other capabilities that can help code create opportunities. You get into this low-code, no-code space, where someone who doesn’t necessarily have to be a data scientist or a coder can do preliminary hypothesis testing around data that they’re familiar with. So you take a claims professional who has been with the company for 20-plus years; they’ve been collecting all this data all this time, and now they’ve got a question: hey, I think I’m seeing a trend, or hey, I think there’s an opportunity around age of roof in a certain zip code or a certain county area.
And so they can start to use some of that low-code, no-code capability. There are some companies that have really specialized in that space, like H2O or DataRobot, folks who have done a really good job with the GUI and with creating an environment where someone who’s not a programmer can access data and start asking questions. Here’s where I think the opportunity is in the insurance space, though: if you don’t have a large investment in data scientists, or you don’t have data scientists who are very insurance-savvy, you’ve got an underwriter or a claims professional who can utilize these low-code, no-code environments, come up with some hypothesis, and wring it out a little bit to say, hey, I think there’s a there there.
Now you turn it over to your data scientist and have them put the appropriate rigor behind it: hey, did I have biased data? Am I really focused on the right type of coding to make this a robust model that incorporates all the data elements I need? And then you also get to the point where you can operationalize it, because that’s really the last five yards. Even if you’ve got a claims professional who comes up with a great idea and can program it all, they can’t put it into an environment robust enough that maybe an external organization can bounce up against it with a question. It’s got to be put through an MLOps wringer so that, again, you’ve got good controls around it, the IT organization is fully bought into it, and you’ve got the privacy and the security associated with it.
All of that has to happen. So again, I see this low-code, no-code environment as an opportunity to have insurance professionals really start to ask the questions themselves. And then when they come up with an “oh, there is a there there,” they can turn it over to the appropriate team, and it gets prioritized and put through the wringer. So I really see an opportunity there. And I think this opportunity for your insurance professionals to become more statistically savvy, as they’re asking these questions and seeing whether there’s lift associated with what they’re bringing together, really becomes a way to make sure that the appropriate work is getting to your high-value data scientists, who can wring it out.
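[Editor’s note: the kind of preliminary hypothesis Louis describes a claims professional “wringing out,” say, whether older roofs in certain zip codes claim more often, can be prototyped in a few lines before a data scientist adds proper rigor. The data and column names below are invented for illustration.]

```python
import pandas as pd

# Hypothetical policy records a claims professional might pull.
policies = pd.DataFrame({
    "zip_code":  ["30301", "30301", "30301", "30302", "30302", "30302"],
    "roof_age":  [25, 22, 3, 4, 30, 5],
    "had_claim": [1, 1, 0, 0, 1, 0],
})

# Bucket roof age, then compare claim frequency per bucket.
policies["roof_bucket"] = pd.cut(
    policies["roof_age"], bins=[0, 10, 40], labels=["new", "old"]
)
claim_rate = policies.groupby("roof_bucket", observed=True)["had_claim"].mean()
print(claim_rate)  # on this toy data, old roofs claim far more often
```

A result like this is only a “there there” signal; the hand-off to the data scientist is where bias checks, proper sampling, and operationalization come in.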
CW: We’ve been promised low-code, no-code before. RPA was supposed to be that, and it didn’t happen, because it turns out you have to be able to think like a programmer to be effective, even if you don’t have to program. So I’m excited about what you’re talking about, but I also worry about it, in the sense that when I’ve worked with very junior data people, fresh out of a master’s program or something like that, and you unleash them with tools that help them scale, you tend to get a lot of what I would describe as very baroque models: overly complicated, overly heavy. How do you prevent that? It’s the same analogy: you’re not a data scientist, but you need to be able to think like a data scientist to do the job, even if you don’t have to do some of the sausage-making in between.
LD: Yeah. So again, I’m not an advocate of just opening the box, giving it to everyone, and saying, run amok with it. There definitely needs to be training. And I can’t emphasize this enough: as you change an organization from “hey, this is how we’ve done work forever” to “hey, now we’re going to give you a new tool, and by the way, this is how we want you to use the tool,” you’re turning the culture of the organization, going from a culture of “hey, I’m going to serve the client and be the best client deliverer in the organization” to all of a sudden having this curiosity of, hey, is there an opportunity for me to do this better?
Or is there an opportunity for us to ensure that we’ve got the right risk profile on the policyholders that we’re bringing to bear here? The one thing Michelle mentioned a moment ago was using cyber as an example. The typical way that most products are framed, especially on the commercial side, is: tell me what industry you’re in, tell me where your organization is, tell me how many employees you’ve got, and from all that information I’m going to write you a cyber product. It’s going to protect you against ransomware, it’s going to protect you against denial of service, it’s going to protect you against lost wages.
It’s going to do all these great things. So initially we went into this with the attitude of, well, we’ll just collect the stuff that these companies normally give us. And now all of a sudden you start getting claims, and the claims are: well, they penetrated because one of our ports was open, or we didn’t update our Java recently. It was all these technical aspects that we knew nothing about from a risk standpoint, because it was never collected, right? And at the carrier level, or even at the reinsurer level, you’re not that close to the company to really understand their IT system, their IT environment, maybe their IT organization. If I knew that a company had a CSO, or a professional CIO, then all of a sudden I could think, oh, well, if I know they have that, then I can also make the assumption that they’re better at their environmental structure.
They’re better at managing upgrades and having the most current software versions, all this other information that would’ve let me know, from a risk standpoint, that on the cyber side these guys have it covered, right? But if I sign up a local pizza shop, and they want cyber coverage, and now I find out that the person who’s in charge of all their IT, of taking orders and everything, is the guy who’s also mixing the sauce in the back room, now all of a sudden I’m going to start asking myself: hey, should I be pricing them the same way as someone who has a professional IT organization? So again, as we learn more about these products, and where the claims are coming from and what they’re being attributed to, it makes the insurance company ask more questions: am I collecting the right information for this specific product, and am I pricing it appropriately?
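[Editor’s note: at its simplest, the pricing question Louis raises, whether the pizza shop with no IT staff should pay the same cyber premium as a firm with professional security leadership, is a rating-factor adjustment. Every number, signal name, and factor below is invented purely for illustration.]

```python
# Hypothetical base rate and rating factors keyed on IT-maturity
# signals; all values here are invented for illustration.
BASE_RATE = 1_000  # annual cyber base premium

FACTORS = {
    "has_ciso": 0.85,          # dedicated security leadership
    "patching_program": 0.90,  # evidence of timely software updates
    "no_it_staff": 1.40,       # the owner also "mixes the sauce"
}

def cyber_premium(signals: list[str]) -> float:
    """Multiply the base rate by each applicable rating factor."""
    premium = BASE_RATE
    for s in signals:
        premium *= FACTORS.get(s, 1.0)
    return round(premium, 2)

print(cyber_premium(["has_ciso", "patching_program"]))  # 765.0
print(cyber_premium(["no_it_staff"]))                   # 1400.0
```

The point of the sketch is only that the signals have to be collected at intake before they can ever become factors; real cyber rating plans are far more involved.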
CW: Yeah. It’s like writing a life policy without knowing blood pressure, right? Interesting.
MG: Going back to low-code, no-code: I always think about it from my own experience, from a policy administration workflow perspective, even less so about getting into the modeling and things like that. Generally speaking, there are certain workflows, or maybe even certain product lines, that lend themselves more easily to those capabilities: less complex, or with more frequency of underwriting or even claims. How do you think about that? Do you think there are certain product lines or capabilities that are ready to execute now, versus somewhere down the line where we have to do X, Y, Z to get there, so that low-code, no-code is truly effective?
LD: Well, again, I think a big aspect of it is the culture of the organization, and the adoption of being able to, one, give people access to the data, right? In the past, because of security and privacy issues, once data went into the admin system, it stayed in the admin system, and you only got reports at a very high or segmented level that didn’t give you the insight that, hey, there’s a specific carrier in a specific location that is influencing the claims within a specific product line. You may not be able to see that unless you have the capability to really narrow down into those areas. And so I think a lot of organizations, even more recently, are using better visualization tools so that we can start to see where there’s opportunity, or where maybe there are pockets of influence contributing to losses, or where claims are coming from.
And there are some really good visualization tools out there now that you can do some natural language processing with, where you can either talk to them or type in your question, and not only will they come back with an answer, they will also come back with an analysis of what influenced that answer. Which, again, gives you the opportunity to dig deeper and deeper into why that happened. So when we talk about low-code, no-code, there’s the programming side of it: creating, say, a Python script that’s reusable and capable of running against very large data sets. But I’m even talking about this capability within reporting, where you can open up an opportunity to a group and say, hey, here’s your profitability, it’s changed over time, and you can see that there’s potentially a trend. Now you can start asking why: what influenced that trend? Why is it going up? Why is it going down? Why is it seasonal? And you’ll quickly find there are quite a few reporting tools out there that can help you dig into that and start to prove out some hypotheses.
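[Editor’s note: the “is there a trend, and is it seasonal?” question in that kind of reporting can be sketched with a monthly roll-up of losses. The daily payment series below is entirely synthetic, built with an artificial seasonal bump, just to show the mechanics.]

```python
import numpy as np
import pandas as pd

# Two years of synthetic daily claim payments with a seasonal bump
# peaking around the start of April (all numbers invented).
dates = pd.date_range("2021-01-01", "2022-12-31", freq="D")
daily = pd.Series(100 + 50 * np.sin(2 * np.pi * dates.dayofyear / 365),
                  index=dates)

# Monthly totals expose the seasonality...
monthly = daily.groupby(daily.index.to_period("M")).sum()
# ...and a 12-month rolling mean separates out any longer-run trend.
trend = monthly.rolling(12).mean()

print(f"peak loss month: {monthly.idxmax().month}")
```

The same two-step (aggregate, then smooth) is what most reporting tools are doing under the hood when they flag a seasonal pattern.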
CW: I want to bring that back to intake again. One of the things I’ve seen over and over is that as processes become augmented or partially automated by technology, one, everyone’s job gets better and the team scales better; but a knock-on effect downstream is that you have much cleaner data flowing into and throughout the organization. So are you seeing those knock-on effects from AI-powered intake solutions, or do you think it’s still early days?
LD: So, unfortunately, what I’ve seen from AI-powered intake solutions has not lived up to the billing. I’ve worked with a number of companies specifically around wanting to emphasize AI intake, and what I’m continually seeing is that it’s being purpose-built within organizations, as opposed to being out-of-the-box capability. A lot of that still has to do with how companies are collecting information, and how flexible the policy admin systems are that this data’s going into. Once it’s in these policy admin systems, does it continue to live there, and do I clean it there? Or do I have to lift it out and bring it through some sort of governance process to ensure that I’ve got high-quality, highly accurate, augmented information that I can then send down my analysis and data science path?
It’s very hard for most organizations to lift information directly from their policy admin systems and go direct to decision-making or data science with it. There tends to be a step in there where I’ve got to do some data management. Maybe I’ve got to go through a mastering aspect to bring together the same customer across many different product lines, to see the claims coming in from different product lines and bring that all together so I have a master view of that customer, or that location, for that matter. There are many different aspects of bringing that together. And again, I haven’t seen a lot on the policy admin side. To me, what I’ve seen most recently in policy admin is the movement to the cloud, all right?
They’re going to make everything software as a service. And that’s good from a software standpoint, because I’m going to have the latest and greatest version, I’m not going to have to worry about doing those kinds of upgrades, the vendor is going to take care of all that for me, and that way I can continue to reap the benefits of new capabilities that are coming out. But I haven’t seen the AI aspect of integration, or of acquisition of the data, really help. And the mastering of the data itself hasn’t been a priority for the policy admin folks. Again, it’s either a CRM initiative that’s outside of that, or it’s a mastering effort through another product.
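[Editor’s note: the customer-mastering step Louis describes, recognizing the same insured across product lines, often comes down to fuzzy matching on names and addresses. A toy sketch, with all company names and the threshold invented:]

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical insured names from two separate policy admin systems.
property_book = ["Acme Pizza LLC", "Hartford Widgets Inc"]
cyber_book = ["ACME Pizza, LLC", "Bridgeport Bakery"]

# Link records whose similarity clears a (tuned) threshold.
THRESHOLD = 0.85
matches = [
    (p, c)
    for p in property_book
    for c in cyber_book
    if similarity(p, c) >= THRESHOLD
]
print(matches)  # the two Acme Pizza variants should link
```

Production master data management layers blocking, address standardization, and human review on top of this, but the core idea of scoring candidate pairs against a threshold is the same.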
CW: So it sounds like you’re saying there’s potential for an AI-powered intake system to build a cleaner data set, but you’re still a few levels obscured from
LD: Actually, most organizations are talking about AI capability in their data acquisition.
CW: Yep.
LD: I just haven’t seen it be a smooth process yet. It’s still kind of company-built. There’s still a lot of SQL in there, still a lot of stuff that’s going to take an IT organization to run, as opposed to: hey, here I am, the claims person, I’ve got a great idea, I want to bring in some data from a third-party provider like an Experian or a Dun & Bradstreet. So they send me a data set, and now I’ve got to integrate it. Hey, IT, can you help me? Hey, data engineer, can you help me? It’s not that easy, smooth “hey, I’m just going to drop it in, and it will pick the fields that it should be matching against,” which is what I would expect an AI acquisition capability to really be able to do.
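[Editor’s note: the “drop a file and have it pick the matching fields” capability Louis would expect can be approximated, crudely, by scoring column-name similarity between an incoming file and the internal schema. Every field name below is invented for illustration.]

```python
from difflib import get_close_matches

# Internal schema vs. columns arriving in a hypothetical third-party file.
internal_fields = ["policy_number", "insured_name", "annual_revenue", "sic_code"]
incoming_columns = ["PolicyNum", "InsuredName", "AnnualRevenueUSD", "NAICSCode"]

def normalize(name: str) -> str:
    """Lowercase and strip separators so naming styles compare fairly."""
    return name.lower().replace("_", "").replace("-", "")

# Propose a mapping; anything without a close match needs a human.
norm_internal = {normalize(f): f for f in internal_fields}
mapping = {}
for col in incoming_columns:
    hits = get_close_matches(normalize(col), norm_internal, n=1, cutoff=0.6)
    mapping[col] = norm_internal[hits[0]] if hits else None

print(mapping)
```

Even this toy version surfaces the judgment calls: NAICSCode fuzzily matches sic_code here, and since NAICS and SIC are related but distinct industry code sets, that is exactly the kind of proposed mapping an AI acquisition tool should route to a human for review.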
CW: Interesting. That’s a great answer. Very cool.
LD: I do have these grand ideas, these grand visions of how it’s all going to work in the future, and I keep pushing that we’ll get there someday, but it still hasn’t happened. The one thing I talked to a team about recently, to make it as simple as possible: I want to see a Pizza Hut tracker. Okay? <laugh> I want to see when the phone call came in, I want to see what you ordered, I want to see the pizza being made, I want to know when it’s going to show up on my doorstep and when I’ve got to meet you there, with all the money exchanged behind the scenes. It’s just a matter of: I called, I can see when my pizza’s going to get here, and I’ll meet you at the door, and all these other things are happening behind the scenes.
And anytime, I can kind of look to see what’s happening behind the scenes, you know, did they put the pepperoni only on half? Right. All these pieces of the equation. I always take it back to that. I’ve always been enamored with the idea that when Pizza Hut first came out with that whole concept, I was like, this is it. This is process visualization of my wants being fulfilled, and here it is, I can see at a moment’s notice what’s going on. I wanna be able to see that with data <laugh>, essentially from data coming through the front door all the way through being prepared and ready for the data scientists and what they’re doing with it.
CW: It’s funny, Michelle and I just had literally this exact conversation. Yeah.
MG: Another one, also food-service related. I think Chris mentioned the DoorDash model. <Laugh>
CW: Nah, Domino’s, man.
LD: Yeah.
MG: Very funny. And Louis, everything that you’ve just talked about takes a massive amount of effort, and that’s just starting fresh, right? So when you layer on the fact that within these big insurance companies there are large legacy systems, or just multiple systems where you have to try and tie that together, that’s, I think, where you saw this emergence of: we need a new data warehouse, or where’s the data lake model that we can pull from? And even that comes with all of its challenges of, again, are you querying for the right information? There’s an example that we like to use as a team internally: you can ask five different people at an insurance carrier, how much did we do in gross written premium last year? And you’ll get five different answers, right?
And that’s because: where’s the source data coming from? What manipulations have been done to that data based on, you know, the needs of your business unit over time? So your point of being able to see behind the scenes of not only where’s my pizza in the process, but who is making that pizza, right? Who has touched it, who is responsible, did they wash their hands? Yeah, exactly. Right. I think that’s something that even companies we talk to, that say, we can integrate with your data, we can integrate with your systems, miss how complex some of those things are and why it’s so challenging. Sometimes the best solution fails because the underlying structure just isn’t ready for it.
LD: I imagine you’re touching on a third-rail topic, actually, Michelle. Because again, you’re talking about data lineage, okay? Yeah. You know, if I’m using data in my model, where did it come from? Okay. Mm-hmm. <Affirmative>. And so now the question is, let’s take it one step further into the ChatGPT realm, okay? And now all of a sudden somebody has a privacy question, right? Or something to the effect of, hey, how did you get my information? Now all of a sudden I’ve gotta track that back through to really understand. And from a regulatory standpoint, all of this privacy stuff associated with ChatGPT and these models that are being built is about to become a really, really big issue. So there’s the AI Bill of Rights that the US put out from the Biden administration, which essentially tells you all the things you can’t do, or, let me put it this way, that you shouldn’t do with AI models and the data that’s available for them.
The EU has followed on from their GDPR, and now they’ve got an initiative around, hey, how are we going to ensure that the data being used to build these models isn’t privacy-based? So again, are you gonna take all of social media out of the models that are being built to support the EU, right? Because now you’ve got all those challenges associated with it. And that’s a big thing in Europe in particular: they’re very big on, hey, you can’t track people, you can’t listen to people, you can’t do facial recognition in the EU, right? That’s a big no-no over there right now. And then you go to the UK, right? The UK is separate from the EU now, but their most recent declaration, I’ll call it, from Parliament was that they wanna be an AI global superpower, right?
So again, when you look at each one of these jurisdictions — mind you, I haven’t looked to see what China or Russia have really put in writing; there’s not much in writing around regulation or what they can and can’t do in that space yet. But specifically in the areas I just mentioned, the US, the UK, and the EU, they’re all trying to get out in front of this issue of, hey, these ChatGPT, these generative AI capabilities have to have some sort of guidelines. They have to have some sort of controls or guidance or regulations around them to really help organizations understand what they can and can’t trust of what’s coming out of them.
Because you look at ChatGPT. ChatGPT was trained on, they say, upwards of almost a third of the existing internet, right? Well, you look at the internet, and I don’t know that I want to trust, you know, Wikipedia being changed every other day, right? Or what gets put out on Parler or Truth Social or any of these other social networks that maybe have a skew to them. And now all of a sudden you see that ChatGPT doesn’t have a lot of guidance to say what it can and can’t say, and now all of a sudden things come out that are potentially not truthful.
CW: Well, yeah. Now it’s a vicious cycle, right? Yeah. ’Cause you’re gonna have Medium full of ChatGPT articles, so GPT-5 is worse, right? Yep. It’s a big problem.
LD: Yeah, it’s amazing. And the really amazing thing about this stuff is that when we were first handed these cell phones a few years back, a couple of people had ’em initially, or people started out with BlackBerrys or Palm Pilots or whatever it was. But the uptake of that technology was such that it took a few years before everybody got one in their hand and got to the point where they were smartphones. And where they are now, with the processing capability they have, is pretty phenomenal. But it took years for us to get there. ChatGPT was unleashed in November, and already we’re into another version of it, right? We’ve gone from ChatGPT to GPT-4, and the capability just between those two is exponential. We’re moving at such a fast rate that if we don’t get some controls and some regulations around what you can and can’t do with this stuff, we’re gonna get into a significant issue where we’ve painted ourselves into a corner and we can’t get out.
CW: This is a whole other episode, sorry, <laugh>, and we should have you back, because I can’t think of a lot of examples where regulation actually fixed the problems it was meant to with technology. So, well,
LD: And I don’t know if you’re familiar, but Elon Musk, last week, two weeks ago, came out with this open letter that essentially
CW: The test-ban treaty.
LD: Yeah. Said stop for six months. Well, who the hell’s gonna... maybe we could get the US, maybe we could get the EU, but there are other countries that are just completely running rampant with this stuff, and there’s no way they’re gonna stop. They’re gonna see it as an opportunity to either catch up or pass during that timeframe.
CW: That letter was so naive. Yeah, just silly. Taking shots at you, Elon <laugh> <laugh>. So again, we should come back to the regulatory aspect, ’cause insurance, as we talk about a lot, is highly regulated, right? So it really matters. And I wanna bring it back to insurance specifically right here to ask you: with all of the AI capabilities insurers already have and are trying to build, what do you think the biggest dangers are? We’ll talk about how to actually regulate it later, but what’s the biggest danger if we don’t get guardrails in place?
LD: I think that there’s a lot of opportunity for companies to essentially build their own insular GPT, okay? Where they can be taking advantage of their own proprietary information and building it out into an environment where you can have a very intelligent bot be expert on many things about your company. I think an area with a very high automation opportunity, and I don’t mean to disparage anyone in this regard, is the legal space, right? There’s an opportunity to really ingest a lot of what regulates the industry, and then you take on the decisions that have been made within your organization from a legal standpoint. And this extends into the data privacy space as well. You take all that and you layer it in, and now all of a sudden you’ve got an opportunity to have an interactive bot that can talk to you specifically around legal aspects, vendor contracts, things that you can and can’t do, or that you shouldn’t do, in the name of the company.
You know, all these things that you can get at a moment’s notice through a Google-type interface, where you don’t have to wait for legal representation to be available to ask the question, and for them to do the research on it; it would be essentially instantaneously available. So I do see a lot of opportunity there. The challenge, though, in that regard — and I don’t know if you saw what happened with Samsung last week — is that essentially Samsung tried to do something similar, but somebody, instead of keeping it within the confines of the company, loaded it into the actual ChatGPT, and the next thing you know, it’s worldwide. So proprietary Samsung information is available to everyone. So again, we just need to be really careful about how we go forward with this and be purposeful around what it is that we’re trying to address and accomplish.
I really think that there’s a lot of opportunity, and don’t even get me started on going down the path of quantum computing involved with that. We’re very quickly going to get to a point where there’s not gonna be a lot of stuff left for us to do. They’ll be really good at even coming up with new ideas around potential products based on what’s happening in the environment. So again, there’s lots of opportunity, and lots of changes, I think, are potentially coming down the path. I do think that we’ve got challenges within our own organizations to understand what’s happening. I think we’ve got even bigger challenges in that we’ve got legislatures and political bodies that aren’t smart on this stuff, right?
And they’re gonna be doing some knee-jerk reactions to almost everything. I’ll take you back to the comment I made about this AI Bill of Rights that the Biden administration put out there: almost every other sentence in it mentions the word bias. Okay? So somebody clued into that when they were writing it down and building it out, saying, well, this is gonna be discriminatory and all these bad things are gonna happen to all of our constituents. And so there was a focus on that, right? My hope is that there’ll be more education, more opportunity for people to understand and learn a little bit more about how this works. Hopefully they don’t get all their information from ChatGPT or GPT-4 <laugh>, right? Or wind up with a Skynet-type situation <laugh>. But again, I think there’s really good opportunity to utilize this in how we’re building out insurance products, and opportunities there.
MG: Agreed. Yeah, it’s early days in the sense that there’s still a lot of versioning and things to come, but to your point, lots of challenges, but a lot of great opportunity too, if it’s done right. This has been fantastic. You’ve given us a lot to think about. There are probably three or four other episodes that we could do just from what we talked about today. So, you know, we’d love to maybe have you back and chat through some of those other ones. I think Chris is dying to do the one on just the regulatory landscape <laugh>. He wants to do that too. What’s more exciting than that? Yeah <laugh>.
LD: Well, you know what, there was one thing that we had started down the path of — and I’m sorry to get distracted with some of the most current news. But one of the reasons why CDOs only last for two years is because the value and the benefits aren’t able to be extracted quickly when you look at it from a strategic view versus a tactical view. And so, again, a lot of organizations, a lot of CDOs and companies, they see that they’ve got data in all these different environments. They know that there’s value if they can bring it together. They’ve got data scientists on the side over here screaming for data to come through at high quality. And so the frustration, or the lack of value creation, in those first couple of years becomes a challenge for a lot of CDOs.
And so they essentially either find their own door, or find another place in the company, or go on to greener pastures. What I’ve found, though, for myself — and this is what has really been beneficial, and other CDOs can learn from my mistakes — is that having that overarching strategic view of how we’re going to bring all of the data together is one aspect. And maybe you’ve got a long-term view and a long-term investment associated with how you do that. But you’ve gotta be able to tactically bring this data together to address individual questions that are high priority, that are high value, and be able to do that at a high enough quality. It doesn’t have to be perfect, just high enough quality that you’ve got some level of confidence in what’s coming out, and you’re able to do that in a short window of time, maybe a quarter, six months.
Those are the kinds of focuses that you need. If you wanna survive as a CDO, you gotta be able to create those value propositions on that tactical level, in that short term. And again, it doesn’t have to be perfect. It has to be good enough; it has to be enough data that they can come up with a good idea or a good decision around it and go ahead and move forward with it. Like I said, there’s always gonna be this opportunity to kind of boil the ocean and make the big investment for the wider data gamut. That’s going to be more around automation, more around new technologies associated with bringing it together, maybe a new data warehouse. But in the meantime, there’s gotta be that effort of doing these smaller initiatives to create value and to show the promise of the data going forward.
CW: Sage advice to all the CDOs out there, not just in insurance, but specifically in insurance, where I think we’re at the beginning of a hockey stick in terms of what’s going to be done with and around data. Also, I’m thinking now that we should teach junior mathematicians or junior statisticians the Pareto distribution before the normal distribution <laugh>. That 80/20 is, yeah, much more useful. Well, anyway, this has been another episode of Unstructured Unlocked. My co-host as usual is Michelle Gouveia, and today we’ve had the pleasure of being joined by Louis DiModugno, the Bow Tie Data Guy. Louis, really great talking to you. Thank you so much for your time and insights.
MG: Thanks for joining us, Louis.
LD: Yep. Thanks so much for having me.
CW: Bye.
Check out the full Unstructured Unlocked podcast on your favorite platform, including: