
Risk Analysis for New Technology and Change Management in Insurance with Tami Pantzke, Former Senior Vice President Operations, Gallagher Re

Watch Indico Data CEO Tom Wilde co-host alongside Michelle Gouveia, VP at Sandbox Insurtech Ventures, in season 2, episode 10 of Unstructured Unlocked, featuring Tami Pantzke, former Senior Vice President of Operations at Gallagher Re.

Listen to the full podcast here: Risk Analysis for New Technology and Change Management in Insurance with Tami Pantzke, Former Senior Vice President Operations, Gallagher Re

 

Tom Wilde: Welcome to another episode of Unstructured Unlocked. I’m your host, Tom Wilde.

Michelle Gouveia: And I’m your co-host, Michelle Gouveia. And we are thrilled to be joined today by Tami Pantzke, a seasoned reinsurance executive who most recently was Senior Vice President of Operations at Gallagher Re. Tami, welcome to the podcast.

Tami: Hello.

Michelle: We’re really excited for the conversation today. I think before we jump into the long list of questions that Tom and I have for you, it would be great if you could just give the audience an overview of your background and your experience and where you spent the better part of your time during your career.

Tami: Okay. Like you mentioned, I have 30 years of experience in reinsurance. I started off working in client services back in the nineties and then moved into risk management and broker teams, going back and forth between the two before I landed in operations. So I have that hands-on experience and the operational experience, but along the way I was always called in to transition new technology, being an SME end user, providing feedback on what was working and what was not, and creating and improving processes along the way. And then finally, in the last 10 years or more, I was in operations, taking all those skills into consideration as we set up teams in Nashville and India and worked with teams in the UK and other locations, having to train and bring people up to speed on new technology and new processes.

Tom: That’s perfect, and I think that’s the topic we wanted to discuss today. Given your experience and background, when you think about technology transitions and evaluating new technologies, what’s generally the kernel that starts a new process? Is it competition? Is it market change? A sense that the technology can now make things better than it could before? Where does the germination start when you’re trying to evaluate a technology transition?

Tami: Okay,

Tom: Regulatory.

Tami: So some of this starts with the new budget rollout that shows how your company is planning to grow over the next five years, whether it be adding new clients or M&A activity. So everybody goes back to their huddles, and we look at our processes, our people, and our technology to see: can we scale this any further, or do we need to add on or do something new? I would say that’s really the first thing that starts the questions about whether something is working or not. And as far as bringing in new technology, you examine it in the sense that you don’t want your expenses to far exceed your growth potential, so you want to keep your margins pretty thin. That would be in the consideration.

So in those initial examinations of your five-year growth plan, you begin to look at your technology. You’ve heard from IT that it’s limping along, or you’ve heard from the vendor that they’re no longer going to support that technology, that they’re closing it off. And so now you’re pushed into a situation where you have to make some decisions. The first exercise you start with is a cost-benefit analysis or review, and I’m sure you’ve heard of that. You start off by looking at your current system: what would happen if we never changed anything and kept everything together? What are the costs? What are the benefits? What are the risks? Then you look at a second scenario: what if we kept the legacy system, didn’t change anything, but added on and did some improvements? Is that possible?

Is the technology there? There are a lot of questions that get involved. And then you ask the same questions all over: what are the costs? What are the benefits? What are the risks? And will we continue to be able to grow at the rate we plan to grow? Then you bring in vendors. You do a review of a few vendors and choose one or two to compare, and do cost-benefit reviews against your legacy system, your legacy system with improvements, and then your two vendors. And always in the backdrop of all of that is: what is our growth potential? What are we planning? They’ll say we’re going to grow 5% a year. What does that mean, and how are we going to grow? Because M&A would bring in a different type of scenario versus just adding more clients. And then you’re also usually working under a tight budget; when you’re a corporate company, you have to stick to a budget. So you lay that all out, look at it, and make decisions from there. Some of the things I wrote down that you would want to look at: first of all, the 80/20 rule. You want to make sure that anything you’re looking at can do at least 80% of the functions that you’re hoping for,

And you look across the piece for that 80% to be there, even in anything new. Then you take a deeper dive into your 20%. What is the 20%? Is it the same 20% the legacy system can do, or is it a different 20%?

So in that 20% are your most complicated functions, too complicated and too small to put into a system, and they’re also very manual and high risk. So that’s where your risk lies, in that 20%. That’s why you take a look at what 20% the new vendor is talking about and what 20% we are looking at, to see if they match up. So that’s really important. You also do a gap analysis during that timeframe. You’re looking at process maps: what the new system can do, where those functions take place and how they take place, what your legacy system can do, and what you really need it to do. You’re looking at all three of those things, looking at the complexity of each of the systems, the manual processes, the workarounds, if you will, that are incorporated. You’re hoping that anything new you bring in can take away some of those workarounds. And in your risk analysis, what you’re looking at for all three options is what controls are in place. I think some of the systems I’ve worked in in the past didn’t have enough AI in that space to help with the controls that you need, especially in a company that’s highly regulated,

has big financial items going on, and where legal actions and lawsuits can take place. So what you want to examine are the controls. What do they have? Can it identify a risk: a user risk, a client risk, a market risk? What can it do? Can it correct it? Does the system or your AI have enough information to correct a risk once it’s been identified, or does it just notify you so that somebody has to take a manual action to fix it? Or does it stop all functionality until the issue is resolved?

Tom: This is super interesting in terms of the breadth of challenges AI brings to the table as well, and I want to circle back to that in a minute. But maybe let’s go back to the top of the funnel for a second. I’m curious if you observed changes in how you would source potential new solutions over the years in the roles that you had. The old traditional software model was a lot of salespeople reaching out to you, pitching, setting up meetings. There are suggestions, if you read Gartner research and so forth, that this has basically inverted over the last 20 years, where buyers now do more like 60 or 70% of the research before they even talk to a potential solution provider. Did you see that sort of evolution? How did it change?

Tami: So having worked in large corporations, our business unit was not necessarily involved in that type of research. That’s done at a higher level, and certainly with the engagement of IT. And when you have a large corporation that operates in various segments, whether retail, reinsurance, or whatever, a solution doesn’t flow real well across the piece. So with some of the technology I was more aware of, the decisions were pretty much decided, or they were being reviewed and we were part of the review process. The hope was: it works for this team, so by gosh, it should work for you too.

Tom: Yeah.

Tami: So you were a key stakeholder. I’d like to think that it’s smoother than that, but I’ve worked in, what, four different reinsurance brokerage firms, and they all operated the same.

Michelle: So it sounds similar, Tami, because when I worked at the carrier, we still had, I mean, now everyone’s in POC or pilot land, but it used to be bakeoffs, right? You get two or three vendors in and they do what’s effectively a POC; you’re just running them concurrently. But it was: here’s the outcome we need, and then which vendor performs it better, right? Against, I think, some of the criteria that you were outlining, ultimately

Tami: RFP,

Michelle: Exactly. Ultimately, to then make a decision. But it sounds like what you’re describing in response to Tom’s question is a little bit less concern about the evaluation of the vendor, because that’s happening at a different level. You were more concerned about, or tell me if I’m wrong, what it would be like to implement that solution, right? So you were more concerned about what this actually does to the business process. Who are the people that need to go through change management training? What are the things we need to consider downstream in terms of where the data goes, where the information is flowing to? So I’d love to dive into that, because I think that’s a piece that some vendors, not all, miss. It’s, oh, we can do what they say, so let’s just go. And they don’t take into account the impact on existing workflows, some of those regulatory and compliance requirements, et cetera.

Tami: Okay. What is your question there?

Michelle: Taking a similar review of how you talked about the evaluation process. What are the key things that you would always question or consider when thinking about implementing one of these new technologies into an existing workflow to replace another technology to sit alongside it, et cetera?

Tami: Okay, so first let me say that some of the technology, some of the bigger items, are pushed down to you. Some of the smaller items, you’re told they’re available if you want to use them, and that’s when you would take a look to see if you wanted to. As far as how you hear about it, can you repeat that question? I’m sorry, what did you say?

Tom: The question’s really driving at the change management aspect of implementation, right? A lot of times the solution provider may have all the features and functions and it’s a perfect alignment, but with technology transitions, especially from your role in operations, how did you bring the team along? And we’re going to talk in a minute about the success and failure modes, but on this topic, how did you bring change management into the picture, in training and enablement, so that once the technology was selected, you know it’s going to be adopted?

Tami: So that’s really important. In change management, which I led a lot of for North America, and some of for international specialty, looking at the product and the process maps was really important: how do the functions work through the process maps, and how do they work for us if we adopt it? Beforehand, you had to have a really good understanding of your FTE and the talent level of each of the members in that FTE. So if new technology was coming in and you could see that it’s more complicated but requires less FTE, you know that your talent base has to move up a notch or two. I also saw processes or technology that would reduce FTE. So what does that mean? Do we release people, or do we keep them on board?

I mean, you have to plan this all out before you start having your initial meetings with your managers. And part of the thought process is that you still have to look at your five-year growth plan. Maybe you’re a little bit heavy here, and during the transition it’s really bulked up, but overall, as sales come in, that FTE will flow right into your management. So the first thing you do is bring everybody in, and you want the vendor, I just call them vendors, we call ’em tech companies, coming in and doing both a presentation and then a demonstration of the product itself. Then from there you would want to do a test where the team comes in: your SMEs, some of your managers, your key end users. And you’d want anybody who would have a touchpoint.

You want a broad spectrum, and we would go through and figure out who should be in that test base to provide really concrete feedback. And honestly, you do some of that before you jump all in, just to know what the challenges are. And in that, you’re listening to your vendor. What’s really key: does the vendor understand what your problem statement was to begin with? We’ve usually already identified our root causes, so do the solutions they’re presenting mirror and match up? So you do that, you grab the feedback. I will say, when you’re bringing a team together for that, you want your managers involved, because change doesn’t come easily to many people. They like the status quo. I would say probably 50 to 70% don’t want anything to change. So you have to bring in your managers and tell them. And when they first look at the product, and sometimes even in the test space, you get to take a temperature reading of where your resistance is going to come from, and at what point. You want to identify that resistance early on. And if I see that managers have sent their B team or their C team to the presentation and the demonstration, it always tells me, okay, we have an issue here,

Tom: Warning. Yeah,

Tami: It’s a warning. And so you kind of monitor that. So then the next phase is before you do anything, you select and do a pilot, you select whatever it is, and I’m just telling you what we’ve always done.

Wherever I’ve been, four or five companies, it’s always the same. They start off with a pilot, and you want some quality people in your pilot, even at a manager level if at all possible, because you want somebody with critical skills and critical thought processes on what’s working or not working. And it’s during that time that we start to discover that various terminology is used differently by the vendor than how we use it. So you spend a lot of time on definitions. When you say this, what does that mean? So some of that is going through the, what do you call it, the user book, the guide, what do you call the directions on how to use

Tom: The documentation? Yeah,

Tami: Yeah, the documentation. And so you would look at that and tweak it along the way in the pilot phase, just to make sure people understand. So you start building your fact sheets. You’ve seen those: if you see this, this is what it means, and here’s what you do. So we start building that. And all during the pilot, you never turn the legacy system off. You’re doing dual entries, one into the pilot and one into the legacy system, whatever it may be. And then you’re looking at your metrics, your users and your metrics, and you try to see how it came out. That gives you a lot of guidance on how you’re going to move forward in rolling things out, quite honestly. So you might find in the beginning that an action takes five minutes, and by the end of the pilot you’re seeing that they’re doing it in one minute. So the proficiency starts to show up, and it gives you a better sense of proficiency. I’m probably telling you about a lot of the junk that we look at.

Tom: Yeah, no, these are good examples though

Tami: To understand. And then it gives us an understanding of how big we should roll out in the future. Now, if it’s something small, sometimes you just roll it over, it’s 100%, and you’re there. I’ve never experienced that. We always go in phases, rolling things out in phases. And at one of the companies I worked at, that first phase was tragic, especially when I was in client services, because you’re doing double entry: you’re doing entry in the first system and then in the second. That’s when you notice your FTE and your resources get really backed up doing two. And you probably spend more time on the first phase than even on the pilot, because that flushes everything out. You’re having regular meetings with all the stakeholders, and we’re talking through all the whining that goes on, answering questions, resolving things, and working with the vendor. They said they couldn’t do this, but let me check, and then, oh, you can too, they missed a step here. Different things like that. So this is a

Tom: Good point. Good flow, yeah. To talk about. Let’s talk about maybe one that didn’t work. What were some markers of a failed implementation and what were some of your key lessons learned there?

Tami: In one of them, the function was handled internally, and the team on it was very small and probably worked in a dark closet somewhere, in the back of a basement. So nobody really knew what was going on. I think their thought process was: the fewer people we have in the bucket here to help with this, the faster we can get it out and get it implemented. The problem is that there wasn’t a lot of engagement. And so when that product finally rolled out, remember how you want to go with the 80/20 rule? I think it was 90/10, with 10 being the only part that was working for everyone.

Tom: Oh boy. Yeah.

Tami: And I think the lesson learned there was that there wasn’t enough communication across the board to know what the problem statement was and what the root causes of those problems were. When you’re not engaging representation from across the piece, I think you limit your success rate. That was one of ’em. Another one that I thought was kind of interesting, this was many years ago, where as a team we had been working and working on this technology that we were going to be rolling out, and we were finalizing it. The group discovered many, many months into the project, and I mean many months and many meetings, that it would not connect to our legacy system. So it was abandoned.

Tom: Yeah, yeah. That’s a must have.

Tami: Yes. So the lesson learned there is bring IT in very early, let ’em poke around and tell us what works and doesn’t work, what we can use and what we can’t.

Michelle: Tami, I’m having flashbacks to representing the business on a core systems implementation many years ago in a past life, and I don’t appreciate the flashbacks. Thanks for that. But in hearing you speak about the different phases of the projects, making sure you’ve got all the stakeholders involved and signed off at every stage, so that when the actual go-live happens you’ve got all the pieces in place, everything you’ve said holds true. It aligns with my experiences as well. But when I think of that, I think of it as this large transformational project: a core system being implemented that’s replacing or sitting alongside legacy. And in my day to day, we’re talking to startups or to carriers who are leveraging vendors. I mean, there are still those going through these large transformations, but a lot of the conversations we have about implementation, or running RFPs, or finding vendors to do something, are more surgical.

I think they’re smaller initiatives that are more focused on bringing AI to an underwriting workflow or a claims workflow, and so they don’t require the same scale or phasing of implementation. And I think that’s happened, one, because you’re not uprooting an entire policy system that is the base of your day to day, but it’s also a little bit about the industry having to move faster and say, okay, we’re going to run a limited-scope POC, or we’re going to have these gates that you need to get through, and we’re comparing two to three vendors at a time. So I’m just curious about your experience, or how should people think about it when it’s an implementation at a smaller scale? You don’t have as many stakeholders involved. You need to move faster. You maybe don’t have the luxury of checking every box to make sure that everything flows correctly. How should people think about getting teams moving and using a new technology that’s coming in, when maybe it’s more about proving the technology works, without having to go through all those stages of a large implementation? How much quicker can something move when it’s a smaller-scale implementation?

Tami: When it’s smaller scale, you can definitely move faster. And I’ve seen that and it’s like very department specific.

Michelle: Yes. Yeah, exactly.

Tami: And when it’s department specific, here’s what I’ve seen from my perspective. There are groups, and one of the teams was a younger set, very tech savvy, if you will, and they had introduced many technologies. We did POCs, and the outcome was always fabulous. It was my job, working on a broker team in a different role, to review some of that technology. We would review the technology, and it was for the benefit of our clients that we would have this small tech, and they were going for it. What I found is that there was a lot of enthusiasm for technology, but not the right questions being asked. And so I saw a lot of money being put into initiatives that didn’t have a full vetting, if you will. And if they were for our clients, well, you always want to improve the experience for your client, so you see something new and you’re all excited, you present it to your clients, and they’re all excited too.

And so you’re like, well, we’ll provide the portal, we’ll give you access. And then you start running user reports and you see that nobody’s used it. We saw many examples of things being discontinued within six months of buying a product. And I understand the idea of a POC, but I would say there were a lot of missteps in that area, where it just didn’t roll out well. So what would I say would make that experience better? Because I’m telling you, our young people are always champing at the bit for more technology, but they all have different opinions: this is great, this is great. You still need smart heads in there to figure out: where does this go? How does it grow? And not just as an immediate response. I think there was a lot of immediacy to it, and overlap with other projects that wasn’t considered, or other technology you already had on hand that wasn’t considered. As time went on, I saw more circling of the wagons around the decision makers, making sure the right people were in the room and the right questions were asked, to eliminate that failure. And I would say that’s where I saw the most failures in this

Tom: Maybe wrapping up: with the pace of technology change now and the enterprise’s ability to really digest it, what does the future hold for these technology transitions? Because in the outside world, it’s like every six months the world changes, but in the enterprise you can’t possibly rip everything out and do it again every six months. It might take six to 12 months just to implement something.

Tami: So that is true, and it’s probably longer than that. Five or six years ago, I was in Boston at an ACORD meeting, I don’t know if you’ve heard of ACORD, and the president of ACORD gave a speech. He looked over the last 20 years at how much of their overall budget insurance companies put into their technology. It was the lowest; it was like 1% or something. So there are a lot of aging systems out there. I know that it’s improving, but the problem is, with new technology constantly coming, nobody wants to dip in too quickly because it might change in a year. And I would say during 2000 to 2010 you saw this huge boom, and now we’re seeing another big boom

Tami: being AI. I think there’s a slowness to jump in, in insurance and reinsurance, because of that. Unless it’s smaller scale, not your big projects, excuse me, your big platform changes. When those are in, they hang around for about five to six years. You might stretch ’em out to seven, but they’re changing at that pace. And if you’re trying to attach new technology to something that’s probably 10 years old, really, you’re going to have,

Tom: We see that where core systems that maybe historically were on-prem, but now are moving to the cloud, and you get caught in this in-between where, just to what you’re saying, nobody wants to connect stuff to the old version because it’s going to be transitioned to the new version. So you get this weird stasis that might last for a couple years while you’re trying to sort that out.

Tami: Yes, absolutely.

Tom: Yeah. Well, great. This has been a really interesting conversation. We’ve been talking to Tami Pantzke, who has just a tremendous depth of experience in insurance ops, and it shows here in terms of the complexity of selecting and adopting these technologies. So with that, we’ll wrap it up. Thank you so much, Tami, for chatting today.

Tami: Thank you. Have a nice day.

Tom: Great.

Tami: Bye.

Check out the full Unstructured Unlocked podcast on your favorite platform.
