
Unstructured Unlocked episode 28 with Alan Ringvald, CEO at Relativity6

Watch Christopher M. Wells, Ph.D., Indico VP of Research and Development, and Michelle Gouveia, VP at Sandbox Insurtech Ventures, in episode 28 of Unstructured Unlocked with Alan Ringvald, CEO at Relativity6.

Listen to the full podcast here: Unstructured Unlocked episode 28 with Alan Ringvald, CEO at Relativity6


Michelle Gouveia: Hey everybody. Welcome to another episode of Unstructured Unlocked. I’m Michelle Gouveia.

Christopher Wells: And I’m co-host Chris Wells.

MG: And we are thrilled to be joined today by Alan Ringvald, co-founder and CEO of Relativity6. Alan, welcome to the podcast.

Alan Ringvald: Thank you so much. It’s a pleasure to be here. It’s good to see you guys.

MG: You too. We’re excited to have you. So you and I go way back. I know Alan really well, but for those out there who aren’t familiar with you and Relativity6, do you mind sharing a little bit about your background and how you got here?

AR: Yeah, no, for sure. I’m not sure how I got here myself. It’ll be a good recounting of that story. Well

MG: Via invitation for sure. At least here.

AR: Good, good, good to know. Yeah, so just quickly about myself: I started my first company in college. It was one of those moments where I realized a small little insight, that doing laundry sucks, and a lot of my friends didn’t know how to do it well; it’s actually just a pain point. I still struggle with it, honestly. And there were a ton of laundromats outside of my college campus, and I was like, why hasn’t anyone connected those two things so that it could be a service? So I built a platform around that and scaled it while I was still in college to, I think, 15 schools by the end of it. I continued on with that company for a couple of years. And then I sold it to, if you would believe it, there were larger players in this space, and I sold it to the largest player in the space.

And then I thought building startups was easy, and boy was I wrong, which is foreshadowing for everything else. But then I went off to Google for four-ish years, which was great. It was a really interesting experience being there pretty early, right after the IPO, so I’m dating and aging myself. I got to do a lot of different things in there, but at the end of it, that entrepreneurial bug was at me again, and I got the opportunity to help start a candy company in Boston. I got pitched this idea that somebody wanted to start a company that made candy that tasted like Snickers and Reese’s but came from naturally sourced ingredients, good sugars, if that’s even possible, fiber, make it a healthy type thing, but make it cool and fun and compete with Hershey’s and Mars and all that.

So I thought that was really interesting, even though I didn’t have any experience in CPG. I thought it was cool, so I joined the founding team there to try to figure that problem out. I did that for two years, and it was a crazy experience, but at the end of the day we got into 30,000 doors on day one. So it was a very large launch of an unknown brand, which it turns out is not a good thing in CPG; that’s a whole other podcast, but it’s better to start smaller. Everyone loved our value prop, the product was cool, and I learned a lot there. So I did that for not too long, two years. And then I had an interesting situation happen where, at that candy company, we actually had a lot of celebrity and athlete type investors, and I became pretty friendly with one of the pro athletes who was involved in the product.

And when I’d left the candy company, he asked me to help him do some marketing and branding and agenting type stuff. Basically nobody cared about him from a sponsorship perspective when he started, and he was like, hey, can you help me figure out what I could do? And this was pre-influencer, so it was like, let’s make you a channel and a brand, and let’s connect you with the 7 million New England Patriots fans who exist and care about you, and you could make money from them directly at some point. So that was the seed of a larger company that I ended up running, where we were basically doing that for pro athletes across the board, internationally and all that. Then I went off to MIT, first to work there, in the entrepreneurship department, helping students commercialize ideas. I ended up getting jealous and decided I wanted to go to school there myself. So I went to the business school and met my now co-founder at the company I’m doing now, which is called Relativity6. Sorry, I know that was longer than I wanted it to be, but hopefully

MG: Gives you some context. I’m on the edge of my seat to figure out how all of that gets to Relativity6 and what Relativity6 does.

CW: Yeah, it doesn’t make any sense, and I love it.

AR: Right? It makes no sense. Exactly. So

CW: What is the common thread there? How do you as a serial entrepreneur figure out what to work on and why and how to make those choices?

AR: Yeah, I think it’s just a moment in time, looking at something and seeing, could you turn this into some sort of a business? The mechanics of that always fascinated me. My view is I’m not smart enough to come up with some idea on my own just by thinking about it, but if I’m in the world and have context around me, I can see some things, and then I love the idea of executing. Everyone likes to talk about it, but what would it really take to make it real? And I think I have just enough recovery time after each one of them to forget all the horrible parts, and there are many horrible parts. So yeah.

CW: That’s great. What are some of those hallmarks that you look for in terms of an idea that’s worth executing on?

AR: Yeah, I mean it’s classic, but is there real pain and are there enough people with that same pain to make it worth the while if it’s not too custom for one type of person? Is there scale? I’ve always been fascinated by scale actually. For whatever reason. It’s always my dream to build something that scales, and I think that’s the main criteria actually. And then obviously a pain is a good one. That’s why I think the candy company was interesting. There wasn’t enough pain actually. It was more like us telling you to be healthier and it’s not really, there’s no pain. I mean, there’s obvious health pain of course, but that’s a later problem for a lot of people and they just don’t want to think about it. It’s the whole, don’t diagnose it, let’s fix it later, which is obviously bad, but I think played out in the way that company ended up, it went really big and then it had to scale down a lot because not enough people, I guess cared to be honest.

CW: Michelle, you’re going to have to rein me in at some point, but I’m going to keep pulling this thread. I’m fascinated when you say you’re looking at the potential for scale. Does that mean it has to be blue ocean, or are you willing to tread into some bloody waters to make that scale happen?

AR: Yeah, no, I mean, I don’t know of a blue ocean for anything that’s actually important or valuable. I would love to swim in those waters, but I’m blood-soaked, always have been. But you can be faster to market, and that’s kind of what I’m experiencing at Relativity6 too. I think we hit on an insight, not to transition into that too much. I love talking about the past.

MG: Alan’s reining you in himself, Chris, is what he’s saying.

CW: Fair enough. I’ll stop. No,

AR: No, go on. No, but yeah, I think if it’s valuable enough, eventually the market or the environment gathers for that, but you can be quicker to market, which is what we did. I think we were really fast to market on an obvious problem, and we’ve had, frankly, a ton of fast followers. So, just to finish the analogy, it can start out blue, but it gets red pretty quickly, from my experience. Yeah.

CW: And then let me do one more follow-up. You talked about the pains that you forget after a long enough cool-off. Give me your top three.

AR: Top three what? Painful moments, or what?

CW: What’s that?

MG: He tries hard to forget those, Chris.

CW: I want you to unpack the trauma for us.

AR: That’s what it sounds like.

MG: We’ve got an hour. We’ve got an hour.

AR: No, I mean, it’s just the pain of the uncertainty that you live with; I think that’s the hardest. It doesn’t go away, right? You always don’t know, and this is a good and bad thing. You don’t know the future of what’s going to happen. Typically when you work at a big corporate job, you kind of know what’s going to be going on in six months, for the most part. You have a sense of that, you’ve had a routine, and you have expectations from the greater company and all that. With something like this, for a very long time, you have no idea what things are going to look like in six months, and it’s painful, but it’s also the reason you do it, I think. And it doesn’t go away. You walk around with it on the weekend, you walk around with it when you’re with your friends or you’re at a movie. You wear it, and it can get to you after a while, to be honest, because there’s just a lot of uncertainty you’re living with.

CW: No, I’m worried. Are you taking care of yourself?

AR: Not enough. This is helping actually. Therapy is so expensive. This is perfect. Thank you.

MG: That’s how we’ll market this episode. Awesome.

AR: Alan’s therapy hour. It’s perfect.

MG: So I will bring it back a little bit. Alan, can you talk a little bit about what Relativity6 does? Sure. And how the idea came about?

AR: A hundred percent. Yeah. I guess the best way I can say it is, back when I was in school, a long time ago now, it was more just like, okay, AI and ML are very powerful and interesting. Lifetime value is something I’ve always thought about as well, in all the companies I’ve been involved with, and I always thought it would unlock so much if you could actually do it well. So broadly speaking, it was a project about whether we could bring the most modern ML techniques together with data, whether that’s external or internal, and start being accurate around LTV. And that’s got a lot of different expressions. One would be around churn, one would be around projecting out future spend; we came out with eight different components around LTV. We went with one for way too long, and the winner was actually sitting there the whole time.

So for the first five years of the company, we went around selling churn and retention, detecting propensity to churn. Another one of those models was around upsell and cross-sell. So those were the two components, and it was an interesting talking point, but certainly, in my opinion, not a scalable business for a lot of different reasons. We went down that path for a while. We started the company in late 2016, actually, and it wasn’t until Q2 of 2021 that we took another one of our components and turned that into the company, and scaled it from effectively zero to multiple millions of ARR in a very short amount of time, 18 months or so, and about 50 customers within insurance, by basically having one core insight and one core thing that we were solving.

And that thing was industry classification. Here we go. It’s answering the question of what it is that a company actually does. Now that sounds kind of ridiculous, and when I tell my friends that, they’re like, what do you mean, that’s not a thing? But it turns out it’s actually really hard to figure out if a carpenter’s a carpenter or if a carpenter’s a roofer. They look alike, they have the same kinds of names, they do a lot of the same type of work. It’s just that a roofer is way riskier than a carpenter from a pricing perspective, let’s say, in insurance. And specifically as you think about contractors, they’re a tough class, a tough segment, because of the nature of their jobs. If they’re on a job site and a customer’s like, hey, can you fix my roof? A lot of times they’re going to say, yeah, sure, I’ll figure that out. And now they’re roofing, and eventually over time they build that arm up and start offering roofing. But did they tell their broker that they’re doing that, or did they update their website? A lot of times they didn’t. So that’s a class where insurance gets it wrong a lot, and the cost of misclassifying is immense. Yeah.

CW: Can you connect the dots there between knowing exactly what you’re underwriting, some risk, some company, and lifetime value?

AR: Yeah, no, for sure. I think the point is that risk can and does change. Just think about the companies you guys work for; are they doing the same thing they did when they started? There’s always an evolution of sorts, and sometimes those changes aren’t really a big deal, but sometimes they really are, right? They have very big implications for LTV and, broadly speaking, for what they should be paying. Insurance is great because they have a lot of history, most of the time, around product lines and what’s going on. But the hard part is just understanding. So we know that roofers are dangerous, the fourth most dangerous job in America. Every time I walk by and see someone on a roof, I’m scared that I’m going to have to catch them or something. They always look wobbly; I don’t know why, but every time I look at a roofer, it looks like they’re wobbling around. So, very dangerous, but also very hard to detect that that’s what they’re doing, most of the time, if they’re not coming out and saying they’re a roofer.

So that’s really the value prop, and you can translate that into all different kinds of segments. The way that I like to pitch Relativity6 is: your large B2B database providers, no need to name names, we all buy their data, and there’s good reason to buy their data for sure, but they really tell you what a company was. Our mission, what we do and strive to do for our customers, is to tell you what a company is right now. So that’s the difference between the two things. And we feel like there’s a real category and home for that, which is what we’re driving towards. We continue to stay focused on industry. Again, it sounds so easy, but it’s an incredibly hard problem. Insurance specifically misclassifies 50% of the submissions that they get, and that they bind, actually. So huge implications for carriers.

And again, it’s a hard problem. A lot of people are now trying to solve it; people have been there before us, but the new wave is, how do you do it with new methods, which is what we’re doing. Why I feel comfortable with all that is, I know it’s an incredibly hard problem, and everyone else looking at this is doing other things. They’re wrapping it around other product offerings and getting specific about which industries or user types they focus on; they’re going to do things for underwriters. For us, we’re so obsessed with solving this specific problem that we know where the gaps are, and we’re spending our resources fixing those gaps, whereas others aren’t going to be able to spend the resources or time to solve it holistically like we’re striving to do. So it takes a lot of discipline. Every day somebody comes asking for a new thing that hopefully we can build, but we’re just trying to remember past experiences and stay focused and execute. I think it’s cliche, but there’s always going to be noise, there’s always going to be new competition, there’s always going to be something, but if you can just get up every day and actually put in the work, I think that’s all you can really do. Sorry, I’m sounding like a self-help book or something, but

MG: We’ve already established what this hour is for, right, exactly. So no worries.

CW: I paid some royalties to Adam Green.

MG: One question for you, Alan. So you’re obviously solving, based on what you just said, for the accuracy metric, right? How correct or how accurate can you be when you’re pricing these? Is there also a reduction-in-time metric that you guys measure against? These underwriters are having to sift through submission data, having to validate that data, in a lot of cases with manual research. How does Relativity6 improve both of those metrics, the accuracy and the time to process?

AR: Yeah, no, for sure. I don’t know if you’ve ever taken a name and address and then tried to map it to a six-digit NAICS code or an ISO code, and I mean

MG: All the time in my spare time, that’s my hobby.

CW: Michelle’s hobby. It’s weird.

AR: Well, I would feel bad for you if that was true, because it’s a miserable, miserable task. It really is.

MG: To be clear, I’m not. I don’t need that kind of help. I’m good.

AR: Sure, sure. But yeah, so we automate away the internet research that you would do, and the reason you need that versus going to a large B2B database provider is because most times, when you’re looking for that code on those big database providers and it’s a smaller business, it’s usually not right. We’re classified by a lot of these major B2B data providers as a furniture store. I swear that’s actual, yeah. So it’s just bad; it’s not useful at all. So that’s why we feel so confident in continuing to focus on just this.

CW: Yeah. So I want to zoom out maybe one level. How does your solution fit into the overall process? Where does it start and end? What does that look like?

AR: Yeah, so I think it’s very impactful at the very top of the funnel, right? As high up as you can go, the better. So either at the retail broker level or at the carrier submission level; pre-fill is the term used, I guess. A user fills in some basic information about a company, then some auto-suggestions about what that company actually does show up, and then you go through the flow. That’s a very popular way our product is utilized, so that’s one. Then, as you work your way down the flow, before something is bound, if you’re embedded within an underwriter’s bench or the platform they’re working out of, it’s very valuable there as well. Then there’s post-bind audit: we’ve bound it, but we have a little period of time to double-check. And then pre-renewal as well.

Has anything changed before we go out and renew this policy? So I’d say those are the four. But I think, generally speaking, what is cool in the future, and where I think things should go, actually, is more of a continuous monitoring versus stages. Certain segments of businesses are always changing, and I think it’s important that underwriters are alerted to that as much as necessary, but at certain moments in time it could be really useful to know what your risk profile is as a carrier. Like, fine, maybe we classified them as a carpenter and they’re really a roofer, but what’s our exposure for the rest of the year? Let’s keep tabs on this company and see what’s going on. I think that’s really,

CW: Yeah, sorry. I was just going to say, it’s like repricing a portfolio every day for an asset management company.

AR: Well said. Yeah, that’s exactly right. And why can’t that be done in P&C insurance? It should be. Businesses do change, especially small ones, and they go in and out of business all the time, and there’s just a lot of advantage for anyone who wants to take that seriously.
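[Editor's note: the staged-versus-continuous distinction Alan describes can be sketched in a few lines of Python. The touchpoint names and the 90-day staleness window below are illustrative assumptions, not Relativity6's actual API.]

```python
# Sketch of staged vs. continuous re-classification triggers.
# Touchpoint names and the 90-day staleness window are illustrative assumptions.

STAGED_TOUCHPOINTS = {"prefill", "pre_bind", "post_bind_audit", "pre_renewal"}

def should_reclassify(event, days_since_verified, staleness_window=90):
    """Staged model: re-verify the business classification at any of the
    four touchpoints Alan lists. Continuous model: also re-verify whenever
    the last check has gone stale, wherever the policy is in its lifecycle."""
    if event in STAGED_TOUCHPOINTS:
        return True
    return days_since_verified > staleness_window
```

Under a purely staged model only the first branch ever fires; continuous monitoring is the second branch running on a schedule, which is the "repricing a portfolio every day" analogy.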

CW: Yeah, you guys are no longer,

MG: We’ve talked a lot on the podcast about, when you’re partnering with vendors or trying to automate a process, being fully tuned into what you’re actually trying to solve for: what are the metrics that matter and are really important? Chris talks a lot about the baseline of accuracy; you’re never going to get to a hundred percent accuracy. But when you are partnering with or going into insurance carriers or brokers and pitching, is there a surprise or a shocked moment when they run a pilot with you and they had no idea how inaccurate some of their book was? What’s that process like when you’re selling this underwriting automation solution to them? What are they looking for? What are you actually solving for them?

AR: Yeah, no, it’s a great question, and this is a pretty subjective data point in many ways, which is interesting. The three of us could look at a business and, depending sometimes on how we feel, grade it differently. And that’s the reality of how it goes, right? There are human underwriters looking at things, and human brokers also looking at things. So context really matters when you’re grading. I think for us, honestly, it’s much more about them buying into you versus the hard pilot, because there isn’t a one-to-one answer a lot of the time. And if you’re comparing us to something the underwriter did last year, they graded it. No offense to underwriters at all; they do a lot of different things, but they get this data point wrong a lot, as any person would when they’re going through things. At this point, we have enough customers that keep using us and renewing us and all that to say, this is impactful.

This is effective. You’re probably running around 50% accuracy right now; we’re in the eighties to nineties, depending on input. So you’re going to get significant gains, and at this point, buy into our methodology and what we’ve proved, versus let’s look at us versus that. When it starts getting into that, I understand there’s a need and we’re happy to do it, but it always has to come with a caveat of, look, that’s not really the right way, in our opinion, to judge this. The real way to judge is to put it into production over time, see the gains that you get, and measure that way. So it’s imperfect. But one advantage of insurance is that it is a trust-relationship industry, and this translates to that as well. And it’s not a fast or first mover. So the fact that I can say we have Liberty Mutual and, blah, blah, blah, a lot of other large carriers as customers goes an incredibly long way, and then the customers vouch for you.

And so I think that, to me, is more important. Even in the pilot testing stage it’s like, look, here’s how we do it, here’s what we do. Are we perfect? No, absolutely not. This is really hard. But guess what? We’re the only team a hundred percent focused on getting it right. So we’re incentivized to be as good as possible, and you want that too. We’re aligned in terms of how we want to work together, and it’s served us well. The product is great, but it’s not, I guess, all just about the product. The feedback loop matters a lot. Better training data matters a lot. We’re very transparent about our process, and the cool thing is we know how to improve, and we do improve tangibly as we go, which has been a fun part of working in this space. I think

CW: All of that.

AR: Yeah, I’ll shut up.

CW: No, this is great stuff. It triggers a bunch of thoughts for me. One is, you talked about how evaluating what a company is can be a bit subjective, and then you gave a concrete number for accuracy. So what are the ingredients of that accuracy metric you’re talking about?

AR: Sorry, just to make sure I have it: how do we actually measure confidence and accuracy of a prediction?

CW: Yeah. What’s the ground truth, for example, that you’re comparing against?

AR: Yeah, no, for sure. So for us, our product is constantly evolving, but a big value prop is that we’re live, meaning we’re going onto multiple search engines in real time, including our own. We’ve actually built our own from scratch; yes, we’ve indexed the web, which is crazy, but I think a huge advantage for us. So the value prop is that, and then confidence and accuracy relate to whether we were able to detect this entity with a level of confidence: could we find this company on the public web somewhere? That’s a huge component. The other huge component is that there are about a thousand or so classes of business, and we’re very transparent about the strength of those classes, which is related to the training data quality. So if those are above a certain threshold, it’ll be above a certain score. We give a confidence score with every prediction, and what’s cool, and it sounds so obvious, but what we work really hard on, is consistency around the confidence score. Meaning if it’s a 0.7, that means it’s 70% accurate; if it’s 0.8, it’s 80%, stuff like that. So that underwriters and underwriting groups can confidently set a threshold and trust it, and then they have their own risk tolerance for accuracy at that point. It’s just a lever that they pull. So hopefully that answers it.
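[Editor's note: the calibration property Alan describes, where a score of 0.7 means roughly 70% observed accuracy, can be checked with a simple binning routine, and the threshold is the "lever" underwriters pull. This is a generic sketch, not Relativity6's code; the bin width and 0.8 threshold are assumed values.]

```python
from collections import defaultdict

def calibration_by_bin(predictions, bin_width=0.1):
    """Group (confidence, was_correct) pairs into confidence bins and compare
    each bin's mean stated confidence to its observed accuracy. A well-calibrated
    score has the two roughly matching (0.7 confidence ~ 70% correct)."""
    bins = defaultdict(list)
    n_bins = int(round(1 / bin_width))
    for confidence, correct in predictions:
        b = min(int(confidence / bin_width), n_bins - 1)
        bins[b].append((confidence, correct))
    report = {}
    for b, items in sorted(bins.items()):
        mean_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(1 for _, ok in items if ok) / len(items)
        report[round(b * bin_width, 1)] = (round(mean_conf, 2), round(accuracy, 2))
    return report

def route_prediction(predicted_class, confidence, threshold=0.8):
    """The risk-tolerance lever: auto-accept the predicted class of business
    above the carrier's threshold, otherwise route it to manual review."""
    return ("auto_accept" if confidence >= threshold else "manual_review",
            predicted_class)
```

Because the scores are calibrated, a carrier choosing a threshold of 0.8 knows it is accepting predictions that are right roughly 80% of the time or better.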

CW: It. Super helpful if only all meteorologists were as disciplined as you all are. The other thing I wanted,

AR: Sadly, they’re not.

CW: I think Michelle and I can both validate for you that there is a first-mover problem in insurance buying technology, but at the same time, no one wants to be the last mover, so you get a lot of peer behavior. So what you were talking about really resonates with me.

AR: Yeah, it’s a tough way to go to market for sure. Sorry, sorry, go ahead, Chris.

CW: That’s okay. The other thing I want to do is switch directions a little bit. You talked about underwriting and how underwriters do their jobs. We’ve heard a lot that there’s a talent gap in underwriting, and I was just curious about your thoughts on where the opportunities are in the insurtech stack to shore that up, and how Relativity6 helps in and of itself.

AR: Yeah, no, a hundred percent. From my perspective, what I’m hearing on that end, and how that impacts us ultimately, is that it’s just really hard to hire good underwriters, and you need really good underwriters to have a really good business. So a priority I’m hearing on the underwriting side is: help my underwriters have a better experience. Let’s help them do their jobs more effectively. Let’s not have them do redundant things; let’s have them do value-added things. It’s like we were saying kind of jokingly before with Michelle, right? Give you a name and you research it. That’s added work. And the thing I didn’t say: it can take 15, 20 minutes to locate the business, assess what they do, and then map it to a class code, and you might have to do that hundreds of times in a day.

Will Relativity6 and others solve that a hundred percent? No, but it can take away a lot of the work. The way I position it also is: what if I could take away a large chunk of this from your day? Not all of it, but a lot of it. Is that still valuable to you? And the answer is always yes, because it’s non-value-added work that they shouldn’t have to do; the machine can help here. So how it’s impacting us is that it’s creating more demand for products like ours that can take care of the underwriters who are already there and supercharge them as much as possible.

MG: I’m going to shift, I’ll pivot from that to the topic that is on everybody’s minds recently: generative AI, especially ChatGPT. We’ve talked about how machine learning and AI solutions have been around for years, and insurance carriers and insurtechs have been leveraging those capabilities, but it seems that now that it can be in each individual’s hands, there’s new excitement about it. What are you hitting up against when you’re talking to insurance carriers or underwriters about Relativity6 vis-a-vis a ChatGPT-like solution? And then, how do you think about AI becoming more available at the hands of an underwriter, and how it would impact that talent gap we were just talking about?

AR: Yeah, for sure. I think about that one a lot, right? When that launched, especially GPT-4, it was a real moment. That was a moment that I think a lot of us in this space didn’t see coming; a lot of people got caught a little bit flat-footed, to be honest about it. Very powerful, interesting technology. I use it in my day-to-day life a lot, actually, and it’s really valuable in a lot of different ways, for sure. But as it relates to either what we’re doing or other AI companies, it’s actually very interesting. There’s this concept, I’m butchering the phrase, but it’s the jack of all trades, master of none thing. It’s a very general intelligence, and that’s insanely cool, the fact that it can answer a million different types of questions about all types of things, but it’s not going to be your go-to for niches or specific things that you might need.

It’s just never going to be that. The generalization of it creates issues; there are papers written about it, and you can think what you think, but there is a deterioration we believe is happening because of how general it is. There is fluctuation in quality, generally speaking, and inconsistencies at scale. What’s happening from a compute perspective right now is absolutely insane; what has to happen behind the scenes for ChatGPT to run is like a miracle. It is crazy, and I think it’s very early days, so you are seeing some fluctuations. But I think if you’re a carrier and you’re thinking about it, it’s so tempting to get into it, and it makes sense. But I would just caution that you’d have to have a relationship with Microsoft directly and not at all go with whatever’s out there for consumers, because the terms and conditions, which we’re looking at every day, are constantly changing.

You can’t call someone if there’s a problem. You can’t put it into a workflow in any way. So that’s the technical business side of it: there’s no enterprise contract to sign unless you’re a very, very large company. It’s very early days. I think it basically opens everyone’s minds up as to what is possible, but there’s an on-the-ground reality: when you’re working in a regulated industry, there are privacy and IP concerns. Who owns what kind of data are you feeding in? I know you can turn it off, but the fact that you have to turn it on and off means privacy has come second or third or fourth here; it’s not top of mind as they’re scaling. That would be a concern if I were a bank or insurance company. So there’s that. And then there’s the technical concern that, unless you’re talking to top brass at Microsoft right now, they can shut you off at any moment. So putting this into a key workflow is pretty reckless, we’ve found. But all of that said, we do like it. We use it when we can; there are actually really interesting internal uses for it. But yeah, all kinds of things. Sorry.

MG: So what you’re saying is the insurance regulators love it, or are going to love it. Yeah,

AR: They’re going to, yeah. It’s crazy, right? It’s crazy. I know a lot of carriers tell their employees not to use it, let alone put it into a workflow or rely on that data for something as important as underwriting. And by the way, if you actually read OpenAI’s terms and conditions, you can’t use any of the information from ChatGPT for credit or lending decisions right now. But that’s constantly shifting. We’re so early in it that I think it’s going to be a side thing for a long time, and that’s good. Again, it opens everyone’s minds up, but it’s not the old reliable you’re looking for when you’re putting it into a stack that’s important for making money, in my opinion.

CW: Okay. Alan, we’re coming up on the last 10 minutes. This is where I ask you to pull out your crystal ball. So, two-part question. One: do large enterprises get comfortable with sending their data over the wire to an Azure or Google or an AWS for large language models, or do they get serious about pulling large language models behind their own firewall and running them themselves?

AR: Okay, so I think I got that. Basically your crystal ball question: will carriers be comfortable doing some of this Gen AI type stuff with partners, or are they going to want to build their own ecosystems for it? I think it’s going to be a blend, right? It depends on the organization. I mean, today you could work with companies like AWS or Azure where it’s part of your environment. I think the key thing is comfort around one question: can I delete my data, and who controls it? And where is it? I want to know where it is. Can you tell me where it is? And then if I feel like it tomorrow, I can press a button and it’s deleted, and you can show me that it’s deleted. If those things are true, I think they’ll definitely be open to it.

There are always going to be those carriers that want to do everything themselves and have the resources to do that. It’s going to be really expensive, like crazy expensive. I don’t even know if they understand the level of compute needed, especially as more use cases open up and all that. But I think the majority of the carrier world will be okay with AWS or Azure or any one of those large providers, again, assuming they can totally buy into the fact that you actually have control. I think that’s the big issue right now: if you use ChatGPT right now, you have no idea where that data is going and who’s using it for training and all that terrifying fun stuff. And that’s one of our value props: we have a private cloud. At any moment you press a button and it’s deleted. It’s yours. There’s no one else owning it but you. So I think it’ll evolve into that.

CW: Okay. And then the second part of my crystal ball question is the following. Say companies do try to build this behind their firewall with their own large language model resources. Are they training something like an underwriting GPT? Is it a carrier-X GPT, general across the org? Which way do you think that goes?

AR: Interesting. That’s a good question, right? Because you’re sitting on so much proprietary information. That’s really cool. It depends if a carrier has an internal initiative. The other question is, are they going to bring in the talent to make that stuff happen, back to your talent question, internally or not? And that’s going to be a major investment. Otherwise, there could be opportunities for companies, but it’s going to be bespoke, I think, for a lot of it. It’s back to my fear of scale: can you scale that, if you’re building internal models for proprietary data sets that you can’t transfer over? It wouldn’t be something I would want to personally get into, but I’m sure there’s a whole world of consultants that would be ready to do that. Because the key to this, at the end of the day, is going to be proprietary quality training data. There are always going to be better transformers and neural networks and stuff like that. It’s just going to come down to that really, really important resource: the data that no one else has. So they should do it themselves, in my opinion. It’s a good question. Thank you.

MG: Well, thanks for that prediction. We’ll have to see what comes true over the next few years. But it is now recorded, right? It is here to stay. Your response

AR: Can’t be taken back.

MG: As always, ChatGPT is a solid place to end. Thanks again for joining us. This has been another episode of Unstructured Unlocked. I’m Michelle Gouveia,

CW: And I’m co-host Chris Wells,

MG: And we were joined today by co-founder and CEO of Relativity6, Alan Ringvald. Thanks again for coming in.

AR: Of course. Really good to talk to you. Yeah, it was fun.

Check out the full Unstructured Unlocked podcast on your favorite platform.
