The insurance underwriting process naturally involves vast amounts of data, which is why so many insurers are interested in automating the underwriting submission process. But automation invariably raises questions about data quality and accuracy that are fundamental to maintaining good loss ratios and overall competitiveness.
Art Borden, a recent guest on my Unstructured Unlocked podcast, delved into the data quality issue during our discussion. Borden's deep experience in insurance includes 12 years as VP of Underwriting Process and Technology at CNA. Before that, he was at Zurich Insurance, leading business analysis for the creation and deployment of business process management workflows across its enterprise underwriting processes. In other words, Borden knows the territory when it comes to bringing automation to underwriting.
The question of data accuracy is important because only with accurate data can underwriters make good decisions. But Borden pointed out that data accuracy can also be a competitive threat.
“What if your competitor builds a better process for validating data and you are competing against them and you’re writing risks that you probably shouldn’t write because their data process is better than yours?” he asked. “This is the hidden competitive pressure the industry feels.”
In short, better data analysis is the equivalent of a better mousetrap, he said. Whoever has it will win over time.
Related content: Underwriting exec explains how applying technology to insurance underwriting processes keeps brokers happy
Automation and data accuracy
Better analysis, of course, starts with having good, accurate data. This is where insurance companies (and others) tend to get hung up when it comes to automation.
They may have a good intelligent intake solution to help automate the submission process, including pulling relevant data from PDFs, emails, and other documents. The question then becomes: how accurate is the data the intelligent solution is pulling in? And if the data is 80% accurate, which 20% is inaccurate?
“Those 20% questions can really get you in trouble if you underwrite the wrong risk based on the wrong data,” Borden said. “So, there’s an evolution going on in the industry to tighten that down and to build better models [with improved accuracy]. And that’s a challenge for the industry.”
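To make that "which 20%?" question concrete, here's a minimal sketch of how a team might audit extracted submission data against human-verified values. The field names and sample values are hypothetical, not anything Borden described; the point is that a per-field breakdown shows where the inaccurate portion lives, rather than burying it in a single overall percentage.

```python
from collections import defaultdict

def field_accuracy(extracted_records, verified_records):
    """Per-field accuracy across a sample of submissions.

    Both arguments are lists of dicts keyed by field name; records
    are assumed to describe the same submission at the same index.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for extracted, verified in zip(extracted_records, verified_records):
        for field, true_value in verified.items():
            total[field] += 1
            if extracted.get(field) == true_value:
                correct[field] += 1
    return {field: correct[field] / total[field] for field in total}

# Toy sample: the construction class extracts cleanly, but total
# insured value (TIV) is wrong on one of the two submissions.
extracted = [
    {"insured_name": "Acme Co", "tiv": 1_200_000, "construction": "frame"},
    {"insured_name": "Bolt LLC", "tiv": 850_000, "construction": "masonry"},
]
verified = [
    {"insured_name": "Acme Co", "tiv": 1_250_000, "construction": "frame"},
    {"insured_name": "Bolt LLC", "tiv": 850_000, "construction": "masonry"},
]

for field, acc in sorted(field_accuracy(extracted, verified).items()):
    print(f"{field}: {acc:.0%}")  # construction: 100%, tiv: 50%, ...
```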
Related content: MSRB chief data officer explains how to extract value and insights from unstructured data
How accurate is our current data?
It's interesting, however, that this accuracy discussion comes up in the context of insurance automation, yet rarely with respect to the data produced by the manual processes insurance companies are trying to replace.
The real question should be: can automation produce data that's more accurate than what we have now? Answering that requires knowing how accurate the data you currently produce is. Only then can you say whether an 80% accuracy figure from your automated process is an improvement.
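One way to see why the baseline matters is a quick simulation. This sketch is purely illustrative, with invented error rates (nothing here comes from Borden or any real carrier): audit the same verified sample of records both ways, and the automated figure only means something next to what manual keying actually achieves.

```python
import random

def accuracy(records, truth):
    """Fraction of field values matching the verified ground truth."""
    hits = sum(
        rec.get(field) == value
        for rec, verified in zip(records, truth)
        for field, value in verified.items()
    )
    total = sum(len(verified) for verified in truth)
    return hits / total

def corrupt(records, error_rate):
    """Copy records, blanking each field value at the given error rate."""
    return [
        {k: (v if random.random() > error_rate else None) for k, v in rec.items()}
        for rec in records
    ]

random.seed(7)
truth = [{"tiv": 1_000_000 + i, "zip_code": f"{60600 + i}"} for i in range(200)]
manual = corrupt(truth, 0.22)     # assume hand-keying is wrong on ~22% of fields
automated = corrupt(truth, 0.18)  # assume automated intake is wrong on ~18%

print(f"Manual baseline:  {accuracy(manual, truth):.0%}")    # roughly 78%
print(f"Automated intake: {accuracy(automated, truth):.0%}")  # roughly 82%
```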
As Borden pointed out, brokers are typically in the middle of the insurance underwriting process, sending submissions on behalf of clients. "Brokers don't necessarily have perfect information, either," he said. A broker will call their insured, who may have failed to update some specific bit of information. "So, you start out with imperfect [data] there at the source, at the customer level." Brokers may also misinterpret something a client is asking for. All of that data can wind up on a submission.
"So are we sitting at a 100 percent perfection at that point?" he asked. "Certainly not."
Gauging third-party insurance data
Insurance companies are increasingly using third-party data sources as part of the underwriting process. That data, too, has to be assessed for accuracy against the data insurance companies gather through their own applications.
"There's a whole methodology you have to build to test and retest those things," Borden said. That's because data quality in the marketplace has been improving every year. If a source's quality is rated 80% this year but 85% two years from now, that makes a big difference. "So, it's not a one and done kind of process."
He argues that insurance companies need a group responsible for monitoring such third-party data for quality. "Maybe you didn't historically need it because you never really called on third party data to do this before," he said. "But in the future, don't you need it? Yeah, you probably do. … because you're using it to underwrite your business."
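For a sense of what that "test and retest" discipline might look like in practice, here's a hedged sketch. The vendor name, fields, and 80% quality floor are assumptions for illustration, not anything Borden prescribed: re-score each third-party source against verified internal records on a schedule, keep the history, and flag movement in either direction.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QualityCheck:
    checked_on: date
    source: str
    accuracy: float

def score_source(third_party, verified):
    """Fraction of third-party field values matching verified internal records."""
    hits = sum(
        record.get(field) == value
        for record, truth in zip(third_party, verified)
        for field, value in truth.items()
    )
    total = sum(len(truth) for truth in verified)
    return hits / total

history: list[QualityCheck] = []

def run_check(source_name, third_party, verified, floor=0.80):
    """Score a source, log the result, and flag drops or improvements."""
    acc = score_source(third_party, verified)
    history.append(QualityCheck(date.today(), source_name, acc))
    if acc < floor:
        print(f"ALERT: {source_name} at {acc:.0%}, below the {floor:.0%} floor")
    elif len(history) > 1 and acc != history[-2].accuracy:
        print(f"{source_name} moved from {history[-2].accuracy:.0%} to {acc:.0%}")
    return acc

# Toy run: one property record, rechecked after the vendor improves.
verified = [{"year_built": 1998, "sprinklered": True}]
run_check("property-vendor", [{"year_built": 1996, "sprinklered": True}], verified)
run_check("property-vendor", [{"year_built": 1998, "sprinklered": True}], verified)
```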
Interesting food for thought, no?
Those were just a few of the insights from my chat with Art Borden. Click here to read a transcript of the entire conversation, or check out the podcast on your favorite platform.