The insurance underwriting process naturally involves vast amounts of data, which is why so many insurers are interested in automating the underwriting submission process. But automation invariably raises questions about data quality and accuracy that are fundamental to maintaining good loss ratios and overall competitiveness.
Art Borden, a recent guest on my Unstructured Unlocked podcast, delved into the data quality issue during our discussion. Borden's deep experience in insurance includes 12 years as VP of Underwriting Process and Technology at CNA. Prior to that, he was at Zurich Insurance, leading business analysis efforts for the creation and deployment of business process management workflows for its enterprise underwriting processes. So, Borden knows the subject of bringing automation to underwriting processes well.
The question of data accuracy is important because only with accurate data can underwriters make good decisions. But Borden pointed out that data accuracy can also be a competitive threat.
"What if your competitor builds a better process for validating data and you are competing against them and you're writing risks that you probably shouldn't write because their data process is better than yours?" he asked. "This is the hidden competitive pressure the industry feels."
In short, better data analysis is the equivalent of a better mousetrap, he said. Whoever has it will win over time.
Related content: Underwriting exec explains how applying technology to insurance underwriting processes keeps brokers happy
Automation and data accuracy
Better analysis, of course, starts with having good, accurate data. This is where insurance companies (and others) tend to get hung up when it comes to automation.
They may have a good intelligent intake solution to help automate the submission process, including pulling relevant data from PDFs, emails, and other documents. The question then becomes: how accurate is the data the intelligent solution is pulling in? And if the data is 80% accurate, which 20% is inaccurate?
"Those 20% questions can really get you in trouble if you underwrite the wrong risk based on the wrong data," Borden said. "So, there's an evolution going on in the industry to tighten that down and to build better models [with improved accuracy]. And that's a challenge for the industry."
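To make the "which 20%" question concrete, here is a minimal sketch of a field-level accuracy audit in Python. The field names, values, and two-submission sample are hypothetical; the only assumption is that you keep a hand-verified set of submissions to score the intake model against.

```python
from collections import Counter

# Hypothetical audit: compare fields an intake model extracted against values
# a human reviewer verified on a small sample of submissions.
extracted = [
    {"insured_name": "Acme Corp", "tiv": 5_000_000, "naics": "236220"},
    {"insured_name": "Beta LLC", "tiv": 1_200_000, "naics": "445110"},
]
verified = [
    {"insured_name": "Acme Corp", "tiv": 5_000_000, "naics": "236118"},
    {"insured_name": "Beta LLC", "tiv": 2_100_000, "naics": "445110"},
]

errors, checked = Counter(), Counter()
for model_row, gold_row in zip(extracted, verified):
    for field, gold_value in gold_row.items():
        checked[field] += 1
        if model_row.get(field) != gold_value:
            errors[field] += 1

# Per-field accuracy shows *where* the misses are, not just the overall rate.
for field in checked:
    rate = 1 - errors[field] / checked[field]
    print(f"{field}: {rate:.0%} accurate ({errors[field]} of {checked[field]} wrong)")
```

Scored over a few hundred audited submissions rather than two, the same loop tells you whether the misses cluster in the fields that actually drive the underwriting decision.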
Related content: MSRB chief data officer explains how to extract value and insights from unstructured data
How accurate is our current data?
It's interesting, however, that this accuracy discussion comes up in the context of insurance automation, yet rarely with respect to the data produced by the manual processes insurance companies are trying to replace.
The real question should be: can automation produce data that's more accurate than what we have now? Answering that means knowing how accurate the data you're currently producing is. Only then can you say whether the 80% accuracy figure from your automated process is an improvement or not.
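As a back-of-the-envelope illustration, here is what that comparison looks like once both pipelines are audited against the same hand-verified sample. All of the counts below are invented for the sketch; they are not figures from the episode.

```python
# Hypothetical audit: 2,000 field values from one gold sample, scored against
# manual rekeying and against the automated intake pipeline.
fields_checked = 2_000
manual_correct = 1_560     # assumed: 78% accuracy from manual data entry
automated_correct = 1_600  # assumed: 80% accuracy from the intake model

manual_acc = manual_correct / fields_checked
automated_acc = automated_correct / fields_checked

print(f"manual baseline: {manual_acc:.0%}, automated intake: {automated_acc:.0%}")
if automated_acc >= manual_acc:
    print("the automated process matches or beats what it replaces")
else:
    print(f"automation trails the baseline by {manual_acc - automated_acc:.0%}")
```

The specific numbers don't matter; the point is that "80% accurate" is neither good nor bad until you have a measured baseline to hold it against.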
As Borden pointed out, brokers are typically in the middle of the insurance underwriting process, sending submissions on behalf of clients. "Brokers don't necessarily have perfect information, either," he said. A broker will call their insured, who may have failed to update some specific bit of information. "So, you start out with imperfect [data] there at the source, at the customer level." Brokers may also misinterpret what a client is asking for. All of that data can wind up on a submission.
"So are we sitting at 100 percent perfection at that point?" he asked. "Certainly not."
Gauging third-party insurance data
Insurance companies are increasingly using third-party data sources as part of the underwriting process. That data, too, has to be assessed for accuracy against the data insurers already hold from their own applications.
āThere’s a whole methodology you have to build to test and retest those things,ā Borden said. Thatās because data quality from the marketplace has been getting better every year. If their quality is rated 80% this year but 85% two years from now, that makes a big difference. āSo, it’s not a one and done kind of process.ā
He argues that insurance companies need a group responsible for monitoring such third-party data for quality. "Maybe you didn't historically need it because you never really called on third-party data to do this before," he said. "But in the future, don't you need it? Yeah, you probably do. … because you're using it to underwrite your business."
Interesting food for thought, no?
Those were just a few of the insights from my chat with Art Borden. Click here to read a transcript of the entire conversation, or check out the podcast on your favorite platform, including: