By now you’ve probably toyed around with ChatGPT and perhaps other GPT-3-based artificial intelligence technologies. If you’re in the insurance industry, you may well be thinking about how to apply the technology to data analytics, fraud detection, underwriting automation, and more. You’d also be wise to consider issues like explainability and the limitations of the technology that give rise to the need for expert adult supervision.
Indico Data is an AI company and a leader in deploying GPT-based AI models, so we field more than our share of questions about ChatGPT, including from our many insurance industry clients and prospects interested in various aspects of insurance process automation. Given that, my Unstructured Unlocked podcast co-host and I decided to dedicate our latest episode to discussing what ChatGPT and similar AI technologies based on Generative Pre-trained Transformer (GPT) models mean for the insurance industry.
Listen to the full podcast here: Unstructured Unlocked episode 14 with Michelle Gouveia
The art of the prompt with ChatGPT
As noted at the top of the podcast, ChatGPT is an AI-based natural language processing model (a topic we explored in some depth in this previous post on GPT-3), and insurance companies are no strangers to AI. In fact, I think of actuaries as the original data scientists and data modelers. For as long as insurance has been around, actuaries have been at work examining events that happened in the real world and gleaning data from them to drive business decisions.
My co-host, Michelle Gouveia of the venture capital firm Sandbox Industries, noted insurance companies have lots of historical data at their disposal from previous claims and such. It’s huge to have functionality that can look at that data and help you “isolate instances of importance” and make business decisions, she said.
With respect to ChatGPT, though, given that its training data spans much of the public internet, the tricky part is asking the right questions, she points out. “The real art is in drafting the prompt,” Gouveia said. “I’m seeing job postings for prompt engineers and things like that.”
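To make that concrete, here’s a minimal sketch of what prompt drafting can look like in code, using the OpenAI Python library. The model name, the claim note, and the requested output format are all placeholders; the point is the structure of the prompt, not the specifics:

```python
# pip install "openai<1"; requires the OPENAI_API_KEY environment variable
import openai

# Hypothetical claim note; substitute your own document text.
claim_text = "Rear-end collision on I-90; claimant reports neck pain."

# A vague prompt invites a vague answer. The "art" is an explicit role,
# task, and output format:
prompt = (
    "You are an insurance claims analyst. From the claim note below, "
    "extract the loss type, whether an injury was reported (yes/no), "
    "and a one-sentence summary. Respond as JSON.\n\n"
    f"Claim note: {claim_text}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output suits extraction tasks
)
print(response.choices[0].message.content)
```

Rewording the role, the fields requested, or the output format changes what comes back, which is exactly why prompt drafting is emerging as a skill in its own right.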
Related content: GPT-3: Hype, reality, and the Indico generative AI origins story
Using AI to improve underwriting and detect fraud
Gouveia noted that one insurance use case where GPT technology may prove useful is taking data from claims to better inform underwriting: identifying patterns in claims related to issues such as demographics, time of year or day, and so forth.
“What pattern can you identify in your claims that can help with reserves, or adjudicating a claim faster,” she said. “And then eventually taking those learnings and sending them to underwriting and saying upfront, ‘If these characteristics are in a submission it’s riskier, or it’s less risky, and we want to write it or we don’t want to write it.’”
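As a rough illustration of the idea (toy data and made-up column names, not a production model), that kind of pattern-finding can start as simply as grouping historical claims by a candidate characteristic and comparing severity:

```python
import pandas as pd

# Toy claims data with made-up columns; real data would come from the
# claims system.
claims = pd.DataFrame({
    "region":      ["NE", "NE", "SW", "SW", "NE", "SW"],
    "hour_of_day": [2, 23, 10, 11, 1, 14],
    "paid_loss":   [12000, 15000, 3000, 2500, 18000, 2800],
})

# One example characteristic to test: did the loss occur late at night?
claims["late_night"] = (claims["hour_of_day"] < 6) | (claims["hour_of_day"] >= 22)

# Compare average severity across the characteristic. A persistent gap
# is the kind of signal you'd eventually hand to underwriting.
print(claims.groupby("late_night")["paid_loss"].mean())
```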
Fraud detection is another area where ChatGPT or similar technologies may be able to help. Simple examples include auto claims in personal lines; more complex ones include automating fraud detection in workers’ compensation claims. GPT may be used to identify patterns that indicate fraud, such as numerous cases involving the same doctor.
“AI is really powerful there because you can identify what traits you want to query to see if there’s a pattern,” Gouveia said. Chief claims officers may want to write a query to see if there’s any pattern at all to a series of claims.
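A minimal sketch of that “same doctor” check, again on made-up data, might look like the following; a real fraud-scoring system would weigh many more signals:

```python
import pandas as pd

# Toy workers' comp claims keyed to the treating provider.
claims = pd.DataFrame({
    "claim_id": range(1, 9),
    "provider": ["Dr. A", "Dr. B", "Dr. A", "Dr. C",
                 "Dr. A", "Dr. A", "Dr. B", "Dr. A"],
})

counts = claims["provider"].value_counts()

# Flag providers whose claim volume sits well above the norm. One
# standard deviation works for this toy data; a production system
# would use a sturdier baseline (peer group, specialty, book size).
threshold = counts.mean() + counts.std()
print(counts[counts > threshold])
```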
Related content: Automated processing turns insurance claims from cost center to differentiator
ChatGPT issues and limitations for insurance
Explaining the results it comes up with is one issue for ChatGPT, in part because it is always extremely confident in those results. For example, I asked ChatGPT to solve a differential equation that any college student could handle. It totally whiffed. But the answer it gave sounded good and would’ve seemed plausible to someone who didn’t know much about the topic.
With some additional feedback from me, it did get to the correct answer. But the episode points to the need for a human to oversee the technology and verify that what it’s doing makes sense.
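Some of that verification can even be scripted. Here’s a small sketch using sympy, with a stand-in equation (the one I actually asked about isn’t reproduced here), of mechanically checking a model-supplied answer instead of taking its confident tone at face value:

```python
import sympy as sp

x, C = sp.symbols("x C")

# Stand-in ODE: y' + y = 0, to which the model is assumed to have
# answered y(x) = C * exp(-x).
candidate = C * sp.exp(-x)

# Substitute the candidate into the left-hand side of the ODE.
residual = sp.diff(candidate, x) + candidate

# If the residual simplifies to zero, the model's answer checks out.
assert sp.simplify(residual) == 0
print("y(x) = C*exp(-x) satisfies y' + y = 0")
```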
ChatGPT also has some distinct limitations in terms of how effective it may be. “It doesn’t understand what you do unless you do something that the rest of the internet understands,” as Gouveia put it. “And that probably doesn’t describe an underwriter at an insurance company.”
To illustrate her point, she told of a colleague who asked ChatGPT to write him a healthy diet. The result was a diet that called for plenty of cake. “Cake is good and associated with positivity,” she said, and perhaps ChatGPT counted cake’s ample saturated fat in its favor. Whatever the reason, here again, it pays to have someone monitoring the sorts of results ChatGPT comes up with.
These are just a few examples of the ways insurance companies may choose to use (or beware of) ChatGPT and other GPT-based AI technologies. Check out the full podcast to learn more, including the potential dangers around corporate data leaking into the public domain and how customers may take it upon themselves to use ChatGPT to vet insurance companies and their offerings. Find the transcript here.
Check out the full Unstructured Unlocked podcast on your favorite platform, including: