With generative artificial intelligence technology, including ChatGPT, we are in the midst of a technology revolution on par with the web browser and smartphone, but probably not in the way you think – and not without challenges for the enterprise.
Those were some of the key messages I tried to get across in my recent guest spot on RIMScast, hosted by Justin Smulison, Business Content Manager at RIMS, the Risk and Insurance Management Society.
As Smulison correctly surmised, ChatGPT did not arrive overnight, even though it may seem that way. It is the culmination of a decade or so of work in AI technologies, including deep learning, large language models, and generative AI. Now it’s all coming together to create this “iPhone moment” for artificial intelligence, where – thanks to ChatGPT – literally anyone can experience and recognize the value of AI technology.
How ChatGPT impacts the enterprise
When Smulison asked me about the impact of ChatGPT on insurance companies and enterprises in general, my answer may have surprised him. While some position ChatGPT as a Swiss army knife that can solve almost any problem, my take is that its biggest impact is as a fundamentally new software programming paradigm.
For the first time, we have a programming paradigm that uses plain English as the programming interface. To me, that’s profound, and here’s why.
There are roughly 25 million software engineers in the world, folks who are skilled in the art of software programming. But more than a billion people speak the English language. Conceivably, all of those English speakers can now program using ChatGPT.
I touched on this during another guest spot, on Indico Data’s own Unstructured Unlocked podcast. ChatGPT understands programming languages, such as C or Python. That means anyone can program in those languages simply by giving ChatGPT instructions, in plain English, on what they want the code to do. Simply put, if you can speak English, you can now write software code using ChatGPT.
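As a hypothetical illustration, a plain-English instruction like “write a function that totals the line items on an invoice” might yield Python along these lines. The function name, data shape, and sample values below are my own invention, not actual ChatGPT output:

```python
# Hypothetical example: the kind of code ChatGPT might return for the
# plain-English prompt "write a function that totals the line items on an invoice."
def total_invoice(line_items):
    """Sum the 'amount' field across a list of invoice line items."""
    return sum(item["amount"] for item in line_items)

items = [{"amount": 120.0}, {"amount": 75.5}, {"amount": 4.5}]
print(total_invoice(items))  # 200.0
```

The point is not the code itself, which is trivial, but that the person who asked for it never had to know Python syntax to get it.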
Challenges with ChatGPT
So, the good news is, all of your employees could now conceivably be considered software programmers. The bad news is you don’t want all of your employees putting code into production. So that’s an inherent risk in all this.
What’s required is some thought process, structure, and guardrails. One big consideration is the data you’re going to feed ChatGPT, or any other large language model. All of these tools are only as good as the data they’ve been trained on. In the case of ChatGPT, that’s essentially the entire internet up to around 2021.
Think about that. We all know the internet is full of erroneous, biased, inappropriate, and even dangerous content. If you let employees run amok with ChatGPT, you’ll have no way of ensuring such data doesn’t make its way into the answers or code that ChatGPT produces for you.
So, apply good software development practices to govern how this new tool is used and to ensure compliance with your existing policies. I like to talk about “Safety, Security, and Scalability,” or the three Ss. You need a framework built around the three Ss before putting this technology to work.
There’s also a gap between the way AI functions and how humans work. In short, when asked a question, ChatGPT is known to make up answers. It’s not done maliciously, but simply because it’s something of an AI blind spot. It’s been trained on specific data and is built to provide what at least sound like plausible answers, no matter what. We can fix these blind spots, but we have to find them first; until then, they pose a risk.
If you’re using ChatGPT to make decisions, you need a way to ensure those are quality, complete decisions, not fabricated ones. It really comes down to precision and recall. Precision asks: is what I gave you correct, whether it’s an answer to a question or a set of data? Recall asks: did I find all the possible answers or data that I should have?
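To make those two measures concrete, here is a minimal sketch, with invented sample answers, of how precision and recall are computed from what a model returned versus what it should have found:

```python
# Minimal illustration of precision and recall.
# returned: answers the model gave; relevant: answers it should have found.
def precision_recall(returned, relevant):
    returned, relevant = set(returned), set(relevant)
    true_positives = len(returned & relevant)
    precision = true_positives / len(returned) if returned else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# The model returned 4 answers, 3 of them correct, out of 6 it should have found.
p, r = precision_recall(["a", "b", "c", "x"], ["a", "b", "c", "d", "e", "f"])
print(p, r)  # 0.75 0.5
```

High precision with low recall means the answers you got are trustworthy but incomplete; the reverse means you found everything but buried it in noise. A decision process built on an LLM needs to track both.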
Related content: Insurance automation: How to find the best products and measure their effectiveness
How AI can help with sensitive data
All of that said, AI and large language models can be a great help to companies in insurance, financial services, and other industries that have to consistently quantify risk. Much of what auditors and risk managers do is consume vast amounts of unstructured data – contracts, policies, emails, and other documents – to find patterns that suggest risky behavior or exposure. It really is like trying to find the proverbial needle in the haystack.
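As a deliberately simplified sketch of that needle-in-a-haystack task (the risk terms and sample documents here are invented for illustration, and a real intake platform would use trained models rather than keyword matching), a first-pass scan might flag documents containing risk-related language before a human or model reviews them:

```python
# Toy first-pass filter: flag documents that mention risk-related terms.
# Term list and documents are invented; real systems use ML models, not keywords.
RISK_TERMS = {"indemnify", "liability", "breach", "penalty"}

def flag_risky(documents):
    """Return indices of documents containing any risk-related term."""
    flagged = []
    for i, doc in enumerate(documents):
        words = set(doc.lower().split())
        if words & RISK_TERMS:
            flagged.append(i)
    return flagged

docs = [
    "The vendor shall indemnify the client against all claims.",
    "Meeting notes: quarterly planning and budget review.",
]
print(flag_risky(docs))  # [0]
```

Even this toy version shows the shape of the problem: the value is in narrowing thousands of documents down to the handful worth a risk manager’s attention.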
AI technologies, including the Indico Data intelligent intake platform, can be a big help in that effort. I’ve seen a huge uptick in risk management professionals wanting to embrace technology as a “bionic arm” to allow a much faster and broader perspective on potential threat vectors and exposures.