ChatGPT is all the rage, but from a business perspective, is it more than a passing fad or consumer oddity? In short, yes.
ChatGPT – and the artificial intelligence technology behind it – is applicable to business use cases, including process automation and intelligent intake. Read on to learn where it came from and what it can do for commercial insurance, financial services and banking, and commercial real estate companies.
For commercial insurance: improved chatbots; faster claims decision-making; extraction, classification, and summarization of underwriting submission documents; fraud detection.
For commercial real estate: automating processes around lease agreements, invoices, rent rolls, contracts, lease administration, and more.
ChatGPT is an evolution of a type of artificial intelligence model known as a large language model, or LLM.
Large Language Models (LLMs) are sophisticated computer models designed to process and generate human-like text. They’re trained on vast amounts of text data and have many internal components, or parameters, that help them discern patterns in language. The term “large” signifies both the complexity of the model and the extensive data it learns from.
A prominent type of LLM is the Generative Pretrained Transformer (GPT). Although it doesn’t understand language in the human sense, it’s adept at predicting what word should logically follow in a sentence by referring to the patterns it has learned.
When we say GPT is “generative,” it means it can produce or “generate” text based on a given input or prompt. It does this by predicting a suitable response, constructing it word by word based on the patterns it’s learned.
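The word-by-word idea can be illustrated with a toy sketch. This is emphatically not how GPT works internally – a real model uses a neural network with billions of learned parameters, not word counts – but the autoregressive loop below (predict a next word, append it, repeat) mirrors the generation process described above. The tiny corpus and greedy word choice are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy autoregressive text generator: learn which word tends to follow
# which from a tiny corpus, then generate text one word at a time by
# always appending the most likely continuation of the last word.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word, length=5):
    """Greedily extend a one-word prompt, one word at a time."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on ..."
```

A real GPT model does the same loop at vastly greater scale, scoring every token in its vocabulary against the entire preceding context rather than just the previous word.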
“Pretrained” signifies that GPT has been initially trained with a wide-ranging body of text data, enabling it to identify common patterns and contexts in language. This is a crucial step that helps GPT make intelligent predictions when generating text.
The “transformer” in GPT refers to a specific model architecture it uses, which allows GPT to consider the context of each word in a sentence. It does this by weighing the relevance of each word when predicting the next one, which is critical for understanding context and generating coherent responses.
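That relevance-weighing step is the heart of attention, and the arithmetic behind it can be sketched in a few lines. The 3-dimensional vectors below are purely illustrative (real transformers use learned query/key/value projections over thousands of dimensions), but the mechanics are the same: compare one word's vector against the others, then turn the similarity scores into weights that sum to 1.

```python
import math

def softmax(scores):
    """Convert raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """Weight each key by its scaled dot-product similarity to the query."""
    dim = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
              for key in keys]
    return softmax(scores)

# Illustrative example: disambiguating "bank" in "river bank".
# The query vector points in roughly the same direction as "river",
# so "river" receives most of the attention weight.
query = [1.0, 0.0, 1.0]                 # hypothetical vector for "bank"
keys = [[1.0, 0.2, 0.9],                # "river" - similar to the query
        [0.1, 1.0, 0.0]]                # "money" - mostly orthogonal
print(attention_weights(query, keys))
```

The higher weight on the first key is what "considering the context of each word" means in practice: contextually relevant words contribute more to the next-word prediction.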
We’ve seen a progression of GPT models from OpenAI that have become increasingly capable, largely because they’ve been trained on more data.
ChatGPT, then, is additionally trained on tens of thousands of pieces of human feedback – responses scored by humans for preference. That, along with the massive increase in training data, gives it an order-of-magnitude improvement in function over older models.
The GPT series, including ChatGPT, is not the only game in town when it comes to transformer LLMs, however. Numerous other models serve different functions and have different strengths, weaknesses, and compute requirements. The best choice for business use will depend on a combination of factors specific to the problem you’re trying to solve, as well as the compute resources available.
GPT models specifically are good at predicting the next word in a series or sentence. While that may seem simplistic, it becomes quite powerful when the model is trained on trillions of words or tokens.
It means you can ask complicated questions and get reasonable answers, especially if you’re adept at steering the conversation. A good GPT model will try to cooperate with your instructions and get the answer you’re looking for.
It helps, however, to have some knowledge of the topic going in. If you ask ChatGPT to explain quantum mechanics, it will confidently come up with an answer. But if you have no idea what quantum mechanics is all about, you’ll have no way of knowing whether the answer is accurate. If you do, and you know the answer is off base, you can guide ChatGPT to correct itself, and it will.
GPT models are also good at summarizing vast amounts of data, such as legal language or the various inputs that may go into an insurance underwriting scenario. Here again, there’s opportunity for people to develop the skill of asking effective questions to get useful responses.
Putting GPT models to use in a business environment generally requires a platform that makes them accessible and flexible enough to apply to different business use cases.
Indico Data, for example, is focused on intelligent intake, which involves using large language models to read various sorts of documents, including unstructured content like emails, PDFs and images. Having been trained on massive data sets, GPT models are well-suited for that sort of function.
But the Indico Data platform extends the power of the model to various industry-specific use cases by making it simple for users to label the data that the business deems important in any given document.
Therein lies an important distinction. ChatGPT, while great for individual use, is not intended for the sort of process automation that intelligent intake tackles. It’s more for responding to prompts and answering questions – hence the term “chat.”
Use cases for GPT-based large language models in commercial insurance include automating claims handling, where an intelligent intake model can address first notice of loss (FNOL) processing. This typically involves numerous documents, including ACORD forms, images of damage, adjuster notes, and more. Intelligent intake enables insurance associates to easily create models that read all of this material, classify each document, extract relevant data, and input it into a downstream processing system such as Guidewire.
GPT models also hold potential to summarize numerous claims documents into bullet points, making it easier for claims handlers to compare a new claim to previously paid ones and make decisions accordingly. That would serve to speed up claims handling and reduce loss ratios.
Insurance companies can also automate underwriting processes using GPT-based intelligent intake models. Similar to the claims use case, the models can read and classify underwriting submission documents such as statements of value, extract data, summarize documents, and aid in comparisons.
By applying additional AI functions, insurance companies could also make predictions on the likelihood a policy will result in a claim – before ever issuing the policy. Then they can make more informed decisions that lead to better loss ratios.
Use cases for GPT-based intelligent intake models in commercial real estate include lease agreement process automation. Models can be trained to extract essential information from the agreements and enter it into downstream systems, such as an ERP platform.
Similarly, intelligent intake models can automate rent roll processing, enabling companies to extract far more data than would likely be feasible when done manually. Armed with additional data, real estate companies can make better-informed decisions and apply AI-based analytics to find opportunities as well as red flags.
GPT-based intelligent intake models can also help real estate firms deal with all the legal and contractual documents that are inherent to the business. Models can automate the “reading” of these documents and extract valuable information to aid in areas including lifecycle management and risk analysis.
Intelligent intake models can help financial institutions more quickly onboard new customers by automating the processing of the reams of documentation that come with each new client.
GPT-based models can also automate mortgage processing by reading W-2s, bank statements, tax returns, purchase and sale agreements, and other required documentation, and extracting relevant data. Intelligent intake can dramatically speed up the mortgage application process, enabling companies to get to the analysis and decision phase far more quickly.
Automating the processing of ISDA Master Agreements is a dramatic time-saver for financial institutions, given each document is some 28 pages long, with plenty of variety among them. Intelligent intake can take processing time from 2 hours per agreement to just minutes.
Complying with U.S. regulations around detecting money laundering means collecting numerous documents to prove clients are legitimate, as well as ongoing monitoring for negative news about them. An intelligent intake platform can automate significant portions of the job, saving valuable time and money.