

ChatGPT: What it means for business

ChatGPT, and GPT technology in general, can help with business use cases around process automation in commercial insurance, banking and financial services, commercial real estate, and more.


On this page, learn:

  • Where ChatGPT came from
  • Why GPT models are transformative for business use cases
  • GPT use cases in commercial insurance
  • GPT use cases in banking and financial services
  • GPT use cases in commercial real estate

ChatGPT is all the rage, but from a business perspective is it more than a passing fad or consumer oddity? In short, yes.

ChatGPT – and the artificial intelligence technology behind it – is applicable to business use cases, including process automation and intelligent intake. Read on to learn where it came from and what it can do for commercial insurance, financial services and banking, and commercial real estate companies.


Sample ChatGPT use cases

  • Commercial insurance: improved chatbots; faster claims decision-making; underwriting submission document extraction, classification, and summarization; fraud detection
  • Commercial real estate: automated processing of lease agreements, invoices, rent rolls, contracts, lease administration documents, and more
  • Commercial banking/financial services: automated customer onboarding, mortgage processing, and ISDA master agreement handling; easier compliance with anti-money laundering regulations; improved chatbots

Defining the GPT in ChatGPT


ChatGPT is an evolution of a type of artificial intelligence model known as a large language model, or LLM.

Large Language Models (LLMs) are sophisticated computer models designed to process and generate human-like text. They’re trained on vast amounts of text data and have many internal components, or parameters, that help them discern patterns in language. The term “large” signifies both the complexity of the model and the extensive data it learns from.

A prominent type of LLM is the Generative Pretrained Transformer (GPT). Although it doesn’t understand language in the human sense, it’s adept at predicting what word should logically follow in a sentence by referring to the patterns it has learned.

When we say GPT is “generative,” it means it can produce or “generate” text based on a given input or prompt. It does this by predicting a suitable response, constructing it word by word based on the patterns it’s learned.

“Pretrained” signifies that GPT has been initially trained with a wide-ranging body of text data, enabling it to identify common patterns and contexts in language. This is a crucial step that helps GPT make intelligent predictions when generating text.

The “transformer” in GPT refers to a specific model architecture it uses, which allows GPT to consider the context of each word in a sentence. It does this by weighing the relevance of each word when predicting the next one, which is critical for understanding context and generating coherent responses.
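That relevance-weighing step, scaled dot-product attention, can be sketched in a few lines of plain Python. This is a deliberately minimal stand-in, not the actual GPT implementation: real transformers project tokens into separate query, key, and value spaces and run many such heads in parallel, while here the token vectors are toy values and Q = K = V.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize so weights sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Toy scaled dot-product self-attention over token vectors X."""
    d = len(X[0])
    out = []
    for q in X:                                        # one query per token
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]                          # relevance to every token
        w = softmax(scores)                            # attention weights
        out.append([sum(wi * X[i][j] for i, wi in enumerate(w))
                    for j in range(d)])                # weighted blend of values
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]          # three 2-d token vectors
print(self_attention(tokens))
```

Each output row is a convex combination of the input vectors, which is how a token's representation comes to reflect its context.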

Progression of OpenAI GPT models

We’ve seen a progression of GPT models from OpenAI that have become increasingly more functional, largely because they’ve been trained on more data.

  • GPT-1, released in 2018, had some 117 million parameters and could answer simple questions related to its training data.
  • GPT-2 debuted in 2019 with roughly 10x more parameters, about 1.5 billion. It could handle tasks including translation, summarization, and sentence prediction.
  • GPT-3, released in 2020, is considered OpenAI’s breakthrough LLM. With some 175 billion parameters, it can generate paragraphs of text that read like human writing.
  • GPT-3.5 is a version of the GPT-3 model designed as a general-purpose chatbot, capable of handling a wide range of subjects. It is what ChatGPT was based on at its release in November 2022, the first GPT model suitable for public use thanks to its simplified interface. Its training also incorporated human feedback to help guide it toward correct answers.
  • GPT-4 has even more parameters (the exact number has not been disclosed), incorporates an additional six months of human and AI feedback, and is trained on newer data, up through September 2021 (vs. June 2021 for GPT-3.5). GPT-4 improves on both context handling and conversational ability, meaning it can refine its responses based on feedback users give it in the course of a chat.

ChatGPT, then, is trained on tens of thousands of pieces of human feedback – essentially model responses scored by humans for preference. That, along with the massive increase in training data, makes it an order of magnitude more capable than older models.


Other transformer models

The GPT series, including ChatGPT, is not the only game in town when it comes to transformer LLMs, however. Other models include:

  • Google BERT: Introduced and open-sourced by Google in 2018, BERT was capable of sentiment analysis, semantic role labeling, sentence classification, and disambiguating words with multiple meanings. Google had enough faith in it to use it in its search engine, essentially scoring web pages for relevance to the topics users searched for.
  • Meta LLaMA: Introduced to the open source community in early 2023 by Facebook’s parent company (but not commercially licensable), LLaMA is intended to be a “smaller, more performant model” that lets researchers without access to large amounts of infrastructure study LLMs. The smallest LLaMA model was trained on one trillion “tokens,” or pieces of words, while the largest was trained on 1.4 trillion tokens drawn from 20 languages.
  • Google Bard: Also in early 2023, Google unveiled a test version of Bard, an LLM based on its Language Model for Dialogue Applications (LaMDA). Similar in function to ChatGPT, Bard is the latest in a series of Google AI technologies, including LaMDA, PaLM, Imagen, and MusicLM, that Google intends to incorporate into its search platform.

The list goes on. The point is, numerous LLMs serve different functions, and have different strengths, weaknesses, and compute requirements. The best choice for business use will depend on a combination of factors specific to the case you’re trying to solve as well as available compute resources.

Strengths of GPT Models

GPT models specifically are good at predicting the next word in a series or sentence. While that may seem simplistic, it becomes quite powerful when the model is trained on trillions of words or tokens.

It means you can ask complicated questions and get reasonable answers, especially if you’re adept at steering the conversation. A good GPT model will try to cooperate with your instructions and get the answer you’re looking for.
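The next-word mechanic itself can be illustrated with a deliberately tiny stand-in: a bigram model that, for each word in a toy corpus, remembers which word most often follows it. A GPT model does something vastly more sophisticated over trillions of tokens, but the prediction loop is conceptually similar.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model learns from trillions of tokens.
corpus = ("the policy covers water damage . "
          "the policy covers fire damage . "
          "the claim covers water damage .").split()

# For each word, count which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("policy"))   # -> covers
print(predict_next("covers"))   # -> water
```

Chaining such predictions word by word is, at a very crude level, how generated text is constructed.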

It helps, however, to have some knowledge of the topic going in. If you ask ChatGPT to explain quantum mechanics, it will confidently come up with an answer. But if you have no idea what quantum mechanics is all about, you’ll have no way of knowing whether the answer is accurate. If you do, and you know the answer is off base, you can guide ChatGPT to correct itself, and it will.

GPT models are also good at summarizing vast amounts of data, such as legal language or the various inputs that may go into an insurance underwriting scenario, for example. Here again, that creates opportunity for people to develop the skill to ask effective questions to get useful responses.
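GPT summarization is abstractive and can't be reproduced in a few lines, but the pipeline step it fills can be sketched with a crude extractive stand-in that scores sentences by word frequency. Everything here (the scoring scheme and the sample adjuster notes) is invented for illustration.

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Crude extractive summary: keep the n sentences whose words
    are most frequent across the whole document."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    return sorted(sentences, key=score, reverse=True)[:n]

notes = ("Water damage was reported at the insured property. "
         "The adjuster inspected the water damage on site. "
         "Lunch was unremarkable.")
print(summarize(notes))
```

A GPT model would instead write a fresh condensed passage, but the input/output shape of the task is the same.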

Putting GPT models to work

Putting GPT models to use in a business environment generally requires a platform that makes them accessible and flexible enough to apply to different business use cases.

Indico Data, for example, is focused on intelligent intake, which involves using large language models to read various sorts of documents, including unstructured content like emails, PDFs and images. Having been trained on massive data sets, GPT models are well-suited for that sort of function.

But the Indico Data platform extends the power of the model to various industry-specific use cases by making it simple for users to label the data the business deems important in any given document.

Therein lies an important distinction. ChatGPT, while great for individual use, is not intended for the sort of process automation that intelligent intake tackles. It’s more for responding to prompts and answering questions – hence the term “chat.”

Sample use cases for GPT models

In terms of use cases for GPT models, including ChatGPT, chatbots are low-hanging fruit that could apply to nearly any industry, from insurance and financial services to commercial real estate and more. When trained on data specific to a given vertical industry, chatbots based on GPT models should prove far more effective than current generations.

GPT in insurance

Claims process automation

Use cases for GPT-based large language models in commercial insurance include automating claims handling, where an intelligent intake model can address first notice of loss (FNOL) processing. This typically involves numerous documents, including ACORD forms, images of damage, adjuster notes, and more. Intelligent intake enables insurance associates to easily create models that read all of this material, classify each document, extract relevant data, and input it into a downstream processing system such as Guidewire.
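As a hedged sketch of that classify-then-extract step: the labels, keyword cues, and sample document below are made up for illustration, and a production intake platform would use fine-tuned language models rather than keyword matching.

```python
import re

# Illustrative keyword cues per document type (a stand-in for a
# trained classifier; labels and cues are hypothetical).
RULES = {
    "acord_form":     ["acord", "producer", "insured"],
    "adjuster_notes": ["adjuster", "inspection", "observed"],
    "damage_photos":  ["photo", "image", "exhibit"],
}

def classify(text):
    """Pick the label whose keywords appear most often in the text."""
    lowered = text.lower()
    return max(RULES, key=lambda label: sum(kw in lowered for kw in RULES[label]))

def extract_claim_number(text):
    """Pull a claim number like CLM-2023-0042 out of free text."""
    m = re.search(r"claim\s*(?:no\.?|number)[:\s]+([A-Z0-9-]+)", text, re.I)
    return m.group(1) if m else None

doc = "ACORD 1 form. Insured: Acme Co. Claim number: CLM-2023-0042."
print(classify(doc), extract_claim_number(doc))  # acord_form CLM-2023-0042
```

The classified label and extracted fields are what would then be handed to a downstream system such as Guidewire.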

GPT models also hold potential to summarize numerous claims documents into bullet points, making it easier for claims handlers to compare a new claim to previously paid ones and make decisions accordingly. That would serve to speed up claims handling and reduce loss ratios.

Automating underwriting

Insurance companies can also automate underwriting processes using GPT-based intelligent intake models. Similar to the claims use case, the models can read and classify underwriting submission documents such as statements of value, extract data, summarize documents and aid in conducting comparisons.

By applying additional AI functions, insurance companies could also make predictions on the likelihood a policy will result in a claim – before ever issuing the policy. Then they can make more informed decisions that lead to better loss ratios.


GPT in commercial real estate

Lease agreement processing

Use cases for GPT-based intelligent intake models in commercial real estate include lease agreement process automation. Models can be trained to extract essential information from the agreements and enter it into downstream systems, such as an ERP platform.

Rent roll processing

Similarly, intelligent intake models can automate rent roll processing, enabling companies to extract far more data than would likely be feasible when done manually. Armed with additional data, real estate companies can make better-informed decisions and apply AI-based analytics to find opportunities as well as red flags.
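To make that payoff concrete, here is a minimal sketch of what becomes possible once rent roll data has been extracted into structured records; the column names and figures are invented for illustration.

```python
import csv
import io

# Hypothetical rent roll after extraction into delimited text.
raw = """unit,tenant,sq_ft,monthly_rent
101,Acme Legal LLP,1200,3600
102,Beta Dental,800,2500
103,VACANT,950,0
"""

rows = list(csv.DictReader(io.StringIO(raw)))
occupied = [r for r in rows if r["tenant"] != "VACANT"]

# Once the data is structured, portfolio metrics fall out directly.
total_rent = sum(int(r["monthly_rent"]) for r in occupied)
occupancy = len(occupied) / len(rows)
rent_per_sqft = total_rent / sum(int(r["sq_ft"]) for r in occupied)
print(total_rent, round(occupancy, 2), round(rent_per_sqft, 2))
```

Metrics like occupancy and rent per square foot across an entire portfolio are exactly the kind of analysis that manual rent roll handling makes impractical.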

Legal and contracts document automation

GPT-based intelligent intake models can also help real estate firms deal with all the legal and contractual documents that are inherent to the business. Models can automate the “reading” of these documents and extract valuable information to aid in areas including lifecycle management, and risk analysis.

GPT in commercial banking and financial services

Customer onboarding

Intelligent intake models can help financial institutions more quickly onboard new customers by automating the processing of the reams of documentation that come with each new client.

Mortgage processing

GPT-based models can also automate mortgage processing by reading W-2s, bank statements, tax returns, purchase and sale agreements, and other required documentation, and extracting relevant data. Intelligent intake can dramatically speed up the mortgage application process, enabling companies to get to the analysis and decision phase far more quickly.

ISDA Master Agreements

Automating the processing of ISDA Master Agreements is a dramatic time-saver for financial institutions, given that each document runs some 28 pages, with plenty of variety among them. Intelligent intake can cut processing time from about two hours per agreement to just minutes.

Anti-money Laundering

Complying with U.S. regulations around detecting money laundering means collecting numerous documents to prove clients are legitimate, as well as ongoing monitoring for negative news about them. An intelligent intake platform can automate significant portions of the job, saving valuable time and money.
