As commercial banks seek to automate document-intensive processes, they need to be wary of artificial intelligence technologies that leave them unable to explain how or why decisions were made. The antidote to such “black box” technologies is “explainable AI.”
It’s an issue because any perception of bias in commercial banking practices can lead to charges of discrimination based on sex or ethnicity, which in turn can result in fines and reputational damage.
Apple faced just such charges a couple of years ago when it unveiled its Apple Card credit card with Goldman Sachs. Customers fumed publicly, accusing Apple and Goldman Sachs of discriminating against women. Men, including Apple co-founder Steve Wozniak, went public with stories of their wives being granted credit limits one-tenth to one-twentieth of their own.
Apple and Goldman denied the charges of bias but struggled to back up the denial. As Wired reported: “The response from Apple just added confusion and suspicion. No one from the company seemed able to describe how the algorithm even worked, let alone justify its output. While Goldman Sachs, the issuing bank for the Apple Card, insisted right away that there isn’t any gender bias in the algorithm, it failed to offer any proof.”
The use of intelligent document processing in commercial banking means banks must be mindful of explainability as they seek to automate numerous processes, from authorizing loans and lines of credit to investment banking services. Should a customer complain about a low credit limit, or should the bank be subject to an audit, the bank needs to be able to explain why its AI tool made the decisions it did.
The black box AI problem
Doing that requires that companies understand how their AI solutions arrive at the decisions they make. If they can’t, the AI is considered a “black box,” meaning the rationale is hidden from view.
Cynthia Rudin, professor of computer science at Duke University, identifies two types of black box AI. The first involves deep neural networks, the architecture used in deep learning algorithms, which stack layer upon layer of variables during model training. “As neural networks grow larger and larger, it becomes virtually impossible to trace how their millions (and sometimes, billions) of parameters combine to make decisions. Even when AI engineers have access to those parameters, they won’t be able to precisely deconstruct the decisions of the neural network,” according to an explanation of Rudin’s work at TechTalks.com.
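To get a sense of the scale involved, here is a minimal, purely illustrative sketch in Python. The layer sizes are hypothetical and not drawn from any real credit model; the point is simply how quickly the parameter count of a fully connected network grows:

```python
# Illustrative only: hypothetical layer sizes, not from any real credit model.
layer_sizes = [1024, 2048, 2048, 2048, 512, 1]

def count_parameters(sizes):
    """Weights plus biases for each fully connected layer."""
    total = 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

print(count_parameters(layer_sizes))  # roughly 11.5 million parameters
```

Even at this modest size, inspecting individual weights says little about why a particular applicant was scored the way they were, which is exactly the tracing problem Rudin describes.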
The other type of black box AI is the use of proprietary algorithms by AI vendors who don’t want to disclose the inner workings of their systems, whether for competitive reasons or to prevent “bad actors from gaming the system,” according to TechTalks.
Without the ability to explain AI decision-making, the possibility of bias exists – even if unintended. As a report from the Brookings Institution explains: “Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions which can have a collective, disparate impact on certain groups of people even without the programmer’s intention to discriminate.”
The need for explainable AI
“Explainable AI” aims to avoid such issues by making it clear how an AI solution arrived at the decision it did. With that in hand, commercial banks will be able to tweak AI models, or the data on which they are trained, in order to avoid bias.
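As a deliberately simplified illustration of what such an explanation can look like, the sketch below fits a transparent logistic regression to toy data and breaks a single applicant’s score into per-feature contributions. The feature names and data are hypothetical, and this is not a description of Indico’s method; it only shows the kind of output an auditor or a customer-facing team can reason about:

```python
# A minimal sketch of one explainability technique: per-feature contributions
# from an interpretable model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_to_income", "credit_history_years", "num_open_accounts"]

# Toy data standing in for historical application records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one decision: each feature's contribution to this applicant's score.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>22}: {value:+.2f}")
```

With attributions like these in hand, a bank can check whether a decision leaned on legitimate factors and decide whether the model, or the data it was trained on, needs adjusting.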
Commercial banks have good reason to adopt explainable AI: government regulation. The Federal Trade Commission in April issued guidance on “truth, fairness, and equity in your company’s use of AI,” with the aim of helping companies “harness the benefits of AI without inadvertently introducing bias or other unfair outcomes.”
As its guidance points out, the FTC already enforces laws that apply to intelligent automation in banking, including the FTC Act, which prohibits unfair or deceptive practices. Additionally, the Fair Credit Reporting Act “comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.” Similarly, the Equal Credit Opportunity Act makes it illegal for a company to use a biased algorithm in credit decision-making.
Open, transparent, unbiased
At Indico, we have a simpler reason for ensuring all of the decisions made by our Intelligent Process Automation platform are explainable: because it’s the right thing to do. Ensuring that our AI is open, transparent, unbiased and explainable is a philosophical stance that our co-founder, Slater Victoroff, has insisted on from day one. Our tool comes with explainability features that make it a simple matter to determine why the model made any decision.
If you come across other intelligent document processing tools that can’t offer the same level of explainability, it’s likely for one of two reasons. The first is that it’s not “real” AI in the first place, but just a set of rules and templates that automate repetitive manual processes. The other is that the vendor can’t offer explainability because they don’t fully understand how their algorithms work. That’s the equivalent of saying, “Trust me. It’s fine.”
To learn more about how the Indico platform works, check out our intelligent process automation overview. And feel free to contact us with any questions; we’ve got nothing to hide.