Speaking of the Economy
Oct. 25, 2023

AI and Banking Supervision

Audiences: Bankers, Regulators, General Public

Tom Bilston and Tony Murray of the Richmond Fed's bank supervision team discuss how banks use machine learning, chatbots and other forms of AI, as well as what these technological innovations mean for bank examiners, both in terms of the risks they monitor and how they do their job.

Transcript


Tim Sablik: Hello, I'm Tim Sablik, a senior economics writer at the Richmond Fed. My guests today are Tom Bilston, Assistant Vice President and Horizontal Team Lead, LISCC Program, Capital Retail Credit; and Tony Murray, SRC Strategy, Risk and Innovation Team Lead. Tom and Tony, welcome to the show.

Tom Bilston: Thanks, Tim.

Tony Murray: Thanks, Tim.

Sablik: I'll resist the urge to make a joke about having the three T's here on Speaking of the Economy [laughter].

Today, I'm excited to have you both on here to discuss how the banking industry is using artificial intelligence or AI, and how bank supervisors and regulators at the Fed are evaluating the potential benefits and costs of this emerging technology.

Before we dive into all of that, I realized as I was preparing for this show that we've never actually covered the Fed's banking supervisory role on the podcast before. For listeners who might not be familiar with that aspect of the Fed's work, could you tell us a bit about what your teams do?

Bilston: I run a team of bank examiners and quantitative analysts that monitor and assess retail credit risk in the Fed's LISCC Program. LISCC is Large Institution Supervision Coordinating Committee. It's a coordinating body that works to operationalize the supervision of large financial institutions that pose the greatest risk to U.S. financial stability.

Bank examiners leverage a wide range of sources, with the aim of understanding risks of the banks and ultimately assessing the firm's ability to manage or mitigate those risks. Some examiners focus on quantitative tools or models, and others more on retail products. The retail aspect of this, or you could say consumer, is most easily thought of as loans to individuals such as loans for housing, cars, credit cards, personal, student, and even small business. There are other teams that focus on other risks, such as interest rate risk, wholesale credit risk, or counterparty credit risk.

It's probably also worth noting that I was the lead of an AI working group of a systemwide fintech supervisory program, where we monitored and sought ways to effectively supervise banks' use of AI.

Murray: I run the strategy, risk and innovation team for the Supervision, Regulation and Credit department. The work is split across two different workstreams.

First, we have the strategy and risk component. While Tom was talking about understanding the risks associated with banks and their retail credit, my job is more inward focused: understanding our top internal risks and then ensuring that, given those risks, we're still meeting the strategic objectives we've set out.

I also have a team that works to leverage emerging technologies to make the life of a bank examiner easier. This includes utilizing technologies like machine learning and artificial intelligence.

Sablik: That's the perfect segue. AI applications have recently burst onto the mainstream, thanks to advances in machine learning and applications that some listeners may be familiar with like ChatGPT. Many industries are exploring how this technology might be applied to their work, and that includes banks. Tom, what are some of the ways that banks are already using AI?

Bilston: Banks are using and/or testing the use of AI across a range of areas: fraud detection, anomaly detection, ways to improve operational efficiency, ways to help with textual analysis, further personalization of services, and to build out chatbots. Some of the areas I'm closest to in my work use AI techniques to help build models that ultimately do a range of different things, such as credit decisioning and credit management.

While there have been many real leaps in the technology of late — for example, ChatGPT — it is worth noting that some of these concepts are not actually that new. Banks do have experience using them.

Sablik: What are some of the concerns that these applications might raise for bank supervisors?

Bilston: Before getting into some of the concerns and risks, I should note that the large banks I see in my day to day generally have established processes in place that they use to try and mitigate the risks. The banks have established guidance on how to mitigate model risk, handle new product approvals, manage third-party relationships, and so on.

Still, there are, of course, many concerns with any new technology or technique. One is operational risk — the risk of loss as a result of ineffective or failed internal processes, people, systems, or external events, which can disrupt the flow of business operations. Right now, there's a lot of excitement about AI. Banks are incentivized not to fall behind, and those incentives may encourage firms to operationalize AI products and services before they've properly tested them.

Another risk that I wanted to talk about is model risk. Firms may not understand well enough how an AI algorithm works, and then they use it incorrectly or interpret its results incorrectly. This problem exists with other models or applications. But AI approaches have the potential to complicate this further because of their complexity of approach, the speed at which new approaches have been developed, and potentially challenging interpretations of outcomes.

The third point I want to talk about is consumer compliance. I'm not an expert in this space. Working in retail credit, I do get a little exposure in my day to day and two concerns repeatedly come up.

One is bias. Developers, without knowing, may make a loan decisioning model that inadvertently rejects extending credit to protected classes — minorities and other groups that are protected by law. The argument is that because AI algorithms are better than pre-existing algorithms at picking up very small signals in the data, they can end up using those signals to drive the decision.

The second is adverse action codes. When a lender declines to extend credit, they're required to tell the borrower why they didn't extend credit. But the way these algorithms work makes this harder — not impossible, but harder. And the harder it is, the greater the probability of mistakes.
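With a traditional interpretable scorecard, those reason codes fall out of the model directly, which is why complex AI models make the attribution harder. A minimal sketch of the interpretable case — the feature names, weights, and threshold here are purely illustrative, not any real lender's model:

```python
# Hypothetical scorecard: each feature contributes (weight * value) to an
# approval score. Positive weights raise the score; negative weights lower it.
WEIGHTS = {
    "credit_utilization": -2.0,   # high utilization lowers the score
    "payment_history":     1.5,   # strong history raises the score
    "account_age_years":   0.3,
}
THRESHOLD = 1.0  # approve if the total score meets this cutoff

def decide(applicant):
    """Return (decision, reason_codes) for an applicant's feature values."""
    # Per-feature contributions to the overall score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= THRESHOLD:
        return "approved", []
    # Adverse action reasons: the features dragging the score down the most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return "declined", reasons

decision, reasons = decide(
    {"credit_utilization": 0.9, "payment_history": 0.4, "account_age_years": 2.0}
)
print(decision, reasons)  # → declined ['credit_utilization', 'payment_history']
```

Because every contribution is a simple product, the explanation is exact. With an opaque AI model, the contributions have to be estimated after the fact, which is where the room for mistakes comes in.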

Sablik: Thanks very much for that overview.

Tony, are bank supervisors at the Fed using AI to help with their work at all?

Murray: One of the things I'd really like to do is make a distinction here about some of the most newsworthy AI stories of 2023. Those are things like generative AI and ChatGPT, which you might have heard of as large language models. We're exploring opportunities to leverage this technology.

But we are still trying to understand the risks and issues that come with using this technology. And we're not alone here. A ton of other organizations are really doing the same sort of analysis to understand and see how we can best use the technology.

While we haven't really begun to use this sort of generative AI for any supervisory work, there is another component that we do use and would like to highlight. Machine learning is a subset of AI that is being used by bank supervisors. This includes utilizing natural language processing, which is the ability for an algorithm to go through a set of documents and extract important information from them without humans having to read them first. So, I would classify our use of AI as emerging, with extensive possibilities for growth as we continue to learn.
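To make that document-triage idea concrete, here is a toy sketch of the task NLP automates: scanning a set of filings for risk-related passages so an examiner reviews only the flagged excerpts. Real systems use trained language models; this illustration substitutes simple keyword matching, and the document names and risk terms are invented for the example:

```python
import re

# Illustrative risk vocabulary; a production system would use a trained model
# rather than a fixed keyword list.
RISK_TERMS = {"fraud", "default", "breach", "writedown"}

def flag_passages(documents):
    """Return (doc_id, sentence) pairs that mention a risk term."""
    flagged = []
    for doc_id, text in documents.items():
        # Naive sentence split on terminal punctuation followed by whitespace.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            words = {w.lower().strip(".,") for w in sentence.split()}
            if words & RISK_TERMS:
                flagged.append((doc_id, sentence))
    return flagged

docs = {
    "10-K": "Revenue grew 4%. We recorded a writedown on legacy assets.",
    "memo": "No issues were identified this quarter.",
}
print(flag_passages(docs))
# → [('10-K', 'We recorded a writedown on legacy assets.')]
```

Even this crude version shows the payoff: the examiner starts from a short list of flagged sentences instead of reading every document end to end.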

Sablik: Do you think that, as AI continues to grow and evolve, it will effect some changes on the job of bank supervisors?

Murray: I do. But I really want to caveat my answer here. Every technology change equates to some change in the tasks associated with the job. Email, the internet, cell phones — they all have changed the ways we work. So, just as those other technologies changed the way we work, AI will almost certainly do the same, and we will have to continue to adjust how we do our job. But the underlying critical assessment and analytical work that are core duties of the bank examiner cannot be replaced by AI.

Sablik: Thinking ahead a little bit to some of the potential innovations down the road, what are some ways that you think AI could potentially improve financial services as well as bank supervision? Tom, maybe we can start with you.

Bilston: I think there's a lot of potential. Most immediately, I think we will see improvements flow into customer services and communications. We'll see better chatbots, better anticipation of customer needs, faster loan decisioning, greater personalization of apps and services, and better investor analysis.

But, as I alluded to, there's a lot of potential here for risk. As a bank supervisor, it will be great to see AI being used to improve risk management functions.

Also, remember that if financial services companies can access these technologies, so can bad actors. While financial services companies can use AI to catch fraud, fraudsters can also use AI and machine learning to find gaps in those defenses.

Murray: As I stated previously, there's a critical component of bank supervision that AI will not be able to change around using examiner judgment and analytical work. Those are things that are critical assessments that require a human to do. But I am excited to see where AI will enable examiners to focus more on that work and less on tasks that they currently must complete to be able to do that work.

Sablik: How do you balance the potential benefits of new technologies like AI against the new risks that they could introduce, which you both talked a bit about? In the case of bank examiners, how do they ensure that banks are monitoring these risks appropriately?

Murray: This is something that internally we are thinking a lot about. As we think about newer technologies like generative AI, what are those risks associated with the type of data they're trained on? Are there inherent biases that are baked into these models? It's part of why we're being deliberative and thoughtful about deploying this technology.

Bilston: I agree with Tony. This is certainly an area that we think a lot about. The large banks I see in my day to day generally have established processes in place that they use to try and mitigate the risks. Examiners have guidance on how to mitigate model risk, handle new product approvals, manage third party relationships, and so on.

I should also note that I only see a sliver of the U.S. banking system in my day to day. If you ask someone supervising another bank portfolio, they may have different concerns.

Sablik: Tom and Tony, thanks so much for joining me today to talk about this exciting topic.

Bilston: Thank you.

Murray: Appreciate the time, Tim.

Sablik: If listeners are interested in learning more about this topic, we recently published an article about it in Econ Focus magazine. You can find a link to that as well as other related links on the show page. And if you enjoyed this episode, please consider leaving us a rating and review on your favorite podcast app.