Recommendations for responsible use of AI in financial services

The financial sector has long adopted the newest technologies to increase efficiency and profits while minimizing risk. Artificial intelligence (AI) brings opportunities to advance these goals, especially in information processing, and was quickly adopted by the retail financial services sector. In fact, the sector’s spending on AI is expected to grow to $97 billion by 2027, up from $35 billion in 2023.1 For years, AI has been used in banking, fraud detection, mortgage applications, and credit underwriting, in addition to other backend financial functions like data analytics. Many of these use cases are promising, offering the potential for greater accessibility and improved customer service, while others may introduce challenges, including concerns about bias and discrimination.
Yet the technology was introduced into a complex regulatory landscape. Because financial services is one of the country’s most highly regulated industries, several independent agencies have oversight over financial institutions, including the Federal Deposit Insurance Corporation (FDIC), Securities and Exchange Commission (SEC), Consumer Financial Protection Bureau (CFPB), and Commodity Futures Trading Commission (CFTC).2 Each of these agencies is granted distinct or overlapping authority to ensure regulatory compliance in pursuit of goals such as financial stability and taxpayer protection. The expansion of AI into the sector has also prompted calls for guidance on how these regulations apply to the technology and for an assessment of where additional safeguards are needed. Such efforts progressed in part under the prior administration.
In 2023, the CFPB released guidance on credit denials involving AI, explaining that “lenders must use specific and accurate reasons when taking adverse actions against consumers.”3 Additional guidance issued in 2023, which sought to empower the directors of the Federal Housing Finance Agency (FHFA) and the CFPB, required regulated entities “to use appropriate methodologies including AI tools to ensure compliance with Federal law,” “evaluate their underwriting models for bias,” “evaluate automated collateral-valuation and appraisal processes in ways that minimize bias,” and “combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing and in other real estate-related transactions.”4 While these measures are critical as the use of AI continues to grow in the industry, they have since been revoked, and related work has been stopped or eliminated at the current downsized CFPB.5
In this written testimony, I argue that the adoption and use of AI in the financial services sector are proceeding at an accelerated pace and require careful review. Risks abound that can undermine consumers’ ability to be economically resilient and to experience the benefits of wealth creation. Because AI models are often trained on inaccurate or discriminatory information, marginalized populations are at even higher risk when AI systems misjudge their qualifications and eligibility for economic opportunities, thereby widening the wealth gap and limiting access to homeownership, creditworthiness, and other financial transactions that bring hope and promise to their quality of life. For these reasons, Congress must continue to foster responsible and ethical AI use in the financial regulation sector by providing safeguards that protect consumers from AI risks now and into the future.