AI and Fair Lending: Why Financial Institutions Must Maintain Vigilance Despite Shifting Regulatory Priorities

Article | January 23, 2026

Authored by FMF&E


Artificial intelligence holds extraordinary potential to revolutionize credit allocation and expand access to financial services. Yet as financial institutions increasingly adopt AI-driven lending models, they face a critical question: How can they harness AI's promise while avoiding the pitfalls of algorithmic bias and discrimination?

The Risks of Reducing Vigilance

Even though regulators have de-emphasized disparate impact and shifted their examination focus toward disparate treatment of members, financial institutions should not lose sight of proper planning. Scaling back testing too far can harm an institution's member or customer base and create reputational risk.

Beyond reputational harm, financial institutions face enforcement mechanisms that remain active despite the federal regulatory shift. State-level enforcement has emerged as a significant concern. States like New York, California, Massachusetts, and Illinois continue to maintain aggressive fair lending enforcement programs.

Private litigation presents another avenue for exposure, and institutions should consider the lookback period that will come when regulatory priorities shift again.

Understanding AI-Driven Lending Discrimination

AI-driven lending discrimination occurs when automated systems make decisions that result in unfair treatment of certain groups, often without clear or intentional bias from the lender. A 2023 Consumer Financial Protection Bureau study found that over 60% of AI-based credit decisions lacked explainable reasoning when reviewed by compliance teams.

Algorithms trained on biased data can amplify existing inequalities. Complex lending models are often built on historical lending data that reflects past discrimination and on proxy variables, such as zip codes or education levels, that correlate with race or income.
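To make the proxy concern concrete, the minimal sketch below screens candidate features for association with a protected characteristic using Cramér's V. The data, column names (zip_code, education, race), and the 0.3 flagging threshold are all hypothetical assumptions; in practice the protected attribute would come from a separate monitoring file, not from the model's inputs.

```python
# A minimal proxy-variable screen: how strongly is a candidate model feature
# associated with a protected characteristic? Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(feature: pd.Series, protected: pd.Series) -> float:
    """Cramér's V between two categorical series (0 = no association, 1 = perfect)."""
    table = pd.crosstab(feature, protected)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

# Hypothetical monitoring dataset where the protected attribute is available for testing only.
df = pd.DataFrame({
    "zip_code":  ["13202", "13202", "13066", "13066", "13090", "13090"] * 50,
    "education": ["HS", "BA", "BA", "MA", "HS", "BA"] * 50,
    "race":      ["B", "B", "W", "W", "W", "B"] * 50,
})

for col in ["zip_code", "education"]:
    v = cramers_v(df[col], df["race"])
    flag = "POTENTIAL PROXY" if v > 0.3 else "ok"   # threshold is a policy choice
    print(f"{col}: Cramér's V = {v:.2f} ({flag})")
```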

A data set that is not diverse or representative is a key red flag. When partnering with fintechs that use proprietary algorithms, financial institutions must ensure the underlying historical data does not reflect past discriminatory practices, because an AI model that relies on such data can perpetuate them.
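One way to check representativeness is to compare group shares in the training data against a benchmark for the institution's market area. The sketch below is illustrative only; the group labels, benchmark shares, and 10-point tolerance are placeholder assumptions.

```python
# Compare group shares in a training dataset to a reference benchmark.
# Group labels and benchmark shares below are illustrative placeholders.
import pandas as pd

training = pd.Series(["A", "A", "A", "A", "B", "A", "A", "B", "A", "A"], name="group")
benchmark_share = {"A": 0.60, "B": 0.40}   # e.g., market-area demographics

observed_share = training.value_counts(normalize=True)
for group, expected in benchmark_share.items():
    observed = observed_share.get(group, 0.0)
    gap = observed - expected
    note = "UNDER-REPRESENTED" if gap < -0.10 else "ok"   # 10-point gap is an example tolerance
    print(f"group {group}: training {observed:.0%} vs benchmark {expected:.0%} ({note})")
```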

The Explainability Challenge

Federal financial regulators have emphasized that existing laws apply to financial services regardless of whether those services involve AI. Lenders must still comply with the Equal Credit Opportunity Act (ECOA) and Regulation B, which require them to avoid discrimination based on protected characteristics and provide specific and understandable reasons for adverse actions.

One of the most important issues is understanding and being able to explain the AI model well enough to justify its decisions to regulators during examinations. On the monitoring side, institutions must verify that input data is valid, because bad input will produce bad output.
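As a small illustration of the bad-input, bad-output point, the sketch below validates an applicant record before it is sent to a scoring model and holds failing records for review. Field names and limits are hypothetical.

```python
# Basic input validation before scoring: hold records with missing or
# out-of-range fields instead of scoring them. Field names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    requested_amount: float
    credit_score: int

def validate(app: Applicant) -> list[str]:
    """Return a list of validation problems; an empty list means the record can be scored."""
    problems = []
    if app.monthly_income <= 0:
        problems.append("monthly_income must be positive")
    if app.requested_amount <= 0:
        problems.append("requested_amount must be positive")
    if not 300 <= app.credit_score <= 850:
        problems.append("credit_score outside expected 300-850 range")
    return problems

record = Applicant(monthly_income=-100.0, requested_amount=15000.0, credit_score=700)
issues = validate(record)
if issues:
    print("Hold for manual review:", "; ".join(issues))
else:
    print("Record passes input checks; safe to score.")
```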

Best Practices for AI-Driven Lending Compliance

Financial institutions should implement several key practices to mitigate fair lending risks in their AI-driven lending programs:

Conduct Regular Fairness and Bias Audits: Test AI models regularly for disparate impact across demographic groups. Conduct market studies, evaluate loan application sourcing methods, and benchmark lending performance against peers.
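A common quantitative check in such audits is the adverse impact ratio, often assessed against the four-fifths rule borrowed from employment testing: each group's approval rate is compared with the most favorably treated group's. The sketch below is a minimal illustration with made-up monitoring data; the column names and the 0.80 review threshold are assumptions.

```python
# Adverse impact ratio across demographic groups on recent model decisions.
# Data and column names are illustrative; real monitoring data would come from
# the institution's decisioning and monitoring records.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})

approval_rates = decisions.groupby("group")["approved"].mean()
reference_rate = approval_rates.max()

for group, rate in approval_rates.items():
    ratio = rate / reference_rate
    flag = "REVIEW (below 0.80)" if ratio < 0.80 else "ok"
    print(f"group {group}: approval {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```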

Ensure Data Quality and Diversity: Use diverse and representative data sets when training AI models. Scrutinize historical lending data for potential bias before using it to train algorithms.
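Data-quality review can also be as simple as checking whether key fields are disproportionately missing for any group in the historical data, since uneven coverage can itself encode past practices. A minimal sketch with hypothetical column names and tolerance:

```python
# Check whether a key field is disproportionately missing for any group in the
# historical training data. Column names, values, and the 20% tolerance are illustrative.
import pandas as pd

hist = pd.DataFrame({
    "group":  ["A", "A", "B", "B", "B", "A"] * 50,
    "income": [4000, 5200, None, 3100, None, 4500] * 50,
})

missing_rate = hist.assign(missing=hist["income"].isna()).groupby("group")["missing"].mean()
for group, rate in missing_rate.items():
    flag = "INVESTIGATE" if rate > 0.20 else "ok"
    print(f"group {group}: income missing {rate:.0%} ({flag})")
```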

Implement Explainable AI Frameworks: Maintain comprehensive documentation of model inputs, logic, and decision pathways. Adopt explainable AI (XAI) frameworks that enable institutions to trace how specific credit decisions were made.
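For a simple scoring model, traceability can be as direct as decomposing each decision into per-feature contributions relative to an average applicant. The sketch below uses a plain logistic regression on synthetic data; it illustrates the idea of a documented decision pathway rather than any particular vendor's XAI framework, and the feature names are hypothetical.

```python
# Per-decision contribution breakdown for a simple logistic regression scorer.
# A basic illustration of traceable decision pathways, not a full XAI framework.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["credit_score", "debt_to_income", "months_on_job"]

# Synthetic, standardized data standing in for historical applications.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[-1.2, 1.5, 0.3]])   # one applicant to explain
baseline = X.mean(axis=0)                   # average applicant as the reference point
contributions = model.coef_[0] * (applicant[0] - baseline)

for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f} toward approval log-odds")
print("approval probability:", round(model.predict_proba(applicant)[0, 1], 2))
```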

Maintain Human Oversight: AI can recognize and surface trends in data, but it lacks the judgment that lending decisions require, so human monitoring remains essential. Financial institutions should continue to include manual review for high-risk decisions and maintain quality control practices over both credit decisioning and the AI monitoring process.
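In practice, the hand-off to human review is often implemented as a simple rule on the model output: decisions near the approval cutoff, or above a certain exposure, are queued for a loan officer rather than auto-decided. The thresholds in the sketch below are illustrative policy parameters, not recommendations.

```python
# Route borderline or high-exposure decisions to manual review instead of
# auto-decisioning. Thresholds are illustrative policy parameters.
def route_decision(approval_probability: float, loan_amount: float) -> str:
    if loan_amount > 250_000:                      # high-exposure loans always reviewed
        return "manual_review"
    if 0.40 <= approval_probability <= 0.60:       # model is uncertain near the cutoff
        return "manual_review"
    return "auto_approve" if approval_probability > 0.60 else "auto_decline"

for prob, amount in [(0.92, 20_000), (0.55, 20_000), (0.92, 300_000), (0.20, 20_000)]:
    print(prob, amount, "->", route_decision(prob, amount))
```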

Provide Clear Adverse Action Notices: Ensure consumers receive clear, specific reasons for credit denials, even when decisions are made by complex algorithms.
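Tying this back to Regulation B typically means translating a decision's most negative factors into plain-language principal reasons. In the sketch below, the factor names, contribution values, and reason wording are hypothetical, and the contributions would come from an explainability step like the one sketched earlier.

```python
# Map a decision's most negative factors to plain-language adverse action reasons.
# Factor names, contribution values, and reason wording are illustrative.
REASON_TEXT = {
    "debt_to_income": "Debt-to-income ratio is too high",
    "credit_score": "Credit score is below our approval criteria",
    "months_on_job": "Length of employment is insufficient",
}

def principal_reasons(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Pick the factors that pushed the decision most strongly toward denial."""
    most_negative = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [REASON_TEXT.get(name, name) for name, value in most_negative if value < 0]

decision_contributions = {"credit_score": -1.4, "debt_to_income": -0.9, "months_on_job": +0.3}
print(principal_reasons(decision_contributions))
# ['Credit score is below our approval criteria', 'Debt-to-income ratio is too high']
```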

The Path Forward

Looking ahead, financial institutions should understand their AI framework, train staff appropriately, verify data inputs, continuously monitor model output, and confirm that historical data does not reflect past discriminatory practices.

Financial institutions that continue investing in fair lending compliance will protect themselves from state enforcement actions, private litigation, and reputational harm while positioning themselves for success when federal regulatory priorities inevitably shift again.

By maintaining strong fair lending practices, financial institutions demonstrate their commitment to serving all members of their communities equitably, regardless of the regulatory climate.
