CFPB Issues New Guidance on AI in Mortgage Lending
Breaking · Technology & AI · January 28, 2026
Key Points of the Guidance
The Consumer Financial Protection Bureau (CFPB) has released interpretive guidance clarifying how existing fair lending laws apply to AI and algorithmic decision-making in mortgage lending. The guidance emphasizes that lenders using AI tools for underwriting, pricing, or marketing are still fully responsible for compliance with the Equal Credit Opportunity Act (ECOA) and Fair Housing Act. Using a third-party AI model does not shield lenders from liability for discriminatory outcomes.
Adverse Action Notice Requirements
The guidance specifically addresses adverse action notices under ECOA. When an AI model contributes to a denial or unfavorable terms, lenders must provide specific and accurate reasons — not vague references to algorithmic scores. The CFPB rejected the argument that AI model complexity makes it impossible to provide specific adverse action reasons, stating that if a lender cannot explain why an application was denied, it should not be using that model for lending decisions.
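To make the "specific and accurate reasons" requirement concrete, here is a minimal sketch of how a lender might derive reason codes from an interpretable scoring model by ranking per-feature contributions. The feature names, weights, baselines, and reason-code text are illustrative assumptions, not CFPB-prescribed values or any particular vendor's method:

```python
# Hypothetical sketch: deriving specific adverse action reasons from a
# simple, interpretable scoring model. All names, weights, and reason
# text below are illustrative assumptions.

APPROVAL_THRESHOLD = 0.0

# Assumed linear model: score = sum(weight * (value - baseline))
WEIGHTS = {"debt_to_income": -2.0, "credit_history_years": 0.5, "recent_delinquencies": -1.5}
BASELINES = {"debt_to_income": 0.36, "credit_history_years": 7.0, "recent_delinquencies": 0.0}
REASONS = {
    "debt_to_income": "Debt-to-income ratio too high",
    "credit_history_years": "Insufficient length of credit history",
    "recent_delinquencies": "Recent delinquency on an account",
}

def adverse_action_reasons(applicant, top_n=2):
    """Return the score and the top factors pushing it below the threshold."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINES[name])
        for name in WEIGHTS
    }
    score = sum(contributions.values())
    if score >= APPROVAL_THRESHOLD:
        return score, []  # approved: no adverse action notice needed
    # The most negative contributions map to the most specific denial reasons.
    negative = sorted(
        (name for name, c in contributions.items() if c < 0),
        key=lambda name: contributions[name],
    )
    return score, [REASONS[name] for name in negative[:top_n]]

applicant = {"debt_to_income": 0.52, "credit_history_years": 2.0, "recent_delinquencies": 1.0}
score, reasons = adverse_action_reasons(applicant)
# e.g. reasons -> ["Insufficient length of credit history", "Recent delinquency on an account"]
```

The point of the sketch is structural: each denial reason traces to an identifiable model input, which is exactly the kind of explanation a vague "algorithmic score" reference cannot supply.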
What This Means for Mortgage Technology
For mortgage technology vendors, this guidance reinforces the need for explainable AI models with clear audit trails. Lenders evaluating AI-powered underwriting or pricing tools should ensure the vendor can demonstrate model explainability, fair lending testing results, and the ability to generate compliant adverse action notices. The guidance does not prohibit AI use — it clarifies that AI tools must meet the same standards as traditional lending processes.
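As one illustration of the fair lending testing mentioned above, lenders and vendors often start with outcome-disparity screening such as the adverse impact ratio under the four-fifths (80%) heuristic. The sketch below and its sample counts are hypothetical; the 80% threshold is a common screening rule of thumb, not a legal safe harbor under ECOA or the Fair Housing Act:

```python
# Hypothetical sketch: a basic disparity screen a lender might run when
# evaluating an AI underwriting tool. Sample counts are invented, and the
# four-fifths (80%) threshold is a screening heuristic, not a safe harbor.

def adverse_impact_ratio(approved_protected, total_protected,
                         approved_control, total_control):
    """Ratio of the protected group's approval rate to the control group's."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

# Illustrative counts from a model validation sample.
air = adverse_impact_ratio(approved_protected=140, total_protected=200,
                           approved_control=540, total_control=600)
flagged = air < 0.8  # below 80% warrants deeper fair lending review
```

A ratio below the threshold does not itself establish discrimination; it flags the model for the deeper review (including the explainability and adverse action capabilities noted above) that the guidance expects.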