February 25, 2026

AI in Financial Services: Popular Use Cases and the Regulatory Road Ahead


Artificial intelligence (AI) is transforming the delivery of consumer financial services. Consumer finance companies—from traditional banks and loan servicers to emerging fintech companies—are using AI in increasingly sophisticated ways to optimize customer experience, speed up decision making, improve fraud detection, and ensure compliance with applicable laws.

However, adoption of AI is not without potential pitfalls. It requires navigating an emerging power struggle between the federal government and state regulators, a wave of state legislation focused on AI, existing consumer protection laws, and third-party partner expectations. This article examines some of the ways that AI is being deployed by consumer finance companies and analyzes how existing federal and emerging state laws apply to those deployments in practice.

Regulatory Background

Federal Executive Order

On December 11, 2025, the White House issued an executive order (the "Order") that seeks to create a single national policy framework for artificial intelligence. To accomplish its goal, the Order seeks to limit state-level regulation through a mixture of proactive litigation and restrictions on previously allocated federal funding. It also directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a legislative recommendation establishing a uniform federal policy framework that preempts certain conflicting state AI laws. Finally, it directs the Federal Trade Commission (FTC) to issue guidance on applying the FTC Act's unfairness and deception standards to AI models, potentially preempting state bias laws.

State Regulation and Legislation

States appear undeterred. Throughout 2025, many states debated legislation that would have imposed new AI governance frameworks. Other previously adopted pieces of AI legislation are moving closer to their effective dates.

A few such laws are of particular importance to consumer financial services companies.

  • With respect to non-Gramm-Leach-Bliley Act data, the California Consumer Privacy Act regulations related to automated decision-making technology (ADM) require companies to provide pre-use notice, the right to opt out of ADM use, the right to appeal an ADM's decision with respect to a consumer, and the right to access information used by the ADM in making its determination, among other things.
  • The Colorado AI Act was signed into law in 2024 and becomes effective on June 30, 2026. It imposes several requirements on developers and deployers of high-risk AI systems, which include systems that have a material effect on the provision of financial services. Among other things, covered companies must make public-facing disclosures related to their use of high-risk AI, notify consumers when a high-risk AI system is used to make a consequential decision about them (including separate notification requirements related to adverse decisions), conduct an impact assessment related to their use of a high-risk AI model, and use "reasonable care" to prevent algorithmic discrimination.
  • The Texas Responsible AI Governance Act prohibits financial institutions from developing or deploying AI systems with the intent to unlawfully discriminate against a protected class.
  • The Utah AI Policy Act requires entities that engage in consumer transactions and cause consumers to interact with generative AI to disclose, upon the consumer's request, that the consumer is interacting with AI.

As state laws addressing novel AI use cases take effect, this list is likely to grow.

Existing Federal Laws and Regulations

Consumer financial services companies that deploy AI also need to remain aware of existing laws and regulations. While the exact scope of laws involved depends on a company's particular product or use case, several core federal statutes are consistently implicated:

  • Unfair, Deceptive, or Abusive Acts or Practices (UDAAP). The Consumer Financial Protection Act prohibits UDAAPs in connection with the offering of consumer financial products or services. Among other things, AI-driven customer interactions may create UDAAP exposure if responses are inaccurate, omit material terms, reframe pricing or eligibility criteria in misleading ways, or impede consumers' exercise of statutory rights. Regulators have signaled particular sensitivity where automated tools significantly limit access to human customer service.
  • Equal Credit Opportunity Act and Regulation B (ECOA). Among other things, ECOA prohibits discrimination on a prohibited basis in any aspect of a credit transaction and obligates creditors to provide specific, accurate reasons when taking adverse action. While federal enforcement of ECOA is limited, private litigants can still bring actions under the law, and many states have analogous antidiscrimination laws.
  • Truth in Lending Act and Regulation Z (TILA). TILA requires the clear and conspicuous disclosure of credit terms, including a loan's annual percentage rate and finance charge. Providing these disclosures can become complicated when agentic AI systems are used to communicate with consumers or ML-driven models offer consumers dynamic pricing.
  • Electronic Fund Transfer Act and Regulation E (EFTA). The EFTA establishes investigation timelines and provisional credit requirements for electronic fund transfer errors. AI misinterpretations of consumer payment instructions and AI-based error dispute triage tools remain subject to these requirements.
  • Fair Credit Reporting Act (FCRA). The FCRA regulates the use and furnishing of consumer reports. It also mandates specific credit score disclosures where a credit score is used in taking adverse action. In some cases, proprietary ML-driven loan scores may qualify as credit scores, triggering disclosure requirements.
  • Gramm-Leach-Bliley Act and the Safeguards Rule (GLBA). The GLBA imposes privacy notice and information security obligations on financial institutions with respect to nonpublic personal information. AI systems that collect, synthesize, or transmit consumer financial data—including through integrations with third-party model providers or APIs—may be subject to these requirements.
  • Bank Secrecy Act, Anti-Money Laundering, and OFAC (BSA/AML). Financial institutions increasingly use ML models to perform transaction monitoring and sanctions screening. Regulators have previously sought to understand how these models are used in practice. Overall, the ML models used by financial institutions for BSA/AML purposes must remain reasonably designed to detect suspicious activity and sanctions violations.
  • Model Risk Management (MRM). Federal banking regulators require banks to maintain robust model risk management programs, including documentation describing model design and construction, independent validation, performance monitoring, and change management. These expectations apply to machine learning and other AI models used in underwriting, fraud detection, servicing quality control, or compliance monitoring.

AI Use Cases and Regulatory Touchpoints

Consumer financial services companies are deploying AI in a wide variety of ways. Each use case implicates a mix of current laws and new, AI-specific laws that have recently come into effect.

Customer Acquisition and Experience

Financial institutions are moving beyond static chatbots and deploying GPT-based systems that facilitate entire consumer transactions. Often these systems are embedded within websites, mobile applications, popular messaging services, and digital account portals, allowing consumers to complete applications and initiate payments within a single conversational interface. In some cases, institutions are training models to customize website and messaging language and tone to specific consumers with the goal of making it more likely that consumers complete loan applications or payment transactions.

Companies using these dynamic models will want to pay attention to existing federal laws. UDAAP risk may arise if AI-generated messaging or website content is inaccurate, omits material terms, or reframes pricing or eligibility in ways that could mislead a consumer. Systems configured to self-improve for conversion purposes must be monitored to ensure they do not hallucinate information or adopt coercive or deceptive language. Repeated automated prompts encouraging completion of an abandoned transaction may also present unfairness concerns if they continue for an extended period and cross into harassment. Regulators have signaled that limiting consumers to chatbot-only customer service can be unfair where meaningful access to a human representative is unavailable.

Emerging state laws may add additional requirements, depending on a company's specific use case. California's ADM regulations apply to "profiling" consumers, so they may impose notice and other requirements, depending on how models are used and the data they use to evaluate consumers. The Colorado AI Act may also require disclosure and risk management controls for certain high-risk systems.

Underwriting and Credit Decision Making

ML-driven underwriting models are now common. They are used to assess creditworthiness, determine pricing, set revolving credit limits, and evaluate loan pools for secondary market transactions. These models often rely on alternative and cash flow data, engineered variables or embedded sub-models, and applicant-specific sequencing. In some situations, the sequencing and weights of an underwriting model's variables shift dynamically, depending on the individual applicant's data profile.

ECOA and related state antidiscrimination laws remain foundational. Dynamic underwriting models may create disparate treatment or disparate impact risk, particularly where variable sequencing and weights are set based on applicant characteristics that may function as proxies for prohibited basis status. While federal regulators are not actively enforcing the disparate impact theory of liability under ECOA, private litigation and state enforcement remain viable avenues for challenge.
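A common first-pass screen for the disparate impact risk described above compares approval rates across groups. The sketch below computes an adverse impact ratio using the "four-fifths" rule of thumb, which originates in EEOC employment guidance and is sometimes borrowed in fair lending analysis; it is a heuristic screen, not an ECOA legal standard, and the group labels and counts are hypothetical.

```python
def adverse_impact_ratio(outcomes):
    """outcomes: {group: (approved, total)}. Returns each group's approval
    rate divided by the highest group's rate. Ratios below 0.8 are often
    flagged for further review under the EEOC "four-fifths" rule of thumb
    (an employment-law heuristic, not an ECOA standard)."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical approval counts by group.
ratios = adverse_impact_ratio({"group_a": (80, 100), "group_b": (40, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]
```

A ratio screen like this only surfaces statistical disparity; it says nothing about business justification or less-discriminatory alternatives, which is where the legal analysis actually happens.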

Adverse action requirements are also a potential issue. Many institutions rely on waterfalls of underwriting model hard cuts and Shapley values (i.e., essentially a variable-specific importance score) to determine the principal reasons that led to a consumer's denial for credit. However, when models are non-monotonic (i.e., not one-directional) or non-linear (i.e., not proportional), generating specific, logical adverse action reasons may be more difficult. This can become even more complex when ML-driven models include embedded submodels or engineered variables, which then use multiple separate variables to "score" a consumer. In practice, adverse action notices that reflect abstract or internally generated feature names—or that appear inconsistent with the consumer's understanding of their profile—may invite regulatory scrutiny.
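To make the Shapley-value approach concrete, the sketch below computes exact Shapley contributions for a toy, hypothetical score with an interaction term, then ranks the most negative contributors as candidate principal reasons. The feature names, weights, and baseline are all illustrative assumptions, and real models use sampling-based approximations rather than exhaustive permutation.

```python
from itertools import permutations

# Hypothetical baseline applicant profile (illustrative values only).
BASELINE = {"utilization": 0.30, "inquiries": 1.0, "tenure_years": 8.0}

def score(f):
    """Toy score with an interaction term, so reason ranking is non-obvious."""
    s = 700.0
    s -= 120.0 * f["utilization"]
    s -= 8.0 * f["inquiries"]
    s += 3.0 * f["tenure_years"]
    s -= 40.0 * f["utilization"] * (f["inquiries"] > 3)  # interaction
    return s

def shapley(applicant, baseline=BASELINE):
    """Exact Shapley values: average marginal effect of swapping each feature
    from its baseline value to the applicant's value, over all feature orders."""
    feats = list(applicant)
    contrib = {k: 0.0 for k in feats}
    orders = list(permutations(feats))
    for order in orders:
        current = dict(baseline)
        for k in order:
            before = score(current)
            current[k] = applicant[k]
            contrib[k] += score(current) - before
    return {k: v / len(orders) for k, v in contrib.items()}

applicant = {"utilization": 0.85, "inquiries": 6.0, "tenure_years": 1.0}
phi = shapley(applicant)
# Candidate principal reasons: the most negative contributors.
reasons = sorted(phi, key=phi.get)[:2]
```

Note the property that makes this attractive for adverse action work: the contributions sum exactly to the applicant's score minus the baseline score, so every point of the denial is attributed to some feature. The translation from internal feature names to consumer-comprehensible reason statements remains a separate, and often harder, compliance task.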

The FCRA introduces additional considerations. Institutions using ML models to score consumers or loans will want to assess whether those scores qualify as "credit scores" under the FCRA, which would trigger disclosure requirements when the scores are used in taking adverse action against a consumer. Additionally, where these scores or other consumer- or loan-related reports are provided to secondary market loan investors, companies should assess whether the scores or reports are "consumer reports" under the FCRA.

Banks and fintech companies should also be aware of the MRM challenges posed by the use of ML models. Guidance from federal banking regulators on model risk management requires institutions to document model design and data sources, conduct independent validation, and perform ongoing performance monitoring—expectations that apply equally to ML underwriting models. ML-driven underwriting models can complicate model risk management because engineered variables and complex feature interactions are often less transparent, making documentation and independent validation more demanding. Dynamic models also require more robust ongoing monitoring to detect drift and unintended correlations over time. Meeting bank MRM standards frequently presents a material operational and governance challenge for fintechs that use ML-driven underwriting models.
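Ongoing monitoring for drift of the kind described above is often implemented with the Population Stability Index (PSI), which compares the score distribution at model development against the current production distribution. A minimal sketch, assuming equal-width binning over the baseline's range; the conventional thresholds noted in the docstring are industry rules of thumb, not regulatory requirements.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score sample ("expected")
    and a production sample ("actual"). Rule-of-thumb reading: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 material drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In an MRM program, a PSI check like this would run on a schedule per model, with breaches documented and escalated under the institution's change management and validation procedures.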

Emerging state laws may layer on additional obligations. The Colorado AI Act expressly classifies credit decisions as consequential decisions, requiring impact assessments and governance controls on underwriting models. In some situations, California's ADM law may provide consumers with opt-out rights for ADM models and access to "meaningful information about the logic involved" in those models' decisions. Bias audit regimes modeled on New York City Local Law 144 may also expand ongoing fair lending assessment expectations.

Fraud Prevention and Identity Verification

Financial services companies have adopted many AI-based methods to conduct identity verification, perform real-time transaction monitoring, and root out fraud. Two of the primary tools include (1) selfie-based identity verification that uses AI to confirm that a consumer's selfie matches their ID, and (2) ML-driven models that are used to identify behavioral indicators of fraud.

Under existing consumer finance laws, two primary concerns may arise. First, to the extent that identity verification tools perform less accurately for consumers with certain skin tones, they can create risk under ECOA and state antidiscrimination laws. Moreover, behavioral fraud triggers may also correlate with age, disability, or other characteristics, creating potential discrimination risk. These risks may be heightened where AI-based systems restrict access to products or services, particularly where it is difficult to access human customer service agents. Second, UDAAP exposure may arise where incorrect identity verification denials or fraud flags result in account freezes or blocked payments, particularly if consumers are deprived of access to funds without timely explanation or recourse.

State laws come into play as well. Under the Colorado AI Act, fraud models that result in account denials may constitute consequential decisions, triggering impact assessment and consumer notice requirements. Numerous states are also adopting or considering biometric or AI governance statutes that could apply to identity verification systems that require the provision of biometric information.

Servicing and Collections

AI is increasingly used in servicing and collections to pre-screen borrowers for loan modification eligibility, optimize collections contact timing and channel selection, and predict disputes or chargebacks in payments programs. In some cases, predictive assessments influence account pricing, access, or escalation pathways.

Under existing law, the Fair Debt Collection Practices Act (FDCPA) imposes requirements governing contact timing, frequency, and harassment for certain debt collectors, and UDAAP theories may arise even where companies are not subject to the FDCPA. Automated dispute and chargeback triage must comply with Regulation E and the Fair Credit Billing Act. Predictive models that deprioritize or deny disputes based on confidence scores cannot substitute for statutorily required investigations. Institutions must ensure that automated workflows align with statutory investigation timelines and provisional credit obligations.
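The statutory timelines referenced above are concrete enough to encode directly into an automated triage workflow. The sketch below is a simplified illustration of the baseline Regulation E error-resolution clock (12 C.F.R. § 1005.11): a determination within 10 business days, or provisional credit within 10 business days with up to 45 calendar days to complete the investigation. It ignores federal holidays and the extended 20-business-day/90-day periods that apply to new accounts, point-of-sale, and foreign-initiated transfers, all of which a production system must handle.

```python
from datetime import date, timedelta

def add_business_days(start, n):
    """Advance n business days, skipping weekends (federal holidays are
    omitted here for simplicity; a real system must account for them)."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            n -= 1
    return d

def reg_e_deadlines(notice_date):
    """Simplified baseline Regulation E error-resolution deadlines."""
    return {
        # Determine the error, or issue provisional credit, by this date.
        "determination_or_provisional_credit": add_business_days(notice_date, 10),
        # With provisional credit issued, investigation may run this long.
        "extended_investigation_limit": notice_date + timedelta(days=45),
    }

deadlines = reg_e_deadlines(date(2026, 3, 2))  # notice received on a Monday
```

Wiring deadlines like these into the triage queue, rather than relying on a model's confidence score to decide whether an investigation happens, is what keeps automation on the right side of the statute.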

Emerging state laws may also apply. The Colorado AI Act classifies loan modification determinations as consequential decisions, requiring governance and impact assessments. Moreover, proposed Maryland legislation would require human review before certain adverse AI-assisted collections actions.

Conclusion

AI adoption in consumer financial services is not a matter of if, but how. While AI-driven efficiencies are significant, product attorneys and business leaders should remain attentive to the regulatory constraints that accompany deployment.