On March 29, 2021, five federal financial regulatory agencies issued a Request for Information (RFI) seeking input from the industry and the public on the use of artificial intelligence (AI), including machine learning, in financial services activities such as fraud prevention, personalization of customer services, and credit underwriting.
Together, the Federal Reserve Board, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, Consumer Financial Protection Bureau, and National Credit Union Administration (collectively, the Agencies) seek comments from financial institutions, trade associations, consumer groups, and other stakeholders to better understand the respondents' views on a range of issues. They include:
- How financial institutions use AI to provide services to customers;
- How financial institutions use AI for business or operational purposes;
- Appropriate governance, risk management, and controls with respect to AI;
- Any challenges in developing, managing, and adopting AI; and
- Whether any clarifications from the Agencies would be helpful to ensure that AI use occurs in a "safe and sound manner and in compliance with applicable laws and regulations, including those relating to consumer protection."
Though the Agencies indicate their support for "responsible innovation by financial institutions," the RFI expresses concern that AI could introduce potential consumer protection risks, including discrimination, UDAP/UDAAP violations, and privacy concerns. Accordingly, the Agencies intend to use the information gathered from the RFI process to determine whether to issue AI-related guidance or regulations.
AI is increasingly becoming a critical component of financial services operations, including for data security and credit modeling. Banks and other financial services companies that use or develop AI—including the companies that design AI applications for the industry—should take advantage of the opportunity to comment on the RFI.
Comments will be accepted until June 1, 2021.
The RFI encourages commenters to respond to 17 questions spanning seven risk areas, summarized below. Notably, five of the 17 questions concern fair lending risk, suggesting that this area is a particular focus for the Agencies.
- Lack of Explainability: Certain AI approaches are so complex that overall functionality or specific predictions and decisions may be difficult or nearly impossible to explain.
1. How do businesses manage lack of explainability? How does explainability impact decisions to develop or implement AI?
2. How frequently do businesses employ "post-hoc" methods to evaluate "conceptual soundness" of AI approaches?
3. Which uses of AI present the greatest explainability challenges?
- Data Quality Limitations / "Overfitting": Limitations inherent in the datasets used to train AI algorithms may amplify bias or inaccuracies and lead to incorrect predictions. In addition, a model may "learn" idiosyncratic patterns in its training data that do not generalize to new data (known as "overfitting"), producing repeated flawed decisions that may go uncorrected.
4. How do businesses using AI manage risks related to data quality and data processing?
5. Are businesses using nontraditional or "alternative" datasets more effectively in certain AI approaches (e.g., for credit underwriting, fraud detection, trading)?
6. How do businesses manage risks relating to overfitting?
- Cybersecurity Risks: AI may be vulnerable to unique cybersecurity risks, such as "data poisoning attacks," whereby bad actors attempt to "corrupt or contaminate" data used to train an AI model.
7. What cybersecurity risks are businesses experiencing or tracking relating to AI? How are businesses managing those risks?
- Dynamic Updating: Some AI approaches may significantly evolve over time, making efforts to independently review, monitor, track, and validate the AI approach a challenging endeavor (e.g., at different points in time, the same inputs may generate different outputs because the model has evolved).
8. How are businesses managing risks relating to dynamic updating?
- Vendor Oversight / Community Institutions: Financial institutions may rely on third-party services that use AI, which can pose unique vendor oversight challenges. Some institutions, such as smaller community institutions, may be particularly likely to rely on vendors for these functions.
9. Do community institutions face particular challenges in developing, adopting, or using AI? How are those challenges addressed?
10. How are financial institutions challenged by or managing risks associated with the use of AI developed or provided by third parties? Do these challenges vary by the size of the financial institution?
- Fair Lending: Given the lack of transparency and explainability inherent in certain AI approaches, it may be challenging to determine whether a given AI approach to underwriting complies with consumer protection legal and regulatory frameworks regarding fair lending.
11. What techniques are businesses using to evaluate AI-based credit determinations for compliance with fair lending laws?
12. Are businesses finding, in practice, that there are risks that AI can be biased or result in discrimination on prohibited bases? How can these risks be managed?
13. How do risk management principles and practices aid or inhibit evaluations of AI-based credit determination approaches for compliance with fair lending laws?
14. What challenges do businesses face when applying risk assessment principles and practices to the development, validation, or use of "fair lending risk assessment models"?
15. How are creditors making AI-based credit determinations identifying reasons for taking adverse action for credit, consistent with their obligations under Regulation B? Does Regulation B provide sufficient clarity for the statement of reasons for adverse action when AI is used?
- Additional Considerations: Here, the Agencies seek general feedback about other potential uses of AI and risks relating to AI in the financial services industry, as well as general feedback about how financial institutions weigh AI benefits and risks.
16. Are businesses using AI in any other ways? Are there any other risk management challenges the Agencies should consider?
17. How are the respondents weighing the benefits and risks to customers or prospective customers that derive from the use of AI?