Use of Artificial Intelligence Targeted by DC Legislation


The use of artificial intelligence to determine access to credit and other important life opportunities has been targeted by legislation under consideration by the District of Columbia City Council. On December 9, 2021, DC Attorney General Karl Racine introduced the "Stop Discrimination by Algorithms Act of 2021." "Not surprisingly, algorithmic decision-making computer programs have been convincingly proven to replicate and, worse, exacerbate racial and other illegal bias in critical services that all residents of the United States require to function in our treasured capitalistic society," said AG Racine. "This so-called artificial intelligence is the engine of algorithms that are, in fact, far less smart than they are portrayed, and more discriminatory and unfair than big data wants you to know. Our legislation would end the myth of the intrinsic egalitarian nature of AI." The DC City Council is in session year-round, and the bill may be considered through January 1, 2023.

Big Picture: Recent public statements and enforcement actions by federal regulators have shown a focus on consumer protection and access to credit and payment systems, fair lending law enforcement, and the use of AI and machine learning in financial services. The Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ), and Office of the Comptroller of the Currency (OCC) announced on October 22, 2021, a new interagency (federal and state) initiative to target redlining. CFPB Director Rohit Chopra warned that "algorithms are black boxes behind brick walls." "When consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias. Algorithms can help remove bias, but black box underwriting algorithms are not creating a more equal playing field and only exacerbate the biases fed into them." Citing past consumer protection issues caused by robosigning and automation, Chopra observed that the "speed with which banks and lenders are turning lending and advertising decisions over to algorithms is concerning." In addition, CFPB staff previously issued guidance on the use of AI and machine learning models and adverse action notices. The CFPB also has active market monitoring initiatives targeting "big tech" payment systems and buy now, pay later ("BNPL") credit. The FTC has highlighted the laws it (and others, including the CFPB and banking agencies) enforces that are relevant to developers and users of AI, and has reported on big data analytics and machine learning. The FTC has also conducted a hearing on algorithms, AI, and predictive analytics, and has issued business guidance on AI and algorithms. Additionally, the SEC has indicated interest in scrutinizing and studying the use of AI in financial services.

What’s Included in the Proposed Legislation?

Prohibitions, Disclosures, and Transparency

The bill would change District law to add civil rights protections and to protect communities from alleged harm caused by algorithmic bias by:

  • Prohibited Practices: Covered entities would be prohibited from using algorithms that produce biased and unfair results.
  • Auditing for Discriminatory Processing and Reporting Requirement: Covered entities would be required: to perform annual audits of their algorithms for discriminatory patterns (both direct discrimination and disparate impact); to document how their algorithms are built and how they make determinations; and to report audit results and any needed corrective steps to the Office of the Attorney General (OAG).
  • Increasing transparency for consumers:
    • Disclosures — Companies would be required to make disclosures to all consumers about their use of algorithms to reach decisions, what personal information they collect, and how their algorithms use it to reach decisions.
    • Adverse Action — If businesses make an unfavorable decision about an opportunity — for example, denying a housing application or charging a higher interest rate for a loan — based on an algorithmic determination, they must provide a more in-depth explanation.
    • Dispute and Corrections — Companies would have to provide consumers an opportunity to submit corrections, to prevent negative decisions based on inaccurate personal information.

Coverage and Key Definitions

The bill would apply to covered entities and service providers. "Covered entities" include individuals and legal entities that either:

  • make algorithmic eligibility determinations or algorithmic information availability determinations, or
  • rely on algorithmic eligibility determinations or algorithmic information availability determinations supplied by a service provider,
  • and that meet one of the following criteria:
    • possess or control personal information on more than 25,000 District residents;
    • have greater than $15 million in average annualized gross receipts for the 10 years preceding the most recent fiscal year;
    • are a data broker, or other entity, that derives 50 percent or more of its annual revenue by collecting, assembling, selling, distributing, providing access to, or maintaining personal information, and some proportion of the personal information concerns a District resident who is not a customer or an employee of that entity; or
    • are a service provider.
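As described above, coverage turns on two conjunctive conditions: the entity must make (or rely on) algorithmic determinations, and it must satisfy at least one of the four listed criteria. As a rough illustration only (not legal advice; the function and parameter names below are invented for this sketch, and the bill's actual definitions would control), the structure of that test might be expressed as:

```python
# Hypothetical sketch of the "covered entity" test summarized above.
# Thresholds paraphrase the summary; the bill text governs.

def is_covered_entity(
    makes_or_relies_on_algorithmic_determinations: bool,
    dc_residents_with_personal_info: int,
    avg_annualized_gross_receipts: float,
    is_data_broker_majority_revenue: bool,
    is_service_provider: bool,
) -> bool:
    """Return True if an entity would appear to fall within coverage."""
    # First condition: the entity makes, or relies on a service provider
    # for, algorithmic eligibility or information availability determinations.
    if not makes_or_relies_on_algorithmic_determinations:
        return False
    # Second condition: at least one of the four alternative criteria.
    return (
        dc_residents_with_personal_info > 25_000
        or avg_annualized_gross_receipts > 15_000_000
        or is_data_broker_majority_revenue
        or is_service_provider
    )

# Example: a lender using a vendor's underwriting model and holding
# personal information on 30,000 District residents.
print(is_covered_entity(True, 30_000, 5_000_000, False, False))
```

Note that because the criteria are disjunctive, an entity holding data on few District residents could still be covered solely by virtue of its receipts, data-broker revenue, or service-provider status.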

Any covered entity that relies in whole or in part on a service provider to conduct an algorithmic eligibility determination or an algorithmic information availability determination would be obligated to require, by written agreement, that the service provider implement and maintain measures reasonably designed to ensure that the service provider complies with the law.

 "Algorithmic eligibility determination" would be defined as "a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s eligibility for, or opportunity to access, important life opportunities."

 "Algorithmic information availability determination" would be defined as "a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s receipt of advertising, marketing, solicitations, or offers for an important life opportunity."

Additional defined terms would include: "adverse action"; "important life opportunities," which would include "access to, approval for, or offer of credit"; "personal information"; and "service provider."

Enforcement

The bill would allow the Office of the Attorney General and private individuals to bring suit for violations of the law, with remedies to include injunctive relief, damages, restitution, and penalties (up to $10,000 per violation).

More Information

A copy of the press release is available here.

A copy of the legislative transmittal letter is available here.

A copy of the bill text is available here.

* * * * * *

Related Articles


To Boldly Go: New Frontiers in Fair Lending

Update on Consumer Financial Services Investigations and Enforcement

AI in Financial Services: Federal Banking Agencies Request Input Regarding Use of Models for Compliance with BSA/AML and OFAC Requirements

Practical Ways to Incorporate AI and Machine Learning Into Your Company's Risk and Compliance Efforts