May 31, 2023

Is Your Robot a Racist? EEOC Intensifies Scrutiny over Employers' AI


Whose idea was it to hire all these new folks anyway? Regardless, your HR Department is on the phone pleading for assistance—they are overrun with thousands of applications and résumés. Searching for a way to help, you come across the hottest solution on the market: artificial intelligence (AI). More specifically, you discover an AI screening tool that promises to automatically review job applications and identify the perfect candidates. Sold. As you roll out your new technology, all seems to be going according to plan—the application backlog has disappeared, and new hires are making their way into the office. As you look around, however, a pattern seems to emerge: most of your new employees are white. As a sinking feeling washes over you, two thoughts immediately come to mind: "Is my robot a racist?" and "Are we in trouble?" To the latter, the EEOC has recently provided an answer: "Yes."

The Equal Employment Opportunity Commission (EEOC) has released a new technical assistance document explicitly putting employers on notice that the agency is scrutinizing whether employers' use of artificial intelligence results in employment discrimination. The EEOC says the new guidance will help employers answer questions about whether and how the use of artificial intelligence in the workplace may run afoul of Title VII of the Civil Rights Act of 1964.

Artificial Intelligence, Real Discrimination

The EEOC's latest technical assistance document focuses on how the use of artificial intelligence in an employer's hiring, promotion, and firing decisions may have a "disproportionately large negative effect" on the basis of race, color, religion, sex, or national origin. Specifically, the EEOC identifies five types of AI tools whose use in the workplace may lead to liability for employers:

  1. Resume scanners that prioritize applications using certain keywords and phrases;
  2. Employee monitoring software that evaluates employees and applicants based on keystrokes and other factors;
  3. "Virtual assistants" or "chatbots" that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  4. Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  5. Testing software that provides "job fit" scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived "cultural fit" based on their performance on a game or test.

Critically, the EEOC also makes clear that employers cannot simply shift the blame to third-party vendors. The new guidance states that even if an employer's AI tool is developed and implemented by another entity, the employer may still be responsible for discrimination in violation of Title VII. Additionally, the EEOC warns employers that they may be held responsible for the actions of their agents, including when an employer relies on an agent to administer a recruitment or hiring procedure on its behalf.

Finally, while the new technical assistance is short on precise compliance advice for employers, the EEOC does suggest that employers use the "Four-Fifths Rule" as a rule of thumb to determine whether an AI tool creates an impermissible disparate impact. Under the rule, if one group's selection rate is less than four-fifths (80%) of the selection rate of the group with the highest rate, the employer may be liable for disparate impact discrimination. Here is an example provided by the EEOC:

As part of the application process, your company is using an algorithm to score a personality test:

  1. 80 White individuals and 40 Black individuals apply and take the test.
  2. 48/80 (60%) of the White individuals advance to the next round.
  3. 12/40 (30%) of the Black individuals advance to the next round.
  4. The ratio of the two selection rates is thus 30/60 (50%).

Because 30/60 (50%) is lower than 4/5 (80%), the Four-Fifths Rule says that the AI-driven selection rate for Black applicants is significantly lower than the selection rate for White applicants, "which could be evidence of discrimination against Black applicants." Importantly, while the Four-Fifths Rule may be an easy-to-use guide, the EEOC makes clear that it may be inappropriate in certain circumstances and is not a reasonable substitute for robust statistical analysis.
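To make the arithmetic concrete, here is a minimal Python sketch of the Four-Fifths Rule calculation using the EEOC's example figures. The function name and structure are illustrative only, not an EEOC-prescribed implementation, and the result is a screening heuristic rather than a legal conclusion.

  def four_fifths_ratio(selected_a, total_a, selected_b, total_b):
      """Compare two groups' selection rates under the Four-Fifths Rule.

      Returns the ratio of the lower selection rate to the higher one.
      A ratio below 0.80 may indicate disparate impact under the EEOC's
      rule of thumb; it is not a substitute for full statistical analysis.
      """
      rate_a = selected_a / total_a
      rate_b = selected_b / total_b
      lower, higher = sorted([rate_a, rate_b])
      return lower / higher

  # The EEOC's example: 48 of 80 White applicants advance (60%),
  # and 12 of 40 Black applicants advance (30%).
  ratio = four_fifths_ratio(12, 40, 48, 80)
  print(f"Selection-rate ratio: {ratio:.0%}")  # prints "Selection-rate ratio: 50%"
  if ratio < 0.80:
      print("Below the four-fifths threshold; further analysis warranted.")

Running the sketch on the example above yields a 50% ratio, matching the EEOC's conclusion that the disparity falls below the four-fifths threshold.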

While the new technical assistance document does not have the force and effect of law, the EEOC has signaled that it will continue to ramp up litigation over AI's role in employment discrimination. For example, in its recently released 2022 Annual Performance Report, the EEOC highlighted its lawsuit against iTutorGroup, Inc., in which the agency alleges that the tutoring service programmed its software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older.

Additional Considerations for Employers

In addition to EEOC enforcement efforts, employers must also be cognizant of rapidly expanding state regulation of AI. For example, both Maryland and Illinois have enacted legislation limiting employers' ability to use video-evaluation tools, including facial recognition technology, in hiring. Further, numerous jurisdictions, including New York, California, Texas, the District of Columbia, New Jersey, and Pennsylvania, have proposed AI-related regulations of their own.

At the local level, New York City has enacted stringent regulations limiting the use of artificial intelligence in the workplace. Beginning July 5, 2023, under Local Law 144, employers in the city are prohibited from using AI-driven hiring tools to make employment decisions unless the tool is audited annually for bias, the employer publishes a summary of the audit results, and the employer provides notice to affected applicants and employees.

As the EEOC—and state and local governments—continue to prioritize AI-driven employment discrimination claims, employers must remain vigilant about how they are using artificial intelligence in the workplace. If your company has any questions about the EEOC's latest guidance, or how to utilize AI while reducing legal liability and reputational harm, please contact the authors of this article or any attorney in Venable's Labor and Employment Group.