On October 28, 2021 the U.S. Equal Employment Opportunity Commission (EEOC) announced an initiative to ensure that the use of artificial intelligence (AI) in hiring and other employment decisions complies with federal civil rights laws. The EEOC says the initiative will guide applicants, employees, employers, and technology vendors in ensuring that AI is used fairly and consistently.
In the agency's announcement, EEOC Chair Charlotte A. Burrows explained that "the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs," and that the EEOC "must work to ensure that these new technologies do not become a high-tech pathway to discrimination."
The EEOC's new initiative on AI will:
- Establish an internal working group to coordinate the agency's work on the initiative;
- Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications;
- Gather information about the adoption, design, and impact of hiring and other employment-related technologies;
- Identify promising practices; and
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.
As the EEOC prepares to scrutinize AI and algorithmic fairness, employers need to be aware of how AI might be used in their hiring and firing processes.
AI in Recruitment and Candidate Screening
One way AI has entered the employment arena is through machine-learning recruiting engines. These engines learn an employer's ideal candidate profile and screen résumés for the desired characteristics. Although they are time- and cost-effective, these tools often fail to rise above human biases regarding gender and race.
For example, Amazon's recruiting engine came under fire for discriminating against applicants on the basis of gender. Amazon's experimental AI tool was designed to comb through résumés and rank candidates. Within a year, it was clear that the program was not ranking candidates for technical roles in a gender-neutral manner. Amazon's model had been trained to vet candidates based on patterns in résumés submitted to the company over the previous ten years. Given male dominance in the tech industry, the large majority of those historical résumés came from men. As a result, the AI began downgrading résumés that included the word "women's" and even penalized graduates of all-women's colleges. In effect, Amazon's AI had learned to prefer male candidates based on Amazon's prior hiring patterns. Amazon ultimately disbanded the project.
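The mechanism behind this kind of failure can be illustrated with a deliberately simplified, hypothetical sketch (this is not Amazon's actual system, and the data below is invented). A scorer trained on historical hire/reject decisions that skewed male will assign a negative weight to a word like "women's" that happens to appear mostly in rejected résumés, even though the word says nothing about qualifications:

```python
# Hypothetical toy example: a résumé scorer trained on skewed historical
# decisions learns to penalize a gendered term. Invented data throughout.
from collections import Counter

# Historical data: (résumé phrases, hired?). Because past hires skewed male,
# the token "women's" appears only in rejected résumés.
history = [
    (["python", "engineering", "chess club"], True),
    (["java", "engineering", "rugby"], True),
    (["python", "engineering", "women's chess club"], False),
    (["java", "women's engineering society"], False),
    (["c++", "robotics"], True),
    (["python", "women's robotics team"], False),
]

def tokenize(fields):
    return [word for field in fields for word in field.split()]

hired, rejected = Counter(), Counter()
for fields, was_hired in history:
    (hired if was_hired else rejected).update(tokenize(fields))

def score(fields):
    # Positive contribution: word seen more among past hires;
    # negative: seen more among past rejections.
    return sum(hired[w] - rejected[w] for w in tokenize(fields))

# Two candidates whose résumés differ only by the word "women's":
a = score(["python", "engineering", "chess club"])
b = score(["python", "engineering", "women's chess club"])
assert b < a  # the "women's" résumé is ranked lower on that word alone
```

Nothing in the code mentions gender as a feature; the bias enters purely through the historical labels, which is why auditing training data, and not just the model's inputs, matters.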
The use of AI in recruiting and screening candidates must be carefully monitored to ensure that no patterns of bias emerge. Such bias is not limited to overt discrimination based on protected characteristics: machines can just as easily prefer candidates from only a small group of universities, people who use similar language on their résumés, or people with similar professional backgrounds. Although these may seem like neutral qualifications, they can also work to screen out candidates based on legally protected characteristics.
Speech Recognition Models
In addition to candidate-screening AI, employers increasingly rely on speech recognition technology in the hiring and interview process. Employers have turned to speech-to-text tools such as conversational AI and video interviewing to make hiring and other HR tasks more efficient and cost-effective.
A recent Stanford University study, however, found that speech-to-text AI developed by Amazon, IBM, Google, Microsoft, and Apple misidentified the words of African American speakers at double the rate of white speakers. Furthermore, a full 20% of transcriptions from black speakers were misinterpreted so drastically as to render them unusable, compared with only 2% for white speakers. If an employer uses such technology (for example, to streamline answering employee questions about HR benefits or to gauge employee morale through a survey) it may inadvertently be favoring employees of one race over another.
AI in Advertising Algorithms
AI used in advertising algorithms can also be problematic. As with other digital advertising, AI that targets certain job seekers can be deployed in a discriminatory manner. If a job posting is advertised on social media platforms or appears in targeted search engine results, candidates with certain protected characteristics may never see it. Employers must take care that their recruitment advertising does not target, or exclude, candidates on the basis of any protected characteristic.
Employers across the country are increasingly adopting AI technologies to streamline and automate the recruiting and hiring process. They may assume that AI eliminates human or implicit bias from the hiring equation, but, as noted above, it can do more harm than good if companies are not strategic and thoughtful about its use. As companies continue to embrace AI, employers must be aware of the EEOC's heightened concern over unintentional discrimination within AI programs and work to ensure that new technology does not create new liability.
If your organization has any questions about the implications of the EEOC's new AI initiative, please contact the authors of this alert or any attorney in Venable's Labor and Employment Group.
* The authors of this article thank Jacob Polce, law clerk, for his assistance in its preparation.