Picture this: Your company has developed an artificial intelligence program that instantly scans thousands of résumés and identifies the perfect candidates for hiring. You call a prospective client whose human resources department is overrun with processing job applications and conducting interviews. You tell them that, for a modest fee, your cutting-edge AI program will handle the entire job selection process in a matter of seconds and will uncover the best-qualified, most motivated, and most loyal applicants. Cha-ching—another sale. As your new customer sends you piles of job applications and you identify the best candidates for them, their application backlog disappears and you collect your fee. One morning, however, you receive a startling call: your customer informs you that nearly all of their new hires are white, male, and under 40, and that they are being sued for discrimination. The kicker? You also have been named as a defendant. As you read through the lawsuit, however, relief washes over you as you think: "We aren't the employer, so we aren't liable for discrimination, right?" Not so fast.
Recently, we alerted employers about how the Equal Employment Opportunity Commission (EEOC) is putting employers on notice that it is scrutinizing how employers utilize AI for potential employment discrimination. Under many employment discrimination laws, however, liability is not limited to the traditional employer who manages the workforce and issues the paychecks. Instead, direct liability can extend to "any agent of" an employer if that third-party agent exercises sufficient control over an important aspect of the employment relationship—such as recruiting, screening, and selecting employees.
Employers across the country are increasingly utilizing AI technology to streamline and automate the recruiting and hiring process. In fact, recent surveys suggest that nearly 70% of all HR professionals are using AI in some aspect of the recruitment and application process. Furthermore, as AI becomes more accessible and less technically daunting, the use of AI has spread to all aspects of the hiring process, from screening applications to conducting interviews and scoring personality tests.
One prominent way that AI has broken into the recruitment space is through the use of machine-learning recruiting engines. Recruiting engines learn an employer's ideal candidate and screen résumés to match the employer's desired characteristics—automatically filtering out candidates whose applications do not contain specified criteria, such as types of experience and skills. Résumé-screening AI is already prominent among employers, with recent surveys showing that nearly 99% of Fortune 500 companies use some form of AI to filter résumés out of the applicant pool.
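To make the screening mechanics concrete, the sketch below is a purely hypothetical, simplified illustration (all criteria, names, and thresholds are invented, not any vendor's actual product) of the kind of rule-based filter described above. It shows how a fixed keyword list and an experience cutoff can exclude applicants wholesale, which is precisely the sort of criterion that, applied at scale, can produce the disparate outcomes at issue in discrimination claims (for instance, a years-of-experience floor may correlate with age).

```python
# Hypothetical résumé screen: filters out any applicant whose résumé
# lacks specified keywords or a minimum experience level.
REQUIRED_KEYWORDS = {"python", "sql"}   # invented skill criteria
MIN_YEARS_EXPERIENCE = 5                # invented experience cutoff

def passes_screen(resume: dict) -> bool:
    """Return True if a résumé survives the automated filter."""
    skills = {s.lower() for s in resume["skills"]}
    has_keywords = REQUIRED_KEYWORDS.issubset(skills)
    has_experience = resume["years_experience"] >= MIN_YEARS_EXPERIENCE
    return has_keywords and has_experience

applicants = [
    {"name": "A", "skills": ["Python", "SQL"], "years_experience": 7},
    {"name": "B", "skills": ["Python"], "years_experience": 10},
    {"name": "C", "skills": ["python", "sql"], "years_experience": 3},
]

# Only applicant A survives: B lacks a keyword, C falls under the cutoff.
shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)  # ['A']
```

The legal point follows directly: no human reviews applicants B and C before they are rejected, so whoever designed and operates the filter has exercised control over who gains access to employment.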
In addition to filtering résumés, employers are also increasingly delegating job interviews to AI hiring systems. Job interview video platforms use AI to analyze interviewees' facial movements, word choice, and speaking voice to rank them against other applicants. Impressively, companies operating these AI interview platforms claim that, with their customized algorithms and interview questions, they can predict an applicant's likely job performance more accurately than human interviewers can.
Another form of AI being used by employers to streamline hiring is personality assessments. AI companies have developed computer-based games for candidates to play that produce results designed to predict an applicant's cognitive and personality traits. Other AI platforms claim that they can uncover an applicant's psychology by scanning how the applicant uses language on résumés, social platforms, and other data submitted to the employer. Still others go even further—scanning promotions, tenure, and previous job changes to predict whether an individual is likely to leave their current job, and enabling employers to target suspected job seekers.
Indirect Interaction, Direct Liability
As AI continues to expand into the recruitment and hiring process, companies in the AI space need to be aware of the risk of being deemed an "employer" and potentially liable for employment discrimination. Under most antidiscrimination laws, it is unlawful for an "employer" to discriminate against any individual because of the individual's race, color, religion, sex, national origin, or other protected classification. Critically, many of these laws define "employer" as not only the entity that is the direct employer, but also "any agent of such" an employer. In other words, antidiscrimination statutes can apply not only to those we think of as traditional employers, but also (depending on the circumstances) to those employers' third-party business agents.
Recent court decisions illuminate a path that could impose direct liability for employment discrimination on businesses that provide AI recruiting, interviewing, or screening services for their customers. A touchstone question is: To what extent has the traditional employer delegated to the third-party AI company control over its hiring process?
For example, a federal court decided that an insurance company that provided retirement benefits to an employer's employees was an "agent" of the employer, and thus directly liable to the employer's employees for discrimination. The court explained that federal antidiscrimination statutes are broad enough to apply to any entity that significantly affects an individual's access to employment opportunities—regardless of whether that entity is technically the individual's employer. The application of this principle to AI is straightforward: If an AI company is so involved in an employer's hiring process that it significantly affects who has access to employment, then it could face direct liability for employment discrimination by applicants who are not hired.
At this point, facing the potential risk of direct liability for employment discrimination, an AI company may be thinking: "But what about the indemnification clause in our contract with the employer? That clause makes it clear that the employer is solely liable for any violations of the employment discrimination laws, and it must indemnify us if we are sued." While contractual indemnification provisions are generally enforceable, courts have increasingly deemed them invalid in the context of employment discrimination.
Specifically, in many jurisdictions across the country, courts will refuse to enforce an indemnification clause when the provision simply shifts the burden of complying with antidiscrimination statutes to one party to the contract. There is a presumption against allowing one entity to contract around its antidiscrimination obligations and shift all responsibility to another entity—even if the two parties agreed to do so in the contract. In other words, not only are AI companies at risk of being held directly liable to an employer's rejected applicants, but they also may not be able to shift that liability (and monetary award) back to the employer/customer. In these cases, the rejected applicants can sue both the traditional employer and the third-party AI company that exercised sufficient control over the adverse employment decision.
The trend is inescapable: Employers are clamoring for artificial intelligence that allows them to delegate recruitment and hiring responsibilities to robots. With this trend, however, comes a new frontier of risk—if the robots turn out to be racist, sexist, or ageist, then those producing, selling, and integrating AI into the workplace of another company may find themselves a target of employment discrimination lawsuits. If you or your company have any questions about navigating these new issues, or how to utilize AI in your workplace while reducing legal risk and reputational harm, please contact the authors of this article or any attorney in Venable's Labor and Employment Group.