How to Avoid Discrimination When Using Artificial Intelligence to Hire Employees

In previous blogs, we’ve discussed how using Artificial Intelligence (AI) technology can considerably streamline your recruiting process. With just a few clicks, you can set up parameters that sort hundreds, if not thousands, of resumes, meaning your beleaguered HR person needs to look at only a dozen of the best candidates and interview the top two or three (and even then, you can use AI to do the interviewing too!). However, entering those same search terms can inadvertently introduce bias into your hiring process, leaving you with a homogenous crew in terms of color, culture, background, and even talent.

To avoid these pitfalls, lawmakers have introduced several measures to help reduce the risk of discrimination when using AI in hiring. Specifically, employers must abide by the anti-discrimination laws enforced by the Equal Employment Opportunity Commission (EEOC), including Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), the Genetic Information Nondiscrimination Act of 2008 (GINA), and the Americans with Disabilities Act of 1990 (ADA). The EEOC, for example, defines disparate treatment as “when an employer treats some individuals less favorably than other similarly situated individuals because of their race, color, religion, sex, or national origin,” and notes that in order to prove that they have been discriminated against, the accuser “must establish that respondent’s actions were based on a discriminatory motive.” The policies also call attention to the notion of disparate impact, whereby “discrimination results from neutral employment policies and practices which are applied evenhandedly to all employees and applicants, but which have the effect of disproportionately excluding women and/or minorities.” Thus, to avoid claims of disparate treatment or disparate impact, employers must establish methods of validation for any employee screening tool, set defensible pre-employment testing scores, and document the validity of the tool they are using. In short, employers need to be able to defend their AI and produce that documentation, should they ever be hauled in front of a judge.

On a more local level, several states have introduced legislation aimed at evening out the AI playing field. The list of applicable laws is ever-growing, but some highlights include laws governing the use of facial recognition software on employment applications (Maryland and Illinois); requiring employers to disclose their use of AI and electronic monitoring to employees (Connecticut); and shoring up data privacy rules to ensure information gathered via AI is secure in the employment setting (California). Again, state laws are often somewhat of a moving target, but here’s a nice roundup of state laws regarding AI use as of this September.

At the recent SHRM Annual Conference & Expo 2021, Jennifer Betts, an attorney with Ogletree Deakins, hosted a session titled “AI, the Right Way: Avoiding Employment Discrimination with Artificial Intelligence.” During the session, Betts noted that “artificial intelligence itself is neither inherently good nor inherently bad. It’s critical to remember that AI’s effectiveness is all about how the AI and bots are programmed and maintained, not the concept of AI itself.” To that end, she recommends that employers seeking to use AI adhere to the following best practices:

  • Work closely with legal and human resources staff to develop an AI strategy for each use case in which you intend to apply AI.
  • Incorporate human review of all AI-assisted decision making to try to reduce the risk of bias.
  • Plan to disclose AI use to, and obtain consent from, employees and even potential job candidates in settings where AI is used.
  • Select an AI vendor that is dedicated to preventing discrimination through an inclusive approach to the design of its programming, and ask pointed questions about how it prevents bias: whether it performs audits, retests for disparate impact over time, and uses diversity consultants or similar staff when developing its tools.
  • Insist on performing both regular data audits and external validation studies conducted by a third party to help ensure that discriminatory practices aren’t creeping in.
  • Shore up your security so that any data collected or accessed during the AI process is protected as required by federal and state law (whichever is more stringent).
  • And from an HR perspective, review the search terms you are using to ensure they aren’t inviting bias or otherwise excluding the qualified candidates you would want in the role. 
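To make the disparate-impact retesting above concrete: one common yardstick in this area is the EEOC’s “four-fifths rule,” under which a selection rate for any group that is less than 80% of the rate for the highest-selected group may indicate adverse impact. Here is a minimal sketch of such a check, assuming a hypothetical audit where you can count how many applicants from each group your AI screener advanced (the function names and the audit numbers are illustrative, not from any real tool):

```python
# Hypothetical disparate-impact check based on the EEOC "four-fifths rule":
# flag any group whose selection rate is below 80% of the highest group's rate.

def four_fifths_check(groups):
    """groups: dict mapping group name -> (advanced, applicants).
    Returns (flagged, ratios): flagged lists groups whose selection
    rate falls below 80% of the top group's rate; ratios maps each
    group to its rate relative to the top rate."""
    rates = {g: advanced / applicants for g, (advanced, applicants) in groups.items()}
    top = max(rates.values())
    ratios = {g: rate / top for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return flagged, ratios

# Made-up audit numbers for illustration:
audit = {
    "Group A": (48, 100),  # 48% of applicants advanced by the screener
    "Group B": (30, 100),  # 30% advanced
}
flagged, ratios = four_fifths_check(audit)
print(flagged)  # ['Group B'] -- 0.30 / 0.48 = 0.625, below the 0.8 threshold
```

A check like this is only a first-pass screen, not a legal conclusion; the point is that it gives you a number you can re-run after every model update and keep in the documentation file mentioned earlier.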

Do you use AI in your hiring process? Have you found it streamlines the process for you, or does it do more harm than good?