I ponder the term “AI,” aka “artificial intelligence.” From pocket calculators to big-data Google search engines, what is artificial about software that reliably provides shortcuts to ponderous Old School long division or weeks-long research among the library stacks? “Automated” would be more apt, though the acronym stays the same either way.
AI is likely in every workplace save for the occasional sidewalk lemonade stand: timekeeping programs, payroll software, online recruitment and job application processes, etc. Systems improvements continue to accelerate, among the marvels of our Digital Age.
Yet, the Equal Employment Opportunity Commission (EEOC) reminds employers that while AI “may be evolving, anti-discrimination laws still apply. The EEOC will address workplace bias that violates federal civil rights laws regardless of the form it takes.” EEOC Chair Charlotte Burrows reiterates: “As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality.”
Thus was born in 2021 the agency’s Artificial Intelligence and Algorithmic Fairness Initiative (Initiative) to examine workplace AI more closely and “guide employers, employees, job applicants, and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.”
The EEOC has since issued a May 12, 2022, guidance memo on the application of the Americans with Disabilities Act in assessing job applicants and employees and, this month, its technical assistance document “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (May 18, 2023).
The Initiative’s prime concern is whether AI systems “for employment decisions, including recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, dismissal, and referral” have “a disproportionately large negative effect” (so-called disparate impact or adverse impact) on any classification protected by Title VII, including race, religion, gender, national origin and many others.
Unlawful disparate or adverse impact discrimination is unintentional but nevertheless actionable under the law. For example, while an AI system to process job applicants is clearly a significant time and cost saver, the framing of its screening questions might over time inadvertently disfavor one ethnic or racial group over another.
Takeaway:
The AI programs available to employers are numerous and multiplying rapidly. See, e.g., Top 19 HR Management Software for HR Managers in 2023, May 2, 2023.
While the agency’s Initiative and ensuing pronouncements are long on an employer’s ultimate responsibility to ensure its automated human resources decision software is non-discriminatory (including the duty not simply to take any vendor’s word for it), the EEOC is very short on any sample system that would pass its muster.
Rather, referring to its 2007 memo on selection procedures and a 2006 issue on detecting race and color discrimination, the agency “encourages employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently. Generally, employers can proactively change the practice going forward.”
In other words, employers are on their own, remaining responsible to monitor the fairness of their hiring practices, and should not presume their digital efficiency tools are intelligent enough to do the job for them.
See also:
- Age Discrimination: Employee’s Suspicion Not Enough, Older Worker Needs More than Her Belief of Unequal Treatment (March 10, 2023)
- Why Job Descriptions Matter (April 4, 2018)
- Pre-Employment Testing – Inquiries are Limited to Job-Related Skills and Qualities (June 2, 2011)
Tim Bowles
May 26, 2023