EEOC settles its first AI discrimination lawsuit

The action highlights an emerging area of legal and compliance risk for businesses.

iTutorGroup Inc., a China-based company that provides English-language tutoring to students in China, was ordered to pay $365,000 to approximately 200 rejected job seekers age 40 and over, according to a consent decree filed in the US District Court for the Eastern District of New York.

The settlement resolves the Equal Employment Opportunity Commission's (EEOC) first-ever lawsuit over artificial intelligence (AI) discrimination in hiring. The agency alleged that the tutoring company programmed its recruitment software to automatically reject older applicants.

Specifically, the EEOC said that, in 2020, iTutorGroup programmed its online recruitment software to screen out female applicants aged 55 or older and male applicants aged 60 or older. The practice came to light when a rejected applicant reapplied with a different birth date on the online application form.

Under the consent decree, iTutorGroup is prohibited from rejecting tutor applicants based on sex or age, must adopt anti-discrimination policies, and must conduct anti-discrimination training. The company must also invite all applicants who were allegedly rejected because of their age in March and April 2020 to reapply.

Examining AI for evidence of discrimination in employment practices is a distinct part of the EEOC’s current Strategic Enforcement Plan.

Local and federal lawmaking

Employers using workplace AI tools are now on notice: the EEOC's enforcement team has secured its first settlement in a case alleging discrimination arising from reliance on AI and machine-learning tools for hiring and other workplace decisions.

The rapid evolution of ChatGPT and other generative AI tools has created concern, backed by evidence, that such technology can perpetuate bias.

In April, regulators from four agencies across the Biden administration unveiled a plan to enforce existing civil rights laws against AI systems that perpetuate discrimination.

And at President Biden's request, the White House has been working with seven leading US artificial intelligence companies, which have agreed to put new AI products through internal and external tests before their release. In July, executives from the companies, including Amazon.com Inc., Alphabet Inc. and Meta Platforms Inc., also promised to allow outside teams to probe their systems for security defects and risks to consumer privacy.

Data gathering

Legislation introduced in the New York State Senate earlier this month would place stiff restrictions on artificial intelligence and on how businesses gather data about employees.

The legislation (S7623) would ban AI-only decision-making for hiring and other employer decisions, while allowing certain types of electronic monitoring to continue if particular conditions are met. Businesses would also face new limits on the data they gather about workers. The bill essentially requires employers to prove that the algorithms they use don't discriminate against current or prospective employees.

(New York City got out to an early lead and has already implemented a new law to provide more transparency around the use of AI in employment decisions.)

California has two bills to regulate the use of AI pending in the Senate and Assembly right now. In May, the Assembly also proposed an Assembly Joint Resolution urging the US government to impose an immediate moratorium of at least six months on the training of AI systems more powerful than GPT-4, to allow time to develop AI governance systems.

US races with China

Representative Frank Lucas, the Oklahoma Republican who chairs the House Science, Space and Technology Committee, opened a hearing in June by saying he recognizes the need for Congress to set some guardrails for AI. But he also warned that any regulation must also promote innovation — especially as the US races China to develop machine learning prowess.

This is the balancing act that lawmakers, regulators, adjudicators, corporate executives and even the consuming public must now navigate, and it will remain a top compliance challenge globally.