As the use of artificial intelligence (AI) continues to expand across industries, companies operating around the world must navigate a complex landscape of local and national regulations. These laws govern a wide range of AI applications, from recruitment tools to consumer-facing chatbots.
AI use in the context of staffing and employment poses particularly high risk. Processes that require extra caution include resume screening, interview assistance, and decisions related to hiring and promotions. Keep these considerations in mind as we delve into some of the major related legislation that has been passed in the United States.
At the federal level, civil rights and anti-discrimination laws apply in the employment realm and extend to AI use. The US Equal Employment Opportunity Commission (EEOC) enforces federal laws prohibiting employment discrimination.
In fact, the EEOC reached its first significant settlement involving AI discrimination in hiring in a lawsuit against iTutorGroup in September 2023. The outcome underscores the EEOC’s expectation that technology-based screening processes comply with existing civil rights laws, as well as the scrutiny the growing presence of AI in the workplace is under.
In the first days of the Trump administration, on January 23, 2025, the White House issued an executive order entitled Removing Barriers to American Leadership in Artificial Intelligence (AI EO). It seeks to replace President Biden’s executive order on AI (EO 14110) with President Trump’s AI action plan, intended to ensure that AI systems are “free from ideological bias or engineered social agendas.”
The White House issued a fact sheet asserting that EO 14110 “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.” In response, the Department of Labor (DOL), Equal Employment Opportunity Commission (EEOC), and Office of Federal Contract Compliance Programs (OFCCP) have removed their AI guidance and frameworks, noting that they may now be outdated or not reflective of current policies.
The AI EO calls on the assistant to the president for science and technology, the special advisor for AI and crypto, and the assistant to the president for national security affairs to develop an AI action plan within 180 days of its issuance.
At the state level, many states are still developing legislation in this area and awaiting federal guidance. However, those that have already enacted AI laws offer a framework for others to follow. In this evolving field, governments seek to balance AI’s burgeoning benefits in the workplace with protections for users against potential harm.
The Utah AI Policy Act is the first U.S. state law to impose transparency requirements specifically on the use of Generative AI (GenAI), defined as an artificial system that is trained on data, interacts with humans through text, audio, or visual communication, and generates non-scripted outputs similar to those created by a human, with limited or no human oversight.
In an employment context, GenAI mostly takes the form of chatbots, intelligent recommendation engines, and automated summarization tools.
Its key provisions impose disclosure requirements on companies that use GenAI to interact with consumers.
Furthermore, the Office of AI Policy (OAIP) was recently created within the Utah Department of Commerce (UDOC) and tasked with rulemaking and with creating and overseeing an AI Learning Laboratory Program designed to foster innovation while ensuring responsible use of AI. The program invites companies to apply for temporary “regulatory mitigation” while testing AI products in the market.
The policy also distinctly categorizes synthetic data, a specific output of GenAI, as non-personal so that its usage does not trigger traditional privacy laws. This distinction ultimately provides companies with more flexibility in using synthetic data such as deepfakes and AI-generated visuals to achieve their objectives without infringing on privacy regulations.
Under New York City’s Local Law 144 (NYC 144), an Automated Employment Decision Tool (AEDT) is a tool used to automate employment decisions, a category that includes many platforms and software used in recruitment and hiring. Companies or employment agencies may use an AEDT only if:
(1) the tool has undergone an independent bias audit within one year before its use;
(2) a summary of the most recent bias audit results is publicly available; and
(3) notices are provided to the candidates and employees the tool will screen.
The notices must be clear and conspicuous and must be provided to workers at least 10 working days before the AEDT is used.
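As a quick illustration of that timing rule, the short Python sketch below walks back 10 working days from a planned first-use date to find the latest permissible notice date. It is a minimal sketch, assuming a Monday-to-Friday work week and ignoring holidays, which a real compliance calendar would need to handle.

```python
from datetime import date, timedelta

def latest_notice_date(first_use: date, working_days: int = 10) -> date:
    """Walk back `working_days` weekdays from the planned AEDT first-use date.

    Minimal sketch: treats Mon-Fri as working days and ignores holidays.
    """
    d = first_use
    remaining = working_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# If the AEDT will first be used on Friday, July 18, 2025, notice must go
# out on or before July 4, 2025 (a holiday this simple sketch ignores).
print(latest_notice_date(date(2025, 7, 18)))  # 2025-07-04
```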
Businesses have often argued that human oversight exists in their AEDT usage, circumventing repercussions for potential violations of NYC 144. In response to these loopholes, New York state legislators have proposed broadening the application of such auditing and disclosure requirements to instances where AI merely assists human decision-making, not only those where it plays a predominant role in the process. Legislative efforts to widen the scope of NYC 144 are worth continued monitoring.
Facial recognition technology in job interviews is becoming more common. By analyzing an applicant’s facial expressions, gestures, tone, and word choice, AI systems evaluate traits such as honesty, attitude, and language competence, then generate an overall score for how well the candidate fits the role and the company’s culture.
Maryland’s HB 1202 requires companies to obtain applicants’ consent, via a signed waiver, before using facial recognition services to create a facial template during an interview for employment.
Similar to Maryland’s HB 1202, the Illinois Artificial Intelligence Video Interview Act (AIVIA) regulates the use of AI by companies to analyze video interviews of applicants for jobs based in Illinois. The Illinois legislature requires companies that use AI-based evaluation systems in interviews to:
(1) notify applicants before the interview that AI may be used to analyze their video;
(2) explain how the AI works and what characteristics it uses to evaluate applicants; and
(3) obtain the applicant’s consent before the interview.
Illinois House Bill 3773 requires companies to notify workers when AI is used in employment decisions. The reach of this disclosure is expansive, covering a company’s use of AI in “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” This requirement applies not only to fully automated decision-making but also to other AI applications in employment.
The bill amends the Illinois Human Rights Act to include nondiscrimination provisions, stating explicitly that companies may not use AI in a way that subjects workers to unlawful discrimination based on classes protected under state law.
These prohibitions include using zip codes as a proxy for protected classes in employment decisions, as zip codes can strongly correlate with applicants’ race, national origin, and socioeconomic status.
Their use in AI systems can thus create a bias in employment decisions, as certain racial groups may be unintentionally favored or excluded based on where they live, perpetuating disparities in recruitment, hiring, and promotions.
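To make the proxy risk concrete, here is a minimal Python sketch of the kind of check an auditor might run. All names and data are hypothetical: `zip_code` is a model input, while `race` is collected separately for audit purposes and never fed to the screening model.

```python
import pandas as pd

# Hypothetical audit extract: the screening model saw only zip_code;
# race was collected out-of-band solely for this analysis.
df = pd.DataFrame({
    "zip_code": ["10451", "10451", "10451", "10021", "10021", "10021"],
    "race":     ["B",     "B",     "B",     "W",     "W",     "W"],
    "advanced": [0,        0,       1,       1,       1,       1],
})

# How strongly does zip code predict race? Here each zip maps entirely
# to one group, so any zip-driven pattern in outcomes becomes a
# race-driven pattern in effect.
print(pd.crosstab(df["zip_code"], df["race"], normalize="index"))

# Advancement rate by group exposes the resulting disparity, even though
# race was never a model input.
print(df.groupby("race")["advanced"].mean())
```

A real audit would run the same comparison over the full applicant pool and test many more potential proxies than zip code, but the logic is the same.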
Portland, Oregon’s ban on facial recognition technologies prohibits “private entities” from using them in public places within the city, except:
(1) to the extent necessary to comply with federal, state, or local laws;
(2) for user verification purposes to access the user’s own personal or employer-issued communication and electronic devices; or
(3) in automatic face detection services in social media applications.
The stated reasons for the ban cite the wide ranges in facial recognition accuracy and error rates across race and gender.
In practice, the law means facial recognition technology cannot be used for interviewing or other employment purposes beyond unlocking one’s own personal or employer-issued device, such as a laptop.
The California AI Transparency Act will require “Covered Providers” of GenAI systems to offer users both “latent” disclosures embedded in AI-generated content and the option to include a “manifest” disclosure in such content. Though “Covered Providers” are the producers of GenAI systems, not directly the companies that utilize them, its requirements extend to those companies as well.
Covered Providers must ensure that any third party licensing their GenAI systems maintains these disclosure requirements. Companies utilizing GenAI systems are responsible for including these disclosures; if they fail to do so, Providers must revoke their license within 96 hours.
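For a sense of what the two disclosure modes look like in practice, here is a minimal Python sketch of how a GenAI text output might carry both. The act does not prescribe a format; the function name, metadata fields, and JSON sidecar here are all hypothetical.

```python
import json

def package_output(text: str, provider: str, manifest: bool) -> dict:
    """Attach latent and (optionally) manifest AI disclosures to a text output.

    Hypothetical sketch: field names and the JSON sidecar are illustrative,
    not a format prescribed by the act.
    """
    latent = {"ai_generated": True, "provider": provider}  # machine-readable
    if manifest:  # user-visible label, offered as an option to the user
        text += "\n\n[This content was generated by AI.]"
    return {"content": text, "metadata": json.dumps(latent)}

print(package_output("Quarterly summary ...", "ExampleGenAI", manifest=True))
```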
The Colorado AI Act establishes a duty of reasonable care for those who create high-risk AI tools (“Developers”) and those who use high-risk AI tools (“Deployers”) to protect consumers “from any known or reasonably foreseeable risks” of algorithmic discrimination. An AI tool is automatically considered high-risk if it “makes,” “assists in making,” or is “capable of altering the outcome” of an employment decision. Companies that utilize such tools in an employment context are Deployers. Deployers are required to show reasonable care by:
(1) implementing a risk management policy and program for the tool;
(2) completing impact assessments of the tool; and
(3) notifying consumers when the tool is used to make, or is a substantial factor in making, a consequential decision about them.
For businesses operating in multiple states, navigating the patchwork of AI laws can be particularly challenging. It’s critical to understand the unique requirements of each region, especially when deploying AI for recruitment, talent screening, or even customer service. Given the trends in AI legislation we are seeing, the employment uses of AI to be especially wary of include resume and applicant screening, AI-assisted video interviews and facial recognition, and automated tools that make or influence hiring and promotion decisions.
Though AI employment laws vary across the U.S., themes of transparency and anti-discrimination remain constant. Businesses should embrace transparency around their AI systems, informing affected individuals whenever AI-generated content or communications are present, and providing conspicuous notices explaining how worker and consumer data is used.
Furthermore, conducting audits and bias assessments on any AI-enabled tools is essential to determining whether they adversely impact classes protected under state and federal law; a simple version of such a check is sketched below.
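One widely used starting point for such an assessment is the EEOC’s four-fifths guideline: a group’s selection rate below 80% of the highest group’s rate is treated as evidence of adverse impact. The Python sketch below applies that test to hypothetical screening tallies; the numbers and group labels are invented for illustration.

```python
import pandas as pd

# Hypothetical outcomes from an AI screening tool, tallied by group.
outcomes = pd.DataFrame({
    "group":    ["A", "B"],
    "applied":  [200, 150],
    "selected": [ 90,  45],
})

# Selection rate per group, then each rate relative to the best-off group.
outcomes["rate"] = outcomes["selected"] / outcomes["applied"]
outcomes["impact_ratio"] = outcomes["rate"] / outcomes["rate"].max()

# Four-fifths test: an impact ratio under 0.8 flags potential adverse impact.
outcomes["flagged"] = outcomes["impact_ratio"] < 0.8

print(outcomes)  # group B: 0.30 vs 0.45 -> ratio ~0.67, flagged
```

A flag is a trigger for deeper review rather than proof of a violation; NYC 144’s bias audits rely on a similar impact-ratio calculation.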
TCWGlobal remains committed to providing compliant staffing and employer of record solutions! We stay ahead of all the evolving AI legislation and are here to help with our comprehensive contingent workforce management solutions.
For more information about opt-outs, accommodations, and the general scope of these laws, check out this chart here, encompassing both AI and biometric data laws around the world.
For international companies, don’t forget to check out our separate blog post on AI laws outside of the U.S. here.