
AI in the Workplace: Understanding the Growing Patchwork of Employment Laws and Regulations in The U.S.

Written by Ariana Naranjo | Feb 21, 2025 6:45:16 PM

As the use of artificial intelligence (AI) continues to expand across industries, companies operating all over the world must navigate a complex landscape of local and national regulations. These laws govern a wide scope of AI advancements in an employment context, from recruitment tools to consumer-facing chatbots.

In This Article:

  1. Utah’s AI Policy Act (SB 149) went into effect May 1, 2024
  2. New York City’s Local Law 144 took effect July 5, 2023
  3. Maryland’s Facial Recognition Law took effect October 1, 2020
  4. Illinois’ AI Video Interview Act (AIVIA) took effect January 1, 2020
  5. Illinois New AI Law becomes effective January 1, 2026 (HB 3773)
  6. Portland’s Facial Recognition Ban took effect January 1, 2021
  7. California’s AI Transparency Act (SB 942) set to take effect January 1, 2026
  8. Colorado’s AI Legislation (SB 24-205) goes into effect February 1, 2026
  9. Key Takeaways

AI use in the context of staffing and employment poses particularly high risk. Processes that require extra caution include resume screening, interview assistance, and decisions related to hiring and promotions. Keep these considerations in mind as we delve into some of the major related legislation that has been passed in the United States.

At the federal level, civil rights and anti-discrimination laws apply in the employment realm and extend to AI use. The U.S. Equal Employment Opportunity Commission (EEOC) enforces federal laws prohibiting employment discrimination.

In fact, the agency reached its first significant AI-discrimination settlement in September 2023, in a hiring lawsuit against iTutorGroup. The outcome highlights the EEOC’s expectation that technology-based screening processes comply with existing civil rights laws, as well as the scrutiny the growing presence of AI in the workplace is under.

In the first days of the Trump administration, on January 23, 2025, the White House issued an executive order entitled Removing Barriers to American Leadership in Artificial Intelligence (AI EO), seeking to replace President Biden’s executive order on AI (EO 14110) with President Trump’s AI action plan to ensure that AI systems are “free from ideological bias or engineered social agendas.”

The White House issued a fact sheet asserting that EO 14110 “hindered AI innovation and imposed onerous and unnecessary government control over the development of AI.” In response, the Department of Labor (DOL), Equal Employment Opportunity Commission (EEOC), and Office of Federal Contract Compliance Programs (OFCCP) have removed their AI guidance and frameworks, noting that they may now be outdated or not reflective of current policies.

The AI EO directs the assistant to the president for science and technology, the special advisor for AI and crypto, and the assistant to the president for national security affairs to develop an AI action plan within 180 days of its issuance.

At the state level, many states are still developing legislation in this area and awaiting federal guidance. However, those that have already enacted AI laws offer a framework for others to follow. As an evolving field, governments seek to balance AI’s burgeoning benefits in the workplace with protection for its users against potential harm.

 

Utah’s AI Policy Act (SB 149) went into effect May 1, 2024

The Utah AI Policy Act is the first U.S. state law to impose transparency requirements specifically on the use of Generative AI (GenAI), defined as an artificial system that is trained on data, interacts with humans through text, audio, or visual means, and generates non-scripted outputs similar to those a human might create, with limited or no human oversight.

In an employment context, the bulk of GenAI use takes the form of chatbots, intelligent recommendation engines, and automated summarization.

Its key provisions include requirements for companies that use GenAI to:

  • Conspicuously disclose use of GenAI
    • “Regulated occupations,” meaning businesses that are required to obtain a license in Utah to operate, must disclose to users that they are interacting with a GenAI tool prior to any communication.

    • Businesses outside of regulated occupations are to disclose that a user is interacting with GenAI, and not a human, if asked or prompted by the user.

  • Ensure that AI applications do not discriminate or harm individuals

  • Be held accountable for what their chatbots generate as output. In other words, the company is responsible for violations caused by its GenAI applications.

 

Furthermore, the Office of AI Policy (OAIP) was recently created within the Utah Department of Commerce (UDOC) and tasked with rulemaking and with the creation and oversight of an AI Learning Laboratory Program designed to foster innovation while ensuring responsible use of AI. The program invites companies to apply for temporary “regulatory mitigation” while testing AI products in the market.

 

The policy also distinctly categorizes synthetic data, a specific output of GenAI, as non-personal so that its usage does not trigger traditional privacy laws. This distinction ultimately provides companies with more flexibility in using synthetic data such as deepfakes and AI-generated visuals to achieve their objectives without infringing on privacy regulations.


New York City’s Local Law 144 took effect July 5, 2023

An Automated Employment Decision Tool (AEDT) automates employment decisions and includes many platforms and software used in recruitment and hiring. Under Local Law 144, companies or employment agencies may use an AEDT only if:

  • The tool has been audited for bias by an independent auditor within one year before its use

  • The audit is repeated annually by an independent auditor

  • Companies publish a public summary of the audit, and

  • Companies provide notices to applicants/workers subject to the tool’s screening that include
     
    • A statement that an AEDT will be used to evaluate the candidate
    • Information about the qualifications the tool assesses
    • The types of data the business collects for the AEDT
    • Its data retention policy

 

The notices must be clear and conspicuous and are to be provided to workers at least 10 working days before the AEDT is used.
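The annual bias audits above center on reporting impact ratios: under the implementing rules, each category’s selection rate is divided by the rate of the most-selected category, and ratios well below 1.0 signal potential adverse impact. A minimal sketch of that arithmetic, with hypothetical group names and counts:

```python
# Minimal sketch of the impact-ratio arithmetic reported in Local Law 144
# bias audits: each category's selection rate divided by the selection rate
# of the most-selected category. Group names and counts are hypothetical.

def impact_ratios(outcomes):
    """outcomes maps category -> (number_selected, total_applicants)."""
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}

audit = impact_ratios({
    "group_a": (60, 100),  # 60% selection rate -> ratio 1.0
    "group_b": (30, 100),  # 30% selection rate -> ratio 0.5
})
print(audit)
```

A real audit covers the sex, race, and ethnicity categories the rules enumerate; this sketch only illustrates the ratio calculation itself.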

Businesses have often argued that human oversight exists in AEDT usage, circumventing repercussions for potential violations of Local Law 144. In response to this loophole, New York State legislators have proposed broadening the auditing and disclosure requirements to instances where AI merely assists in human decision-making, rather than only when it plays a predominant role in the process.

We should continue monitoring legislative efforts taken to widen the scope of NYC 144.

 

Maryland’s Facial Recognition Law took effect October 1, 2020

Facial recognition technology in job interviews is becoming more common. By analyzing an applicant’s facial expressions, gestures, tone, and word choice, AI systems evaluate traits such as honesty, attitude, and language competence, then generate an overall score for how well the candidate fits the role and culture of the company.

Maryland’s HB 1202 requires companies to obtain consent from applicants by having them sign a waiver for the use of facial recognition services to create a facial template during their interview for employment.

 

Illinois’ AI Video Interview Act (AIVIA) took effect January 1, 2020

Similar to Maryland’s HB 1202, AIVIA regulates the use of AI by companies to analyze video interviews of applicants for jobs based in Illinois. The Illinois legislature requires companies that use AI-based evaluation systems in interviews to:

  • Notify the applicant in advance that AI will be used to consider their “fitness” for a role;

  • Obtain consent from the applicant in advance;

  • Explain to the applicant the characteristics the technology considers in its evaluation;

  • Limit the distribution and sharing of the video to only those persons “whose expertise or technology” is necessary to evaluate the applicant;

  • Destroy the video and all copies within 30 days upon request by the applicant.


Illinois New AI Law becomes effective January 1, 2026 (HB 3773)

This law requires companies to notify workers when AI is used in employment decisions. The reach of this disclosure is expansive, covering a company’s use of AI in “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” The requirement applies not only to fully automated decision-making but also to other AI applications in employment.

House Bill 3773 amends the Illinois Human Rights Act to include nondiscrimination provisions containing an explicit statement that companies may not use AI in a way that subjects workers to unlawful discrimination based upon legally protected classes under state law.

Among these prohibitions is the use of ZIP codes as a proxy for protected classes in employment decisions, as ZIP codes can strongly correlate with applicants’ race, national origin, and socioeconomic status.

Their use in AI systems can thus create a bias in employment decisions, as certain racial groups may be unintentionally favored or excluded based on where they live, perpetuating disparities in recruitment, hiring, and promotions.

 

Portland’s Facial Recognition Ban took effect January 1, 2021

Under this law, “private entities” are prohibited from using facial recognition technologies in public places within Portland, Oregon, except:

(1) to the extent necessary to comply with federal, state, or local laws;

(2) for user verification purposes to access the user’s own personal or employer-issued communication and electronic devices; or

(3) in automatic face detection services in social media applications. Cited reasons for the ban include facial recognition’s wide-ranging accuracy and error rates, which differ by race and gender.

 

The law indicates that facial recognition technology cannot be used for interviewing or other employment purposes outside the scope of unlocking one’s personal electronic device, such as a laptop.


California’s AI Transparency Act (SB 942) set to take effect January 1, 2026

The act will require “Covered Providers” of GenAI systems to embed “latent” disclosures in AI-generated content and to offer users the option to include a “manifest” disclosure in such content. Although “Covered Providers” are the producers of GenAI systems rather than the companies that use them, the requirements extend to companies that license and deploy these systems.

  • Manifest disclosures are clear and conspicuous: easily perceived, understood, and recognized by a natural person.

  • Latent disclosures, on the other hand, are embedded in the metadata of AI-generated content. They must convey the covered provider’s name, the name and version of the GenAI system, the time and date of the content’s creation or alteration, and a unique identifier. They must also be detectable by an AI detection tool.
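The latent-disclosure fields above can be pictured as a small provenance record carried in content metadata. A hedged sketch, assuming illustrative field names (the act specifies what information must be conveyed, not a concrete format):

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative provenance record carrying the information SB 942 requires in
# a latent disclosure. Field names here are hypothetical: the act specifies
# the disclosure's content, not a wire format.
def latent_disclosure(provider_name, system_name, system_version):
    return {
        "provider_name": provider_name,
        "system_name": system_name,
        "system_version": system_version,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_id": str(uuid.uuid4()),  # unique identifier per content item
    }

record = latent_disclosure("ExampleAI", "ExampleGen", "2.1")
print(json.dumps(record, indent=2))
```

In practice such a record would be written into the content’s metadata (e.g., image or document metadata fields) so that detection tools can read it back out.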

 

Covered Providers must ensure that any third party licensing their GenAI systems maintains these disclosure requirements. If a licensee fails to maintain the disclosures, the Provider must revoke its license within 96 hours.

 

Colorado’s AI Legislation (SB 24-205) goes into effect February 1, 2026

The act establishes a duty of reasonable care for those who create high-risk AI tools (“Developers”) and those who use them (“Deployers”) to protect consumers “from any known or reasonably foreseeable risks” of algorithmic discrimination. An AI tool is automatically considered high-risk if it “makes,” “assists in making,” or is “capable of altering the outcome” of an employment decision. Companies that use such tools in an employment context are Deployers. Deployers are required to show reasonable care by:

  • Maintaining documentation of their efforts to analyze and mitigate the potential discriminatory impact of each AI tool, including documentation of the tool’s purpose, intended benefits, high-level summaries of its training data, descriptions of anti-bias testing, and instructions on how to use and monitor the tool.
  • Conducting an “impact assessment” of the AI tool annually at minimum and within 90 days of any “deliberate change” to the tool that “results in any new reasonably foreseeable risk” of discrimination. Deployers may hire a third party to conduct this assessment.

 

Key Takeaways

For businesses operating in multiple states, navigating the patchwork of AI laws can be particularly challenging. It’s critical to understand the unique requirements of each region, especially when deploying AI for recruitment, talent screening, or even customer service. With the trends in AI legislation we are seeing, these are employment areas of AI implementation to be especially wary of:

  • Resume Screening: AI tools that automate resume screening must be carefully monitored to ensure they don’t unintentionally exclude qualified candidates from protected classes, potentially violating anti-discrimination laws in various jurisdictions.

  • Interview Processes: The use of AI in evaluating candidates during interviews—whether through chatbots or video analysis—raises concerns about transparency and the potential for unintentional bias in decision-making. It’s important to ensure these systems are non-discriminatory and meet legal standards in the regions where they are deployed.

  • Ongoing Employment Decisions (Hiring, Promotions, and Performance Management): As AI becomes more involved in evaluating employee performance and making promotion recommendations, it’s crucial to ensure these tools maintain fairness and are free from bias, particularly in a globally diverse workforce.

  • Job Description Creation and Recruitment Marketing: AI tools used to generate job descriptions or recruitment materials should be reviewed regularly to prevent biased language or criteria that could discourage candidates from underrepresented groups from applying, helping to avoid exclusionary practices.
  • Employee Development and Training: When AI systems are used for managing employee development, training, or performance evaluations, it’s important that these tools are continuously audited to ensure they promote equitable opportunities for all employees and do not inadvertently reinforce bias.

 

Though laws for AI in employment are diverse around the U.S., themes of transparency and anti-discrimination remain constant across the board. Businesses should embrace transparency concerning the use of AI systems to inform affected individuals when AI-generated content or communications are present. It is imperative that companies inform workers and consumers how they use their data by providing conspicuous notices to all affected individuals.

Furthermore, conducting audits and bias assessments of any AI-enabled tools is essential to determining whether they adversely impact individuals in classes protected under state and federal law.

 

Need Help?

TCWGlobal remains committed to providing compliant staffing and employer of record solutions! We stay ahead of all the evolving AI legislation and are here to help with our comprehensive contingent workforce management solutions.

For more information about opt-outs, accommodations, and the general scope of these laws, check out our chart encompassing both AI and biometric data laws around the world.

For international companies, don’t forget to check out our separate blog post on AI laws outside of the U.S. here.