Navigating the Evolving Landscape of International AI Laws: Compliance

Post by Ariana Naranjo
February 21, 2025

 

Artificial Intelligence (AI) is transforming industries worldwide, becoming an integral part of everyday business operations. As AI continues to evolve, governments across the globe are rapidly implementing regulations to address the potential risks that come with its widespread use.

Covered in this post:

  1. European Union
  2. United Kingdom
  3. Canada
  4. Australia
  5. New Zealand
  6. Puerto Rico
  7. Key Takeaways

However, due to the fast-paced development of AI technology, the legal landscape is still unfolding, with laws and guidelines varying greatly by region. For businesses navigating these changes, staying informed is crucial to ensuring compliance and harnessing the benefits of AI responsibly.

In this blog, we will focus on some key AI regulations in jurisdictions outside of the United States, offering insights into how countries around the world are approaching AI governance, specifically in an employment context. By understanding these rules and implementing proactive measures, your business can remain ahead of the curve as new AI legislation emerges globally.

 

European Union

The EU AI Act went into effect August 1, 2024. Enforcement of some of its provisions began February 2, 2025, with the remainder staggered over the next two years.

The EU AI Act serves as the world's first comprehensive AI law, intended to be governed through one standalone regulator (the European AI Board). The law applies a risk-based approach, separating AI systems into different risk levels, a model other countries around the world have since adopted.

  • Unacceptable-risk AI systems are considered a threat to people and are prohibited outright. This includes social scoring, manipulative AI, and similar uses.

  • High-risk AI systems that negatively affect safety or fundamental rights will have to be registered in an EU database.

  • Limited-risk AI systems, such as chatbots and deepfakes, will have to comply with transparency requirements and EU copyright law.

The act requires transparency at every risk level, with more stringent obligations applying to higher-risk systems. The AI Act automatically categorizes any AI system used to assist in employment decisions as high-risk. This includes AI systems used for recruitment and selection as well as AI tools that influence key management decisions, such as promotions, terminations, and performance evaluations.

There are few exceptions to the high-risk classification of AI in employment, namely when the AI system neither poses a significant risk of harm to individuals nor influences the outcome of decision-making. When a provider claims such an exception, it must document how it reached that lower-risk assessment.

The act sets out onerous obligations that providers of high-risk AI systems must meet before their systems reach the market. Less burdensome requirements apply to "deployers" of high-risk AI systems, the category most employers fall under. Under the Act, deployers must:

  • Give clear notice to impacted workers that they will be subject to a high-risk AI system
  • Provide detailed information on how the AI system works, including its limitations and potential biases
  • Use the system in accordance with the provider’s instructions
  • Guarantee human oversight of the system so that it does not substitute for human judgment
  • Ensure input data over which they have control is suitable for intended use
  • Monitor the system and flag incidents to the provider
  • Save logs if under their control
  • Carry out a fundamental rights impact assessment

The AI Office will be established within the European Commission to ensure proper implementation of and compliance with these regulations by conducting evaluations and responding to complaints of infringement.

The ban on unacceptable-risk AI systems became enforceable on February 2, 2025. Enforcement of the requirements for high-risk AI systems will be phased in between August 2, 2026 and August 2, 2027.

 

United Kingdom

The UK has opted for a decentralized, pro-innovation strategy when it comes to AI regulation, leaving oversight primarily to existing regulators. However, recent guidance on Responsible AI in Recruitment from the UK Department for Science, Innovation, and Technology was published in March 2024, signaling the UK’s recognition of the heightened risks AI presents when used in employment decisions.

Designed for non-technical audiences, the guide provides actionable steps for integrating AI into recruitment. Companies are encouraged to implement mechanisms such as bias audits and performance tests to address potential risks. It also details numerous examples of use cases and the risks that apply to each as they pertain to sourcing, screening, and interviewing.

While the guidance is voluntary, it’s a helpful tool for companies looking to stay ahead of future regulations as AI-driven hiring becomes more widespread.

 

 

Canada

Currently, Canada has no nationwide AI-specific regulatory laws. However, existing human rights and anti-discrimination laws, such as the Canadian Human Rights Act, intersect with the use of AI in Canada by prohibiting discrimination in this area.

Quebec's Bill 64 addressing automated decision-making processes took effect in September 2023

The Quebec government has adopted Bill 64 (also known as Law 25), An act to modernize legislative provisions as regards the protection of personal information, imposing strict privacy obligations on companies operating in Quebec or with websites accessible to Quebec visitors. For AI in employment specifically, Section 12.1 of the Act establishes transparency and consent requirements for systems used to make entirely "automated decisions" about an individual, a term broad enough to cover everything from simple binary rules to intricate AI algorithms that generate hiring suggestions.

Under Section 12.1 of the law, companies must:

  • Gain opt-in consent for use of personal data in an automated decision-making process;
  • Provide a means for individuals to submit questions, comments, or complaints to a representative to review the decision;
  • Allow people to request correction of the personal information used in the decision; and
  • Inform the individual, upon request, of the personal information used, the justifications for how the decision was made, and the individual’s right to correct personal information used.

The transparency rights apply to individuals both within an organization (workers) and outside an organization (such as customers).

 

Ontario’s Disclosure of AI Usage in Job Postings goes into effect January 1, 2026

At the provincial level, Ontario recently amended the Employment Standards Act (ESA) through the Working for Workers Four Act, 2024 (Bill 149), requiring companies with 25 or more workers to disclose in publicly advertised job postings whether AI is being used to screen, assess, or select candidates. The change is aimed at giving jobseekers greater certainty in the hiring process while still allowing companies to benefit from the use of AI.

Some specifics of the amendment are still being worked out, as the Ontario Ministry of Labour, Immigration, Training and Skills Development has requested feedback on whether there should be exemptions to this requirement, and whether an employer’s stated use of AI in the hiring process would deter job applicants from applying for a role.

 

 

Australia

Australia has not yet enacted any statutes or regulations that directly apply to AI, though it has published voluntary guidelines. The government announced the establishment of a new AI Expert Group to assist the Commonwealth Department of Industry, Science and Resources as it addresses the need for "safe and responsible AI in Australia."

The Voluntary AI Safety Standard, consisting of ten guardrails, was published in August 2024, giving organizations a chance to get ahead of coming AI legislation by adopting the guidelines now. The guardrails include:

  1. Accountability – Clear roles and responsibilities
  2. Risk Management – Identify and plan for risks
  3. Data Governance – Manage data properly
  4. Testing and Monitoring – Regularly check AI systems
  5. Human Oversight – Keep humans in the loop
  6. Transparency – Be open about AI use
  7. Contestability – Allow challenges to AI decisions
  8. Supply Chain Transparency – Understand third-party AI tools
  9. Record Keeping – Maintain documentation
  10. Stakeholder Engagement – Consider AI’s impact on all groups

In September 2024, the Australian government published proposed mandatory guardrails for AI in high-risk settings, applicable to both developers and deployers. Nine of the ten proposed guardrails correspond to those in the Voluntary AI Safety Standard.

The Australian government completed a consultation on the proposed mandatory guardrails in October 2024 and is now considering the responses to inform its next steps on AI regulation.

Companies should begin implementing the Voluntary AI Safety Standard now so that they are well positioned when Australia enacts formal legislation in this arena. Maintaining human oversight of AI use and mitigating bias risks through measures such as audits and testing are key to compliance in Australia.

 

New Zealand

New Zealand does not currently have comprehensive AI regulation, but the Privacy Act 2020 applies to the use of AI in the country. The Office of the Privacy Commissioner (OPC) has issued guidance on complying with privacy law when using AI tools, placing particular emphasis on conducting Privacy Impact Assessments (PIAs) before deploying AI tools to ensure compliance with the Privacy Act.

New Zealand’s approach to AI regulation has been outlined in a Cabinet Paper published on June 26, 2024 by the Minister of Science, Innovation and Technology. This paper emphasizes the government’s desire to increase AI use throughout New Zealand to boost productivity and economic growth through a “light-touch, proportionate and risk-based approach.”

Given New Zealand's intention to follow a similar risk-based approach, companies could benefit from looking to jurisdictions leading in AI legislation, such as the EU with its AI Act, and implementing the precautions that appear consistent across the globe. These consistencies center on transparency and anti-discrimination, achieved through notices, disclosures, and risk assessments.

 

 

Puerto Rico

Puerto Rico does not currently have its own AI-specific regulations. However, as a U.S. territory, it must adhere to federal laws regarding AI, data privacy, and workplace discrimination.

For AI laws specifically in the United States, check out our blog here (insert link).

 

Key Takeaways

As we look to the future of AI, it’s important to recognize that the rules governing AI can differ greatly from one country to another. Many governments are still figuring out how to regulate AI, aiming to strike a balance between fostering innovation and minimizing risks.

This uncertainty makes it difficult for companies to predict how AI will evolve in the coming years, especially as regulations continue to change globally. Given the dynamic nature of AI regulation, it’s essential that companies remain cautious and proactive in their approach to utilization of AI as it relates to engagement of talent.

Given the trends in AI legislation we are seeing, the following employment-related uses of AI warrant particular caution:

  • Resume Screening: AI tools that automate resume screening must be carefully monitored to ensure they don’t unintentionally exclude qualified candidates from protected classes, potentially violating anti-discrimination laws in various jurisdictions.

  • Interview Processes: The use of AI in evaluating candidates during interviews—whether through chatbots or video analysis—raises concerns about transparency and the potential for unintentional bias in decision-making. It’s important to ensure these systems are non-discriminatory and meet legal standards in the regions where they are deployed.

  • Ongoing Employment Decisions (Hiring, Promotions, and Performance Management): As AI becomes more involved in evaluating employee performance and making promotion recommendations, it’s crucial to ensure these tools maintain fairness and are free from bias, particularly in a globally diverse workforce.

  • Job Description Creation and Recruitment Marketing: AI tools used to generate job descriptions or recruitment materials should be reviewed regularly to prevent biased language or criteria that could discourage candidates from underrepresented groups from applying, helping to avoid exclusionary practices.

  • Employee Development and Training: When AI systems are used for managing employee development, training, or performance evaluations, it’s important that these tools are continuously audited to ensure they promote equitable opportunities for all employees and do not inadvertently reinforce bias.

 

Final Thoughts

Though laws governing AI in employment vary around the world, the emphasis on transparency and anti-discrimination remains constant. Businesses should embrace transparency around their use of AI systems, informing affected individuals when AI-generated content or communications are present. It is imperative that companies tell workers and consumers how their data is used by providing conspicuous notices to all affected individuals.

Furthermore, conducting audits and bias assessments of any AI-enabled tools is essential to determining whether they adversely impact protected classes under applicable law.
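To make the idea of a bias audit concrete, the sketch below applies the "four-fifths" adverse-impact ratio, a rule of thumb from US employment guidance that flags a group whose selection rate falls below 80% of the highest group's rate. This is an illustrative calculation only, not a legally sufficient audit: thresholds, required statistics, and protected categories vary by jurisdiction, and all group names and counts here are invented.

```python
# Illustrative adverse-impact check for an AI screening tool.
# Flags any group whose selection rate falls below 80% of the
# highest group's rate (the "four-fifths" rule of thumb).
# Hypothetical data; not legal advice.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Returns group -> (impact ratio vs. best group, flagged?)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top < threshold) for g, r in rates.items()}

# Invented example counts: 60/100 selected in group_a, 30/100 in group_b
data = {"group_a": (60, 100), "group_b": (30, 100)}
for group, (ratio, flagged) in adverse_impact(data).items():
    print(group, round(ratio, 2), "FLAG" if flagged else "ok")
```

A flagged ratio is a signal for closer statistical and legal review, not proof of unlawful discrimination; many regimes, including the EU AI Act's high-risk obligations, expect this kind of monitoring to be ongoing rather than a one-time check.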

 

Need Help? 

TCWGlobal remains committed to providing compliant staffing and employer of record solutions! We stay ahead of all the evolving AI legislation and are here to help with our comprehensive contingent workforce management solutions.

For more information about opt-outs, accommodations, and the general scope of these laws, check out this chart here, encompassing both AI and biometric data laws around the world.

For U.S.-based companies, don't forget to check out our separate blog post on AI regulations in the United States here. Together, we can embrace the future of AI with confidence and responsibility.

 

Ariana Naranjo is a passionate writer with a keen interest in workforce trends and HR policies. She enjoys turning complex topics into engaging, insightful content for readers.