On 2 February 2025, the first deadlines under the EU AI Act took effect. These included the AI literacy provisions, responsibility for which will likely sit with HR teams, and the ban on prohibited AI systems. What do these and other upcoming changes under the Act mean for in-scope employers?

In this webinar, our multijurisdictional


The European Union’s Corporate Sustainability Reporting Directive (CSRD) requires covered companies to disclose information on the risks and opportunities they see arising from social and environmental issues, and on the impact of their activities on people and the environment.

The CSRD impacts not

Join our AI and Cyber CLE Series

If your last name starts with A-G, you are probably well aware that your (recently extended) MCLE compliance deadline is on March 30, 2025. In addition to the general credit requirement, the state of California requires all attorneys to complete:

  • At least four hours of legal ethics
  • At least two hours on competence issues
  • At least two hours on the elimination of bias in the legal profession and society. Of the two hours, at least one hour must focus on implicit bias and the promotion of bias‑reducing strategies.
  • At least one hour on technology 
  • At least one hour on civility

Continue Reading California’s CLE Compliance Deadline Is Approaching – We can help!

The Corporate Sustainability Reporting Directive represents one of the biggest-ever shifts in reporting requirements for organizations. (For most companies, the first reporting will cover the financial year that starts after January 1, 2025.)

It requires most large organizations to comply with mandatory, detailed sustainability reporting standards, including extensive employment-related disclosures. We are already advising a number of organizations on their sustainability journeys and on the employment-related implications of the CSRD and, if it is not something you are already looking at, it will likely be on your radar very soon.

tl;dr

The employment-related implications of the CSRD mean that organizations will have to provide detailed descriptions of workforce policies; provide information on how the company engages with workers and workers’ representatives; and provide specific metric and target data relating to diversity, wages, compensation, health and safety, and incidents and complaints (e.g., harassment and discrimination complaints), amongst others. Further disclosures are also required relating to workers in the supply chain. What is clear is that reporting will cover some potentially very sensitive topics, requiring sufficient preparation and careful consideration.

Continue Reading The EU Corporate Sustainability Reporting Directive | Employment Law Implications

We are clearly (and thankfully) well past the pandemic, and yet demands for flexible and remote work press on. While the overall global trend of transforming the traditional 9-to-5 work model is consistent, laws governing flexible work arrangements can vary significantly by jurisdiction.

We monitor this space closely (see our previous update here) and advise multinational companies on a multitude of issues bearing on remote, hybrid and flexible arrangements, including health & safety rules, working time regulations, tax and employment benefit issues, cybersecurity and data privacy protections, workforce productivity monitoring and more.

Key recent updates around the globe (organized by region) include:

Asia Pacific

  • Australia: Right to disconnect – Working 9 to [to be determined…]?
    In August 2024, a Full Bench of the Fair Work Commission finalized the new “right to disconnect” model term, which will soon be inserted into all modern awards. Whilst we wait for the Fair Work Commission to issue its guidance on the new workplace right, here’s what you should know, and what we think you should do to prepare for the introduction of the right to disconnect.

Continue Reading HR Trend Watch: Maintaining compliance while unlocking the talent rewards of flexible work arrangements

By and large, HR departments are proving to be ground zero for enterprise adoption of artificial intelligence technologies. AI can be used to collect and analyze data on applicants, productivity, performance, engagement, and risk to company resources. However, with the recent explosion of interest in AI and the avalanche of new AI technologies, the use of AI is garnering greater scrutiny from regulators and, in some cases, employees. At the same time, organizations are eager to adopt more AI internally to capitalize on productivity and efficiency gains, and in-house attorneys are often under pressure from internal clients to quickly review and sign off on new tools, as well as new functionalities within existing tools.

This is especially challenging given the onslaught of new regulations, the patchwork of existing data protection and discrimination laws, and heightened regulatory enforcement. For example, there has been a considerable uptick in European data protection authorities investigating how organizations are deploying workforce AI tools in the monitoring space, including time and activity trackers, video surveillance, network and email monitoring, and GPS tracking. Authorities have issued substantial fines for alleged privacy law violations, including for “unlawfully excessive” or “disproportionate” collection; in one recent case, the French data protection authority imposed a USD 34 million fine related to a multinational e-commerce company’s use of a workplace surveillance system.

The AI regulatory landscape is rapidly evolving, and in most places compliance is still voluntary. However, organizations should build their AI governance programs to include key privacy, data protection, intellectual property, anti-discrimination and other concepts, and a good place to start is with these HR tools, given their widespread use and the increased scrutiny. Legal departments should consider these five key actions:

Continue Reading The Legal Playbook for AI in HR: Five Practical Steps to Help Mitigate Your Risk

Equal pay is an increasingly high-profile issue for employers, with a noticeable rise in equal pay claims in the UK private sector. This was underscored recently by a prominent case estimated to result in around £30 million in backpay.

With the implementation of the EU Pay Transparency Directive on the horizon

SHRM reports that one in four organizations currently use AI to support HR-related activities, with adoption of the technology expanding rapidly. The compliance risks arising from generative AI use are also intensifying, with an increasing number of state and local laws restricting employer use of AI tools in the United States. And not to be outdone, substantial regulation impacting multinational employers’ use of AI is emerging in other parts of the world (e.g., the EU AI Act).

One rapidly growing use case is applicant recruiting and screening, a trend likely to continue given recent increases in remote hiring and hybrid work arrangements. AI tools can streamline talent acquisition tasks by automatically sorting, ranking, and eliminating candidates, as well as potentially drawing from a broader and more diverse pool of candidates.

Employers who use AI tools must comply with significant new (and existing) laws that focus on data protection, privacy, information security, wage and hour, and other issues. The focus of this blog, however, is the legislative efforts in the US to protect against algorithmic bias and discrimination in the workplace stemming from the use of AI tools to either replace or augment traditional HR tasks.

IL Becomes the Second State (After CO) to Target Workplace Algorithmic Discrimination

On August 9, 2024, Gov. Pritzker signed H.B. 3773, making it unlawful for employers to use AI that has the effect of discriminating against employees on the basis of a protected class in recruitment, hiring, promotion, discipline, termination, and other terms, privileges, or conditions of employment. The law, effective January 1, 2026, also prohibits employers from using ZIP codes as a stand-in or proxy for protected classes.

Like Colorado’s law, Illinois’ new law also contains a notice requirement: employers must notify applicants and employees when using AI with respect to “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.”

Continue Reading Illinois Joins Colorado and NYC in Restricting Generative AI in HR (Plus a Quick Survey of the Legal Landscape Across the US and Globally)

In June, we offered our annual Global Employment Law webinar series, sharing expert insights on the business climate in major markets around the world for US multinational employers. In four 60-minute sessions, Baker McKenzie attorneys from over 20 jurisdictions outlined the key new employment law developments and trends that multinationals need to know.

ICYMI: click below to hear updates for the Americas, Asia Pacific, Europe, and the Middle East and Africa, and contact a member of our team for a deeper dive on any of the information discussed.


Session 1: The Americas 

Presenters: Andrew Shaw, Clarissa Lehmen*, Daniela Liévano Bahamón, Benjamin Ho, Liliana Hernandez-Salgado and Matías Gabriel Herrero

Click here to watch the video.

*Trench Rossi Watanabe and Baker McKenzie have executed a strategic cooperation agreement for consulting on foreign law.


Continue Reading Summer Replay: Tune In To Our Global Employment Law Update Series (Recordings Linked!)

On May 17, 2024, Colorado Governor Polis signed the landmark Colorado AI Act (Senate Bill 24-205) into law. Colorado is now the first US state with comprehensive AI regulation, adopting a classification system similar to that of the European Union’s recent AI Act. The law will take effect on February 1, 2026.

The law exempts small employers (fewer than fifty full-time employees) from some of its requirements but otherwise requires companies to take extensive measures to protect Colorado residents against harms such as algorithmic discrimination.

SB 205’s Details

SB 205 requires “developers” and “deployers” of “high-risk artificial intelligence systems” to use “reasonable care” to protect Colorado resident consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.” As written, the law most likely applies both to creators of high-risk AI systems and to employers adopting high-risk AI technologies within their organizations.

Continue Reading From Brussels to Boulder: Colorado Enacts Comprehensive AI Law with Significant Obligations for Employers on the Heels of the EU AI Act