Fast Track to 2026: A 75-Minute Must-Attend Webinar for In-House Counsel

The legal landscape impacting California employers is evolving at breakneck speed. As we race toward 2026, employers need to stay agile, informed, and ready to shift gears. This high-impact session will cover the most pressing workplace trends, risks, and regulatory changes ahead for California employers.

CPPA Adopts Expanded Regulations

Please join us for our next virtual session to discuss the newly adopted CCPA regulations—on September 30 from 12 to 1pm Pacific. In this session, our interdisciplinary team will discuss what the new regulations cover and what companies can do now to comply.

Click here to register.

CLE will be offered.

On July 23, the White House unveiled its much-anticipated AI Action Plan. The Action Plan follows President Trump’s Executive Order 14179 of January 23 on “Removing Barriers to American Leadership in Artificial Intelligence”—which directed the development of the Action Plan within 180 days—and subsequent consultation with stakeholders to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.” This update provides a summary of the Action Plan and key considerations for businesses developing or deploying AI.

The Action Plan is structured around three pillars: (I) Accelerating AI Innovation, (II) Building American AI Infrastructure, and (III) Leading in International AI Diplomacy and Security. Although the Action Plan is not legally binding in itself, each pillar contains a number of policy recommendations and actions, which will subsequently need to be carried out by various government agencies and institutions.

Pillar I – Accelerating AI Innovation

Pillar I focuses on reducing the impact of regulation that may hamper AI development. To this end, the Action Plan instructs the Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” Pillar I emphasizes the need for workplace action that supports the transition to an AI economy, citing AI literacy and skill development among key workforce priorities. The Action Plan also calls for federal- and state-led efforts to evaluate the impact of AI on the labor market. To promote advancements in American AI technologies, Pillar I specifically calls for investment in open-source AI models, support for the preparation of high-quality datasets for use in model training, and acceleration of the federal government’s adoption of AI.

Pillar II – Building American AI Infrastructure

Pillar II of the Action Plan includes actions aimed at strengthening the country’s AI infrastructure. The Action Plan seeks to streamline the expansion of America’s semiconductor manufacturing capabilities by removing extraneous policy requirements for CHIPS-funded semiconductor manufacturing operations. Pillar II also focuses on fortifying AI systems and other critical infrastructure assets against cybersecurity threats. To achieve these goals, the Action Plan proposes various measures to enhance cybersecurity protections, such as sharing AI-security threat intelligence across critical infrastructure sectors and developing standards to facilitate the development of resilient and secure AI systems.

Continue Reading US AI Vision in Action: What Businesses Need to Know About the White House AI Action Plan

As AI adoption accelerates across workplaces, labor organizations around the world are beginning to take notice—and action. The current regulatory focus in the US centers on state-specific laws like those in California, Illinois, Colorado, and New York City, but the labor implications of AI are quickly becoming a front-line issue for unions, potentially signaling a new wave of collective bargaining considerations. Similarly, in Europe, the deployment of certain AI tools within an organization may trigger information, consultation, and—in some European countries—negotiation obligations, and AI tools may only be introduced once that process is completed.

This marks an important inflection point for employers: engaging with employee representatives on AI strategy early can help anticipate employee concerns and reduce friction as new technologies are adopted. Here, we explore how AI is emerging as a key topic in labor relations in the US and Europe and offer practical guidance for employers navigating the evolving intersection of AI, employment law, and collective engagement.

Efforts in the US to Regulate AI’s Impact on Workers

There is no specific US federal law regulating AI in the workplace. An emerging patchwork of state and local legislation (e.g., in Colorado, Illinois, and New York City) addresses the potential for bias and discrimination in AI-based tools—but does not focus on preventing the displacement of employees. In March, New York became the first state to require businesses to disclose AI-related mass layoffs, signaling a growing expectation that employers be transparent about AI’s impact on workers.[1]

Some unions have begun negotiating their own safeguards to address growing concerns about the impact that AI may have on union jobs. For example, in 2023, the Las Vegas Culinary Workers negotiated a collective bargaining agreement with major casinos requiring that the union be given advance notice of, and the opportunity to bargain over, AI implementation. The CBA also provides workers displaced by AI with severance pay, continued benefits, and recall rights.

Similarly, in 2023 both the Writers Guild of America (WGA) and Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) negotiated agreements with the Alliance of Motion Picture and Television Producers (AMPTP) that include safeguards against AI reducing or replacing writers and actors. WGA’s contract requires studios to meet semi-annually with the union to discuss current and future uses of generative AI—giving writers a formal channel to influence how AI is deployed in their industry. The SAG-AFTRA contract requires consent and compensation for the use of digital replicas powered by AI.

Continue Reading Navigating Labor’s Response to AI: Proactive Strategies for Multinational Employers Across the Atlantic

Tune into our annual Global Employment Law webinar series as we bring the world to you.

Our Global Employment Law Fastpass webinar series is here again! Every June, we offer four regionally focused webinars to help you stay up to speed on the latest employment law developments around the world. From tariffs and economic uncertainty to the use

Trade secrets give tech companies a competitive edge in a rapidly evolving landscape, where success depends on the ability to innovate. The unauthorized acquisition, use, or disclosure of trade secrets can result in significant loss and disruption, making it essential for organizations to have robust safeguards in place to protect their trade secrets. Here, we explore clear steps organizations can take to manage and mitigate risks with a focus on trade secret identification and the role of employees in trade secret protection.

Mission Critical: Protecting Tech Trade Secrets

Fast-paced developments

The technology sector continues to experience significant transformation driven by emerging technologies and advancements in AI. Companies are investing heavily in developing new capabilities, services, and products. Rapid innovation and the desire to be first to market have made trade secrets an increasingly preferred method of protection over other intellectual property regimes, such as patents, which can be costly and raise complex timing considerations. Trade secrets can protect algorithms, processes, datasets, customer lists, and more. The trade secrets of companies at the forefront of AI and other tech innovation are highly valuable.

Expanding threat landscape 

Threat actors are leveraging tech advancements to steal vast amounts of company information through more sophisticated and efficient attacks. Heightened internet usage increases hacking risks from competitors, foreign governments and hacktivist groups.

AI development currently requires high computational power, concentrating progress in large tech companies. However, the competitive landscape is changing, with many start-ups and established companies building applications within the AI technology stack, alongside new players entering the AI frontier race. The demand for tech talent with the skills to drive innovation is at an all-time high, making trade secret protection critical.

In the past few years, tech companies have also been especially susceptible to the public disclosure of confidential internal documents by employee activists motivated by non-monetary factors.

Legal remedies

Legal frameworks for protecting trade secrets have become more robust and varied across jurisdictions. Injunctive relief (to prevent use of trade secrets and reclaim them) is an essential tool in trade secret breach cases. Victims may also pursue financial remedies, such as damages.

Continue Reading Top Strategies to Safeguard Tech Trade Secrets

On 2 February 2025, the first deadlines under the EU AI Act took effect. These included the AI literacy provisions (responsibility for which will likely sit with HR teams) and the ban on prohibited AI systems. What do these and other upcoming changes under the Act mean for in-scope employers?

In this webinar, our multijurisdictional

Shortly after taking office, President Trump rescinded Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the US, identifying security, privacy and discrimination as particular areas of concern. Trump signed his own executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” directing his advisers to coordinate with the heads of federal agencies and departments, among others, to develop an “action plan” to “sustain and enhance America’s global AI dominance” within 180 days.

While we wait to see if and how the federal government intends to combat potential algorithmic discrimination and bias in artificial intelligence platforms and systems, a patchwork of state and local laws is emerging. Colorado’s AI Act will soon require developers and deployers of high-risk AI systems to protect against algorithmic discrimination. Similarly, New York City’s Local Law 144 imposes strict requirements on employers that use automated employment decision tools, and Illinois’ H.B. 3773 prohibits employers from using AI to engage in unlawful discrimination in recruitment and other employment decisions and requires employers to notify applicants and employees of the use of AI in employment decisions. While well-intentioned, these regulations come with substantial new, and sometimes vague, obligations for covered employers.

California is likely to add to the patchwork of AI regulation in 2025 in two significant ways. First, California Assemblymember Rebecca Bauer-Kahan, Chair of the Assembly Privacy and Consumer Protection Committee, plans to reintroduce a bill to protect against algorithmic discrimination by imposing extensive risk mitigation measures on covered entities. Second, the California Privacy Protection Agency’s ongoing rulemaking under the California Consumer Privacy Act will likely result in regulations restricting the use of automated decision-making technology by imposing requirements to mitigate algorithmic discrimination.

Continue Reading Passage of Reintroduced California AI Bill Would Result In Onerous New Compliance Obligations For Covered Employers

  • Key laws and regulations, including recent changes and expected developments over the next year
  • Foundational data privacy obligations including information and notification requirements, data subject rights, accountability and governance measures, and responsibilities of data controllers and