Richard Liu

AI Revolution: Balancing Innovation with Worker Rights and Intellectual Property

Updated: Apr 26

President Biden's Executive Order on Artificial Intelligence (AI) establishes standards for AI safety and security, prioritizing Americans' privacy, advancing equity and civil rights, and supporting consumers and workers. It also aims to bolster innovation and competition, reinforcing America's position in the global AI arena.

 

A key focus of the directive is the impact of AI on employment, particularly concerns over technology-induced job displacement. It responds to growing unease about AI-driven workforce disruption, as evidenced by recent widespread strikes in various industries, including Hollywood.



In parallel, unions representing workers ranging from Las Vegas casino staff to Hollywood writers and actors have recently negotiated labor contracts that aim to control the use of AI in the workplace.


Here are the key points:

  • The Writers Guild of America (WGA)'s agreement ensures writers' autonomy over AI usage. Human writers retain their credits and compensation even if AI is involved at some stage of the writing process.

  • The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA)’s agreement emphasizes the need for consent and fair compensation for actors if their digital likenesses are reproduced using AI.

  • The Culinary Workers Union's agreement requires employers to give six months' notice before implementing AI that could disrupt jobs, along with compensation for displaced workers.


These union actions, alongside government policy, demonstrate a holistic strategy to responsibly incorporate AI into the American workforce. This approach balances AI's benefits with vital worker rights and societal interests.

 

Further highlighting the complexities of AI, a recent legal case against Meta Platforms Inc. in California federal court underscores AI's intellectual property challenges. The case revolved around Meta's use of written works for AI training, spotlighting new copyright law complexities.

 

The primary issue was whether AI models derived from these works were "infringing derivative works." The court ruled that these models don't constitute direct adaptations of the original texts, dismissing most copyright claims. However, the lawsuit proceeds on limited grounds, focusing on Meta's unauthorized use of the authors' works.

 



What should companies do going forward?

Looking forward, companies must adopt several key legal strategies to navigate this evolving AI terrain:


  • Regulatory Compliance and Anticipation: Proactively align with current and anticipated AI regulations, focusing on privacy, data security, and ethical AI use. This includes understanding international laws if operating globally.

  • Intellectual Property Management: Pay close attention to intellectual property rights in AI, both in terms of using external data and protecting proprietary AI innovations. This involves staying updated with evolving IP laws related to AI-generated content.

  • Risk Assessment and Liability Management: Regularly conduct risk assessments of AI systems, identifying potential legal liabilities, especially in areas like consumer protection, employment law, and contractual obligations.

  • Workforce and Employment Law Adaptation: Collaborate with employees in developing policies for AI integration in the workplace. This includes addressing potential job displacements and redefining roles and responsibilities in the context of AI.


These strategies will help companies navigate the complex legal challenges of AI integration, ensuring compliance and mitigating risks while leveraging the transformative potential of AI technology.



 
