
Humans Win The First Big AI Labor Dispute

PLUS: Sample ChatGPT prompt to generate litigation content

Welcome, fellow attorneys!

This week's newsletter covers key AI developments across multiple sectors. In entertainment, Hollywood screenwriters secured protections limiting AI's use in film and TV writing. A South Korean court sentenced a man for using AI to create illegal content, setting a global precedent. As the EU finalizes AI regulations, disagreements persist around classifying high-risk systems. We also provide sample ChatGPT prompts for attorneys to draft litigation materials, along with a warning to verify all AI outputs.

On the docket today:

  • Humans Win The First Big AI Labor Dispute

  • South Korean Court Sets Precedent in Sentencing Man for Using AI to Generate Child Sexual Abuse Material

  • Differing Perspectives Emerge on EU AI Act's High-Risk Classification

  • ChatGPT Prompt To Help You Draft Litigation Content

  • Hilarious Meme


Image created using DALL-E.

After a prolonged strike, Hollywood screenwriters have secured key protections against the use of AI in film and TV writing. The tentative deal establishes guardrails, rather than an outright ban, on deploying AI in the creative process.

Under the agreement, studios cannot mandate writers use AI tools or provide them AI-generated material without disclosure. AI cannot be credited as a writer or used to undermine compensation. The deal embraces AI as an optional resource, while empowering human writers to control its integration.

This marks one of the first major union contracts to address AI's impacts on creative industries. It may become a template as technology proliferates. Actors continue to strike over similar AI concerns, including use of digital likenesses.

The writers' deal balances welcoming AI with upholding creative roles. It does not prohibit using AI in writing or training systems on existing scripts. But it grants oversight to writers regarding how their work is utilized. As rapid AI advancements continue, negotiations will remain ongoing.

For now, the studios acceded to reasonable limitations, avoiding a complete standstill. This outcome suggests AI can augment human skills when thoughtfully implemented, not just displace jobs. As AI spreads across sectors, this coexistence model warrants consideration. Though further AI challenges lie ahead, the deal is being hailed as a milestone in protecting creative livelihoods.

Image created using DALL-E.

The Busan District Court in South Korea recently handed down a prison sentence to an unnamed man in his 40s for using AI to generate sexually exploitative images of children. This sets a precedent as global courts confront AI-enabled abusive content.

The Busan District Prosecutor's Office stated the man created approximately 360 AI images in April, which were confiscated by police. Prosecutors successfully argued these realistically simulated depictions violate protections for minors under the Act on the Protection of Children and Youth, despite not showing actual children.

This sentencing comes amid worldwide efforts to regulate AI systems and curb nonconsensual deepfake pornography disproportionately harming women and girls. Major platforms have tightened policies, but risks remain as capabilities improve.

Ethical oversight and accountability measures, without prohibiting beneficial applications, will be critical to preventing AI abuses, especially against vulnerable groups. As AI progresses, legal frameworks may struggle to keep pace, but diligent enforcement of existing protections, as demonstrated in this case, can steer innovation toward positive impacts.

Balanced regulation and vigorous prosecution can deter violations until comprehensive frameworks develop. By imposing consequences for malicious uses, this sentence signals that AI must uphold human dignity, rights, and safety.

Image created using DALL-E.

As EU negotiations on regulating AI near conclusion, contrasting views have emerged between digital rights groups like the Center for Democracy and Technology (CDT) and tech companies/industry advocates on classifying high-risk systems.

Digital rights groups argue for broad high-risk categorization to uphold protections. Recently leaked compromise language from the European Commission would exempt certain uses, such as recruitment screening software, from heightened obligations even in high-risk sectors. These groups contend the exemptions undermine crucial safeguards, since relying on provider self-assessments for oversight is unrealistic. They warn that exceptions could swallow the rules if accessory or preparatory systems are excluded, and, given algorithms' power to influence consequential decisions, they urge comprehensive impact assessments.

Tech companies and industry advocates counter that exemptions provide clarity for lower-complexity or human-centric tools. In their view, overly rigid categorization may stifle innovation in applying AI judiciously, while self-assessments paired with documentation enable proportional, adaptable regulation.

Balancing innovation and accountability remains contentious. But all agree effective, ethical oversight is essential as capabilities advance. While divisions persist on specifics, the act's groundbreaking scope signals Europe's leadership in steering AI toward societal good. Ongoing diligence and cooperation will be vital to maximize benefits while upholding rights. As pioneering legislation, the AI Act's framework can inform responsible governance worldwide.

Warning: Use these prompts at your own risk! Always read and verify the accuracy of the outputs!

  • For drafting litigation content

    • Initial prompt: “You are preparing discovery interrogatories in a personal injury lawsuit that involved an interstate highway collision of multiple vehicles including cars and tractor-trailers. Draft a comprehensive list of detailed interrogatory questions structured to better understand the chain of events of the entire collision, all the parties involved, and issues of liability due to foreseen and unforeseen circumstances.”

    • Follow-up prompt: “Generate more questions related to weather conditions and the driving and medical histories of those involved.”

Meme Of The Week:

That’s all for today!
