
Google Sued Over AI Copyright Issues

Welcome to this week’s deep dive into the evolving world of AI regulation and its implications. From Florida’s groundbreaking law mandating AI disclaimers in political ads, to notable progress under President Biden’s AI Executive Order, to intense debate within the EU over how to handle open-source AI models, our newsletter delivers essential insights. We also cover a significant legal challenge facing Google, as artists assert their rights over copyrighted content used in AI training. Stay informed and ahead of the curve as we navigate these pivotal developments together. 🌐👨‍⚖️🤖

On the docket today:

  • A class-action copyright lawsuit against Google over Imagen’s training data
  • Florida’s new disclaimer requirements for AI-generated political ads
  • A six-month progress report on President Biden’s AI Executive Order
  • The EU’s debate over open-source exemptions in the AI Act

Image created using DALL-E 3.

A coalition of visual artists has initiated a class-action lawsuit against Google, alleging that the tech giant's AI image generator, Imagen, unlawfully uses copyrighted works in its training datasets. The case, filed in California, challenges Google's practices around the collection and use of vast amounts of artistic content without permission.

Key Takeaways:

  • Plaintiff Details: The lawsuit was filed by four artists (a photographer and three illustrators) who seek to represent all copyright owners affected by Google's training practices.

  • Technology in Question: Imagen, Google's text-to-image AI model, reportedly generates images by leveraging a dataset known as LAION-400M, which includes copyrighted material gathered without the artists' consent (a sketch of what that dataset actually contains follows this list).

  • Claims Against Google: The suit claims that Imagen's training process involves direct copying of protected works, making the AI model itself an infringing derivative work.

  • Google's Defense: Google contends that its AI models are trained primarily with publicly available data, and plans to challenge the artists' claims by asserting that such usage falls under fair use provisions.

  • Related Litigation: This lawsuit follows similar actions taken by artists against other AI companies, underscoring a growing trend of legal challenges related to AI and copyright.

  • Strategic Non-Disclosure: The complaint also highlights Google's deliberate omission of detailed dataset descriptions in its latest release, suggesting an attempt to mitigate legal risks highlighted by previous lawsuits.

  • Potential Impact: The case could set significant precedents regarding copyright law and AI, influencing how tech companies approach the training of generative AI models.
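For context on what training on LAION-400M involves: the dataset does not ship images. It is a large table of image URLs paired with scraped alt-text captions, and training requires downloading (copying) each referenced work. Below is a minimal sketch of inspecting its published metadata, assuming the pandas library and a locally downloaded shard; the file path is hypothetical, and the column names follow the dataset's public release:

```python
import pandas as pd

# LAION-400M is distributed as parquet metadata shards; each row points to an
# image hosted somewhere on the public web rather than a locally stored copy.
# The shard path here is hypothetical; substitute a real downloaded file.
meta = pd.read_parquet("laion400m-meta/part-00000.parquet")

# The public release documents columns including the source URL, the scraped
# alt-text caption, pixel dimensions, and a CLIP image-text similarity score.
print(meta[["URL", "TEXT", "WIDTH", "HEIGHT", "similarity"]].head())

# The copyright question at the heart of the suit: many of these URLs resolve
# to protected works, which must be fetched and copied before training begins.
print(f"{len(meta):,} image-text pairs in this shard")
```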

Florida Mandates AI Disclaimers in Political Ads

Image created using DALL-E 3.

Florida has enacted new legislation to address the use of artificial intelligence in political advertising. The law mandates clear disclosure when AI-generated content is used, reflecting growing concerns about transparency and the potential for misinformation.

Key Takeaways:

  • Political ads using AI must include the disclaimer: “Created in whole or in part with the use of generative artificial intelligence (AI).”

  • Disclaimer requirements vary by media (a quick sizing sketch for the video rule follows this list):
      (1) Print: bold, minimum 12-point font.
      (2) Video or TV: visible throughout the ad and occupying at least 4% of the vertical picture height.
      (3) Internet: clearly readable without requiring user action.
      (4) Audio: audibly clear and at least 3 seconds long.

  • Non-compliance with disclaimer rules is a first-degree misdemeanor.

  • Provisions for expedited legal complaints and hearings are included.
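To make the video rule concrete, here is a quick sketch of the minimum on-screen disclaimer height implied by the 4%-of-vertical-picture-height requirement at common resolutions. The helper function is my own illustration of the rule above, not an official compliance tool:

```python
# Minimum disclaimer height under the 4%-of-vertical-picture-height rule,
# worked out for common video resolutions. Illustrative only.
def min_disclaimer_height_px(frame_height_px: int, fraction: float = 0.04) -> float:
    """Smallest allowed disclaimer height (in pixels) for a given video frame."""
    return frame_height_px * fraction

for name, height in [("720p", 720), ("1080p", 1080), ("4K UHD", 2160)]:
    print(f"{name}: disclaimer must be at least "
          f"{min_disclaimer_height_px(height):.1f} px tall")
# 720p:   28.8 px
# 1080p:  43.2 px
# 4K UHD: 86.4 px
```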

Biden's AI Executive Order: Six Months of Progress

Image created using DALL-E 3.

Six months following President Biden's landmark Executive Order aimed at advancing and securing artificial intelligence (AI), significant strides have been made across various federal agencies. These efforts are designed to harness AI's potential while addressing its risks, ensuring equitable benefits, and maintaining American leadership in AI innovation.

Key Takeaways:

  • Risk Management in AI: Agencies have addressed safety and security risks across several areas, including critical infrastructure and biological materials, and established a framework for screening synthetic nucleic acids.

  • Guidance and Standards Development: Draft documents on managing generative AI risks and guidelines for critical infrastructure have been released for public comment, with final versions to offer additional guidance based on NIST's AI Risk Management Framework.

  • AI in Workforce and Civil Rights: New principles for AI use in the workplace have been developed, alongside guidelines to ensure non-discriminatory AI applications in employment, housing, and health sectors.

  • Advancing AI for Public Good: Initiatives include DOE funding for AI in science, pilots for energy management, and a sustained effort to assess AI's impact on the electric grid.

  • AI Talent Development in Government: Over 150 AI professionals have been hired to support AI integration in government operations, with further plans to expand this workforce.

EU Debates Open-Source Exemptions in the AI Act

Image created using DALL-E 3.

The recent developments surrounding the European Union's Artificial Intelligence Act have sparked a debate on the treatment of open-source large language models (LLMs) and their potential misuse in spreading disinformation. The legislation, nearing finalization in Brussels, grants broad exemptions to open-source AI models, which could inadvertently facilitate the creation and spread of harmful content by malicious actors. This omission highlights a critical gap in addressing the dual nature of AI technologies as both enablers of innovation and potential tools for misuse.

Key Takeaways:

  • Open-Source vs. Proprietary AI Models: Open-source LLMs allow users to modify core programming, unlike proprietary models like OpenAI's ChatGPT or Google's Gemini, which restrict user access to prevent the generation of harmful content. This openness increases the risk of misuse.

  • Potential for Misuse: Tests on popular open-source models from the Hugging Face platform have shown that these models can, and often do, generate harmful content such as hate speech and misinformation with disturbing effectiveness.

  • Legislative Gaps: The EU's AI Act currently exempts most open-source AI models used non-commercially from stringent regulations. This exemption fails to account for the high potential for harm, placing undue reliance on developers to follow transparency and documentation guidelines voluntarily.

  • Regulatory Recommendations: Experts suggest that open-source models should be classified as high-risk AI when they pose significant threats to public health, safety, fundamental rights, or democracy, subjecting them to the same strict regulations as commercial AI models (a toy sketch of this decision rule follows this list).

  • Urgent Need for Closing Loopholes: The EU is urged to amend the AI Act to close loopholes that allow open-source AI to be exploited for disseminating disinformation. Ensuring that open-source AI models are subject to rigorous scrutiny and compliance requirements is essential to safeguard democratic values and public safety.
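To make the proposed test concrete, below is a toy sketch of the decision rule the experts describe: an open-source model would lose its exemption and be treated as high-risk whenever it poses a significant threat in any of the enumerated areas. The category names and function are my own illustrative paraphrase, not language from the Act:

```python
# Toy illustration of the experts' proposal: open-source models lose their
# blanket exemption and are treated as high-risk when they pose a significant
# threat in any enumerated area. Categories and logic paraphrase the
# recommendation in this newsletter, not the Act's actual legal test.
HIGH_RISK_AREAS = {"public_health", "safety", "fundamental_rights", "democracy"}

def classify(is_open_source: bool, significant_threats: set[str]) -> str:
    """Return the regulatory tier a model would fall into under the proposal."""
    if significant_threats & HIGH_RISK_AREAS:
        return "high-risk (full obligations, same as commercial models)"
    if is_open_source:
        return "exempt (current non-commercial open-source carve-out)"
    return "standard obligations"

print(classify(True, {"democracy"}))  # high-risk despite being open source
print(classify(True, set()))          # keeps the exemption
```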

@lawyerswhowine

That’s all for today!

Catch y'all on LinkedIn