
EU's Evolving AI Rulebook: Balancing Innovation with Ethical Oversight

PLUS: ChatGPT Prompt for Drafting a Standard Operating Procedure

What up, humans!

This week, we're diving deep into the AI realm, from Google's sassy defense against data-scraping drama to the EU's evolving tiered approach to foundation models. We'll also explore a Fugees star's encore plea over an AI-scripted defense, and some courtroom citations that have us saying, "AI, caramba!" Plus, tech titans are playing knight-in-shining-armor, vowing to shield AI users from copyright clashes. As the EU charts its course in AI governance, balancing innovation with ethical oversight, we're here to keep you updated on every twist and turn. Ready for the legal rollercoaster? Let's roll! 🎢🤖📜

On the docket today:

  • EU's Evolving AI Rulebook: Balancing Innovation with Ethical Oversight

  • Google Defends AI Data-Scraping Practices, Files Motion to Dismiss

  • Fugees Star Seeks Retrial Over AI-Drafted Defense

  • AI Blamed for Fake L.A. Court Citations

  • Google Offers AI Users Copyright Defense

  • ChatGPT Prompt for Drafting a Standard Operating Procedure

  • Hilarious Meme


Image created using DALL-E.

The EU is shaping its stance on AI with potential concessions in its upcoming AI rulebook. Key highlights:

  1. Foundation Models: The EU is focusing on regulating AI models without a specific purpose, especially after the rise of models like ChatGPT. The European Parliament suggests a tiered approach for such models, which the Council supports. Foundation models would be defined as models capable of performing a wide range of tasks, and they would carry transparency obligations, especially in documenting their training process.

  2. Very Capable Foundation Models: A new category for models whose capabilities surpass current standards and might not be fully understood. These models would need regular checks from external teams and independent auditors.

  3. General-Purpose AI Systems: Systems built on foundation models and used at a large scale. They'll have obligations like regular vetting and risk mitigation. They should also declare if they can be used for high-risk applications.

  4. Copyright Issues: AI providers should ensure their systems are trained following EU copyright laws and that generative AI outputs are identifiable as AI-generated.

  5. Governance: An AI Office is proposed to oversee rules on foundation models and large-scale general-purpose AI systems.

  6. Biometric Identification: A contentious issue. The Parliament wants a ban on real-time biometric systems by law enforcement, but governments seek exceptions.

  7. Banned Practices: The Parliament wants to ban emotion recognition in certain areas, but the presidency seeks exceptions, especially in law enforcement.

  8. Law Enforcement: The Council suggests exceptions in law enforcement, with the presidency proposing certain limitations.

  9. National Security: While national security matters were initially excluded from the AI Act, a balanced compromise is now under consideration.

  10. High-Risk Use Cases: The presidency aims to categorize emotion recognition and biometric categorization as high-risk rather than banning them outright.

This evolving EU approach to AI seeks a balance between innovation and ethical considerations, with the final rulebook eagerly anticipated.

Image created using DALL-E.

On July 11, 2023, a class action lawsuit was filed against Google in a California federal court, alleging that its data-scraping practices for generative AI training infringe on privacy and property rights. Responding to these allegations, Google filed a motion to dismiss the lawsuit on October 16, 2023.

Central to Google's defense is the assertion that the use of public data, such as that for its chatbot Bard, is indispensable for AI development. Google warns that the lawsuit not only jeopardizes its services but also threatens the foundational concept of generative AI.

Google firmly states: "Using publicly available information to learn is not stealing." It further argues that such practices do not constitute invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.

The lawsuit, initiated by eight individuals, accuses Google of misappropriating content from social media and its own platforms for AI training. Challenging the lawsuit's relevance, Google emphasizes that the complaint largely leans on "irrelevant conduct by third parties and doomsday predictions about AI." Google also highlights the lawsuit's ambiguity concerning the specific personal data allegedly collected and the nature of the purported harm to the plaintiffs.

In response to claims of using a book by one of the plaintiffs for AI training, Google invokes the fair use doctrine of copyright law as its defense. Through its motion to dismiss, Google underscores its conviction in the legitimacy of its AI training methods and the pivotal role of public data in AI advancement.

Image created using DALL-E.

In a groundbreaking legal twist, Fugees star Pras Michel, convicted on charges including campaign finance violations and acting as an unregistered foreign agent, is seeking a retrial. Why? He alleges that his defense attorney, David Kenner, leaned on generative AI to craft the closing argument. Michel's new legal team from ArentFox Schiff contends that this AI-driven approach resulted in a "deficient" closing, which overlooked key arguments and muddled the defense's stance.

But there's more to this digital drama. Kenner, post-trial, seemed to champion the AI tool as a revolutionary step in legal proceedings. However, suspicions arose when it was revealed that Kenner and co-counsel Alon Israely might have undisclosed financial ties to a tech partner of the AI program, EyeLevel.AI. This potential conflict of interest suggests that Michel's high-profile trial might have been used as a promotional platform for the AI tool.

This case not only underscores the ethical quandaries of AI's role in legal defense but also raises questions about its efficacy. While AI's promise of streamlining complex legal work is alluring, its debut in the courtroom, as alleged by Michel's team, might have been more of a misfire than a milestone.

Image created using DALL-E.

In recent events at the L.A. Superior Court, Judge Ian Fusselman identified fabricated legal citations in a filing. Two of the cases referenced were non-existent, while others were unrelated to the subject matter, eviction law. The perplexing discovery led to the statement, "This was an entire body of law that was fabricated." While the exact preparation method of the filing remains unclear, six legal experts suspect the misuse of a generative AI program.

The spotlight is on Dennis Block, who operates what he claims is California's "leading eviction law firm." Despite the firm's reputation, a recent eviction case filing under Block's signature was flagged for containing "inaccurate and false statements." The document, though well-formatted and seemingly credible, was found to have these fabricated references.

Generative AI programs, like ChatGPT, are gaining attention in the legal realm. While they offer potential cost-saving benefits, experts emphasize the importance of thoroughly reviewing AI-generated work, labeling the failure to do so as both risky and unethical. Russell Korobkin, a law professor at UCLA, believes it's highly probable that generative AI was employed in drafting the questionable brief, highlighting the need for caution and scrutiny when integrating AI tools in legal practices.

Image created using DALL-E.

Google has pledged to shield users of its generative artificial intelligence (AI) systems within Google Cloud and Workspace platforms from allegations of intellectual property infringement. This commitment mirrors similar assurances from other tech giants like Microsoft and Adobe. Google's protection encompasses both the training data and the outputs produced by its foundational models. Specifically, if a user faces legal challenges due to Google's training data containing copyrighted material, Google will assume the legal responsibility. Additionally, users are protected if the results they derive from Google's foundation models resemble existing copyrighted works, provided they did not intentionally infringe upon others' rights.

The company has detailed seven products under this protective umbrella, notably excluding Google's Bard search tool. This move comes amid growing concerns about the potential copyright pitfalls of generative AI. Several lawsuits have been filed against tech companies, alleging copyright infringement through AI training. Google itself has been a target of a proposed class action lawsuit for purportedly using copyrighted data for AI training. As generative AI continues to evolve, its intersection with copyright law becomes a focal point, emphasizing the need for clear guidelines and protections.

ChatGPT Prompt for Drafting a Standard Operating Procedure

WARNING: Use these prompts at your own risk! Always, always, always read and verify the accuracy of the outputs!

ChatGPT prompt (customize as necessary):

Please create a detailed and comprehensive document outlining the step-by-step instructions for completing a specific task or process within the organization. The document should include all necessary information, such as safety protocols, equipment and materials needed, and any relevant policies or regulations that must be followed. The goal is to create a standard operating procedure that can be easily understood and followed by all members of the team, ultimately improving efficiency and ensuring consistency in the execution of the task or process.
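If you'd rather run this prompt programmatically than paste it into the ChatGPT interface, here's a minimal sketch using the OpenAI Python SDK (v1+). The helper function, the system message, and the model name are our own illustrative assumptions, not part of the prompt above; swap in whatever model and framing you actually use, and remember the warning: verify every output.

```python
# Minimal sketch: sending the SOP prompt to a chat model via the OpenAI API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.

SOP_PROMPT = (
    "Please create a detailed and comprehensive document outlining the "
    "step-by-step instructions for completing a specific task or process "
    "within the organization. The document should include all necessary "
    "information, such as safety protocols, equipment and materials needed, "
    "and any relevant policies or regulations that must be followed."
)


def build_messages(task_description: str) -> list[dict]:
    """Combine the reusable SOP prompt with a description of the task."""
    return [
        # Hypothetical system framing -- adjust to taste.
        {"role": "system", "content": "You are an experienced operations writer."},
        {"role": "user", "content": f"{SOP_PROMPT}\n\nTask or process: {task_description}"},
    ]


if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=build_messages("Onboarding a new employee"),
    )
    print(resp.choices[0].message.content)
```

The prompt lives in one constant so the same template can be reused across tasks, with only the task description changing per call.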

Meme Of The Week:

That’s all for today!

What'd you think of today's email?
