
European Union Eyes Stricter AI Rules

PLUS: ChatGPT Prompt for Legal Research

Welcome, fellow attorneys!

In this week's newsletter, we dive into the rapidly evolving global landscape of AI governance. From new regulatory proposals in the EU to contrasting approaches in the US, UK, and beyond, we'll summarize the key developments and debates shaping how nations oversee these transformative technologies. Let's explore where major countries stand on balancing innovation with responsible regulation.

On the docket today:

  • European Union Eyes Stricter AI Rules

  • AI Regulators Are Coming!

  • Chinese Regulations Give Edge to AI Industry Over Western Counterparts

  • A Quick Rundown of AI Regulation: EU Leads The Way, While US and UK Urge Restraint

  • ChatGPT Prompt for Legal Research

  • Hilarious Meme


Image created using DALL-E.

The EU is considering additional regulations on large AI models through its pending AI Act. EU representatives are discussing a plan to address concerns over major language models like OpenAI's GPT-4 and Meta's Llama 2 while avoiding overburdening startups. The approach would mirror the EU's Digital Services Act, which requires all platforms to protect users' data and imposes extra rules on the largest tech firms.

The proposed LLM regulations remain at an early stage, according to anonymous sources. The EU's AI Act aims to be one of the first mandatory AI regulations enacted by a Western government. Under it, companies would need to assess risks and label AI-generated content. Certain uses, like biometric surveillance, would be banned. The rules are not yet finalized and face potential objections from member states.

In contrast, the US has warned that the regulations could favor large tech companies and hurt smaller ones by increasing costs. Washington cautioned that productivity could suffer and that jobs and investment could leave Europe. China has already implemented its own AI rules, including ones covering algorithms and generative models.

In summary, the EU is considering tiered AI regulations to balance innovation and oversight, but it faces criticism over the potential impacts. With the Act's finalization targeted by year's end, key issues remain around regulating models like GPT-4. The rules would require risk assessments and content labeling from all AI firms, with tighter controls on the largest systems.

Image created using DALL-E. Apparently, this is what AI regulators will look like! 😆 

As AI law expert Barry Scannel of William Fry LLP explains, “AI regulators are coming. In fact, in some places they are already here: Spain, for example, recently took steps to create the Spanish Agency for the Supervision of Artificial Intelligence to actively guide the country’s AI ecosystem.

Article 59 of the EU’s AI Act stands out for its specific focus on “National Competent Authorities”, the AI regulators who will be central to how AI is governed within individual EU member states. The importance of Article 59 lies in its multifaceted approach to governance, mandating these authorities to ensure the ethical and safe deployment of AI technologies while also serving as advisory bodies.

Article 59 requires each member state to designate an authority responsible for the implementation and application of AI regulation. This authority must be organized to safeguard its objectivity and impartiality, thereby bolstering its credibility and the public’s trust. Moreover, the provision explicitly calls for these authorities to be equipped with sufficient resources - financial, technical, and human. Staffing these organizations with experts who have a diverse range of competencies, from deep understanding of AI technologies to expertise in data protection, cybersecurity, and even fundamental rights, is non-negotiable according to the Act. Where member states will actually find these individuals with all this expertise is another matter.”

Why this matters: The new EU national AI regulators will strongly influence compliance, enforcement, and responsible AI deployment across borders. Engaging these emerging oversight bodies proactively can benefit attorneys through specialized expertise, streamlined approvals, jurisdictional clarity, and advising clients on public trust factors.

China's new interim measures regulating generative AI appear designed to aid rather than encumber domestic firms, according to Angela Huyue Zhang. The final rules significantly watered down worrisome provisions from earlier drafts, such as the requirement that training data and outputs be fully accurate. While securing approval from authorities does incur costs, tech giants seem unlikely to be deterred given their resources and capacity for compliance. Approvals are already being granted, as with Baidu's new chatbot. The provisions also actively encourage public-private collaboration on AI innovation.

In contrast, Zhang argues EU and US firms face growing legal hurdles. The EU's upcoming AI Act may mandate detailed copyright summaries for training data, exposing developers to lawsuits. US firms already battle lawsuits on issues from copyright to discrimination, with OpenAI negotiating content deals to stem litigation. Lawsuits can impose heavy fines and force operational changes.

Moreover, Chinese regulators and courts seem inclined to take a lenient approach to AI legal violations, following government directives to aid growth. While this risks entrenching large firms and stifling innovation long-term, it can boost Chinese AI in the near future. In summary, China's regulatory stance may confer an advantage on its AI industry compared with its more encumbered Western counterparts.

Image created using DALL-E.

The EU aims to be the first jurisdiction to enact oversight of generative AI through its proposed AI Act. The act would create a framework to categorize AI systems by potential risk level, ranging from minimal to high. Uses deemed high-risk, such as biometric identification, certain education and workforce tools, and AI applied in law enforcement, would face mandatory risk management procedures and assessments before deployment. Moreover, certain AI applications would be outright prohibited under the act, including facial recognition, social credit scoring systems, and manipulative AI, such as that used in toys. For generative AI specifically, the act's rules would require disclosing what content is artificially generated and revealing training data sources. Companies would need to show steps were taken to mitigate risks.

In contrast, the US currently lags behind Europe in formalizing AI regulation, though lawmakers pledge that bipartisan efforts are underway. Potential changes to copyright law are being debated regarding usage rights for training data. While no comprehensive regulatory regime has yet emerged, voices in government increasingly call for guardrails on AI.

Departing from this stricter governance approach, the UK has articulated ambitions to become an "AI superpower" and believes early regulation could hamper innovation. It favors ongoing assessment of evolving AI impacts before imposing specific rules. Critics contend this reactive stance allows harms to manifest unchecked.

Similar to the EU, Brazil released draft legislation that would categorize AI by risk levels and impose strict liability on system creators. However, Brazil has yet to enact its proposal. China has taken steps to regulate algorithms and deepfakes, but notably seeks to mandate "true and accurate" training data that could severely restrict commercial chatbots.

In summary, the EU and like-minded countries spearhead more assertive oversight, while the US and UK urge caution over proactive restrictions. But pressure grows worldwide for regulators to address AI's societal impacts. The ultimate regulatory frameworks remain in flux.

ChatGPT Prompt for Legal Research

Warning: Use these prompts at your own risk! Always read and verify the accuracy of the outputs!

Act as a legal expert and conduct a comparative analysis on [two legal theories or principles]. In this analysis, focus on detailing the similarities, differences, and real-world applications of the chosen theories or principles. Your response should be clear, beginner-friendly, and free of jargon. Please provide a moderate-length response in a format that includes paragraphs and bullets.

Additional Context for Prompt

To further enhance the response, consider including the following additional topics:

  • Influential legal scholars or thinkers associated with each theory or principle

  • Comparative analysis of the strengths and weaknesses of each theory or principle

  • Practical implications and considerations when applying each theory or principle in legal practice

  • Potential future developments or trends related to the two theories or principles

  • Comparative analysis of the impact of the theories or principles in different legal jurisdictions or systems

Follow Up Questions to Guide Conversation

  • Ask after ChatGPT's first response

    • Can you provide specific examples of how each theory or principle has been applied in real-world legal cases?

    • How have these theories or principles influenced the development of legal systems over time?

    • What are some common misconceptions or misunderstandings about these theories or principles?

    • Are there any notable criticisms or alternative perspectives on these theories or principles that should be considered?

    • How do these theories or principles interact with other legal doctrines or principles?

    • Can you explain the practical implications of these theories or principles for legal professionals?

    • What are some potential challenges or limitations in applying these theories or principles in practice?

    • How have these theories or principles evolved or been interpreted differently in different legal jurisdictions?

    • Are there any ongoing debates or discussions about the relevance or applicability of these theories or principles in modern legal contexts?

    • Can you provide any insights into the future direction or potential developments of these theories or principles?
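For readers who would rather run this prompt programmatically than paste it into the ChatGPT interface, below is a minimal sketch using OpenAI's official Python client. The model name ("gpt-4o"), the helper function compare_theories, and the example topics are illustrative assumptions, not part of the prompt above; as always, read and verify the output before relying on it.

    # Minimal sketch: sending the legal-research prompt through OpenAI's Python client.
    # Requires the "openai" package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    PROMPT_TEMPLATE = (
        "Act as a legal expert and conduct a comparative analysis on {topics}. "
        "In this analysis, focus on detailing the similarities, differences, and "
        "real-world applications of the chosen theories or principles. Your response "
        "should be clear, beginner-friendly, and free of jargon. Please provide a "
        "moderate-length response in a format that includes paragraphs and bullets."
    )

    def compare_theories(topics: str, model: str = "gpt-4o") -> str:
        """Fill the [two legal theories or principles] slot and return the model's answer."""
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(topics=topics)}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Hypothetical example topics; substitute your own.
        print(compare_theories("strict liability and negligence"))

The follow-up questions listed above can be sent the same way by appending them to the messages list, so the model keeps the earlier exchange as context.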

Meme Of The Week:

That’s all for today!

What'd you think of today's email?
