
EU Lawmakers Struggle to Reach Agreement on AI Rules

PLUS: ChatGPT Prompt for Proofreading

What up, humans!

This week, the EU's on the brink of AI legislation, but it's a bumpy road ahead. Meanwhile, Biden's prepping a game-changing AI executive order. And guess what? AI's "godfathers" are ringing alarm bells, urging tech giants to play it safe. As the world grapples with AI's might, legal minds are in for a rollercoaster ride.

On the docket today:

  • EU Lawmakers Struggle to Reach Agreement on AI Rules

  • EU to Launch Centralized AI Oversight Office Amid Big Tech Scrutiny

  • Biden Administration Expected to Unveil AI Executive Order Next Monday

  • Leading Experts Call for Robust Safety Measures and Legal Liability

  • ChatGPT Prompt for Proofreading

  • Hilarious Lawyer Meme



The European Union is on the brink of finalizing its pioneering AI legislation, but several challenges remain. Here's a synthesis of the latest developments:

  1. Intense Negotiations: The EU's comprehensive AI law is undergoing rigorous discussions. While there's consensus on classifying high-risk AI applications, debates around powerful 'foundation' models and law enforcement provisions persist.

  2. Foundation Models: These AI systems, exemplified by OpenAI's ChatGPT, are trained on vast data sets and can be adapted to a wide range of tasks. The EU's approach is leaning towards a tiered system, especially for models with significant user bases. Spain, which holds the EU presidency, proposes that foundation models with over 45 million users undergo regular checks for vulnerabilities. However, critics worry that smaller platforms could pose similar risks.

  3. Prohibitions and Law Enforcement: This area remains contentious. The EU is considering a package deal encompassing bans, law enforcement exceptions, fundamental rights impact assessments, and environmental provisions.

  4. Timeline: While Spain pushes for a swift resolution, insiders believe a final agreement might not materialize until the fifth trilogue in December. Delays could push negotiations into 2024, especially with the upcoming European Parliament elections in June.

  5. AI Act's Evolution: Initiated in 2021, the draft AI Act has seen proposals around facial recognition, biometric surveillance, and AI tool risk classifications.

Dragoş Tudorache at a press briefing on June 13, 2023 in Strasbourg, France (Image: European Parliament)

The European Union is poised to establish a dedicated office to supervise the AI Act's enforcement, particularly concerning major tech giants like OpenAI. This move comes after the European Parliament's approval in June of regulations meant to counteract potential societal harms from AI. The AI Act is slated for implementation in early 2026.

Dragoş Tudorache, a pivotal figure in the AI Act's development, confirmed the consensus on the creation of an "EU AI Office." While its final name and structure remain under discussion, the office's primary role will be centralized oversight, supplemented by national subsidiaries tasked with talent acquisition and expertise development.

Tudorache emphasized the office's significance in monitoring global tech behemoths such as OpenAI and Meta. Given these companies' vast influence and reach, he argued, a robust enforcer is essential, drawing a parallel with the rollout of the General Data Protection Regulation.

While the AI Act aims to curb high-risk applications, including public biometric recognition, concerns linger. Critics highlight the roughly two-year gap before the Act takes effect, which could allow more high-risk AI systems to proliferate in the meantime. Tudorache suggests a 12- to 16-month timeline is more realistic for compliance readiness.

President Biden is expected to sign an order next week that, among other things, aims to establish guideposts for the use of AI by federal agencies. PHOTO: EVAN VUCCI/ASSOCIATED PRESS

The Biden administration is poised to release a groundbreaking executive order on artificial intelligence, signaling the U.S. government's most ambitious effort yet to regulate this transformative technology. The announcement comes just ahead of an international AI safety summit in Britain, where global leaders, tech magnates, and civil society representatives will discuss AI's societal implications.

Central to the order is the U.S. government's intention to mandate assessments for advanced AI models before their deployment by federal employees. This move underscores the government's role as a major tech consumer and its commitment to ensuring AI's safe integration. Additionally, the order seeks to streamline immigration for highly skilled tech professionals, enhancing the U.S.'s competitive edge in the tech arena. Key federal departments, including Defense and Energy, will be tasked with evaluating how AI can fortify their operations, particularly in the realm of cyber defense.

While the European Union is concurrently crafting its AI Act to safeguard consumers from AI's potential hazards, the U.S. is leveraging voluntary commitments from tech giants like OpenAI and Google. These commitments focus on AI safety, transparency, and collaborative data sharing. As AI continues to revolutionize industries and societies, this executive order represents a significant stride towards responsible AI governance and innovation.

The scales of justice transform into pixelated data. Composite: Guardian Design/Getty Images

Prominent AI researchers, including the revered "godfathers" of the technology, Geoffrey Hinton and Yoshua Bengio, have raised alarms over the unchecked growth of powerful artificial intelligence systems. In a recent policy proposal, these experts have emphasized the potential threats AI poses to societal stability and have urged that companies be held accountable for any harm their AI products may cause.

The clarion call comes as global stakeholders, from politicians to tech giants, gear up for an AI Safety Summit at Bletchley Park. Stuart Russell, a notable computer science professor, highlighted the urgency, stating, "Increasing [AI] capabilities before we understand how to make them safe is utterly reckless."

Key policy recommendations from the document include:

  • Allocating a significant portion of AI R&D resources to safety and ethical considerations.

  • Granting independent auditors access to AI labs.

  • Instituting a licensing system for advanced AI models.

  • Holding tech companies accountable for foreseeable and preventable harms from their AI systems.

The document also warns of the potential for AI to amplify social injustices, erode professional standards, and destabilize society. With AI models like OpenAI's GPT-4 already showcasing advanced capabilities, the experts stress the need for immediate action to ensure these systems remain under human control.

As the legal community navigates the AI landscape, understanding these concerns and the proposed measures will be crucial in shaping future regulations and ensuring the responsible growth of AI.

ChatGPT Prompt for Proofreading

WARNING: Use these prompts at your own risk! Always, always, always read and verify the accuracy of the outputs!

ChatGPT prompt: Proofread this document to ensure consistency in verb tenses and correct any grammatical errors.

Edit this prompt to cover the following requests (see the example after the list):

  • Check for and simplify long or complicated sentences

  • Rewrite all sentences with passive voice in an active voice

  • Check the document for consistency in formatting and style

  • Make sure the article has proper punctuation and capitalization
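
Here's one way the edited prompt might look once it covers all four requests (tweak the wording to fit your document):

ChatGPT prompt: Proofread this document to ensure consistency in verb tenses and correct any grammatical errors. Simplify any long or complicated sentences, rewrite passive-voice sentences in the active voice, check the document for consistency in formatting and style, and make sure punctuation and capitalization are correct throughout.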

Meme Of The Week:

That’s all for today!

Catch ya’ll on LinkedIn

What'd you think of today's email?
