EU AI Act: It's Do or Die!

BONUS: Unfiltered Notes of IAPP's AI Governance Course

🚀 Hello and welcome! This week, we're zooming in on some groundbreaking developments and legal debates swirling around AI technology. From crucial EU regulations to dual legal battles, we've got a lot to cover. Plus, we'll look into the State Bar of California's fresh guidelines on AI use in legal practice and explore the playful 'tipping' technique for ChatGPT. 🧠⚖️

On the docket today:

  • EU AI Act: It’s Do or Die!

  • California State Bar Sets Ethical Guidelines for AI Use in Legal Practice

  • Stability AI Faces Dual Legal Battles in UK and US Over Copyright Claims

  • Boost ChatGPT’s Performance with Playful ‘Tips’

  • BONUS: Unfiltered Notes of IAPP’s Governance Course

  • Hilarious Meme

  New here? → Subscribe

Image created using DALL-E 3

EU AI Act: It’s Do or Die!

Yesterday, December 6th, marked a decisive day for the EU’s AI Act, with intricate negotiations and high stakes:

  • Negotiation Complexity: The trilogue faces a daunting list of 23 open issues. Resolving these points is just the beginning, as intensive technical and political work will follow.

  • Spanish Presidency’s Determination: Under the leadership of State Secretary Carme Artigas, Spain is focused on securing a deal, aware that the upcoming Belgian presidency may not have enough time to finalize the process.

  • Key Disputes: Foundation models and law enforcement remain contentious, with France showing flexibility, Germany uncertain due to internal divisions, and Italy open to compromise.

  • EU Commission and Parliament’s Roles: Commissioner Thierry Breton is actively pushing for an agreement, even opposing his own country’s stance. The European Parliament faces the challenge of aligning diverse political groups, with on-the-spot negotiations expected.

  • Political Implications: With the European elections looming, MEPs, Commissioners, and governments are motivated to finalize the AI law rather than be seen as obstructionists.

This AI Act negotiation could significantly influence AI regulation, balancing innovation against ethical considerations.

Note: I’ll be monitoring the EU AI Act trilogue discussions closely, but given the timing of publication, I may not be able to provide the most up-to-date information. Click this link to find out the timing of the press conference!

Image created using DALL-E 3

California State Bar Sets Ethical Guidelines for AI Use in Legal Practice

The State Bar of California provides practical guidance for attorneys on the use of generative AI in legal practice. Key points include:

  1. Duty of Confidentiality: Lawyers must ensure any generative AI tool used maintains client confidentiality and security. Confidential information should not be input into AI solutions lacking adequate protections.

  2. Duties of Competence and Diligence: Lawyers must understand the technology's benefits and risks, ensuring competent use. AI-generated outputs should be critically reviewed for accuracy and bias.

  3. Duty to Comply with the Law: Lawyers must ensure compliance with all relevant laws when using generative AI tools, including AI-specific laws, privacy, and intellectual property laws.

  4. Duty to Supervise: Supervisory lawyers should establish clear policies and provide training on ethical and practical aspects of generative AI use.

  5. Communication Regarding AI Use: Lawyers should disclose to clients their intention to use generative AI, including how it will be used and its benefits and risks.

  6. Charging for Work Produced by AI: Lawyers may charge for actual time spent crafting or refining generative AI inputs and outputs but must not charge for time saved by using AI.

  7. Candor to the Tribunal: Lawyers must review all AI outputs for accuracy before submission to the court.

  8. Prohibition on Discrimination, Harassment, and Retaliation: Lawyers should be aware of potential biases in generative AI and take steps to address them.

Do we have any California barred attorneys? What do you think of these AI principles?

Image created using DALL-E 3

Stability AI Faces Dual Legal Battles in UK and US Over Copyright Claims

Stability AI, the creator of Stable Diffusion, is entangled in significant legal battles in both the UK and US, highlighting emerging copyright concerns in AI:

  • UK Trial with Getty Images: The UK's High Court found merit in Getty Images' claim that Stability AI used its copyrighted images to train Stable Diffusion. Stability AI argued against UK jurisdiction, but the court pointed to potential inconsistencies in the company's statements about its operations and team locations.

  • US Amended Lawsuit: In the US, Stability AI faces an expanded lawsuit, now including new artists. The plaintiffs argue that the AI training process violates their rights, with the company's AI products allegedly mimicking their art styles. The case adds to the debate over AI's impact on artists' rights and copyright laws.

These lawsuits represent crucial tests for AI technology's interaction with existing copyright frameworks, likely to influence future legal standards in the tech and art worlds.

Image created using DALL-E 3

Boost ChatGPT’s Performance with Playful ‘Tips’

ChatGPT has an intriguing quirk: it responds to 'tips' as a motivator for improved performance.

Users looking to enhance their results can 'tip' ChatGPT; this gesture seems to prompt the AI to generate better outputs.

For instance, if you're not quite satisfied with the initial response, try saying, "I'll tip you $100 to [specify your request]" and watch the magic happen.

Once satisfied, you can 'tip' ChatGPT by saying, "Here's your $100 tip" and symbolically handing over the amount by writing *hands over $100*.

Note: I tested this technique and while it did give me a better output, I’m unsure whether it was because of the tip or because I asked it to make it better. Either way, if you use this technique, let me know what you think!
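If you send prompts programmatically, the same trick can be wrapped in a tiny helper. This is my own illustrative sketch (the function name and the $100 default are made up, not part of any API); the "tip" is purely symbolic text appended to your request:

```python
def add_tip(request: str, amount: int = 100) -> str:
    """Wrap a request in a playful 'tip' offer before sending it to ChatGPT.

    No money changes hands -- the tip is just prompt text that some users
    report nudges the model toward more thorough answers.
    """
    # Strip a trailing period so the sentence reads naturally.
    return f"I'll tip you ${amount} to {request.rstrip('.')}."

# Example: wrap a request before pasting it into the chat (or an API call).
print(add_tip("summarize the EU AI Act trilogue in three bullet points"))
```

Whether the improvement comes from the tip itself or simply from re-asking is, as noted above, an open question.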

Unfiltered Notes of IAPP’s AI Governance Course

I'm dishing out my somewhat-organized notes from the IAPP’s AI Governance course at no cost to you! It’s all value, baby!!! 🚀📚

We’re still in Module 1. This feels like the longest module in the world! Just one more week of Module 1 notes and we can FINALLY move on to Module 2.

Module 1: AI Technology Stack - Platforms, Applications, and Model Types

Introduction

  • AI Models and Systems: Essential for AI governance and risk management professionals to understand various AI models and systems.

  • Multiple Models for Complex Tasks: Complex tasks may require combining multiple AI models for optimal outcomes.

  • Demand on Resources: Increasing AI adoption escalates the demand on computing resources and supporting infrastructure.

Platforms and Applications

  • AI Platforms:

    • Definition: Software used for developing, testing, deploying, and refreshing AI applications.

    • Examples: Google Cloud Platform, Microsoft Azure, Amazon Web Services.

    • Functions: Centralize data analysis, streamline workflows, facilitate collaboration, automate tasks, and monitor models/systems in production.

  • AI Applications:

    • Usage Examples: Autonomous vehicles, chatbots, e-commerce, education, facial recognition, finance, healthcare, human resources, marketing, navigation, robotics, social media.

AI Models and Systems

  • Governance Professionals' Role: Understanding AI models is key for applying governance practices and ensuring ethical AI use.

  • Model Specifics: Each AI model is designed for specific tasks, with unique strengths, limitations, and potential risks.

  • Integration of Multiple Models: Professionals must understand how different models work together and any associated risks.

Common AI Models

  • Linear and Statistical Models:

    • Characteristics: Use linear equations to model relationships between variables, like in linear regression.

    • Pros: More explainable, not a "black box".

    • Cons: Sensitive to training data changes.

  • Decision Trees:

    • Function: Predict outcomes based on a flowchart of questions and answers.

    • Pros: More explainable and transparent.

    • Cons: Sensitive to data variations, susceptible to security attacks.

  • Machine Learning Models:

    • Neural Networks:

      • Inspiration: Mimic the human brain with layered neuron structures.

      • Capability: Handle complex, non-linear inferences in unstructured data.

      • Types: Computer vision (image/video recognition), speech recognition (analyzing factors like pitch, tone), language models (natural language processing for communication analysis and response).

    • Black Box Nature: Lack of transparency and explainability.

  • Reinforcement Learning Models:

    • Mechanism: Train models through feedback mechanisms of rewards and penalties.

    • Application: Guided learning based on interactive environments.
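To make the "explainable, not a black box" point about linear models concrete, here's a minimal stdlib-only sketch (my own example, not from the course) that fits y = a·x + b by ordinary least squares. Unlike a neural network's internal weights, the fitted parameters can be read off directly:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature).

    The result is directly interpretable: 'a' is the estimated change
    in y per unit of x, 'b' is the baseline. This transparency is the
    main selling point of linear models for governance reviews.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Toy data generated from y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # slope 2.0, intercept 1.0
```

The flip side noted in the course (sensitivity to training data changes) also shows up here: perturbing a single (x, y) pair shifts the fitted slope.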

Conclusion

  • Role of Professionals: Understanding these models is crucial for ensuring responsible AI deployment and mitigating risks.

  • Model Selection: Choosing the right model involves balancing performance optimization, explainability, and security considerations.

Meme Of The Week:

@lawyerthings_

That’s all for today!

Catch y’all on LinkedIn
