Musk vs. Altman

$800+ Value: IAPP's AI Governance Course Notes

Hold onto your hats, AI legal nerds! ⚖️🤓 This week, we're zooming through a whirlwind of AI legal battles, from Musk's ethical showdown with OpenAI to The New York Times' copyright clash, alongside Clifford Chance's trailblazing AI integration and Dr. Vanberghen's deepfake warnings ⚠️

On the docket today:



The complaint filed by Elon Musk against Sam Altman, Greg Brockman, OpenAI, and related entities centers on OpenAI's alleged abandonment of its non-profit mission. It pleads causes of action including breach of contract, promissory estoppel, breach of fiduciary duty, unfair competition, and a demand for an accounting, focusing on the secrecy surrounding GPT-4 and the development of a potentially more advanced AI model, Q*. The complaint also raises concerns about the commercialization of AI technology and OpenAI's close ties with Microsoft, alongside governance questions stemming from the November 2023 board coup.


OpenAI has responded to The New York Times' lawsuit, which accuses the company of using the Times' content to train ChatGPT without permission. OpenAI counters by alleging that the Times engaged in hacking to generate evidence for its case, violating OpenAI's terms of use through "deceptive prompts." The AI company is seeking dismissal of most of the newspaper's claims, arguing that its use of copyrighted content falls under fair use and is essential for creating innovative products. OpenAI frames the Times' approach as a threat to its business model while expressing willingness to partner with news organizations, pointing to an existing agreement with The Associated Press. The Times, through its lead counsel, denies OpenAI's accusations, defending the legitimacy of its evidence gathering and highlighting the scale of OpenAI's alleged unauthorized use of its copyrighted works. The legal battle centers on claims of direct copyright infringement, contributory copyright infringement, and unfair competition, with OpenAI challenging both the viability and the timing of these claims.


Clifford Chance, the London-headquartered multinational law firm, is rolling out a comprehensive suite of AI-powered workplace tools, including Microsoft's GPT-4-powered Copilot and Viva Suite, across its global workforce of nearly 7,000 people. The initiative follows the development of Clifford Chance Assist, a private AI tool built on Microsoft's Azure OpenAI platform and aimed at speeding up tasks like document summarization, minute taking, and email drafting. Chief Technology Officer Paul Greenwood points to the firm's strategic AI integration since 2019, focused on productivity and value rather than headcount reduction. Clifford Chance prioritizes data control and client data protection, forbidding the use of public ChatGPT versions for sensitive information. The move positions the firm as a leader in legal-sector innovation, underscoring AI's role in transforming legal practice while maintaining client trust and data security.

Prof. Dr. Cristina Vanberghen - international legal practitioner and academic in the area of digitalisation - critically examines the escalating threat of deepfakes, emphasizing their potential to undermine democratic integrity and public trust, particularly in the context of global elections. She critiques the current regulatory measures, including the EU AI Act and the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, for their lack of concrete enforcement mechanisms and their potential insufficiency in addressing the evolving nature of deepfake threats. Vanberghen advocates for the reclassification of deepfakes from “limited risk” AI systems to “high-risk” AI systems within the EU AI Act, calling for clearer legal frameworks for developer liability and the criminalization of malicious deepfake creation and dissemination. She underscores the importance of robust legal measures, international cooperation, and the empowerment of the public with digital literacy skills to effectively combat the challenges posed by deepfakes.

@lawyerswhowine

$800+ Worth of Value PLUS: IAPP's AI Governance Course Notes

Module 7 is here! And, I’m just going to dump all sections here sooooo, it’s gonna be long. Get ready my fellow AI nerds! 🤓📚

Module 7: Emerging Laws and Standards in the EU

Introduction

  • Context: AI governance professionals must understand the regulatory landscape in the EU regarding AI, including existing and upcoming laws.

  • Focus: Three proposed AI acts in the EU are examined for their common elements and differences.

Key Topics

  • AI System Classifications: Understanding the categorization of AI systems into prohibited, high-risk, limited-risk, and low-risk.

  • High-Risk Systems: Grasping the specific requirements for these systems, including foundation models.

  • Notification Requirements: Obligations to inform customers and national authorities about AI system aspects.

  • Enforcement and Penalties: Understanding the legal consequences of noncompliance with AI regulations.

  • Innovative AI Testing and Research Exemptions: Procedures and allowances for testing new AI technologies and exemptions for academic research.

  • Transparency and Registration: Necessity for a public registration database for AI systems.

The EU AI Act

  • Goals: To guarantee AI system safety and adherence to EU values/rights and to foster investment and innovation in AI.

  • Development Timeline:

    • European Commission's initial proposal in April 2021.

    • Council of the EU's position in December 2022.

    • European Parliament's final stance in June 2023.

    • Trilogue negotiations anticipated in late 2023/early 2024.

  • Applicability: Extra-territorial reach, impacting AI systems marketed or used within the EU.

  • Exemptions: Military, national security, defense, and R&D contexts are typically exempt from these regulations.

Framework Breakdown

  • Risk-Based Approach: AI systems are categorized by risk level, each with specific regulatory requirements.

  • Prohibited AI Practices: Include AI applications that:

    • Use subliminal methods to materially distort behavior causing harm.

    • Exploit vulnerabilities of specific groups (e.g., based on age or disability) to significantly influence behavior.

    • Implement social credit scoring by public authorities.

    • Utilize real-time biometric identification in public spaces for law enforcement (limited exceptions allowed).

  • High-Risk AI Systems: Encompass systems with significant potential impacts on rights or safety.

  • Limited Risk AI Systems: Involve AI systems like chatbots or emotional detection tools, with specific transparency obligations.

  • Low/Minimal Risk AI Systems: Cover all other AI applications not specified in higher risk categories.
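
For the code-inclined: here's a minimal Python sketch of how a compliance team might index internal use cases against the four tiers above. Everything in it (`RiskTier`, `TIER_BY_USE_CASE`, the sample use cases) is hypothetical shorthand; under the Act, tier assignment is a legal determination against the Act's definitions and annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping only; real classification requires legal review
# against the Act's definitions and Annex III.
TIER_BY_USE_CASE = {
    "social_scoring_by_public_authority": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default to MINIMAL here for simplicity; in practice an unlisted use
    # case should trigger a manual legal review, not a silent default.
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)

print(classify("recruitment_screening"))  # RiskTier.HIGH
```

The payoff of keeping the inventory in one place: when a tier changes (say, Parliament adds a category to high-risk), you update one table and immediately see which systems are affected.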

High-Risk AI Systems

  • Areas Defined in Annex III:

    • Biometric identification and categorization of natural persons.

    • Critical infrastructure management (e.g., utilities, transport).

    • Education and vocational training (e.g., admission algorithms).

    • Employment and workforce management (e.g., recruitment, employee monitoring).

    • Essential public/private services access (e.g., credit scoring, insurance risk assessment).

    • Law enforcement applications (e.g., crime prediction, lie detection).

    • Migration, asylum, and border control management.

    • Judicial and administrative processes in justice systems.

  • Requirements for Providers: Include risk management, data governance, transparency, human oversight, and robust security measures.

  • Obligations for Users: Safe usage as per guidelines, incident monitoring and reporting, and ensuring human oversight.

Registration and Notification

  • EU Database for High-Risk AI Systems: A publicly accessible database documenting high-risk AI systems.

  • Incident Reporting: Mandates providers to report serious incidents or malfunctions to authorities within 15 days.
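
Since the 15-day clock is exactly the kind of thing that gets missed, here's a trivial sketch of the deadline arithmetic. It assumes calendar days running from when the provider becomes aware of the incident; check the final text and national guidance for how the window is actually counted.

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # serious-incident window in the proposal

def reporting_deadline(became_aware: date) -> date:
    """Last day to notify the authority, assuming calendar days."""
    return became_aware + timedelta(days=REPORTING_WINDOW_DAYS)

print(reporting_deadline(date(2024, 3, 1)))  # 2024-03-16
```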

Key Differences Between Proposals

  • Definition of AI: The Council and Parliament aimed to narrow the definition, primarily focusing on machine learning technologies.

  • Expanding Prohibitions and High-Risk Categories: The European Parliament proposes to ban more AI practices and add categories to high-risk AI systems.

Summary

  • Comprehensive understanding of these evolving EU regulations is crucial for AI governance professionals.

  • Organizations must prepare for stringent compliance requirements, including public transparency and incident reporting.

Module 7: Emerging Laws and Standards in the EU (Continued)

Introduction

  • Context: Continuation of the discussion on emerging AI laws and standards in the EU.

  • Focus: Understanding the evolving legal landscape for AI in the EU, including classifications, requirements, and governance frameworks.

General Purpose AI and Foundation Models

  • Development: The European Commission's original proposal (April 2021) did not specifically address general-purpose AI and foundation models.

  • Council's View: General-purpose AI defined as systems usable for many different purposes. Suggests future legal acts for applying high-risk AI requirements to general-purpose AI, pending consultation and impact assessment.

  • European Parliament's Stance: Advocates for stringent risk assessment and mitigation for foundation models, including registration in the EU database and transparency about AI-generated content and copyrighted training data.

Governance and Enforcement Framework

  • National Supervisory Authorities: Member states designate existing authorities for AI Act enforcement, possibly involving multiple organizations.

  • EU Level: Creation of a European AI Board, composed of national authorities, chaired by the European Commission, ensuring uniform AI Act implementation.

  • Penalties:

    • For breaching prohibited AI practices: up to €30 million or 6% of global annual turnover, whichever is higher.

    • For other non-compliance: up to €20 million or 4% of global annual turnover, whichever is higher.

    • The Council proposes additional caps for SMEs to ensure proportionate fines.
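
To see how the two ceilings interact, a quick worked example: each fine is capped at the higher of the fixed amount and the turnover percentage, so for a company with €2 billion in global annual turnover the percentage dominates.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # The proposal sets the ceiling at whichever amount is HIGHER.
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

print(max_fine(turnover, 30_000_000, 0.06))  # 120000000.0 (prohibited practices)
print(max_fine(turnover, 20_000_000, 0.04))  # 80000000.0  (other breaches)
```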

The EU AI Act and Next Steps

  • Identification: Determine which AI systems are classified as high-risk.

  • Territorial Scope: Assess whether the AI systems fall within the AI Act's scope.

  • Provider/User Clarification: Understand whether your organization is a provider or user of the AI system.

  • AI Procurement Policies: Develop robust procurement processes and due diligence for AI systems.

  • Gap Analysis: Compare current AI policies with AI Act requirements and address discrepancies.

  • Technical Standards: Stay updated with international and European standards relevant to AI.
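
The gap-analysis step above reduces, at its crudest, to a set difference: which required controls does your current policy not yet cover? A toy sketch (the control names are illustrative placeholders, not the Act's actual article headings):

```python
AI_ACT_REQUIREMENTS = {
    "risk_management_system", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight", "robustness",
}

CURRENT_POLICY_CONTROLS = {
    "data_governance", "transparency", "human_oversight",
}

# Controls the Act expects that current policy doesn't yet address.
for control in sorted(AI_ACT_REQUIREMENTS - CURRENT_POLICY_CONTROLS):
    print(f"GAP: {control}")
```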

Summary

  • Importance of Legal Teams: AI professionals should ensure their legal teams are knowledgeable about AI legislation.

  • Global Impact: Like the GDPR, the EU AI Act may influence global AI regulation approaches.

  • Compliance Strategy: Understanding compliance requirements is essential for AI governance teams.

Module 7: Emerging Laws and Standards Around the Globe

Introduction

  • Context: Global examination of AI regulation.

  • Goal: AI governance teams should leverage emerging laws globally for effective AI utilization.

  • Focus: Emerging laws in various countries including Canada, the U.S., Singapore, Japan, China, and others.

Global AI Regulation Overview

  • Global Activity: Countries worldwide are actively developing AI-specific legislative and regulatory frameworks.

  • Approaches: Varying approaches include enhancing existing regulations and creating new AI-specific laws.

  • Risk-Based Approach: Most jurisdictions are adopting a risk-based approach, except China.

  • Transparency: A common element in many laws, ensuring AI systems' decisions are understandable.

Canada's AI Regulation

  • AI and Data Act (C-27): Broad definition of AI, applying to the private sector; focuses on high-impact systems, weighing the nature of potential harm, the scale of use, and effects on consumer autonomy.

  • Federal AI and Data Commissioner: Proposed for overseeing AI operations, with powers to order cessation of AI operations in certain scenarios.

Singapore's AI Framework

  • AI Governance Framework: Industry-led advisory council advises on commercial AI uses.

  • Model Framework (2019): Voluntary framework guiding responsible AI deployment based on transparency, fairness, and human-centric principles.

  • AI Verify (2023): A toolkit supporting AI governance testing and oversight, fostering interoperability and common governance principles.

China's AI Approach

  • Rights-Based Approach: Focuses on end-user rights, opt-out options, and prohibitions on price differentiation.

  • Specific Directives: Target online recommendations, social media, and gaming.

  • Local Governance: Municipalities passing additional AI governance features, including compliance oversight and auditing.

Other Global Perspectives

  • South Africa: Protection of Personal Information Act, similar to the EU's GDPR.

  • Brazil: Proposed risk-based approach with human oversight.

  • India: Promotes AI use, established national committees for AI policy.

  • Japan: Non-binding AI guidelines, machine learning management guidelines, cloud service guidelines for AI.

  • South Korea: Non-binding guidance focusing on preventing AI dysfunction and promoting innovation.

Key Takeaways

  • Review of AI Laws: AI systems are subject to existing legal standards but pose new challenges.

  • Compliance Efforts: Ongoing effort to comply with developing AI laws and regulations.

  • Legal Advisory: Critical for organizations to have informed legal advisors for compliance readiness.

Summary

  • Proactive Regulatory Preparation: Essential for minimizing future adjustments.

  • Importance of Legal Expertise: Staying informed on upcoming AI laws is crucial for compliance and operational success.

Module 7: Standards and Risk Management Frameworks - Detailed Notes

Introduction

  • Focus: Designing an AI risk model suitable for intended AI use.

  • Goal: Selecting aspects from various frameworks to address organization-specific AI use.

Key Frameworks and Guidelines

  • ISO 31000:2018 Risk Management – Guidelines

    • Principles: Inclusive, dynamic, informed, human-centered.

    • Focus Areas: Leadership, integration, design, implementation, evaluation, improvement.

    • Process: Identify, evaluate, and mitigate risks.

  • NIST AI Risk Management Framework (AI RMF)

    • Principles: Trustworthy AI characterized by validity, safety, security, accountability, explainability, privacy, and fairness.

    • Process: Identify use and risks, assess and analyze risks, prioritize and manage risks.

  • Council of Europe Human Rights, Democracy, and Rule of Law Assurance Framework (HUDERIA)

    • Principles: Human dignity, freedom, prevention of harm, non-discrimination, transparency, data protection, democracy, rule of law.

    • Process: Impact assessment, risk grading, continuous monitoring.

  • ISO/IEC Guide 51: Safety aspects – Guidelines for their inclusion in standards

    • Focuses on including safety aspects in standards.

  • EU Artificial Intelligence Act (EU AIA) and Singapore Model AI Governance Framework

    • Emphasize safety, transparency, non-discrimination, environmental friendliness, explainability, repeatability, security, robustness, fairness, data governance, and accountability.
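
All of these frameworks share an identify → assess → prioritize loop. Here's one common (but by no means mandated) way to operationalize it: a risk register scored as likelihood × impact on 1-5 scales, sorted so the worst risks surface first. The scales, example risks, and scoring are my own illustrative choices, not something ISO 31000 or the NIST AI RMF prescribes.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("training data bias", likelihood=4, impact=4),
    Risk("model drift in production", likelihood=3, impact=3),
    Risk("prompt injection", likelihood=2, impact=5),
]

# Prioritize: tackle the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```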

AI Governance and Responsible AI Programs

  • Context: Tailoring AI governance programs to the organization's specific use cases, sector, jurisdiction, and risk tolerance.

  • Frameworks vs. Principles: Principles articulate a set of values; frameworks operationalize them.

  • Key Considerations: Sector, jurisdiction, risk tolerance, business strategy, resources for implementation.

  • Challenges: Managing change and cultural shifts, engaging stakeholders, raising awareness, gaining buy-in from leadership and ground teams.

  • Tips: Utilize various methodologies and legal proposals to establish AI governance programs.

Components of an AI Governance Framework

  • Scope: Covers AI development, procurement, and use.

  • Key Elements: Risk management system, data governance, record keeping, transparency, human oversight, accuracy, robustness, cyber security, quality management, performance assessment.

  • Resource Utilization: Tailoring frameworks to use cases; leveraging tools and metrics for specific AI governance issues.
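
One way to keep those elements actionable is to track them per AI system as a simple checklist. A hypothetical scaffold (the field names are mine, not drawn from any standard):

```python
from dataclasses import dataclass, fields

@dataclass
class GovernanceChecklist:
    risk_management: bool = False
    data_governance: bool = False
    record_keeping: bool = False
    transparency: bool = False
    human_oversight: bool = False
    robustness_and_security: bool = False
    quality_management: bool = False

def outstanding(checklist: GovernanceChecklist) -> list[str]:
    """Elements not yet evidenced for this system."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

chatbot = GovernanceChecklist(transparency=True, human_oversight=True)
print(outstanding(chatbot))  # everything except transparency and human_oversight
```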

Further Learning: AI Governance Framework

  • ISO 31000:2018: Provides a high-level understanding of risk management principles and processes.

  • NIST AI RMF: Offers practical guidance on managing risks in AI, focusing on trustworthy AI characteristics.

  • HUDERIA: Suggests methodologies for impact assessments aligning with human rights principles in AI.

Challenges and Next Steps

  • Change Management: Ensuring organizational alignment with AI governance principles.

  • Stakeholder Engagement: Involving internal and external parties for comprehensive risk assessment and ethical AI implementation.

  • Documentation: Recording decisions, risks, and mitigations for accountability.

  • Resource Allocation: Prioritizing high-risk AI applications for focused evaluation and resource allocation.

  • Cultural Adaptation: Embracing AI governance as an ongoing journey requiring time, effort, and consistent engagement.

Summary

  • Customized Approach: No universal solution for AI risk management; tailor the approach to fit the organization's values, principles, and purposes.

  • Framework Utilization: Drawing from existing frameworks and identifying organizational principles for a suitable AI risk management strategy.

  • Adaptive Journey: AI governance and risk management is a continuous process, requiring adaptation and consistent evaluation.

That’s all for today!

Catch ya’ll on LinkedIn