Hallucitations Strike Again (and hit two legal professionals)
$800+ Value: IAPP's AI Governance Course Notes
Hey hey, AI homies! The week was filled to the brim with activity, so our newsletter is packed with succulent news 💦 Let's leap straight into the action!
On the docket today:
Analyzing the Impact of the GitHub and Tremblay Copyright Decisions
Funny Attorney Meme
$800+ Worth of AI Governance Material
In two recent incidents, a lawyer and a pro se litigant faced sanctions for citing non-existent cases in court filings, a mistake attributed to reliance on generative AI without proper verification. In the Missouri case, a pro se appellant filed a brief in which 22 of the 24 cited cases were fictitious, resulting in dismissal of the appeal and a $10,000 fine. In Massachusetts, a lawyer filed memoranda containing several fake cases and was sanctioned $2,000 for failing to conduct a reasonable inquiry into the accuracy of the AI-generated citations. These cases highlight the legal profession's growing concern with the use of generative AI and underscore the importance of diligence in verifying AI-generated legal content.
During Meta's Q4/2023 earnings call, CEO Mark Zuckerberg highlighted the company's vast collection of user-generated content on Facebook and Instagram, emphasizing its importance for AI training and feedback. This practice, however, raises legal and ethical concerns, particularly in light of the FTC's February 13th warning against companies that change their terms of service to use consumer data for AI training without explicit consent. Meta's privacy policy lacks clear disclosure on the use of user data for AI training, a point of contention given the FTC's stance on such practices as potentially unfair or deceptive. The absence of an opt-out option for AI training further complicates the issue, suggesting a potential violation of user privacy rights and highlighting the need for regulatory scrutiny to ensure compliance with data protection laws.
The recent U.S. District Court decisions in Doe v. GitHub, Inc. and Tremblay v. OpenAI, Inc. have continued to shape the legal landscape regarding copyright infringement claims against generative AI (GenAI) systems like GitHub's Copilot and OpenAI's ChatGPT.
Key Takeaways from the GitHub and Tremblay Decisions:
State Law Claims Preemption: Both decisions reaffirmed the principle that state law claims that duplicate or are equivalent to copyright claims are preempted by Section 301 of the U.S. Copyright Act. This preemption applies when the subject matter of the state law claim falls within the scope of copyright and the rights asserted under state law are equivalent to copyright rights, without adding an 'extra element' that makes the claim qualitatively different.
DMCA Claims on CMI: The courts have narrowed the scope of claims under the Digital Millennium Copyright Act (DMCA) related to copyright management information (CMI). In GitHub, the court dismissed the DMCA claims, clarifying that a violation requires the removal or alteration of CMI from identical copies of copyrighted works, suggesting that the transformation or modification inherent in AI-generated outputs may not meet this standard.
Copyright Infringement and Derivative Works: The decisions continue to grapple with whether and how GenAI outputs might infringe copyright, specifically questioning whether all outputs from GenAI systems are derivative works simply because they are derived from copyrighted works. The courts have yet to fully resolve this issue but have allowed cases to proceed where plaintiffs made specific allegations of infringing AI output.
Fair Use Defense: These decisions hint that the fair use defense may apply to GenAI training and outputs, aligning with earlier suggestions that the legality of training GenAI systems under U.S. copyright law might ultimately hinge on this defense.
Mistral AI's partnership with Microsoft raises significant questions about the company's role in shaping the AI Act and its alignment with Big Tech interests. The French AI company, which previously advocated for leniency in the AI Act's foundation model rules, is now in a deal that seems at odds with its 'European champion' stance. The timing of the deal suggests a possible strategic play during the AI Act negotiations, hinting at a complex relationship with Big Tech lobbying efforts. The move also challenges France's push for 'strategic autonomy' from American tech giants, especially with Mistral's large language model being made available on Microsoft's Azure AI platform. The launch of Mistral Large, a new language model that competes with OpenAI's GPT-4 but is not open source, marks a shift from Mistral's open-source approach and raises concerns about its standing with EU policymakers as enforcement of the AI Act nears.
Meme credit: @lawyerswhowine
$800+ Worth of AI Governance Material: IAPP's AI Governance Course Notes
Module 6 is here! And I'm just going to dump all the sections here, sooooo it's gonna be long. Get ready, my fellow AI nerds! 🤓📚
Module 6: Current Laws that Apply to AI Systems
Introduction
AI governance professionals must understand existing laws affecting AI use.
Knowledge of these laws helps in avoiding legal/regulatory issues in AI governance and risk management.
Topics and Performance Indicators
Unfair and Deceptive Practices Laws
Product Safety Laws
Intellectual Property Law
EU Digital Services Act (transparency of recommender systems)
Privacy Laws (data use)
Laws and AI Context
AI falls under several regulatory frameworks.
Compliance required throughout the AI development lifecycle.
Categories of AI adoption: performing existing functions in new ways and accomplishing entirely new processes.
AI in Regulatory Context
Existing regulatory requirements apply to AI-driven processes.
Where AI introduces new processes, organizations must consider how existing laws apply.
Highly regulated industries: financial services, health care, transportation, employment, education.
AI Evolution and Legislation
Challenges: copyright law's application to AI, the originality and copyrightability of AI outputs, and human intervention in AI inventions.
Courts, agencies, and legislators are evolving to address AI legal questions.
U.S. Legal Context
Employment laws (Title VII, EEOC regulations)
Consumer finance laws (Equal Credit Opportunity Act, Fair Credit Reporting Act)
Model risk management (SR 11-7)
Robotics safety (OSHA guidelines)
FDA approval for software as a medical device
FTC authority over commercial operations (privacy and security in AI)
EU Digital Services Act
Overlaps with the GDPR: transparency in online platforms and advertising.
Recommender systems and online advertising transparency.
Product Safety Laws
The EU AI Act builds on the EU's product safety framework.
U.S. Consumer Product Safety Commission developing AI standards.
Relevance of IP laws to AI content generation and use.
Data protection (GDPR, CPRA) and security standards.
Summary
Awareness of applicable laws is essential in AI governance and risk management.
Developing proper policies and procedures for legal compliance reduces organizational risks.
Module 6: Spotlight on the GDPR
Introduction
GDPR addresses automated decision-making in AI.
Compliance with GDPR is crucial for organizations using AI.
Topics and Performance Indicators
Automated Decision-Making
Data Protection Impact Assessments (DPIAs)
Anonymization and its Relation to AI Systems
AI Conformity Assessments and DPIAs
Human Supervision of Algorithmic Systems
Individuals' Rights to Transparency About AI Logic
AI and GDPR
Article 22: Restricts solely automated decision-making with legal or similarly significant effects.
Data Protection Impact Assessments: Required for high-risk AI processing.
Recital 26: Pseudonymization and anonymization techniques in AI (see the sketch below).
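Since pseudonymization keeps coming up in these notes, here's a minimal sketch of what it can look like in an AI data pipeline. This is my own illustration, not something from the IAPP course: the `pseudonymize` helper and `PSEUDONYMIZATION_KEY` are hypothetical names, and a real deployment would pull the key from a secrets manager.

```python
import hashlib
import hmac
import secrets

# Illustrative only: in production this key would come from a secrets
# manager. Whoever holds it can re-link pseudonyms to identities, which
# is why pseudonymized data is still personal data under the GDPR,
# while properly anonymized data falls outside its scope (Recital 26).
PSEUDONYMIZATION_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Return a keyed, deterministic pseudonym for an identifier.

    HMAC-SHA256 keeps the mapping stable across records (so joins and
    model features still work) without exposing the raw identifier in
    training data.
    """
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "clicks": 42}
record["user_id"] = pseudonymize(record["user_id"])
print(record)  # raw email replaced; behavioral signal kept for training
```

One design choice worth noting: a keyed HMAC rather than a plain hash, since unsalted hashes of low-entropy identifiers (emails, phone numbers) can often be reversed by brute force.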
GDPR and AI Challenges
Applying GDPR principles to AI: transparency, data minimization, data subject rights.
Implementing data subject rights in AI systems.
AI decision-making: Requirement of manual review, and challenges posed by black-box AI (see the routing sketch after this list).
Utilization of pseudonymized and anonymized data in AI.
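To make the manual-review point concrete, here's a hedged sketch of an Article 22-style escalation gate. The function name, threshold, and routing labels are all hypothetical illustrations, not anything prescribed by the GDPR or the course:

```python
def route_credit_decision(score: float, has_explanation: bool) -> str:
    """Hypothetical triage gate inspired by GDPR Article 22.

    Decisions with legal or similarly significant effects (credit,
    hiring, etc.) shouldn't be made solely by automated means without
    safeguards, so anything adverse, borderline, or unexplainable gets
    escalated to a human reviewer rather than issued automatically.
    """
    AUTO_APPROVE_THRESHOLD = 0.90  # illustrative value only

    if not has_explanation:
        # The black-box problem: with no reviewable rationale, a human
        # must be able to reconstruct and challenge the outcome.
        return "escalate_to_human"
    if score >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"  # the data subject can still contest the decision
    return "escalate_to_human"  # adverse or borderline outcomes get manual review
```

In practice the interesting work is everything around this function: logging enough context for the reviewer, and making sure the human can actually override the model rather than rubber-stamp it.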
Summary
GDPR compliance affects AI program choices.
Addressing GDPR considerations early avoids fines, penalties, and loss of trust.
Module 6: Liability Reform for AI
Introduction
Awareness of changes in legal liability crucial for AI governance and risk management.
Understanding regulators' approach to AI liability informs program design.
Topics and Performance Indicators
EU Product Liability Law Reform
AI Product Liability Directive
U.S. Federal Agency Involvement (EO 14091)
Product Liability Law
Context: Increasing AI use in high-risk domains and potential serious harm.
Challenges: Difficulty in attributing harm due to AI's autonomous nature.
Complexity: AI's technical nature and "black box" issues complicate liability attribution.
Liability Directives
EU proposals aim to ease the burden of proving liability for AI-caused harm.
EU Product Liability Directive: Strict liability for AI products.
AI Liability Directive: Fault-based liability linked to non-compliance with the EU AI Act.
U.S. Legal Framework: Varies by state, with different types of liability claims.
AI Legal Framework
U.S. Approach: Varied state laws and minimal AI-specific case law.
EU Approach: Directives for AI liability, enforcing compliance with the AI Act.
Federal agency guidance on AI harms (FTC, FDA, EO 14091).
Summary
Legal landscape rapidly adapting to AI.
Organizations must be aware of AI product liabilities.
Ensuring AI safety and compliance reduces risk of litigation and enhances reputation.
That’s all for today!
Catch y'all on LinkedIn