
EU Parliament Passes AI Act, But There's More!

$800+ Value: IAPP's AI Governance Course Notes

Holy shit y’all, it’s happening! The EU Parliament passed the AI Act 🎉 BUT there’s more that needs to happen before it becomes law. Keep scrolling for more deets and all the other juicy updates in the legal world of AI. Like always, there’s a lot to cover. So, let’s goooooo!

On the docket today: the EU Parliament passes the AI Act, Japan’s TDM exception and AI training, Microsoft lobbies the FCC on robocalls, the USPTO’s guidance on AI-assisted inventions, an AI-citation sanction in Massachusetts, Thaler v. Perlmutter, and the final installment of my IAPP AI Governance course notes.


EU Parliament Passes AI Act, But There’s More!

Image created using DALL-E 3.

Following intense lobbying from the tech industry and lengthy negotiations among EU lawmakers, Parliament passed the AI Act!

It passed by a vote of 523-46, marking a major milestone in AI regulation. The text now awaits a final lawyer-linguist check and will enter into force 20 days after publication in the Official Journal, with a phased implementation timeline culminating in full enforcement by 2026.

What’s next:

  • Council of Ministers' Final Vote: The bill now moves to the Council of Ministers for final approval. Since the Council has already signaled support pending Parliament's vote, approval is anticipated.

  • Enactment: Once approved, the bill will be signed into law, expected by June.

  • Implementation: The Act's provisions will be phased in gradually after it becomes law.

Image created using DALL-E 3.

A recent article meticulously examines the complex landscape of Japan's text-and-data-mining (TDM) exception and its implications for AI training under copyright law. Despite initial misinterpretations stemming from remarks by then-Minister of Education, Culture, Sports, Science and Technology Keiko Nagaoka, Japan's Agency for Cultural Affairs (ACA) has moved proactively to clear up the confusion by releasing a draft discussion paper on AI and copyright.

Key Takeaways:

  • Clarification of Japan's TDM Exception: Article 30(4) of Japan's Copyright Law permits unlicensed use of copyrighted data for testing, analysis, or processing, provided it doesn't "unreasonably prejudice" copyright owners. However, it carefully distinguishes between benign data analysis and uses involving "enjoyment" of the content, where the exception doesn't apply.

  • Interpretation of "Enjoyment": While the specific Japanese term is not directly translatable, it encompasses the user's benefit or perception of the copyrighted content. If the user derives benefit from the data and creates output reflecting the essential characteristics of the copyrighted work, the TDM exception does not apply.

  • Impact on AI Training: Claims of Japan providing a blanket copyright exception for AI training are unfounded. The TDM exception is narrowly defined and excludes uses involving enjoyment of the content. AI developers must exercise caution, particularly regarding data obtained from sites containing pirated content, to avoid liability for infringement.

  • Potential Legislative Clarity: The ACA's draft discussion paper signals Japan's commitment to clarifying its copyright laws and protecting rights-holders. While legislative changes may not be immediate, issuing official guidelines or regulations based on the paper's conclusions could provide further clarity and reaffirm Japan's adherence to international obligations.

Image created using DALL-E 3.

Microsoft is championing the integration of AI as a key strategy in combating robocalls and spam texts, urging the Federal Communications Commission (FCC) to recognize the potential of AI-driven solutions to enhance telecom security. By promoting AI's dual capability to both identify and mitigate fraudulent communications, Microsoft aims to shift regulatory perspectives towards the positive impacts of technological advancement.

Microsoft's dialogue with the FCC included:

  • Advocating for AI in Telecom Security: Microsoft emphasized the importance of AI as a transformative tool in the telecommunications sector, particularly for detecting and mitigating robocalls and spam texts. This advocacy highlights the company's belief in AI's potential to significantly improve consumer protection against fraudulent communications.

  • Presenting Azure Operator Call Protection: During its discussions, Microsoft introduced Azure Operator Call Protection, a new AI-based product that analyzes calls in real time and alerts users to potential scams. The product reflects Microsoft's commitment to using AI to make voice communication services more secure and reliable without imposing additional burdens on consumers. (A toy sketch of this kind of AI screening follows this list.)

  • Addressing Legal and Regulatory Challenges: A key aspect of Microsoft's dialogue with the FCC focused on navigating the legal challenges presented by the Wiretap Act, which currently restricts unauthorized access to the content of communications. Microsoft advocated for regulatory adjustments or clarifications that would allow AI tools to screen and block harmful communications legally.

  • Emphasizing AI's Role Against Fraud: Microsoft underscored AI's dual functionality in the context of robocalls—acknowledging the risk of its misuse for creating more sophisticated scams while strongly advocating for its beneficial use in identifying and blocking such fraudulent activities. This balanced view aims to encourage the FCC to consider the positive applications of AI technology in regulatory decisions.

  • Requesting Regulatory Support and Clarification: A significant part of the dialogue involved Microsoft's request for the FCC to classify AI-driven fraud detection efforts as a "necessary incident" to the provision of communication services. This classification would help clear legal hurdles related to the Wiretap Act, enabling service providers to deploy AI tools more effectively in their fight against robocalls and ensuring consumer protection.
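Azure Operator Call Protection itself is proprietary, so to make the underlying idea concrete, here's a minimal, invented sketch of what AI screening of message content can look like: a toy scam-text classifier built with scikit-learn. The messages, labels, and threshold are all made up for illustration; this is emphatically not Microsoft's system.

```python
# Toy illustration of AI-based scam screening. NOT Azure Operator Call
# Protection, just a minimal sketch of the underlying idea using
# scikit-learn. All example messages and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set (1 = scam, 0 = legitimate).
messages = [
    "Your package is held at customs, pay a $2 release fee now",
    "URGENT: your bank account is locked, verify your SSN now",
    "You won a free cruise! Call now to claim your prize",
    "Hey, are we still on for lunch tomorrow?",
    "Your dentist appointment is confirmed for Tuesday at 3pm",
    "Running late, be there in 20 minutes",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a classic text-classification baseline.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

# Score an incoming message before it reaches the user.
incoming = "Final notice: your account is locked, verify your SSN to unlock"
scam_probability = classifier.predict_proba([incoming])[0][1]
verdict = "flag as likely scam" if scam_probability >= 0.5 else "let through"
print(f"{verdict} (scam probability {scam_probability:.2f})")
```

Notice that even this toy has to read the content of the message to score it, which is exactly the kind of access the Wiretap Act restricts and why Microsoft wants the FCC's blessing.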

Image created using DALL-E 3.

The U.S. Patent and Trademark Office (USPTO) has recently updated its guidance on inventorship for AI-assisted inventions, clearly stating that AI entities cannot be recognized as inventors. This pivotal guidance draws a significant line between the contributions made by human inventors and the role played by artificial intelligence in the process of innovation, establishing the criteria under which individuals can be acknowledged as inventors in the age of AI-enhanced creativity.

At the core of this guidance are the Pannu factors, drawn from the Federal Circuit's 1998 decision in Pannu v. Iolab Corp., which set out a three-part test for assessing whether a contribution to an invention is significant. These factors are:

  • Conception or Reduction to Practice: A contributor must significantly engage in the invention's conception or its reduction to practice.

  • Quality of Contribution: The contribution's quality must be substantial compared to the entire scope of the invention.

  • Beyond Well-Known Concepts: A contributor's input must go beyond simply explaining well-known concepts or the current state of the art.

Applied to AI-assisted inventions, the Pannu factors make significant human contribution the touchstone: an individual's input in formulating or modifying the AI's output must be substantial for that person to qualify as an inventor. The USPTO outlines specific criteria for inventorship in AI-enhanced innovations, including:

  • Constructing Input Prompts: Crafting specific prompts that guide the AI towards a particular solution.

  • Modification and Experimentation: Making significant contributions by modifying AI-generated outputs or conducting experiments to refine these outputs.

  • Designing Essential Components: Developing crucial components or training the AI system tailored to solve a specific problem.

However, certain actions do not qualify as significant contributions, such as merely presenting problems to an AI system without further intellectual input, or maintaining intellectual domination over the AI's use without directly contributing to the invention's development.

The USPTO also provides illustrative scenarios showing different levels of human interaction with AI in the invention process. One example involves two natural persons using an AI system to create a transaxle for a toy remote-control car: submitting the AI's output unaltered does not qualify for inventorship, while significantly modifying that output, or building a new design from the AI's suggestions, does.

These examples underscore the USPTO's stance that significant inventive contributions—like conducting experiments to alter an original design or utilizing AI to enhance a self-conceptualized design—are indispensable for an individual to be recognized as an inventor.

Image created using DALL-E 3.

In a recent Massachusetts case, Norfolk County Superior Court Judge Brian A. Davis imposed a "restrained" sanction of $2,000 on attorney Steven Marullo for submitting court documents containing fictitious case citations generated by an AI tool in a wrongful-death case related to Sandra Birchmore. Marullo's team, which included two recent law school graduates and an associate, admitted to using the AI tool. Judge Davis called filing the documents without proper review "categorically unacceptable," noting that ignorance of a technology tool is not an acceptable excuse, and warned that similar misconduct in the future would not be met with the same leniency, signaling a stricter stance on verifying AI-generated content in legal filings.

The case of Thaler v. Perlmutter addresses whether AI-generated works are eligible for copyright protection, challenging existing interpretations of the Copyright Act.

  • The U.S. Copyright Office argues against AI eligibility for copyright protection, emphasizing long-standing policies that restrict copyright to human authors.

  • Stephen Thaler's appeal challenges the Copyright Office's decision not to register his AI-generated artwork, "A Recent Entrance to Paradise," arguing the Copyright Act does not expressly limit authorship to humans.

  • The appeal highlights a broader legal and philosophical debate about the nature of creativity and authorship in the age of AI, citing examples where corporations, as non-human entities, have been granted copyright.

  • The Copyright Office's brief points to the Copyright Act's language and structure, which assume an author's humanity through references to life, death, family, and legal capacities, arguing these are nonsensical when applied to machines.

  • Thaler counters by emphasizing the act's purpose to incentivize the creation and dissemination of creative works for the public good, arguing that AI-generated works align with this goal by fostering innovation and expanding the range of creative works.

  • This case presents a significant opportunity for the D.C. Circuit to address the intersection of copyright law and AI, potentially reshaping the legal landscape around AI-generated content.

The D.C. Circuit's upcoming decision in this case will clarify the eligibility of AI-generated works for copyright protection, significantly impacting the intersection of copyright law and artificial intelligence.


$800+ Worth of AI Governance Notes: IAPP’s AI Governance Course Notes, Module 8

My notes for the last and final module of the IAPP’s AI Governance course are here! It’s taken a while to share all my notes. Hopefully, it was worth it and, at the very least, you have an idea of what the course covers and whether or not you want to drop a lot of cash on it. 🤓📚

Module 8: Ongoing AI Issues and Concerns – Awareness of Legal Issues

1. Developing a Coherent Tort Liability Framework for AI

  • Existing Legal Frameworks: Current laws are not fully equipped to handle AI's unique challenges. This includes issues like AI's opacity, autonomous behavior, and unpredictability. There's a need to adapt traditional tort law to accommodate the complexities of AI.

  • EU Initiatives: The EU Commission’s AI Liability Directive aims to make it easier for victims to be compensated for AI-related damages, addressing AI's specific characteristics such as limited predictability and autonomous behavior. It also seeks to prevent companies from avoiding liability for their AI products.

  • U.S. Initiatives: In the U.S., state regulations, particularly concerning autonomous vehicles, set specific safety standards. The FTC urges businesses to be transparent about AI use, ensure fair and sound decision-making, and maintain accountability.

2. AI Models and Data Licensing Issues

  • Intellectual Property Rights: Defining who owns the data in AI models is a key challenge. This includes safeguarding IP rights, particularly when using third-party AI programs.

  • Data Licensing Terms: These terms regulate trade secrets protection, use limitations, confidentiality, and rights assignment in AI model evolutions.

  • Model Licensing: Traditional software licensing exceptions are less applicable to AI due to its evolving nature. Licensees should seek minimum performance metrics and warranties to mitigate risks of underperformance.

  • Licensees’ Role: Licensees should insist on performance metrics, warranties, thorough testing, and validation; a trial period can help assess AI system performance. (A toy example of a contractual performance check follows this list.)
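To make the "minimum performance metrics" point concrete, here's a hypothetical sketch of the kind of acceptance test a licensee might run during a trial period. The vendor model interface, the stub, and the 95% threshold are all invented; a real agreement would define its own metric and measurement procedure.

```python
# Hypothetical acceptance test a licensee might run during a trial period
# to check that a licensed AI model meets a warranted minimum performance
# metric. The model interface, stub, and 0.95 threshold are all invented.

CONTRACTUAL_MIN_ACCURACY = 0.95  # e.g., warranted in the license agreement

def acceptance_test(model, validation_inputs, expected_outputs):
    """Return True if the licensed model meets the warranted accuracy."""
    predictions = [model.predict(x) for x in validation_inputs]
    correct = sum(p == e for p, e in zip(predictions, expected_outputs))
    accuracy = correct / len(expected_outputs)
    print(f"Measured accuracy: {accuracy:.3f} "
          f"(contractual minimum: {CONTRACTUAL_MIN_ACCURACY})")
    return accuracy >= CONTRACTUAL_MIN_ACCURACY

class VendorModelStub:
    """Stand-in for the licensed model's API (invented interface)."""
    def predict(self, x):
        return x % 2  # toy behavior: classify integers by parity

# The licensee controls its own held-out validation set.
model = VendorModelStub()
inputs = list(range(100))
expected = [x % 2 for x in inputs]
print("Acceptance test passed:", acceptance_test(model, inputs, expected))
```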

3. Intellectual Property Rights in the Context of AI

  • Examples and Challenges:

    • Copyrights: Ownership ambiguity for AI-generated works.

    • Trademarks: Issues regarding AI-created logos or slogans.

    • Patents: Recent rulings emphasize that only humans can be named as inventors.

    • Enforcement: Difficulty in detecting AI-related IP rights violations.

  • Data Protection: AI's capability to scrape data poses risks for IP infringement and misuse. The legal stance on scraping for data usage is still unsettled.

Summary

The legal landscape regarding AI is rapidly evolving, with a focus on adapting existing frameworks to manage AI's unique aspects. This includes creating a coherent tort liability framework, addressing IP rights and data licensing challenges specific to AI, and implementing new directives to ensure fair and transparent use of AI technologies. These efforts are crucial for managing the risks and ethical implications of AI development and deployment, balancing innovation with legal accountability.

Module 8: Ongoing AI Issues and Concerns – Awareness of User Concerns

Educating Users About AI Systems

  • AI Literacy: Focus on understanding both technological and human dimensions of AI, including impacts on privacy, agency, and proprietary information.

  • Organizational Steps for AI Education:

    • Understand AI's purpose and impact on existing processes.

    • Collaborate with legal and tech teams to grasp AI limitations and legal aspects.

    • Train employees on AI use, limitations, and security/privacy controls.

    • Example: Samsung's ChatGPT misuse highlights the importance of understanding AI risks and ensuring sensitive information isn't exposed unintentionally.

Upskilling and Reskilling for AI

  • AI's Impact on Employment: AI can replace or augment jobs, enhancing human capabilities in certain sectors.

  • Job Automation: Roles involving repetitive tasks are more prone to automation. Creative and emotionally driven jobs are less likely to be automated.

  • Strategies for Workforce Adaptation:

    • Develop skills that complement AI, like critical thinking and creativity.

    • Encourage learning and continuous skill development.

    • Examples: Upskilling in data analysis for marketing professionals, reskilling in coding for project managers, and enhancing UX design skills for product designers.

Opt-Out Alternatives for AI

  • Human Oversight in AI: Required in some laws for decision-making processes. The extent of human involvement varies based on data types and sensitivity.

  • AI Program Oversight: Best practices and legal requirements often mandate human supervision of AI processes, including regular risk assessments and system monitoring.

  • Bypassing AI Processes:

    • Allowing individuals to request non-AI alternatives. Availability and feasibility vary.

    • Example: U.S. Customs offers alternative identity verification methods for people who decline facial recognition technology.

General Summary

  • AI systems require clear user education to ensure awareness of their functions and limitations, thus preventing misuse and protecting sensitive information.

  • As AI transforms the job landscape, upskilling and reskilling become essential to adapt to new demands and complement AI's capabilities.

  • Providing opt-out alternatives for AI processes is increasingly important, though not always feasible. Where possible, organizations should offer clear and accessible non-AI options.

Organizations need to prioritize AI literacy, workforce adaptation strategies, and potential opt-out alternatives to address user concerns effectively. This approach ensures a responsible and informed use of AI technologies, aligning with evolving legal and ethical standards.

Module 8: Ongoing AI Issues and Concerns – Awareness of AI Auditing and Accountability Issues

Building a Global Profession of Certified Third-Party AI Auditors

  • Challenges: There is no mature AI auditing framework, few precedents to guide AI audits, and internal audits carry potential conflicts of interest.

  • Solutions:

    • Utilize existing frameworks like COBIT, IIA AI Auditing Framework, and COSO ERM Framework.

    • Develop new auditing methods for AI incorporating data, models, outputs, and processes.

    • Promote external audits for independence, as suggested by the EU Digital Services Act.

Leveraging AI Auditing Frameworks

  • COBIT 2019 Framework: Useful for defining audit scope and objectives, identifying risks, and providing assurance over AI initiatives.

  • Other Frameworks: COSO ERM, GAO AI Framework, IIA AI Framework, Singapore PDPC Model AI Governance Framework, and UN Guiding Principles Reporting Framework.

  • Audit Types: Includes technical, empirical, and governance audits, focusing on different aspects like data quality, bias, privacy, and security controls.

AI Audit Standards and Methodology

  • Diverse Methodologies: Varied audit techniques and standards lead to different conclusions based on the audit's purpose.

  • Key Areas: Includes bias audits, ethical development audits, governance, model and data evaluation, and IT general controls. (A toy bias-audit calculation follows this list.)

  • Audit Implementation: Requires clear definitions and precise standards for effective implementation and transparency.
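To make "bias audit" less abstract, here's a toy version of one common calculation: comparing selection rates across groups and flagging any group whose impact ratio falls below the classic four-fifths threshold. The decisions and groups are invented; a real audit would define its groups, outcomes, and thresholds under the applicable law or standard.

```python
# Toy bias-audit calculation: compare selection rates across groups and
# compute each group's impact ratio relative to the most-selected group.
# The data is invented for illustration.
from collections import Counter

# (group, was_selected) pairs from a hypothetical AI screening tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
selected = Counter(group for group, chosen in decisions if chosen)
rates = {group: selected[group] / totals[group] for group in totals}

# The "four-fifths rule" treats an impact ratio below 0.8 as a red flag.
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "  <- below 0.8, investigate" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```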

Enhanced Accountability for AI Systems

  • Mechanisms: Development of tools and mechanisms for AI accountability and trustworthiness.

  • EU AI Act: Mandates prior conformity assessments for high-risk AI systems, with a focus on risk management across the AI system’s lifecycle.

  • Human Review Measures: Advisable for organizations to implement human review measures for AI decision-making.

Automating AI Governance Checks

  • Benefits: Increases efficiency, consistency, accuracy, and scalability in AI deployment and compliance.

  • Tools:

    • AI Verify: A framework with 11 AI ethics principles for testing AI system performance.

    • Model Card Regulatory Check: Automates regulatory compliance checks for AI systems, which is especially useful for SMEs. (See the sketch after this list.)

  • Efficiency: Automation in AI governance ensures quicker and more cost-effective compliance with evolving standards and technology.
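To show the flavor of these tools, here's a minimal sketch of the general idea behind an automated model-card check (the idea only, not Model Card Regulatory Check's actual implementation): programmatically verifying that a model's documentation contains the fields a rule set requires. The required fields and the example card are invented.

```python
# Minimal sketch of an automated model-card compliance check: verify that
# a model's documentation contains the fields a (hypothetical) rule set
# requires. Field names and the example card are invented for illustration.

REQUIRED_FIELDS = [
    "intended_use", "training_data", "evaluation_results",
    "limitations", "human_oversight",
]

model_card = {
    "model_name": "resume-screener-v2",
    "intended_use": "Rank resumes for recruiter review",
    "training_data": "Anonymized applications, 2018-2023",
    "evaluation_results": {"accuracy": 0.91},
    "limitations": "Not validated on non-English resumes",
    # "human_oversight" is missing; the check below should catch it.
}

missing = [field for field in REQUIRED_FIELDS if field not in model_card]
if missing:
    print("Non-compliant. Missing documentation:", ", ".join(missing))
else:
    print("All required documentation fields present.")
```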

General Summary

  • The development of a global profession of certified third-party AI auditors is crucial to address the unique challenges posed by AI systems. This requires leveraging existing frameworks and developing new methodologies tailored to AI's complexities.

  • AI auditing frameworks vary, focusing on different aspects like bias, ethical development, governance, and IT controls. These need to be aligned with clear standards for effective auditing.

  • Enhanced accountability mechanisms are necessary, including conformity assessments and human review measures, to ensure AI systems are trustworthy and comply with legal and ethical standards.

  • Automating AI governance checks is essential for maintaining productivity and adapting to evolving standards and technologies. Tools like AI Verify and Model Card Regulatory Check aid in streamlining the compliance process, making it more efficient and cost-effective.

In conclusion, the legal and ethical landscape of AI is complex and evolving. A cohesive approach to AI auditing and accountability is essential, requiring global collaboration, innovation in auditing practices, and the integration of automated tools to meet the challenges of AI governance effectively.

That’s all for today!

Catch y’all on LinkedIn