
Beijing Court Grants Copyright to AI-Generated Work

PLUS: EU AI Act Cheat Sheet & IAPP's AI Governance Course Notes

🚀 Welcome! This week, we're unpacking a whirlwind of AI advancements and legal twists. Dive into Beijing's pivotal AI copyright ruling and the EU's latest AI Act agreement. Plus, discover the curious case of ChatGPT's seasonal performance shift and peek into IAPP's AI governance wisdom. Get ready for a tech and law crossover! 🌍🔍⚖️

On the docket today:

  • Beijing Court Grants Copyright to AI-Generated Work

  • EU AI Act: Provisional Agreement Reached

  • Resource: EU AI Act Cheat Sheet

  • Resource: European Commission Publishes Q&A on the AI Act

  • The Regulation Conundrum of Open-Source AI

  • ChatGPT Slacks Off in December

  • PLUS: IAPP's AI Governance Course Notes

  • Hilarious "Meme"

  New here? → Subscribe

Image created using DALL-E 3

Recently, the Beijing Internet Court in China made a groundbreaking ruling regarding the copyright of AI-generated content. The court decided that an AI-generated image, titled "Spring Breeze Brings Tenderness," created using Stable Diffusion, was protected under copyright laws. This ruling came after the image's creator, Mr. Li, sued a blogger, Ms. Liu, for using the image without permission and altering it. The court awarded Mr. Li CN¥500 in damages.

Key aspects of the ruling included:

  1. Copyright of AI-Generated Content: The court found that the AI-generated picture met the criteria for copyright protection under Chinese law, including originality and being a work of art. The court emphasized Mr. Li's significant intellectual input and aesthetic choices in creating the image, rather than attributing authorship to the AI service or its developers.

  2. Human Involvement Over Technological Aspects: The court focused on the level of human involvement in creating the AI-generated work. It acknowledged Mr. Li's intellectual investment in designing the presentation, selecting prompt words, setting parameters, and his involvement in the selection process, which reflected his personalized expression.

  3. Originality and Intellectual Achievement: The court ruled that the AI-generated work demonstrated originality as defined by Chinese law. It was seen as an "intellectual achievement" reflecting the author's aesthetic choice and personalized expression, rather than being a mere "mechanical intellectual achievement."

The Beijing Internet Court's decision marks a significant departure from the stance taken by the US Copyright Office, which has generally not granted copyright protection to AI-generated works. This decision, however, does not set a global precedent as China's civil law system doesn't operate on a system of precedent like common law systems. Despite this, the ruling could indicate a trend, especially since the Supreme People's Court of China has previously selected a similar decision as a model case.

Internationally, the ruling is expected to have significant implications. It contrasts with the strict originality standards in the US and could influence future AI copyright debates worldwide. For instance, in the UK and Europe, the argument that human actions involved in creating AI-generated works could meet the threshold for copyright protection is compatible with local originality requirements, which emphasize "intellectual creation."

While this ruling from China is not globally binding, it presents a notable example of how AI-generated works might be treated under copyright laws in different jurisdictions, potentially paving the way for more such cases in the future.

EU AI Act: Provisional Agreement Reached

Image created using DALL-E 3

The European Union has reached a political agreement on the EU AI Act, a landmark bill to comprehensively regulate artificial intelligence. The agreement followed intense negotiations, resolving key issues like the regulation of foundation models and the use of facial recognition by law enforcement.

Key Milestones:

  • Political agreement achieved after months of trilogue negotiations that began in June, setting the structure and outline of the Act.

  • The European Commission initially proposed the EU AI Act in April 2021.

  • Formal adoption expected before the European Parliament elections in June 2024, with a subsequent grace period for compliance.

Highlights of the AI Act:

  • A risk-based system to regulate AI: bans on "unacceptable risk" systems, restrictions on "high risk" systems, and minimal regulation for lower-risk systems (sketched as a small lookup table after this list).

  • Restrictions on law enforcement's use of remote biometric identification, with exemptions for safety and national security.

  • Prohibition of certain AI applications, including manipulative techniques and systems exploiting vulnerabilities.

  • Requirements for transparency and fundamental rights impact assessments.

  • Regulation of "general purpose AI" systems like ChatGPT, with a focus on training data and energy use transparency.

  • Fines for non-compliance up to 7% of global annual turnover.
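For a programmer's-eye view, the tiered structure above can be boiled down to a small lookup table. This is an illustrative sketch based only on the summary in this newsletter, not on the Act's final text; the example systems and obligations are paraphrased assumptions, not legal language.

```python
# Illustrative sketch of the AI Act's risk tiers (not legal text).
RISK_TIERS = {
    "unacceptable": {
        "treatment": "banned outright",
        "examples": ["manipulative techniques", "systems exploiting vulnerabilities"],
    },
    "high": {
        "treatment": "allowed with restrictions, transparency duties, "
                     "and fundamental rights impact assessments",
        "examples": ["remote biometric identification (narrow exemptions)"],
    },
    "lower": {
        "treatment": "minimal regulation",
        "examples": ["most everyday AI applications"],
    },
}

MAX_FINE_RATE = 0.07  # fines of up to 7% of global annual turnover


def max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a non-compliance fine under the 7% rule."""
    return MAX_FINE_RATE * global_annual_turnover_eur


print(f"Ceiling for a €10B-turnover firm: €{max_fine(10e9):,.0f}")  # €700,000,000
```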

Technical and Regulatory Challenges:

  • The Act's effectiveness hinges on precise definitions and practical benchmarks for terms like "relevant," "representative," and "error-free" data sets.

  • Concerns about vague terms in Article 10(3) of the 2021 draft version and other critical sections.

  • Stanford University researchers propose proportional disclosure requirements for model providers, with exemptions for low-resource entities.

Responses from Different Stakeholders:

  • Civil society organization Algorithm Watch acknowledges key safeguards but points out loopholes and exceptions.

  • DIGITALEUROPE raises concerns about the focus shift from a risk-based approach and the impact on SMEs.

  • European DIGITAL SME Alliance supports tiered regulation to balance safety and innovation, advocating for regulation of large providers and protection of smaller entities.

Next Steps:

  • Negotiation of "Level 2" legislation to detail the technical aspects of the Act (4 technical meetings will take place this week and up to 5 meetings in January 2024).

  • Continued discussions on the practical implementation of the Act, focusing on the "how," "when," and "where."

  • The final text and specific regulations are yet to be finalized, with ongoing technical meetings and COREPER approval expected at the end of January.

Conclusion:

  • The EU AI Act is a significant advancement in global AI governance, prioritizing fundamental rights and setting a benchmark for future legislation.

  • The Act's success will depend on the detailed discussions around the Level 2 legislation and the practical application of its provisions.

Resource: EU AI Act Cheat Sheet

Note: click the headline to download the PDF.

Resource: European Commission Publishes Q&A on the AI Act

The European Commission just issued comprehensive Q&As on the AI Act, updated following the political agreement reached last week. Note: click the headline to download the PDF.

Here is a list of the questions:

  1. Why do we need to regulate the use of Artificial Intelligence?

  2. Which risks will the new AI rules address?

  3. To whom does the AI Act apply?

  4. What are the risk categories?

  5. How do I know whether an AI system is high-risk?

  6. What are the obligations for providers of high-risk AI systems?

  7. What are examples for high-risk use cases as defined in Annex III?

  8. How are general-purpose AI models being regulated?

  9. Why is 10^25 FLOPs an appropriate threshold for GPAI with systemic risks?

  10. Is the AI Act future-proof?

  11. How does the AI Act regulate biometric identification?

  12. Why are particular rules needed for remote biometric identification?

  13. How do the rules protect fundamental rights?

  14. What is a fundamental rights impact assessment? Who has to conduct such an assessment, and when?

  15. How does this regulation address racial and gender bias in AI?

  16. When will the AI Act be fully applicable?

  17. How will the AI Act be enforced?

  18. Why is a European Artificial Intelligence Board needed and what will it do?

  19. What are the tasks of the European AI Office?

  20. What is the difference between the AI Board, AI Office, Advisory Forum and Scientific Panel of independent experts?

  21. What are the penalties for infringement?

  22. What can individuals do that are affected by a rule violation?

  23. How do the voluntary codes of conduct for high-risk AI systems work?

  24. How do the codes of practice for general purpose AI models work?

  25. Does the AI Act contain provisions regarding environmental protection and sustainability?

  26. How can the new rules support innovation?

  27. Besides the AI Act, how will the EU facilitate and support innovation in AI?

  28. What is the international dimension of the EU's approach?

The Regulation Conundrum of Open-Source AI

Image created using DALL-E 3

The European Union's AI Act has recently achieved a pivotal milestone. The European Commission, Council, and Parliament have harmonized their viewpoints, offering clarity on how closed-source AI systems will be regulated. However, this development also brings into focus the notable absence of stringent regulations for free and open-source AI systems.

Note: According to Luca Bertuzzi, tech journalist for Euractiv, open-source AI systems that fall into high-risk categories or banned practices are still subject to regulations.

Understanding Open-Source vs. Closed-Source AI

Before we dive into the inherent risks of open-source AI, let's first define what we mean by closed-source and open-source AI (a short code sketch of the difference follows these definitions):

  1. Closed-Source AI: These are systems like OpenAI's ChatGPT, where the software is intricately developed, maintained, and controlled by its creators. Interaction with these systems typically occurs through a web interface or an API, ensuring that the core software remains confidential and secure.

  2. Open-Source AI: In contrast, open-source AI systems, such as Meta's Llama 2, offer public access to their source code and model weights. This level of transparency enables anyone to access, modify, and redistribute the AI software, which is both a strength and a vulnerability.
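To make the distinction concrete, here's a minimal Python sketch of the two access models. It assumes the `openai` and `transformers` packages are installed, an OpenAI API key in the environment, and approved access to the gated Llama 2 repo on Hugging Face; the prompts are illustrative.

```python
# Sketch: closed-source vs. open-source access to an LLM.

# 1) Closed-source (e.g., ChatGPT): you call a hosted API; the weights
#    stay on the vendor's servers, where usage policies are enforced.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the EU AI Act in one sentence."}],
)
print(reply.choices[0].message.content)

# 2) Open-source (e.g., Llama 2): you download the weights and run them
#    locally. Nothing technically prevents modifying the model, fine-tuning
#    it, or stripping safety behavior -- which is the regulatory concern.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"  # gated repo; requires approval
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Summarize the EU AI Act in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The asymmetry is the point: the API call on the closed-source side can be monitored, rate-limited, and revoked by the provider; the locally held weights on the open-source side cannot. The risk list below turns on exactly that.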

The Risks of Open-Source AI

Despite the advantages of transparency, open-source AI models come with a plethora of risks, often overlooked in the push for innovation and competitiveness:

  • Lack of Misuse and Bias Monitoring: There is no systematic way to track or correct misuse and bias in these systems.

  • Removal of Safety Features: Essential safety features can be easily disabled.

  • Fine-Tuning for Abuse: There's a risk of these models being fine-tuned to enhance abusive use cases.

  • Unrestricted Content Production: Without rate limits, there's potential for unchecked content generation.

  • Irreversible Security Vulnerabilities: Once a model's weights are public, newly discovered vulnerabilities cannot be recalled or patched in copies that have already been downloaded.

  • Surveillance and Profiling Risks: These models can be misused for surveillance and profiling purposes.

  • Attacks on Closed AI Systems: Open-source AI can be used to attack or undermine closed AI systems.

  • Watermark Removal: The ability to remove watermarks from content poses a threat to content authenticity.

  • Design of Dangerous Materials: They could be used to design harmful substances or systems.

A Critical Perspective on AI Regulation

The EU AI Act's approach to regulating open-source AI (or rather, the lack of it) raises critical questions. It seems to prioritize innovation and market competitiveness over stringent regulatory oversight. This approach often receives accolades in tech circles, but it also warrants a more scrutinizing look. When tech enthusiasts celebrate regulatory gaps, it's a signal for us to examine the potential implications more closely.

Further Exploration with David Evan Harris

For those interested in delving deeper into the risks, harms, and potential regulatory frameworks for open-source AI, I recommend reading the work of David Evan Harris, Chancellor's Public Scholar at UC Berkeley. His insights offer valuable perspectives on navigating this complex and evolving landscape of AI regulation.

ChatGPT Slacks Off in December

Image created using DALL-E 3.

Noticed ChatGPT slowing down this month? There's an interesting theory behind it.

Research suggests GPT-4 performs better when it "thinks" it's May rather than December. Rob Lynch's experiment showed that GPT-4 wrote more code when told it was May than when told it was December, with a statistically significant difference in output length. This 'AI Winter Break Hypothesis' implies GPT-4's performance may vary based on the perceived season.

In other words, it "learned" to do less work during the holidays!

Try changing the date in your prompts and see if you can boost its productivity!
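If you want to try it yourself, here's a minimal sketch of a Lynch-style comparison using the OpenAI Python SDK. The model name, dates, and task are illustrative assumptions, and a single pair of runs proves nothing: Lynch sampled many completions per date and ran a significance test.

```python
# Sketch: does a "May" system date yield longer completions than "December"?
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Write a Python function that parses a CSV file into a list of dicts."


def completion_length(date: str) -> int:
    """Return the length of a completion generated under a given system date."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"The current date is {date}."},
            {"role": "user", "content": TASK},
        ],
    )
    return len(response.choices[0].message.content)


print("May:     ", completion_length("2023-05-15"), "chars")
print("December:", completion_length("2023-12-15"), "chars")
```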

Unfiltered Notes of IAPPā€™s AI Governance Course

I'm dishing out my somewhat organized notes from the IAPP's AI Governance course at no cost to you! It's all value, baby!!! 🚀📚

This is the last of Module 1! Next week, we can dive into Module 2 (and start with risks and harms posed by AI).

Module 1: AI Technology Stack - Infrastructure

Compute Infrastructure: Overview

  • Central Role in AI: The AI technology stack, particularly the compute infrastructure, plays a pivotal role in the development and advancement of AI technologies.

  • Algorithmic Innovation: Recent progress in AI is significantly attributed to algorithmic advances, which have been propelled by enhanced computing capabilities.

Key Infrastructure Components

  • Computation and Processing:

    • Central Processing Unit (CPU): Traditional workhorse for computing tasks, with recent improvements in power and processing speed.

    • Graphical Processing Unit (GPU): Specialized chips providing better performance for AI tasks, crucial for offloading work from CPUs and handling complex algorithmic processes (see the timing sketch after this list).

    • Serverless Computing: A cloud-computing model allowing code to run on various hardware, not tied to a specific server, enabling scalability and loose coupling.

  • Storage:

    • Storing massive amounts of both structured and unstructured data is essential for training AI models, requiring robust and scalable storage solutions.

  • Network:

    • High-speed networking is critical, especially for distributed AI applications and data-intensive tasks, to maintain efficient communication between different parts of the AI system.

  • High-Performance Computing (HPC):

    • Involves clustered systems with specialized chipsets and networking to handle computationally intensive tasks.

  • Quantum Computing:

    • Represents an advanced form of computation built on quantum bits (qubits), which can represent multiple states simultaneously, offering potential breakthroughs in AI capabilities.

  • Trusted Execution Environments (TEEs):

    • Secure areas of a processor ensuring code confidentiality and integrity, providing a privacy-oriented framework for AI processing.
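To ground the CPU-versus-GPU point above, here's a minimal timing sketch in Python. It assumes PyTorch is installed; if no CUDA GPU is present, it simply skips the GPU measurement. The matrix size is an arbitrary illustrative choice.

```python
# Sketch: time a large matrix multiplication (the core operation of neural
# networks) on CPU and, if available, on a CUDA GPU.
import time

import torch


def time_matmul(device: torch.device, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel
    return time.perf_counter() - start


print(f"CPU: {time_matmul(torch.device('cpu')):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul(torch.device('cuda')):.3f} s")
else:
    print("No CUDA GPU available; skipping GPU timing.")
```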

Infrastructure Challenges and Advances

  • Data Intensity: The burgeoning amount of data, including the rise of phenotypic and genomic data, demands powerful computational infrastructure.

  • Matching Hardware to AI Models: Essential for optimal AI performance; requires alignment of processing power, memory, and other hardware specifications to the needs of the AI model.

  • Scalability: Serverless architectures facilitate the running of multiple instances of AI applications, addressing the scalability requirements of AI systems.

  • Isolation for Performance: HPC and quantum computing offer isolated environments for specialized processing, supporting the rigorous demands of AI computations.

Considerations for AI Planning

  • Scalability and Flexibility: Ensuring the infrastructure can adapt to the changing demands of AI workloads.

  • Specialized Hardware: Recognizing the need for GPUs and other specialized processors that can handle specific AI tasks more efficiently than general-purpose CPUs.

  • Security and Privacy: Accounting for privacy concerns by incorporating TEEs and other security measures in the early stages of AI development.

Conclusion

  • Interconnectivity: The interplay between computation, storage, and networking is fundamental to the AI technology stack.

  • Impact of Innovation: Infrastructure innovation has been and will continue to be a driving force behind the advancement of AI technologies.

Module 1: History of AI and Evolution of Data Science

Introduction

  • Context: Understanding the history of AI and data science is crucial to grasp their current state and future potential.

  • Key Elements: The journey involves key events, technological advancements, and societal shifts.

History of AI

  • Dartmouth Conference (1956):

    • Birth of AI: Marked the formal beginning of AI as a distinct field.

    • Pre-conference Scenario: Various research efforts in psychology, computer science, and linguistics, but without a unified direction.

    • Key Figures: John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon.

    • Proposal: Exploring if every aspect of learning or intelligence could be precisely described for machine simulation.

    • Outcome: Adoption of the term "artificial intelligence" and establishment of a collective mission.

  • AI Summers and Winters:

    • First AI Summer (1950s-1970s): Period of optimism and funding, marked by the creation of LISP and development of ELIZA.

    • First AI Winter (1970s-1980s): Skepticism and funding cuts due to unmet lofty AI promises.

    • Second AI Summer: Revived interest with expert systems and the Japanese Fifth Generation Computer Systems project.

    • Second AI Winter (1980s-1990s): Decline in interest due to high costs and geopolitical changes.

    • Renaissance and Big Data Era (1990s-Present): Revival with IBM's Deep Blue victory and the data explosion from the internet.

Evolution of Data Science

  • Early Foundations (1960s-1980s):

    • Initial Concept: Term "data science" proposed by Peter Naur.

    • Early Data Handling: Mostly manual, with early concepts of data mining and analytics.

  • Age of Databases (1980s-1990s):

    • Advancements: Introduction of RDBMS and SQL.

    • Impact: Transformed business data management.

  • Internet Era (1990s-2000s):

    • Data Explosion: Substantial increase in data volume due to the internet.

    • Big Data Emergence: Recognition of the exponential growth of data.

  • Rise of Data Science (2000s-Present):

    • Increased Relevance: Growing importance of data-driven decision-making.

    • Technological Boost: Advancements like Hadoop enhancing data processing.

Modern Drivers of AI and Data Science

  • Cloud Computing: Offers scalable computing resources, driving AI and data science development.

  • Mobile Tech and Social Media: Data explosion providing rich learning resources for AI models.

  • Internet of Things (IoT): Massive data generation aiding AI model training.

  • Privacy Technologies and Blockchain: Addressing data security and privacy in AI and data science.

  • Computer Vision, AR/VR, and Metaverse: Emerging technologies shaping the future of AI and data science.

    • Computer Vision: Enabling machines to process visual data.

    • AR/VR: Redefining digital interaction; applications in various fields.

    • Metaverse: Vision of a shared virtual space; still evolving.

Summary

  • Learning from History: Understanding the development and challenges faced by these technologies is key to envisioning their future.

  • Brain-Computer Interface (BCI): Represents another frontier in AI-driven human-machine interaction.

  • Future Potential: The history provides a basis to imagine the future possibilities in AI and data science.

"Meme" Of The Week:

Not really a meme but still funny. Thierry Breton, the European Commissioner for the Internal Market, got roasted on social media because he called the EU a continent.

Thatā€™s all for today!

Catch y'all on LinkedIn

What'd you think of today's email?
