- AI RULES
- Final Text of EU AI Act Leaked
PLUS: IAPP's AI Governance Course Notes
Did ya miss me?! I missed you and all the craziness in AI news! Let’s dive in! This week we're covering a range of crucial topics: journalist Luca Bertuzzi's leak of the EU AI Act's final text, the impending establishment of the European Artificial Intelligence Office, and China's ambitious draft guidelines for AI industry standardization. I also share my notes from IAPP's AI Governance course on the characteristics of trustworthy AI systems. Stay informed on these pivotal movements shaping the future of AI and law. ⚖️🤖
On the docket today:
Final Text of EU AI Act Leaked
The EU AI Office is Almost Ready
China Drafts AI Standardized Guidelines
China AI Law Cheat Sheet
PLUS: IAPP’s AI Governance Course Notes
New here? → Subscribe
On January 22, 2024, Luca Bertuzzi, a renowned journalist specializing in European affairs, made a significant move by leaking the final text of the EU Artificial Intelligence Act. His expertise in tracking the progress of this critical legislation has been invaluable in keeping the public informed.
This strategic leak occurred just before the Telecom Working Party, a key technical body of the EU Council, was set to discuss the Act ahead of its expected formal adoption by COREPER on February 2. Bertuzzi has warned that this tight schedule leaves national delegates inadequate time for a comprehensive analysis of the document, forcing them to focus on the most vital articles, possibly at the expense of a holistic understanding.
Bertuzzi reports that France is actively seeking alliances with other EU countries to form a blocking minority. The aim of this diplomatic maneuvering is to delay the COREPER vote, creating an opportunity to propose and negotiate specific amendments to the text. So far, however, France has not assembled such a coalition; the situation should become clearer as member states submit their technical feedback. If France does not secure the desired changes at this stage, it is poised to continue exerting influence over the law's implementation, particularly through secondary legislation. This underscores the law's significant national importance to France.
A special thanks to Bertuzzi for leaking this information. It is a service to public understanding and discourse surrounding a major legislative development in the realm of artificial intelligence within the European Union.
Click here for a consolidated version of the final text.
The imminent establishment of the European Artificial Intelligence Office, as indicated by a draft document obtained by Euractiv, marks a critical development in the EU's approach to AI regulation. This Office is set to play a key role in the enforcement of the AI Act, focusing on overseeing General-Purpose AI (GPAI) systems like OpenAI's GPT-4.
Key aspects include:
Enforcement Role: The AI Office will primarily support national authorities in enforcing AI rules, with a specific focus on GPAI models and systems. It is tasked with developing evaluation methodologies and benchmarks for GPAI models, and monitoring their application.
Investigative Powers: The Office is empowered to investigate infringements of AI rules, collect complaints and alerts, request documents, conduct evaluations, and recommend enforcement measures.
Coordination with EU Legislation: The Office will coordinate the enforcement of the AI Act in conjunction with other EU legislations like the Digital Services Act and Digital Markets Act.
Support for Legislation and Standardization: It will assist in the preparation of secondary legislation for the AI Act, establish regulatory sandboxes, and develop EU-level codes of practice and conduct.
Budgetary Constraints: The Office faces financial limitations, with operational expenses and temporary staff funded through reallocated budgets from the Digital Europe Programme.
This development signals a significant step in the EU's regulatory framework for AI, with the AI Office set to become a central figure in the landscape of AI governance and enforcement.
China's Ministry of Industry and Information Technology (MIIT) has recently released draft guidelines aimed at standardizing the nation's artificial intelligence (AI) industry. This move is part of China's effort to advance AI development and catch up with global leaders like the United States.
Key points of these guidelines include:
National and International Standards: China aims to establish over 50 national and industry-wide AI standards by 2026 and participate in forming over 20 international AI standards. This initiative is part of China's effort to enhance its AI industry and compete globally, particularly in response to developments like OpenAI's ChatGPT.
Scope of Standardization:
The guidelines propose standardization in various areas including AI terminology, architecture, testing, evaluation, data services, chips, sensors, computing devices, centers, system software, and development frameworks.
Special emphasis is placed on key technologies in machine learning, intelligent products and services (such as robotics and autonomous vehicles), and industry applications (including smart manufacturing, smart homes, and smart cities).
Security and governance in AI are also highlighted as crucial areas for standardization.
Industry Adoption and Strategy: The target is for 60% of industry applications and projects to conform to the standardized outcomes. The approach reflects China's strategy to "hyper-standardize" the AI tech stack, influencing the broader tech industry and city infrastructures.
Implications for Trade and Innovation: The standardization aims to streamline domestic trade and IP exchange while using uniform terminology and testing methods across industries. However, there are concerns about potential stifling of innovation and barriers for international businesses, depending on the alignment of China's standards with global standards.
China's Unique Position: China's capability to source a majority of its AI supply chain domestically, its leadership in AI regulation, and the unique requirements of the Chinese language for AI development are seen as factors enabling successful implementation of this hyper-standardization model.
These developments suggest a significant move towards a more structured and unified AI industry in China, with potential global implications for AI technology development, standardization, and regulation.
Value Unlocked: IAPP’s AI Governance Course
I’m finally able to share my notes from Module 2, which covers the characteristics of trustworthy AI systems. 🚀📚
Section 2: Characteristics of Trustworthy AI Systems
Relevance: Understanding what makes an AI system trustworthy is crucial for AI governance professionals. This foundation is key to building an effective AI governance program.
Core Concepts: Trustworthy AI systems are characterized as being human-centric, accountable, and transparent. Grasping these terms in AI and ML contexts is vital for assessing whether an AI system aligns with organizational standards.
Human-Centric AI: Explore the importance of AI systems amplifying human agency and positively impacting the human condition.
Accountable AI: Learn about AI system characteristics like safety, security, resilience, validity, reliability, and fairness.
Transparent AI: Understand the need for AI systems to be comprehensible to various audiences, including technical, legal, and end-users.
Explainable AI: Delve into the necessity of AI systems being able to explain their processes and decisions.
Privacy-Enhanced AI: Examine how AI systems should respect and protect user privacy.
Operational Expectations: Trustworthy AI operates in a legal and fair manner, distinct from untrustworthy AI, which can involve opaque decision-making and unfair outcomes.
Human-Centric Approach: AI should enhance rather than hinder the human experience, amplifying human capabilities.
Accountability: Organizations must take full responsibility for their AI, regardless of the number of contributors involved.
Transparency: AI should be understandable to its intended audience, whether they are technical experts, legal professionals, or general users.
AI Opportunities and Challenges
Opportunities: AI offers advantages such as accuracy in medical assessments, legal predictions, and the ability to process large volumes of diverse data rapidly.
Efficiency and Economics: AI can enhance efficiency and economic outcomes by automating and accelerating tasks while reducing human error and bias.
Challenges: AI may amplify existing biases, create new ones, and produce unfair outcomes.
Ensuring Value and Security
Value Communication: It's important for the intended audience to understand AI's value in supplementing human tasks.
Security and Integrity: AI systems must be secure against reverse engineering and ensure privacy rights.
Operationalizing Trustworthy AI
Integrating into Operating Models: Embedding trustworthy AI principles in organizational models is vital.
Responsible AI Processes: These should be operationalized through a risk management framework addressing privacy, accountability, auditability, robustness, security, transparency, fairness, and promoting human values.
Roles and Responsibilities: Understanding and defining roles in AI usage and governance within the organization is key.
Building Trustworthy AI
Responsible AI Principles: Operationalize principles encompassing ethical and trustworthy AI development and use.
Collective Effort and Cultural Shift: Requires leadership support and a top-down approach.
Governance Structures: Involves establishing governance structures combining diverse roles, supported by technical standards, risk management frameworks, AI playbooks, and guidelines.
Practical Implementation: Bridging the gap between high-level policy and practical implementation.
Building trustworthy AI means operationalizing responsible AI principles that prioritize ethical development and use. This requires collective effort, a cultural shift, and leadership support, combined with governance structures and practical guidelines, to ensure AI systems are responsible, accountable, and beneficial to society.
Meme Of The Week:
That’s all for today!
Catch y’all on LinkedIn
What'd you think of today's email?