
Attorneys Take Heart: AI Isn't Replacing Us Just Yet

BONUS: Unfiltered Notes from IAPP's Artificial Intelligence Governance Course

Yo yo yo,

It’s time to buckle up! This week, we’re dissecting AI's courtroom drama: self-represented litigants' AI-drafted briefs and federal courts' diverse AI policies. We'll also dive into the EU's AI Act impasse and the OECD's new AI definition, both shaping the future of AI legislation. Stay ahead of the legal-tech curve, where AI meets law. Ready for a rollercoaster ride through AI's legal landscape? Let's roll! 🎢👩‍⚖️🤖

On the docket today:

  • Self-Represented Litigants and AI-Generated Briefs: A New Challenge in Federal Courts

  • Federal Courts' Varied Approaches to AI in Filings

  • EU's AI Act Dilemma: Navigating the Complexities of Foundation Model Regulation

  • OECD's Updated AI Definition Set to Influence EU's AI Legislation

  • BONUS: Unfiltered Notes from IAPP's Artificial Intelligence Governance Course

  • Hilarious Lawyer Meme


Image created using DALL-E 3

A novel trend is gaining attention: self-represented litigants turning to AI tools, notably ChatGPT, to draft sections of their legal briefs. This practice has surfaced in multiple federal court cases, revealing a concerning pattern of briefs filled with references to non-existent cases and fabricated quotations. This phenomenon is not isolated, as evidenced by at least six federal cases, indicating a broader issue.

In one key instance, a federal judge noted that a self-represented plaintiff had cited numerous fake judicial opinions. Such incidents have several detrimental effects, including wasting valuable court and opposing-party resources and harming the legal system's credibility.

The judge in this case issued a stern warning about adhering to legal standards, emphasizing the serious consequences of submitting briefs with fraudulent case citations. Future incidents of this nature could lead to significant sanctions, ranging from case dismissal to filing restrictions.

This issue is prevalent across various federal courts, including district and appellate levels. Litigants have been found to use AI-generated content that inaccurately represents case facts or cites entirely fabricated cases. These developments are sparking a critical dialogue about the ethical use of AI in legal document preparation, especially for those without formal legal training.

Looks like the role of human lawyers remains indispensable in the legal profession…for now. 

Image created using DALL-E 3

There is a significant shift as federal courts across the United States address the integration of artificial intelligence tools in court filings. This development follows a notable case, Mata v. Avianca, Inc., where a lawyer faced sanctions for submitting a brief with AI-generated, non-existent cases. A thorough review of federal court websites shows that over 14 courts have issued guidance on AI tool usage in litigation, indicating a widespread judicial response to this emerging issue.

Judge Brantley Starr of the Northern District of Texas was the first to issue a standing order on AI, setting a precedent for other courts. His order demands either a certification of non-use or human verification of AI-generated content, citing concerns over AI inaccuracies and biases. This move has sparked a trend, with judges like Stephen Vaden and Gabriel Fuentes issuing similar orders, each adding their own stipulations focusing on confidentiality and accuracy, respectively.

The diversity in AI-related court orders is striking. Some judges have mirrored Judge Starr’s approach, while others have introduced additional requirements like the nondisclosure of confidential information or specifying AI-drafted portions in filings. Notably, some courts have outright prohibited AI use in filings, with specific exceptions for legal and internet search engines.

The variety of these standing orders underlines that the judicial regulation of AI in litigation is not just a temporary trend but a significant, evolving aspect of legal practice. This dynamic landscape requires lawyers and self-represented litigants to be vigilant and informed about the latest local rules and individual judges' preferences regarding AI use in court proceedings.

Image created using DALL-E 3

The European Union is at a pivotal juncture with its groundbreaking AI Act, a comprehensive bill crafted to regulate the realm of artificial intelligence. This legislation, emphasizing a risk-based approach, is now in the crucial final stage of its legislative journey. However, recent developments have cast uncertainty over its future. A contentious issue arose during a key technical meeting, stemming from debates over the regulation of advanced foundation models, which are akin to OpenAI's renowned GPT-4. This disagreement has placed the entire bill in jeopardy.

Central to this impasse is the proposed tiered regulatory framework. This approach, designed to apply stricter guidelines to more potent AI systems, mirrors principles found in the Digital Markets Act (DMA) and Digital Services Act (DSA). Despite initial consensus, this strategy has encountered resistance from leading EU member states, including France, Germany, and Italy. Their primary concern lies in the potential stifling of innovation, fearing that stringent regulations could disadvantage European AI enterprises, such as France's Mistral and Germany's Aleph Alpha, especially against their global counterparts in the U.S. and China.

This deadlock has led to significant delays and the premature conclusion of a vital meeting. EU legislators underscore that including foundation model regulation is necessary for the AI Act's successful passage. With the Spanish presidency at the helm of these negotiations and the impending transition to the Belgian presidency, there's an urgency to find common ground. Failure to reach a consensus on the AI Act risks diminishing the EU's role as a trailblazer in setting international standards for AI regulation. This scenario is increasingly concerning as other global players like the U.S., UK, and China accelerate their AI policy frameworks.

Upcoming meetings are scheduled to address these hurdles, aiming to find a resolution that secures the future of this landmark legislation.

Image created using DALL-E 3

The Organization for Economic Co-operation and Development (OECD) has recently revised its definition of Artificial Intelligence, a move poised to significantly impact the European Union's upcoming AI regulation. Established as an international economic collaboration platform, the OECD's updated AI definition is part of its ongoing efforts to guide trustworthy AI policies globally.

The new definition characterizes an AI system as a machine-based entity that infers outputs from its inputs, influencing both physical and virtual realms. This broad definition accommodates the varying levels of autonomy and adaptiveness in AI systems, especially after deployment. Crucially, it removes the necessity for human-defined objectives, acknowledging AI's evolving capabilities.

This shift in the AI definition by the OECD is expected to align closely with the EU's Artificial Intelligence Act, a pioneering legislation aimed at regulating AI based on potential risks. The AI Act's drafters have agreed to incorporate the OECD's definition, ensuring semantic consistency and a unified approach to AI terminology internationally.

The importance of this updated definition extends to the AI Act's focus areas, including foundation models and general-purpose AI. The Act will address the alignment between AI objectives and outputs, highlighting the potential for unanticipated consequences in AI systems.

Spanish presidency proposals in the AI Act negotiations also reflect this focus, emphasizing the need for regulations that adapt to AI's evolving nature. The new definition's emphasis on adaptiveness and learning capabilities of AI systems, particularly those based on machine learning techniques, is a critical addition.

As the OECD's AI definition becomes official, its incorporation into the EU's AI legislation is highly anticipated.

Unfiltered Notes from IAPP’s AI Governance Course

Want to tap into expert AI governance insights without the hefty price tag? As an attorney, I know how valuable — yet inaccessible — specialized technology education can be. So I'm excited to share with you raw notes straight from my recent IAPP AI Governance course.

Over the next few weeks, I'll be excerpting key learnings from this almost $1,000 program in my newsletter. Consider it your attorney-focused crash course in AI accountability, without the crushing student debt. Stay tuned to build your understanding of how to ethically and responsibly shape these transformative technologies. No need to pay exorbitant fees for AI fluency — I've got you covered.

Here are the unfiltered notes from Module 1: OECD Framework for AI Assessment & Use Cases and Benefits of AI.

Module 1: OECD Framework for AI Assessment


  • The Organization for Economic Co-operation and Development (OECD) has developed a framework to assist in classifying AI systems and evaluating their associated risks.

  • This framework comprises five main dimensions to consider when assessing AI systems.

The Five Dimensions:

  • People and Planet:

    • Examines the impact of AI on human rights, privacy, the environment, and society at large.

    • Focuses on how AI systems affect individuals and groups, highlighting the importance of ethical considerations.

  • Economic Context:

    • Analyzes AI systems within their economic and sectoral context.

    • Looks at the operational sector (e.g., finance, healthcare, education), the criticality of the AI system's function, deployment scale, and the impact of its operations.

  • Data and Input:

    • Considers the types of data used and the methods of collection, including whether expert human input was incorporated.

    • Evaluates the structure, format, and source of data, and how this influences the functioning of the AI system.

  • AI Model:

    • Focuses on the technical aspects of the AI model, such as the construction, usage, and evolution over time.

    • Assesses the technological maturity of the AI system and its effectiveness based on the amount of data and testing it has undergone.

  • Tasks and Output:

    • Addresses the specific tasks the AI system is designed to perform, the outputs it generates, and the resulting actions.

    • Includes the examination of combined tasks and actions, as well as evaluation methods used to measure system performance.
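For readers who think in code, the five dimensions above can be sketched as a simple record type that an assessment team might fill in per AI system. This is purely illustrative; the OECD publishes the framework as guidance, not a schema, and every name and field here is my own assumption:

```python
from dataclasses import dataclass

# Illustrative sketch only: the OECD framework is guidance, not a data schema.
# Class and field names are my own, mapped to the five dimensions above.

@dataclass
class OECDAssessment:
    people_and_planet: list[str]   # human rights, privacy, environmental impacts
    economic_context: list[str]    # sector, criticality, deployment scale
    data_and_input: list[str]      # data types, collection methods, expert input
    ai_model: list[str]            # construction, maturity, testing
    tasks_and_output: list[str]    # tasks performed, outputs, resulting actions

    def summary(self) -> dict[str, int]:
        """Count the considerations recorded under each dimension."""
        return {
            "People and Planet": len(self.people_and_planet),
            "Economic Context": len(self.economic_context),
            "Data and Input": len(self.data_and_input),
            "AI Model": len(self.ai_model),
            "Tasks and Output": len(self.tasks_and_output),
        }

# Hypothetical example: assessing a retail recommendation engine
assessment = OECDAssessment(
    people_and_planet=["consumer privacy"],
    economic_context=["retail sector", "large-scale deployment"],
    data_and_input=["purchase history", "browsing data"],
    ai_model=["collaborative filtering; mature, well-tested"],
    tasks_and_output=["product recommendations"],
)
print(assessment.summary())
```

The point of the structure is the checklist discipline: an assessment isn't complete until something meaningful is recorded under all five dimensions.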

Importance in AI Governance:

  • Understanding and applying this framework is critical for the responsible development and deployment of AI systems.

  • It emphasizes the need for a holistic view of AI technology that considers ethical, societal, and economic impacts.

  • Governing bodies and organizations can use this framework to ensure that AI systems are developed and used in a manner that aligns with societal values and norms.

Module 1: Use Cases and Benefits of AI

AI Use Cases:

  • Recognition:

    • Involves image and speech recognition technologies.

    • Used in facial recognition systems, retail product matching, quality control in manufacturing, and plagiarism detection in education.

  • Detection and Forecasting:

    • AI aids in detecting fraudulent activities in credit card transactions and government services.

    • Useful in event detection in videos, such as identifying sports highlights.

    • Employed in cyber event detection and system management.

    • Enhances forecasting in sales, demand prediction, and weather modeling.

  • Personalization and Interaction Support:

    • Creates unique customer profiles for personalized experiences on websites and platforms.

    • Improves retail sales through customized user interactions.

    • Virtual assistants and chatbots provide transaction support and customer service, increasingly utilized in both private and public sectors.

  • Goal-Driven Optimization:

    • Optimizes complex problems and processes, like supply chain management.

    • Improves efficiency in route planning for transportation and logistics.

  • Recommendation:

    • Powers recommendation engines for products and content based on predictive analytics.

    • Supports decision-making in healthcare, such as medical diagnoses, and in government services like disability case adjudication.

Benefits of AI:

  • AI systems enhance efficiency, accuracy, and personalization across various applications, leading to improved customer experiences and operational effectiveness.

  • They provide critical support in decision-making processes by leveraging vast amounts of data and predictive analytics.

  • AI's capability to optimize processes can lead to cost savings and increased productivity in various industries.

Meme Of The Week:


That’s all for today!

Catch y’all on LinkedIn

What'd you think of today's email?
