
NDA Negotiated Entirely by AI with No Human Involvement

PLUS: Unfiltered Notes from IAPP's Artificial Intelligence Governance Course

What up, fellow attorneys,

It's time to brace yourselves for contract negotiations between dueling AIs, copyright suits targeting bot training data, and facial recognition slapdowns. The EU's landmark AI Act now resembles a tug-of-war between security and freedom. Meanwhile, nations eternally competing for tech supremacy awkwardly bonded at the UK's Bletchley Summit to curb AI's existential risks. But enough preamble: let's scrutinize the legalquakes rattling the AI landscape before its silicon surpasses our carbon.

On the docket today:

  • Legal Contract Negotiated Entirely by AI with No Human Involvement

  • OpenAI to Handle Copyright Infringement Lawsuits for Paying ChatGPT Users

  • EU Parliament and Member States Grapple Over Facial Recognition in AI Law

  • International AI Powers Sign Bletchley Declaration at UK Summit

  • Unfiltered Notes from IAPP's Artificial Intelligence Governance Course

  • Hilarious Lawyer Meme


Two Robots Drafting a Contract

Image created using DALL-E 3

On November 7, 2023, UK AI company Luminance publicly demonstrated the world’s first fully autonomous legal contract negotiation using artificial intelligence.

In a live showcase at Luminance’s London headquarters, the company’s proprietary AI technology called “Autopilot” successfully negotiated a non-disclosure agreement entirely between two AI systems without any human involvement.

Powered by Luminance’s advanced legal large language model trained on over 150 million legal documents, Autopilot can independently analyze contract text, identify risky clauses, suggest revisions aligned with company policies, negotiate changes with another AI, log all alterations, and finalize agreements in minutes.
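Luminance hasn't published Autopilot's internals, but the review-revise-log-finalize loop described above is easy to picture in miniature. The toy sketch below is purely illustrative: the propose_revision stub, the playbook dictionaries, and the sample clause are all hypothetical stand-ins, not Luminance's actual code or API.

```python
# Toy AI-to-AI negotiation loop: NOT Luminance's implementation.
# propose_revision() stands in for a hypothetical LLM call that reviews a
# clause against one side's negotiation playbook.

def propose_revision(clause: str, playbook: dict) -> tuple[str, bool]:
    """Hypothetical stub: return (possibly revised clause, acceptable?)."""
    limit = playbook.get("max_term_years")
    if limit is not None and "term: 6 years" in clause:   # demo rule only
        return clause.replace("6 years", f"{limit} years"), False
    return clause, True

def negotiate(clause: str, side_a: dict, side_b: dict, max_rounds: int = 10):
    """Alternate revisions between two 'AI' sides, logging every change."""
    log = []                                    # every alteration is logged
    for round_no in range(max_rounds):
        clause, a_ok = propose_revision(clause, side_a)
        clause, b_ok = propose_revision(clause, side_b)
        log.append((round_no, clause))
        if a_ok and b_ok:                       # both sides accept: finalize
            return clause, log
    raise RuntimeError("no agreement; escalate to human lawyers")

final, history = negotiate(
    "Confidentiality term: 6 years",
    side_a={"max_term_years": 3},               # one party's playbook
    side_b={"max_term_years": 5},               # counterparty's playbook
)
print(final)   # -> "Confidentiality term: 3 years" after two rounds
```

The real system presumably does this with LLM calls and far richer playbooks; the point is just the shape of the loop: each side reviews, proposes revisions, logs the change, and the agreement finalizes once both accept.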

This groundbreaking automation of routine legal work such as non-disclosure agreements promises enormous time savings for legal professionals. By handling repetitive, low-value tasks like basic contract negotiations, Autopilot frees attorneys and legal teams to focus their expertise on higher-value work.

While Autopilot operates autonomously, Luminance confirms lawyers can review the AI's work for quality control and oversight if desired. This pioneering demonstration of AI-to-AI contract negotiation shows how rapidly artificial intelligence is advancing on core legal work traditionally performed by human lawyers.

Photograph: Justin Sullivan/Getty Images

OpenAI has introduced a new legal protection called Copyright Shield to cover the costs of copyright infringement lawsuits faced by some ChatGPT users. At OpenAI's inaugural developer conference, CEO Sam Altman announced the company will "defend our customers" and pay legal fees for copyright claims; the protection applies to paid ChatGPT Enterprise customers and API developers.

This move aims to ease anxiety amid ongoing copyright disputes and lawsuits over unauthorized usage of copyrighted texts in training artificial intelligence systems like ChatGPT. A recent lawsuit filed by authors including George R.R. Martin alleged OpenAI infringed copyrights by using literary works without permission for ChatGPT's dataset.

Rather than removing copyrighted content from AI training data, OpenAI is essentially offering to handle resulting copyright infringement lawsuits on behalf of paying enterprise customers. However, free users of ChatGPT are not covered by Copyright Shield.

Legal experts say the policy adds to the ongoing debate over what copyright rules and protections artificial intelligence systems need, as models like ChatGPT face growing legal scrutiny over their training data practices.

Image created using DALL-E 3

As part of negotiations over the EU's draft Artificial Intelligence Act (AI Act), the European Parliament is considering allowing limited real-time uses of facial recognition technology in exchange for a ban on other AI practices.

Remote biometric identification (RBI) like facial recognition has been a major point of contention in drafting the AI Act. In the original proposal, the European Commission suggested allowing real-time facial recognition in certain cases like tracking missing persons, preventing terrorist attacks, or locating serious crime suspects.

The EU Parliament originally voted for a total RBI ban, wanting strict limits to prevent mass surveillance. Meanwhile, EU governments represented in the Council pushed for exceptions allowing police facial recognition.

The latest compromise would allow real-time facial recognition under narrow circumstances, in exchange for the Parliament getting an expanded list of banned AI uses. This represents the Commission trying to find middle ground.

Key issues still under debate include rules for powerful AI systems like ChatGPT and national security exemptions. The negotiations have entered their final stretch as EU institutions grapple over the scope of facial recognition within the landmark AI law.

The Bletchley Declaration represents a landmark multilateral agreement on artificial intelligence safety endorsed by 28 countries and the European Union at the inaugural 2023 AI Safety Summit in the United Kingdom. Drafted at Bletchley Park, the historic World War II site of British codebreaking against Nazi Germany, the Declaration lays out shared principles and a cooperative framework for governing rapidly advancing artificial intelligence systems globally. It focuses particularly on regulating powerful "frontier" AI models with capabilities rivaling or exceeding human intelligence.

Key Takeaways:

  • The Declaration encourages greater transparency and accountability from artificial intelligence developers around potentially harmful capabilities of advanced AI systems.

  • It establishes a two-part agenda focused on identifying common global risks posed by artificial intelligence, and formulating national policies and capabilities to mitigate those risks.

  • This includes supporting expanded scientific collaboration on artificial intelligence safety, instituting appropriate testing standards and metrics for AI systems, and building public sector capacity to govern AI responsibly.

  • The Declaration brings leading artificial intelligence powers, including the United States, China, and the European Union, into agreement on urgently understanding and addressing the risks created by progression toward advanced artificial general intelligence (AGI).

  • It creates a shared framework for international cooperation on research into frontier artificial intelligence safety and developing global governance mechanisms.

  • The Declaration was published on the opening day of the pioneering 2023 AI Safety Summit hosted by the United Kingdom at historic Bletchley Park.

  • The unprecedented agreement between 28 nations and the EU highlights growing worldwide urgency and consensus on collectively governing artificial intelligence to benefit humanity.

Unfiltered Notes from IAPP’s AI Governance Course

Want to tap into expert AI governance insights without the hefty price tag? As an attorney, I know how valuable — yet inaccessible — specialized technology education can be. So I'm excited to share with you raw notes straight from my recent IAPP AI Governance course.

Over the next few weeks, I'll be excerpting key learnings from this almost $1,000 program in my newsletter. Consider it your attorney-focused crash course in AI accountability, without the crushing student debt. Stay tuned to build your understanding of how to ethically and responsibly shape these transformative technologies. No need to pay exorbitant fees for AI fluency — I've got you covered.

Here are the unfiltered notes from Module 1: Core Concepts of AI and Machine Learning

Module 1: Core Concepts of AI

Definition of AI:

  • Artificial Intelligence (AI) refers to the capability of machines to perform tasks that typically require human intelligence.

  • It is a branch of computer science aimed at creating technology to emulate human-like activities.

Historical Context:

  • The term AI has been in use for decades, with significant early contributions from Alan Turing in 1950 through the development of the Turing Test. This test evaluates a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Key Characteristics of AI:

  • Technology: AI involves the application of technology to achieve specific objectives.

  • Autonomy: The emphasis is on AI systems' ability to reach defined objectives autonomously, without constant human guidance.

  • Human Involvement: Human input is crucial for training AI systems and setting objectives for them to accomplish.

  • Output: AI is characterized by its output, which includes task performance, problem-solving, and content creation.

Understanding AI:

  • There is no singular authoritative definition of AI, but common themes in many definitions include technology, autonomy, human involvement, and output.

  • AI systems are often designed to think creatively, consider various possibilities, and keep a goal in mind while making short-term decisions, which are traits traditionally associated with human intelligence.

Module 1: Machine Learning

Learning Process in AI:

  • Machine Learning (ML): Essential for teaching AI systems to solve problems, it encompasses the methods by which machines learn from data. A short code sketch after the list below makes several of these learning types concrete.

Types of Machine Learning:

  • Supervised Learning:

    • Involves learning from labeled data to perform tasks like classification.

    • Example: Distinguishing cats from dogs in images based on labeled features.

  • Unsupervised Learning:

    • Utilizes unlabeled data to find patterns or outliers.

    • Example: Detecting fraudulent transactions in banking data.

  • Semi-Supervised Learning:

    • Combines both labeled and unlabeled data, often used when labeled data is limited or costly to obtain.

    • Balances the structure provided by known labels with the volume of unlabeled data, improving learning efficiency.

  • Reinforcement Learning:

    • Operates on a system of rewards and penalties to shape behavior.

    • Example: Training self-driving cars to navigate roads safely.
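To make the first three categories concrete, here is a minimal sketch using scikit-learn on synthetic data. The dataset, the fraud framing, and the 20-label split are my own illustrative assumptions, not part of the IAPP course materials; reinforcement learning is omitted since its agent-environment-reward loop doesn't reduce to a few lines.

```python
# Minimal sketch of supervised, unsupervised, and semi-supervised learning
# with scikit-learn on synthetic data. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Supervised: learn from labeled data (think cat-vs-dog image features).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # every point is labeled
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: flag outliers in unlabeled data (think fraud detection).
transactions = np.vstack([rng.normal(size=(195, 2)),
                          rng.normal(loc=6.0, size=(5, 2))])  # 5 anomalies
flags = IsolationForest(random_state=0).fit_predict(transactions)
print("flagged as anomalous:", int((flags == -1).sum()))      # -1 = outlier

# Semi-supervised: a few labels plus many unlabeled points (-1 = unlabeled).
y_partial = np.full_like(y, -1)
y_partial[::10] = y[::10]                          # keep only 20 labels
semi = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print("semi-supervised accuracy:", semi.score(X, y))
```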

Socio-Technical Context of AI (it influences society and society influences it):

  • Interdisciplinary Involvement: Calls for collaboration across disciplines, such as social sciences and engineering, to understand and shape AI's societal impact.

  • Risk Factors: The complexity of AI systems and their operational environments introduce significant risks, underscoring the need for careful governance and continuous model refinement.

  • Training Data: The data used to train AI systems can introduce additional complexities and risks, necessitating careful consideration and management.

Meme Of The Week:

@airules

That’s all for today!

Catch y'all on LinkedIn

What'd you think of today's email?
