
Wanna Work in AI Law? Here are Key Areas of Client Demand

PLUS: Canada AI Law & Policy Cheat Sheet

Welcome to this week’s exploration of the AI legal landscape. Whether you’re intrigued by the evolving demand in AI law careers, keen to understand the latest USPTO guidelines on AI and patent inventorship, or curious about Connecticut's ambitious plans for comprehensive AI regulation—our newsletter has you covered. 🚀👩‍⚖️


Image created using DALL-E 3.

Barry Scannell, a senior lawyer at Irish law firm William Fry, provides an overview of the evolving landscape in AI law, emphasizing the shift from theoretical considerations to tangible, practical legal advice. He outlines the current areas where clients are most actively seeking legal expertise, focusing on AI product development and regulatory compliance.

Key Areas of Legal Expertise According to Barry Scannell:

  • AI Product Development: Providing guidance on the development of AI products, particularly those that incorporate large language models.

  • Regulatory Compliance: Assisting with the complexities of the AI Act, focusing on compliance strategies including designations and risk classifications.

  • Data Protection and Privacy: Addressing overlaps with existing regulations such as the GDPR and DSA, which are critical for the successful integration of AI technologies.

  • Intellectual Property: Managing significant demands in the IP sector, particularly concerning copyright issues related to machine learning and data licensing agreements for AI applications.

Image created using DALL-E 3.

Recent updates from the U.S. Patent and Trademark Office (USPTO) have clarified the role of AI in the patent process. While AI tools can assist in developing inventions, the new guidelines introduce fresh complexities for patent attorneys, emphasizing that a human must make a "significant contribution" to the invention process.

Key Takeaways:

  • AI-assisted inventions are patentable if a human is identified as making a significant contribution to the invention process.

  • The USPTO guidance responds to concerns by affirming that only natural persons, not software or AI systems, can be listed as inventors on patent applications.

  • The Federal Circuit ruled in 2022 that AI cannot be an inventor on patents, reinforcing the need for human involvement in the inventive process.

  • Attorneys are advised to carefully document the contributions of human inventors, especially when AI tools are used, to prevent potential invalidity of patents due to improper inventorship claims.

  • Attorneys now face an additional layer of diligence in determining the role AI played in the invention process, which could complicate patent applications.

  • A second round of guidance from the USPTO addresses the use of AI in drafting patent applications, hinting at future complexities if AI begins to contribute autonomously to patentable ideas.

  • The overarching message is that while AI can facilitate the invention process, it does not replace the essential human creative input required for patentability.

Image created using DALL-E 3.

The case between Uber Eats driver Pa Edrissa Manjang and Uber regarding the use of facial recognition technology for ID checks has brought attention to potential biases in AI systems. Manjang alleged that the AI system frequently misidentified him, leading to a reduction in his work opportunities, which he claimed was a form of racial harassment.

Key Takeaways:

  • AI Technology Concerns: The facial recognition system used by Uber, developed by Microsoft, has been criticized for its lower accuracy in identifying individuals of color, raising issues of racial bias in AI.

  • Legal and Ethical Implications: The case underscores the need for companies to ensure AI technologies are implemented in a way that does not lead to indirect discrimination, as outlined by the U.K.'s Equality Act 2010.

  • Company Accountability: Uber’s defense centered around human error and unrelated system issues, rather than flaws in the AI system itself. However, the tribunal did not dismiss the claims, indicating the complexity of distinguishing between human and AI errors in operational processes.

  • Regulatory Oversight: The U.K. General Data Protection Regulation and guidelines from the Information Commissioner's Office emphasize the importance of checking biometric data processes for bias to prevent discrimination.

  • Future Litigation Risks: The settlement of this case without a formal tribunal decision on AI bias leaves open questions about the fairness and reliability of AI systems in employment contexts. Future cases may further explore this issue.

  • Employer Responsibilities: Companies using AI need to be transparent about their use of technology, provide avenues for human review, and ensure their systems do not disadvantage individuals based on protected characteristics.

Image created using DALL-E 3.

Connecticut's legislative efforts center on establishing a regulatory framework for the use of AI within the state. The proposed Senate Bill 2 aims to fill the gaps left by federal inaction, addressing issues ranging from algorithmic discrimination to the creation of deepfakes, amid Governor Ned Lamont's concerns about stifling innovation.

Key Takeaways:

  • Legislative Leadership and Support: Senate Bill 2, spearheaded by Sen. James Maroney and backed by other key Senate leaders, seeks to implement standards for AI usage that could influence hiring, criminal investigations, and more.

  • Governor's Skepticism: Governor Lamont has expressed reservations about Connecticut taking a prominent role in AI regulation, emphasizing the fast-moving nature of technology and the potential to hinder innovation.

  • Industry Concerns: Industry representatives, including those from the Consumer Technology Association, argue that AI regulation should be handled at the federal level to avoid inconsistencies and maintain competitiveness for businesses of all sizes.

  • Protection Against Discrimination: The bill proposes mechanisms to prevent algorithmic discrimination and sets a timeline for compliance by AI developers, which includes penalties for non-compliance.

  • Legal Implications: The bill also explores enabling private lawsuits against AI discrimination, reflecting a strong push for individual legal recourse in the state's judiciary committee.

  • Public and Political Reactions: While there is substantial support from various political figures and technology companies like Microsoft and IBM, there's also significant opposition concerned with overregulation and the practicality of state-level enforcement.

  • Focus on Consumer Protection: Legislators emphasize the need to learn from past regulatory failures, such as those with internet privacy, aiming to protect consumers from potential abuses of AI technology.

  • Adjustments and Amendments: Ongoing revisions to the bill show a responsive legislative process, taking into account feedback from multiple stakeholders to refine and possibly limit the scope of regulations.


That’s all for today!

Catch y’all on LinkedIn

What'd you think of today's email?
