
Humana Sued Over AI Denials in Senior Healthcare

PLUS: IAPP's AI Governance Course Notes

📰 Welcome to this week's newsletter! Explore the legal challenges of AI as health insurer Humana faces a lawsuit over AI-based care denials. We also delve into the UK's distinctive approach to AI regulation compared with the EU's proposals, the controversy sparked by a lawyer's citation of AI-generated fake cases, and the EU's update of its liability laws to cover AI and software. PLUS: notes from IAPP’s AI Governance course! ⚖️🤖

Please note that I'll be on holiday for a couple of weeks, and the next newsletter will be on January 25, 2024. Happy holidays and happy new year! 🎉

On the docket today:

  • Humana Sued Over AI Denials in Senior Healthcare

  • UK Charts Unique AI Regulatory Course

  • Michael Cohen’s Lawyer Faces Sanctions over AI-Generated Fake Cases

  • EU Updates Liability Laws to Include AI and Software

  • OpenAI’s Guide to Crafting Powerful Prompts

  • PLUS: IAPP’s AI Governance Course Notes

  • Hilarious Meme

  New here? → Subscribe

Image created using DALL-E 3

Health insurer Humana is under legal scrutiny, accused in a class-action lawsuit of using an AI algorithm to deny essential rehabilitation care to seniors on Medicare Advantage plans. The case puts Humana alongside UnitedHealth Group among the major insurers confronting similar legal challenges.

In-Depth Details:

  • Humana's Legal Battle: The lawsuit, filed in Kentucky's federal court, accuses Humana of substituting medical expertise with an AI tool, leading to the denial of necessary care for elderly patients.

  • Plaintiff’s Struggle: Central to the lawsuit is JoAnne Barrows, an 86-year-old woman from Minnesota. Despite her doctor's recommendation for a six-week recovery period, Humana allegedly stopped her post-acute care payments after just two weeks.

  • Humana's Response: Humana defends its approach, stating that its use of "augmented" intelligence involves human oversight in AI decision-making. The company emphasizes its commitment to patient-focused decisions, adhering to medical guidance and CMS regulations.

  • UnitedHealth Group Context: Echoing the claims against Humana, UnitedHealth Group previously faced a lawsuit for using the same AI algorithm, developed by NaviHealth, a subsidiary. UnitedHealth has denied these allegations, asserting the lawsuit is baseless.

  • Broader Implications: These lawsuits signify a growing concern over AI's role in healthcare decisions, with Congress and the Biden administration raising questions about Medicare Advantage plan denials and advocating for more stringent regulatory oversight.

  • Future Developments: Anticipated changes to federal regulations, scheduled to take effect in January 2024, aim to limit the use of predictive algorithms in Medicare Advantage coverage decisions.

Significance: This case underscores the critical balance needed between innovative AI applications in healthcare and ensuring ethical, patient-centric decision-making, particularly in senior care. As legal and regulatory landscapes evolve, this issue remains a focal point for healthcare providers, insurers, and policymakers alike.

Image created using DALL-E 3

While the EU is making significant strides with its proposed AI Act, the UK is charting its own course in the AI regulatory landscape. The contrast between the EU’s more prescriptive legislation and the UK’s principles-based approach underscores the regions' differing strategies for governing AI technologies.

  • UK's Pro-Innovation Approach:

    • AI White Paper Insights: The UK government, through its March 2023 AI white paper, has articulated a 'pro-innovation' stance, favoring a principles-based framework over new AI-specific laws. This approach relies on existing legal frameworks and guidance from sector-specific regulators.

    • Draft Artificial Intelligence (Regulation) Bill: Marking a potentially significant shift, this Private Member’s Bill introduced in the House of Lords aims to establish a new AI regulatory body in the UK and put the AI principles outlined in the white paper on a statutory footing. This development signals the UK's evolving stance on AI regulation, possibly inching towards a more structured framework.

  • International and Domestic Developments Influencing UK's AI Policy:

    • Global Dialogues and Agreements: The UK's engagement with international AI developments, including the U.S. executive order on AI and its own AI Safety Summit, aligns with the global movement towards responsible AI use, exemplified by the G7's guiding principles.

    • Michelle Donelan’s Perspective: The Secretary of State for Science, Innovation and Technology has emphasized a measured approach to AI legislation, considering the rapid advancements in AI technology. The government is contemplating whether to create a central AI regulator or to enhance existing regulatory bodies.

As the EU forges ahead with its comprehensive AI Act, the UK is navigating its own path in AI regulation, balancing innovation with responsible governance. The evolving nature of the UK's approach, especially with the introduction of the Artificial Intelligence (Regulation) Bill, indicates a dynamic and potentially transformative period for AI regulation in the region.

David M. Schwartz, representing Donald Trump's former attorney Michael Cohen, is facing judicial scrutiny for citing three non-existent legal cases in a motion seeking early termination of Cohen’s supervised release.

Key Points:

  • Judicial Order: U.S. District Judge Jesse Furman has ordered Schwartz to provide copies of the three decisions cited in the motion by December 19, 2023, or explain why he should not face sanctions for citing non-existent cases.

  • Citations in Question: The motion Schwartz filed on November 29, 2023, cited three cases – United States v. Figueroa-Florez, United States v. Ortiz, and United States v. Amato – complete with case numbers and summaries. All three appear to be fabricated.

  • Cohen’s Representation: E. Danya Perry, who recently started representing Cohen, acknowledged her inability to verify the cases cited in the initial motion filed by Schwartz.

  • Speculation of AI Use: Given recent incidents where lawyers have used AI tools like ChatGPT for legal research, resulting in citations of non-existent cases, there is speculation that a similar AI tool might have been used in Cohen's case.

  • Cohen’s Legal Background: Cohen was sentenced to three years in prison in 2018 for various crimes, including tax evasion and campaign finance violations. He has been under supervised release after being released early due to the pandemic.

The case highlights, once again, the need to carefully verify legal research, regardless of the technology used.

The European Union has reached a political agreement to update the Product Liability Directive (PLD) to align with technological advancements, specifically including software and Artificial Intelligence (AI).

Key Aspects of the Updated PLD:

  1. Scope of Liability: Manufacturers will be liable for defects in products, whether tangible or intangible, and in related services, such as the traffic data supplied to a navigation system.

  2. Defining Defectiveness: A product is defective if it doesn't provide expected safety based on its foreseeable use, legal requirements, and the user group's specific needs. AI's capacity to learn and acquire new features is a key consideration in assessing defectiveness.

  3. Damages Covered: Material damages under the PLD include death, personal injury, psychological harm, property destruction, and data loss or corruption (excluding professional use). The concept of data loss as material damage was debated but retained without a value threshold.

  4. Disclosure of Evidence: The PLD allows claimants to request court-mandated evidence disclosure from defendants, ensuring it is necessary and proportionate. Confidentiality measures for sensitive information, including trade secrets, can be requested.

  5. Burden of Proof: Claimants must prove product defectiveness, damage, and causality. However, defectiveness can be presumed in certain conditions, like non-disclosure of evidence, non-compliance with safety requirements, or excessive difficulty in proving defectiveness due to technical complexities.

  6. Exemption for Open Source Software: The Directive won’t apply to free and open-source software developed or supplied outside commercial activities.

  7. Compensation Fund: Member states may use national compensation schemes for victims of defective products when the responsible economic operator is insolvent or non-existent, with a preference to avoid public funding.

  8. Liability Exemption and Right of Recourse: Small software component manufacturers may be exempt from liability if another operator is liable, and multiple liable operators have a right of recourse against each other.

  9. Development Risk Clause: The Council's text allows member states to maintain national measures holding operators liable even if the defectiveness could not have been discovered given the scientific and technical knowledge available at the time.

  10. Limitation Periods and Timeline: Claimants have three years to initiate proceedings after damage occurs, with a ten-year expiration period for seeking compensation, extended to 25 years for latent injuries.

The updated PLD will apply to all products placed on the EU market 24 months after its enactment, with member states responsible for transposing the directive into national law within this timeframe. The revision reflects the EU's efforts to modernize liability rules for the digital age, addressing the complexities introduced by AI and software.

Image created using DALL-E 3.

OpenAI published a comprehensive guide on prompt engineering for GPT-4, offering valuable strategies and examples for crafting effective prompts. Here are a few concrete examples:

Example 1: Adding Numbers in Excel

  • Good Prompt: "How do I add numbers in Excel?"

  • Better Prompt: "How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows with all the totals ending up on the right in a column called 'Total'."

Example 2: Inquiring About a President

  • Good Prompt: "Who’s president?"

  • Better Prompt: "Who was the president of Mexico in 2021, and how frequently are elections held?"

Example 3: Coding the Fibonacci Sequence

  • Good Prompt: "Write code to calculate the Fibonacci sequence."

  • Better Prompt: "Write a TypeScript function to efficiently calculate the Fibonacci sequence. Comment the code liberally to explain what each piece does and why it's written that way."

Example 4: Summarizing Meeting Notes

  • Good Prompt: "Summarize the meeting notes."

  • Better Prompt: "Summarize the meeting notes in a single paragraph. Then write a markdown list of the speakers and each of their key points. Finally, list the next steps or action items suggested by the speakers, if any."

These examples show how providing more specific and detailed prompts leads to more targeted and useful responses from the AI model. The better prompts clarify the context, specify the format, and give explicit instructions on what the response should include.
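If you want to try one of these prompts programmatically, here is a minimal sketch using OpenAI's official Node SDK (the `openai` npm package, v4 API). The model name and the meeting-notes placeholder are assumptions; substitute whatever model and content you actually use:

```typescript
import OpenAI from "openai";

// The SDK reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-4", // assumption: any chat model you have access to works
    messages: [
      {
        role: "user",
        content:
          "Summarize the meeting notes in a single paragraph. " +
          "Then write a markdown list of the speakers and each of their key points. " +
          "Finally, list the next steps or action items suggested by the speakers, if any.\n\n" +
          "Meeting notes:\n<paste your notes here>", // hypothetical placeholder
      },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();
```

Note that the prompt itself carries all the formatting instructions; the API call is just a thin wrapper around the same better prompt shown in Example 4.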

Unfiltered Notes of IAPP’s AI Governance Course

I’m finally able to share my notes from Module 2, which covers the risks and harms posed by AI. 🚀📚

Module 2: Section 1: Risks and Harms of AI

Introduction

  • Significance: AI's potential harms are extensive and can affect individuals, groups, society, organizations, and the environment. These risks might be inadvertently introduced during AI development and usage.

  • AI Ethics Necessity: Organizations must establish an AI ethics framework and procedures to identify, assess, and mitigate AI's potential harms, ensuring responsible AI usage.

  • Comprehensive Risk Understanding: Professionals in AI governance should be well-versed in diverse risks associated with AI, such as reputational, cultural, economic, legal, and regulatory, along with how AI's processing capabilities might amplify these risks.

Topics Covered

  • The module aligns with AIGP certification, focusing on understanding potential AI harms to individuals, groups, society, companies, and ecosystems.

Individual Harms

  • Specific Harms: AI systems can jeopardize personal civil liberties, rights, safety, and economic opportunities.

  • Bias in AI Systems: AI can manifest various biases (implicit, sampling, temporal, overfitting), which can discriminate against individuals or groups. The Amazon hiring-tool case exemplifies how bias against women can emerge from biased training data, and facial recognition algorithms often show demographic differentials, leading to unreliable and discriminatory outcomes.

  • Areas of Discriminatory Impact: Employment, insurance, housing, education, credit decisions, and differential pricing can all be affected by AI biases, leading to systemic discrimination.

Organizational and Environmental Harms

  • Scale of Individual Biases: Biases at the individual level can escalate, impacting entire organizations or environments.

  • Environmental Impact: AI's environmental footprint is significant, with studies showing high carbon emissions from training large AI models. While AI can aid environmental efforts (e.g., in agriculture, disaster response), its energy consumption and associated emissions pose challenges.

A Closer Look at Bias

  • Differentiating Bias Types: Understanding the nuances between acceptable (e.g., risk-based financial decisions) and harmful or illegal biases (e.g., discrimination based on race or gender) is crucial.

Civil Rights and Privacy

  • Bias and Privacy Issues: Examples like gender bias in facial recognition highlight civil rights concerns. Privacy risks involve misuse of personal data, challenges in de-identification, and transparency issues.

Economic Opportunity and Risk

  • Dual Nature of AI in Economy: AI presents both opportunities (job creation, productivity enhancement) and risks (job displacement, biased hiring practices). The evolving nature of AI in industries like software development can both threaten and create new job types.

Group Harms

  • Impact on Subgroups: AI can exacerbate discrimination against specific groups, manifesting in mass surveillance and restricting freedoms (assembly, protest). These technologies can deepen existing societal inequities.

Environmental Harms

  • AI’s Dual Environmental Impact: AI's potential for environmental conservation contrasts with its substantial carbon footprint during model training and operation.

Identifying Organizational Harms

  • Risk Categories: Includes reputational, cultural, economic, acceleration, legal, and regulatory risks.

  • Stakeholder Roles: Importance of involving various stakeholders (AI risk owners, legal, IT leaders) in understanding and mitigating risks.

  • Ethics in AI Development: Emphasizes the need for ethical principles to foresee and reduce emerging risks.

Defining Organizational Harms

  • Harm Types and Effects: Discusses various organizational harms, including reputational damage, economic losses, and legal sanctions.

  • Ongoing Risk Management: Stresses the importance of continuous monitoring and adaptation to address evolving AI risks.

Summary

  • Importance of Harm Identification: Emphasizes the necessity of identifying and mitigating harms throughout the AI and ML lifecycle to prevent catastrophic impacts on organizations. The module underscores the need for continuous vigilance and adaptation in AI governance to maintain trust and integrity in AI applications.

Meme Of The Week:

@lawyerswhowine

That’s all for today!

Catch y’all on LinkedIn