Air Canada's Chatbot Defense Rejected
$800+ Value: IAPP's AI Governance Course Notes
Hey hey, AI homies! This week has been packed with so much action that our newsletter is bursting with juicy updates 💦 So let's dive straight into it! 🌊
On the docket today:
Air Canada's Chatbot Defense Rejected
The EU Launches Its AI Office
FTC Proposes a Rule Against AI Impersonation of Individuals
Wisconsin Implements AI Regulation in Political Campaigns and Public Sector
FCC Clarifies TCPA Rules for AI-Generated Robocalls
DOJ's Deputy AG on AI's Promise and Perils
Funny Attorney Meme
$800+ Worth of AI Governance Material
Air Canada defended itself in a small claims case by arguing that it should not be liable for misleading information provided by its AI chatbot, claiming the chatbot was a separate legal entity responsible for its own actions. The defense came after passenger Jake Moffatt sought a refund based on inaccurate bereavement policy advice given by the chatbot. Tribunal member Christopher Rivers rejected the argument, ruling that Air Canada is responsible for all information on its website, and awarded Moffatt a partial refund. Air Canada has since complied with the ruling, and the chatbot is no longer available on the airline's website.
The EU officially launched its AI Office on February 21, 2024, establishing a pivotal hub for AI expertise within the European Commission to spearhead the implementation of the AI Act, particularly focusing on general-purpose AI.
Key highlights include:
Central Role in Implementing the AI Act: Aims to foster the development and use of trustworthy AI, ensuring the technology's safe application across the EU.
Trustworthy AI Development: Supports initiatives to make AI safe and reliable, upholding citizens' fundamental rights and providing legal certainty to businesses in all 27 Member States.
Enforcement and Guidance: Will enforce rules for general-purpose AI models, develop tools for evaluating AI capabilities, and promote codes of practice in collaboration with AI developers and experts.
International Cooperation: Aims to establish a strategic and effective European approach to AI on a global scale, promoting the EU's vision of trustworthy AI.
Collaboration Across Sectors: Works with Member States, scientific communities, industry, civil society, and the open-source ecosystem through forums and expert groups to gather comprehensive insights on AI.
The AI Office not only marks a step forward in regulating and harnessing AI's potential within the EU but also opens up job opportunities in policy, technical, legal, and administrative roles. Interested candidates should stay tuned for upcoming calls for expression of interest.
The Federal Trade Commission (FTC) is proposing a rule to prohibit the impersonation of individuals, extending the scope of a newly finalized rule against government and business impersonation fraud. This action responds to the increasing use of AI-generated deepfakes and other forms of impersonation fraud that harm consumers. The FTC is also finalizing the Government and Business Impersonation Rule, which empowers the agency to pursue scammers in federal court for monetary relief. Public comments are invited on the supplemental notice of proposed rulemaking, and the final rule on government and business impersonation becomes effective 30 days after publication in the Federal Register.
The Wisconsin Assembly has passed two bills, AB 664 and AB 1068, to regulate artificial intelligence (AI) in political advertising and state operations. AB 664 mandates the disclosure of AI use in political ads, addressing potential issues of misinformation and election integrity. AB 1068 requires state agencies to audit their AI tools, with the goal of assessing how these technologies impact government efficiency and addressing the notion that AI could replace human employment.
On February 8, 2024, the Federal Communications Commission (FCC) issued a press release that read like an outright ban on AI-generated voices in robocalls. In fact, the ruling clarified that such calls are subject to the Telephone Consumer Protection Act's (TCPA) existing consent requirements for "artificial or prerecorded voices." The clarification aims to prevent the misuse of AI in making unsolicited robocalls to vulnerable individuals. Under the ruling, prior express consent is mandatory for non-marketing calls, and prior express written consent for marketing calls. The ruling also mandates clear identification of the caller and the provision of opt-out mechanisms. This move by the FCC seeks to eliminate any uncertainties about the TCPA's applicability to AI-generated voice calls, emphasizing the protection of consumers.
Deputy Attorney General Lisa O. Monaco highlighted the transformative potential of AI in enhancing the Department of Justice's operations, from identifying criminals to streamlining evidence analysis. However, she also cautioned against AI's risks, such as amplifying biases, facilitating the spread of disinformation, and lowering barriers for criminal activities. The appointment of a Chief AI Officer, the launch of the Justice AI initiative, and the formation of the Disruptive Technology Strike Force are part of the DOJ's strategy to harness AI's benefits while mitigating its dangers. Monaco's remarks underscored the necessity of balancing AI's potential to improve law enforcement and public safety with the imperative to protect civil rights and maintain public trust.
$800+ Worth of AI Governance Material: IAPP's AI Governance Course Notes
Module 5 is here! And, I’m just going to dump all sections here sooooo, it’s gonna be long. Get ready my fellow AI nerds! 🤓📚
Module 5: Mapping, Planning and Scoping the AI Project
Introduction
Focus: Develop AI governance and risk management program.
Key Action: Gather stakeholders to assess risks and impacts of algorithms.
Goal: Create methods for risk mitigation and learn documentation and communication tools.
Topics and Performance Indicators
Define Business Case and Cost/Benefit Analysis:
Consider trade-offs in AI system design.
Assess why AI/ML is chosen over other solutions.
Identify and Classify Risks:
Internal/external risks and their contributing factors.
Categories: Prohibitive, major, moderate.
Construct Probability/Severity Harms Matrix and Risk Mitigation Hierarchy:
Systematic assessment of potential harms.
Develop a structured approach to risk mitigation.
Algorithmic Impact Assessment:
Utilize Privacy Impact Assessments (PIAs) as a basis.
Tailor assessments to AI processes.
Human Involvement in AI Decision Making:
Determine the level and nature of human oversight.
Stakeholder Engagement Process:
Evaluate stakeholder salience.
Ensure diverse representation and expertise.
Perform positionality exercises and establish engagement methods.
Identify AI Actors in Design, Development, and Deployment:
Recognize all parties involved in different AI project phases.
Communication Plans for Regulators and Consumers:
Reflect compliance, disclosure obligations, transparency, and explainability.
Chart Data Lineage and Provenance:
Ensure data is representative, accurate, and unbiased.
Use statistical sampling to identify data gaps (see the sketch after this list).
Solicit Feedback from Impacted Individuals:
Engage with those affected by AI systems for feedback.
Create Preliminary Analysis Report on Risk Factors:
Document and manage identified risks proportionately.
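Since "statistical sampling to identify data gaps" can feel abstract, here's a minimal Python sketch of the idea: compare a training sample's subgroup mix against a reference population and flag the gaps. The subgroups, reference shares, and tolerance threshold are my own illustrative assumptions, not anything prescribed by the IAPP materials.

```python
# A minimal sketch of using sampling statistics to flag data gaps: compare
# the subgroup mix of a training sample against a reference population.
# The subgroups, reference shares, and 5-point tolerance are illustrative.

from collections import Counter

def find_data_gaps(sample: list[str], reference: dict[str, float],
                   tolerance: float = 0.05) -> dict[str, float]:
    """Return subgroups whose sample share deviates from the reference
    share by more than `tolerance` (as a proportion)."""
    counts = Counter(sample)
    total = len(sample)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = observed - expected
    return gaps

training_regions = ["urban"] * 800 + ["suburban"] * 150 + ["rural"] * 50
population_shares = {"urban": 0.55, "suburban": 0.25, "rural": 0.20}
print(find_data_gaps(training_regions, population_shares))
# -> {'urban': 0.25, 'suburban': -0.10, 'rural': -0.15}
# urban is overrepresented; suburban and rural are underrepresented
```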
Stakeholders and AI
Define and agree on AI goals with stakeholders.
Assess AI suitability for the mission.
Establish success parameters.
Determine meeting frequency for continuous evaluation.
Establish responsibility for risks and system failures.
Involve AI governance officers, privacy, security, procurement experts, subject matter experts, and legal teams.
Gathering Stakeholders
Define business case and cost/benefit analysis.
Decide on organization’s stance on AI use.
Develop communication plan for regulators and consumers.
Identifying Risk
Utilize tools like the probability/severity harms matrix and the HUDERIA risk index (see the sketch after this list).
Tailor method to organization’s needs.
Conduct algorithmic impact assessments.
Document risks, mitigations, and approved actions.
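Here's a minimal Python sketch of the probability/severity harms matrix idea, mapping each harm to the prohibitive/major/moderate categories from the notes. The 1-to-5 scales, the product score, and the thresholds are my own illustrative assumptions; tailor them to your organization's risk appetite.

```python
# A minimal sketch of a probability/severity harms matrix, assuming a 1-5
# ordinal scale for both dimensions and the three risk categories from the
# course notes (prohibitive, major, moderate). Thresholds are illustrative.

def classify_risk(probability: int, severity: int) -> str:
    """Map a harm's probability and severity (each 1-5) to a risk category."""
    score = probability * severity  # naive product score, range 1-25
    if score >= 16:
        return "prohibitive"   # do not proceed without redesign
    if score >= 9:
        return "major"         # requires documented mitigation
    return "moderate"          # monitor and manage proportionately

harms = [
    ("biased loan denials", 3, 5),
    ("chatbot gives outdated policy info", 4, 3),
    ("minor UI mislabeling", 2, 1),
]

for name, prob, sev in harms:
    print(f"{name}: {classify_risk(prob, sev)}")
```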
Summary
Involving stakeholders, defining goals, assessing risks, and communicating effectively are crucial to implementing AI and ensuring its ethical deployment.
Module 5: Testing and Validating the AI System During Development
Introduction
Importance: Continuous validation to mitigate security, privacy, bias, and safety issues.
Key Action: Identify risks to determine appropriate tests.
Topics and Performance Indicators
Evaluate Trustworthiness and Validity:
Use edge cases, unseen data, and malicious input for testing.
Conduct repeatability assessments and complete model cards/fact sheets.
Create counterfactual explanations (CFEs); see the sketch after this list.
Identify Security Threats:
Conduct adversarial testing and threat modeling.
Establish multiple layers of mitigation.
Understand trade-offs among mitigation strategies.
Risk Tracking and Deployment Strategies:
Document risk changes over time.
Select appropriate deployment strategies.
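Counterfactual explanations are easier to grasp with a toy example, so here's a minimal Python sketch: search for the smallest change to one input that flips a model's decision. The toy scoring model and the income-only search are my own illustrative assumptions, not an IAPP-endorsed method.

```python
# A minimal sketch of a counterfactual explanation (CFE): find the smallest
# change to one input feature that flips a model's decision. The toy model
# and the search step/budget are illustrative assumptions.

def toy_model(income: float, debt: float) -> str:
    """Stand-in for a trained classifier."""
    return "approve" if income - 0.5 * debt > 40_000 else "deny"

def counterfactual_income(income: float, debt: float,
                          step: float = 1_000, max_steps: int = 100):
    """Find the smallest income increase that flips 'deny' to 'approve'."""
    if toy_model(income, debt) == "approve":
        return None  # already approved, no CFE needed
    for i in range(1, max_steps + 1):
        candidate = income + i * step
        if toy_model(candidate, debt) == "approve":
            return candidate
    return None  # no flip found within the search budget

print(counterfactual_income(income=35_000, debt=10_000))
# -> 46000: "you would have been approved with an income of $46,000"
```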
Testing and Validation
Types of Testing: Accuracy, robustness, reliability, privacy, interpretability, safety, and bias assessments.
Resource Allocation: Prioritize testing based on risks and organizational capacity.
Compliance: Maintain documentation for audits and regulatory compliance.
Tailoring Testing: Customize tests based on algorithm type, industry standards, and AI’s purpose.
Use Cases and Resources
Align testing data and processes to specific use cases.
Understand resource allocation and prioritize based on risks.
Other Testing Considerations
Conduct adversarial testing and establish error mitigation layers.
Document all decisions and processes.
Summary
Thorough testing and validation ensure AI system reliability, security, and performance. Tailoring testing approaches based on risks and purpose is critical.
Module 5: Monitoring and Maintaining a System
Introduction
Focus: Continual monitoring post-deployment.
Key Action: Manage ongoing system performance and security.
Topics and Performance Indicators
Security Protocols and Industry Standards:
Follow protocols and standards for new risks post-deployment.
Develop an incident response plan.
Monitoring the System:
Maintain an inventory of AI systems with risk scores.
Allocate resources based on risk scores.
Regularly review algorithms for drift and changes (see the PSI sketch after this list).
Documenting and Responding to Incidents:
Maintain model versions and datasets.
Create challenger models for transparency.
Respond to risks with prioritized action plans.
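For the "review algorithms for drift" point, here's a minimal Python sketch using the Population Stability Index (PSI), one common drift metric: it compares a feature's distribution at training time against what the system sees in production. The bin counts and the 0.2 alert threshold are illustrative assumptions, so pick cutoffs that match your own risk scoring.

```python
# A minimal sketch of drift monitoring with the Population Stability Index
# (PSI). Bin counts and the 0.2 alert threshold are illustrative.

import math

def psi(expected: list[int], actual: list[int]) -> float:
    """PSI = sum((p_i - q_i) * ln(p_i / q_i)) over matching bins,
    where q = training-time share and p = production share."""
    e_total, a_total = sum(expected), sum(actual)
    value = 0.0
    for e, a in zip(expected, actual):
        q = max(e / e_total, 1e-6)  # guard against empty bins
        p = max(a / a_total, 1e-6)
        value += (p - q) * math.log(p / q)
    return value

train_bins = [100, 300, 400, 200]   # feature histogram at training time
live_bins = [250, 300, 300, 150]    # same bins observed in production

score = psi(train_bins, live_bins)
print(f"PSI = {score:.3f}")         # rule of thumb: > 0.2 means investigate
```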
Monitoring AI Systems
Implement tests to evaluate goal achievement and flag unintended outputs.
Use challenger models to predict additional risks.
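And here's a minimal sketch of the challenger-model idea: score the same inputs with the deployed "champion" model and a retrained "challenger," and flag when they diverge enough to warrant review. The toy models and the 10% threshold are my own illustrative assumptions.

```python
# A minimal sketch of champion/challenger monitoring: run both models on
# the same inputs and track disagreement. Toy models and the 10% review
# threshold are illustrative assumptions.

def champion(x: float) -> int:
    return int(x > 0.5)            # currently deployed model

def challenger(x: float) -> int:
    return int(x > 0.45)           # retrained candidate, lower cutoff

def disagreement_rate(inputs: list[float]) -> float:
    """Fraction of inputs on which the two models disagree."""
    disagreements = sum(champion(x) != challenger(x) for x in inputs)
    return disagreements / len(inputs)

batch = [i / 100 for i in range(100)]  # stand-in for a production batch
rate = disagreement_rate(batch)
print(f"disagreement rate: {rate:.2%}")  # > 10% would trigger human review
```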
AI Performance and Incident Response Plans
Consider performance issues as incidents.
Document model versions for accuracy and transparency.
Address internal and external risks with a risk score system.
Summary
Effective AI governance requires ongoing monitoring, fine-tuning, and robust incident response plans. Staying updated with best practices and documenting AI system purposes and limitations is essential.
That’s all for today!
Catch ya’ll on LinkedIn