In 2022, an Air Canada chatbot told a grieving passenger: "Buy the full-price ticket now, and we will refund you the bereavement discount amount within 90 days."
This was a Hallucination. The airline's actual policy explicitly stated that no retrospective refunds would be given.
When the passenger sued, Air Canada's lawyers made a novel argument: "The chatbot is a separate legal entity responsible for its own actions. Air Canada cannot be held liable for the bot's independent statements."
In February 2024, the Civil Resolution Tribunal rejected this argument outright, ruling that the chatbot was part of the company's interface, just like a web page or a human agent. Air Canada was ordered to pay.
The Wake-Up Call: This case (Moffatt v. Air Canada) terrified General Counsels everywhere. It established that a company answers for what its AI Agents say. If your Agent promises a discount, defames a celebrity, or disparages a competitor, you are liable. And here is the kicker: your existing Commercial General Liability (CGL) insurance probably doesn't cover it.
Part 1: The "Silent Cyber" Exclusion
Most companies carry "Cyber Insurance." It covers Hacking, Ransomware, and Data Breaches. But an AI hallucination is not a hack. Nobody broke in; the system worked "as designed," and the output was still wrong. That is a Performance Error, not a security event. Worse, insurers have spent recent years closing the "Silent Cyber" gap: anything not affirmatively covered, including non-malicious algorithmic failures, now gets explicitly excluded. They pay if you get hacked; they don't pay if your bot is simply wrong. You need a new product category: AI Performance Liability Insurance.
Part 2: The Three Vectors of AI Risk
1. Defamation (The ChatGPT Problem)
If your semantic search tool summarizes news for a user and says "The CEO of Competitor X was convicted of fraud" (and it's false, a hallucination triggered by a similar name), that is Libel. Mitigation: RAG (Retrieval-Augmented Generation) with strict grounding. Insurance policies will require a "Human in the Loop" for high-risk public-facing outputs.
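What "strict grounding" means in practice: the bot may only emit claims it can match to retrieved source text, and must refuse otherwise. A minimal Python sketch; the word-overlap check and the 0.6 threshold are illustrative stand-ins for the NLI models or citation verifiers real systems use.

```python
def is_grounded(claim: str, passages: list[str], threshold: float = 0.6) -> bool:
    """Naive grounding check: a claim passes only if enough of its
    content words appear in a single retrieved passage."""
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    if not words:
        return False
    for passage in passages:
        passage_words = {w.lower().strip(".,") for w in passage.split()}
        overlap = len(words & passage_words) / len(words)
        if overlap >= threshold:
            return True
    return False

def answer_or_refuse(claim: str, passages: list[str]) -> str:
    # High-risk public output: emit only grounded claims, else refuse.
    if is_grounded(claim, passages):
        return claim
    return "I could not verify that against our sources."
```

The refusal branch is the part underwriters care about: a bot that says "I don't know" is a support ticket; a bot that invents a conviction is a libel claim.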
2. Copyright Infringement (The Copilot Problem)
If your internal coding assistant pastes a block of GPL-licensed code into your proprietary product, you can be sued by the code's copyright holder. Insurance: Microsoft, Google, and Amazon now offer "IP Indemnification" (the "Copyright Shield") to Enterprise customers. Their promise: "If you use our models with the safety filters on, and you get sued for copyright, we will pay the legal fees and the settlement."
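One cheap guardrail is to scan assistant-suggested code for copyleft license markers before it reaches your repository. A toy sketch; the pattern list is illustrative and no substitute for a real software-composition-analysis tool, which matches full license texts and file hashes.

```python
import re

# Markers that commonly indicate copyleft-licensed source. Illustrative,
# not exhaustive.
COPYLEFT_PATTERNS = [
    r"GNU General Public License",
    r"\bGPL-[23]\.0\b",
    r"GNU Lesser General Public License",
    r"\bAGPL\b",
]

def flag_copyleft(snippet: str) -> list[str]:
    """Return the copyleft markers found in an AI-suggested snippet."""
    return [p for p in COPYLEFT_PATTERNS if re.search(p, snippet, re.IGNORECASE)]

def safe_to_merge(snippet: str) -> bool:
    # Block the merge and route to legal review if any marker matches.
    return not flag_copyleft(snippet)
```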
3. Discrimination (The Hiring Bot Problem)
If your Resume Screening AI rejects all women because it was trained on historical data where men dominated the role, you are violating Title VII of the Civil Rights Act (and, in Europe, the EU AI Act's high-risk rules). Mitigation: Algorithmic Auditing. Insurers like Munich Re will run "Red Teaming" tests on your model before writing the policy.
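A concrete audit both insurers and US regulators recognize is the "four-fifths rule" from EEOC guidance: the selection rate for any protected group should be at least 80% of the rate for the most-selected group. A sketch:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (number selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Every group's selection rate must be >= 80% of the best group's.
    return all(r >= 0.8 * best for r in rates.values())
```

For example, if the model selects 50 of 100 men but only 20 of 100 women, the ratio is 0.2 / 0.5 = 40%, well below the 80% threshold, and the screen fails the audit.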
Part 3: The EU AI Act Impact
The EU AI Act classifies AI into risk tiers:
1. Unacceptable Risk (Social Scoring, real-time remote biometrics): Banned.
2. High Risk (Medical devices, Critical Infrastructure, Hiring, Credit Scoring): Requires conformity assessments, and Mandatory Liability Insurance in some jurisdictions.
3. Limited Risk (Chatbots, Games): Transparency obligations only; they must disclose that they are AI.
If you operate a "High Risk" system out of compliance, fines run up to €15 million or 3% of Global Turnover, and up to 7% for deploying a banned "Unacceptable Risk" system.
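The tiering above lends itself to a compliance checklist. The mapping below is an illustrative sketch, not legal advice; real classification turns on Annex III of the Act and the specific deployment context.

```python
# Illustrative tier -> obligations map based on the classification above.
TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited: do not deploy"],
    "high": ["conformity assessment", "risk management system",
             "logging", "human oversight",
             "liability insurance (some jurisdictions)"],
    "limited": ["disclose to users that they are interacting with AI"],
}

# Hypothetical example classifications for a few use cases.
EXAMPLE_USE_CASES = {
    "social_scoring": "unacceptable",
    "resume_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
}

def obligations_for(use_case: str) -> list[str]:
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        raise ValueError(f"unclassified use case: {use_case}")
    return TIER_OBLIGATIONS[tier]
```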
Part 4: Who is Liable? (The Supply Chain)
If a GPT-4 Wrapper App fails, who pays?
OpenAI (Model Provider): They claim "Platform Neutrality." Their ToS says "User assumes all risk." They are the "Engine Manufacturer."
The Developer (You): You are the "Deployer" (in EU terms). You built the car using the engine. You are primarily liable for the car crash.
The Insurer: Will only pay if you can prove you followed "Best Practices" (Guardrails, Evals, Logging).
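What "Evals" means as insurable evidence: a fixed regression suite run against the model before every deploy, with archived pass rates. A minimal sketch; `stub_model` is a stand-in for your real completion API, and the report format is purely illustrative.

```python
import time

def run_evals(model, cases: list[dict]) -> dict:
    """Run fixed eval cases against a model callable and build a report.
    Each case: {"prompt": ..., "must_contain": ...}."""
    results = []
    for case in cases:
        output = model(case["prompt"])
        results.append({
            "prompt": case["prompt"],
            "output": output,
            "passed": case["must_contain"].lower() in output.lower(),
        })
    # Archive this report as evidence for underwriters and adjusters.
    return {
        "timestamp": time.time(),
        "pass_rate": sum(r["passed"] for r in results) / len(results),
        "results": results,
    }

# Stand-in for a real completion API, for illustration only.
def stub_model(prompt: str) -> str:
    return "Refunds are available within 90 days with a receipt."
```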
Part 5: Emerging Insurance Products
The market is adapting. New specialized policies are appearing to cover the specific risks of Generative AI.
1. AI Performance Guarantees (Warranty Insurance) Concept: Similar to a warranty on a solar panel. If the AI model's accuracy drops below 95%, the insurance pays out the difference in lost revenue. Target: Enterprise B2B startups selling "Efficiency" tools.
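Under purely illustrative numbers, the payout mechanics of such a warranty might look like this:

```python
def warranty_payout(measured_accuracy: float,
                    guaranteed_accuracy: float,
                    monthly_revenue: float,
                    payout_per_point: float = 0.02) -> float:
    """Pay a fraction of monthly revenue per percentage point of accuracy
    shortfall below the guaranteed floor. All parameters illustrative."""
    shortfall = max(0.0, guaranteed_accuracy - measured_accuracy)
    points = shortfall * 100  # convert to percentage points
    return points * payout_per_point * monthly_revenue
```

With a 95% floor, measured accuracy of 91%, and $100k monthly revenue, that is 4 points x 2% x $100,000 = $8,000. The hard actuarial problem is not this arithmetic; it is agreeing on how "accuracy" is measured in the first place.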
2. "Model Collapse" Business Interruption Concept: If OpenAI goes down for 3 days, or if GPT-5 is released and makes your GPT-4 wrapper obsolete/incompatible, this pays for the business interruption. Target: Companies critically dependent on 3rd party APIs.
3. Bias & Discrimination Defense Concept: Covers the legal costs of defending a Class Action lawsuit regarding algorithmic bias in hiring or lending. Target: HR Tech and Fintech companies.
Part 6: The Forensics of an AI Claim
How do you prove the AI lied? You need "Black Box" forensics. When a claim is filed, the insurer will send a Forensic Token Analyst. They will demand access to:
System Prompts: Did you tell the AI "Be creative" (increasing risk) or "Be factual"?
Temperature Settings: Was temperature set to 0.7 (risky) or 0.1 (safe)?
RAG Logs: Did the AI have access to the correct document, or did it fail to retrieve it?
If your logs show you used a high temperature for a customer support bot, your claim will be denied due to "Gross Negligence."
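The three items above are exactly what you should be logging per request. A sketch of one audit record per completion, written as a JSON line; the field names are illustrative.

```python
import json, time, hashlib

def audit_record(system_prompt: str, user_prompt: str, temperature: float,
                 retrieved_doc_ids: list[str], output: str) -> str:
    """One JSON line per completion, covering the three things a
    forensic review will demand: prompt, settings, RAG context."""
    record = {
        "ts": time.time(),
        # Hash the system prompt so you can prove which version was live
        # without copying it into every log line.
        "system_prompt_sha256": hashlib.sha256(system_prompt.encode()).hexdigest(),
        "user_prompt": user_prompt,
        "temperature": temperature,
        "retrieved_doc_ids": retrieved_doc_ids,
        "output": output,
    }
    return json.dumps(record)
```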
Appendix A: The Policyholder's Glossary
Algorithmic Disgorgement: A legal remedy where the FTC forces you to delete not just the data you illegally collected, but the Model trained on that data. This is a total loss of IP.
Hallucination: Confidently stating false information. In insurance terms, this is often classified as a "Wrongful Act" or "Misrepresentation."
Indemnification: A contractual agreement where the Model Provider (e.g., Microsoft) agrees to pay for your legal costs if you are sued for IP infringement.
Silent Cyber: Potential cyber-related losses not affirmatively covered or excluded in traditional policies. Most insurers are now aggressively closing these loopholes.
Strict Liability: You are responsible for damage even if you were not negligent. If your AI hurts someone, you pay. Period.
Subrogation: The insurer pays you, then sues the party that caused the problem. E.g., your insurer covers your lawsuit, then sues OpenAI to recover the money.
Appendix B: Frequently Asked Questions
Q: Does my E&O (Errors and Omissions) cover AI? A: Maybe. But usually E&O covers "Professional Services." Is a chatbot a professional service? The courts haven't decided. Don't risk it. Get a specific endorsement.
Q: Can I buy insurance for Copyright Infringement? A: You can, but it is very expensive. Insurers are spooked by the New York Times v. OpenAI case. Most will exclude IP claims entirely unless you absorb the first $1M as a deductible.
Conclusion
AI is not just software; it is agency. And agency implies liability. The "Move Fast and Break Things" era is over. The "Move Fast and Get Insured" era has begun.
The next massive industry isn't just AI itself; it is the Actuarial Science of quantifying the risk of the machine. The companies that can accurately price the risk of a hallucination will become the biggest insurers of the 21st century.
Appendix C: The Underwriter's Perspective
Q: How do you price risk for a model that changes every week? A: We don't. We price the governance. I don't care if your model changes. I care if you have a "Change Management Board." If you just push code to prod on Friday at 5pm, your premium goes up.
Q: What is the "Nuclear Scenario" for AI insurance? A: A systemic failure. Imagine if a widely used open-source library (like LangChain) had a hidden vulnerability that caused 10,000 chatbots to start spewing racial slurs simultaneously. That is an aggregation event that could bankrupt a small carrier.
Q: Advice for startups? A: Buy "Tech E&O" with an affirmative AI endorsement. Do not rely on your general policy. Read the exclusions. If it says "excluding algorithms," you are naked.