When Nobody Is Responsible, Nobody Is Accountable
In the landmark case Mobley v. Workday, Derek Mobley — an African American man over 40 with a disability — applied to more than 100 jobs at companies that used Workday’s AI-powered screening platform. He was rejected every time, without a single interview. When he sued, the legal question was deceptively simple: who is liable?
The employers said they relied on Workday’s platform to screen candidates in good faith. Workday said its system reflected the data and preferences the employers provided. The cloud provider hosting the model said it merely provided compute infrastructure. In June 2025, a federal court in the Northern District of California conditionally certified the case as a collective action under the Age Discrimination in Employment Act, potentially covering millions of job applicants — and establishing that AI tool providers can be sued directly as “agents” under employment discrimination laws.
This is the AI liability gap: modern AI systems are built by chains of actors — data providers, model developers, platform providers, integrators, deployers — and existing legal frameworks, designed for a world where a single manufacturer produced a single product that caused a single harm, struggle to assign responsibility across this chain.
The gap is not theoretical. AI systems are making consequential decisions about credit approvals, medical diagnoses, criminal sentencing, insurance pricing, hiring, and content moderation. When those decisions are wrong, the question of who pays for the harm has no clear answer in most jurisdictions.
The Product Liability Problem
Traditional product liability law is built on a straightforward model: a manufacturer produces a defective product, a consumer is harmed by the defect, and the manufacturer is liable. This works for cars, appliances, and pharmaceuticals because:
- The product has a defined state at the time of sale
- The defect can be identified (design defect, manufacturing defect, or failure to warn)
- Causation is traceable (the brake failed, causing the crash)
- The manufacturer is identifiable
AI breaks each assumption:
No defined state: Many AI systems are continuously updated — the model serving predictions today may be different from the model deployed last week. Foundation models are fine-tuned, retrained, and updated with new data. Which version is “the product”?
Emergent behavior vs. defect: AI systems exhibit behaviors that were never explicitly programmed and cannot be reliably predicted from their training data or architecture. When a large language model produces a hallucinated medical recommendation, is that a “defect” in the product or an inherent characteristic of the technology? The distinction matters enormously for liability.
Causation opacity: In a traditional product liability case, an expert can examine the product and explain the chain of causation. In AI, the decision-making process of deep neural networks is opaque — even to the developers. Explaining why an AI system denied a specific loan application or flagged a specific person as a security threat may be technically impossible with current interpretability methods.
Distributed responsibility: An AI harm typically involves multiple parties: the company that collected the training data, the company that trained the foundation model, the company that fine-tuned it for a specific use case, the company that deployed it in production, and the company that made a business decision based on its output. Which party is “the manufacturer”?
The EU Approach: AI Act + Product Liability Directive
The European Union has taken the most comprehensive regulatory approach to AI liability through two complementary instruments.
The EU AI Act (in force since August 2024, high-risk obligations apply August 2026)
The AI Act classifies AI systems by risk level and imposes corresponding obligations:
- Unacceptable risk (banned): Social scoring, real-time biometric identification in public spaces (with narrow exceptions), manipulation techniques targeting vulnerable groups. Prohibitions took effect February 2, 2025.
- High risk (strictly regulated): AI in critical infrastructure, education, employment, law enforcement, migration, and judicial systems. High-risk AI systems must meet requirements for data quality, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Full compliance required by August 2, 2026.
- Limited risk (transparency obligations): Chatbots, deepfake generators, and emotion recognition systems must disclose that users are interacting with AI.
- Minimal risk (no restrictions): Spam filters, AI in video games, inventory management.
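The four-tier structure can be sketched as a simple lookup. This is an illustrative abridgment, not the Act's full Annex III enumeration, and the category strings are hypothetical labels chosen for the example:

```python
# Illustrative sketch of the EU AI Act's four-tier risk taxonomy.
# The category sets below are abridged examples, not the Act's legal text.
RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative targeting of vulnerable groups"},
    "high": {"employment screening", "credit scoring", "law enforcement",
             "critical infrastructure", "education admissions"},
    "limited": {"chatbot", "deepfake generation", "emotion recognition"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is 'minimal'."""
    for tier, uses in RISK_TIERS.items():
        if use_case in uses:
            return tier
    return "minimal"

print(classify("credit scoring"))   # high
print(classify("spam filtering"))   # minimal
```

In practice, classification under the Act turns on detailed legal criteria and exemptions, but the tiered logic itself is this simple: match the use case to a tier, and the obligations follow from the tier.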
The AI Act primarily regulates “providers” (companies that develop or place AI systems on the market) and “deployers” (companies that use AI systems under their authority). Providers of high-risk systems must conduct conformity assessments, maintain technical documentation, implement risk management systems, and register in an EU database. Penalties reach up to EUR 35 million or 7% of global annual turnover for prohibited practices violations.
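The penalty ceiling works as a "whichever is higher" rule, so for large companies the turnover-based cap dominates. A minimal sketch of that arithmetic:

```python
def ai_act_max_fine(global_turnover_eur: float) -> float:
    """Maximum fine for prohibited-practice violations under the EU AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 70 million,
# since 7% of turnover exceeds the flat EUR 35 million floor.
print(ai_act_max_fine(1_000_000_000))  # 70000000.0
print(ai_act_max_fine(100_000_000))    # 35000000 (the floor applies)
```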
The Revised Product Liability Directive (transposition deadline: December 2026)
The European Commission originally proposed a dedicated AI Liability Directive in 2022 to address civil liability for AI-caused harms. However, after failing to reach agreement among member states, the Commission withdrew the proposal in early 2025. In its place, the EU is relying on the revised Product Liability Directive (PLD), which came into force in December 2024 and must be transposed into national law by December 9, 2026.
The revised PLD explicitly treats software — including AI systems, operating systems, firmware, and applications — as a “product” subject to strict liability. Key provisions relevant to AI:
- AI as product: AI systems are now subject to the same strict liability regime as physical goods. If an AI system is defective and causes harm, the producer is liable regardless of fault.
- Expanded damages: The scope of compensable harm now includes medically recognized damage to psychological health and the corruption or destruction of data.
- Cybersecurity liability: Manufacturers are liable for damages resulting from cybersecurity vulnerabilities in their products.
- Disclosure of evidence: Courts can order defendants to disclose relevant technical documentation, partially addressing the information asymmetry between AI companies and individuals.
This approach means that AI liability in the EU will increasingly be handled through established product liability channels, with the AI Act providing the regulatory compliance framework and the PLD providing the civil liability mechanism.
The US Approach: State Patchwork Meets Federal Preemption
The United States has not enacted comprehensive AI liability legislation at the federal level. Instead, AI liability is addressed through a complex and increasingly contested mix of sectoral regulation, state legislation, and judicial precedent.
Existing Sectoral Regulation
Federal agencies are applying existing mandates to AI: the FDA regulates AI medical devices, the FTC enforces against deceptive AI practices, the EEOC addresses AI-driven employment discrimination under Title VII, and banking regulators (OCC, Fed, CFPB) address AI in lending under fair lending laws.
State Legislation Explosion
As of early 2026, 27 AI-specific laws have been enacted across 14 states, and 47 states introduced AI-related legislation in 2025 alone:
- Colorado enacted the first comprehensive state AI consumer protection law in 2024 (SB 24-205), requiring developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. However, implementation has been delayed to June 30, 2026, giving the legislature time to consider amendments.
- California enacted multiple AI laws effective January 1, 2026, including SB 53 (the Transparency in Frontier AI Act, the first US statute addressing “catastrophic risk” from frontier AI models) and AB 2013 (the Generative AI Training Data Transparency Act).
- Illinois amended its Human Rights Act (HB 3773) to prohibit employer use of AI that discriminates against protected classes and requires employers to notify candidates when AI analyzes video interviews.
- New York enacted the Responsible AI Safety and Education (RAISE) Act in December 2025, effective January 2027, creating a dedicated AI oversight office within the Department of Financial Services.
- Texas enacted the Responsible AI Governance Act, effective January 1, 2026.
The Federal Preemption Question
On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” directly targeting the state legislative patchwork. The order directs the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy, conditions federal broadband funding on states not having “onerous” AI laws, and directs the FTC to issue guidance on when state AI laws are preempted by federal trade law.
The order explicitly criticizes Colorado’s AI Act but carves out state laws relating to child safety, AI compute infrastructure, and state government AI procurement. Critically, executive orders cannot overturn existing state law — only Congress or courts can do that. Until legal challenges are resolved, state AI laws remain enforceable.
Litigation-Driven Standards
In the absence of comprehensive federal legislation, US courts are establishing AI liability standards through individual lawsuits:
- Section 230 and AI content: Whether Section 230’s immunity for platforms extends to AI-generated content remains deeply contested. In Garcia v. Character.AI (2025), a federal judge allowed strict product liability, negligence, and wrongful-death claims to proceed against Character AI after a teenager’s death, declining to treat chatbot output as fully protected speech. The Third Circuit’s ruling in Anderson v. TikTok (2024) held that algorithmic promotion of harmful content is not protected by Section 230.
- AI medical malpractice: Malpractice claims involving AI tools increased 14% in 2024 compared to 2022, but no landmark AI malpractice case has been decided. The Federation of State Medical Boards suggested in 2024 that clinicians — not AI makers — should bear liability for AI-assisted errors, though no binding legal standard exists.
- Autonomous vehicle liability: Tesla rejected a $60 million settlement demand in the Benevides case, which resulted in a $243 million verdict including $200 million in punitive damages. Cruise entered a deferred prosecution agreement in 2024 after a pedestrian-dragging incident in San Francisco. The legal trend shifts AV liability from negligence (requiring a human at fault) to product liability (requiring a product defect).
Deepfake Liability: The Frontier
Deepfakes — AI-generated synthetic media depicting real people doing or saying things they never did — represent one of the most acute liability challenges.
Non-consensual intimate imagery (NCII): AI-generated explicit images of real people (overwhelmingly targeting women) are a growing crisis. In January 2024, AI-generated explicit images of Taylor Swift went viral, prompting legislative action at both federal and state levels. The DEFIANCE Act, which would have created federal civil liability for non-consensual AI-generated intimate images, passed the Senate unanimously in 2024 but stalled in the House. Instead, President Trump signed the TAKE IT DOWN Act on May 19, 2025 — criminalizing the knowing publication of non-consensual intimate images (including deepfakes) and requiring platforms to implement notice-and-removal procedures within 48 hours. Enforcement provisions take effect May 2026.
Political deepfakes: AI-generated videos and audio of political figures making false statements pose direct threats to democratic processes. In January 2024, an AI-generated robocall mimicking President Biden’s voice urged New Hampshire voters not to vote in the primary. The FCC subsequently ruled that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act and levied a $6 million fine against the political consultant responsible.
Commercial deepfakes: AI-generated advertisements, endorsements, and impersonations of celebrities raise intellectual property and right-of-publicity issues. Multiple lawsuits are pending against AI companies whose models can generate images and voices of specific individuals without consent.
The liability chain for deepfakes is especially complex: the model developer, the platform operator, the individual who generated the deepfake, and the distributor who shared it may all bear some liability. Different jurisdictions are drawing these lines differently, and the TAKE IT DOWN Act’s platform-focused approach contrasts with the EU’s broader regulatory framework under the AI Act.
Insurance, Contracts, and Risk Transfer
As AI liability law remains uncertain, organizations are managing risk through contractual and insurance mechanisms — and the market is evolving rapidly.
AI indemnification clauses: Enterprise AI contracts increasingly include indemnification provisions where the AI vendor agrees to defend and indemnify the customer against third-party claims arising from the AI system’s outputs. The scope and limitations of these clauses vary widely and are heavily negotiated.
AI-specific insurance products: The insurance market is developing targeted AI coverage at pace. Armilla Insurance Services launched an AI liability product underwritten by Lloyd’s of London covering hallucinations, degrading model performance, and algorithmic failures. AXA released a cyber policy endorsement covering “machine learning wrongful acts.” Coalition expanded its definitions to include “AI security events” and deepfake-related fraud, and in December 2025 began offering coverage for deepfake-related reputational harm. Relm Insurance launched three AI-specific policies in January 2025. Overall, cyber insurance premiums are projected to rise 15% in 2026, driven partly by AI-related threats.
Model cards and documentation: Best practices for AI governance include publishing model cards — documentation of a model’s intended use, limitations, performance characteristics, and known biases — that serve both as user guidance and as evidence of reasonable care in potential litigation. Frameworks like NIST’s AI Risk Management Framework, ISO/IEC 42001, and the OECD AI Principles provide structured approaches that organizations can adopt to demonstrate compliance and mitigate liability exposure.
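A model card can be as simple as a structured record plus a completeness check. The sketch below is illustrative: the field names follow common model-card practice but are not a mandated schema, and the model, data, and metrics are entirely hypothetical:

```python
# A minimal, illustrative model card. Field names reflect common model-card
# practice (intended use, limitations, known biases), not a required schema.
# All values are hypothetical examples.
model_card = {
    "model_name": "loan-risk-classifier",   # hypothetical model
    "version": "2.3.1",
    "intended_use": "Pre-screening of consumer loan applications; "
                    "final decisions require human review.",
    "out_of_scope_uses": ["employment screening", "insurance pricing"],
    "training_data": "Anonymized loan outcomes, 2018-2023 (hypothetical).",
    "performance": {"auc": 0.87, "false_positive_rate": 0.06},
    "known_limitations": ["Degraded accuracy for thin-file applicants"],
    "bias_evaluations": ["Disparate impact tested across age bands"],
    "human_oversight": "Loan officers review all adverse recommendations.",
}

def check_card(card: dict) -> list:
    """Flag missing fields that liability-focused documentation should cover."""
    required = {"intended_use", "training_data", "known_limitations",
                "bias_evaluations", "human_oversight"}
    return sorted(required - card.keys())

print(check_card(model_card))            # [] -> complete
print(check_card({"intended_use": ""}))  # lists the missing fields
```

The point of a check like this is evidentiary as much as technical: a complete, versioned card is contemporaneous documentation of intended use and known limitations, which is exactly what a reasonable-care defense needs.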
Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | High — Algeria launched its National AI Strategy in December 2024 and AI adoption is accelerating in government services, banking, and energy. As deployment grows, liability questions will arise domestically. Algerian companies selling to EU markets must comply with the AI Act by August 2026. |
| Infrastructure Ready? | Partial — Algeria has established an AI Council (June 2023) and the Personal Data Protection Agency, but no AI-specific liability framework exists yet. The legal infrastructure to handle AI disputes is underdeveloped. |
| Skills Available? | Very Limited — Algeria has few legal professionals specializing in technology law or AI regulation. Cross-training between legal and technical disciplines is urgently needed. Law faculties have not yet integrated AI governance into curricula at scale. |
| Action Timeline | 12-24 months — Algeria should begin developing AI governance and liability frameworks now, drawing on the EU AI Act and revised Product Liability Directive as reference models, especially as the AI market is projected to grow from $499M (2025) to $1.69B by 2030. |
| Key Stakeholders | Ministry of Justice, Ministry of Digital Economy, Personal Data Protection Agency, Algerian Bar Association, university law faculties, AI Council, technology companies deploying AI in banking, healthcare, and government |
| Decision Type | Legislative-Strategic — Requires policy development at the national level, informed by international standards and the EU framework |
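The market projection in the table (USD 499M in 2025 to USD 1.69B by 2030) implies compound annual growth of roughly 28%. A quick back-of-envelope check:

```python
# Growth rate r implied by the table's projection:
# solve 499e6 * (1 + r)**5 = 1.69e9 for r.
start, end, years = 499e6, 1.69e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 27.6%
```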
Quick Take: Algeria has an opportunity to learn from the EU’s approach — particularly the revised Product Liability Directive’s treatment of AI as a product — and develop liability frameworks proactively rather than reactively. For Algerian companies deploying AI systems (especially in banking, healthcare, and government services), the immediate priority is documentation: maintain records of what AI systems are used, what decisions they inform, what data they were trained on, and what human oversight exists. For companies exporting software or services to the EU, compliance with the AI Act is a business requirement by August 2026. Algeria’s legal community should invest in technology law expertise as a strategic priority for the country’s digital future.
Sources
- EU AI Act — Official Text and Implementation Timeline
- DLA Piper — Latest Wave of EU AI Act Obligations (August 2025)
- IAPP — European Commission Withdraws AI Liability Directive
- Goodwin — EU Revised Product Liability Directive and AI
- Latham & Watkins — New EU Product Liability Directive
- Colorado General Assembly — SB 24-205
- Akin — Colorado Postpones AI Act Implementation
- White House — Executive Order on AI National Policy Framework (December 2025)
- Latham & Watkins — President Trump Signs Take It Down Act
- FCC — AI-Generated Voices in Robocalls Ruling
- Fortune — Workday, Amazon AI Employment Bias Claims
- Fortune — Section 230 May Not Protect Big Tech in AI Age
- IAPP — AI Liability Risks and Insurance
- Stanford HAI — AI Index Report
- FTC — AI Enforcement Actions