On the evening of October 2, 2023, a Cruise robotaxi struck a pedestrian in San Francisco who had just been hit by another vehicle and thrown into its path. The Cruise AV then attempted to pull over, dragging the trapped pedestrian approximately 20 feet before stopping. The incident, captured on surveillance footage, triggered a California DMV investigation, a criminal probe, and, within weeks, the suspension of Cruise's operating permit. General Motors ultimately pulled its robotaxi fleet entirely and absorbed more than $900 million in losses from the Cruise unit. One question cut through all the corporate statements and regulatory filings: when a machine makes the decision to move, who is responsible for what it does?

That question has no clean legal answer in 2026. And with Waymo now running hundreds of fully driverless vehicles across San Francisco, Los Angeles, Phoenix, and Austin, Tesla deploying its Full Self-Driving software to over 2 million vehicles, and Chinese players like Baidu Apollo and Pony.ai expanding internationally, the pressure on lawmakers, insurers, and courts is mounting faster than the legal frameworks can adapt.

The SAE Ladder and the Liability Gap

The Society of Automotive Engineers defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full autonomy under all conditions). The legal frameworks that govern liability were built around the assumption of a human driver sitting behind a wheel, alert and in control. Above Level 2 — where the system handles both steering and acceleration but still requires continuous human supervision — that assumption starts to break down.

At Level 3, the car can handle most driving tasks but may ask the human to take over when needed. Mercedes-Benz's Drive Pilot, approved for limited Level 3 operation in Germany and in the US states of Nevada and California, sits in this legally ambiguous zone; Volvo has announced a comparable system, Ride Pilot, for its EX90. Mercedes explicitly accepted manufacturer liability for Drive Pilot incidents when the system is engaged, a historic move that effectively acknowledged the human is no longer the primary decision-maker.

Level 4 vehicles, which can operate without any human intervention within a defined geographic area (a "geofence"), are where things get genuinely complex. Waymo operates at Level 4: there is no one behind the wheel of a Waymo One robotaxi. If the vehicle strikes a cyclist, the question of "driver error" is almost philosophically incoherent. The human was not driving. The software was.
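As a rough illustration only, the way liability presumptions tend to track the SAE ladder can be sketched as a lookup. The mapping below is a simplification for exposition (jurisdictions differ, and real outcomes turn on facts and contracts), not a statement of law; the function and labels are hypothetical.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, 0 through 5."""
    L0 = 0
    L1 = 1
    L2 = 2
    L3 = 3
    L4 = 4
    L5 = 5

def presumptive_liable_party(level: SAELevel, system_engaged: bool) -> str:
    """Simplified sketch of who courts and regulators tend to look at first.

    Illustrative only: actual liability depends on jurisdiction, facts,
    and contract terms.
    """
    if level <= SAELevel.L2 or not system_engaged:
        # Driver is supervising (or driving): classic negligence framework.
        return "human driver"
    if level == SAELevel.L3:
        # Transitional zone, e.g. Mercedes accepting liability while engaged.
        return "shared / transitional"
    # L4-L5: no human driver to assess for negligence.
    return "manufacturer / operator"

print(presumptive_liable_party(SAELevel.L2, True))   # human driver
print(presumptive_liable_party(SAELevel.L4, True))   # manufacturer / operator
```

The interesting edge case is the second argument: at any level, a disengaged system puts the human back in the frame, which is exactly why Tesla's Level 2 crashes and Waymo's Level 4 crashes raise such different legal questions.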

Real Accidents, Real Disputes

Waymo’s safety record is frequently cited as evidence that AV technology is safer than human driving. A peer-reviewed study published in 2024, analyzing 7.1 million miles of rider-only Waymo driving, found a statistically significant reduction in injury-causing crashes compared to human-driven baselines. But Waymo vehicles have not been accident-free. Documented incidents include low-speed collisions with cyclists, a rear-end collision by a human driver while the Waymo was stopped, and multiple interactions with emergency vehicles that resulted in traffic disruption.

Tesla’s Full Self-Driving situation is categorically different. FSD is a Level 2 system — the driver must remain attentive and is legally responsible for the vehicle’s behavior. Yet the marketing language around FSD has long implied more capability than the technology delivers, and the National Highway Traffic Safety Administration has opened multiple investigations into FSD-related crashes, including a 2023 recall of over 360,000 vehicles after FSD software was found to cause unsafe behavior at intersections. The critical distinction: in Tesla crashes, liability typically falls on the human driver under current law, even when FSD was actively engaged. Plaintiffs’ attorneys are increasingly challenging this in court, arguing that software defects constitute product liability regardless of what the owner’s manual says.

Cruise’s October 2023 incident resulted in the company quietly settling with the victim for an undisclosed amount. It also prompted California to pass SB 915 in 2024, requiring AV companies to share incident data with state regulators within 72 hours.

The US Framework: Patchwork Federalism

The United States has no federal AV liability statute as of early 2026. The NHTSA’s Automated Vehicles Comprehensive Plan, updated in 2023, establishes reporting requirements and voluntary safety assessment guidelines, but it does not resolve the core liability question. That has been left to individual states, resulting in a patchwork that companies and insurers find extremely difficult to navigate.

California requires AV operators to maintain proof of $5 million in insurance or a bond. Arizona, which has historically been the most permissive AV state, allows fully driverless testing with minimal regulatory reporting. Texas passed legislation in 2023 clarifying that AV operators bear liability when no human is present. Nevada has the longest-standing AV framework, dating to 2011, and has gradually shifted liability toward manufacturers as autonomy levels increase.

Several federal bills have been introduced — the SELF DRIVE Act passed the House in 2017 but has never cleared the Senate. In 2025, a bipartisan proposal to create a federal AV liability framework was introduced, backed by a coalition of automakers and tech companies seeking legal uniformity. It remains in committee.

Europe Takes a Different Path

The European Union moved more deliberately and, arguably, more coherently. The EU’s Product Liability Directive, updated and adopted in late 2024, explicitly includes AI-driven systems — including autonomous vehicle software — within its scope. Manufacturers can be held strictly liable for damage caused by defective AI systems, with the burden of proof partially reversed: plaintiffs no longer need to prove exactly how the software failed, only that damage occurred and the product was defective.

Germany, which legalized Level 3 operation in 2017 and established a Level 4 framework through the 2021 Autonomous Driving Act and its 2022 implementing ordinance (AFGBV), requires a "Technical Supervisor" for Level 4 vehicles: a remote human who monitors the fleet and can intervene, creating a hybrid liability model that assigns partial responsibility to both the operator and the manufacturer.

The UK’s Automated Vehicles Act 2024 introduced a specific “Authorised Self-Driving Entity” (ASDE) designation. When an ASDE-designated system is driving, the manufacturer bears liability — not the owner. This is a clean, elegant solution that removes ambiguity for insurers and courts, and several AV companies have publicly praised the UK model as a template.


China: Permissive Regulation as Industrial Policy

China is pursuing a different philosophy. The Ministry of Industry and Information Technology and the Ministry of Public Security have progressively relaxed testing requirements for Level 4 and Level 5 vehicles, with Beijing, Shanghai, Guangzhou, and Shenzhen all designating large urban areas as open AV test zones. China’s 2022 regulations hold manufacturers liable for autonomous operation failures but have set relatively low insurance thresholds and favor administrative penalties over civil litigation as the primary enforcement mechanism.

This approach is explicitly designed to accelerate domestic AV deployment. Baidu’s Apollo Go robotaxi service operated over 1 million rides without a safety driver in 2023. Pony.ai received its first permit to charge commercial fares for fully driverless vehicles in Guangzhou in November 2023. The regulatory environment is facilitating scale that Western markets, with their more adversarial legal systems, have struggled to match.

Insurance: An Industry Rewriting Its Own Rules

The insurance industry is perhaps the most practically consequential actor in the AV liability debate. Personal auto insurance is a $300 billion annual market in the United States alone, almost entirely predicated on human driver risk. As autonomy increases, the risk shifts from driver behavior to product liability — and insurers are repositioning accordingly.

AXA has piloted AV-specific fleet insurance products in Germany and the UK, pricing premiums based on the software version of the autonomy system rather than driver history. Allianz published a report in 2024 arguing that by 2035, product liability claims could represent 40% of all auto insurance claims in markets with widespread AV deployment. Swiss Re has modeled scenarios in which personal auto premiums collapse by 60% over the next 15 years as liability shifts to manufacturers — who will insure against it through commercial product liability policies.

The practical implication: if you own a Level 4 vehicle and it causes an accident while driving autonomously, your personal auto insurance may not cover it. The manufacturer’s product liability policy would. Lloyd’s of London has begun writing specialized “autonomous systems liability” coverage for AV manufacturers, underwriting the software decisions of machines at scale.
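The Swiss Re-style scenario above is, at bottom, simple arithmetic: as the share of miles driven autonomously grows, the driver-risk portion of a personal premium shrinks toward a residual floor (theft, weather, comprehensive coverage), while the corresponding risk migrates into manufacturers' product liability pools. A toy model, with every parameter hypothetical:

```python
def personal_premium(base: float, av_mile_share: float, residual: float = 0.2) -> float:
    """Toy model of a personal auto premium as autonomy spreads.

    base          -- current annual premium (hypothetical)
    av_mile_share -- fraction of miles driven autonomously (0.0 to 1.0)
    residual      -- fraction of the premium that does not depend on
                     driver risk at all (hypothetical floor)

    Driver-risk coverage scales with the share of miles still driven
    by humans; the residual floor is unaffected. Illustrative only.
    """
    human_share = 1.0 - av_mile_share
    return base * (residual + (1.0 - residual) * human_share)

# Hypothetical: $1,500/yr premium today, 75% of miles eventually autonomous.
print(personal_premium(1500.0, 0.75))  # → 600.0, i.e. a 60% reduction
```

With these (made-up) inputs, 75% autonomous mileage cuts the premium by 60%, the same order of magnitude as the Swiss Re scenario; the point of the sketch is that the size of the collapse is driven almost entirely by how much of today's premium is really pricing human behavior.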

The Strict Liability Question

The most significant unresolved debate in AV law is whether autonomous vehicle manufacturers should face strict liability — that is, liability without the need to prove negligence — for accidents caused during autonomous operation. Strict liability already applies to inherently dangerous activities and, in most jurisdictions, to manufacturing defects in physical products.

Advocates argue that strict liability is the only framework that makes sense for Level 4 and Level 5 systems. When there is no human driver to assess for negligence, the question is simply: did the product cause harm? If yes, the maker pays. This creates powerful incentives to invest in safety and removes the burden on accident victims of reconstructing complex software decision trees in court.

Opponents, primarily AV manufacturers and their lobbying arms, argue that strict liability would chill innovation, make insurance prohibitively expensive, and penalize companies for accidents caused by factors outside their control — a pedestrian stepping unexpectedly into traffic, extreme weather, or infrastructure failures. They advocate for a negligence-based model with updated standards that account for the probabilistic, statistical nature of software decisions.

The EU’s revised Product Liability Directive leans toward strict liability for software defects. The US is moving more slowly. How this question resolves will determine whether the economics of AV deployment are viable — and who pays when the machines we trusted to drive get it wrong.


Decision Radar (Algeria Lens)

Relevance for Algeria: Medium — Algeria has no AV regulation, but the global legal frameworks being built now will shape future transport law, insurance regulation, and tech import rules.
Infrastructure ready? No — V2X (vehicle-to-everything) connectivity requires 5G coverage and road-sensor infrastructure not yet present at scale in Algeria; road conditions and urban planning also lag behind AV requirements.
Skills available? Partial — legal expertise in autonomous systems is minimal, and AV engineering talent is nearly absent domestically, though diaspora and returning engineers could bridge the gap.
Action timeline: Monitor only — Algeria is likely 8-12 years from meaningful AV deployment; the priority now is studying international frameworks to inform future policy drafts.
Key stakeholders: Ministry of Transport, Ministry of Justice, insurance sector (SAA, CAAT), National Road Safety Council, legal academia.
Decision type: Educational / Monitor.

Quick Take: Algeria’s transport ministry and legal system have a window of opportunity to study international AV liability models — particularly the UK’s clean ASDE framework and the EU’s updated Product Liability Directive — before autonomous vehicles arrive domestically. For Algerian insurers, the shift of liability from driver to manufacturer represents a fundamental business model disruption worth tracking now, not after the technology lands.

Sources & Further Reading