Your face is a password you cannot change. Your fingerprint, your iris, your voiceprint — these are permanent identifiers that, once leaked or misused, cannot be reset like a forgotten PIN. This biological permanence is precisely why biometric data has become the most legally contested category of personal information in 2026. What started with a single Illinois statute in 2008 has metastasized into a global patchwork of regulation, enforcement, and billion-dollar litigation. Companies collecting biometric data now face a compliance map that spans 30-plus U.S. states and more than 60 countries, each with different rules, penalties, and definitions. The biometric privacy explosion is reshaping how employers, retailers, and tech platforms interact with human identity — and the aftershocks are still building.

BIPA: The Lawsuit That Changed Everything

The Illinois Biometric Information Privacy Act (BIPA), enacted in 2008, was largely ignored for its first decade. That changed when plaintiffs’ attorneys discovered its private right of action: any person whose biometric data is collected, used, or disclosed without proper written consent can sue for $1,000 per negligent violation or $5,000 per intentional violation — per occurrence, per person. The math became devastating for large-scale deployments.
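The arithmetic is easy to sketch. Using only the statutory figures above — $1,000 per negligent violation, $5,000 per intentional one, assessed per person — a quick calculation shows why class actions became existential (the class sizes below are illustrative, not drawn from any actual case):

```python
# BIPA statutory damages (740 ILCS 14/20): $1,000 per negligent
# violation, $5,000 per intentional or reckless violation.
NEGLIGENT_DAMAGES = 1_000
INTENTIONAL_DAMAGES = 5_000

def bipa_exposure(class_size: int, violations_per_person: int = 1,
                  intentional: bool = False) -> int:
    """Worst-case statutory damages for a hypothetical BIPA class."""
    per_violation = INTENTIONAL_DAMAGES if intentional else NEGLIGENT_DAMAGES
    return class_size * violations_per_person * per_violation

# A hypothetical class of one million Illinois users, one negligent
# violation each, already reaches $1 billion in statutory damages.
print(bipa_exposure(1_000_000))                      # 1000000000
print(bipa_exposure(1_000_000, intentional=True))    # 5000000000
```

Courts have discretion over aggregate awards, but the headline exposure number is what drives settlement negotiations.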

The settlements that followed rewrote boardroom risk calculations. Facebook paid $650 million in 2021 to resolve a class action over its facial recognition tag-suggestion feature, which scanned faces of Illinois users without the required written release. TikTok paid $92 million in 2022 over similar BIPA claims related to face and voice data collection within its app. Google settled a $100 million case in 2022 over its Google Photos face-grouping feature. Six Flags theme parks paid $36 million to resolve claims tied to fingerprint scanning for season pass holders. Clearview AI, the facial recognition company whose database scraped billions of public photos, faced BIPA suits from multiple Illinois plaintiffs and agreed to a settlement that, among other remedies, bars it from selling its database to most private businesses in the U.S.

These numbers sent a clear signal: the cost of non-compliance with BIPA exceeds the cost of building compliant systems by orders of magnitude.

The State-Level Cascade

Illinois broke the dam, but it was not alone for long. By early 2026, more than 30 U.S. states have enacted or are actively considering biometric privacy legislation, though the frameworks vary considerably.

Texas enacted its Capture or Use of Biometric Identifier (CUBI) statute in 2009, modeled closely on BIPA but enforced exclusively by the state attorney general — no private right of action. Washington State passed its My Health My Data Act in 2023, which, while primarily a health data law, broadly covers biometric data and does include a private right of action. Washington's earlier biometric statute, enacted in 2017, likewise lacks private litigation rights.

Colorado, Virginia, Connecticut, and Montana all have comprehensive privacy laws that treat biometric data as a sensitive category requiring explicit opt-in consent. New York City passed a law in 2021 restricting biometric data collection in retail establishments and requiring signage in stores that deploy facial recognition. The California Consumer Privacy Act (CCPA), as amended by the CPRA, grants residents the right to opt out of the sale of biometric information and requires specific disclosures.

The practical consequence is that a national retailer with locations in 15 states must maintain 15 different compliance programs for its biometric-enabled loyalty apps, checkout kiosks, and employee timekeeping systems. Legal teams are overwhelmed. Compliance software vendors selling biometric consent management platforms have seen demand surge.

The EU AI Act and the Biometric Bright Lines

The European Union took a different architectural approach. Rather than creating a standalone biometric statute, the EU embedded biometric restrictions into the AI Act, which entered full enforcement phases in 2025 and 2026. The AI Act prohibits, with narrow exceptions, the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement. Mass biometric surveillance in public areas is categorically banned for most use cases.

The AI Act also classifies emotion recognition systems and biometric categorization systems — tools that infer race, political opinion, religious belief, or sexual orientation from biometric data — as high-risk or prohibited AI applications. Deployers must register these systems in a public EU database, conduct fundamental rights impact assessments, and implement human oversight mechanisms.

Under the GDPR, which predates the AI Act and continues to apply alongside it, biometric data has always been a special category requiring explicit consent or another specific legal basis. The combination of the GDPR and the AI Act gives the EU the most restrictive biometric environment among major global economies. Fines under the GDPR alone can reach 4% of global annual revenue. The AI Act adds administrative fines of up to €35 million or 7% of global turnover for the most serious violations.


Global Divergence: Three Competing Models

Beyond the U.S. patchwork and the EU framework, the rest of the world is splitting into distinct regulatory philosophies.

Brazil’s Lei Geral de Proteção de Dados (LGPD), modeled on GDPR, classifies biometric data as sensitive personal data. The Brazilian National Data Protection Authority (ANPD) issued sector-specific guidance in 2024 requiring consent or legitimate interest bases for biometric processing, with strict data minimization requirements. Brazil is trending toward alignment with EU standards.

India’s Digital Personal Data Protection Act, enacted in 2023 and rolling out through 2025 and 2026, does not create a special category for biometric data at the statute level; it treats biometric data within a general framework of personal data requiring consent. Critics argue this approach is insufficient given the scale of Aadhaar, India’s biometric national identity system covering roughly 1.4 billion people. India’s approach is broadly permissive compared to the EU’s.

China presents the sharpest contrast. China enacted the Personal Information Protection Law (PIPL) in 2021, which nominally requires consent for biometric data processing, but government and state-aligned entities are effectively exempt. China operates the world’s most extensive public facial recognition infrastructure, with hundreds of millions of cameras deployed across cities, transit systems, and residential compounds. For private companies operating in China, PIPL compliance is real. For the state, biometric data collection is a tool of governance.

The Enterprise Compliance Burden

For multinational companies, the divergence creates a daunting compliance problem. A technology company with operations in the EU, U.S., Brazil, and China must simultaneously satisfy GDPR’s explicit consent requirements, AI Act prohibitions, BIPA’s written notice and consent rules in Illinois, CCPA opt-out rights in California, LGPD consent frameworks in Brazil, and PIPL localization requirements in China — each enforced by different regulators with different timelines, audit rights, and penalty structures.

Facial recognition vendors are responding by building regional deployment variants: an EU-compliant version with real-time identification disabled in public spaces, a U.S. version with state-level consent logging, and an Asia-Pacific version with different defaults. The compliance cost for a mid-size enterprise deploying biometric employee timekeeping across 10 countries now routinely exceeds $500,000 annually in legal and software overhead.
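In practice, regional variants like these are often implemented as per-region feature profiles consulted at deployment time. A minimal sketch of what such gating might look like — the profile names, feature flags, and defaults below are illustrative assumptions, not any vendor's actual configuration:

```python
# Hypothetical per-region feature profiles for a biometric product.
# Flag names and values are illustrative only.
REGION_PROFILES = {
    "eu": {
        "realtime_public_identification": False,  # AI Act prohibition
        "consent_model": "explicit_opt_in",       # GDPR special-category data
        "emotion_recognition": False,
    },
    "us": {
        "realtime_public_identification": True,
        "consent_model": "state_specific",   # BIPA written release, CCPA opt-out
        "consent_logging": True,             # per-state audit trail
    },
    "apac": {
        "realtime_public_identification": True,
        "consent_model": "jurisdiction_default",
    },
}

def feature_enabled(region: str, feature: str) -> bool:
    """Fail closed: an unknown region or unlisted feature is disabled."""
    value = REGION_PROFILES.get(region, {}).get(feature, False)
    return value is True

print(feature_enabled("eu", "realtime_public_identification"))  # False
print(feature_enabled("us", "consent_logging"))                 # True
print(feature_enabled("mena", "consent_logging"))               # False
```

The fail-closed default matters: when a deployment lands in a jurisdiction the profile table does not cover, the safest behavior is to disable the contested feature rather than inherit another region's permissions.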

Facial Recognition Bans Expand

Cities and jurisdictions continue to extend outright bans on government use of facial recognition. San Francisco, Boston, Portland (Oregon), and several other U.S. cities have banned municipal use. The EU’s AI Act imposes the broadest prohibitions on law enforcement use across 27 countries. Canada’s Office of the Privacy Commissioner has issued findings that Clearview AI’s mass collection of facial images without consent violated federal privacy law.

The direction of travel in democratic societies is clear: biometric data is trending toward the highest tier of privacy protection, equivalent to medical records or financial data, with opt-in consent as the default baseline.


Decision Radar (Algeria Lens)

Dimension             | Assessment
Relevance for Algeria | Medium — Algeria has no specific biometric law yet; relevance grows as facial recognition adoption spreads
Infrastructure Ready? | Partial — biometric systems deployed in banking and at borders, but no regulatory framework
Skills Available?     | No — limited data protection legal expertise
Action Timeline       | 12-24 months — monitor GDPR-aligned legislation being drafted
Key Stakeholders      | Ministry of Interior, Bank of Algeria, ARPCE, legal firms
Decision Type         | Monitor

Quick Take: Algeria is actively expanding biometric use — from Aadhaar-style national ID modernization to biometric enrollment at banks — without a dedicated legal framework governing data retention, consent, or breach liability. As the country moves toward a more digital economy, drafting a biometric-specific addendum to Law 18-07 would bring it in line with GDPR-adjacent standards and reduce future regulatory friction when partnering with EU companies.
