⚡ Key Takeaways

Brazil’s Federal Senate approved PL 2338/2023, a three-tier AI risk framework that classifies AI systems as excessive risk (prohibited), high risk (regulated, with mandatory algorithmic impact assessments), or standard. The ANPD data protection authority will coordinate enforcement, with fines of up to R$50 million per violation. If enacted, it would be the most comprehensive AI law designed for a developing economy.

Bottom Line: Technology companies operating AI systems in Brazil should begin preparing algorithmic impact assessment capabilities and transparency mechanisms now, as the bill is in its final legislative stage before becoming law.



🧭 Decision Radar

Relevance for Algeria
High

Algeria is developing its own digital governance framework, and Brazil’s approach offers a directly applicable model for a developing economy balancing AI innovation with citizen protection.
Infrastructure Ready?
Partial

Algeria has a data protection authority (ANPD equivalent) but lacks the institutional capacity for algorithmic impact assessments and AI-specific enforcement.
Skills Available?
Limited

Algeria’s regulatory bodies have limited AI-specific technical expertise; building assessment teams would require significant capacity development.
Action Timeline
12-24 months

Algeria can study Brazil’s implementation outcomes before drafting its own framework, but early preparation of institutional capacity should begin now.
Key Stakeholders
Policymakers, Ministry of Digital, ARPCE, legal professionals
Decision Type
Strategic

This article provides a regulatory blueprint that Algerian policymakers can adapt, making it directly relevant to national AI governance planning.

Quick Take: Algerian regulators should study Brazil’s three-tier risk model and sandbox provisions as a template for developing Algeria’s own AI framework. The ANPD’s dual role as data protection and AI regulator is particularly relevant, since Algeria could leverage its existing digital governance structures rather than building new institutions from scratch.

The Largest Developing Economy Takes on AI Governance

Brazil’s Federal Senate approved Bill No. 2,338/2023 in December 2024, creating the most comprehensive AI regulatory framework outside the European Union. Now under final review in the Chamber of Deputies, the bill would govern AI systems serving the world’s seventh most populous country and ninth-largest economy by GDP.

Unlike the EU AI Act, which emerged from one of the world’s wealthiest economic blocs, Brazil’s approach explicitly accounts for the constraints and priorities of a developing economy. The bill balances fundamental rights protection with provisions designed to avoid stifling a nascent domestic AI industry, including regulatory sandboxes that let developers test AI systems in controlled environments before full market deployment.

The legislation arrives as Brazil executes its national AI strategy, the PBIA 2024-2028, which positions the country to become a regional AI leader in Latin America. With Argentina, Chile, and Colombia all exploring their own frameworks, Brazil’s bill could become the template for AI governance across the region.

Three Tiers of Risk, One Clear Hierarchy

The bill adopts a risk-based architecture that categorizes AI systems into three tiers. Systems deemed “excessively risky” are prohibited outright. This includes AI applications that exploit vulnerable groups, enable indiscriminate social scoring by governments, or deploy subliminal manipulation techniques that cause harm.

High-risk systems face the heaviest regulatory burden. The bill defines these as AI systems that directly affect individuals’ lives or rights in critical domains: healthcare diagnostics, criminal justice, credit scoring, hiring decisions, and public safety applications. Developers of high-risk systems must conduct algorithmic impact assessments before placing any system on the market, ensure human oversight for consequential decisions, provide transparency about how the AI functions, and test for discriminatory bias.

The third tier covers all other AI systems, which face baseline transparency requirements but minimal regulatory friction. This tiered approach mirrors the EU AI Act’s structure but with notable adaptations for Brazil’s regulatory environment, particularly in how it defines high-risk categories and enforcement mechanisms.
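To make the hierarchy concrete, the three tiers can be sketched as a small classification model. This is an illustrative sketch only: the category names and keyword sets below paraphrase the bill's examples and are not drawn from its legal text.

```python
from enum import Enum

class RiskTier(Enum):
    EXCESSIVE = "prohibited outright"
    HIGH = "regulated: impact assessment, human oversight, bias testing"
    STANDARD = "baseline transparency requirements only"

# Hypothetical labels paraphrasing the bill's prohibited and high-risk examples
PROHIBITED_USES = {
    "government_social_scoring",
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
}
HIGH_RISK_DOMAINS = {
    "healthcare_diagnostics",
    "criminal_justice",
    "credit_scoring",
    "hiring_decisions",
    "public_safety",
}

def classify(use_case: str) -> RiskTier:
    """Map an AI use case to its tier: prohibited, high-risk, or standard."""
    if use_case in PROHIBITED_USES:
        return RiskTier.EXCESSIVE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.STANDARD
```

The deliberate asymmetry in the design matches the bill's logic: everything not explicitly prohibited or flagged as high-risk falls through to the lightly regulated standard tier.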


ANPD Takes the Regulatory Helm

The bill designates Brazil’s National Data Protection Authority (ANPD) as the coordinator of the new National System for the Regulation and Governance of Artificial Intelligence, known as SIA. This decision leverages the institutional infrastructure Brazil built for its 2020 data protection law, the LGPD, rather than creating an entirely new regulatory body.

The ANPD will act as a “residual regulator” for AI matters not clearly allocated to sector-specific authorities. Where AI systems process personal data, the ANPD serves as the primary regulator. For sector-specific applications like healthcare AI or financial AI, the ANPD coordinates with existing regulators such as ANVISA (health) and the Central Bank.

Enforcement carries significant teeth. Administrative penalties include fines of up to R$50 million per violation (approximately $9 million), or up to 2% of the violating group’s annual revenue from the preceding fiscal year, whichever is greater. The ANPD can also order the reclassification of a system’s risk level or mandate algorithmic impact assessments to guide investigations.
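The "whichever is greater" fine structure means the R$50 million flat cap binds only for smaller groups; for large companies, the 2% revenue share dominates. A minimal sketch of that arithmetic (the function name and exchange-rate-free figures are illustrative, not from the bill):

```python
def max_administrative_fine(annual_revenue_brl: float) -> float:
    """Upper bound on a per-violation fine under the bill:
    the greater of R$50 million or 2% of the group's
    revenue from the preceding fiscal year."""
    FLAT_CAP_BRL = 50_000_000.0
    revenue_share = 0.02 * annual_revenue_brl
    return max(FLAT_CAP_BRL, revenue_share)

# A group with R$10 billion in prior-year revenue faces a ceiling of
# R$200 million (2% of revenue), well above the flat R$50M cap.
print(max_administrative_fine(10_000_000_000))
```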

Algorithmic Impact Assessments Become Mandatory

For high-risk and general-purpose AI systems, the bill requires algorithmic impact assessments (AIAs) before any system enters the market. These assessments must evaluate potential risks, quantify benefits, and detail mitigation measures. They must be updated throughout the system’s lifecycle, not just at launch.

The bill specifies that AIAs must be conducted by a professional team with appropriate technical, scientific, and legal expertise. The competent authority retains the power to regulate assessment standards and ensure the independence of assessment teams. This requirement draws from Brazil’s existing Data Protection Impact Assessment framework under the LGPD, creating a harmonized compliance pathway for companies already subject to data protection rules.
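The AIA requirements above — documented risks, benefits, mitigations, and updates across the system's lifecycle — can be pictured as a living compliance record. The field and method names below are hypothetical, chosen only to mirror the bill's described contents:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AlgorithmicImpactAssessment:
    """Hypothetical record of what the bill asks an AIA to capture."""
    system_name: str
    risks: list[str]
    benefits: list[str]
    mitigations: list[str]
    revisions: list[date] = field(default_factory=list)

    def revise(self, when: date) -> None:
        # The bill requires assessments to be updated throughout the
        # system's lifecycle, not only before market placement.
        self.revisions.append(when)

aia = AlgorithmicImpactAssessment(
    system_name="credit-scoring-model",
    risks=["discriminatory bias in approval rates"],
    benefits=["faster, more consistent loan decisions"],
    mitigations=["periodic bias audits", "human review of denials"],
)
aia.revise(date(2025, 6, 1))
```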

Brazil’s competition authority, CADE, has also weighed in, suggesting amendments to address concentration risks in AI markets and ensure the regulatory framework does not inadvertently favor incumbents over startups.

Why This Bill Matters Beyond Brazil

The bill’s significance extends well beyond Brazilian borders. As the first major AI law purpose-built for a developing economy, it offers a regulatory model that balances rights protection with economic development concerns. The regulatory sandbox provision is particularly notable: it allows AI developers to test systems in controlled environments with reduced regulatory requirements, providing a path for smaller companies and startups to innovate without bearing the full compliance burden from day one.

For multinational technology companies, the bill creates another compliance jurisdiction with meaningful penalties. Companies operating AI systems in Brazil will need to conduct impact assessments, implement transparency mechanisms, and potentially designate local representatives, much as the LGPD required for data protection.

The bill also establishes the precautionary principle and “prevention by design” as foundational concepts, requiring developers to anticipate and mitigate potential harms before a system reaches the market. This proactive approach contrasts with the reactive enforcement models still common in many jurisdictions.



Frequently Asked Questions

What is Brazil’s AI bill PL 2338/2023 and when will it become law?

PL 2338/2023 is Brazil’s comprehensive AI regulation bill that the Federal Senate approved in December 2024. It is currently under final review in the Chamber of Deputies. Once passed by the Chamber and signed by the president, it will establish a three-tier risk framework governing all AI systems operating in Brazil, affecting over 215 million citizens.

How does Brazil’s AI regulation differ from the EU AI Act?

While both use risk-based classification systems, Brazil’s bill is specifically designed for a developing economy. It includes regulatory sandboxes that allow startups to test AI systems with reduced compliance burdens, and it designates the existing ANPD data protection authority as the AI regulator rather than creating a new institution. Fines cap at R$50 million or 2% of revenue, compared to the EU’s 7% of global turnover.

What do companies need to do to comply with Brazil’s AI law?

Companies deploying high-risk AI systems in Brazil must conduct algorithmic impact assessments before market placement, implement human oversight for consequential decisions, ensure transparency about AI system functionality, and test for discriminatory bias. These assessments must be updated throughout the system’s lifecycle by qualified professional teams with technical, scientific, and legal expertise.
