The Compliance Nightmare

A Fortune 500 company using an AI-powered resume screening tool in early 2026 faces a regulatory environment that would have been unthinkable three years ago. In Illinois, the company must notify every applicant that AI is being used in the hiring process. In New York City, it must commission an annual third-party bias audit and publish the results. In California, it must conduct anti-bias testing and retain all data for four years. In Colorado — assuming the law takes effect on its delayed deadline — it must complete a comprehensive impact assessment before deploying the tool. And in Texas, the same tool is governed by a framework that places most compliance obligations on government agencies while largely sparing private employers from prescriptive mandates.

No federal law harmonizes these requirements. The EEOC issued guidance on AI in employment selection in May 2023, but that guidance was removed from the agency’s website in January 2025 under the new administration. The result is a regulatory patchwork so complex that compliance has become a specialty practice, with law firms and consultancies building entire practice groups around the question of how to lawfully use AI in hiring.

The cost is not merely financial, though the financial burden is substantial. The patchwork creates perverse incentives. Some employers have responded by restricting AI hiring tool usage to states with minimal regulation, creating geographic disparities in hiring practices. Others have abandoned AI-assisted hiring entirely, reverting to manual processes that are themselves prone to the very biases that AI tools were designed to mitigate. A few have adopted the most stringent standard — typically Colorado’s — as their nationwide baseline, accepting higher compliance costs in exchange for operational simplicity.

None of these responses is optimal. The patchwork is not producing better outcomes for workers, employers, or the public interest. It is producing confusion, cost, and a growing sense that the current approach is unsustainable.

Illinois: Disclosure and the Human Rights Act

Illinois became one of the first states to broadly regulate AI in employment when Governor J.B. Pritzker signed HB 3773 on August 9, 2024, amending the Illinois Human Rights Act to explicitly address artificial intelligence. The amendment took full effect on January 1, 2026.

The law makes it a civil rights violation for employers to use AI in ways that have the effect of discriminating against employees based on protected characteristics, even if such discrimination is unintentional. It also makes it a civil rights violation to fail to notify employees of the employer’s use of AI in employment decisions, including recruitment, hiring, promotion, renewal of employment, selection for training, discharge, discipline, and tenure. Additionally, the law prohibits using zip codes as a proxy for protected classes.

The notice requirement appears straightforward, but its implementation has raised numerous practical questions. The law defines “artificial intelligence” broadly enough to capture not only purpose-built AI hiring tools but also general-purpose systems like large language models used to draft job descriptions, screen emails, or summarize interview notes. The Illinois Department of Human Rights has published draft rules clarifying notice and recordkeeping obligations, but final rules have not yet been adopted.

Employers must determine, for each step of their hiring process, whether any tool or system meets the statutory definition of AI. This audit process itself has proven time-consuming, as many organizations have adopted AI-powered tools incrementally without centralized tracking of which systems are used where.

Critically, Illinois uses a disparate impact framework. If an AI system produces statistically significant disparities in outcomes across protected classes, it is presumptively discriminatory regardless of the employer’s intent. The employer bears the burden of demonstrating that the disparate impact is justified by business necessity.
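The disparate-impact inquiry is ultimately statistical. As a rough illustration of what "statistically significant disparities" means in practice, the sketch below runs a two-proportion z-test on selection rates for two groups. The test choice and the applicant counts are hypothetical; the Illinois statute does not prescribe a particular methodology.

```python
import math

def two_proportion_z(selected_a, total_a, selected_b, total_b):
    """Two-sided two-proportion z-test for a gap in selection rates."""
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    # Pooled rate under the null hypothesis of equal selection rates
    p_pool = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 120 of 400 applicants in group A advance,
# versus 90 of 400 in group B.
z, p = two_proportion_z(120, 400, 90, 400)
# z is about 2.41 and p about 0.016, significant at the 5% level
```

A result like this would shift the burden to the employer to show business necessity; a non-significant gap would not, though auditors typically also report practical-significance measures such as impact ratios.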

The enforcement mechanism relies on complaints to the Illinois Department of Human Rights, with the state’s Human Rights Commission also playing an enforcement role. Applicants or workers who believe they have faced violations can file administrative charges and, after exhausting that process, pursue private lawsuits seeking uncapped compensatory damages, back pay, reinstatement, lost benefits, emotional damages, and attorneys’ fees.

Texas TRAIGA: A Fundamentally Different Philosophy

Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA), signed into law by Governor Greg Abbott on June 22, 2025, and effective January 1, 2026, takes an approach to AI governance that is philosophically distinct from Illinois’s framework. However, its practical impact on private employers is more limited than initial drafts suggested.

Earlier versions of the legislation, including the failed HB 1709, would have imposed significant obligations on private employers using AI in hiring, including mandatory impact assessments. The enacted version of TRAIGA instead places most of its compliance obligations on government agencies and excludes commercial and employment contexts from many of its disclosure mandates.

That said, private employers are not entirely exempt. TRAIGA prohibits the intentional use of AI to discriminate, and its biometric misuse provisions still apply. The key distinction from Illinois is the intent standard: to establish a violation, a complainant must generally demonstrate discriminatory intent or knowing failure to address known discriminatory outcomes, rather than relying on statistical disparities alone.

TRAIGA also provides an important affirmative defense. Employers that demonstrate substantial compliance with the NIST AI Risk Management Framework or similar recognized frameworks have a statutory defense against enforcement actions. This gives employers a relatively clear compliance path — align AI governance with NIST standards and document that alignment.

The Texas approach reflects a deliberate policy choice to prioritize innovation and employer flexibility over prescriptive regulation. Supporters argue that the intent-based framework avoids penalizing employers for statistical patterns that may reflect pre-existing societal inequalities rather than algorithmic bias. Critics argue that it effectively immunizes AI discrimination as long as employers avoid explicitly stating discriminatory intentions, and that the government-focused obligations leave private-sector workers with significantly less protection than their counterparts in Illinois or California.

California and Colorado: The Stringent Standards

California and Colorado have enacted the most demanding AI employment regulations in the country, though they differ in significant details.

California’s AI hiring requirements were adopted through California Civil Rights Department (CRD) rulemaking, with regulations approved on June 27, 2025, and taking effect on October 1, 2025. The regulations amend the existing framework under the California Fair Employment and Housing Act (FEHA) and apply to all employers in California that use automated decision systems in recruitment, hiring, and promotion.

The regulations require employers to conduct anti-bias testing of any automated decision system used in employment. To defend against a discrimination claim, employers must demonstrate that they performed proactive testing before and after adopting the system. The CRD identifies six relevant aspects of such testing: its quality, efficacy, recency, and scope; the results; and the employer’s response to those results. A single validation at launch is explicitly insufficient. All data generated by AI hiring tools — including applicant data, model inputs and outputs, and decision logs — must be retained for at least four years. This retention requirement has significant implications for data storage costs and data privacy compliance under the California Consumer Privacy Act.

Colorado’s AI Act (SB 24-205) is broader than California’s hiring-specific requirements. It applies to all “high-risk AI systems,” defined as AI systems that make or substantially influence consequential decisions in areas including employment, education, housing, insurance, and legal services. For employment, this captures not only hiring tools but also performance evaluation systems, promotion algorithms, and termination risk models.

The Colorado law requires developers and deployers of high-risk AI systems to conduct comprehensive impact assessments evaluating the system’s purpose, benefits, risks, data governance practices, bias testing methodology and results, transparency measures, and human oversight mechanisms.

The compliance deadline for the Colorado AI Act has been a moving target. Originally set for February 1, 2026, the deadline was pushed back when Governor Jared Polis signed SB 25B-004 in August 2025, a special-session bill that delayed the effective date to June 30, 2026. Senate Majority Leader Robert Rodriguez had initially sought to reform the law with compromise legislation but ultimately abandoned those efforts, stating that reaching consensus proved impossible. The Colorado legislature reconvened in January 2026 with the opportunity for additional amendments before the June deadline, but as of early 2026, the substantive requirements remain intact.

NYC Local Law 144: The Audit Model

New York City’s Local Law 144, which took effect in July 2023, is the oldest AI hiring regulation in the country and has served as a template for subsequent legislation. Its central requirement is an annual bias audit of any automated employment decision tool (AEDT) used in hiring or promotion in New York City.

The audit must be conducted by an independent third party and must assess the AEDT’s impact ratios across race/ethnicity and sex categories. The audit results must be publicly posted on the employer’s website, and a summary must be provided to candidates.
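The impact ratio for a selection-based tool is each category's selection rate divided by the rate of the most-selected category. A minimal sketch of that calculation appears below, using hypothetical counts; the 0.8 flag reflects the EEOC's four-fifths rule of thumb, which Local Law 144 itself does not mandate as a pass/fail threshold.

```python
def impact_ratios(selections):
    """selections maps category -> (selected, total applicants).
    Returns each category's selection rate divided by the highest rate."""
    rates = {cat: sel / tot for cat, (sel, tot) in selections.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit counts by sex category.
counts = {"female": (140, 500), "male": (200, 500)}
ratios = impact_ratios(counts)   # female: 0.70, male: 1.00
# Categories under the four-fifths benchmark, for auditor follow-up
flagged = [cat for cat, r in ratios.items() if r < 0.8]
```

In a published audit, these ratios are reported per race/ethnicity and sex category (and intersections), not used to automatically disqualify a tool.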

Nearly three years into implementation, Local Law 144 has produced a mixed record. A December 2025 audit by the New York State Comptroller’s office, covering July 2023 through June 2025, found significant enforcement gaps. The Department of Consumer and Worker Protection (DCWP) surveyed just 32 companies during the audit period and identified only a single instance of noncompliance. Despite receiving only two AEDT complaints during the entire two-year period, DCWP did not investigate whether its complaint intake process was functioning properly. The Comptroller’s audit also noted that DCWP officials lacked technical expertise to evaluate AEDT use and did not consult with the city’s Office of Technology and Innovation when making determinations.

Critics identify several additional weaknesses. The law’s definition of AEDT is narrower than the AI definitions used in Illinois and Colorado, potentially excluding AI systems that influence but do not directly make hiring decisions. The audit methodology is not standardized, leading to inconsistencies across auditors. And the public posting requirement has not generated the accountability mechanism that proponents envisioned — systematic monitoring and analysis of published audits remains minimal.

Despite its limitations, Local Law 144 has established the principle that algorithmic hiring tools should be subject to regular, independent review. This principle is now embedded in virtually every subsequent AI hiring regulation.

The Federal Preemption Complication

The already complex state-level landscape is further complicated by the Trump administration’s federal preemption initiative. On December 11, 2025, President Trump signed an executive order establishing a DOJ AI Litigation Task Force, which, beginning January 10, 2026, is responsible for challenging state AI laws in federal court on grounds that they unconstitutionally burden interstate commerce or are preempted by federal regulations.

The executive order also directs the FTC to issue a policy statement by March 2026 and instructs the Department of Commerce to identify “onerous” state AI laws that conflict with federal policy. However, the order expressly exempts certain categories of state AI laws from preemption, including those relating to child safety and state government procurement of AI.

For employers, this creates a strategic dilemma. Investing heavily in compliance with state AI hiring laws may prove wasteful if those laws are preempted. But failing to comply in the interim creates immediate legal exposure. Most employment lawyers are advising clients to comply with current state requirements while closely monitoring the federal preemption proceedings.

The absence of a comprehensive federal AI employment standard means that preemption would create a regulatory vacuum. Federal employment discrimination law — primarily Title VII of the Civil Rights Act — addresses intentional discrimination and disparate impact, but it was not written with algorithmic decision-making in mind. The EEOC issued guidance on AI hiring tools in May 2023, but that guidance was removed from the agency’s website in January 2025 and lacks the force of law regardless.

Several members of Congress have introduced AI employment-specific legislation. The No Robot Bosses Act (H.R. 6371), introduced in December 2025, would prohibit employers from relying exclusively on automated systems for employment decisions and require pre-deployment testing, annual discriminatory impact analysis, and public reporting. The bill has been referred to the House Education and Workforce Committee but has not advanced further. The prospect of comprehensive federal AI employment legislation passing in the current Congress is considered remote by most observers.

What Employers Must Do Now

In the absence of federal harmonization, employers using AI in hiring face a set of practical imperatives.

The first is comprehensive inventory. Organizations must identify every AI system used in any employment decision — hiring, promotion, performance evaluation, compensation, and termination. This includes not only purpose-built HR tech tools but also general-purpose AI systems used informally by hiring managers.

The second is jurisdictional mapping. For each AI system, employers must determine which state and local regulations apply based on the location of the employer, the location of the applicant or employee, and the job location. This mapping must be maintained dynamically as new laws take effect and existing laws are amended or potentially preempted.
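A compliance team might encode this mapping as a lookup table that unions the requirements triggered by every connected jurisdiction. The jurisdiction keys and requirement labels below are illustrative placeholders, not a legal determination of which laws attach to which locations.

```python
# Illustrative only; real compliance logic must track statutes,
# effective dates, and amendments, all of which change frequently.
REQUIREMENTS = {
    "IL":  {"applicant_notice", "disparate_impact_testing"},
    "NYC": {"annual_bias_audit", "public_audit_posting", "candidate_notice"},
    "CA":  {"anti_bias_testing", "four_year_retention"},
    "CO":  {"impact_assessment"},
    "TX":  {"nist_alignment_documentation"},
}

def applicable_requirements(employer_loc, applicant_loc, job_loc):
    """Union of requirements triggered by any connected jurisdiction."""
    reqs = set()
    for loc in {employer_loc, applicant_loc, job_loc}:
        reqs |= REQUIREMENTS.get(loc, set())
    return reqs

# A California employer hiring an Illinois applicant for a New York City role:
obligations = applicable_requirements("CA", "IL", "NYC")
```

The union semantics capture the conservative assumption that any connected jurisdiction's rules may apply; a real system would also record why each requirement attached, for audit purposes.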

The third is baseline compliance. Many employers are adopting the most stringent applicable standard as their nationwide baseline. In practical terms, this means conducting Colorado-style impact assessments, implementing Illinois-style disclosures, performing California-style anti-bias testing with four-year data retention, and publishing NYC-style audit results.

This approach is expensive but defensible. It provides a consistent framework that can be adjusted as the regulatory landscape evolves, and it demonstrates good faith compliance efforts that may be relevant in enforcement proceedings or litigation.

The AI hiring regulation landscape in 2026 is, by any measure, fragmented and difficult to navigate. It is also, arguably, an inevitable consequence of a federal government that has not acted on a technology that is transforming employment decisions for millions of workers. Until Congress passes comprehensive legislation or the courts resolve the preemption question, employers and workers alike are left navigating a system that no one designed and no one can fully understand.

🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: Medium — Algeria’s labor code does not address algorithmic hiring, but multinational employers operating in Algeria and Algerian companies adopting global HR tech platforms will face upstream compliance requirements from these laws.
Infrastructure ready? No — Algeria lacks regulatory frameworks for AI in employment, no bias auditing ecosystem exists, and the Ministry of Labor has not issued guidance on algorithmic hiring tools.
Skills available? No — Algeria has minimal expertise in AI fairness auditing, algorithmic bias testing, or employment-specific AI compliance; universities are not yet producing graduates with these specializations.
Action timeline: 12–24 months — Monitor US and EU regulatory developments; begin assessing AI tool usage in Algerian HR departments, especially at multinationals and large state enterprises.
Key stakeholders: Ministry of Labor, Employment and Social Security; ANEM (the national employment agency); HR departments at Sonatrach, Sonelgaz, and other large employers; multinational companies operating in Algeria; Algerian tech startups building HR tools.
Decision type: Educational / Monitor

Quick Take: Algeria’s employment market is not yet AI-driven at scale, but global HR technology platforms like Workday, SAP SuccessFactors, and HireVue are increasingly used by multinationals with Algerian operations. As these platforms adapt to comply with US and EU regulations, Algerian employers will inherit those compliance standards by default. Forward-thinking Algerian policymakers should study the emerging international consensus to prepare a coherent domestic framework before the patchwork problem arrives.

Sources & Further Reading