What Illinois HB 3773 Requires — and Why It Landed on January 1, 2026
Illinois was not starting from zero. The state enacted the Artificial Intelligence Video Interview Act in 2019 (effective January 1, 2020), the first US law to govern AI use in employment: it requires employers to notify candidates when AI analyzes video interview content (facial expressions, speech patterns) and to obtain consent before doing so. HB 3773 is a substantial expansion of that foundation: it covers generative AI, screening algorithms, and any automated decision-support tool across the full employment lifecycle, not just video interviews. Employers must retain all AI-related notices and disclosures for four years. Violations expose employers to civil penalties, actual damages, and attorney fees under the Illinois Human Rights Act enforcement framework, which has produced settlements averaging six figures in comparable discrimination cases.
The law amends the Illinois Human Rights Act to treat AI use that produces discriminatory outcomes as a civil rights violation, and to require affirmative disclosure regardless of whether any discrimination occurs. This two-track structure — disclosure obligation plus discrimination prohibition — is what distinguishes HB 3773 from most other US AI employment statutes, which target either disclosure or disparate impact, but rarely both simultaneously.
The practical trigger is broad: any AI tool that “influences or facilitates” a covered employment decision requires disclosure. Covered decisions include recruitment, hiring, promotions, renewals, training and apprenticeship selection, discipline, tenure, and discharge. Common AI tools that trigger disclosure include resume screening software, targeted job advertising platforms, video interview analysis tools, computer-based candidate assessments, and third-party HR data analytics services.
Notice must reach applicants through job postings. Current employees must receive annual notices plus updated notice within 30 days of any new or substantially updated AI system being deployed. The disclosure content is specific: it must identify the AI product name, the vendor, the employment decisions the AI affects, the data categories processed, the job posting scope, contact information for questions, and procedures for accommodation requests. Materials must use plain language and be available in languages common to the workforce.
The California and Colorado Parallel: A Three-State Compliance Architecture
As of 2026, at least 12 US states have introduced or passed legislation addressing AI in employment, ranging from disclosure requirements like Illinois's to bias testing mandates and prohibitions on specific algorithmic tools. Illinois's January 2026 effective date came within months of California's own AI employment regulation. California's Fair Employment and Housing Act (FEHA) amendments, effective October 1, 2025, treat an employer's anti-bias testing and proactive evaluation efforts as relevant "evidence when evaluating discrimination claims": employers that fail to conduct ongoing AI bias testing enter discrimination litigation with a weakened defense, even though no specific testing requirement is formally mandated.
The California framework is more nuanced than Illinois’s: rather than requiring specific disclosures, it creates a powerful incentive structure. Employers that document rigorous, ongoing AI bias testing — examining outcomes across race, gender, disability status, and age — can present that testing as evidence of good-faith compliance. Employers that offer only vendor-provided assurances or one-time pre-deployment tests face minimal defensive value when a complainant demonstrates disparate impact. The California regulation requires that testing be ongoing, recent, and examined by the employer (not just outsourced to the vendor).
Colorado had been positioned to add a third state-level AI employment framework, but implementation was postponed as of early 2026 pending further regulatory development. Colorado’s AI law delay does not remove the compliance pressure it created — it established the legislative precedent that other states are watching. The trajectory across Illinois, California, and the broader state movement points toward a de facto national standard: AI disclosure plus disparate impact assessment, applied across the employment lifecycle.
For multinational employers operating across multiple US states, the pragmatic compliance posture is to build the Illinois disclosure framework as the baseline — since it is the most specific and the most immediately enforced — while simultaneously implementing the California bias-testing documentation structure that provides the best defense position in discrimination claims.
What Enterprise Compliance Teams Must Do Now
1. Complete the AI Tool Inventory Across All HR Functions
The first step is scope definition: you cannot disclose what you do not know you are using. HR and IT teams must jointly inventory every tool that touches an employment decision — including tools procured by individual hiring managers or department heads without central IT involvement (“shadow HR tech”). The inventory should capture the tool name, vendor, the employment decisions it affects, the data categories it processes, whether a data processing agreement exists with the vendor, and whether the vendor has provided bias testing documentation. Any tool used for recruitment, screening, assessment, promotion modeling, or workforce analytics is potentially in scope under Illinois HB 3773.
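The inventory fields listed above map naturally onto a structured record. A minimal sketch follows; the `AIToolRecord` type, its field names, and the `COVERED_DECISIONS` set are illustrative assumptions, not statutory language:

```python
from dataclasses import dataclass

# Employment decisions covered by HB 3773, per the list in this article.
COVERED_DECISIONS = {
    "recruitment", "hiring", "promotion", "renewal",
    "training_selection", "apprenticeship_selection",
    "discipline", "tenure", "discharge",
}

@dataclass
class AIToolRecord:
    """One row of the joint HR/IT AI tool inventory."""
    tool_name: str
    vendor: str
    decisions_affected: set[str]     # which covered decisions the tool touches
    data_categories: list[str]       # e.g. resumes, assessment scores
    dpa_in_place: bool               # data processing agreement signed?
    bias_testing_docs: bool          # vendor supplied bias testing documentation?
    centrally_procured: bool = True  # False flags "shadow HR tech"

    def in_scope(self) -> bool:
        """Potentially in scope if the tool influences any covered decision."""
        return bool(self.decisions_affected & COVERED_DECISIONS)
```

Flagging `centrally_procured=False` records gives compliance teams a worklist of shadow HR tech to bring under the same disclosure and vendor-documentation process.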
2. Draft and Post the Required Notices Across Every Required Channel
Illinois HB 3773 specifies four delivery channels for current employee notices: employee handbooks (printed and digital), physical workplace postings, company intranet or website, and job notice systems. Each channel must carry the same disclosure content — product name, vendor, affected decisions, data categories, contact information. For job applicants, disclosure appears in the job posting itself. Missing even one channel creates a violation independent of whether any employee or applicant was actually affected. Legal teams should assign channel ownership and establish a review cycle: annual notice must go out on the same calendar schedule each year, and the 30-day clock for updated notices when new AI systems are deployed must be tracked in the HR technology change management process.
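The two clocks described above (the annual notice cycle and the 30-day window for updated notices) and the four-channel checklist can be tracked mechanically. A hypothetical sketch, with channel and function names chosen for illustration:

```python
from datetime import date, timedelta

# The four delivery channels named in this article.
REQUIRED_CHANNELS = {
    "handbook", "workplace_posting", "intranet", "job_notice_system",
}

def updated_notice_deadline(deployed_on: date) -> date:
    """Last day to deliver updated notice after a new or substantially
    updated AI system is deployed (the 30-day clock)."""
    return deployed_on + timedelta(days=30)

def missing_channels(delivered: set[str]) -> set[str]:
    """Channels still owed a notice; any non-empty result is an
    independent violation regardless of actual impact on employees."""
    return REQUIRED_CHANNELS - delivered
```

Wiring `updated_notice_deadline` into the HR technology change management process ensures the 30-day clock starts from the deployment record, not from when legal happens to hear about the tool.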
3. Negotiate Vendor Contracts to Secure Bias Testing Data and Audit Rights
The California FEHA amendments create a clear incentive: employers that rely solely on vendor assurances about AI fairness — “our model is tested for bias” — have minimal legal defense value in discrimination claims. The testing must be ongoing, documented, and assess outcomes by protected group. This requires vendor cooperation. Enterprise procurement teams should revise vendor agreements for AI HR tools to include: (a) access to aggregate outcome data by protected group, (b) contractual commitments on testing frequency, (c) notification rights when the model is retrained or updated, and (d) audit rights allowing the employer’s legal counsel to review testing methodology under privilege. Vendors that refuse these terms represent a compliance liability, not just a procurement issue.
4. Establish a Four-Year Records Retention Program for AI Disclosures
Illinois mandates that all AI-related notices and disclosures be preserved for four years. This is not a standard HR records retention period — most employment document retention policies run 1-3 years depending on the document type. The four-year requirement means that notices issued in 2026 must be preserved through at least 2030. Records must be sufficient to demonstrate: when each disclosure was made, which channel was used, which AI tool it covered, and which employee or applicant population received it. HR technology teams should ensure that notice delivery is logged — not just sent — in systems that support a four-year audit trail.
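A delivery log that supports the four-year audit trail described above might look like the following sketch; the record fields and the `RETENTION_YEARS` constant are illustrative assumptions about how a team could model the requirement:

```python
from dataclasses import dataclass
from datetime import date

RETENTION_YEARS = 4  # Illinois four-year retention for AI notices

@dataclass(frozen=True)
class NoticeLogEntry:
    """Audit-trail record proving a notice was delivered, not just sent."""
    notice_id: str
    delivered_on: date
    channel: str       # e.g. "intranet", "job_posting"
    ai_tool: str       # which AI tool the notice covered
    audience: str      # employee or applicant population reached

    @property
    def retain_until(self) -> date:
        # Preserve through at least the fourth anniversary of delivery.
        return self.delivered_on.replace(
            year=self.delivered_on.year + RETENTION_YEARS
        )

def purgeable(entry: NoticeLogEntry, today: date) -> bool:
    """A record may be purged only after its retention window closes."""
    return today > entry.retain_until
```

Making the entries immutable (`frozen=True`) mirrors the audit-trail intent: a delivery record that can be silently edited after the fact has little evidentiary value.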
The Structural Lesson: Disclosure Today, Liability Architecture Tomorrow
Illinois HB 3773 is disclosure-focused: it requires that employers tell employees and applicants when AI is in the room, not that they justify the AI’s decisions or achieve specific demographic outcomes. This is a reasonable first-generation regulatory framework — it establishes transparency without yet requiring algorithmic auditability.
But the regulatory trajectory is clear. The Illinois framework's civil rights hook (treating AI-influenced discrimination as a Human Rights Act violation) provides the legal pathway for second-generation enforcement: when a plaintiff files a discrimination claim, the disclosure notices they received become evidence that the employer knew AI was influencing the employment decision. A disclosed AI tool that produces adverse disparate impact is not absolved by the disclosure; the disclosure simply makes it easier to connect the tool to the employer's decision-making process in litigation.
The four-year record retention requirement reinforces this trajectory. Retaining four years of AI disclosure records means that when a 2026 hiring cycle becomes a 2028 discrimination lawsuit, the records will show exactly which AI tools were in use, which vendor supplied them, and which candidates received (or did not receive) disclosure notices. The compliance program that Illinois demands today is the litigation evidence ecosystem of tomorrow.
Enterprise compliance officers who approach HB 3773 as a disclosure exercise will find it manageable but miss the strategic point. The organizations that treat this regulatory moment as the catalyst for a comprehensive AI governance program — inventory, bias testing documentation, vendor accountability, and audit trails — will be materially better positioned when the second generation of AI employment regulation arrives, as it will, in most markets where they operate.
Frequently Asked Questions
Q: Does Illinois HB 3773 apply to companies headquartered outside Illinois that hire Illinois residents?
Yes. The Illinois Human Rights Act applies to employers with one or more employees in Illinois. An employer headquartered in another state that has even a single Illinois-based employee — including remote workers — is subject to HB 3773’s requirements for employment decisions affecting that employee. International companies with any Illinois workforce presence must comply.
Q: What is the penalty for failing to post a required AI disclosure notice?
Violations expose employers to actual damages (compensation for the affected employee or applicant), civil penalties, attorney fees, and remedies to make complainants whole under the Illinois Human Rights Act enforcement framework. There is no fixed statutory penalty per violation — the enforcement outcome depends on whether the violation resulted in an adverse employment decision and whether that decision caused demonstrable harm.
Q: Do off-the-shelf HR tools like LinkedIn Recruiter or Workday’s AI features trigger disclosure requirements?
Yes, if they influence covered employment decisions. LinkedIn’s AI-powered candidate ranking features, Workday’s Skills Cloud matching, and similar enterprise HR platform AI components are all covered if an employer uses them to influence hiring, promotion, or other covered decisions. The key legal question is whether the employer’s use of the tool causes AI to “influence or facilitate” the decision — not whether the vendor characterizes the feature as “AI.”
Sources & Further Reading
- Illinois Adopts New AI in Employment Regulations — Hinshaw Law
- AI-Assisted Hiring Faces New Compliance Landscape: California and Illinois — Manatt
- Colorado Postpones AI Law, California Finalizes Employment Regulations — Seyfarth
- Illinois Employers Face AI Transparency Deadline — Reinhart Law
- Illinois Unveils Draft Notice Rules on AI Use in Employment — Ogletree