The Largest Democratic Exercise in History Met Its Largest Cyber Threat
The 2024 election cycle was unprecedented in scale and in threat. More than 70 countries with a combined population of roughly four billion people held national elections, from the world’s largest democracy in India to critical contests in the United States, European Union, United Kingdom, Indonesia, and South Africa. Approximately 1.7 billion people actually cast ballots, making 2024 the biggest single-year democratic exercise in history. Every major election faced documented cyber threats, turning the year into a definitive stress test for election cybersecurity in the age of artificial intelligence.
The threat landscape had fundamentally shifted from previous cycles. While the 2016 and 2020 US elections were defined by state-sponsored hacking and social media manipulation, the 2024 cycle introduced generative AI as a force multiplier. Deepfake audio and video, AI-generated text at scale, and synthetic social media personas created a disinformation environment that was faster, cheaper, and more convincing than anything election defenders had previously confronted. CISA (the Cybersecurity and Infrastructure Security Agency) identified AI-enabled disinformation as a top-tier threat to the 2024 US election, alongside traditional infrastructure attacks.
The good news: democratic institutions held in most cases. No major election was overturned by a cyberattack in the traditional sense, and several countries demonstrated effective defensive frameworks. The bad news: Romania’s presidential election was annulled after a court ruled that foreign-backed social media manipulation had distorted the outcome, and the tools available to adversaries continue to evolve faster than the defenses arrayed against them.
The AI Disinformation Playbook: What Happened in 2024
The Slovakia parliamentary election on September 30, 2023, provided the preview. Two days before the vote, a deepfake audio recording appeared on Telegram purporting to capture a phone call between Michal Šimečka, leader of the liberal Progressive Slovakia party, and journalist Monika Tódová of Denník N. In the fabricated recording, the two appeared to discuss a scheme to rig the election by buying votes from the Roma minority. The recording was identified as AI-generated, but it rapidly jumped from Telegram to TikTok, YouTube, and Facebook, spreading widely during a pre-election media blackout when factual rebuttals were legally restricted. Both Šimečka and Tódová denied its authenticity, but the damage during Slovakia’s electoral silence period proved difficult to undo. Slovakia became the template that threat actors would iterate upon throughout the 2024 mega-cycle.
In India’s April-June 2024 general election, deepfake videos of political figures making inflammatory statements circulated on WhatsApp, reaching millions before fact-checkers could respond. The BJP and opposition parties both accused each other of deploying AI-generated content. India’s Election Commission issued guidelines requiring AI-generated content to carry labels, but enforcement was effectively impossible across WhatsApp’s encrypted platform. Meta expanded its Indian fact-checking network to 12 partners covering 16 languages and deployed its Elections Operations Centre, yet acknowledged that disinformation continued to outpace detection across the platform’s 500-million-plus Indian user base.
The 2024 US presidential election saw the most sophisticated deployment. AI-generated robocalls mimicking President Biden’s voice urged New Hampshire primary voters to stay home in January 2024. The FCC adopted a $6 million forfeiture order against Steve Kramer, the political consultant who orchestrated the scheme, while the voice service provider Lingo Telecom agreed to a separate $1 million fine. Kramer also faced criminal charges in New Hampshire. Throughout the general election, AI-generated images, audio clips, and full video deepfakes were deployed by both domestic and foreign actors. Microsoft’s Threat Analysis Center attributed coordinated AI disinformation campaigns targeting the US election to actors linked to Russia (Storm-1516), China (Spamouflage), and Iran (Cotton Sandstorm).
The most dramatic case came in Romania. In the November 2024 presidential election, far-right candidate Călin Georgescu surged from near-zero polling to win the first round, propelled by a coordinated TikTok campaign that Romanian intelligence services linked to Russian-backed operations. More than 25,000 TikTok accounts were allegedly used to amplify his candidacy. Romania’s Constitutional Court annulled the election results in December 2024, marking the first time a European democracy voided an election over foreign-backed social media manipulation. The European Commission subsequently opened a formal DSA investigation into TikTok’s failure to mitigate election integrity risks in Romania.
Infrastructure Under Siege: Beyond Disinformation
While AI disinformation dominated headlines, traditional election infrastructure attacks continued and evolved. Voter registration databases remained prime targets. The UK’s Electoral Commission disclosed in August 2023 that a breach, which began in August 2021 and went undetected until October 2022, had exposed the personal data of approximately 40 million voters. In March 2024, the UK’s National Cyber Security Centre attributed the attack to APT31, a Chinese state-backed hacking group. The incident, which took three years and more than GBP 250,000 to fully remediate, underscored the persistent vulnerability of centralized voter data systems.
DDoS attacks against election websites and result-reporting portals were reported in multiple countries. Indonesia’s General Election Commission (KPU) website was hit with what officials described as “hundreds of millions” of denial-of-service attacks on election day, February 14, 2024, during the presidential election count. The kpu.go.id website was temporarily inaccessible, disrupting public access to vote tally data before the KPU Cyber Security Task Force restored service. Similar attacks targeted election infrastructure in Moldova, Georgia, and Romania during their respective 2024 contests, with attribution pointing to Russian-linked actors in several cases. Moldova’s Central Electoral Commission, government cloud systems, and independent media outlets all suffered sustained DDoS attacks during the October 2024 presidential election.
The targeting of election supply chains represents a concerning evolution. Voting technology vendors, election management software providers, and even third-party printing companies producing ballot materials became targets for reconnaissance and compromise attempts. CISA responded by conducting over 700 cybersecurity assessments for local election jurisdictions in 2023 and 2024 alone, reflecting the growing recognition that compromising a single vendor can affect elections across multiple jurisdictions.
Defensive Frameworks That Worked
Several countries demonstrated effective election cybersecurity models during the 2024 cycle. The United States’ approach, coordinated by CISA, centered on the “Rumor Control” framework that preemptively addressed anticipated disinformation narratives with factual content. CISA deployed cybersecurity and election security advisors to all 50 states, conducted 200 tabletop exercises and over 500 trainings reaching more than 30,000 election officials and partners, and established real-time threat-sharing channels between federal agencies, state officials, and social media platforms. While imperfect, the framework provided a structured response mechanism that significantly shortened the time between disinformation appearance and authoritative debunking.
The European Union leveraged the Digital Services Act (DSA) as its primary regulatory tool. Under DSA obligations, very large online platforms including Meta, X, Google, and TikTok were required to assess and mitigate election-related risks, provide researcher access to data, and implement rapid-response mechanisms during election periods. The European Digital Media Observatory (EDMO) coordinated fact-checking efforts across member states. However, enforcement proved slow: the first DSA fine was not imposed until December 2025, when X (formerly Twitter) was fined EUR 120 million for transparency violations including inadequate advertising repository access. TikTok avoided a fine by accepting binding commitments on advertising transparency, though the Commission’s investigation into its role in the Romanian election continued.
Taiwan’s January 2024 presidential election offered perhaps the most instructive model. Facing persistent Chinese influence operations, Taiwan deployed a “humor over rumor” strategy where government agencies and civil society organizations rapidly produced memes and satirical content to defuse disinformation narratives. Established by the Ministry of Digital Affairs around 2022, the strategy leveraged the speed advantage of humor, which can go viral faster than fact-checks, while avoiding the censorship concerns associated with content removal. Combined with robust public digital literacy programs and a highly engaged civil society fact-checking ecosystem, Taiwan successfully navigated its election despite being one of the most intensively targeted democracies on earth.
The 2026 Horizon: Emerging Threats and Evolving Defenses
Looking ahead, three developments will define the next generation of election cybersecurity challenges. First, real-time deepfake video is approaching the threshold of live deployment. The ability to generate convincing deepfake video of a political figure in real-time, potentially during a live-streamed event, creates a threat vector that current detection tools cannot address within the relevant time window. Startups like Reality Defender and others are developing real-time detection capabilities, but deployment at the scale of social media platforms remains a significant engineering challenge.
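The time-window problem can be made concrete. Suppose some per-frame classifier (its internals are out of scope here; the scores are assumed inputs, and the parameter values are hypothetical) emits a suspicion score between 0 and 1 for each video frame. A live monitor has to smooth those noisy scores and decide within seconds; a minimal sketch of that aggregation step:

```python
class StreamScorer:
    """Smooths noisy per-frame deepfake scores (0 = likely real,
    1 = likely synthetic) with an exponentially weighted moving average,
    flagging the stream once the smoothed score crosses a threshold.
    The per-frame detector itself is assumed, not implemented."""

    def __init__(self, alpha: float = 0.2, threshold: float = 0.8):
        self.alpha = alpha          # weight given to the newest frame
        self.threshold = threshold  # smoothed score that triggers an alert
        self.ewma = 0.0

    def update(self, frame_score: float) -> bool:
        # New estimate = blend of the current frame and the running history.
        self.ewma = self.alpha * frame_score + (1 - self.alpha) * self.ewma
        return self.ewma >= self.threshold
```

The smoothing parameter encodes the core trade-off: a higher `alpha` reacts within fewer frames but lets a single misclassified frame trigger a false alarm, which is exactly the latency-versus-accuracy tension that makes live deployment hard.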
Second, the convergence of AI and micro-targeting creates hyper-personalized disinformation. Rather than broadcasting a single false narrative to millions, adversaries can now generate thousands of tailored messages designed to exploit specific demographic, psychographic, or regional vulnerabilities. A voter in a swing district could receive AI-generated content calibrated to their specific concerns, delivered through channels they trust, making detection and debunking far more difficult than mass-produced propaganda.
Third, the regulatory landscape remains fragmented. The EU’s DSA provides one model, but most democracies lack equivalent frameworks. The US approach relies heavily on voluntary platform cooperation, which has weakened significantly as major platforms reduced trust and safety teams in 2023-2025. Meanwhile, CISA itself faces political headwinds, with election officials reporting diminished federal support. The gap between the sophistication of AI-enabled election threats and the regulatory tools available to counter them continues to widen.
International coordination offers the strongest path forward. The G7 Hiroshima AI Process and the Bletchley Declaration on AI Safety established high-level principles, but operational election cybersecurity coordination remains ad hoc. A permanent international election cybersecurity coordination center, modeled on CERT-to-CERT relationships, could enable real-time threat sharing across democracies facing common adversaries.
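The exchange format such a center would need already exists: STIX 2.1 objects carried over TAXII are how CERT-to-CERT indicator sharing works today. As a sketch (the domain name and description are hypothetical), a minimal STIX 2.1 Indicator for a disinformation-serving domain can be built with only the standard library:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(domain: str, name: str) -> dict:
    """Build a minimal STIX 2.1 Indicator flagging a malicious domain --
    the kind of machine-readable IOC exchanged between national CERTs."""
    now = (datetime.now(timezone.utc)
           .isoformat(timespec="milliseconds")
           .replace("+00:00", "Z"))
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # STIX IDs are <type>--<UUID>
        "created": now,
        "modified": now,
        "name": name,
        "pattern": f"[domain-name:value = '{domain}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

# Hypothetical indicator, serialized as it would travel over a TAXII channel.
ioc = make_indicator("fake-results-portal.example",
                     "Domain impersonating an election results site")
print(json.dumps(ioc, indent=2))
```

In production the object would be pushed to a TAXII collection rather than printed, and typically enriched with confidence and labeling fields, but the shape above follows the STIX 2.1 Indicator schema's required properties.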
🧭 Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | Medium — Algeria holds elections and faces regional disinformation risks; AI-powered manipulation tools are globally accessible and increasingly used across the MENA region |
| Infrastructure Ready? | No — Algeria lacks a dedicated election cybersecurity agency, platform accountability framework, or systematic disinformation monitoring capability |
| Skills Available? | No — election cybersecurity and AI disinformation detection are specialized fields with no established domestic expertise in Algeria |
| Action Timeline | 12-24 months — building institutional awareness and basic defensive frameworks before the next electoral cycle |
| Key Stakeholders | Ministry of Interior, ANIE (electoral authority), ARPCE, media regulators, civil society organizations, social media platforms operating in Algeria |
| Decision Type | Strategic |
Quick Take: The 2024 mega-election cycle proved that AI-powered disinformation and infrastructure attacks are now standard threats to democratic processes worldwide. Algeria’s electoral infrastructure, social media landscape, and institutional preparedness are not immune to these threats. Learning from defensive frameworks that worked in the US (CISA), EU (DSA), and Taiwan (humor-over-rumor) and adapting them to Algeria’s context is a strategic priority that should not wait for the next electoral cycle.
Sources & Further Reading
- CISA Election Security — #PROTECT2024 Campaign
- Microsoft Threat Analysis Center — Election Threat Reports 2024
- EU Digital Services Act — TikTok Election Integrity Proceedings
- UK Electoral Commission — Cyber Incident Public Notification
- FCC Forfeiture Order — AI-Generated Biden Robocalls
- Jakarta Post — KPU Websites Face Extraordinary Cyberattacks on Voting Day
- Foreign Policy — Taiwan’s Electoral Anti-Disinformation Strategy
- Harvard Kennedy School Misinformation Review — The Slovak Deepfake Case
- The Hacker News — Romania Cancels Presidential Election Results
- EU Register — First DSA Fine Imposed on X (EUR 120M)
- GEOpolitics — Russian Interference in Moldova, Romania, and Georgia 2024 Elections
- Meta — Preparing for Indian General Elections 2024