In September 2023, IBM announced it would train 2 million people in artificial intelligence by the end of 2026. The pledge came in a press release from IBM’s Chairman and CEO, framed as a response to the global AI skills gap and citing an estimate that 40% of the workforce would need reskilling within three years. Cisco followed with a commitment to train 25 million people in cybersecurity and digital skills by 2032. SAP joined the AI-Enabled ICT Workforce Consortium co-led by Cisco, alongside Accenture, Google, Intel, IBM, and Microsoft. Accenture committed to training all of its approximately 700,000 employees on generative AI. In aggregate the numbers are staggering: taken together, these companies have pledged to train or reskill somewhere between 50 and 100 million people over the next decade.
The question researchers and workforce policy experts are now asking is specific and uncomfortable: what does “training” actually mean in these pledges, and how many learners end up with better jobs? The answer matters beyond PR scorekeeping. If the pledges primarily produce completion certificates for short-form online modules with no documented employment outcome, they function as marketing while the actual skills gap widens. If they produce verifiable, competency-based credentials that employers use in hiring decisions, they are among the most cost-effective workforce interventions in recent history.
The evidence in 2026 is mixed, and the nature of that mix is itself instructive for anyone designing or evaluating corporate training programs.
The Counting Problem at the Core of Every Pledge
What Counts as “Trained”
IBM’s 2-million-AI-learners figure is drawn from IBM SkillsBuild, the company’s free online platform. IBM has previously reported that its broader 30-million-by-2030 skilling program (which predates the AI-specific pledge) had reached approximately 16 million learners globally by late 2025. The headline number aggregates completions ranging from 90-minute modules to 12-week structured programs. A learner who completes a single 90-minute “Introduction to AI” course and receives a digital badge counts toward the total in the same way as a learner who completes the full IBM AI Engineering Professional Certificate track (approximately 160 hours).
This is not a minor definitional point. The Cisco AI Workforce Consortium’s 2025 annual report makes the same structural choice: it counts people who accessed training resources against the headline commitment, rather than people who demonstrated competency through a standardized assessment and subsequently changed their employment status. Cisco’s own research — published simultaneously — found that 78% of ICT roles now include AI technical skills in their job descriptions, suggesting the demand side of the market is outpacing the supply side even within companies running active training programs.
The Accenture Benchmark — and Its Limits
Accenture’s pledge to train all 700,000 employees on generative AI is the most internally auditable of the major commitments, because it refers to an existing workforce rather than an open global enrollment. Accenture has also been the most specific about what training means: access to internal AI tools, AI literacy courses, and role-specific upskilling tracks built around the AI tasks employees are expected to perform. The company has published that 100% of its employees have completed at least foundational AI literacy as of early 2026 — the only major pledger to have explicitly closed a stated commitment.
What this benchmark reveals is both encouraging and cautionary. Training 700,000 employees on AI literacy in under three years, with employer control over the learning environment, mandatory completion, and management reinforcement, is achievable. Training 25 million external learners on platforms they access voluntarily, with no employer enforcement and no career incentive attached to completion, produces dramatically different completion and retention dynamics.
The Credential Verification Gap — and the Employers Who Are Closing It
Signal 1: Employer Recognition Is the Missing Link
The most structurally important accountability question is not whether IBM trained 2 million people — it is whether the companies hiring software engineers, data analysts, and AI product managers use IBM SkillsBuild credentials as a positive signal in hiring decisions. Evidence here is thin. Credly (the credential verification platform that IBM uses) has published that digital credentials from IBM SkillsBuild are among the most widely shared on LinkedIn globally, but sharing rate is not the same as hiring weight.
A small but growing number of employers — particularly in MENA, Southeast Asia, and Latin America, where local university systems produce graduates with strong theoretical credentials but limited practical AI tool exposure — have begun using IBM and Cisco credentials as filtering signals for junior technical roles. This is meaningful at the margin. It is not yet a systemic shift.
Signal 2: The SAP Enterprise Track Is the Most Employment-Linked
SAP’s contribution to the AI Workforce Consortium is the most directly employment-relevant of the major pledges, because SAP skills are functional requirements for specific enterprise roles rather than general AI literacy. Completing an SAP Business AI Learning Journey and passing the associated certification exam is a prerequisite for roles at SAP partner companies and at large enterprises running S/4HANA. The employment pipeline is visible: SAP partner ecosystem firms (system integrators, consultancies, managed service providers) hire certified SAP consultants into defined roles. This makes the SAP training commitment the most accountable at the individual level — there are real jobs at the end of the credential.
Signal 3: The 78% ICT Role Shift Creates Genuine Urgency
Cisco’s AI Workforce Consortium finding that 78% of ICT roles now include AI technical skills in their descriptions is a concrete market signal that transcends the counting debate. Even if the training pledge mechanics are imperfect, the labor market is moving. Network engineers who cannot demonstrate familiarity with AI-driven network management tools, help desk professionals who cannot use AI triage tools, and data center operators who cannot interpret ML-based anomaly detection dashboards are facing role obsolescence regardless of how IBM counts its learners.
What Enterprise L&D Leaders Should Do About It
1. Audit Credential Depth Before Integrating Into Hiring Frameworks
Not all badges are equal. Before integrating IBM, Cisco, or SAP credentials into hiring or promotion criteria, L&D directors should verify three things: Does completion require a proctored assessment with a passing score? Does the course content map to specific job tasks rather than abstract AI awareness? Does the issuer publish a skills ontology that HR can translate into job description language? IBM’s AI Engineering Professional Certificate and SAP’s S/4HANA AI certification both meet these criteria; many of the shorter free badges do not. Applying this filter before piloting protects the credibility of the entire credential-based screening program.
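The three-question depth audit lends itself to a simple pre-screening check. A minimal sketch follows; the `Credential` record and its field names are hypothetical illustrations, not any vendor’s actual schema, and an L&D team would populate the flags from the issuer’s published credential documentation:

```python
from dataclasses import dataclass

@dataclass
class Credential:
    # Hypothetical record of a credential's published attributes.
    name: str
    proctored_assessment: bool  # completion requires a proctored, scored exam
    task_mapped: bool           # content maps to specific job tasks
    skills_ontology: bool       # issuer publishes a skills ontology for HR

def passes_depth_audit(c: Credential) -> bool:
    """All three depth criteria must hold before the credential
    enters hiring or promotion frameworks."""
    return c.proctored_assessment and c.task_mapped and c.skills_ontology

candidates = [
    Credential("AI Engineering Professional Certificate", True, True, True),
    Credential("Intro to AI (90-minute badge)", False, False, False),
]
approved = [c.name for c in candidates if passes_depth_audit(c)]
# Only the assessed, task-mapped credential survives the filter.
```

The point of encoding the filter, even this crudely, is that it forces the audit to happen per credential rather than per vendor: a rigorous certificate and a 90-minute badge from the same issuer get different answers.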
2. Map Credentials to Role Gaps Before Announcing the Program
The most common failure mode in corporate AI training is enrolling engineers in credentials that don’t connect to a real role change, promotion, or project assignment. For every credential program you consider launching, start with the role gap: which two or three job categories are you actually trying to staff or promote into over the next 12 months? Then work backward from the role’s task requirements to the credential that best covers them. The Cisco AI Workforce Playbook provides a publicly available job-role-to-skill mapping that L&D teams can adapt. The BCG finding that companies with the most value from AI also have the most ambitious upskilling programs suggests the sequencing matters: strategy first, credentials second.
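Working backward from role requirements to credentials can be framed as a simple coverage comparison. The sketch below assumes a team has already written down the role’s required skills and each candidate credential’s covered skills; all skill names and credential names here are made up for illustration:

```python
# Illustrative role gap: skills required for a target role over the next 12 months.
role_gap = {
    "junior data analyst": {"sql", "prompt engineering", "ml basics", "data viz"},
}

# Illustrative credential -> covered-skills mapping (not real vendor data).
credential_skills = {
    "vendor_ai_cert_a": {"prompt engineering", "ml basics", "python"},
    "vendor_analytics_cert_b": {"sql", "data viz", "ml basics"},
}

def best_credential(required: set, catalog: dict) -> tuple:
    """Return the credential covering the largest share of the role's
    required skills, plus its coverage ratio."""
    scored = {name: len(required & skills) / len(required)
              for name, skills in catalog.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

name, coverage = best_credential(role_gap["junior data analyst"], credential_skills)
# coverage is the fraction of the role gap the chosen credential closes.
```

A coverage ratio well below 1.0 is itself useful output: it shows which skills no candidate credential addresses and therefore must be trained some other way.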
3. Use the Accenture Closed-Loop Model as the Accountability Benchmark
Accenture is the only major pledger to have explicitly closed a stated training commitment — 100% of its 700,000 employees have completed foundational AI literacy as of early 2026. The structural reason is employer control: mandatory completion, management reinforcement, and role-specific tracks rather than voluntary self-enrollment. IBM SkillsBuild and Cisco Networking Academy provide the content infrastructure. The organizations that convert that infrastructure into measurable capability gains are the ones that layer employer-side accountability — mandatory completion, manager coaching, and assessed application to a real work task — on top of the free vendor content. Without that layer, completion rates on voluntary corporate e-learning programs typically fall below 15%.
4. Track Employment Outcomes, Not Completion Certificates
The accountability gap in every major tech pledge is the missing link between training completion and employment outcome. Enterprises running internal AI training can close this gap by tracking two metrics the vendor never will: (1) the percentage of employees who complete a credential and are assigned to AI-relevant projects within 90 days, and (2) the performance differential (productivity, output quality, retention) between credentialed and non-credentialed employees in equivalent roles. Tracked over two to three cohorts, these metrics tell you whether the training program is producing capability gains or merely compliance checkboxes. The pledge numbers are marketing. Your own cohort data is real infrastructure.
The Structural Lesson for Enterprise L&D Leaders
The accountability check on big tech AI pledges in 2026 yields a specific finding: counting learners is not the same as building capability, and building capability is not the same as improving employment outcomes. The companies whose pledges are most likely to produce employment-level impact are those that attach credentials to specific role requirements, work with employers to make credentials function as hiring filters, and invest in the assessment infrastructure that distinguishes completion from competency.
For L&D directors and HR leaders at companies evaluating whether to integrate IBM, Cisco, or SAP credentials into their own hiring and promotion frameworks, the practical implication is a three-step evaluation: verify that the credential requires demonstrated competency (not just module completion), identify which of your open roles the credential maps to, and pilot the credential as a screening filter for one cohort before scaling. The pledge numbers are marketing. The credential architecture — when it works — is real infrastructure. Distinguishing between the two is the professional skill that 2026’s AI reskilling moment demands.
Frequently Asked Questions
Q: Is IBM SkillsBuild free and accessible globally?
Yes. IBM SkillsBuild (skillsbuild.org) is free, open globally without geographic restrictions, and requires only email registration. Courses are available in multiple languages including French. Completions generate Credly-verified digital badges that can be shared on LinkedIn. There is also an organizational enrollment path for companies that want team-level tracking of progress.
Q: What is the Cisco AI Workforce Consortium and how is it different from the Networking Academy?
The AI-Enabled ICT Workforce Consortium is a research and advocacy body co-led by Cisco with IBM, SAP, Accenture, Google, Intel, Microsoft, and others. It publishes annual workforce reports and an AI Workforce Playbook used by HR professionals to design job frameworks. The Cisco Networking Academy (netacad.com) is the actual training platform for learners — it offers courses in AI, cybersecurity, and networking. The two are related but distinct: the Consortium analyzes the market and sets frameworks; the Academy delivers training.
Q: How do I verify whether a candidate’s IBM or Cisco credential is legitimate?
Credly (credly.com) is the verification platform for IBM SkillsBuild badges. Each digital badge has a unique URL that employers can check — it shows the issuer, the date, the specific learning objectives completed, and whether the credential is still valid. Cisco certifications are verified through the Cisco Certification Verification Tool at cisco.com/go/verifycertificate. SAP certifications are verifiable via the SAP Global Certification Digital Badge program. All three are legitimate, auditable credentials — the quality differentiation is in the curriculum rigor and assessment depth behind the badge, which varies significantly by course.
Sources & Further Reading
- IBM Commits to Train 2 Million in AI by 2026 — IBM Newsroom
- AI Workforce Consortium: 78% of ICT Roles Include AI Skills — Cisco Investor Relations
- ICT in Motion: 2025 AI Workforce Consortium Full Report — Cisco PDF
- Cisco-Led Big Tech Consortium Addresses the AI Skills Gap — Network Computing
- IBM Pledges Free AI Training to 2 Million Workers by 2026 — Futurum Group
- BCG: AI Transformation Is a Workforce Transformation