
The AI Readiness Crisis: Why 74% of Companies See No Value From Their AI Investments

February 27, 2026


There is a number circulating in boardrooms that should alarm every technology leader on the planet. According to McKinsey’s latest global AI survey, 74% of companies report no tangible value from their AI investments — up from 70% the previous year, even as average spending has doubled.

That number does not describe a technology failure. The models work. The platforms are mature. The tools are available. What it describes is an organizational failure so widespread that it has become the defining challenge of enterprise AI in 2026: the vast majority of companies investing in artificial intelligence have not done the foundational work required to extract value from it.

The Numbers That Don’t Add Up

Taken individually, the data points from the major consultancies tell a story of enthusiastic adoption. Taken together, they reveal a crisis.

Deloitte’s 2026 State of AI in the Enterprise report, surveying more than 3,000 leaders across 24 countries, found that 72% of organizations have started deploying AI models. That sounds like progress. But the same report found that 84% have not redesigned jobs around AI capabilities. Only 21% have a mature model for agent governance. And only 20% of executives are fully confident their data is AI-ready.

Salesforce surveyed 2,000 senior IT leaders and found that 86% believe agentic AI is coming within one to three years. That sounds like strategic awareness. But 74% say their organization is not ready for it. Leaders see autonomous agents as inevitable. They are already paying for them. They have not solved the foundational question of how those agents will integrate into the way work actually happens.

And then the McKinsey number that ties it all together: 74% reporting no tangible value, even as investment doubles. The money is flowing in. The returns are not materializing. And the gap between spending and value is widening, not narrowing.

Investment Without Infrastructure

The pattern is consistent across industries and geographies. Organizations are purchasing AI tools at an unprecedented rate while systematically underinvesting in the organizational infrastructure those tools require.

Deloitte found that while 72% of companies have deployed AI models, data architecture investment has lagged model deployment by 40% in a single year. Companies are buying hammers while neglecting to build the house those hammers are meant to work on.

Consider the practical implications. An AI agent designed to support a sales team needs real-time access to CRM data, customer communication history, product documentation, pricing rules, competitive intelligence, and organizational policies about discounting and escalation. In most enterprises, that information lives in six or seven different systems, managed by different teams, with different access controls and different update cadences. Wiring it together into a coherent context that an AI agent can actually use is not a model problem. It is an integration, governance, and organizational design problem.
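To make the integration problem concrete, here is a minimal sketch of a context-assembly layer for such a sales agent. Every system name, field, and snippet below is hypothetical; in a real deployment each adapter would sit on top of an actual CRM, document store, pricing service, or policy repository, and the hard work would live in those adapters, not in this loop:

```python
from dataclasses import dataclass

# Hypothetical adapter for one system of record. In practice each
# `fetch` would call a different backend with its own access controls,
# update cadence, and data format.
@dataclass
class ContextSource:
    name: str
    fetch: callable  # customer_id -> list of text snippets

def assemble_context(customer_id: str, sources: list[ContextSource]) -> str:
    """Merge snippets from every source into one labeled context block
    an agent can consume. The difficulty in real enterprises is not this
    merge step but governance: who may read what, and how fresh it is."""
    sections = []
    for source in sources:
        snippets = source.fetch(customer_id)
        body = "\n".join(f"- {s}" for s in snippets) or "- (no data)"
        sections.append(f"## {source.name}\n{body}")
    return "\n\n".join(sections)

# Toy stand-ins for four of the six or seven systems named above.
sources = [
    ContextSource("CRM", lambda cid: [f"Account {cid}: renewal due Q3"]),
    ContextSource("Email history", lambda cid: ["Complained about onboarding delay"]),
    ContextSource("Pricing rules", lambda cid: ["Max discount without approval: 10%"]),
    ContextSource("Escalation policy", lambda cid: ["Churn-risk accounts go to a senior rep"]),
]

context = assemble_context("ACME-42", sources)
print(context)
```

The point of the sketch is what it leaves out: each lambda hides an integration, access-control, and freshness problem that no model upgrade solves.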

Most companies have skipped this step. They deploy the AI tool, point it at whatever data happens to be most accessible, and wonder why the outputs feel generic, disconnected from organizational reality, or outright wrong.

The Microsoft Copilot Warning

The Microsoft Copilot story is the clearest large-scale illustration of what happens when you deploy AI capability without organizational readiness.

When Microsoft launched Copilot in late 2023, the sales campaign was extraordinary. Within months, 85% of Fortune 500 companies had adopted it. It was the fastest enterprise software rollout in recent memory.

Then the numbers stopped growing. Gartner found that only 5% of organizations moved from Copilot pilot to larger-scale deployment. Roughly 3% of the total Microsoft 365 user base became paid Copilot users. Bloomberg reported that Microsoft slashed internal sales targets after the majority of its sales force missed their quotas.

Inside the companies that had signed six-figure Copilot contracts, employees pushed back. Online forums filled with accounts from engineers and knowledge workers at major enterprises describing their organizations downgrading licenses because staff preferred other AI tools — ChatGPT, Claude — or simply did not find Copilot useful enough to justify the disruption.

The common explanation focuses on user experience issues and model quality. Those are real problems. But they are not the root cause. The root cause is that most organizations deployed Copilot without defining how it should integrate into existing workflows, what organizational data it should access, what decisions it should influence, what quality standards should apply to its outputs, or how its use should be governed.

The result was technically functional AI producing organizationally meaningless output. Meeting summaries that missed what actually mattered. Document drafts that did not match the company’s voice or standards. Code suggestions that were syntactically correct but architecturally wrong. Recommendations that made sense in isolation but contradicted organizational strategy.

Copilot was not a bad product deployed into ready organizations. It was a capable product deployed into organizations that had done none of the preparatory work required to make any AI tool useful at scale.


The 84% Problem: Jobs Designed for a Pre-AI World

The Deloitte finding that 84% of companies have not redesigned jobs around AI capabilities is perhaps the most consequential data point in the entire readiness landscape.

When AI is layered onto a job that was designed entirely for human execution, one of two things happens. Either the AI becomes an awkward appendage — technically available but practically ignored, because the workflow was not designed to incorporate it — or the AI disrupts a workflow that has no framework for incorporating its contributions, creating confusion, duplication, and organizational friction.

Neither outcome produces value. Both are common.

Redesigning jobs for AI does not mean replacing humans with agents. It means rethinking how work is decomposed, which components benefit from AI augmentation, which require human judgment, and how the handoffs between human and machine intelligence are structured.

A customer service role in an AI-augmented organization looks fundamentally different from a customer service role in a traditional one. The human agent is no longer handling the high-volume routine inquiries — the AI manages those. The human agent is handling the complex, emotionally sensitive, high-stakes interactions that require judgment, empathy, and institutional knowledge. The skills required are different. The training is different. The performance metrics are different. The career path is different.

Almost no one has done this work. Organizations have added AI to existing job descriptions without rethinking what those jobs should be. They have given knowledge workers AI assistants without redefining what knowledge work means when an assistant can draft, summarize, analyze, and retrieve faster than any human. They have deployed AI into workflows designed for a world without it and then wondered why the value is not materializing.

The Three-Layer Readiness Gap

The organizational readiness crisis operates across three interconnected layers, and failure at any one of them is sufficient to prevent value creation.

Layer 1: Data and Context Infrastructure

AI systems can only be as good as the information they can access. In most enterprises, critical data is fragmented across dozens of systems with inconsistent formats, incompatible access controls, and no unified retrieval layer. Only 14% of organizations have implemented a fully unified data strategy, according to Deloitte. The rest are running AI on partial, inconsistent, often stale information.

This is not a technical problem in the sense that the technology to solve it does not exist. Integration platforms, data lakes, knowledge graphs, and RAG architectures are all mature. It is an organizational problem. Data unification requires cross-departmental coordination, governance decisions about access and privacy, and sustained investment in infrastructure that is invisible to end users. It is difficult, unglamorous work. And it is being systematically deprioritized in favor of flashier AI deployments.

Layer 2: Workflow and Collaboration Design

Even with good data, AI tools need to be embedded into coherent workflows. This means defining, for each major work process, where AI contributes, where humans contribute, how handoffs work, and what quality controls apply.
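One way to make those definitions explicit is to encode each step of a process with an owner and a quality gate, so handoffs are designed rather than improvised. A minimal sketch; the step names, gate, and threshold are illustrative, not a prescribed framework:

```python
from dataclasses import dataclass
from enum import Enum

class Owner(Enum):
    AI = "ai"
    HUMAN = "human"

@dataclass
class WorkflowStep:
    name: str
    owner: Owner
    # Quality gate: a predicate the step's output must pass.
    # A failed gate hands the step to a human instead of silently
    # passing bad output downstream.
    gate: callable = lambda output: True

def run_workflow(steps, execute):
    """Run each step via `execute(step)`, routing gate failures to a
    human. Returns a log of (step name, owner that actually handled it)."""
    log = []
    for step in steps:
        output = execute(step)
        owner = step.owner
        if owner is Owner.AI and not step.gate(output):
            owner = Owner.HUMAN  # explicit, designed handoff
        log.append((step.name, owner))
    return log

steps = [
    WorkflowStep("draft summary", Owner.AI, gate=lambda o: len(o) > 20),
    WorkflowStep("approve summary", Owner.HUMAN),
]
# Simulate an AI draft that is too short to pass its gate.
log = run_workflow(steps, execute=lambda s: "too short")
print(log)
```

The design choice worth noticing is that the handoff rule lives in the workflow definition, not in any individual employee's habits, which is what makes AI usage measurable and governable at scale.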

Most organizations have no framework for this. AI tools are adopted bottom-up by individual employees or pushed top-down by executive mandate, with no systematic thinking about how they fit into the work itself. The result is fragmented, inconsistent AI usage that produces occasional value but cannot be scaled, measured, or governed.

Layer 3: Organizational Intent and Governance

The deepest layer of the readiness gap is about purpose. When an AI agent makes a decision — which customer inquiry to prioritize, what information to include in a summary, how to trade off speed against thoroughness — that decision needs to reflect the organization’s actual values, not just the most easily measurable metric.

Only 21% of organizations have a mature model for agent governance. The remaining 79% are deploying increasingly autonomous AI systems without clear frameworks for what those systems should optimize for, what boundaries they should respect, or how their performance should be evaluated against organizational objectives.
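A governance framework in this sense is partly just explicit code: decision boundaries and escalation rules checked before an agent's action executes. A toy sketch, with entirely invented action types and limits:

```python
def govern(action, limits, escalate):
    """Check a proposed agent action against explicit boundaries.
    `action` is a dict such as {"type": "refund", "amount": 500}.
    Anything outside the defined limits, including action types with
    no defined boundary at all, is escalated rather than executed."""
    limit = limits.get(action["type"])
    if limit is None or action.get("amount", 0) > limit:
        escalate(action)
        return "escalated"
    return "approved"

limits = {"refund": 200, "discount": 50}  # invented boundaries
escalations = []

print(govern({"type": "refund", "amount": 120}, limits, escalations.append))  # approved
print(govern({"type": "refund", "amount": 900}, limits, escalations.append))  # escalated
print(govern({"type": "delete_account"}, limits, escalations.append))         # escalated: no boundary defined
```

The third case is the one most organizations miss: a mature governance model defaults to escalation for actions nobody anticipated, rather than letting the agent improvise.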

This is how you get Klarna’s outcome: an AI agent that was spectacularly successful at resolving tickets fast while simultaneously destroying the customer relationships the company actually depends on.

Why the Crisis Is Accelerating

The readiness crisis is not stable. It is accelerating, because the capability of AI systems is advancing faster than organizational capacity to absorb them.

Agentic AI — systems that can operate autonomously over extended periods, making decisions, coordinating with other agents, and executing multi-step workflows without human oversight — is arriving at precisely the moment when most organizations have not mastered the basics of AI integration.

Deloitte found that 79% of executives believe agentic AI will significantly improve decision-making within three years. The optimism is not unfounded. The capabilities are real. But decision-making improvement requires the agent to understand what good decisions look like in a specific organizational context. Currently, almost nobody has that infrastructure.

The coming wave of autonomous agents will amplify both the upside and the downside. Organizations with strong readiness infrastructure — good data, coherent workflows, explicit intent frameworks — will see genuine productivity gains and competitive advantage. Organizations without that infrastructure will deploy autonomous agents that optimize brilliantly for the wrong objectives, at scale, for weeks or months before anyone notices.

What Readiness Actually Requires

The path from the current state to genuine AI readiness is not mysterious. It is demanding.

Data unification is non-negotiable. Before deploying any AI system at scale, organizations need a coherent data architecture that gives AI access to the full context it needs to make good decisions. This is the foundation. Without it, nothing else works.

Jobs must be redesigned, not merely augmented. Adding AI to existing job descriptions is not transformation. It is decoration. Every major role that will interact with AI needs to be rethought from first principles: what components are best handled by AI, what components require human judgment, and how the collaboration is structured.

Governance must precede deployment. The sequence matters. Organizations need governance frameworks — decision boundaries, escalation protocols, quality standards, intent parameters — before deploying autonomous agents, not after the damage is done.

Measurement must evolve. Traditional KPIs measure task completion. AI-era measurement must also capture alignment: whether the AI’s decisions served the organization’s broader objectives, not just the immediate ones.
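Concretely, that means scoring an agent on more than one axis. A toy sketch of a two-axis metric; the fields and weighting are invented for illustration, not a standard:

```python
def score_agent(tickets, alignment_weight=0.5):
    """Blend task completion (was the ticket resolved?) with alignment
    (did the resolution preserve the customer relationship?).
    Each ticket: {"resolved": bool, "customer_retained": bool}."""
    if not tickets:
        return 0.0
    completion = sum(t["resolved"] for t in tickets) / len(tickets)
    alignment = sum(t["customer_retained"] for t in tickets) / len(tickets)
    return (1 - alignment_weight) * completion + alignment_weight * alignment

# A Klarna-style failure mode: every ticket closed, half the customers lost.
tickets = [
    {"resolved": True, "customer_retained": True},
    {"resolved": True, "customer_retained": False},
]
print(score_agent(tickets))  # 0.75: perfect completion masks an alignment problem
```

A completion-only KPI would score this agent 1.0; the blended score surfaces the gap between tickets closed and objectives served.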

The 74% of companies reporting no tangible value from AI are not failing because the models are inadequate. They are failing because they deployed capable technology into organizations that were not ready for it. The technology will continue to advance. The readiness gap will continue to widen. And the organizations that close the gap first will find themselves with a competitive advantage that compounds over time — because once your organizational infrastructure can absorb AI capability effectively, every improvement in the models translates directly into business value.

For the 84% who have not yet redesigned jobs for AI: the clock is running. Agentic systems do not wait for organizational readiness. They arrive, they are deployed by eager executives with budget authority, and they begin optimizing. The only question is whether they optimize for what you actually need.


🧭 Decision Radar

Relevance for Algeria: High — Algerian organizations rushing AI adoption face identical readiness gaps, compounded by less mature data infrastructure
Infrastructure Ready?: No — most Algerian enterprises lack the unified data architectures and integration layers needed for effective AI deployment
Skills Available?: No — organizational design for AI, AI governance, and workflow redesign are not taught in Algerian university programs or professional training
Action Timeline: Immediate
Key Stakeholders: HR directors, CTOs, COOs, university curriculum designers, ANADE and startup ecosystem leaders
Decision Type: Strategic

Quick Take: Algerian organizations have a rare advantage: most are early enough in AI adoption to build readiness infrastructure before deploying at scale, avoiding the costly mistakes of Klarna and the Fortune 500 Copilot adopters. The window to get this right is narrow.


