The Paper That Named the Problem

In February 2026, two of Microsoft’s most respected engineering voices published a paper in Communications of the ACM that gave a name to something the industry had been feeling but struggling to articulate. Mark Russinovich, CTO of Microsoft Azure, and Scott Hanselman, a veteran developer advocate at Microsoft Core AI, co-authored “Redefining the Software Engineering Profession for AI,” examining how AI coding agents are reshaping the software engineering workforce. Their conclusion was alarming: AI is making senior engineers dramatically more productive while simultaneously undermining the mechanisms through which junior engineers develop expertise.

They called the phenomenon “AI drag,” a term that has quickly entered the industry lexicon. The concept is straightforward but its implications are profound. When senior engineers use AI coding agents, they benefit enormously. They have the contextual knowledge to direct agents effectively, evaluate output critically, and integrate AI-generated code into complex systems. Their productivity rises measurably. The 2025 Stack Overflow Developer Survey confirmed the pattern: 84 percent of developers now use or plan to use AI tools, and developers with 10 to 19 years of experience were the most likely (84 percent) to cite productivity gains, higher than any less-experienced cohort. But when junior engineers use the same tools, something different happens. They get answers without building understanding. They ship code they cannot debug. They develop a dependency on AI assistance that looks like competence on the surface but collapses under scrutiny.

The paper drew on internal data from Microsoft’s engineering organization as well as external research, including a Harvard study by researchers Seyed M. Hosseini and Guy Lichtinger that tracked 62 million workers across 285,000 U.S. firms from 2015 to 2025. The Harvard data was striking: when companies adopted generative AI tools, junior employment dropped by 9 to 10 percent within six quarters, while senior employment remained virtually unchanged. Not because the juniors were explicitly fired, but because hiring slowed, contracts were not renewed, and the organizational appetite for investing in early-career talent evaporated.

Russinovich and Hanselman did not frame this as an argument against AI tools. Both are enthusiastic advocates of AI-assisted development. Instead, they argued that the industry has adopted these tools without thinking about their second-order effects on organizational learning, and that the result is a slow-motion crisis in the engineering talent pipeline.

The Mechanics of AI Drag

Understanding AI drag requires understanding how junior engineers traditionally develop expertise. The classical model, refined over decades of software engineering practice, works roughly like this: a junior engineer receives a task slightly beyond their current ability, struggles with it, asks for help from a more experienced colleague, iterates on the solution, goes through code review, and gradually builds a mental model of how systems work. The struggle is not a bug in the process. It is the process.

AI coding agents short-circuit this cycle. When a junior engineer faces a task beyond their ability, the AI provides a working solution, often within seconds. The junior ships the code. It passes tests. The pull request gets approved (often with less scrutiny, because AI-generated code tends to look syntactically clean). The task is marked complete. But the junior has not built the mental model. They have not grappled with the tradeoffs. They have not developed the debugging instinct that comes from writing broken code and figuring out why it broke. As one industry analysis put it, the junior of 2026 needs the system-design understanding of a 2020 mid-level engineer just to be useful, because the AI handles the syntax but not the judgment.

Russinovich and Hanselman identified several specific mechanisms through which AI drag manifests. The first is what they called “comprehension bypass.” Junior engineers using AI agents can produce code that solves the stated problem without understanding the underlying systems. This works fine until something goes wrong in a way the AI did not anticipate, at which point the junior is unable to diagnose or fix the issue.

The second mechanism is “review atrophy.” Code review has historically been one of the most important learning channels for junior engineers. Reviewing a senior’s code teaches patterns. Having your own code reviewed teaches judgment. But when AI generates most of the code, reviews become perfunctory. Reviewers focus on whether the code works, not on whether the author understands it, because the author is not meaningfully the author.

The third mechanism is “mentorship displacement.” In a pre-AI world, senior engineers spent significant time helping juniors. This was costly in the short term but essential for organizational health. AI agents have replaced much of this interaction. Why spend 30 minutes helping a junior understand a design pattern when the junior can ask the AI and get a working implementation in 30 seconds? The senior saves time, the junior gets unblocked, and the organization loses a mentorship interaction that cannot be recovered.

The Harvard Data: What the Numbers Show

The Harvard study that Russinovich and Hanselman cited provides some of the most rigorous empirical evidence for the pipeline problem. The research team, led by Seyed M. Hosseini and Guy Lichtinger, tracked employment data across 285,000 U.S. firms and 62 million workers from 2015 to 2025, comparing companies that adopted generative AI tools with those that had not.

The findings painted a clear picture. At firms with high AI adoption, junior employment declined by 9 to 10 percent within six quarters relative to non-adopters, while senior employment remained largely unchanged. The mechanism was primarily through reduced hiring rather than direct layoffs. Companies simply stopped backfilling junior positions when people left, and slowed the conversion of interns to full-time offers. In wholesale and retail trade sectors, the decline was even steeper, with 40 percent fewer junior hires after AI adoption.

The broader labor market data corroborates this pattern. Entry-level developer opportunities have plummeted by approximately 67 percent since 2022, according to industry tracking. A Stanford University study found that employment among software developers aged 22 to 25 fell nearly 20 percent between 2022 and 2025, coinciding precisely with the rise of AI-powered coding tools. Overall programmer employment fell 27.5 percent between 2023 and 2025. The share of juniors and graduates in IT employment has dropped from approximately 15 percent to just 7 percent over the past three years.

Interestingly, mid-level engineering employment remained relatively stable across both the Harvard study’s cohorts. The squeeze was concentrated at the bottom of the experience spectrum: engineers with zero to three years of experience. This makes intuitive sense: mid-level engineers have enough accumulated knowledge to use AI tools productively, but junior engineers do not. A 2025 LeadDev survey reinforced this dynamic, finding that 54 percent of engineering leaders plan to hire fewer juniors specifically because AI copilots enable seniors to handle more work.

The Stack Overflow 2025 Developer Survey added a qualitative dimension to these numbers. The biggest frustration cited by 66 percent of developers was dealing with AI solutions that are “almost right, but not quite,” and 45 percent found debugging AI-generated code more time-consuming than writing code from scratch. For junior engineers without the mental models to spot subtle errors, these frustrations compound into a learning trap rather than a productivity tool.

The researchers noted that these trends, if continued, would create a significant talent gap within three to five years. A 67 percent hiring cliff in 2024 to 2026 means 67 percent fewer potential engineering leaders in 2031 to 2036. The pipeline that has historically produced mid-level and senior engineers depends on a steady flow of juniors entering the profession and developing through hands-on experience. If that flow is constricted, the downstream effects on the industry’s talent base will be severe.
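The arithmetic behind that projection can be sketched with a simple cohort model. The sketch below is illustrative only: the seven-year promotion lag and 50 percent retention rate are hypothetical assumptions, not figures from the Harvard study.

```python
# Illustrative cohort model: juniors hired in year t become potential
# engineering leaders after a fixed lag. The lag and survival rate are
# hypothetical assumptions chosen for illustration.

def leaders_pipeline(junior_hires_by_year, lag=7, survival=0.5):
    """Project potential leaders: juniors hired in year t who remain
    in the profession `lag` years later."""
    return {year + lag: int(hires * survival)
            for year, hires in junior_hires_by_year.items()}

# A steady baseline versus a 67 percent hiring cliff in 2024-2026.
baseline = {2024: 1000, 2025: 1000, 2026: 1000}
cliff = {year: int(h * 0.33) for year, h in baseline.items()}

print(leaders_pipeline(baseline))  # leaders available in 2031-2033
print(leaders_pipeline(cliff))     # the same cut, surfacing years later
```

Whatever the exact parameters, the structure of the model is the point: a hiring cut today does not show up in the senior ranks until years later, which is why the shortfall is easy to ignore until it is too late to fix.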


The Preceptor Model: Russinovich and Hanselman’s Proposed Fix

The most significant contribution of the Russinovich-Hanselman paper is not the diagnosis (which many in the industry had already intuited) but the proposed solution. They introduced what they call the “preceptor-based organization,” a model borrowed from medical education that they argue is uniquely suited to the AI-augmented engineering workplace. The paper argues that without deliberate changes to early-in-career (EiC) training, the talent pipeline of the software engineering profession faces collapse.

In medical education, a preceptor is an experienced practitioner who works directly with students or residents, supervising their clinical work while allowing them to make decisions and develop judgment. The preceptor does not do the work for the student. Instead, they observe, guide, and intervene only when necessary. The student maintains agency and ownership while having a safety net of expert oversight.

Russinovich and Hanselman propose adapting this model for software engineering. In a preceptor-based engineering organization, senior engineers are explicitly paired with junior engineers, not as occasional mentors but as structured preceptors with defined responsibilities and protected time. The key innovation is how AI agents fit into the triad.

In their model, the AI coding agent is a tool that the junior-senior pair uses together. The junior interacts with the AI, but the senior observes how they interact, what they accept and reject, how they evaluate the output, and where their understanding breaks down. The senior’s role shifts from “person who answers questions” to “person who teaches judgment.” They help the junior develop the critical evaluation skills that distinguish productive AI usage from dependency.

Concretely, this means several changes to standard engineering workflows. Code review becomes a teaching moment, not just a quality gate. The reviewer does not just check whether the code works; they ask the junior to explain it, to identify where the AI’s suggestions were suboptimal, and to articulate the tradeoffs. Pair programming sessions explicitly include AI tools, with the senior guiding the junior in how to prompt effectively, how to spot AI-generated antipatterns, and when to reject a suggestion and think from first principles.

The preceptor model also redefines what “productive” means for senior engineers. In the current paradigm, senior productivity is measured almost entirely by output: code shipped, systems built, incidents resolved. In the preceptor model, a portion of the senior’s performance evaluation is tied to the development trajectory of their junior preceptees. This is not a soft metric. It is tracked through specific milestones: the junior’s ability to independently debug production issues, their contribution quality as assessed by blind code review, and their progression on the technical career ladder.
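A structure like the one described could be tracked with a handful of fields per preceptee. The sketch below is hypothetical: the field names, rating scale, and promotion thresholds are illustrative inventions, not specifics from the paper.

```python
# Hypothetical sketch of preceptee milestone tracking; field names,
# the 1-5 review scale, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PreceptorMilestones:
    """Development markers for one junior engineer under a preceptor."""
    solo_debugs_resolved: int = 0    # production issues debugged independently
    blind_review_scores: list = field(default_factory=list)  # 1-5 ratings
    ladder_level: int = 1            # position on the technical career ladder

    def ready_for_promotion(self, min_debugs=3, min_avg_review=3.5):
        # Require both independent debugging experience and sustained
        # contribution quality as judged by blind review.
        if not self.blind_review_scores:
            return False
        avg = sum(self.blind_review_scores) / len(self.blind_review_scores)
        return self.solo_debugs_resolved >= min_debugs and avg >= min_avg_review

m = PreceptorMilestones(solo_debugs_resolved=4, blind_review_scores=[4, 3, 5])
print(m.ready_for_promotion())  # True: 4 solo debugs, average review 4.0
```

The design choice worth noting is that the metrics measure the junior's independent capability, not their AI-assisted output volume, which is exactly the distinction the preceptor model is meant to protect.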

Organizational Implications and Adoption Challenges

Implementing the preceptor model is not trivial. It requires organizations to make explicit investments in mentorship at a time when the AI-efficiency wave is pulling resources in the opposite direction. There are several practical challenges that companies considering this approach will face.

The most immediate challenge is cost. Assigning senior engineers to preceptor roles means those seniors produce less individual output. In an environment where every company is trying to maximize output per engineer (partly to justify their AI tool investments), carving out 20 to 30 percent of a senior’s time for structured mentorship feels like a step backward. The pressure is acute: Salesforce CEO Marc Benioff announced in 2025 that the company would hire “no new engineers,” and similar sentiment pervades C-suites industry-wide. Russinovich and Hanselman argue that this is short-term thinking, but the short-term pressure is real, especially for startups and mid-size companies operating under tight financial constraints.

The cultural challenge may be even more significant. Many senior engineers entered the profession because they love building things, not teaching people. Asking them to shift a substantial portion of their time to mentorship requires a cultural shift that rewards teaching, celebrates the development of others, and treats pipeline building as equally important as feature shipping.

There is also a measurement problem. The benefits of the preceptor model accrue over years, not quarters. A company that invests in preceptor programs in 2026 may not see the full talent-pipeline returns until 2029 or 2030. Most organizations operate on shorter planning horizons, and demonstrating ROI on mentorship investments is notoriously difficult.

Despite these challenges, early adopters are emerging. Microsoft itself has begun piloting preceptor-style programs in several engineering divisions, with Russinovich’s Azure organization serving as a testbed. The pilot pairs senior engineers with cohorts of two to three juniors for six-month rotations, with structured curricula, regular assessments, and explicit AI-tool training integrated into the workflow.

Several other large technology companies have expressed interest in the model. The appeal is not just altruistic. Companies that build effective preceptor programs will have a structural advantage in recruiting. In a market where junior engineers are increasingly anxious about their career development in an AI-dominated workplace, the promise of genuine, structured mentorship is a powerful differentiator.

The Productivity Paradox: More Code, Less Understanding

A UC Berkeley and Yale study published in Harvard Business Review in February 2026 tracked 200 employees at a technology company for eight months and found a pattern that directly supports the AI drag thesis. Senior engineers now handle work that previously required multiple people, and their workloads and review burdens have grown. Meanwhile, only 17 percent of AI agent users in the 2025 Stack Overflow Developer Survey agreed that agents improved collaboration within their team, the lowest-rated impact by a wide margin.

This data reveals a collaboration paradox. AI tools were expected to make teams more efficient and cohesive. Instead, they are making senior engineers more autonomous and junior engineers more isolated. The seniors produce more and mentor less. The juniors produce AI-assisted output and learn less. The team as a whole ships faster, but the organizational knowledge base erodes beneath the surface.

The MIT Technology Review documented a similar trend in late 2025, noting that AI coding is now ubiquitous but that the people closest to the technology remain deeply ambivalent about its effects on the profession. Positive sentiment toward AI tools dropped from above 70 percent in 2023 and 2024 to just 60 percent in 2025, according to Stack Overflow’s data, suggesting that the industry is beginning to reckon with costs it initially overlooked.

The Stakes: Why This Matters Beyond Tech

The AI mentorship crisis is not just a human resources problem for technology companies. It has implications that extend well beyond the industry. Software engineering is one of the most important skill pipelines in the modern economy. The systems that run finance, healthcare, transportation, energy, and government are all built and maintained by software engineers. If the pipeline that produces those engineers is compromised, the effects ripple outward.

Russinovich and Hanselman’s paper implicitly argues that AI tools are neutral instruments whose impact depends entirely on how organizations choose to deploy them. Used thoughtfully, AI coding agents can accelerate learning, provide on-demand examples, and free up senior engineers to focus on higher-order mentorship. Used carelessly, the same tools hollow out the learning process, create dependent junior engineers, and erode the organizational knowledge base.

The choice between these outcomes is not made by AI. It is made by the humans who design organizational structures, set performance incentives, and decide how to invest their senior engineers’ time. The preceptor model is one answer. There will be others. But the first step is acknowledging that the problem exists, that the current trajectory is unsustainable, and that the industry needs to be as intentional about developing human talent as it is about developing artificial intelligence.


🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: High. Algeria produces 377,000 graduates annually, with 57,700+ students in AI programs across 52 universities; the mentorship crisis directly threatens the quality of this pipeline if AI tools are adopted without structural safeguards.

Infrastructure Ready? Partial. AI coding tools (GitHub Copilot, ChatGPT) are accessible to Algerian developers, but structured mentorship programs and preceptor-model organizational designs are absent from local tech companies and universities.

Skills Available? Partial. Strong computer science foundations exist (ENSIA, ESI, USTHB), but most graduates enter a job market with limited formal mentorship structures; the risk is that AI tools substitute for human guidance that was already scarce.

Action Timeline: Immediate. Universities and the nascent Algerian tech ecosystem should integrate AI-tool literacy and structured mentorship into curricula now, before a generation of graduates becomes dependent on tools they cannot critically evaluate.

Key Stakeholders: CS faculty at ENSIA and major universities, Algerian tech startups and employers, Algeria Digital Strategy 2030 planners, the Huawei-Algeria vocational training partnership, and junior developers entering the workforce.

Decision Type: Strategic. Algeria’s young, tech-educated population (50%+ under 30) is both the asset at risk and the resource that could benefit most from a preceptor model adapted to local conditions.

Quick Take: Algeria’s massive computer science education pipeline is simultaneously its greatest strength and its point of vulnerability in the AI mentorship crisis. With over 57,000 students in AI programs and the National School of Artificial Intelligence producing specialists, the raw talent exists. But if graduates enter workplaces where AI tools substitute for human mentorship rather than complement it, Algeria risks producing a generation of developers who can prompt but cannot debug. Universities should study the preceptor model and adapt it to their project-based courses. Algerian tech companies, even small ones, should assign senior developers explicit mentorship responsibilities rather than assuming AI tools will handle junior onboarding.

Sources & Further Reading