💼 Labor & AI

A Company Reported 1.3× AI Productivity Gains. The Median Worker Got 2%.

The average hid a bimodal split: top 10% at 7.0× productivity, median at 1.02×. Six months of enterprise AI adoption data reveals that the metrics companies report to boards are designed to obscure, not illuminate.

By Nadia Kovac · Labor & AI Policy · March 13, 2026 · ☕ 9 min read

[Image: Abstract visualization of bimodal productivity distribution in a large organization]

Here is a number that appeared in a quarterly board presentation at a large technology company: 1.3× average productivity improvement from AI tools.

The board approved expanded AI investment. The CFO cited the number in the earnings call. Analysts repeated it. It became one of those figures that gets passed around conference keynotes until everyone treats it as settled fact.

Here is what the number actually meant. The top 10% of workers — the ones who were already the most productive — hit 7.0× output. They had reorganized their entire workflow around AI agents, automated their repetitive tasks, and were producing at a pace that made the metrics look transformational. The median worker, the person in the middle of the distribution, gained 2%. Two percent. Barely distinguishable from noise.

The average was real. The average was also a lie.
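The gap between the two numbers is plain arithmetic: a thin right tail drags a mean far above the median. A quick sketch with illustrative parameters — chosen so the mean lands near 1.3× while the median sits near 1.0×; the distribution the company reported was even more skewed than this:

```python
# Illustrative only: how a small group of extreme gainers produces a
# board-friendly mean the median worker never experiences.
# The parameters below are assumptions, not the company's actual data.
import random
import statistics

random.seed(0)

# 90% of workers cluster near no gain; 10% see large multiples.
multipliers = [random.gauss(1.0, 0.05) for _ in range(900)]
multipliers += [random.gauss(4.0, 0.8) for _ in range(100)]

mean = statistics.mean(multipliers)      # what the board deck reports
median = statistics.median(multipliers)  # what the typical worker sees
print(f"mean:   {mean:.2f}x")
print(f"median: {median:.2f}x")
```

Both numbers describe the same workforce. Only one of them describes a typical worker.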

The Bimodal Problem

This pattern — extreme gains at the top pulling up an average that represents almost nobody — keeps showing up in enterprise AI data. Boston Consulting Group’s 2023 study with Harvard found that consultants using GPT-4 improved performance by 40% on average. But the gains were wildly uneven: top performers got modest help, while the bottom half got the biggest lift. Different distribution, same lesson. The average was technically correct and practically useless.

Microsoft’s 2024 Work Trend Index reported that 75% of knowledge workers use AI. What the headline didn’t say: UpGuard found 80%+ of workers use unapproved tools that IT can’t see, can’t audit, and can’t measure. The company reports productivity from the tools it deployed. The workers are productive on tools the company doesn’t know about.

When you model this at scale across an 80,000-person organization — tracking individual trajectories rather than aggregates — the bimodal split produces consequences that averages can’t capture.

| Metric | Board presentation | Actual distribution |
| --- | --- | --- |
| Productivity multiplier | 1.3× average | Top 10%: 7.0× / median: 1.02× |
| Shadow AI usage | "Under control" | 80%+ using unapproved tools (UpGuard) |
| Manager span | "Optimized" | 4.4 → 4.9 average, some 12+ (Pave) |
| Knowledge retention | Not tracked | 47.1% reemployment in info sector (BLS) |
| AI agent reliability | "Scaling well" | 68% need human help within 10 steps (UC Berkeley) |

The Quiet Disappearance

The dominant displacement mechanism in 2025-2026 is not layoffs. It is non-replacement.

Someone quits. Their work gets absorbed by remaining staff plus AI tools. The requisition never reopens. No severance. No WARN Act filing. No headline. Shopify’s CEO Tobi Lütke told employees in April 2025 that no team could request new headcount without first proving the work couldn’t be done by AI. McKinsey cut 2,000 positions through attrition-without-backfill. Meta’s “Year of Efficiency” eliminated 21,000 roles, then selectively rehired for AI-focused positions at one-third the volume. IBM told Bloomberg it expected to pause hiring for 7,800 back-office roles that AI could handle.

This is displacement that looks, statistically, like voluntary turnover. BLS counts it as a quit, not a layoff. The worker who left shows up as a job-seeker. The position that vanished shows up as nothing at all.

Bureau of Labor Statistics data shows the information sector has the lowest reemployment rate of any industry: 47.1%. One-third of displaced tech workers left the labor force entirely. Re-hire times are bimodal too: AI/ML engineers average 1.4 months back on the market, while product managers average 4.8 and engineering managers 5.8.

The Great Flattening

Middle management is being compressed from both directions simultaneously.

Revelio Labs reports a 40% drop in middle-management job postings since 2023. Gartner projects 20% of companies will eliminate more than half their mid-management positions by 2026. Amazon cut 14,000 manager positions outright, increasing its individual-contributor-to-manager ratio by 15%. Pave data across 257,000+ managers shows average spans growing from 4.4 to 4.9 reports, with some hitting 12 or more.

At the same time, entry-level roles — the pipeline into those management positions — are collapsing. BambooHR and Korn Ferry data show entry-level's share of total hiring dropped from 43% to 28%. Junior workers aren't getting hired. Senior workers aren't getting managed. The mentorship pipeline that turns one into the other is being severed at both ends.

The span-of-control research is clear: once a manager has more than 9 direct reports, coaching effectiveness drops 25%. At 12+, it hits zero. Andy Grove’s High Output Management put the ceiling at 8. Companies running managers at 12 aren’t “flattening the org.” They’re eliminating the function while keeping the title.

The Adapts Trap

The most insidious finding has nothing to do with layoffs. It’s about the people who stayed.

Gallup’s 2024 State of the Global Workplace found 53% of employees are “not engaged” — showing up, doing the minimum, collecting the paycheck. They didn’t get fired. They didn’t quit. They adapted. And adaptation, it turns out, often looks like resignation wearing professional clothes.

When you track individual trajectories through six months of AI adoption, the “Adapts” category is the largest and the most dishonest. Survival gets counted as success. An engineer who used to build systems from scratch and now reviews AI-generated code is “adapting.” A PM who used to run user research and now validates agent outputs is “adapting.” A manager whose span went from 6 to 14 and who hasn’t had a one-on-one in three months is “adapting.”

The org chart says they’re fine. LeadDev’s 2025 survey says 46% of engineering leaders report moderate-to-critical burnout. Same people. Different measurement.

The Reversal Nobody Mentions

The strangest thing about the current AI transformation narrative is that several high-profile examples have already started failing.

Klarna replaced 700 customer service representatives with AI in 2024 and announced it publicly as a triumph. By early 2026, quality metrics had deteriorated enough to force a quiet pivot to a “VIP human” model — humans back in the loop for complex and high-value cases. Salesforce cut roughly 4,000 positions tied to its Agentforce rollout, then began quietly rehiring by late 2025 after customer backlash. Duolingo went “AI-first” on content, produced 148 new courses, and saw growth decelerate as quality became harder to maintain.

Forrester’s 2025 employer survey quantified the pattern: 55% of employers who made AI-driven layoffs reported regret within 18 months. Gartner estimates only 20% of 2025 layoffs were genuinely AI-capability-driven — the rest used AI as cover for cost-cutting that would have happened anyway.

Meanwhile, UC Berkeley and Stanford researchers found that 68% of production AI agents need human intervention within 10 steps. Task completion on multi-step workflows: 30-50%. A $47,000 recursive loop between two agents ran for 11 days at one company before anyone noticed. Microsoft considers a 1:20 human-to-agent ratio “industrial stage.” Some companies are running 1:400.
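Those completion figures are what simple compounding predicts, not evidence of catastrophically bad agents. If each step succeeds independently with probability p, a 10-step workflow completes with probability p^10. A minimal sketch (the per-step rates are illustrative, and independence is a simplifying assumption):

```python
# Sketch of reliability compounding across a multi-step agent workflow.
# Assumes independent step failures -- a simplification, not a claim
# about any particular agent framework.
def completion_rate(per_step_success: float, steps: int = 10) -> float:
    """Probability that every step in the workflow succeeds."""
    return per_step_success ** steps

for p in (0.90, 0.93, 0.96):
    print(f"per-step {p:.0%} -> 10-step completion {completion_rate(p):.0%}")
```

Even a 90-93% per-step success rate lands squarely in the 30-50% completion band the researchers observed.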

What the Decision Data Shows

One finding that surprised even the analysts tracking it: AI adoption made team-level decisions 15% faster but cross-team decisions 40% slower.

The mechanism is straightforward. Fewer meetings and more individual-contributor autonomy speeds up decisions within teams. But the coordination layer — the middle managers who used to synchronize across teams, translate priorities between departments, resolve resource conflicts — was the layer that got flattened. Without it, cross-team work degrades into email chains and escalations that eventually reach executives who don’t have context.

The aggregate metric? “Decision latency: +40%.” Which is technically true and completely misleading — the same measurement problem as the productivity average, except now it’s hiding a speed-up inside a slow-down.
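A toy model shows how slippery that aggregate is. The +15% faster / 40% slower shifts come from the article; the baseline latencies and the team/cross-team decision mix below are assumptions for illustration:

```python
# Toy model of the decision-latency aggregate. Baseline latencies and
# the team/cross-team mix are assumptions; the 15% speed-up and 40%
# slow-down are the article's figures.
def blended_change(team_base: float, cross_base: float, team_share: float) -> float:
    """Fractional change in average decision latency after adoption."""
    before = team_share * team_base + (1 - team_share) * cross_base
    after = team_share * team_base * 0.85 + (1 - team_share) * cross_base * 1.40
    return (after - before) / before

# Assume team decisions take ~1 day, cross-team decisions ~5 days.
for share in (0.9, 0.7, 0.5):
    change = blended_change(1.0, 5.0, share)
    print(f"{share:.0%} team decisions -> aggregate latency {change:+.0%}")
```

The headline number depends almost entirely on which decisions get averaged together; no single aggregate preserves the fact that within-team decisions actually got faster.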

The Bottom Line

The 1.3× productivity number is real. It will appear in earnings calls, board decks, and analyst notes. It will be used to justify the next round of headcount reductions and the round after that.

But the median worker gained 2%. Entry-level hiring is down 35%. Middle management postings dropped 40%. The people who got displaced have the lowest reemployment rate of any industry. The people who stayed are burning out at record levels. The AI agents doing the “scaled” work fail on 50-70% of multi-step tasks. And the companies that moved fastest — Klarna, Salesforce, Duolingo — are already quietly reversing course.

None of those facts will appear in the board presentation. The average will. Averages are very good at keeping their secrets.
