Half the People Getting Fired for AI Are Getting Fired for Nothing. The Other Half Should Be Terrified.
$427 billion in AI investment. $37 billion in AI revenue. A 39-percentage-point gap between how fast developers believe AI makes them and how fast it actually does. Companies are firing people for productivity gains that don't exist, and the real displacement hasn't started yet.
Eleven to one.
That's the ratio of money going into AI ($427 billion in Big Tech capital expenditure in 2025) to money coming out ($37 billion in enterprise AI revenue). Sequoia Capital flagged this in September 2024 as a "$600 billion question." A year and a half later, the question hasn't been answered. It's gotten worse.
And yet, in that same period, companies have cut over 400,000 workers with AI cited as a primary or contributing factor. Klarna eliminated 3,104 positions. Amazon cut 30,000 managers. Block fired 4,000. The stated reason, over and over: AI makes us more productive. We need fewer people.
The problem is that the productivity gains don't survive empirical scrutiny. And the companies doing the cutting mostly haven't checked.
The 39-Point Lie
In early 2026, the Model Evaluation and Threat Research Institute (METR) published the first randomized controlled trial of AI coding assistants with experienced software developers. Not students. Not interns. Developers with a median of five years on the specific codebase they were working in.
The developers using AI tools were 19 percent slower at completing tasks.
That alone would be notable. What made the study devastating was the second finding: those same developers believed they were 20 percent faster.
A 39-percentage-point gap between perceived and actual performance. Not on unfamiliar tasks, but on their own code, in their own repositories, with tools they'd chosen themselves.
METR attributed the gap to several factors. Developers spent more time setting up prompts than they saved on generation. AI-produced code required more review (review time increased 91 percent). The code that shipped had 1.7 times more bugs and a 32.7 percent pull request acceptance rate, compared to 84.4 percent without AI assistance. The tool created a feeling of velocity (auto-complete, instant generation, the dopamine of watching code appear) that masked a net throughput loss.
This isn't a study about AI being useless. It's a study about AI creating a perception of usefulness so convincing that experienced professionals couldn't detect the deception from the inside.
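The arithmetic behind the headline gap is worth making explicit. A minimal sketch, using only the two figures METR reported:

```python
# Perception vs. reality in the METR trial (a sketch; both inputs are
# the figures reported by the study, not recomputed from raw data).
perceived_speedup = 0.20   # developers believed they were 20% faster
measured_speedup = -0.19   # the RCT measured them as 19% slower

# The gap is the distance between belief and measurement, in points
gap_pp = (perceived_speedup - measured_speedup) * 100
print(f"perception gap: {gap_pp:.0f} percentage points")
```

The gap is 39 points not because either number is large on its own, but because the two numbers sit on opposite sides of zero: the tool felt like a gain while delivering a loss.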
The Productivity Numbers That Justify the Layoffs
If a 39-point perception gap sounds like an outlier, it isn't. The pattern repeats across industries and methodologies:
| Source | Claim | Reality |
|---|---|---|
| METR RCT (2026) | Devs believe +20% faster | Actually 19% slower |
| GitClear (2025) | Code output +30% | Code duplication 8×, refactoring -60% |
| Bain & Company (2025) | Executives expect transformative gains | Actual savings "unremarkable" at 10-15% |
| MIT NANDA study (2025) | 95% of enterprise AI pilots | No measurable financial return in 6 months |
| Gartner (2025) | 88% of AI projects | Never move past pilot stage |
| Stack Overflow survey (2025) | 84% of developers use AI tools | Only 33% trust the output (was 70%+) |
Notice the pattern. Adoption is almost universal (84% of developers, 88% of enterprises piloting). Trust is collapsing (33%, down from 70%+). Measured outcomes are negative or negligible. But the narrative of AI productivity is so deeply embedded in corporate strategy that companies are making hiring and firing decisions based on the adoption numbers, not the outcome numbers.
Schrodinger's Technology
Something strange is happening in AI that has no precedent in technology history. The technology is simultaneously failing at the enterprise level and succeeding at the revenue level.
Anthropic's annual recurring revenue grew from roughly $1 billion to $18-19 billion in 14 months, roughly an eighteen-fold increase. Claude now holds 40 percent of the enterprise large language model market, up from near zero. OpenAI reports 800 million weekly users. The agentic AI market is projected at $89.6 billion, growing 215 percent year-over-year.
And yet: 42 percent of enterprise AI projects have been scrapped. 95 percent show no financial return. Gartner has placed generative AI in its official "trough of disillusionment." Sparkline Capital calculates that AI revenues would need to grow more than fifty-fold, from $37 billion to roughly $2 trillion by 2030, to justify the current capital spending.
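Sparkline's claim can be sanity-checked with back-of-the-envelope growth math. A sketch; the four-year compounding horizon (2026 to 2030) is an assumption, not Sparkline's stated methodology:

```python
# Growth required to close the AI revenue gap (a sketch; the 2026 start
# and 2030 end dates are assumptions).
current_revenue_b = 37        # enterprise AI revenue today, in $B
required_revenue_b = 2000     # revenue needed by 2030, in $B
years = 4                     # assumed compounding horizon

multiple = required_revenue_b / current_revenue_b
cagr = multiple ** (1 / years) - 1   # compound annual growth rate
print(f"multiple: {multiple:.0f}x, required CAGR: {cagr:.0%}")
```

However the horizon is drawn, a $2 trillion target from today's $37 billion base implies compounding north of 100 percent a year, every year, across the entire sector.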
This isn't a contradiction. It's a familiar one. The same thing happened with the internet.
The Dot-Com Pattern
Between 1997 and 2000, the internet attracted $256 billion in venture investment. Pets.com, Webvan, and eToys became cautionary tales. The NASDAQ lost 78 percent of its value. The conventional wisdom hardened: the internet was overhyped.
Then Amazon, Google, and Facebook built the most valuable companies in human history.
The internet was overhyped in 1999. It was also underhyped by 2010. The people who lost their jobs during the dot-com bust were casualties of irrational exuberance. The people who lost their jobs to Amazon's logistics automation a decade later were casualties of real capability. The two waves were separated by roughly seven years.
AI is following the same pattern, compressed into a shorter timeline. And the consequences for workers are worse in both waves.
Wave 1: Firing People for Vibes (2024-2027)
In Wave 1, companies are cutting workers based on AI's perceived productivity, not its measured productivity. The evidence:
- Harvard Business Review and Thomas Davenport named the phenomenon "pre-emptive displacement": companies cutting based on AI's potential, not its demonstrated performance. It is nearly impossible to regulate because the displacement decision precedes any measurable AI impact.
- Klarna eliminated 3,104 positions and claimed AI replaced them. Then CEO Sebastian Siemiatkowski admitted in May 2025 that "we focused too much on efficiency and cost, the result was lower quality." The company began rehiring, this time as gig workers on degraded terms. The stock crashed 65 percent post-IPO.
- Gartner projects that 30 percent of companies that replaced workers with AI will rehire for those same roles at higher cost within 18 months: the "Rehiring Trap."
- Daron Acemoglu of MIT calculates that AI's actual GDP impact is 1.1-1.6 percent over ten years, roughly 0.11-0.16 percent annually. The METR data empirically supports his skepticism over Erik Brynjolfsson's more optimistic 2.7 percent productivity claim (noting that Brynjolfsson co-founded Workhelix, an AI consulting firm with a financial interest in the larger number).
The scale of irrational displacement is hard to pin down precisely, but converging estimates suggest 30-50 percent of current AI-attributed job cuts would not survive a rigorous audit of whether AI actually performs the displaced workers' functions better. Companies are firing people because other companies are firing people. The pressure is mimetic, not empirical.
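Acemoglu's ten-year figure annualizes cleanly. A minimal sketch, assuming the gain compounds evenly across the decade:

```python
# Annualizing Acemoglu's 1.1-1.6% ten-year GDP estimate (a sketch;
# assumes the cumulative gain is spread geometrically over ten years).
low_total, high_total = 0.011, 0.016   # cumulative ten-year GDP gain

annual_low = (1 + low_total) ** (1 / 10) - 1
annual_high = (1 + high_total) ** (1 / 10) - 1
print(f"annual GDP impact: {annual_low:.2%} to {annual_high:.2%}")
```

For scale, that is a small fraction of ordinary annual productivity growth, which is why the skeptics read the investment boom as running far ahead of the fundamentals.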
Wave 2: The Real Thing (2027-2032)
Wave 1 being partly irrational does not mean AI displacement is fake. It means the real wave hasn't fully arrived yet.
The capabilities that will drive Wave 2 are already measurable:
| Capability | Current Status | Displacement Implication |
|---|---|---|
| GPT-5.4 computer use | OSWorld 75% vs human 72.4% | Any job done on a computer is automatable |
| Enterprise API automation | 77% of Claude enterprise usage | The augmentation-to-automation flip already happened |
| Humanoid robots at $4,900 | 14 commercially available models | Physical work buffer collapsed from 10 years to 2-3 |
| Agentic AI market | $89.6B, 215% YoY growth | Multi-step autonomous workflows replacing entire functions |
| Claude enterprise share | 40% LLM market, $18-19B ARR | Revenue gap narrowing; the investment may justify itself |
Anthropic's Economic Index measured the shift: enterprise API traffic went from 57 percent augmentation to 77 percent automation between Q1 and Q3 2025. That crossover, from "help me think" to "do it for me," is the leading indicator. Phase 2 is organizations restructuring around AI capabilities, not individuals using AI tools. Phase 2 eliminates functions, not tasks.
GPT-5.4's computer use capability (released March 5, 2026) is the clearest signal. The model scores 75 percent on OSWorld, a benchmark measuring the ability to operate desktop computer applications end-to-end, exceeding the human average of 72.4 percent. This is the first general-purpose model that can operate a computer better than a typical user. The implication is simple: any job that consists primarily of operating a computer is now within the capability envelope of commercially available AI.
The Cry-Wolf Catastrophe
Here's why the two-wave pattern is worse than either wave alone.
Wave 1 creates political urgency. Workers are displaced. Media covers it. Politicians propose responses: the Warner-Hawley AI-Related Job Impacts Clarity Act, the Warner-Rounds Economy of the Future Commission, Colorado SB 24-205. The moment feels like a crisis, and crises generate action.
Then Wave 1 partially self-corrects. Gartner's 30 percent rehiring happens. Klarna-style quality reversals accumulate. Enterprise AI projects get scrapped at a 42 percent rate. The Trough of Disillusionment does its work. Some displaced workers get rehired (at degraded terms, but that's invisible to headlines). The narrative shifts: "AI displacement was overhyped."
Political capital evaporates. The Warner-Rounds Commission publishes its report. Congress files it. Colorado's enforcement framework quietly dies. The $362 billion corporate training industry pivots back to "AI reskilling" courses. Everyone relaxes.
Then Wave 2 arrives, bringing capabilities that actually work at enterprise scale with proven ROI, and nobody listens. The political system already processed AI displacement and concluded it was a false alarm. The people screaming that the next wave is real sound exactly like the people who screamed about Wave 1. The Overton window has closed.
This is the dot-com pattern mapped onto human livelihoods. In 2001, "the internet is overhyped" was correct. In 2010, the same sentence was catastrophically wrong. The difference: dot-com destroyed investor capital. The AI two-wave pattern destroys careers, communities, and whatever political response might have been built during the window that Wave 1 briefly opens.
The DOGE Rehearsal
We already have a proof-of-concept for what unmanaged displacement looks like without political response. The Department of Government Efficiency eliminated 322,000 federal positions in under a year. Congress passed $9 billion in nominal savings. The actual fiscal cost exceeded $135 billion, in part because cutting the IRS's enforcement division cost $12.6 billion in uncollected revenue, far exceeding the $4.3 billion in salary savings.
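The IRS example reduces to one line of arithmetic. A sketch using only the figures quoted above:

```python
# Net fiscal effect of the IRS enforcement cut (a sketch; both inputs
# are the figures quoted in the text, in billions of dollars).
salary_savings_b = 4.3        # enforcement salaries saved
lost_collections_b = 12.6     # revenue that went uncollected

net_b = salary_savings_b - lost_collections_b
print(f"net fiscal effect: {net_b:+.1f}B")
```

Every dollar "saved" on enforcement salaries cost roughly three dollars in uncollected revenue.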
The most relevant DOGE finding for the two-wave hypothesis: 322,000 displaced federal workers produced zero organized political response. No movement. No constituency. No march on Washington. The largest peacetime workforce reduction in American history generated a placement rate of 0.06 percent (187 of 322,000 through Civic Match). Everyone else was on their own.
If 322,000 people with a common employer, a clear villain, and a shared geographic concentration in the DC metro area couldn't organize, what happens when Wave 2 displaces millions of private-sector workers across every zip code simultaneously?
What the Evidence Actually Supports
The honest assessment isn't "AI displacement is fake" or "AI displacement is the end of work." It's both, at different times, for different workers:
- 30-50% of current AI-attributed cuts are irrational: driven by executive FOMO, stock market signaling, and productivity metrics that are wrong by 39 percentage points. These workers are casualties of a narrative, not a technology.
- The remaining 50-70% are real, accelerating, and tracking toward capabilities that will only get better. Enterprise API traffic has already crossed from augmentation to automation. Humanoid robots cost less than a used car. GPT-5.4 can operate a computer better than most humans.
- The political window opened by Wave 1's visible displacement is the only moment structural policy has a chance. Once the correction makes AI displacement seem exaggerated, that window closes, possibly permanently.
The Bottom Line
Right now, companies are simultaneously spending $427 billion on AI, firing workers for AI productivity gains that a rigorous trial shows are negative, and planning layoffs based on capabilities that haven't been deployed yet. The technology is real. The current displacement is at least partially not. And the worst outcome isn't that AI takes everyone's job โ it's that the cry-wolf cycle ensures nobody is prepared when it actually does.
The people getting fired today over productivity gains that don't exist will be the lucky ones. Gartner says 30 percent will get rehired. Some will land on their feet. The political system will process their displacement as a false alarm.
The people who lose their jobs in 2028 or 2029, when the technology actually works and the political will to respond has been exhausted by a correction that already happened? They're on their own. And there will be a lot more of them.
Sources
- METR, "Measuring the Impact of Early 2025 AI on Experienced Open-Source Developer Productivity" (2026)
- Sequoia Capital, "AI's $600B Question" (Sep 2024)
- GitClear, "Coding on Copilot: AI's Downward Pressure on Code Quality" (2025)
- Anthropic, "The Anthropic Model Spec and the Economic Index" (2026)
- Sparkline Capital, "$37B Revenue vs. $427B Capex: AI's Revenue Gap" (2026)
- Gartner, "Beyond the Hype: The State of Enterprise AI" (2025-26)
- The Guardian, "Klarna CEO Admits AI-Driven Cuts Led to Lower Quality" (May 2025)
- Acemoglu, "The Simple Macroeconomics of AI" (NBER Working Paper 32487, 2024)
- Stack Overflow, "Developer Sentiment: AI/ML Tools" (2025)
- Layoffs.fyi, Tech Layoff Tracker (2024-2026)
- MIT Sloan / NANDA, "Enterprise AI Pilot Success Rates" (2025)