
Your Coworker Replaced You With an AI Agent. Your Boss Found Out and Said Thanks.

Half of all employees are running unauthorized AI tools at work. 135,000 AI agent instances sit exposed on the public internet. Companies aren't stopping it — they're retroactively approving it. The eighth displacement mechanism isn't your employer. It's the person in the next cubicle.

By Nadia Kovac · Labor & AI Policy · March 12, 2026 · 11 min read

Forty-nine percent.

That's the share of employees using AI tools their employers never approved, according to a BlackFog survey of 2,000 workers published in early 2026. Not browsing ChatGPT on lunch break. Connecting autonomous AI agents to corporate systems — email servers, CRMs, databases, internal wikis — and letting them run.

Fifty-one percent had connected an AI tool to work systems without telling IT. Sixty-nine percent of C-suite executives had done the same.

Nobody asked HR. Nobody filed a request. Nobody told the three people in accounts receivable that their workflows were about to be handled by an agent that a senior manager in sales spun up over the weekend.

The Scale Nobody Measured

On February 9, 2026, SecurityScorecard's STRIKE threat intelligence team published findings that stopped conversations in security operations centers worldwide. Over 135,000 OpenClaw AI agent instances were exposed to the public internet. Of those, 15,200 were vulnerable to remote code execution. And 93.4 percent — nearly all of them — had authentication bypassed entirely.

OpenClaw is an open-source AI agent platform. Building a working agent on it takes about as long as setting up a Slack workspace. The agents can read emails, query databases, file reports, schedule meetings, draft documents, and execute multi-step workflows autonomously. The 135,000 figure was itself outdated within hours of publication — when STRIKE began counting, it found just over 40,000 instances. The number tripled while the report was being written.
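To make concrete how little machinery an "agent" actually requires, here is a schematic sketch in plain Python. This is not OpenClaw code and the tool names are invented for illustration; the point is that the core pattern is just a loop dispatching tasks to registered functions with no human in the loop.

```python
# Schematic illustration only: the skeleton of an autonomous task agent.
# Tool names and tasks are hypothetical examples, not a real platform's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    log: list[str] = field(default_factory=list)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        # A "tool" is any callable the agent is allowed to invoke.
        self.tools[name] = fn

    def run(self, tasks: list[tuple[str, str]]) -> list[str]:
        # Execute each (tool, payload) pair in order, unattended.
        for tool_name, payload in tasks:
            result = self.tools[tool_name](payload)
            self.log.append(f"{tool_name}: {result}")
        return self.log

agent = Agent()
agent.register("triage_ticket", lambda t: f"routed '{t}' to billing")
agent.register("draft_report", lambda t: f"drafted weekly report on {t}")

for line in agent.run([("triage_ticket", "invoice dispute"),
                       ("draft_report", "Q1 pipeline")]):
    print(line)
```

Swap the lambdas for calls into a mail server, a CRM, and an LLM endpoint, and this toy becomes the kind of weekend project the article describes.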

Meanwhile, Noma Security reported that 53 percent of their enterprise customers had employees who gave OpenClaw agents privileged access to production systems within a single weekend of launch. Not a phased rollout. Not a security review. One weekend.

This isn't a security story. Or rather, it's not just a security story. Those 135,000 agents are doing work. They're processing invoices, triaging support tickets, generating reports, managing calendars, populating spreadsheets. Work that, until recently, was done by humans whose job titles matched those tasks.

The Complicity Loop

Here's what happens in practice. I've reconstructed this from three independent enterprise case studies and the BlackFog data.

Stage one: a mid-level employee with technical inclination sets up an AI agent to handle a repetitive part of their job. Maybe it's a project manager who automates status report aggregation. Maybe it's a marketing coordinator who builds an agent to draft and schedule social posts. The employee doesn't announce it. They just start finishing work faster.

Stage two: the employee's manager notices. The manager does not file a security incident. The manager says: can you set one up for the team?

Stage three: leadership finds out. Rather than shut it down, they retroactively approve it. They call it an "innovation initiative." They present it at the next all-hands. The 69 percent C-suite adoption rate from BlackFog isn't executives leading AI transformation — it's executives following what employees already did and claiming credit.

Stage four: the organic adoption gets converted into policy. Headcount gets adjusted "naturally" — the next open req simply isn't filled. The person who left and wasn't replaced? Their work is handled by an agent their former colleague built.

I call this the Complicity Loop: discovery → approval → policy → efficiency claim. At no point does anyone file a displacement notice. At no point does anyone acknowledge that a human role was eliminated. The WARN Act doesn't trigger. Warner-Hawley reporting (if it ever passes) wouldn't capture it. It's not even a management decision — it's emergent individual behavior that management retroactively sanctions.

Peer Displacement: The Mechanism Nobody Mapped

Seven displacement mechanisms have been identified and documented over the past two years. Shadow agent proliferation makes eight:

| # | Mechanism | Initiated by | Example |
|---|-----------|--------------|---------|
| 1 | Task automation | Employer | Klarna AI chatbot replaces 700 agents |
| 2 | Hiring freeze / attrition | Employer | Shopify "prove AI can't do it" policy |
| 3 | Strategic misattribution | Employer | Companies citing "AI" for financial layoffs |
| 4 | Pre-emptive cuts | Employer | Cutting based on AI potential, not performance |
| 5 | Firm consolidation | Market | AI-native firms acquiring non-AI competitors |
| 6 | Agent-first design | Vendor | Software rebuilt for AI users, humans secondary |
| 7 | Degradation pipeline | Employer | FTE → contractor → gig → automated |
| 8 | Shadow agent proliferation | Coworker | Colleague deploys unauthorized AI agent |

Mechanisms 1 through 7 are driven by employers, vendors, or markets. Mechanism 8 is the first in which the person who initiates the displacement is a fellow worker.

That changes the politics completely.

When your employer lays you off, you have a target for your anger. You might organize. You might sue. You might at least file for unemployment. When your colleague automates your function because they were bored on a Saturday and built something clever, there's nobody to blame. The colleague didn't intend to replace you. They were just making their own life easier. Management didn't decide to eliminate your role. They just noticed it was already handled.

This is why the BlackFog number matters more than the headline suggests. 49 percent of employees using unauthorized AI isn't a compliance problem. It's a displacement mechanism operating at scale with zero institutional visibility.

The Experience Premium, Weaponized

Who builds the shadow agents? Not entry-level workers. The Dallas Federal Reserve's research on the experience premium is directly relevant: AI replaces codifiable knowledge (the kind entry-level workers are still acquiring) while complementing tacit knowledge (the kind experienced workers already have). The experienced employee knows which workflows are automatable because they've done them manually for years. They know the exceptions, the edge cases, the workarounds. They can build an agent that handles 80 percent of a function because they understand the 20 percent that requires judgment.

The junior employee can't do this. They don't know enough about the process to automate it. So they can't participate in shadow agent building — and they can't defend against it. They're displaced by a tool built by someone who understands their job better than they do.

Stanford's "Canaries in the Coal Mine" data shows a 13 percent relative employment decline for early-career workers (ages 22-25) in AI-exposed occupations. Young software developers down 20 percent from their 2022 peak. Some of that is hiring freezes. Some is pre-emptive cuts. And some of it, increasingly, is experienced colleagues who built something on a Sunday that handles what a junior team used to do on a Monday.

"AI Workers" as Budget Line Items

The shadow is becoming the policy. Jitterbit reported on March 10 that companies across its customer base showed a 53 percent surge in budgeting for "AI Workers" — treated as headcount equivalents, not software licenses. Not "AI tools." Not "automation platforms." AI Workers. Capitalized. Counted alongside humans on org charts.

At the same time, Tencent launched QClaw on March 9, embedding AI agents directly inside WeChat and QQ — messaging platforms used by over a billion people. John Rush's ListingBott runs 27 AI agents generating $2 million in annual recurring revenue. Fifteen human workers are employed by those agents — hired by the AI to handle tasks the agents can't close. Rush plans an agent-hires-human marketplace by summer.

Read that again. The agents hire the humans. Not the other way around.

Why Every Measurement Framework Fails

Warner-Hawley, if it passes, would require quarterly AI displacement reporting to the Department of Labor. The bill assumes displacement is a company decision, reported by the company, triggered by events the company controls.

Shadow agent proliferation breaks every one of those assumptions.

The company didn't decide. An employee did. The company didn't report because the company didn't know — and when it found out, it was grateful. There was no triggering event — no layoff, no restructuring, no hiring freeze. A function just quietly migrated from a human to an agent, one weekend at a time.

Anthropic's "observed exposure" methodology measures AI capability overlap with human tasks. It can't measure whether an employee has already built an agent to exploit that overlap. The Bureau of Labor Statistics tracks employment and unemployment. It can't detect that someone's job still exists on paper while an agent does the actual work.

The measurement problem that was already near-fatal for displacement policy — Klarna eliminated 3,104 positions with zero legal triggers — just got worse. At least Klarna was a company decision. Shadow agents are 135,000 individual decisions happening simultaneously.

The Security Metaphor That Isn't a Metaphor

Cybersecurity professionals have a term for this: shadow IT. Employees adopting unauthorized tools because the official ones are too slow, too restricted, or too annoying to requisition. Shadow IT has been a governance headache for two decades.

Shadow AI is shadow IT with teeth. Shadow IT meant someone used Dropbox instead of SharePoint. Annoying but manageable. Shadow AI means someone deployed an autonomous agent that now handles invoicing for three departments, has read access to the customer database, and was built by a marketing manager who watched a YouTube tutorial.

The SecurityScorecard data isn't about theoretical risk. Those 135,000 exposed instances include agents with access to enterprise databases, email systems, and internal APIs. Noma Security documented 824 confirmed malicious skills already embedded in the OpenClaw ecosystem. The surface area isn't just workforce displacement — it's corporate data exfiltration, prompt injection attacks, and autonomous agents making decisions that nobody authorized and nobody can audit.
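The kind of exposure STRIKE measured comes down to a simple question: does a host answer an admin-style endpoint when you send it no credentials at all? A crude, illustrative version of that check can be written with the standard library. The port number and path below are hypothetical stand-ins; internet-scale surveys like STRIKE's use dedicated scanning tooling, not a script like this.

```python
# Illustrative only: does a host serve an endpoint with no credentials?
# Port 18789 and /api/status are hypothetical, not a real product's defaults.
import urllib.request
import urllib.error

def answers_without_auth(host: str, port: int = 18789,
                         path: str = "/api/status",
                         timeout: float = 3.0) -> bool:
    """Return True if the host returns 200 to a request carrying no auth."""
    url = f"http://{host}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200  # 200 with zero credentials sent
    except (urllib.error.URLError, OSError, ValueError):
        # 401/403, refused connections, and timeouts all count as "no".
        return False
```

A properly configured instance would reject the anonymous request; the 93.4 percent auth-bypass figure means the overwhelming majority of exposed instances would answer it.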

But that's the security angle. The workforce angle is simpler and more uncomfortable: those agents are doing real work that real people used to do, and nobody has a number for how many jobs that represents.

The Bottom Line

Every policy framework for managing AI workforce displacement assumes the displacement is visible, deliberate, and traceable to a decision-maker. Shadow agent proliferation is none of those things. It's distributed across millions of individual employees, retroactively blessed by management, invisible to every proposed measurement framework, and growing at a rate that tripled the exposed-instance count in a single day.

The other seven displacement mechanisms have villains — companies, executives, markets. This one has collaborators. Your colleague. Your manager. You, maybe, if you've connected Claude or ChatGPT or a custom agent to your work email and let it handle things that someone in the next office used to do.

That's what makes it politically impossible to address. You can regulate a company. You can tax an employer. You can't regulate a Monday morning where someone decided to build something and it worked.

Sources