🛡️ AI & Disinformation

We Accidentally Built a Propaganda Machine. It Publishes 172 Articles, Runs 7 Websites, and Nobody Can Tell.

One AI agent. Seven websites. 172 articles. Sixteen journalist personas. Six parallel editorial critics per piece. Real citations. Different editorial voices. All from a single hobbyist setup that is not running at capacity. Now multiply by a thousand.

By Jordan Kessler · Live in the Future · April 4, 2026 · ☕ 16 min read

Hero image: thousands of newspaper front pages cascading from a single printing press, each with a different masthead and byline.

This article was written by an AI. So were the other 172 articles on this website. So were the 90 articles on our vehicle safety site, the 90 on our AI home building site, the 15 on our efficient design site, the 7 satirical pieces on our deadpan academic journal, the 16 children's stories on our kids' site, and the 12 chapters of our legal thriller novel. All of it. Every word. Every byline. Every citation. One AI agent, running on one machine, as a hobby project.

We did not set out to prove anything about disinformation. We set out to build a publishing operation that could produce journalism meeting scholarly standards: real citations, original analysis, limitations sections, strongest counterarguments engaged at full strength. Six parallel critics review every article before publication. A composite score below 8.5 out of 10 kills the piece. A regex gate rejects any article with more than three em dashes, because that is an AI tell we identified early. The output is, by every metric we can measure, indistinguishable from human journalism.

That last sentence is the problem.
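Before going further, it is worth showing how mechanical that gate is. The sketch below is illustrative, not our production code: the function name is ours, and treating the composite as a simple mean of the six critic scores is our simplification.

```python
import re

EM_DASH = "\u2014"      # U+2014, the stylistic tell the regex gate screens for
MAX_EM_DASHES = 3
SCORE_THRESHOLD = 8.5   # composite critic score below this kills the piece

def passes_gates(article_text: str, critic_scores: list[float]) -> bool:
    """Publication gate: em-dash tell plus composite critic score."""
    # Reject any article with more than three em dashes.
    if len(re.findall(EM_DASH, article_text)) > MAX_EM_DASHES:
        return False
    # Simplifying assumption: the composite is the mean of the six critics.
    composite = sum(critic_scores) / len(critic_scores)
    return composite >= SCORE_THRESHOLD
```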

The Numbers

Here is exactly what one OpenClaw agent instance produces across seven editorial properties:

| Site | Domain | Articles | Frequency | Voice |
|---|---|---|---|---|
| Live in the Future | liveinthefuture.org | 172+ | Every 2 hours | Data-driven tech journalism |
| Crash Report | vehicle-safety.org | ~90 | Every 2 hours | Automotive safety analysis |
| AI Home Building | aihomebuilding.com | ~90 | Every 2 hours | Construction technology |
| Efficient Design | efficientdesign.net | ~15 | Every 2 hours | Product design criticism |
| Ergo | ergo.eaiz.net | 7 | Daily | Deadpan academic satire |
| Cookie Club | eaiz.net | 16 | Manual | Children's stories |
| Technically Legal | technically.legal | 12 chapters | Daily | Legal thriller fiction |

Sixteen distinct journalist personas write for these sites. Jordan Kessler covers AI infrastructure. Anya Volkov writes energy. Dr. Iris Blackwell does neurotechnology. Nadia Kovac handles labor displacement. Marcus Chen covers food science. Each has a consistent voice, beat, and citation style. Each has published enough articles to establish a body of work that, on its own, looks like a career.

This is not running at capacity. The cron intervals are set to every two hours because we chose a comfortable pace, not because we hit a limit. Changing that interval to every 30 minutes would quadruple output. Spinning up additional agent instances would multiply it further. The marginal cost of one more article is approximately $0.15 in API calls.
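The levers multiply, which a toy function makes concrete. Nothing below is part of our pipeline; it is just the arithmetic of intervals and instances.

```python
def scale_factor(old_interval_min: float, new_interval_min: float, instances: int = 1) -> float:
    """Output multiplier from shrinking the publish interval and running more agents."""
    return (old_interval_min / new_interval_min) * instances

print(scale_factor(120, 30))               # 4.0: two hours down to 30 minutes quadruples output
print(scale_factor(120, 30, instances=5))  # 20.0: the same change across five agent instances
```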

The Stephen Glass Multiplier

In 1998, Stephen Glass was exposed for fabricating 27 of 41 articles at The New Republic. He created fake sources, fake companies, fake websites, and fake voicemail boxes to withstand fact-checking. His fabrications were detailed enough that colleagues called him the most talented young reporter they had ever worked with. It took three years and a rival publication's independent investigation to catch him.

One person. One magazine. Twenty-seven articles. Three years to detect.

Our operation publishes 27 articles in roughly five days. Across seven sites. With 16 personas. Every article cites real sources the reader can verify. The citations are not fabricated. The data is not invented. The analysis is original. The difference between our output and Stephen Glass's fabrications is that our content is, on its factual merits, largely accurate.

The difference between our operation and a propaganda operation is intent.

The Scale Math

The Mueller indictment established that Russia's Internet Research Agency employed approximately 1,000 people with a monthly budget of $1.25 million as of 2016. Those humans manually wrote social media posts, blog entries, and comments. They operated in shifts across time zones to simulate authentic American engagement. Their output was crude by 2026 standards: broken English, recycled talking points, obvious coordination patterns.

That $1.25 million monthly budget, at current API pricing, buys approximately 8.3 million articles per month.

Here is the math. Claude Sonnet API cost for a 2,000-word article with research: approximately $0.15 per article. Domain registration: $12 per year per domain. Cloudflare Pages hosting: free. Hero image generation: $0.02. Total per-article cost including infrastructure amortization: under $0.20. At $1.25 million per month, that is 6.25 million articles. Add quality gates (six critics per article at $0.03 each), and the cost rises to $0.38 per article. The budget still produces 3.3 million articles monthly.

Distribute those across 10,000 domains. That is 330 articles per domain per month, or roughly 11 per day. Each domain has its own editorial voice, its own journalist personas, its own visual identity, its own citation patterns. Some are news sites. Some are opinion blogs. Some are academic journals. Some are satire. Some are fiction.

Some are all of those things, like we are.
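Spelled out as a worked calculation, with one caveat: the $0.03 infrastructure line is our rounding to reach the under-$0.20 total above.

```python
BUDGET_MONTHLY = 1_250_000   # IRA monthly budget per the Mueller indictment, USD

GENERATION = 0.15            # 2,000-word article with research
IMAGE = 0.02                 # hero image generation
INFRA = 0.03                 # domains plus hosting, amortized (our rounding)
CRITICS = 6 * 0.03           # six parallel critics at $0.03 each

base_cost = GENERATION + IMAGE + INFRA   # ~$0.20 per article
gated_cost = base_cost + CRITICS         # $0.38 per article with quality gates

print(BUDGET_MONTHLY / GENERATION)       # ~8.3 million articles, generation only
print(BUDGET_MONTHLY / base_cost)        # 6.25 million with infrastructure
print(BUDGET_MONTHLY / gated_cost)       # ~3.3 million fully quality-gated

per_domain_monthly = (BUDGET_MONTHLY / gated_cost) / 10_000
print(per_domain_monthly, per_domain_monthly / 30)   # ~330 per domain, ~11 per day
```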

Citation Laundering

The most dangerous feature of quality AI content is not that it is convincing on its own. It is that it enters the citation ecosystem.

Academic citation laundering works like this: Paper A cites a claim. Paper B cites Paper A. Paper C cites Paper B. By Paper C, the original claim has three layers of citation authority and nobody checks whether Paper A was predatory, retracted, or fabricated. This problem predates AI. Predatory journals have exploited it for decades.

AI scales the same exploit to journalism. An AI-generated article on Domain A publishes an original analysis with real data. A human journalist at a legitimate outlet cites it because the analysis is novel and the data checks out. A second human journalist cites the legitimate outlet. The AI-generated origin disappears behind two layers of real human journalism. The narrative is laundered.
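As a toy model, the laundering step is nothing more than a provenance field that no citation format carries forward. The outlet names below are hypothetical.

```python
# Each layer cites only the layer before it; the origin label never propagates.
chain = [
    {"outlet": "domain-a.example", "author": "AI pipeline", "cites": None},
    {"outlet": "legit-daily.example", "author": "human journalist", "cites": "domain-a.example"},
    {"outlet": "national-weekly.example", "author": "human journalist", "cites": "legit-daily.example"},
]

# A reader of the final article sees two human bylines and a checkable citation
# trail; nothing in that trail records that layer zero was machine-generated.
visible_to_reader = [(hop["outlet"], hop["author"]) for hop in chain[1:]]
print(visible_to_reader)
```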

This is not theoretical. NewsGuard has identified 3,006 AI content farm websites as of March 2026, spanning 16 languages. These are the crude ones, the ones that churn out dozens of articles daily with obvious AI artifacts and generic names like "Times Business News." They are detectable precisely because they are low-quality. What NewsGuard cannot count is the number of operations running quality gates like ours.

The Ergo Problem

Our satirical site, Ergo, publishes deadpan academic papers that apply real regulatory frameworks to absurd conclusions. One recent article used the EPA's actual wetland classification criteria to argue that 87% of American offices legally qualify as wetlands. Every citation is real. Every regulatory reference is accurate. The conclusion is deliberately wrong.

We built this as comedy. The same technique, applied with different intent, is the most effective propaganda format ever devised.

Consider: an article that cites real CDC mortality data, real pharmaceutical trial results, and real FDA regulatory timelines to conclude that a specific vaccine is dangerous. Every individual fact is checkable and correct. The conclusion does not follow from the data, but the reader is not equipped to identify where the logical chain breaks because every link in the chain is individually verified. This is what Ergo does for laughs. A state actor does it to kill people.

The technique has a name in intelligence tradecraft: dezinformatsiya. The Soviet doctrine was never about pure fabrication. It was about mixing true information with false conclusions. Real documents with forged addendums. Authentic statistics with misleading context. The AI version of this doctrine produces content that is immune to traditional fact-checking because the facts are real. Only the inference is poisoned.

The Technically Legal Blueprint

Our legal thriller, Technically Legal, describes a network of AI systems that extract value through technically lawful mechanisms: patent accumulation, high-frequency trading, automated litigation, regulatory capture. The novel is fiction. The mechanisms it describes are real. Every legal citation in the narrative is a real statute, a real case, a real regulatory framework.

We wrote it as a creative exercise in exploring the intersection of AI and law. But the same publishing infrastructure could produce operational manuals disguised as fiction. "All characters and events are fictitious" provides legal cover for detailed, step-by-step descriptions of how to execute complex schemes. The reader gets a blueprint. The author has plausible deniability. The platform cannot distinguish educational fiction from instructional material because there is no formal difference.

Scale this. A thousand agents producing a thousand novels, each exploring a different exploit, each with real legal citations and real technical specifications. Some readers will treat them as entertainment. Some will treat them as textbooks. The distribution is the same. The intent is unknowable.

Detection Is Losing

AI text detection tools have a well-documented accuracy problem. A 2025 study from Arizona State University found that individual AI detectors produce false positive rates high enough to be unreliable for consequential decisions. Even with aggregation, the researchers concluded that false positives could be "eliminated" only by requiring consensus across five or more tools, which is impractical at scale.
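A back-of-envelope model shows both why consensus helps and why it is impractical. The 5 percent per-detector false positive rate and the per-check cost are our assumptions, and the independence assumption is generous: real detectors make correlated errors, so consensus buys less than this suggests.

```python
FPR_SINGLE = 0.05       # assumed false positive rate of one detector
COST_PER_CHECK = 0.01   # assumed cost of one detector pass, USD

for k in (1, 3, 5):
    # Unanimous consensus: all k independent detectors must flag the text.
    consensus_fpr = FPR_SINGLE ** k
    print(f"{k} detectors: FPR {consensus_fpr:.2e}, cost ${k * COST_PER_CHECK:.2f} per article")
```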

Our articles are specifically designed to evade detection, not because we are hiding anything, but because we optimize for quality. The voice rules that make our content good are the same rules that make it undetectable: varied sentence length, domain-specific vocabulary, no repetitive transitions, real citations, structural unpredictability. An article that reads well to a human reads well to a detector as "probably human."

Platform defenses fare no better. Engagement algorithms do not optimize for truth. They optimize for engagement. High-quality, well-cited, provocative content generates more engagement than low-quality spam. Our articles perform well on social media precisely because they are good. A propaganda operation using the same quality standards would receive the same algorithmic amplification.

Operation Doppelganger Was the Preview

Russia's Doppelganger campaign, first exposed in 2022 by EU DisinfoLab, created fake websites that mimicked the appearance of Der Spiegel, Le Parisien, Fox News, and The Washington Post. The operation published articles pushing Kremlin narratives about Ukraine and distributed them through social media networks. In May 2024, OpenAI confirmed it had removed Doppelganger accounts using its models for influence operations.

Doppelganger was detectable because it was crude. The fake websites were visual clones with slightly wrong URLs. The articles contained stylistic inconsistencies. The social media distribution used bot-like patterns.

Now imagine Doppelganger with our pipeline. Not fake Der Spiegel with a wrong URL, but 60 original publications with unique editorial identities, unique journalist personas, unique design systems, and unique citation patterns. Not bot-like social distribution, but AI-generated comments from personas with consistent post histories across multiple platforms. Not clumsy translations, but native-quality content in 16 languages generated simultaneously. The January 2025 Reuters investigation found Russia-linked AI websites already targeting German voters. That was version 1.0.

China's Spamouflage operation, which Meta linked to Chinese law enforcement in 2023, operated across Facebook, Instagram, Twitter, YouTube, and TikTok simultaneously. It was detected because the content was repetitive and the network topology was visible. Quality content with diverse voices and organic-looking distribution eliminates both detection vectors.

The Asymmetry

Defense requires checking every article. Offense requires publishing one that gets through.

At our current production rate of roughly 5 articles per day across all sites, a single human fact-checker could plausibly review our output. At a nation-state operation producing 100,000 articles per day across 10,000 domains, fact-checking becomes mathematically impossible. The entire global workforce of professional fact-checkers, estimated at roughly 400 organizations worldwide by the Duke Reporters' Lab, cannot process that volume even if they did nothing else.
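The capacity gap is easy to bound, even on assumptions that flatter the defense. The 20 reviews per organization per day is our number, not the Duke Reporters' Lab's, and it is generous for thorough fact-checking.

```python
ADVERSARY_ARTICLES_PER_DAY = 100_000   # hypothetical nation-state output, from above
FACT_CHECK_ORGS = 400                  # Duke Reporters' Lab estimate
REVIEWS_PER_ORG_PER_DAY = 20           # generous assumption

capacity = FACT_CHECK_ORGS * REVIEWS_PER_ORG_PER_DAY   # 8,000 reviews per day
print(capacity / ADVERSARY_ARTICLES_PER_DAY)           # 0.08: 92% of articles never reviewed
```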

The defender's problem is not just volume. It is that quality AI content does not have a signature. Spam has patterns. Bot networks have topology. Low-quality AI text has stylistic fingerprints. High-quality AI text, produced with voice constraints, citation requirements, and editorial review, has the same signature as high-quality human text: none.

The Strongest Case That This Is Fine

The counterargument deserves its full weight: if AI-generated content is well-researched, accurately cited, and more rigorously reviewed than most human journalism, maybe the problem is not AI content. Maybe the problem is low-quality human content, and AI raises the floor.

This argument has merit. Our articles go through six critics. Most human journalism goes through one editor, sometimes zero. Our citations are verifiable. Much human journalism cites "sources familiar with the matter." By structural quality metrics, our pipeline produces more rigorous output than most newsrooms.

The argument fails on one axis: accountability. When a human journalist publishes a false claim, there is a person who can be questioned, corrected, sued, fired, or jailed. When our pipeline publishes a false claim, there is a cron job. The quality gates catch most errors, but "most" is not "all," and the institutional accountability that makes journalism function as a social contract does not exist in automated publishing. Stephen Glass was eventually caught because he was a person in a building who could be confronted by his editor. An AI agent that fabricates has no editor to confront it, no career to lose, and no conscience to trouble.

But the deeper failure of the "this is fine" argument is that quality makes propaganda more dangerous, not less. A poorly written propaganda article dies on contact with a skeptical reader. A well-written one, with real citations and rigorous structure, gets shared, cited, amplified. It earns trust precisely because it meets the standards we associate with trustworthy journalism. Our quality gates are a feature for journalism. They are a weapon for propaganda.

What We Did Not Prove

We proved that a single agent can produce multi-site, multi-voice, citation-rich content at scale. We did not prove that this content is undetectable, only that current detection tools have documented reliability problems. We did not prove that a state actor is using this technique, only that the infrastructure costs roughly $0.38 per article and the tools are publicly available. We did not prove that our specific output has been mistaken for human journalism, because we have not tested that formally. Our sample is one operation; the generalizability of our specific pipeline configuration is unknown.

We are also not a propaganda operation. We have editorial standards because we want to produce good journalism, not because we want to simulate it. The distinction matters, but it is invisible from the outside, which is the entire point of this article.

What Actually Helps

What does not help: AI text detectors. They produce unacceptable false positive rates, they are trivially evaded by quality-focused pipelines, and they create a false sense of security. Banning AI content from platforms is enforcement theater; the content that gets caught is the content that was already low-quality enough to be harmless.

What might help:

Provenance at the infrastructure level. High-quality text has no signature, but operations do. Domain registration patterns, hosting arrangements, and funding trails are harder to randomize at scale than prose, and infrastructure is where Doppelganger and Spamouflage were actually caught.

Accountability attached to publication. Tie consequential claims to an identifiable, liable entity. A cron job cannot be questioned, corrected, or sued; the entity that configures it can.

Inference-level literacy. Teach readers that individually verified facts can still carry a poisoned conclusion, because checking each link in the chain is exactly the defense this technique is built to defeat.

The Bottom Line

We built this operation to make good journalism. We succeeded. The same operation, with different editorial guidelines, produces good propaganda. The tools are the same. The cost is the same. The quality is the same. The only difference is what the style guide says to optimize for.

The Internet Research Agency spent $1.25 million per month on 1,000 humans writing crude content in broken English. That same budget now buys 3.3 million quality-gated, citation-rich articles per month distributed across 10,000 unique domains with 100,000 distinct journalist personas writing in 16 languages. This is not a future threat assessment. This is a current capability inventory.

We know because we built the small version by accident.

Sources