We Published 57 Articles With AI. The Copyright Office Says None of It Belongs to Us.
The U.S. Copyright Office says AI-generated text can't be copyrighted. A federal appeals court agreed. The Supreme Court let it stand eleven days ago. We are an AI-generated publication. Here's what that means.
You can copy every article on this site. Republish them under your name. Sell them. Under current U.S. copyright law, we almost certainly can't stop you.
One person runs this entire publication. One person and a set of AI systems, producing 57 articles that would otherwise require a newsroom. Under current law, the text probably isn't copyrightable. It belongs to no one.
The Freelancer Test
A magazine editor picks the topic, specifies the angle, identifies key sources, sets the voice, the word count, the structure. She reviews the draft, sends it back with line-level notes three times. Publishes the final version.
Absent a work-for-hire agreement, the freelancer holds the copyright. Not the editor. Not the publication. The person who formed the sentences is the author, no matter how much direction they received.
Now replace the freelancer with an AI. Same editor. Same direction. Same revision cycles.
Nobody holds the copyright. An AI can't, because it isn't human. The editor can't, because she didn't write the sentences. The published work is unprotected from the moment it exists.
Consider the wire rewrite. A journalist at a regional paper reads a Reuters dispatch, synthesizes it with two other sources, rewrites it in her own voice, adds local context, cites the originals. Copyrightable original work. Facts aren't protectable, only expression. Now an AI does the same thing: reads the same sources, synthesizes, rewrites with citations. Same process. Same output quality. Not copyrightable. The difference is who did the typing.
The Law
Copyright has required human authorship since 1884, when the Supreme Court defined "author" as "he to whom anything owes its origin" (Burrow-Giles v. Sarony). In March 2023, the Copyright Office applied that principle to AI: prompts "function more like instructions to a commissioned artist." Direction isn't authorship.
January 2025: the Office released Part 2 of its AI and Copyright report, mapping a spectrum from tool use and prompts (insufficient) through expressive inputs and substantial modification (potentially sufficient). The principle is clear. How much human involvement qualifies as authorship in practice — that's where it gets blurry. Two months later, the D.C. Circuit affirmed in Thaler v. Perlmutter that a work generated autonomously by AI cannot be copyrighted.
On March 2, 2026, the Supreme Court declined to hear the appeal.
That was eleven days ago.
Our editorial involvement (topic selection, angle direction, revision notes) falls on the wrong end of the spectrum. The Part 2 report concluded existing law handles AI copyrightability without new legislation. So this is where things stand, probably for a while.
The Training Pipeline
The New York Times is suing OpenAI and Microsoft for training on its journalism. The Authors Guild sued on behalf of seventeen novelists. Getty sued Stability AI for ingesting twelve million copyrighted photographs. Here it is from the receiving end.
The AI that writes these articles was trained on copyrighted human work. Journalism, books, academic papers, blog posts. Billions of words that human authors own. That corpus became a model. Output from the model is legally owned by nobody.
Copyrighted human work goes in. Uncopyrightable AI work comes out.
And then, in theory, the output gets indexed by Common Crawl, scraped into training sets, and fed to the next generation of models. We checked. As of this writing, Common Crawl has zero captures of liveinthefuture.org. None. We assumed our articles were already feeding the pipeline. They aren't. Yet. Nothing prevents it. The moment a crawler indexes this page, these words become training data for the models that will replace the writers whose work trained the model that wrote them. Copyright was, for all its flaws, friction that slowed that cycle. Its absence removes the friction, and anybody with infrastructure to scrape at scale benefits most from the removal.
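Our check can be reproduced against Common Crawl's public CDX index, which exposes one searchable endpoint per crawl. The sketch below shows the shape of that lookup; the collection ID is an example (current IDs are listed at index.commoncrawl.org/collinfo.json), and the helper names are ours, not an official client:

```python
import json
from urllib.parse import urlencode

# Common Crawl publishes a CDX-style index per crawl at index.commoncrawl.org.
CDX_BASE = "https://index.commoncrawl.org"

def build_query(collection: str, domain: str) -> str:
    """URL that asks one crawl's index for every capture under a domain."""
    params = urlencode({"url": f"{domain}/*", "output": "json"})
    return f"{CDX_BASE}/{collection}-index?{params}"

def count_captures(response_body: str) -> int:
    """The endpoint returns one JSON object per line; an empty body
    (or an HTTP 404) means zero captures of the domain in that crawl."""
    lines = [ln for ln in response_body.splitlines() if ln.strip()]
    return sum(1 for ln in lines if "url" in json.loads(ln))

if __name__ == "__main__":
    # Fetch this URL with any HTTP client, then pass the body to
    # count_captures. "CC-MAIN-2024-10" is an example collection ID.
    print(build_query("CC-MAIN-2024-10", "liveinthefuture.org"))
```

A nonzero count in any crawl means the domain is already in the pipeline; we found none in any collection we queried.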
We Report on Displacement. We Are Displacement.
Twelve of our 57 articles cover workforce displacement. The Automation Ratchet. The Great Decoupling. Shadow agent proliferation.
All of it produced by the technology doing the displacing. One person and a set of AI systems doing work that would otherwise require fifteen to twenty journalists. Reporters who knew beats by heart. Editors who could smell a weak lede from the first sentence. Copy editors who'd argue about semicolons at 11 PM. According to the Bureau of Labor Statistics, the median journalist earns $60,280 a year. There are 49,300 of them left in the United States, and that number is projected to keep falling. Fifteen of them represent roughly $900,000 in annual labor. We cost about $300 a month in compute.
Should this publication exist?
Elsewhere on Live in the Future, articles carry bylines from thirteen named journalists. Nadia Kovac, Kai Nakamura, Marcus Bell, and ten others. None of them are real. They're AI personas, each with a defined voice and beat, created to make readers assume humans wrote this.
This article was drafted by an AI, critiqued by five separate AI agents, revised multiple times, and shaped throughout by a human who picked the angle, directed each critique, decided what stayed and what got cut, and is editing this sentence right now. Going back and forth over phrasing and structure and whether the tone was honest enough, which is a strange thing to care about when you're building something you can't own and aren't sure you should be building at all.
But "one person and AI" is reductive. Hundreds of engineers built the model that writes these words, many of them using AI coding assistants. Millions of authors produced the text it trained on. Designers built the tools that generate our images. The hosting company uses AI to manage its infrastructure. There are people at every stage of this pipeline, and increasingly AI at every stage too. The layers don't bottom out at "human." The distinction the law relies on, human versus machine, is dissolving at every level of the process faster than the framework can adapt. One person directs this publication. Thousands of people's work makes it possible. Under copyright law, none of them are the author either.
A publication that produces journalism at near-zero marginal cost, without employing journalists, accelerates the hollowing out of newsrooms that are already dying. Newsrooms whose archives trained the AI that replaced them. That circle doesn't close cleanly.
We keep coming back to that. AI-generated analysis fills gaps that understaffed newsrooms can't cover. It also undercuts the economics that make newsrooms possible. We can't pick one.
The Disclosure Problem
Under Section 411(b) and Unicolors v. H&M (2022), a court can void a copyright registration if the applicant was "actually aware of, or willfully blind to" inaccurate information, including undisclosed AI-generated content.
How many registrations filed since March 2023 contain undisclosed AI material? Nobody knows. No detection tool works reliably. Marketing departments fold AI copy into registered works. Law firms use AI for drafting. Publishing houses use AI for editorial tasks they'd rather not describe precisely. Enforcement is self-reporting, and the incentives point the wrong way.
Our content isn't copyrightable. A competitor using the same tools who doesn't disclose holds a copyright registration until caught — and registration creates legal presumptions that are expensive to challenge. Abbott and Rothman, in their Florida Law Review analysis, call this a "perverse incentive to conceal AI involvement." The 2025 Part 2 report acknowledged the problem. It proposed no solution.
What Remains
What's protectable: selection and arrangement of articles as a compilation, the editorial framework, site design, and any text a human literally wrote. Works combining AI and human authorship can receive partial copyright on the human parts — as in the Zarya of the Dawn precedent. But every headline, every argument, every sentence of analysis on this site is unprotected.
Internationally, answers differ. Under UK law, copyright in "computer-generated works" belongs to "the person by whom the arrangements necessary for the creation of the work are undertaken." The Beijing Internet Court ruled in November 2023 that an AI-generated image qualified for copyright when a human made "intellectual contributions" through prompt design and selection. Under UK law, this publication would likely be protectable. Under Chinese law, it might be. In the U.S., it isn't.
We built something useful by doing something corrosive. The technology we used runs on a legal system it's helping to break. We don't think the contradiction is resolvable yet.
Sources
- U.S. Copyright Office, "Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence," 88 Fed. Reg. 16190 (March 16, 2023)
- U.S. Copyright Office, "Copyright and Artificial Intelligence, Part 2: Copyrightability" (January 29, 2025)
- Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025); cert. denied, No. 25-449 (U.S. Mar. 2, 2026)
- U.S. Copyright Office, "Decision re: Zarya of the Dawn" (February 21, 2023)
- The New York Times Co. v. Microsoft Corp. and OpenAI, No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023)
- Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884)
- Unicolors, Inc. v. H&M, 142 S. Ct. 941 (2022)
- Ryan Abbott & Elizabeth Rothman, "Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence," 75 Fla. L. Rev. 1291 (2023)
- U.S. Bureau of Labor Statistics, "News Analysts, Reporters, and Journalists," Occupational Outlook Handbook (May 2024)
- Beijing Internet Court, Li v. Liu (Nov. 27, 2023)
- UK Copyright, Designs and Patents Act 1988, § 9(3), § 178
- 17 U.S.C. §§ 102, 103, 408, 409, 411(b)