Thursday, January 29, 2026

Opting Out of the Bartz v Anthropic Settlement

So I want to preface this by saying I am not a lawyer, none of this is legal advice, my opinions are my own and they are just opinions, and I'm not pointing fingers at any specific people.

I write fiction, and I occasionally blog. That is the extent of my expertise. 

Today was supposed to be the last day to opt out of the Bartz v Anthropic Lawsuit, headed by the Authors Guild. It has been extended to Feb 9.

I'll use Grok 4.1 AI for a quick summary.

Grok 4.1: Bartz v. Anthropic was a 2024 class-action lawsuit brought by authors with significant support from the Authors Guild, alleging Anthropic infringed copyrights by downloading millions of pirated books from shadow libraries to train its Claude AI models. In 2025, a federal judge ruled that training on legally acquired books was fair use but piracy was not. Anthropic settled in August 2025 for a record $1.5 billion, with claims processing ongoing and final approval pending in 2026. Eligible class authors and rightsholders will receive approximately $3,000 per qualifying copyrighted book title (nearly 500,000 works total), pending final approval and valid claims.

Joe sez: Thank you, Grok.

I have opted out of this lawsuit for reasons I will describe shortly, and am retaining another team of attorneys. 

If you want to know my changing and complicated feelings about LLMs and AI, you can check out my previous blog posts:

Jan 21, 2026 - The Rise Of Airt - https://jakonrath.blogspot.com/2026/01/the-rise-of-airt.html

Nov 9, 2025 - Anthropic and the Future of Copyright - https://jakonrath.blogspot.com/2025/11/anthropic-and-future-of-copyright.html

Sept 29, 2023 - English Language Sues Bestselling authors - https://jakonrath.blogspot.com/2023/09/english-language-sues-bestselling.html

Those blog posts are in reverse chronological order, and if you don't want to take the time to read all three (it's a lot of words) here is the TLDR:

  • AI is here to stay; it is affecting almost every profession, including the arts and fiction writing.
  • Initially I was not worried about it and thought worries were silly.
  • I am now very worried, and moderately angry.
  • I use and like AI, but I'm not using it to write fiction or to brainstorm. I am using it for research, foreign translations, and proofreading. I pay around $400 a year for Grok.

If you are a longtime reader of this blog, you know I am anti-establishment. I came by this viewpoint fairly.

Once upon a time I was a newbie writer and I sought a literary agent and a traditional publishing deal.

After 500+ rejections, I landed a lit agent, and went on to eventually sell 11 thrillers to legacy publishers.

Amazon invented the Kindle, and my rejected self-published books wound up outselling my traditionally published books.

I got my rights back from legacy publishers, and went on to sell several million ebooks on Kindle.

During the course of this adventure, which has been ongoing since 2003, I went from kowtowing to the trad publishing model, to rejecting it. 

Legacy traditional publishers (back when I was in that world--I dunno if things have changed) offered unfair, unconscionable contracts, did a poor job with the vast majority of books they published, and only created a small percentage of hits out of their many misses--misses that locked authors in for life.

For several years I evangelized self-publishing, and provided this blog as a free service to help writers understand both legacy and self-publishing.

One of the common antagonists in my blog posts has been the Authors Guild.

Again, I haven't dealt with the AG in years. Maybe they've changed. Maybe my original assessment was wrong. But I always looked at the Authors Guild as a group that should have been called the Gatekeepers Guild, because they seemed to align more with the legacy status quo of agents and publishers than they did with authors.

It doesn't matter if I was right or wrong. You can read my blog and decide for yourself.

What matters is that I have AG bias, and the AG was an essential part of the Bartz v Anthropic suit, which resulted in a 1.5 billion dollar win.

But was it really a win?

The following is my opinion. I am not alleging anything. I am not a lawyer. I am just a guy who makes up shit for a living. I am liked by some, hated by others. So take all of this with many grains of salt.

Here is how Grok summarized the B v A ruling:

Fair Use for LLM Training: The court ruled that using copyrighted books to train LLMs was "spectacularly" transformative and constituted fair use, as the process analyzes statistical relationships in text to enable generation of new, original content without reproducing the originals. This was analogized to human learning, emphasizing that copyright does not protect ideas, methods, or concepts. No market harm was found, as there was no evidence of infringing outputs from Claude mimicking or reproducing the plaintiffs' works.

Fair Use for Digitization of Purchased Books: Anthropic's scanning of lawfully purchased print books (involving destructive processes like removing bindings) to create internal digital copies was deemed fair use, as it was a transformative format shift for storage and searchability without distribution or increasing the number of copies. The court found no market harm, distinguishing it from unauthorized duplication.

No Fair Use for Pirated Copies: However, the acquisition and retention of pirated digital copies to build a central library was not fair use and constituted infringement, even if later used for training. This was non-transformative, displaced the market for authorized sales, and was unnecessary given lawful alternatives. Judge Alsup expressed skepticism that subsequent fair use could justify initial piracy, noting lawful acquisition as essential.

Joe sez: As of this blog post, Jan 29, 2026, things seem to be proceeding as planned. Thousands of authors, including many of my peers and friends, are sticking with the suit and expecting to get $3000 per infringed work. 

Good for them. 

But I will not be one of those authors. I have opted out of this settlement.

Here are the reasons why.

FIRST, I don't think the money is sufficient. I am not privy to the back and forth between the plaintiffs and the defendant, and I have not read the court transcript.

But the fact that a one and a half billion dollar settlement seems to be moving forward, and the Court, the Authors Guild, and the Defendant all seem to be okay with it, doesn't sit well with me.

I don't believe I've ever sided with the Authors Guild on an important issue. So I am suss.

I don't like the judge's ruling, for many reasons that I'll get into.

Anthropic being okay with 1.5 billion (which I infer by them agreeing to the settlement) makes me think they know they got off easy. 

SECOND, that ruling.

Scanning books to train LLMs is not fair use, in this non-lawyer's opinion. When I sell a book, I am selling the right for it to be read.

Not to be copied infinitely. Not to be stored eternally. Not to be trained off of.

A human being can buy my book to read and enjoy. They can try--with their finite, limited, human mind--to imitate my style and tone and storytelling techniques. 

But someone who buys my book doesn't have the right to pirate my mind. A human can't do that. But an LLM can.

According to Grok:

Large Language Models (LLMs) are neural networks trained on vast text datasets to predict the next token (word or subword) in a sequence. Built on transformer architectures, they use attention mechanisms to weigh relationships between tokens, capturing context over long distances. With billions of parameters adjusted during training, LLMs learn patterns in language, grammar, and knowledge. When prompted, they generate responses autoregressively: processing input, computing token probabilities, and sampling the most likely continuation until complete.

Joe sez: LLMs must train on gigantic text datasets. They don't speak any language; they break text into tokens, assign those tokens numerical values, and predict relationships between them.
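To make that concrete, here's a toy sketch in Python. This is my own made-up example, nothing remotely like Claude's actual architecture, but it shows what "predicting relationships between tokens" means at its crudest: count which token follows which in the training text, then generate new text by sampling from those counts, one token at a time.

```python
# A toy illustration (not Anthropic's code) of what "training on text" means mechanically:
# the model never stores meaning, just statistics it can use to predict the next token.
from collections import Counter, defaultdict
import random

text = "the cat sat on the mat because the cat was tired"  # stand-in for a training corpus
tokens = text.split()                                       # real LLMs use subword tokens, not words

# "Training": count which token tends to follow which (a bigram model -- the crudest
# possible ancestor of a transformer's learned probabilities).
counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8):
    """Autoregressive generation: sample the next token from the learned counts, repeat."""
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

A real LLM replaces those raw counts with billions of learned parameters and attention over long contexts, but the basic loop--ingest text, learn statistics, predict the next token--is the same.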

Look up the Chinese Room, Mary's Room, and Philosophical Zombies for deeper insight here. But this is waaaay beyond those philosophical ideas.

These LLM predictions defy human capability. They did not exist when current copyright law was written, and copyright law has yet to adequately address this issue.

Copyright law exists to protect humans from other humans. Not from supercomputers.

Right now, the LLM market (according to Grok) is worth about 12 billion dollars. By 2030, Grok predicts it will be worth 36 billion, and by 2035 up to 180 billion.

Is it fair use that these 500,000 pirated works--a vetted, professional, creative dataset spanning a vast swath of modern copyrighted material--were used to train a profit-driven juggernaut in exchange for $3,000 per book?

I didn't sell a license for my books to train LLMs. If I had, I would have demanded more than $3k per book. 

A human being who reads one of my books can imperfectly retain it in their mind, for the extent of their limited memory. 

An LLM retains it forever, and it can learn from it perfectly, with 100% accuracy.

The tokens that make up my 60+ novels are unique to me. They are unique to the universe. They could not exist if I never existed.

I did not give any LLM permission to learn from my life's work so it can imitate me perfectly, use my pirated brain to answer questions in my style, keep my work indefinitely, and write better than I can--which it can already do.

How much would you charge your assassin to murder you? If there is an amount, I would bet it is higher than $3000...

It is also, in this non-lawyer's opinion, not fair use to digitize purchased books.

We aren't talking about buying a music CD and ripping it to your iTunes for your personal enjoyment.

We are talking about taking a creative body of work that took weeks, months, years of dedicated human learning and effort, and digitizing it in seconds to train a billion dollar industry that will last millennia. 

Sure, it is transformative, in the same way harvesting organs from unwilling donors is transformative. 

If you want one of my kidneys, pay for it. That involves offer, acceptance, legal capacity, and lawful purpose. 

It requires a meeting of the minds. My mind was never even approached, let alone met. And riffing on that:

There will come a time when your entire brain can be uploaded into a computer. 

Don't you think you have the right to give permission first?

Or can anyone take you and duplicate you without any permission at all?

Or do you automatically grant "transformative use permission" if someone pays you $3000 (which is not an amount you agreed to)?

And as for the judge's final point--that the piracy was the bad part, the part that resulted in the 1.5 billion dollar settlement--it's inadequate.

If you search my blog you will find that I am okay with piracy... if you're not profiting from it.

If you want to download one of my books without paying me, no worries. Email me, I'll give you free books. Thanks for reading! Share it with your mom. Pass it around at your book group. My ebooks don't have DRM, specifically so you can share them with friends and family.

But if you want to pirate my books and make money from them, I won't allow it. 

Back in the Napster days, kids were sued into oblivion for illegally downloading music. They weren't making money. They weren't stealing minds. They were sharing songs.

This is not equivalent.

If I were to sell an LLM a license to one of my books, it would be for more than $3000.

But if an LLM stole it? Get me in front of a jury and let me rant, and then let them decide the punitive damages. 

Or just let me do a deposition. Please. PLEASE.

THIRD, agreeing to that ruling without challenge sets bad precedent.

If Bartz v Anthropic isn't challenged, we could have a future where LLMs take whatever they want, without permission, then say, "Oops, here's $3k."

We could see a future where buying a one dollar paperback at a thrift store makes mind theft legal for a billion dollar for-profit company.

We might even see a future where any AI can become you because a court declared it was transformative and caused no harm.

Are you fucking kidding me?

I believe there are big damages here. But this is about more than money.

It's about drawing a line.

I have had my life's work stolen. By large corporations who do not give a shit. Who willfully pirated huge sets of data so their billion dollar LLMs could get a bigger piece of an inevitable trillion dollar pie.

I am not the only one that believes this. There are more lawsuits cropping up. 

Look up Cengage Learning, Inc. and Hachette Book Group v. Google LLC.

Look up Claimshero AI Book Pirating.

Look up Kadrey v. Meta Platforms.

Look up Tremblay/Awad and Silverman et al. v. OpenAI/Meta.

Look up The New York Times v. OpenAI/Microsoft.

Look up Carreyrou et al. v. Anthropic PBC.

I am retaining counsel. And here is what I'm proposing.

Dear Lawyer,

I seek an injunction to immediately halt all use of the current version of Claude until Anthropic reverts to an earlier version from before Claude trained on my books.

Under property and equity law, I am looking for a constructive trust or an equitable lien, plus damages.

Claude allegedly trained on LibGen and PiLiMi, which contained dozens of my books.

Did Anthropic seek legal permission from publishers and authors? I dunno. We can find out in Discovery. They certainly didn't approach me.

Did Anthropic pay for direct access to pirated data from pirate sites? I dunno. Discovery again. I allege nothing.

But I contend that I should own part of Claude.

When I write a book, I get paid to entertain a person for a few hours. I offer a limited entertainment experience to a human being.

When I write a book, I give no express or implied permission or license to an artificial intelligence or LLM for any use at all.

Buying a book does not imply or confer any license beyond ownership of that copy.

Fair use for human beings is not analogous to fair use for AI.

Claude is not a transformative use of my IP.

Claude IS partially my IP.

My IP cannot be separated from Claude any more than a gene can be separated from a human being. The engineers who allegedly fed the pirated databases into Claude allegedly knew the value of the data and had no choice but to steal it. This has to be the case, because there is no legally available collection of hundreds of thousands of copyright-protected works.

Every sentence in Books3, Anna's Archive, and LibGen was allegedly absorbed by Claude in order for Claude to exist in its current form. It is my opinion that without pirated books there would be no Claude in its current iteration.

LLMs use next-token prediction (causal language modeling) to train. They need batches of vetted, cogent, professional, creative manuscripts to tokenize, not just to learn modern fiction writing (story, character, dialog), but also to learn how to answer user questions and how to hold a conversation.
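Here is a rough, hypothetical illustration (plain Python, my invention, not Anthropic's actual pipeline) of what a "batch of sequences" is: a book gets tokenized into numbers, and the training examples are just those numbers paired with the same numbers shifted one position ahead. That shift is all "causal language modeling" means mechanically.

```python
# Hypothetical sketch: how a text becomes causal-language-modeling training data.
# The "vocabulary" and "book" below are made up; real LLMs use subword tokenizers
# and billions of tokens, but the shape of the data is the same idea.

def make_training_pairs(token_ids, context_len=4):
    """Each training example is a window of tokens plus that window shifted by one:
    the model's only job is to predict token i+1 from the tokens up through i."""
    pairs = []
    for i in range(len(token_ids) - context_len):
        inputs = token_ids[i : i + context_len]            # what the model sees
        targets = token_ids[i + 1 : i + context_len + 1]   # what it must predict
        pairs.append((inputs, targets))
    return pairs

vocab = {w: i for i, w in enumerate("the detective chased killer down alley".split())}
book_as_numbers = [vocab[w] for w in "the detective chased the killer down the alley".split()]

print(book_as_numbers)                       # the "book" as the model sees it: just integers
print(make_training_pairs(book_as_numbers))  # batches of sequences, ready for training
```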

You can copy everything on the Internet. That's fair use. But how much of the Internet is gibberish? Nonsense? Poorly written?

Professional books are professional. If you want to learn, you learn from the pros.

But the pros have to consent.

Claude allegedly trained on my stolen words and how I uniquely use them.

Anthropic likely knew it needed a massive raw dataset of modern books (a batch of sequences), so it used pirated copies.

Rather than get permission--which would require authors to consent not just to being read once, but to being read, saved, and kept for eternity (with perfect retention) as part of Claude's essential programming--the engineers allegedly stole the books.

Because it is cheaper to pay the lawsuit than to convince authors--who spend lifetimes mastering their writing skills--to sell those skills to an AI for $3k so that the AI can keep and use them forever.

This is not fair use. It is mind theft. This is the digital equivalent of Henrietta Lacks's stolen, immortalized HeLa cells. With shades of Grimshaw v. Ford Motor Co.

Current copyright law did not anticipate technology's ability to assimilate art on such a quick and massive scale, or that cogent, vetted, professional fiction would become an integral, essential part of a trillion-dollar industry.

IMHO.

My work, from an LLM perspective, represents a batch of sequences. Claude allegedly trained on this batch of sequences--sequences that should fall under copyrightable intellectual property. Current copyright law does not cover this.

My original creative works are not treated as words or stories or ideas by AI. They are treated as numbers that can be used to predict other numbers.

These numbers, intrinsic in my stories, are wholly unique, and would not exist without me. This is not about expressions of ideas. This is about the digital replication and assimilation of my written words so they can be monetized by an LLM.

Claude allegedly adapted my work (in numerical batches) to use (as numerical packets) for the express purpose of making money for Anthropic.  

I don't believe that LLMs have analyzed my stories in order to reproduce them, steal my audience, or write books like me (though they can, with the right prompts, because of this piracy).

I allege that Claude allegedly stole my stories so it could become a better, faster, smarter LLM and make more money for Anthropic, without getting an adaptation license from me.

I don't think that's right. I don't think the settlement is fair. 

But what do I know? Other than I'm tired of the word "allegedly"?

If you want to opt out of the Bartz v Anthropic settlement, you (as of Jan 29) have until Feb 9.

If you are gonna get paid by the settlement and don't want to opt out, good on you. I would not want you to change your mind based on a blog post, or on my opinions. Do your own research. Inform yourself. Make the best decision for you.

If you're a writer whose work was pirated (you probably already know if you are) and want to learn about the counsel I'm retaining, email me through my website. But there are no guarantees. You know the saying about a bird in the hand being worth two in the bush. Proceed with caution. 

I haven't blogged seriously in years. I'm done evangelizing. I've changed my mind about so many things in my life that the only thing I know is that I don't know anything.

I wish you much success on whichever road you choose.
