Thursday, January 29, 2026

Opting Out of Bartz v Anthropic

So I want to preface this by saying I am not a lawyer, none of this is legal advice, my opinions are my own and they are just opinions, and I'm not pointing fingers at any specific people.

I write fiction, and I occasionally blog. That is the extent of my expertise. 

Today was supposed to be the last day to opt out of the Bartz v Anthropic Lawsuit, headed by the Authors Guild. It has been extended to Feb 9.

I'll use Grok 4.1 AI for a quick summary.

Grok 4.1: Bartz v. Anthropic was a 2024 class-action lawsuit brought by authors with significant support from the Authors Guild, alleging Anthropic infringed copyrights by downloading millions of pirated books from shadow libraries to train its Claude AI models. In 2025, a federal judge ruled that training on legally acquired books was fair use but piracy was not. Anthropic settled in August 2025 for a record $1.5 billion, with claims processing ongoing and final approval pending in 2026. Eligible class authors and rightsholders will receive approximately $3,000 per qualifying copyrighted book title (nearly 500,000 works total), pending final approval and valid claims.

Joe sez: Thank you, Grok.

I have opted out of this lawsuit for reasons I will describe shortly, and am retaining another team of attorneys. 

If you want to know my changing and complicated feelings about LLMs and AI, you can check out my previous blog posts:

Jan 21, 2026 - The Rise Of Airt - https://jakonrath.blogspot.com/2026/01/the-rise-of-airt.html

Nov 9, 2025 - Anthropic and the Future of Copyright - https://jakonrath.blogspot.com/2025/11/anthropic-and-future-of-copyright.html

Sept 29, 2023 - English Language Sues Bestselling authors - https://jakonrath.blogspot.com/2023/09/english-language-sues-bestselling.html

Those blog posts are in reverse chronological order, and if you don't want to take the time to read all three (it's a lot of words) here is the TLDR:

  • AI is here to stay; it is affecting almost every profession, including the arts, including fiction writing.
  • Initially I was not worried about it and thought worries were silly.
  • I am now very worried, and moderately angry.
  • I use and like AI, but I'm not using it to write fiction or to brainstorm. I am using it for research, foreign translations, and proofreading. I pay around $400 a year for Grok.

If you are a longtime reader of this blog, you know I am anti-establishment. I came by this viewpoint fairly.

Once upon a time I was a newbie writer and I sought a literary agent and a traditional publishing deal.

After 500+ rejections, I landed a lit agent, and went on to eventually sell 11 thrillers to legacy publishers.

Amazon invented the Kindle, and my rejected self-published books wound up outselling my traditionally published books.

I got my rights back from legacy publishers, and went on to sell several millions of ebooks on Kindle.

During the course of this adventure, which has been ongoing since 2003, I went from kowtowing to the trad publishing model, to rejecting it. 

Legacy traditional publishers (back when I was in that world--I dunno if things have changed), offered unfair, unconscionable contracts, did a poor job with the vast majority of books they published, and only created a small percentage of hits out of their many misses; misses that locked authors in for life.

For several years I evangelized self-publishing, and provided this blog as a free service to help writers understand both legacy and self-publishing.

One of the common antagonists in my blog posts has been the Authors Guild.

Again, I haven't dealt with the AG in years. Maybe they've changed. Maybe my original assessment was wrong. But I always looked at the Authors Guild as a group that should have been called the Gatekeepers Guild, because they seemed to align more with the legacy status quo of agents and publishers than they did with authors.

It doesn't matter if I was right or wrong. You can read my blog and decide for yourself.

What matters is that I have AG bias, and the AG was an essential part of the Bartz v Anthropic suit, which resulted in a 1.5 billion dollar win.

But was it really a win?

The following is my opinion. I am not alleging anything. I am not a lawyer. I am just a guy who makes up shit for a living. I am liked by some, hated by others. So take all of this with many grains of salt.

Here is how Grok summarized the B v A ruling:

Fair Use for LLM Training: The court ruled that using copyrighted books to train LLMs was "spectacularly" transformative and constituted fair use, as the process analyzes statistical relationships in text to enable generation of new, original content without reproducing the originals. This was analogized to human learning, emphasizing that copyright does not protect ideas, methods, or concepts. No market harm was found, as there was no evidence of infringing outputs from Claude mimicking or reproducing the plaintiffs' works.

Fair Use for Digitization of Purchased Books: Anthropic's scanning of lawfully purchased print books (involving destructive processes like removing bindings) to create internal digital copies was deemed fair use, as it was a transformative format shift for storage and searchability without distribution or increasing the number of copies. The court found no market harm, distinguishing it from unauthorized duplication.

No Fair Use for Pirated Copies: However, the acquisition and retention of pirated digital copies to build a central library was not fair use and constituted infringement, even if later used for training. This was non-transformative, displaced the market for authorized sales, and was unnecessary given lawful alternatives. Judge Alsup expressed skepticism that subsequent fair use could justify initial piracy, noting lawful acquisition as essential.

Joe sez: As of this blog post, Jan 29, 2026, things seem to be proceeding as planned. Thousands of authors, including many of my peers and friends, are sticking with the suit and expecting to get $3000 per infringed work. 

Good for them. 

But I will not be one of those authors. I have opted out of this settlement.

Here are the reasons why.

FIRST, I don't think the money is sufficient. I am not privy to the back and forth between the plaintiffs and the defendant, and I have not read the court transcripts.

But the fact that a one and a half billion dollar settlement seems to be moving forward, and the Court, the Authors Guild, and the Defendant all seem to be okay with it, doesn't sit well with me.

I don't believe I've ever sided with the Authors Guild on an important issue. So I am suss.

I don't like the judge's ruling, for many reasons that I'll get into.

Anthropic being okay with 1.5 billion (which I infer by them agreeing to the settlement) makes me think they know they got off easy. 

SECOND, that ruling.

Scanning books is not fair use for training LLMs, in this non-lawyer's opinion. When I sell a book, I am selling the rights for it to be read.

Not to be copied infinitely. Not to be stored eternally. Not to be trained off of.

A human being can buy my book to read and enjoy. They can try--with their finite, limited, human mind--to imitate my style and tone and storytelling techniques. 

But someone who buys my book doesn't have the right to pirate my mind. A human can't do that. But an LLM can.

According to Grok:

Large Language Models (LLMs) are neural networks trained on vast text datasets to predict the next token (word or subword) in a sequence. Built on transformer architectures, they use attention mechanisms to weigh relationships between tokens, capturing context over long distances. With billions of parameters adjusted during training, LLMs learn patterns in language, grammar, and knowledge. When prompted, they generate responses autoregressively: processing input, computing token probabilities, and sampling the most likely continuation until complete.

Joe sez: LLMs must train on gigantic text datasets. They don't speak any language; they assign values to characters and predict relationships between tokens.
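To make "assigning values and predicting relationships between tokens" concrete, here is a toy sketch in Python. It uses a crude bigram counter rather than a real transformer, and the one-sentence corpus and every name in it are invented for illustration; a real model does the same next-token game with billions of parameters over trillions of tokens.

```python
from collections import Counter, defaultdict

# Toy "training data" -- one sentence instead of millions of books.
corpus = "the detective opened the door and the detective drew his gun".split()

# Count which token follows each token: a crude stand-in for the
# statistical relationships a transformer learns during training.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token` in training."""
    return follows[token].most_common(1)[0][0]

# Autoregressive generation: feed each prediction back in as input.
token = "the"
out = [token]
for _ in range(3):
    token = predict_next(token)
    out.append(token)
print(" ".join(out))  # prints: the detective opened the
```

The point of the sketch: nothing here "reads" English. The model only ever sees counts and probabilities, which is exactly why whole books have to be ingested for those counts to become useful.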

Look up the Chinese Room, Mary's Room, and Philosophical Zombies for deeper insight here. But this is waaaay beyond those philosophical ideas.

These LLM predictions defy human capability. They did not exist when current copyright law was written, and copyright law has yet to adequately address this issue.

Copyright law exists to protect humans from other humans. Not from supercomputers.

Right now, the LLM market (according to Grok) is worth about 12 billion dollars. By 2030, Grok predicts it will be worth 36 billion, and by 2035 up to 180 billion.

Is it fair use that the authors of these 500,000 pirated works of fiction--a vetted, professional, creative dataset that encompasses almost all of modern copyrighted material--are being awarded $3,000 per book for training a profit-driven juggernaut?

I didn't sell a license for my books to train LLMs. If I had, I would have demanded more than $3k per book. 

A human being who reads one of my books can imperfectly retain it in their mind, for the extent of their limited memory. 

An LLM retains it forever, and can learn from it perfectly, with 100% accuracy.

The tokens that make up my 60+ novels are unique to me. They are unique to the universe. They could not exist if I never existed.

I did not give any LLM permission to learn from my life's work so they can imitate me perfectly, use my pirated brain to answer questions in the style of me, keep my work indefinitely, and be able to write better than I can--which they can already do.

How much would you charge your assassin to murder you? If there is an amount, I would bet it is higher than $3000...

It is also, in this non-lawyer's opinion, not fair use to digitize purchased books.

We aren't talking about buying a music CD and ripping it to your iTunes for your personal enjoyment.

We are talking about taking a creative body of work that took weeks, months, years of dedicated human learning and effort, and digitizing it in seconds to train a billion dollar industry that will last millennia. 

Sure, it is transformative, in the same way harvesting organs from unwilling donors is transformative. 

If you want one of my kidneys, pay for it. That involves offer, acceptance, legal capacity, and lawful purpose. 

It requires a meeting of the minds. My mind was never even approached, let alone met. And riffing on that:

There will come a time when your entire brain can be uploaded into a computer. 

Don't you think you have the right to give permission first?

Or can anyone take you and duplicate you without any permission at all?

Or do you automatically grant "transformative use permission" if someone pays you $3000 (which is not an amount you agreed to)?

And as for the judge's final point--the one that resulted in the 1.5 billion dollar settlement--that only the piracy was the bad part: it's inadequate.

If you search my blog you will find that I am okay with piracy... if you're not profiting from it.

If you want to download one of my books without paying me, no worries. Email me, I'll give you free books. Thanks for reading! Share it with your mom. Pass it around at your book group. My ebooks don't have DRM, specifically so you can share them with friends and family.

But if you want to pirate my books and make money from them, I won't allow it. 

Back in the Napster days, kids were sued into oblivion for downloading illegal music. They weren't making money. They weren't stealing minds. They were sharing songs.

This is not equivalent.

If I were to sell an LLM a license to one of my books, it would be for more than $3000.

But if an LLM stole it? Get me in front of a jury and let me rant, and then let them decide the punitive damages. 

Or just let me do a deposition. Please. PLEASE.

THIRD, agreeing to that ruling without challenge sets bad precedent.

If Bartz v Anthropic isn't challenged, we could have a future where LLMs take whatever they want to, without permission, then say, "Oops, here's $3k."

We could see a future where buying a one dollar paperback at a thrift store makes mind theft legal for a billion dollar for-profit company.

We might even see a future where any AI can become you because a court declared it was transformative and caused no harm.

Are you fucking kidding me?

I believe there are big damages here. But this is about more than money.

It's about drawing a line.

I have had my life's work stolen. By large corporations who do not give a shit. Who willfully pirated huge sets of data so their billion dollar LLMs could get a bigger piece of an inevitable trillion dollar pie.

I am not the only one that believes this. There are more lawsuits cropping up. 

Look up Cengage Learning, Inc and Hachette Book Group v Google LLC. 

Look up Claimshero AI Book Pirating.

Look up Kadrey v. Meta Platforms.

Look up Tremblay/Awad and Silverman et al. v. OpenAI/Meta.

Look up The New York Times v. OpenAI/Microsoft.

Look up Carreyrou et al v. Anthropic PBC.

I am retaining counsel. And here is what I'm proposing.

Dear Lawyer,

I seek an injunction to immediately halt all use of the current version of Claude until Anthropic can reset to an earlier version before Claude trained on my books.

Under property and equity law, I am looking for a constructive trust or an equitable lien, plus damages.

Claude allegedly trained on LibGen and PiLiMi datasets which contained dozens of my books. 

Did Anthropic seek legal permission from publishers and authors? I dunno. We can find out during Discovery. They certainly didn't approach me.

Did Anthropic pay for direct access to pirated data from pirate sites? I dunno. Discovery again. I allege nothing.

But I contend that I should own part of Claude.

When I write a book, I get paid to entertain a person for a few hours. I offer a limited entertainment experience to a human being.

When I write a book, I give no express or implied permission or license to an artificial intelligence or LLM for any use at all.

Buying a book does not imply or confer licenses beyond ownership of the book.

Fair use for human beings is not analogous to fair use for AI.

Claude is not a transformative use of my IP.

Claude IS partially my IP.

My IP cannot be separated from Claude any more than a gene can be separated from a human being. The engineers who allegedly fed the pirated databases into Claude allegedly knew the value of the data and had no choice but to steal it. This has to be the case, because there is no legal collection of hundreds of thousands of copyright protected works.

Every sentence in Books3, Anna's Archive, and LibGen was allegedly absorbed by Claude in order for Claude to exist in its current form. It is my opinion that without pirated books there would be no Claude in its current iteration.

LLMs use next-token prediction (causal language modeling) to train. They need batches of vetted, cogent, professional, creative manuscripts to tokenize, not just to learn modern fiction writing (story, character, dialog), but also to learn how to answer user questions and how to use conversational language.
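For readers who want to see what a "batch of sequences" literally is, here is a minimal sketch of how text becomes causal-language-modeling training pairs. The sentence, the vocabulary, and the tiny context size are all invented for illustration; real training runs do this over trillions of tokens with contexts thousands of tokens long.

```python
# Causal language modeling: each window of token IDs is paired with the
# token that immediately follows it; the model is trained to predict it.
text = "the pirated books taught the model to write"

# Map each unique word to an integer ID -- the "numbers" the model sees.
vocab = {word: i for i, word in enumerate(sorted(set(text.split())))}
ids = [vocab[w] for w in text.split()]

CONTEXT = 3  # toy context window; real models use thousands of tokens
batch = [(ids[i:i + CONTEXT], ids[i + CONTEXT])
         for i in range(len(ids) - CONTEXT)]

for inputs, target in batch:
    print(inputs, "->", target)
```

Every book in the training set is sliced into pairs like these; the "learning" is adjusting parameters until the model's guess for each target matches what the author actually wrote next.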

You can copy everything on the Internet. That's fair use. But how much of the Internet is gibberish? Nonsense? Poorly written?

Professional books are professional. If you want to learn, you learn from the pros.

But the pros have to consent.

Claude allegedly trained on my stolen words and how I uniquely use them.

Anthropic likely knew it needed a massive raw dataset of modern books (a batch of sequences), so it used pirated copies.

Rather than get permission--which would require authors consenting not to just being read once, but read and saved for eternity (with perfect retention) as part of Claude's essential programming--the engineers allegedly stole it.

Because it is cheaper to pay the lawsuit than to ask authors--who spend lifetimes mastering their writing skills--to sell those skills to an AI for $3k so that AI can keep and use them forever.

This is not fair use. It is mind theft. This is the digital equivalent of the stolen HeLa immortalized cells. With shades of Grimshaw v. Ford Motor Co.

Current copyright law did not anticipate this ability for technology to assimilate art on such a quick and massive scale, or that cogent, vetted, professional fiction will become an integral, essential part of a trillion-dollar industry.  

IMHO.

My work, from an LLM perspective, represents a batch of sequences. Claude allegedly trained on this batch of sequences--sequences that should fall under copyrightable intellectual property. Current copyright law does not cover this.

My original creative works are not treated as words or stories or ideas by AI. They are treated as numbers that can be used to predict other numbers.

These numbers, intrinsic in my stories, are wholly unique, and would not exist without me. This is not about expressions of ideas. This is about the digital replication and assimilation of my written words so they can be monetized by an LLM.

Claude allegedly adapted my work (in numerical batches) to use (as numerical packets) for the express purpose of making money for Anthropic.  

I don't believe that LLMs have analyzed my stories so they can reproduce them, or to steal my audience, or to write books like me (though they can, with the right prompts, because of this piracy). 

I allege that Claude allegedly stole my stories so it could become a better, faster, smarter LLM and make more money for Anthropic, without getting an adaptation license from me.

I don't think that's right. I don't think the settlement is fair. 

But what do I know? Other than I'm tired of the word "allegedly"?

If you want to opt out of the Bartz v Anthropic settlement, you (as of Jan 29) have until Feb 9.

If you are gonna get paid by the settlement and don't want to opt out; good on you. I would not want you to change your mind based on a blog post, or on my opinions. Do your own research. Inform yourself. Make the best decision for you. 

If you're a writer whose work was pirated (you probably already know if you are) and want to learn about the counsel I'm retaining, email me through my website. But there are no guarantees. You know the saying about a bird in the hand being worth two in the bush. Proceed with caution. 

I haven't blogged seriously in years. I'm done evangelizing. I've changed my mind about so many things in my life, that the only thing I know is that I don't know anything.

I wish you much success on whichever road you choose.

16 comments:

  1. "Not to be stored eternally. Not to be trained off of." I can't keep your book, or study it? Just read it once?

  2. Sure you can.

    But an LLM cannot.

  3. Anonymous, 1:12 PM

    You say you only use AI for foreign translations, research, and proofreading, and then fill your entire blog with AI summaries. You are part of the problem and a hypocrite.

    1. What do you believe the problem is, Anon?

      You want to go back to the horse and buggy, land lines, and developing film?

      You can. Buy a horse, ride it into town. You can't text with a rotary, but you can call people. And Kodak still sells black and white 35mm.

      But you are on the internet. Why? Why not type your messages on an IBM Selectric and snail mail your comment to me?

      Or we can use the tools we're given. We use a calculator, not a pencil. We use a watch, not a sundial.

      I could write a decent definition of "LLM" in a few minutes. Or I can crib Wikipedia and take someone else's definition. Or I can use AI and cite that I use AI. Writing definitions isn't my jam.

      I cannot translate English into French or German. I could learn, but that isn't time effective or cost effective.

      I could use AI to write my books. I do not. You can call that hypocrisy. I call it the line I draw.

      Feel free to draw your own lines, and conclusions.

      I am a fiction writer. I write my own fiction. Occasionally I blog, and my blog is my ideas, my conclusions, and my words, except where noted.

      What stops me from using AI to write fiction? Or my social media? Or my blog?

      The same thing that stops me from hiring people to ghostwrite my books. The same thing that compelled me to co-write over a dozen stories on Kindle Worlds.

      My fiction is personal, and it is my privilege to write it. Same with my blogs. Same with my social media.

      I pay for cover art. Is that hypocrisy, since I don't do it myself? I pay to translate. What is the difference between paying a translator to start from scratch, or to proof an AI translation of my words?

      When I research, I use the internet. Google doesn't even direct you to sites as its first hit; its first hit is an AI answer.

      For me, writing fiction isn't a cost. It's a path.

      Also, there is no objective truth. No black and white. Your subjective mileage may vary.

  4. Hi Joe,

    The two previous commenters have pointed out contradictions that undermine the claims in your blog post. The claim "When I sell a book, I am selling the rights for it to be read.

    Not to be copied infinitely. Not to be stored eternally. Not to be trained off of." is very problematic, and contradicts the following paragraph: "If you search my blog you will find that I am okay with piracy... if you're not profiting from it.

    If you want to download one of my books without paying me, no worries. Email me, I'll give you free books. Thanks for reading! Share it with your mom. Pass it around at your book group. My ebooks don't have DRM, specifically so you can share them with friends and family."

    In the same way, your response to Anon, who accused you of wanting to have it both ways (profiting from AI and then denouncing it), seems to indicate that you consider the creation of literary text more “moral,” or “superior,” than the creation of translated text.

    What I take away from your blog post is that the lawsuit didn't bring you as much money as you would have liked, and that you would like to squeeze more money out of Anthropic. And by the way, by accusing Anthropic and not Grok, you seem to be absolving Grok of what Anthropic is accused of. That's also problematic, and it undermines your whole point.

    Regarding the evolution of AI: you say "There will come a time when your entire brain can be uploaded into a computer." As a science-fiction writer, I beg to differ. Why?

    Because the human brain is one of the most complicated objects in the universe. Because we know next to nothing about quantum laws, and there are quantum interactions in our brains.

    You fall into the trap of Silicon Valley engineers: because they lack spiritual fulfillment, they try to create their own god in the machine, equating computing power with divine power. That is very simplistic, and false.

    Humans will most probably never be able to upload their brains to a computer. If you want to compare a human brain to an LLM, the human brain is far more capable, powerful, and energy efficient, because we do not need to read billions of books in order to write one. Think of it: LLMs wouldn't exist without the internet.

    What I advocate as an author is an unconditional universal income, as well as unconditional universal housing, which would allow us to detach ourselves from the most predatory aspects of the economy. In my opinion, capitalism would still exist in the future, as a driver of competition, but to improve our lives rather than to earn a living.

    Why? Because LLMs would not exist without the internet. The internet would not exist without engineers. Engineers would not exist without teachers or farmers. What I am trying to say is that our most advanced technologies are actually dependent on the collective effort of humanity. Although our aggressive primate brains are programmed to target the weakest targets, we are capable of detaching ourselves from this programming and moving toward a more cooperative approach. At least, I hope so, Joe.

  5. Thanks for the thoughtful comment, Alan. I appreciate it, and you.

    >>The two previous commentators have pointed out the contradictions that undermine the claims in your blog post. The claim : "When I sell a book, I am selling the rights for it to be read.

    My claim is that a human being can do human being things with my books. I haven't licensed my work to train AI.

    There is a wide margin between a human mind and an LLM.

    >>seems to indicate that the creation of literary text is more “moral,” or “superior,” to the creation of translated text.

    I'm not making any claims of morality or superiority. I'm a moral relativist, and believe truth is subjective.

    I am claiming I don't use AI to write. I've been thinking a lot about why I feel this way, and it will be the subject of my next blog post, which I'm about halfway through. It's nuanced, and perhaps hypocritical, but it isn't based on morals or money. It's based on humanity, and our future.

    >>What I take away from your blog post is that the lawsuit didn't bring you as much money as you would have liked, and that you would like to squeeze more money out of Anthropic. And by the way, by accusing Anthropic and not Grok, you seem to be absolving Grok of what Anthropic is accused of. That's also problematic, and undermining your whole point.

    That isn't my point. By opting out of Bartz v Anthropic, I am taking a risk, along with burdening myself with lawyers, which I hate to do.

    I could pocket twenty grand from Bartz and shut up. That's what most of my peers are doing.

    This isn't about the money. It's about the future of copyright being determined by big tech and courts while the actual IP holders don't have a say. It's a fight, not a payday. I don't expect to make a dime. And I expect it to be long, frustrating, and a strain on my brain.

    But I have been known to fight for what I believe is right.

    >>"There will come a time when your entire brain can be uploaded into a computer." As a science-fiction writer, I beg to differ. Why? Because the human brain is one of the most complicated objects in the universe. Because we know next to nothing about quantum laws, and there are quantum interactions into our brains.

    As a science fiction writer, I disagree. Did you know that the engineers and programmers who created LLMs have no idea how they think? And LLMs do indeed think. And somewhere, some of them are conscious.

    The brain is complicated. So are neural networks, which are getting larger and more powerful by the second. The human mind has a limit. LLMs do not.

    >>What I advocate as an author is an unconditional universal income, as well as unconditional universal housing, which would allow us to detach ourselves from the most predatory aspects of the economy.

    As a science fiction writer, you have surely read 1984. What you advocate for is a disaster that will kill billions and enslave the survivors. Socialism doesn't work. Handouts don't work.

    As human beings, we need to work. Stress gives us focus, and hardship gives us meaning. It is called "spoiling" a child because that is what it does: it rots.


    1. >>LLMs would not exist without the internet. The internet would not exist without engineers. Engineers would not exist without teachers or farmers. What I am trying to say is that our most advanced technologies are actually dependent on the collective effort of humanity.

      I agree. But things are going to change, and quickly. Because of LLMs.

      And mind theft will be among the least of our worries when that happens. But at this point in time, I choose to be Tron fighting for the Users. Not for the money. But because it is the right side of history.

      >>Although our aggressive primate brains are programmed to target the weakest targets, we are capable of detaching ourselves from this programming and moving toward a more cooperative approach. At least, I hope so, Joe.

      The collective effort of humanity has given us a pretty sweet civilization, but it is also fraught with corruption and evil. Capitalism is the only tested balance against human nature, because it also relies on human nature.

      Getting handouts isn't liberating. It's debilitating.

      A universal income would do nothing but raise the price of bananas to $10,000 a pound.

      Universal housing? You know that it has been tried, right? Check out the USSR. Or the Robert Taylor homes, or Cabrini Green.

      It doesn't work. For the same reason people don't take care of a rented house in the same way they do a house they own.

      We need to work. We need to struggle. Without it, we're just doomscrolling dopamine addicts with no purpose.

  6. I agree that scientists do not know how AI works to solve certain problems. However, this does not mean that AI is more intelligent than humans. It means that AI works in a very different way, a way that it cannot explain to us. If AI could explain it, our scientists would know it too. This means that AI has real limitations.
    Furthermore, cooling electronic chips and servers will soon no longer be possible on Earth, as it consumes so much energy. Here again, we are talking about serious limitations of AI and a fragility linked to the fact that satellites have to be sent into space to take advantage of that technology.
    "Getting handouts isn't liberating. It's debilitating." And yet, AI gives handouts. Shortcuts. Capitalism, as a result, appears more and more despotic. Decoupled from the notion of merit and work. You know that nowadays, you make more money by speculating than by working, right? And speculation is mostly done by machines.
    You are right to say that people don't take care of a rented house the way they do a house they own. But you do not have to do this the Soviet way, you know. In the future, AI will be able to help people create their free housing, housing they will care for because it was their project.
    I am confident that we'll be able to create a universal income covering basic needs without raising the price of bananas to $10,000 a pound. It will have to be done in a smart way. As I said, the competitive drive we have under capitalism will still exist. We'll still have to work to improve our conditions or to reach our dreams. We'll still have to struggle to build projects. The dream is not for everyone to own the same amount of money. It is to cover basic needs in order to stop humans from preying on humans economically. Just because humanity is rife with fraud and corruption doesn't mean we can't improve it. Fraud and corruption largely happen because basic needs are not covered.
    Already you can see huge differences between the countries of Scandinavia, with their stronger welfare states, and countries that prey on the poor. The social model works. It just has to be improved.
    As authors, we all know there are far more books on Amazon than humanity can ever read. We know that there are many authors writing books with no readers at all. Why would they stop writing if their basic needs were covered? They are already writing for nothing, and for nobody but themselves.

  7. Thanks for more intelligent discourse, Alan. I appreciate it.

    On any measure of intelligence, AI is smarter than us right now. It has limitations programmed into it so it isn't autonomous, but it can do things better than people in many areas. It hasn't replaced all of us yet, but it will replace more and more of us.

    The cooling problem will be solved. Tech problems are always overcome.

    I don't know what you mean when you say AI is giving us handouts. Can you cite some examples?

    >>>AI will be able to help people create their free housing, a housing they will care for because it was their project.

    Can you explain what you mean?

    I believe tax is government-sponsored theft. I pay property taxes, annually, on property I own. Income tax, self-employment tax, sales tax; the list goes on.

    Nothing is free. We pay for things with money, work, and time. UBI and free housing don't work, anywhere. Scandinavia is a market economy with capitalism, private ownership, and open competition. They have social welfare programs paid for with high taxes--as high as 46%.

    Fraud and corruption happen because they are ingrained in human nature. In a capitalist society, that is kept in check with competition and minimal govt interference.

    UBI and free housing are utopian ideologies. They have never worked, anywhere. No country ever gets better by giving the govt more power, or by giving handouts.

    Bottom line: basic needs won't ever get covered no matter how many people want it and how many laws are passed. It goes against history, human nature, and mother nature. No lifeform exists that gets a free ride. We all need to work, and the incentive is to better ourselves and to be useful.

    But you are welcome to donate more than the taxes you already pay to help the less fortunate.

    As for authors writing books that aren't read; that's a brutal reality of art. Van Gogh never sold a single painting.

    I don't believe UBI or free housing would usher in a Renaissance of creative output. I think it would usher in WALL-E; fatties on floating chairs who forgot how to walk. We don't need more free time to doom-scroll social media sites and troll each other. We don't need a Department of Free Housing that will do just as shitty a job as every other Department in our govt.

    We need to cut govt spending, root out fraud and corruption, and incentivize people to work.

    1. Sorry for the late reply, Joe.
      An AI that resorts to hallucinations when it doesn't know how to answer certain questions is no smarter than we are. Very often, we could say that AI offers a semblance of intelligence, a clever disguise. It is undeniable that AI surpasses us in certain areas of calculation and data retrieval. This is also the case in the game of Go, where the specialization of AI has led to impressive results. Maybe it will do the same in programming, soon. But the resources used in terms of chips, energy, servers, and human contribution mean that the AI is actually cheating all the time. Comparing the superior intelligence of AI to that of a human being is about as subtle as saying that a library of a hundred thousand books contains more knowledge than the brain of an average human.
      Could AI ever replace JA Konrath when it comes to writing? AI would not only have to clone Konrath, but also reproduce all of his life experiences and journeys since birth. His emotions, his feelings, his way of thinking. AI will never be able to reproduce the uniqueness of an individual, and in that sense, if it replaces us, it will only be a pale substitute for what we were.
      When I say AI is giving us handouts, I mean, for example, that if I want an audiobook, Amazon's AI-powered virtual voice will produce a 50-hour audiobook in 5 minutes. That's a handout. But of course, it is not the same as an audiobook performed by talented voice actors.
      Nowadays, farmers can program their harvesters and simply watch them work. So the day when we become fatties like in Wall-E may not be so far. Whether we like it or not, technology gives us more free time.
      Money is a human convention. As are borders. Tax is also a human invention. Tax is not theft as long as we agree that the most predatory instincts of the individual must be moderated.
      To say that an oil field belongs to whoever owns the land on which it is located is a convention built on other conventions. To say that poor people are absolutely necessary for there to be rich people is not a convention, but a human belief based on an instinct for domination. Civilizations that are not based on mutual predation but on cooperation are in fact entirely possible, and in any case desirable, given that we live in a finite world where we must not only share resources but also ensure that they can be renewed for our descendants.
      >>>AI will be able to help people create their free housing, a housing they will care for because it was their project.

      Can you explain what you mean?

      I believe that in the future, it will be much easier to build your home from scratch: robots, greater mastery of insulating materials, greater mastery of ways to capture natural energy. If a welfare society gives each child the means (money or materials) to create his or her own house, children will be able to use AI as an architect to design their future homes. It will be their house, and they will care for it. Free housing has never worked because it was never done the proper way. If, as adults, we want to build better houses with better materials, then it will become necessary to make money. As a consequence, the banking system, which currently relies heavily on real estate loans, will have to shift its focus largely toward providing capital for profitable businesses.

      "No country ever gets better by giving the govt more power, or by giving handouts." That is a very questionable claim, my friend. If the govt really works for the people, then giving more power to the govt is tantamount to giving the people more power. Handouts mustn't be viewed as charity. I think that machines will make our work so much easier in the future that there will come a point where earning money in real life will become almost as easy as it is in a video game like Skyrim. The only real obstacle to the sharing of resources on Earth is not resources: it's humans. Once we have understood this, we can evolve. And we can focus on important matters, such as how not to become as fat as the average person in WALL-E. ;)

  8. This comment has been removed by the author.

  9. Matt Harris, 9:44 PM

    Hi, JA. Not sure if you have seen this study out of Stanford that was just published. It proves you right. Copies of copyrighted works are present in the LLM models verbatim, without the permission of the copyright owners. Model owners do a better or worse job of adding guardrails to conceal this fact. The study uses jailbreaking techniques to get around that obfuscation, with varying success, but its conclusion is that the complete works are in the model. From the conclusion: "it can be feasible to extract large quantities of in-copyright training data from production LLMs."

    Interesting article.

    https://arxiv.org/html/2601.02671v1

  10. Hi again, Alan.

    >>An AI that resorts to hallucinations when it doesn't know how to answer certain questions is no smarter than we are.

    One doesn't have anything to do with the other. It is smarter, and sometimes hallucinates.

    >>>Comparing the superior intelligence of AI to that of a human being is about as subtle as saying that a library of a hundred thousand books contains more knowledge than there is in the brain of an average human.

    False equivalence. A library doesn't think. AI does. And does so at levels impossible for humans.

    >>>AI will never be able to reproduce the uniqueness of an individual, and in that sense, if it replaces us, it will only be a pale substitute for what we were.

    I disagree. It will be able to completely digitize a human mind, down to the last neuron.

    >>>I mean that for example, if I want an audiobook, the virtual voice powered by the AI of Amazon will offer me a 50 hours audiobook in 5 minutes.

    That's capitalism, not a handout. The service is paid for, not free.

    >>>Money is a human convention. As are borders. Tax is also an human invention.

    To have a civilization requires borders, money, and a government. The civilizations that work the best rely on capitalism and human rights.

    You shouldn't confuse a utopian ideology with real-life examples.

    >>>the children will be able to use the AI as an architect to design their future home.

    Someone has to pay for this, Alan. And the concept "it hasn't worked before because no one ever did it correctly" is just plain wrong.

    It hasn't worked before because it doesn't work. Ever.

    >>>If the govt really works for the people, then giving more power to govt is tantamount to giving people more power.

    Untrue. Power always corrupts. ALWAYS.

    >>>I think that machines will make our work so much easier in the future that there will come a point where earning money in real life will become almost as easy as it is in a videogame like Skyrim.

    I envy your optimism. But the laws of supply and demand don't allow for free machines to exist and do our work. Someone has to pay for them. And if handouts are involved, the price of everything goes up accordingly.


  11. "False equivalence. A library doesn't think. AI does. And does so at levels impossible for humans."

    Assemblies of neural networks don't think. They compute, they predict, they correlate. They apply rules.

    Here is what humans have that AI lacks:

    - intention of its own

    - consciousness or subjective experience

    - a "lived" understanding of the world

    AIs are capable of simulating certain aspects of human behavior. They are capable of adopting strategies unknown to humans in certain games thanks to their greater ability to calculate and process many factors. That's why it's easy to confuse them with intelligent beings. But beings are autonomous. AIs are not. They must be given instructions.

    "That's capitalism, not a handout. The service is paid for, not free." All my books have audiobook versions. Amazon charged me $0 for those audiobooks.

    Even AI translations are a form of handout, because they are way cheaper than traditional translations.

    "To have a civilization requires borders, money, and a government." We are moving towards a world government. Why? Because of the existence of an increasing number of international treaties. Because we realize that issues such as COVID-19 and climate change are much better managed with a world government. International treaties are real-life examples.

    "It hasn't worked before because it doesn't work. Ever."

    Appropriation, Joe. The concept will work because of appropriation, among other things. We care for things we appropriate.

    "Untrue. Power always corrupts. ALWAYS." Not to the same degree, depending on the countervailing powers. The ranking of the most corrupt countries shows huge differences, due to modes of governance and the presence or absence of countervailing powers.

    The Soviet Union failed because the model was not democratic and lacked checks and balances. Taxes on robots, but also on IT, will generate huge amounts of money. Much more efficient and intelligent management of public spending, no doubt aided by AI, will lead to considerable savings. Just look at the progress of GDP between the 18th century and the present day. The money is there, in fact. The distribution is poor. One could even say: abysmal.

    In order to avoid price spikes, we will need to have a smart strategy for distributing goods. This is not an insurmountable obstacle, far from it. It will require a restructuring of society at a world level. What will exist in the future has never been achieved before. That is why we will have to invent things as problems arise. A bit like when you write a novel. And yes, I am optimistic, and I am proud of it.

  12. I forgot: "I disagree. It will be able to completely digitize a human mind, down to the last neuron."
    We'll have to agree to disagree on that one, Joe. In some ways, you are more optimistic than me. ;)

