A bronze scale of justice cracked under the weight of books on one side and streams of binary code on the other, representing the strain of copyright law in the AI era

When AI Creates Value, Settlements Aren’t Arm’s Length — They’re Arm’s Twist

The $1.5 billion Anthropic settlement reveals a fundamental truth about AI economics: when copyright meets artificial intelligence, traditional valuation benchmarks break down.

The Speeding Ticket, Not the Speed Limit

An AI robot racing across a digital highway of glowing book pages stopped by a giant red copyright ticket sign, symbolizing the Anthropic settlement as a speeding ticket rather than a true valuation benchmark

Anthropic’s $1.5 billion class settlement with authors is historic — the largest publicly reported copyright recovery on record — but it isn’t a price tag for AI training rights. It’s a retroactive penalty for past inputs. At roughly $3,000 per book (about 500,000 works), the number looks hefty until you remember what it isn’t: a forward-looking license, output immunity, or an arm’s-length valuation of what books are worth to model performance.

Two data points show why. First, the court observed Anthropic could have acquired hard copies for about a dollar each and scanned them — a comparison that highlights how far the settlement sits above bare copy costs, not how close it comes to a true license rate. Second, Anthropic’s partnerships lead Tom Turvey told the court there’s no viable at-scale market for licensing book datasets: the few offers he found were economically unjustifiable,[1] and the most cited example — a $5,000 per-book, three-year, opt-in HarperCollins program — is a boutique exception, not a scalable solution.

Put differently: Anthropic didn’t negotiate permission; it optimized risk. The company ingested nearly 7 million shadow-library books, then paid to wipe the slate clean on past conduct. When AI creates value, this settlement tells us less about “what a book is worth to a model” and more about what it costs to avoid a trial. For a company valued at $183 billion after closing a $13 billion funding round, this isn’t a valuation benchmark — it’s the cost of doing business.

Anthropic’s lawyers were betting on forgiveness being cheaper than permission. The settlement proves they were right.

The Arm’s Length Illusion

In transfer pricing (the method for valuing transactions between related corporate entities for tax purposes), the “arm’s length” standard asks what unrelated parties would agree to for the same rights in an open market. The Anthropic settlement fails this test spectacularly.

This wasn’t a voluntary license negotiation. Judge William Alsup drew a sharp line: training AI on legally acquired books was transformative fair use, but training on pirated material was “inherently, irredeemably infringing.” The judge’s order found that Anthropic had pirated more than 7 million copies of books, potentially exposing the company to enormous statutory damages.

Faced with potential statutory damages of up to $150,000 per work — theoretically over $1 trillion for 7 million books — Anthropic had a gun to its head. This settlement represents damage control, not market pricing.
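
For readers who want to see the arithmetic behind that ceiling, a quick back-of-the-envelope check (using the roughly 7 million works cited in the filings and the $150,000 statutory maximum for willful infringement) confirms the order of magnitude:

```python
# Back-of-the-envelope statutory-damages exposure, per the figures cited above.
works = 7_000_000            # approximate number of pirated books at issue
max_statutory = 150_000      # 17 U.S.C. § 504(c) ceiling for willful infringement, per work

exposure = works * max_statutory
print(f"Theoretical maximum exposure: ${exposure:,}")
# Theoretical maximum exposure: $1,050,000,000,000 — over $1 trillion
```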

What Authors Actually Get: The Reality Check

Shattered copyright sign © on the ground, while a humanoid robot in a business suit casually steps over it carrying a briefcase full of glowing cash, bright neon backlighting, symbolic of efficient breach.

The $3,000 per book figure masks a more complex reality:

The Settlement Funnel:

  • Started with: ~7 million pirated books
  • Eligible for settlement: ~465,000-500,000 works (only those registered with the U.S. Copyright Office within specific timeframes)
  • Payment structure: Four installments over two years, with $300 million due by October 2, 2025
  • Publisher/author split: Non-mandatory 50/50 with publishers for trade books

The Price of Forgiveness Instead of Permission: Scope, Deletion, and Litigation Risk

Anthropic is paying only for past conduct through August 25, 2025, including “torrenting, scanning, retention, and use of works, including training, research, development, and production of AI models.”

Crucially absent:

  • Future training rights
  • Claims arising from allegedly infringing outputs
  • Any forward-looking license

Instead of granting a license, the settlement requires Anthropic to delete the copies derived from LibGen and PiLiMi. But while those datasets will be destroyed, the knowledge extracted from them will presumably remain embedded in Claude’s neural networks. The settlement buys amnesty for past sins, not permission for future use.

The Pricing Gulf and the Market That Isn’t There

Judge Alsup underscored the absurdity of using the $3,000 per-book settlement as a benchmark when Anthropic could have legally purchased physical copies for as little as $1 each. That comparison highlights how distorted the settlement figure is: it’s not a licensing rate, but a litigation premium.

Turvey’s sworn declaration explains why no “true” licensing price exists to fill that middle ground. In his first months at Anthropic, he explored book licensing at scale and found the options fundamentally inadequate. Out of tens of millions of published books, only a tiny fraction were even theoretically licensable for LLM training. And in those few cases, licensors demanded prices that made no economic sense relative to the incremental training value the books would add. As Turvey put it: there simply isn’t a viable market that could ever approach the scale Anthropic’s models require.

Judge Chhabria in Kadrey v. Meta[2] sharpened the point: copyright law doesn’t automatically create or protect a “training license” market. Even if one might emerge, courts won’t recognize it unless plaintiffs can prove market substitution or dilution. In other words, it’s not enough to assert that a market should exist — plaintiffs have to show that AI outputs are actually displacing sales of the original works.

The upshot is a pricing gulf with no middle ground: $1 for destructive scanning, $3,000 under settlement duress, and no functioning licensing market in between. That’s why this figure tells us little about the “arm’s length” value of AI training rights — it reflects litigation risk management, not market reality.

The parties themselves made this point. Class Counsel explicitly argued that the settlement was justified by the severe litigation risks the Class faced:

Once trial began on December 1, the Class faced a real risk of an adverse jury verdict, or a recovery far smaller than that provided for in the Settlement Agreement. As the Court noted in denying the motion to stay, “[f]or all we know at this stage, Anthropic will persuade the jury to find facts vindicating it completely.”

Bartz v. Anthropic, Unopposed Motion for Preliminary Approval of Class Settlement, September 5, 2025

The Real Benchmarks? “Voluntary” Licensing Deals

Consider the recent “voluntary” licensing deals noted in the Anthropic filings and press coverage.

It’s worth remembering that even “voluntary” licensing deals aren’t truly arm’s length. Axel Springer’s CEO called their pact with OpenAI a “deal with the devil,” while the Financial Times framed its agreement as a way to keep journalism sustainable. Publishers are negotiating under pressure; AI companies hold the leverage to move forward without permission until forced to pay.

These voluntary deals appear to value content more highly than settlements do, tie payments to ongoing use, and grant forward-looking rights. Yet even these “market” prices are shaped by the legal uncertainty hovering over the entire AI training ecosystem.

Lessons from Transfer Pricing: Courts Scrutinize Litigation-Influenced Agreements

Scales of justice tangled in red tape and patents, symbolizing how litigation-driven agreements like Medtronic’s Pacesetter deal distort true valuation.

The transfer pricing world offers a cautionary tale about using litigation-influenced agreements as valuation benchmarks. In Medtronic v. Commissioner (2016-2022), the Tax Court and Eighth Circuit wrestled with whether a patent cross-license agreement arising from litigation settlement (the Pacesetter agreement) could serve as a comparable uncontrolled transaction for transfer pricing purposes.

The courts’ concerns were telling: agreements arising from litigation carry the taint of legal pressure. The Eighth Circuit demanded extensive analysis of whether the Pacesetter settlement was truly comparable to an ordinary business transaction, noting that settlements reflect litigation risk and bargaining positions under legal threat — not pure economic value. After years of litigation and multiple remands, the Tax Court ultimately concluded that too many adjustments were needed to make the litigation settlement comparable, rejecting it as a reliable benchmark.

The parallel to Anthropic is clear: just as the Medtronic courts questioned whether litigation settlements could establish arm’s length pricing for ongoing royalties, we should question whether copyright infringement settlements can establish the true value of AI training rights. Both involve intellectual property, both involve legal coercion, and both produce prices distorted by litigation dynamics rather than market forces.

Not Everyone Can Throw Money at the Problem

An empty startup office with courtroom scales casting a shadow, representing ROSS Intelligence shutting down under copyright litigation

The Anthropic settlement shows what happens when a well-funded AI company can “throw money at the problem.” But not all players have that option.

ROSS Intelligence — once a promising AI-powered legal research startup — shut down in 2021 under the weight of litigation brought by Thomson Reuters over the use of Westlaw headnotes to train its search engine. Even though the company no longer exists, the litigation continues. In 2025, ROSS filed its opening brief in the Third Circuit, appealing Judge Stephanos Bibas’s ruling that its training practices infringed copyright.

That appeal now raises the very questions the Anthropic settlement sidestepped: whether training on functional or factual legal texts can ever be fair use, and how copyright should apply to AI systems built in industries where licensing models are still unformed. Unlike Anthropic’s $1.5 billion deal, the ROSS case may force courts to provide answers — because there is no longer a company left to negotiate a payout.

Three Uncomfortable Truths for AI Valuation

1. Efficient Breach Lives in Silicon Valley

Anthropic’s behavior exemplifies “efficient breach” — violating rights when the expected cost of the violation is lower than the cost of compliance. With Anthropic expecting to make $5 billion in sales this year, a $1.5 billion settlement for foundational training data looks like a bargain.
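
The efficient-breach calculus reduces to a simple expected-value comparison. The sketch below is illustrative only: the probability, payout, and compliance-cost figures are hypothetical placeholders, not numbers from the filings.

```python
# Illustrative efficient-breach comparison: breach looks "rational" when the
# expected cost of getting caught undercuts the cost of complying up front.
def breach_is_cheaper(p_liability: float, expected_payout: float, compliance_cost: float) -> bool:
    """Return True if the expected cost of infringement is below the cost of compliance."""
    return p_liability * expected_payout < compliance_cost

# Hypothetical inputs: a 50% chance of a $3B judgment or settlement versus a
# (speculative) $5B cost of licensing comparable data at scale.
print(breach_is_cheaper(p_liability=0.5,
                        expected_payout=3_000_000_000,
                        compliance_cost=5_000_000_000))  # True
```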

2. Scale Changes Everything

Traditional copyright dealt with discrete violations. AI training involves millions of works simultaneously. The sheer scale transforms infringement from a legal issue into a business model optimization problem.

3. Knowledge Persists After Deletion

Destroying the pirated datasets doesn’t remove the knowledge from Claude’s neural networks. This is like shutting down Napster after everyone’s already downloaded the music — the genie doesn’t go back in the bottle.

What This Means for the Industry: The True Cost of “Move Fast and Break Things”

Had Anthropic pushed forward to trial in December, the financial stakes could have escalated far beyond the settlement figure. A jury loss might have produced damages in the multiple billions — enough to jeopardize the company’s future. The settlement avoided that existential risk, but the trade-off is clear: it reflects crisis management, not true market valuation.

This outcome also highlights the broader tension in AI copyright. On one side, creators deserve recognition and payment; on the other, AI companies depend on massive datasets to remain competitive. The existing copyright framework struggles to reconcile those competing needs at AI scale. What Anthropic’s deal actually delivers is not a framework for future licensing, but a price tag for resolving one chapter of past infringement.

This isn’t an arm’s length transaction — it’s an arm’s twist, with the courts providing the torque. As we build frameworks for valuing AI training rights, we must recognize the difference between prices set by markets and those set by magistrates.

Meta and Anthropic are discussed and compared in my previous article, Can We Stop Comparing AI to Napster?

Lili Kazemi is General Counsel and AI Policy Leader at Anant Corporation, where she advises on the intersection of global law, tax, and emerging technology. She brings over 20 years of combined experience from leading roles in Big Law and Big Four firms, with a deep background in international tax, regulatory strategy, and cross-border legal frameworks. Lili is also the founder of DAOFitLife, a wellness and performance platform for high-achieving professionals navigating demanding careers.

Follow Lili on LinkedIn and X

🔍 Discover What We’re All About

At Anant, we help forward-thinking teams unlock the power of AI—safely, strategically, and at scale.
From legal to finance, our experts guide you in building workflows that act, automate, and aggregate—without losing the human edge.
Let’s turn emerging tech into your next competitive advantage.

Follow us on LinkedIn

👇 Subscribe to our weekly newsletter, the Human Edge of AI, to get AI from a legal, policy, and human lens.

Subscribe on LinkedIn


📎 Sidebar: Two Meanings of “Arm’s Length”

The $1.5 billion fund divided across 482,460 pirated works yields roughly $3,000 per book — a figure the motion highlights as four times the statutory minimum damages and fifteen times the innocent infringement minimum. But this payout is a per-work allocation, not a per-author windfall; where multiple rights holders exist (author + publisher), the amount will be split under a court-supervised distribution plan. And crucially, the deal covers only inputs — it releases claims about past training data but leaves output claims alive.

In other words, this is a litigation-driven compromise that prices risk, not a true market-tested license for the profit potential of copyrighted works in AI training.
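
As a check on the multiples above, the per-work math (assuming the standard 17 U.S.C. § 504(c) floors of $750 for ordinary infringement and $200 for innocent infringement) works out as follows:

```python
# Per-work allocation implied by the settlement, and the multiples the motion cites.
fund = 1_500_000_000
eligible_works = 482_460

per_work = fund / eligible_works
print(f"${per_work:,.0f} per work")                                   # $3,109 per work
print(f"{per_work / 750:.1f}x the $750 statutory minimum")            # 4.1x
print(f"{per_work / 200:.1f}x the $200 innocent-infringement floor")  # 15.5x
```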

| In Class Action Settlements (Rule 23) | In Tax & Transfer Pricing (TP) |
| --- | --- |
| Focus is on process: were negotiations adversarial, supervised by a mediator, and free of collusion? | Focus is on price: what would unrelated parties agree to for the same bundle of intangibles in an open market? |
| “Arm’s length” = assurance that class members weren’t sold short by collusive counsel. | “Arm’s length” = measure of true economic value of rights transferred (CUTs, profit splits, DEMPE analysis). |
| Risk avoidance drives outcomes: parties compromise based on litigation risk, not market value. | Market potential drives outcomes: value reflects profit potential of the rights licensed, not the cost of avoiding trial. |
| Benchmark = procedural fairness. | Benchmark = economic comparability. |

🔮 Sidebar: Cases That Will Force the Valuation Question (or Settlement)

Unlike Anthropic, some defendants can’t simply settle their way out. Several ongoing or newly filed suits are likely to push courts to confront valuation head on:

  • New York Times v. Microsoft & OpenAI (S.D.N.Y. MDL): Whether ChatGPT’s outputs infringe and displace revenue from news organizations.
  • Disney & Universal v. Midjourney (C.D. Cal.): Alleging that AI-generated images infringe character copyrights by substituting creative works.
  • Disney et al. v. MiniMax (C.D. Cal.): Aiming to test whether AI outputs may be held as derivative works in their own right.
  • Encyclopaedia Britannica & Merriam-Webster v. Perplexity (S.D.N.Y.): Claiming that Perplexity’s answer engine free-rides on encyclopedic content without payment.
  • Thomson Reuters v. Ross Intelligence (3d Cir. appeal): An older case still alive, questioning whether AI training on legal headnotes ever qualifies as fair use.
  • Hendrix & Roberson v. Apple (N.D. Cal.): Authors allege Apple used the Books3 dataset and Applebot web crawling to train its “OpenELM” AI models without consent, compensation, or credit. PublishersWeekly.com+1

Each of these will put pressure on courts to decide: if AI creates value by reference to copyrighted works, who should receive compensation and how should that value be measured when no clear comparable exists?


  1. See Declaration of Tom Turvey in Support of Defendant Anthropic’s Motion for Summary Judgment, filed August 19, 2024, Case No. 3:24-cv-05417-WHA
  2. Kadrey v. Meta Platforms, 3:23-cv-03417 (N.D. Cal.)