By Lili Kazemi
General Counsel & AI Policy Leader, Anant Corporation
What This Article Is About
The copyright battles over generative AI are unfolding on two distinct fronts. One set of lawsuits primarily challenges the inputs—whether companies can legally train models on vast collections of books, songs, and images without permission. These cases against Anthropic, Meta, and Apple ask whether ingesting creative works into training datasets constitutes infringement.
The second set targets the outputs—what AI systems generate and whether those outputs unlawfully replace the original works. This includes NYT v. OpenAI (challenging journalism summaries), Disney and Warner Bros. v. Midjourney (AI-generated characters), and the recent Britannica & Merriam-Webster v. Perplexity case (reference publishing through RAG systems).
Why Output-Focused Cases Matter
This article focuses on that second category: output litigation. As Judge Alsup noted in Bartz v. Anthropic, the plaintiffs hadn’t alleged infringing outputs—“[it] would be a different case” if they did. That “different case” is now emerging across multiple industries, and it could reshape AI copyright law entirely. The Anthropic settlement only addressed past training conduct and left output liability unresolved—making the outcomes of pending cases like Perplexity and Midjourney crucial for determining how courts will handle AI-generated content going forward.
The economic stakes are clear. When AI systems generate outputs that compete directly with original works, they threaten core revenue streams:
- Answer engines that provide summaries instead of directing users to publisher sites undercut advertising and subscription revenue
- Image generators producing recognizable characters can replace licensed art and merchandising
- AI tools that reproduce or paraphrase reporting compete directly with original journalism
Courts are now being asked to draw the line between fair use and unlicensed substitution—determining when AI outputs step into the same marketplace as the works they’re built upon.
The Latest Cases Defining the Landscape of AI Copyright Law
The cases discussed below primarily focus on how copyright law applies to the outputs of large language models (LLMs). For a summary of the cases alleging that AI companies infringe works by using them as inputs to train LLMs, see Part I of this series, The Wild West of AI.
Table of Recent Cases
| Case | Docket No. & Court | AI Technology | Plaintiff Allegations | AI Company Defense | Key Quote / Highlight |
|---|---|---|---|---|---|
| Bartz et al. v. Anthropic (for comparison to the output-focused cases) | 3:24-cv-05417 (N.D. Cal.) | Claude (LLM) | Training on pirated books; outputs reproduce literary text; market harm to authors | Fair use; transformative in nature; sole purpose of copying authors’ books was to train LLMs | Judge Alsup: plaintiffs did not allege infringing outputs—“this would be a different case” if they did. Soon after the parties proposed a $1.5B settlement in Bartz, authors Grady Hendrix and Jennifer Roberson filed a proposed class action against Apple in the Northern District of California, alleging that Apple trained its OpenELM and Apple Intelligence models on Books3, a dataset of ~196,000 pirated books. |
| Concord v. Anthropic | 5:24-cv-03811 (N.D. Cal.) (transferred from the Middle District of Tennessee) | Claude (LLM) | Lyric reproduction from Concord’s catalog; unauthorized training | Guardrails in place; no intent to substitute | Focus on music industry protections and secondary liability. “This foundational rule of copyright law dates all the way back to the Statute of Anne in 1710… That principle does not fall away simply because a company adorns its infringement with the words ‘AI.’” |
| In re: OpenAI Copyright Litigation (includes NYT & Authors Guild) | 1:25-md-03143 (S.D.N.Y.) | ChatGPT, Copilot (LLM) | Verbatim reproduction of news; summaries substitute for reporting; licensing bad faith | Outputs transformative; factual content limited protection; innovation benefits | Transfer order notes: outputs generate “verbatim and detailed summaries of news content.” Judge consolidates NYT claims with other cases that focus on infringing inputs; differences in claims do not present a “significant obstacle” |
| Encyclopedia Britannica v. Perplexity AI | 1:25-cv-07546 (S.D.N.Y.) | RAG “Answer Engine” | Free-rides on Britannica content; diverts click revenue; false attribution; trademark misuse | Not yet disclosed; likely fair use; cites attribution and transformative Q&A | Complaint: “Perplexity’s ‘answer engine’ eliminates users’ need to visit the original sources… and systematically starves web publishers of the revenue that funds their content creation.” Complaint cites Perplexity AI’s FAQs that assert users can “skip the links” and utilize AI-generated answers. |
| Disney, Universal & Warner Bros. v. MiniMax | 2:25-cv-08768 (C.D. Cal., filed Sept. 2025) | Hailuo AI (text-to-image/video) | Hailuo generates downloadable images/videos of Darth Vader, Wonder Woman, Shrek, Minions; MiniMax allegedly used them in its own marketing; refused to implement filters despite ability | Anticipated: fair use for training; expressive differences; jurisdictional objections | Complaint: “MiniMax’s bootlegging business model [is] not only an attack on [Disney] and the hard-working creative community that brings the magic of movies to life, but [] also a broader threat to the American motion picture industry.” |
| Disney v. Midjourney | 2:25-cv-05275 (C.D. Cal.) | Midjourney (Image Gen) | Outputs replicate Disney/Universal characters; “virtual vending machine” for IP | Fair use; user-driven prompts; not all use of content is for commercial purposes; no specific harm identified | Midjourney: plaintiffs “cannot have it both ways,” citing Disney CEO Bob Iger: “Technology is an invaluable tool for artists, and generative AI is no different.” |
| Warner Bros. Discovery v. Midjourney | 2:25-cv-08376 (C.D. Cal.) | Midjourney (Image & Video Gen) | Direct copying of DC, Looney Tunes, Cartoon Network, Scooby-Doo, Tom & Jerry; used in marketing | Defense not yet filed; likely fair use and user-agency | Warner complaint: “Midjourney thinks it is above the law. It sells a commercial subscription service… developed using illegal copies of Warner Bros. Discovery’s copyrighted works.” |
Case Spotlights

1. Britannica & Merriam-Webster v. Perplexity AI (S.D.N.Y., Sept. 2025)
Focus: RAG as a Free-Rider
Perplexity’s “answer engine” is accused of crawling Britannica and Merriam-Webster, copying articles, and producing near-verbatim outputs. Plaintiffs also allege Lanham Act violations, claiming the system falsely attributes AI hallucinations to their marks.
Key Quote:
“Perplexity’s ‘answer engine’ eliminates users’ need to visit the original sources… and systematically starves web publishers of the revenue that funds their content creation.”
While Britannica and Merriam-Webster frame their case squarely as copyright infringement, a new lawsuit by Rolling Stone’s parent company against Google over AI Overviews raises strikingly similar themes. It is not a copyright case but an antitrust one, yet the underlying concern is the same: AI summaries that free-ride on publishers’ work, divert traffic, and erode the economic foundations of original content.
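To make the technology at issue concrete, here is a minimal sketch of the retrieval-augmented generation (RAG) pattern behind an “answer engine”: crawl publisher pages into an index, retrieve the passages most relevant to a user’s question, and compose an answer with citations appended. The corpus, the keyword-overlap scoring, and the answer template are all illustrative assumptions, not a description of Perplexity’s actual system.

```python
# Minimal, hypothetical sketch of a RAG "answer engine" pipeline.
# The corpus, keyword-overlap scoring, and answer template are all
# illustrative assumptions, not Perplexity's actual implementation.
from collections import Counter

# Stand-in for pages a crawler has already fetched and indexed
# (the step the complaint says copies publisher articles into the system).
CRAWLED_CORPUS = [
    {"url": "https://publisher.example/entry-a",
     "text": "An encyclopedia entry describing topic A in detail."},
    {"url": "https://publisher.example/entry-b",
     "text": "A dictionary definition of term B with usage notes."},
]

def relevance(query: str, passage: str) -> int:
    """Crude keyword-overlap score (production systems use embedding similarity)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list:
    """Return the k crawled passages most relevant to the query."""
    ranked = sorted(CRAWLED_CORPUS, key=lambda d: relevance(query, d["text"]), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Compose an answer directly from retrieved publisher text, with citations.
    Attribution is present, but the user never needs to click through."""
    passages = retrieve(query)
    body = " ".join(p["text"] for p in passages)   # near-verbatim reuse of source text
    cites = ", ".join(p["url"] for p in passages)
    return f"{body}\n\nSources: {cites}"

if __name__ == "__main__":
    print(answer("what is the definition of term B"))
```

Even in this toy version, the answer is assembled from the publishers’ own text with links merely appended, which is the substitution dynamic the complaint describes.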
2. Disney & Universal v. Midjourney (C.D. Cal., June 2025)
Focus: Character Cloning & Commercial Substitution
The complaint paints Midjourney as a “virtual vending machine” for Disney and Universal characters. Outputs cited include near-identical Elsa, Shrek, and Simpsons imagery.
Midjourney’s response leans heavily on fair use, comparing training to human learning and shifting responsibility to user prompts. Most provocatively, it turned plaintiffs’ own words against them.
Key Quote:
“As a matter of equity, Plaintiffs cannot have it both ways, seeking to profit—through their use of Midjourney and other [GenAI] tools—from industry-standard AI training practices on the one hand, while on the other hand accusing Midjourney of wrongdoing for the same”
3. Disney, Universal, and Warner Bros. v. MiniMax: A Hollywood Showdown
If the Midjourney suits looked like warning shots, the MiniMax case is the full barrage. Three of the biggest studios—Disney, Universal, and Warner Bros. Discovery—filed a joint complaint in Los Angeles federal court against Shanghai-based MiniMax, maker of the Hailuo AI platform.
The plaintiffs’ evidence is unusually vivid: screenshots of Hailuo generating downloadable videos and images of Darth Vader, Wonder Woman, and the Minions. Even more damning, the lawsuit shows MiniMax using those outputs in its own marketing campaigns on Instagram and WeChat, branded with “Hailuo” logos. That shifts the case from theoretical infringement into clear commercial substitution.
According to the complaint, MiniMax allegedly ignored repeated requests to install copyright filters—despite already deploying filters for nudity and violence. The complaint frames this as willful blindness and seeks statutory damages of up to $150,000 per infringed work.
Key Quote:
“MiniMax’s bootlegging business model and defiance of U.S. copyright law are not only an attack on Plaintiffs and the hard-working creative community that brings the magic of movies to life, but are also a broader threat to the American motion picture industry.”
Jurisdiction won’t be simple: MiniMax is a Chinese company with a Singaporean affiliate. But the studios point to U.S. app-store distribution, Stripe payment processing, and CDN services as ties to the American market, and courts have often found contacts like these sufficient to establish personal jurisdiction over foreign defendants.
4. Warner Bros. Discovery v. Midjourney (C.D. Cal., Sept. 2025)
Focus: Expanding the Studio Front
Filed in September 2025, Warner Bros. Discovery’s complaint accuses Midjourney of “brazenly dispens[ing]” its IP “as if it were their own,” pointing to Batman, Bugs Bunny, Scooby-Doo, and Rick and Morty outputs. The complaint highlights how even a generic prompt like “classic superhero battle” yields recognizable Warner characters.
Warner’s opening salvo is unusually sharp, pleading that Midjourney “will not stop stealing” unless the court orders it to.
Key Quote:
“Midjourney thinks it is above the law. It sells a commercial subscription service… developed using illegal copies of Warner Bros. Discovery’s copyrighted works.”
5. Concord Music/UMG v. Anthropic (N.D. Cal., Oct. 2023)
Focus: Lyrics Reproduction
Publishers allege that Claude was trained on their copyrighted lyrics and reproduces them in its outputs. Guardrails have since been implemented, but the fight over training data continues. The judge presiding over the case denied the publishers’ request for a preliminary injunction against Anthropic, ruling that they had failed to demonstrate “irreparable harm.” Importantly, that ruling does not disturb the agreement approved by the court in January 2025, under which Anthropic implemented “guardrails” to prevent Claude from reproducing copyrighted lyrics in its responses to users.
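For readers curious what such a guardrail can look like in practice, below is a minimal sketch of one common approach: checking a drafted response for long verbatim overlaps with a protected-lyrics corpus before it is returned to the user. The n-gram threshold, the corpus, and the refusal message are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Minimal, hypothetical sketch of an output guardrail against verbatim
# lyric reproduction. The n-gram threshold, corpus, and refusal message
# are illustrative assumptions, not Anthropic's actual implementation.

PROTECTED_LYRICS = [
    "example protected lyric line one example protected lyric line two",
]

def ngrams(text: str, n: int) -> set:
    """All contiguous n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reproduces_lyrics(draft: str, n: int = 6) -> bool:
    """Flag a draft that shares any contiguous n-word run with a protected lyric."""
    draft_grams = ngrams(draft, n)
    return any(draft_grams & ngrams(lyric, n) for lyric in PROTECTED_LYRICS)

def guarded_reply(draft: str) -> str:
    """Return the model's draft only if it clears the verbatim-overlap check."""
    if reproduces_lyrics(draft):
        return "I can't reproduce those lyrics, but I can describe the song instead."
    return draft

if __name__ == "__main__":
    # Blocked: the draft copies a long run of the protected lyric.
    print(guarded_reply("Sure, the lyrics are: example protected lyric line one example protected lyric"))
    # Allowed: a description rather than a reproduction.
    print(guarded_reply("The song is an upbeat track about resilience."))
```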
6. The New York Times v. OpenAI (S.D.N.Y.) (Dec. 2023 complaint, consolidated in 2025)
Focus: Journalism & Market Substitution
Now consolidated as In re OpenAI, this case tests whether AI can lawfully compete with journalism. The court’s order requiring preservation of 400+ million ChatGPT logs underscores the discovery risks ahead.
Key Quote:
“Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.”

Competing Legal Theories
Market Substitution Concerns
- Economic Harm: AI answers and images can substitute directly for licensed works
- Direct Competition: Outputs are designed to replicate the “look and feel” of originals
- Trademark Risks: When hallucinations or omissions are paired with trusted brands, reputational harm follows
Innovation and Fair Use Arguments
- Fair Use: Training resembles human learning; outputs are transformative
- User Agency: Companies argue they don’t control the specific prompts or outputs
- Public Benefit: Broad liability could chill socially beneficial AI innovation
- Attribution: Some systems link back to sources or block outputs via guardrails
The Free Rider Pattern

Economists define the free rider problem as what happens when someone benefits from a resource without contributing to the cost of creating or maintaining it. In public goods theory, this is the classic tension: everyone enjoys the streetlights, but not everyone pays their share of the electricity bill.
That same tension is surfacing in AI copyright disputes.
- Creators and Rightsholders argue that AI companies are exploiting decades of investment in journalism, film, and music without permission or compensation.
- AI Companies respond that publishers and studios themselves are adopting generative AI tools—so they can’t condemn a technology they also rely on.
The issue isn’t whether both sides use technology. It’s about scale, leverage, and value extraction.
Key Questions and Strategic Takeaways
The emerging case law makes clear that fair use in the AI context is highly fact-dependent. Courts are scrutinizing not just whether copyrighted works are used, but how, why, and with what effect. Both AI developers and content owners need to adjust their strategies with this in mind.
For AI Developers:
- Acquisition Matters: Document how training materials are sourced. Pirated or unauthorized copies weigh heavily against fair use (see the provenance sketch after this list).
- Purpose & Use: Limit retention of works to what is necessary for transformative training. Storing libraries of copyrighted material for general use risks liability.
- Guardrails & Outputs: Prevent systems from regurgitating verbatim text, lyrics, or images. Courts will look closely at whether outputs serve as market substitutes.
- Transparency & Partnerships: Proactively disclose training methods, adopt attribution practices, and pursue licensing arrangements with high-value content providers.
For Creators & Rightsholders:
- Document Harm: Build evidence that AI tools divert clicks, undermine subscriptions, or replace licensing revenue. Courts require concrete—not speculative—market harm.
- Frame Substitution Clearly: Emphasize when AI outputs stand in for the original work in user workflows (news summaries, lyrics, character art).
- Expand Claims Beyond Copyright: Consider trademark and false attribution theories where AI associates errors or hallucinations with trusted brands.
- Negotiate While Litigating: Litigation may set precedent, but licensing and partnership models are emerging as parallel revenue streams.
Bottom Line
The core question is evolving from “Can AI train on copyrighted works?” to “Can AI outputs replace those works in the marketplace?”
Outputs that substitute for creative labor or eliminate licensing revenue are the riskiest legal frontier. The outcome of these lawsuits will shape not only copyright doctrine, but also the economics of the entire AI ecosystem.
This piece is Part III of my four-part series on AI and copyright.
- Part I, The Wild West of AI, broke down the copyright rules for training data, the Meta and Anthropic rulings, and how courts are beginning to draw the first lines around what’s fair—and what’s not.
- Part II tackles the Napster problem: why analogies to music piracy fall short in the world of LLMs, and why AI synthesis requires a new legal lens.
- Part III (you’re here) zooms in on output liability—when models mimic protected works in style, voice, or structure—and whether that’s enough to trigger copyright infringement.
- Part IV will shift from law to money: exploring how AI systems are being valued, taxed, and priced across jurisdictions, and what this means for global IP and transfer pricing regimes.
Lili Kazemi is General Counsel and AI Policy Leader at Anant Corporation, where she advises on the intersection of global law, tax, and emerging technology. She brings over 20 years of combined experience from leading roles in Big Law and Big Four firms, with a deep background in international tax, regulatory strategy, and cross-border legal frameworks. Lili is also the founder of DAOFitLife, a wellness and performance platform for high-achieving professionals navigating demanding careers.
Follow Lili on LinkedIn and X

🔍 Discover What We’re All About
At Anant, we help forward-thinking teams unlock the power of AI—safely, strategically, and at scale.
From legal to finance, our experts guide you in building workflows that act, automate, and aggregate—without losing the human edge.
Let’s turn emerging tech into your next competitive advantage.
Follow us on LinkedIn
👇 Subscribe to our weekly newsletter, the Human Edge of AI, for coverage of AI through a legal, policy, and human lens.
Subscribe on LinkedIn

