Just the word choice of the title — Empire of AI — makes it clear where journalist Karen Hao stands. “Empire” is not a neutral description; it signals consolidation, hierarchy, power, and the costs of expansion. Even the cover design reinforces this with its orange and pink sunset hues that evoke both grandeur and foreboding. That framing runs through every chapter of Hao’s sweeping chronicle of artificial intelligence, from the philosophical debates of the 20th century to the commercialization of OpenAI and beyond.
Hao, who spent years covering AI for MIT Technology Review and The Wall Street Journal, had extraordinary access to Sam Altman and the OpenAI orbit. The result is a book that reads like a novel — vivid character sketches, rich backstory, tension rising with every page — but it is not fiction. It is reportage stretching across decades, contextualizing today’s generative AI boom as the culmination of a long arc of scientific rivalry, Silicon Valley culture, and corporate strategy.
My aim here is not to endorse or dismiss Hao’s conclusions. This is her view, grounded in reporting and experience. I came to this book fresh — no reviews, no summaries, just the text itself for my book club. What Empire of AI did for me, as a lawyer and policy leader, was sharpen the questions I ask about the industry: how law and governance should respond to rapid scaling, how much we can trust self-regulation, and where human accountability must remain central.
Reasons to Read
First, Hao is a gifted writer. She makes Sam Altman feel like a character in a novel — aloof yet magnetic, ambitious to the point of obsession. Elon Musk appears as a clear villain. Ilya Sutskever, OpenAI’s co-founder and chief scientist, takes on almost messianic overtones. These portrayals are not caricature; they are drawn from observation and interviews, but they are written with a narrative flair that pulls you in. You also experience the perspectives of employees caught in the crossfire — from Google researchers navigating the fallout of the infamous “Stochastic Parrots” paper to data labelers in developing countries whose work conditions expose the human cost of training AI systems. Hao captures workplace tensions where employees find themselves caught between research integrity and corporate pressures.
Second, the book is far broader than a profile of OpenAI. Hao traces AI’s intellectual lineage back to the long rivalry between symbolism (rule-based approaches) and connectionism (neural networks). Connectionism, championed by Geoffrey Hinton and taken forward by Sutskever, eventually rose to dominance and laid the foundation for today’s deep learning systems. This history is essential: it reminds us that AI did not appear suddenly in 2022 when ChatGPT was released, but builds on decades of contested ideas. I found myself underlining passages and flipping back and forth between chapters — a sure sign that a book is dense with interconnected ideas worth revisiting.
Third, Hao explains technical terms in a way that non-specialists can understand. She unpacks what “pre-training” means in “generative pre-trained transformers” — noting that the “transformer” architecture itself originated at Google. She describes training datasets like Common Crawl and GitHub repositories, and how books and artwork were scraped to fuel LLMs. Hao frames this as a raging debate over copyright and fair use, which has proven almost prophetic given the wave of IP infringement lawsuits that have since emerged. She also captures Google’s ironic position: having invented the foundational technology yet finding itself playing catch-up in the commercial race it inadvertently enabled. Her explanations are colorful and plain-spoken without oversimplifying.
Finally, the book brings the drama. It covers the infamous “Stochastic Parrots” paper — the research arguing that large language models are sophisticated pattern-matching systems rather than anything approaching true understanding — and the institutional pushback that followed its publication. Hao documents the OpenAI–Anthropic divorce and the fever-pitch race to commercialize generative AI. She makes you feel like you’re watching The Social Network unfold in real time, except the stakes are global. Hao emphasizes that this wasn’t inevitable: “Not even in Silicon Valley did other companies and investors move until after ChatGPT to funnel unqualified sums into scaling. This included Google and DeepMind…It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive…that created a ripe combination for its particular vision to emerge and take over.” (p. 132) Much of this obsession centers on achieving AGI — artificial general intelligence — a concept that remains theoretical and poorly defined even by the book’s end, yet drives trillion-dollar investments and existential fears alike.
Things I Loved

Hao captures the Silicon Valley ethos: ask forgiveness, not permission. Startups pushed into legal gray zones of copyright and privacy, assuming regulators would lag. It was easier to defend datasets built from pirated books and scraped art after the fact than to seek permission up front.
She also traces the OpenAI–Anthropic “Divorce”, showing how tensions over safety and profit produced two rival companies. That tension has proven prophetic: in August 2025, Anthropic revoked OpenAI’s API access to Claude, citing terms-of-service violations. What began as an internal rift has become an industry schism.
At times, Empire of AI reads like a script: gods and demons, bunker plans, “doomer” vs. “boomer” factions (pessimists vs. optimists). The storytelling makes the stakes feel urgent even if you don’t follow AI policy daily.
And Hao crystallizes her thesis with sharp quotes:
- On OpenAI’s mission shift: “Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission … may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure.” (p. 400)
- On exploitation: “The empire’s devaluing of the human labor that serves it is also just a canary: It foretells how the technologies produced atop this logic will devalue the labor of everyone else.” (p. 223)
- On the influence of AI over science and research: “The history of AI shows us that AI development has always been shaped by a powerful elite. Even in the early days, before commercial interests made the politics of the AI revolution far more visible, the field’s scientific exploration has lurched and swerved amid heated clashes over funding and influence.” (p. 94)
- On AI’s power concentration: “This is the empire’s logic. The perpetuation of the empire rests as much on rewarding those with power and privilege as it does on exploiting and depriving those … without them.” (p. 115)
What I Wanted More Of

While I admire Hao’s sweep, her framework leans deterministic — as if machines and their corporate backers dictate destiny, and that destiny is imperial collapse, leaving weakened humans to pick up the pieces. Hao isn’t alone in her doom-and-gloom perspective; even the recent MIT report claiming that 95% of enterprise AI pilots fail to deliver measurable returns often misses the nuance of how success is actually measured and managed.
Altman, meanwhile, is both vilified and deified. He emerges as an anti-hero, “the most ambitious man on the planet,” and yet a stereotypical megalomaniac CEO. Hao leaves him unredeemed: neither villain nor visionary, but something uncomfortably in between.
She also underplays democratization. Power may concentrate, but AI lowers barriers too. Anyone with a laptop can now use ChatGPT, Claude, or open-source models on GitHub. Tools like Model Context Protocol (MCP) servers give professionals and hobbyists alike an entry point into innovation.
This is where I diverge from Hao. I lean optimistic, maybe even rose-colored. That optimism is partly generational: I formed my identity before internet saturation and AI ubiquity. Younger generations, raised alongside these tools, face different risks and require more thoughtful safeguards for mental development. But optimism matters. It keeps alive the possibility that democratization, not empire, will define this era.
My Reactions: Polarizing, Yet Pleasing

Scaling and Resources
Hyperscaling does demand vast compute, electricity, water, and minerals. Microsoft and OpenAI’s planned Stargate supercomputer illustrates the scope. But human systems adapt — cloud computing once seemed unsustainable too.
Lawsuits and Responsibility
Hao’s caution about harms feels sharper now amid lawsuits like Raine v. OpenAI, where parents allege ChatGPT contributed to their son’s suicide. These cases are heartbreaking. But causation is complex, as courts learned in the Michelle Carter texting case. Technology may amplify risk, but human responsibility remains central.
Safety and Governance
Hao argues OpenAI abandoned safety for speed. The erosion of nonprofit ideals after Microsoft’s billion-dollar funding supports that claim. But the real lever is governance: law, tax, compliance. Risk can be right-sized with rules, not just promises.
Empire Logic vs. Human Resilience
Empire logic is real: concentration of power, exploitation of labor. But empires are not destiny. Antitrust law, labor movements, and treaties have rebalanced power before. We should expect the same with AI.
Closing Takeaway: Why I Still Have Faith in Human Adaptation

Empire of AI is worth reading not because you must agree, but because it is too important to ignore. Hao’s narrative spans decades, weaving breakthroughs, betrayals, and anxieties into a single story. It reads like a novel, but it is fact. Her reporting demands a visceral reaction — one that sharpens the questions law, tax, and governance leaders must ask.
Her framing of “empire” is provocative. It forces us to ask: are we consolidating power in ways that exploit, or opening broader participation in the next era of technology? The answer isn’t fixed. It depends on governance, policy, and the choices we make now.
My view: humans remain smarter than the systems we build. Whatever the future of the “Empire of AI,” even in the most pessimistic scenario, we’ve adapted through wars, financial collapses, and technological revolutions that once seemed apocalyptic. We’ve always found ways to right-size ourselves. Now we stand at a crossroads: AI can scale human flourishing or magnify harm — but which path we take will always remain a human choice.
Follow Lili on LinkedIn and X

🔍 Discover What We’re All About
At Anant, we help forward-thinking teams unlock the power of AI—safely, strategically, and at scale.
From legal to finance, our experts guide you in building workflows that act, automate, and aggregate—without losing the human edge.
Let’s turn emerging tech into your next competitive advantage.
Follow us on LinkedIn
👇 Subscribe to our weekly newsletter, the Human Edge of AI, to get AI from a legal, policy, and human lens.
Subscribe on LinkedIn
