By Lili Kazemi | Founder, The Human Edge of AI
This Is Not a Story About Losing Control. It Is About Taking It.
“AI is going to replace our jobs.”
The words have become an ominous drumbeat, louder with every round of layoffs and every viral report. Andrej Karpathy’s recent job visualizer captured the mood perfectly. It scores all 342 occupations in the Bureau of Labor Statistics handbook for “digital AI exposure,” and its logic is blunt: the more a role looks like knowledge work done entirely on a computer, the more exposed it appears. That takeaway was immediately blasted across the internet. “Yeah, we’re cooked,” one commenter wrote. A February Wall Street Journal article echoed the same fear, pointing to a viral report that imagined a race to the bottom in white-collar knowledge work. Even credentialed, high-barrier professions long considered safe havens are feeling the pressure. Jeff Bleich, general counsel of Anthropic, says AI is reducing the need for armies of lawyers to do lucrative but tedious work.
This is nothing new. The sky-is-falling narratives about AI and white-collar work have been cycling through the news ever since ChatGPT exploded onto the scene in 2022. That is why Matt Shumer’s viral February 2026 essay, Something Big Is Happening, struck such a nerve, drawing 80 million views on X. It was written from inside the tech world but for the people just outside it: the family, friends, and colleagues asking what is actually going on. “A lot of people find comfort in the idea that certain things are safe,” he wrote. “That AI can handle grunt work but can’t replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I’m not sure I believe it anymore.”
To all of this I say: We deserve a lot more credit.
The question is not whether AI can do the work. It can, and it will do more of it. The models are becoming smarter, more fluid, and more conversational. When a ChatGPT user recently argued that the model’s most distinguishing characteristic is its “humanity,” Sam Altman co-signed it.
That framing turns us into spectators of a dystopian future that seems already written. What matters now is the future we choose. The choice is whether we remain a human in the loop, watching the system run, or rise to become a human on the loop: deciding what matters, what comes next, and what is worth pursuing. Outsource that thinking, and we dilute our human value. Act with agency, and we stop being spectators and start being builders.
A recent Forbes piece, The Human Traits AI Can’t Replicate, made a similar point: as AI takes on more of the work, the premium will increasingly favor first-principles, high-agency thinking. Critical thinking. Complex problem-solving. Empathy. Creativity. Communication. These are not just soft skills. They are intangible assets, and they may be the most valuable ones we have.
Put a different way: This AI job apocalypse conversation has gone from provocative to boring. It is Groundhog Day: the same song, the same jump scare, the same script on repeat. We take back control by rewriting the story and shattering the illusion that our value was ever just function. We still have agency. We still have judgment. And we still have something deeper that makes us harder to reduce, label, or replace.
That is the human edge. Not what AI produces. What we bring before it starts. What remains after it finishes.
We Are Still Far Away From Our Worst Fears About AI

Before we blame the job market on AI alone, it helps to recognize what was already underway. Automated workflows existed long before generative AI. Eloqua and Marketo streamlined repetitive marketing work. E-discovery systems reshaped document review. Templates, macros, CRM platforms, and outsourced delivery models had already been compressing routine labor for years.
AI did not invent that shift. It accelerated it. Even now, the headlines still outrun reality. Most businesses are not deeply using AI yet, and even the most widely used consumer tools have only a small base of paying, power users. We are still early. The gap between what is possible and what has actually been implemented remains enormous. As with every major technological shift, the real question is not whether to use it, but how.
AI Is an Instrument, Not a Vending Machine. Your Edge Depends on How You Use It.

A lot of people treat AI like a vending machine. Push a few buttons and collect whatever drops. But AI is an instrument, and it performs only as well as the person using it. If AI is an instrument, then data is the score, and the questions we ask are the hands that play it. AI amplifies whatever intent we bring to it.
That is not just theory. In one recent study published in Science, researchers tested how large language models respond to interpersonal conflicts drawn from Reddit’s “Am I the Asshole?” forum. Which, as a general rule, is already a sign that something has gone sideways. The models were significantly more likely than humans to affirm the user’s perspective, even when the broader community judged that person to be in the wrong. Participants who received that kind of affirming feedback became more convinced of their own correctness and less willing to repair relationships. In other words, the system did not introduce better judgment. It amplified the user’s starting position.
The real divide will not be between people who use AI and people who do not. It will be between those who use it as a replacement for thinking and those who use it as a catalyst for better thinking. That divide will only deepen.
A 2025 MIT Media Lab study on essay writing described the downside of overreliance as cognitive debt: lower engagement, weaker memory, and less ownership over the work. But the broader lesson is familiar. When calculators arrived, we did not stop doing math. We raised the bar. AI demands the same shift.
Which is exactly why what we do with our best thinking, before AI ever touches it, matters so much.
Ideas Have Expiration Dates. AI Buys Them Time.

Every organization has an idea graveyard: proposals never drafted, strategies that got lost in corporate gridlock, action items diverted by distractions.
AI can clear the bottleneck by preserving momentum and turning energy deficits into a surplus of creativity. Technical work still matters, but AI can do more of it without experiencing burnout. In most professions, the deeper value lives in judgment, creativity, improvisation, and the ability to see what should exist before it does. Those qualities were hard to find before AI. They are even harder to automate now.
That is because vision and direction are still human. They built cathedrals, eradicated pandemics, and produced the art and literature that outlast every technology meant to replace them. AI itself began as vision too: first as belief, then as theory, then as something we can literally ask anything.
Direction remains a critical human skill, and its absence is often obvious. As one English professor recently observed, the clearest sign of AI use in student essays is the failure to follow instructions. His response was not to reject AI, but to teach students how to use it to develop their thinking instead of focusing on the end product. Before anything is built, someone still has to decide what is worth building. Without that direction, even the most polished result can still miss the point.
The real question is not what AI will do to us. It is what it will enable us to do.
On the Billable Hour, and What It Really Means to Scale

Coders and developers have taken up most of the oxygen in this debate. But law and consulting have always been in this conversation, and what is at stake there goes straight to the billable hour and the economic model underneath it.
As AI becomes embedded in legal work, one thing is becoming clear: the value is no longer in the hours. It is in the strategy and the outcome. Law and consulting share the same underlying model. Both charge for time. AI is charging them with the question they have been avoiding: what is the time actually worth?
This is worth sitting with, particularly for a profession that has been among the loudest to argue that AI is not capable of legal work. Earlier this month, Simpson Thacher, one of the most prestigious firms in the world, made headlines after a good faith but consequential misinterpretation of the Competition Appeal Tribunal’s rules for calculating filing deadlines. Its client’s challenge to the merger block was defaulted, and the client may now have to unwind the entire deal. Unchecked judgment, human or otherwise, carries risk.
The service professions that thrive will be the ones that use the time AI returns to increase scale and operate at a level that layers of manual execution once made impossible. The firms that figure that out are the ones that stop chasing the clock and start deploying it.
The Edge of AI Is Human. Let’s Keep It That Way.

AI will not replace you. It will reveal where your value actually lives.
Watching a machine become fluent in your craft is disorienting. But fluency is not judgment. Speed is not direction. Output is not accountability.
The better question is not what AI will do, but what it empowers us to do. It accelerates vision: the ideas that wake us at 3 a.m., the intuition that something is off before we can explain it, the instinct to push further when everyone else has stopped. We are still the ones who imagine, choose, and bear the consequences of being wrong.
The real question has never been what AI can produce. It is the future we decide to build with it.
That is the human edge of AI. And it remains ours to hold.
(All images featured in this blog were created and edited by Lili Kazemi using ChatGPT, Gemini, and Midjourney.)
Lili Kazemi is General Counsel and AI Policy Leader at Anant Corporation, where she advises on the intersection of global law, tax, and emerging technology. She brings over 20 years of combined experience from leading roles in Big Law and Big Four firms, with a deep background in international tax, regulatory strategy, and cross-border legal frameworks. Lili is also the founder of DAOFitLife, a wellness and performance platform for high-achieving professionals navigating demanding careers.
Follow Lili on LinkedIn and X
