When AI Works, Who Gets Paid?

Rethinking Value, Taxation, and Workforce Policy in the Age of Intelligent Automation

The Great Transformation: AI’s Expanding Role in the Workforce

AI isn’t just replacing routine tasks—it’s fundamentally transforming how work gets done, from the ground up and the C-suite down. We’re witnessing a seismic shift from AI as assistant to AI as architect, collaborator, and autonomous decision-maker.

Consider the evolution of development tools. Platforms like Cursor, Claude Code, Windsurf, OpenAI Codex, and AWS CodeWhisperer have moved far beyond simple autocomplete functions. They now optimize architecture, test for vulnerabilities, and deploy entire software systems. These AIs don’t just respond to problems—they anticipate them before they arise.

OpenAI’s ambitions go even further. In a strategy document, the company outlined its 2025–2026 roadmap to transform ChatGPT into a “super assistant”—one that “knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do.” 

The broader direction is clear: AI is being positioned not just as a tool, but as a ubiquitous presence in our lives—serving as a planner, researcher, co-worker, and even executive decision-maker. This transformation extends well beyond developers and engineers. As Forbes recently noted, AI is reshaping the very definition of leadership roles. The future C-suite may involve AI agents managing entire workflows, providing strategic forecasts, and surfacing risks in real time. In financial services, AI systems are already functioning as sophisticated investment advisors—analyzing credit risk, detecting arbitrage opportunities, and adjusting to market shifts with superhuman speed.


The Human Factor: Where People Still Matter

Yet this rapid automation creates profound tension: As AI assumes more operational and analytical responsibilities, what unique value do humans provide?

The answer lies in creativity, oversight, and judgment. As Amazon CEO Andy Jassy recently observed, generative AI allows us to focus “less on rote work and more on thinking strategically.” The New York Times Magazine identified emerging human roles like AI auditors, ethics directors, and integration specialists—professionals who ensure AI tools align with business objectives, regulatory requirements, and human values.

Crucially, AI systems remain prone to hallucinations and fabrications when handling complex or ambiguous scenarios. Humans provide the critical layer of interpretation, accountability, and decision-making that transforms AI output into meaningful action.

Here’s the reality: Jobs aren’t lost to machines—they’re lost to people who know how to use machines. Upskilling isn’t optional; it’s imperative for workforce survival.


The Economic Earthquake: When AI Works, Who Gets Paid?

Understanding AI’s impact on the workforce is just the beginning. As machines take on roles once held by people, a deeper, more urgent question emerges—one that cuts to the core of taxation, policy, and economic sustainability: When AI creates value, who gets the credit, and who gets paid?

This isn’t a thought experiment. For consultants, attorneys, and policymakers, it’s a pressing challenge that will redefine how we measure productivity, allocate profits, and fund the systems that support human society. The shift from people to platforms isn’t just technological—it’s economic, legal, and deeply human.

The Ghost Workforce: AI That Creates Value Everywhere—and Lives Nowhere

A new kind of labor force is reshaping the global economy—one that doesn’t sleep, spend, or even exist in any one place. Call it the ghost workforce: AI systems that generate real value for companies without participating in the economic ecosystems that support human life.

But the implications go beyond the economics. This “ghost workforce” is creating real headaches for tax policymakers. Unlike human employees, AI agents don’t trigger payroll taxes, don’t belong to any jurisdiction, and don’t fit neatly within the “significant people functions” framework under current OECD guidelines—a foundational requirement for attributing value and taxing rights in transfer pricing.

The OECD’s Working Party 6, which oversees transfer pricing policy, recently announced a formal review of the OECD Transfer Pricing Guidelines to address the impact of AI. This signals a pivotal inflection point. The traditional framework, grounded in significant people functions (SPFs), is increasingly strained by the reality of distributed workflows involving both human and AI contributors. In many cases, it’s no longer possible—or even meaningful—to isolate a single function performed by a single individual. Organizations no longer operate in discrete “functions”; they operate in workflows.

And that’s not just a semantic shift. As this article has shown, the business value chain itself has fundamentally evolved—and continues to do so. When that value chain diverges so dramatically from the one envisioned in legacy tax rules, particularly in how profits are attributed within multinational groups, the governing legal frameworks must evolve in parallel. Adaptation isn’t optional—it’s structural.

These open questions are explored in greater detail in my article, Prompt, Train, Allocate: Rethinking Transfer Pricing for AI Systems. The core insight is this: without a clear functional map of who—or what—is driving economic value, the architecture of modern transfer pricing starts to falter.

In this sense, the ghost workforce isn’t just invisible—it’s unaccounted for in the systems we use to allocate profit and assign tax liability. That invisibility may be where the real risk lies.


The Curious Case of Company X: Illustrating Digital Labor Disruption

To understand the broader impact of AI on labor markets and tax systems, consider a simplified example, drawn from an actual client experience: Company X, a multinational tech firm that has replaced 6,000 customer support and operations employees with an in-house AI system.

The financial gains are dramatic—$420 million in payroll costs eliminated, $32 million in payroll taxes erased, and $85 million saved on benefits and training. The AI handles customer inquiries, generates executive reports, and supports strategic decisions. It learns, adapts, and scales—at a fraction of the cost of human labor. Unlike humans, AIs can multitask effortlessly—processing one prompt while simultaneously working on another. If you’ve ever queued up new instructions while a previous task was still running, you’ve seen this in action.
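The headline numbers in this stylized example can be tallied directly. The short Python sketch below uses only the illustrative figures quoted above (they are not audited financials) to compute total annual savings and the implied fully loaded cost per replaced role:

```python
# Back-of-the-envelope tally of Company X's stylized savings.
# All figures are the illustrative numbers from the example,
# not audited financials.

payroll_saved = 420_000_000      # eliminated payroll costs
payroll_tax_saved = 32_000_000   # payroll taxes no longer remitted
benefits_saved = 85_000_000      # benefits and training

employees_replaced = 6_000

total_saved = payroll_saved + payroll_tax_saved + benefits_saved
avg_fully_loaded_cost = total_saved / employees_replaced

print(f"Total annual savings: ${total_saved:,.0f}")            # $537,000,000
print(f"Fully loaded cost per role: ${avg_fully_loaded_cost:,.0f}")  # $89,500
```

One sanity check worth noting: $32 million of payroll tax on $420 million of payroll implies an effective rate of roughly 7.6 percent, broadly in line with the U.S. employer-side FICA share, which suggests the illustrative figures hang together.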

This example focuses on headcount reduction and doesn’t reflect the rising wage premium for AI-skilled workers. The value of those skills varies significantly across industries, making it harder to quantify consistently. Still, many companies are paying more—not less—for experienced talent who can work effectively with AI.

While AI brings undeniable productivity gains—operating 24/7, scaling instantly, and enabling companies to do more with less—the economic tradeoffs are far from simple. When workers are displaced, it’s not just wages that disappear—it’s the ripple effect across everyday life.


Credit Where Credit Is…Blurred

Adding to the complexity, AI isn’t rooted anywhere. It’s built from open-source code, trained on global data, and hosted on cloud servers spanning multiple jurisdictions. Who owns the value it creates—and who gets to tax it?

AI is no longer just a cost-cutting tool—it’s becoming a strategic execution partner. Platforms like McKinsey’s Lilli now draft proposals and synthesize research once handled by teams of analysts. Quantifying the full value AI creates—and accounting for it from a business, operational, and financial perspective—can be tricky. Klarna learned this the hard way: after replacing 700 customer service agents with an AI assistant, the company reversed course in 2025, citing a drop in quality and customer satisfaction. Its solution? A hybrid model where AI handles routine tasks, and humans step in for nuance and empathy—a reminder that AI is a tool, not a team.

This partnership model is also emerging in finance and law. Quinn, an AI platform for wealth planning, aims to democratize financial advice at scale (an initiative shared by other developers in different forms). Meanwhile, AI systems have reportedly passed the bar exam in the 90th percentile range, challenging traditional definitions of expertise. Across industries, AI isn’t just assisting professionals—it’s redrawing the boundaries of value creation.

If human and AI contributions are intertwined, how do we account for the non-human share? And if we agree that some portion of that value should be credited to humans—who exactly gets the credit? Anyone who has done a functional analysis or taken depositions can relate: responsibility in highly sophisticated organizations is rarely clear-cut. With the advent of agentic AI frameworks that can act autonomously, the lines get even more blurred.


Permanent Establishment in the Age of AI

Under OECD guidelines, a Permanent Establishment (PE) may arise when a foreign enterprise has either (1) a fixed place of business through which its operations are conducted, or (2) a dependent agent who habitually concludes contracts on its behalf. As AI systems increasingly assume roles once performed by humans, including sales, contract negotiation, and execution, these traditional definitions are under strain. For example, an AI-powered sales bot deployed by a multinational enterprise (MNE) may autonomously interact with customers, customize pricing, and finalize agreements—functionally replacing a human agent in closing deals. Even if physically hosted on servers outside the customer’s jurisdiction, the economic activity and value creation may be taking place where the customer resides.

Under the OECD’s Authorized OECD Approach (AOA), once a PE is established, profits must be attributed as if the PE were a separate and independent enterprise performing the same functions under similar conditions. This raises difficult attribution questions in an AI context: Who programmed the model? Who controls the data and infrastructure? Where are key decisions made? If the AI system’s outputs are shaped by ongoing user interactions, model fine-tuning, and training managed by a related party, it may be difficult to disentangle where value is truly created. As MNEs increasingly embed AI into customer-facing operations, tax authorities may begin testing the limits of both the PE threshold and profit attribution under the AOA—especially in cases where AI acts with functional autonomy but remains economically controlled by a central group entity.


A Futuristic Framework? Rethinking Intangibles in the Age of AI

International tax rules have moved beyond requiring physical presence—but they still struggle to capture how AI creates value. Take this example: A user prompts an AI to “develop a comprehensive, culturally attuned sales strategy for Country X’s luxury goods market.” The output may rival human work, yet the location of value creation—and taxing rights—remains murky.

So who should get to tax that value?

  • Country X, where the strategy is implemented?
  • The country where the AI was trained or hosted?
  • The user’s own jurisdiction?

And if the AI’s response enhances brand positioning, does it create a marketing intangible? If so, is it attributable to the user’s prompt—or the AI system that generated it?

In enterprise LLM use, it may make sense to assign value to where the AI infrastructure or model training occurred. But in a borderless digital economy, even that logic starts to blur. These are precisely the questions today’s transfer pricing frameworks aren’t quite equipped to answer.

OECD’s BEPS 2.0 nods to the digital economy, but generative AI wasn’t a huge part of the conversation when those standards were drafted. Now, the lines between creator, tool, and jurisdiction are more tangled than ever.

We’re also seeing these issues emerge prominently in copyright and intellectual property law. Courts are reaching for dated precedents—like the 2015 Google Books ruling—to make sense of generative AI cases like Anthropic’s. Newer cases, such as Midjourney’s, challenge conventional ideas of authorship and fair use. Can our legal and tax systems evolve fast enough—or will they always be one step behind? 


The Ownership Puzzle: Who Owns Intelligence in a Decentralized AI World?

As companies race to build powerful AI tools, they’re leaning on a tangled web of open-source models, third-party APIs, proprietary data, and employee feedback. That raises a fundamental question: Who owns the “AI”? Is it the platform provider, the legal entity holding the IP, or the global network of contributors whose data and inputs made the system possible?

Many are beginning to rely on decentralized architectures powered by the Model Context Protocol (MCP). In this system, models are not centrally controlled—instead, they interact with diverse contexts and follow standardized protocols that allow them to be hosted, queried, and updated across a distributed network.

This decentralization raises fundamental questions about ownership, attribution, and economic rights. Unlike traditional models where development and deployment are housed within a single legal entity, MCP-based systems can involve multiple contributors across jurisdictions. Agents and servers may reside in different countries, and anyone can run an MCP node, making the system borderless and dynamic—much like the internet itself.

What’s more, these systems are now becoming more agentic and independent. Instead of simply generating outputs, they can reason, plan, and adapt, or even evolve workflows through iterative “genetic” methods. In practice, that means AI agents can select tools, pursue objectives, and coordinate across networks without immediate human direction.

For policymakers, this level of autonomy heightens long-standing challenges around Permanent Establishment and attribution. If an AI agent is effectively negotiating or executing business functions across borders, is it acting as a dependent agent under OECD rules? And if value is being created in multiple jurisdictions simultaneously, how do tax authorities determine who gets to tax the profit? These questions illustrate how MCP architectures and agentic AI are not just technical innovations—they’re pressure points for the next generation of tax and legal frameworks.


A Strategic Framework for the AI Economy

Organizations deploying AI at scale need to future-proof their tax and transfer pricing strategies now. Here’s a practical framework:

  • Map the Complete AI Lifecycle: Document who develops, trains, deploys, and benefits from AI systems across your organization. Understanding this value chain is essential for proper tax allocation.
  • Document Local Entity Contributions: Capture how different business units contribute data, feedback, and expertise to AI development. This documentation will be crucial for defending transfer pricing positions.
  • Align IP Ownership with Economic Reality: Don’t assume traditional single-entity IP models will withstand scrutiny in an AI-driven world. Consider how value creation actually occurs across your organization.
  • Explore Alternative Valuation Methods: Traditional approaches may not capture the value of hybrid or non-traditional intangible assets that emerge from AI development.
  • Proactively Engage Tax Authorities: Be transparent about AI deployment and user-generated value creation. Early dialogue can prevent future disputes.
  • Account for Human-Machine Collaboration: Recognize where human judgment, creativity, and accountability remain essential to value creation, even in highly automated processes.
  • Standardize AI Governance: Develop consistent definitions and style guidelines for AI-generated content, especially for legal, contractual, and policy applications at scale.

The Path Forward: Collaboration in the Age of Intelligent Machines

AI is fundamentally changing who does work and where value resides. Yet our systems for allocating profits, distributing opportunity, and funding society haven’t evolved to match this new reality.

What’s next is the rise of more agentic, independent AI systems—models that don’t just generate content but can reason, plan, and act across networks with minimal human oversight. These developments will intensify the pressure on existing tax and legal frameworks, especially around questions of defining IP and attribution of its value across jurisdictions.

But here’s what we must remember: Behind every AI output still lies human choice—what goals to pursue, which insights to trust, and how to act on them. Value creation in the AI era is inherently collaborative, even as machines assume greater autonomy. Our future tax policies, governance frameworks, and workforce strategies must be ready for that blend of human oversight and machine agency.

The next time your AI tool produces a client memo, product strategy, or legal brief, ask yourself: Who got paid—and who didn’t? That question will define the future of work, taxation, and economic opportunity in the age of intelligent automation.

About the Author: Lili Kazemi is General Counsel and AI Policy Leader at Anant Corporation. A seasoned international tax attorney and digital strategist, she advises on the legal, tax, and regulatory challenges at the intersection of AI, automation, and value creation. Lili is also the founder of DAOFitLife, where she explores performance science and wellness for high-performing professionals.