The Efficiency Trap


ai, economics, jevons-paradox, productivity, labor-markets, energy, inequality

Jevons observed that efficient steam engines increased coal consumption rather than decreasing it. Thirteen months after DeepSeek triggered the same debate for AI, the evidence supports both sides, which is exactly the problem.


On January 27, 2025, as markets panicked over DeepSeek's open-source model undercutting Western AI pricing by an order of magnitude, Satya Nadella posted four words on X that reframed the entire debate: "Jevons paradox strikes again!"

The reference was to William Stanley Jevons, a Victorian-era economist who noticed something counterintuitive about James Watt's steam engine. Watt's design was four to five times more fuel-efficient than the older Newcomen engine. The expectation: Britain would burn less coal. What actually happened: coal consumption soared. Cheap energy didn't conserve fuel. It made steam power viable for thousands of new applications. "It is wholly a confusion of ideas," Jevons wrote in The Coal Question (1865), "to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth."

Nadella's argument, stripped to its core: as AI makes cognitive work cheaper, we will demand vastly more of it.

That was thirteen months ago. The evidence since then supports both sides, which is exactly the problem.

The headlines from January 2026: 108,000 job cuts in a single month, the worst January since 2009, with AI cited in announcement after announcement. IMF Managing Director Kristalina Georgieva told CNBC that AI was "hitting the labor market like a tsunami." Sam Altman acknowledged real displacement while warning about "AI washing," his term for companies blaming AI for layoffs they would have made anyway.

The headlines from February 2026: Erik Brynjolfsson, the Stanford economist who coined the "Productivity J-Curve," wrote in the Financial Times that U.S. productivity jumped roughly 2.7% in 2025, nearly double the prior decade's 1.4% average. Hyperscaler capital expenditure projections hit $602 billion for 2026, a 36% increase. Companies are spending more on AI infrastructure, not less, even as per-token inference costs fell by roughly two orders of magnitude in under three years. The Jevons Paradox, playing out in real time.

Both sets of headlines are true. Displacement and demand explosion are not competing theories. They are different phases of the same process, and distinguishing them requires asking a question that neither headline addresses: in any given sector, where does cognitive labor sit in the cost structure? The answer determines whether cheaper AI creates a Jevons explosion or an agricultural collapse — and who captures the surplus from either outcome. Tracing that question across the next three decades of AI's diffusion through the economy is the task of this essay.


I. The Augmentation Era (2025–2035): The J-Curve Decade

The most common error in forecasting AI's labor market impact is mistaking capability for diffusion. AI can already generate code, draft contracts, and analyze medical images at levels that rival trained professionals. Yet most programmers, lawyers, and radiologists still work in ways recognizable to their 2015 counterparts. The technology has arrived, but organizations have barely changed how they work.

Brynjolfsson, Daniel Rock, and Chad Syverson documented this lag in their 2021 paper "The Productivity J-Curve" (American Economic Journal: Macroeconomics). General-purpose technologies show decreased measured productivity during initial adoption because firms must invest in intangible complements (training, process redesign, organizational restructuring) before benefits materialize.

The historical precedent is instructive. Paul David's 1990 study, "The Dynamo and the Computer" (American Economic Review), showed that electric power took approximately four decades to appear in productivity statistics after Edison's Pearl Street Station opened in 1882. Factories had to be completely redesigned, from multi-story steam-shaft-driven layouts to single-floor production lines with individual motors, before the gains arrived. Information technology followed a compressed but similar arc. Robert Solow observed in 1987 that you could "see the computer age everywhere but in the productivity statistics." The IT productivity boom didn't arrive until the mid-1990s, roughly two decades after widespread commercial adoption.

If adoption lags keep shortening, AI could be on a ten-to-fifteen-year timeline, placing the inflection point around 2035. Brynjolfsson's February 2026 data (2.7% growth) suggests the curve may already be bending upward. But as he cautioned, "several more periods of sustained growth are needed to confirm a long-term trend."

Not everyone accepts the historical analogy. Dario Amodei, CEO of Anthropic, argued in "Machines of Loving Grace" (2024) that AI could compress fifty to a hundred years of scientific progress into five to ten — "a country of geniuses in a datacenter." In a follow-up essay, "The Adolescence of Technology" (2025), he predicted that half of all entry-level white-collar jobs could be displaced within one to five years, a timeline that would collapse the J-Curve before institutions have time to adapt. If Amodei's capability estimates are correct, the augmentation era may be measured in years, not decades. The question is whether organizational inertia — the same force that kept factories running on steam-shaft layouts for decades after Edison — acts as a buffer or simply delays a harder landing.

The micro-evidence is accumulating faster than the macro statistics. Brynjolfsson, Li, and Raymond found that an AI assistant for customer service agents produced 14% average productivity gains, with 34% gains for less experienced workers. Noy and Zhang showed ChatGPT reduced time on writing tasks by 40% among college-educated professionals. Dell'Acqua and colleagues at Harvard found BCG consultants with AI access completed 12% more tasks, 25% faster, with 40% higher quality.

These are augmentation effects. The workers are still there. They are doing more.

A necessary caveat: all three studies measured narrowly defined tasks at individual organizations. The Brynjolfsson study covered customer service at a single company. Noy and Zhang used 453 professionals in a lab setting. Dell'Acqua's BCG experiment involved 758 consultants on tasks selected to be within AI's capability frontier — and notably, on tasks outside that frontier, AI users performed 19 percentage points worse than the control group. Whether these micro-level gains aggregate to economy-wide productivity growth is precisely the question that Acemoglu's macro framework (Section VI) challenges. The J-Curve theory provides a partial reconciliation: micro-gains are real but macro-effects lag because implementation costs, organizational restructuring, and task heterogeneity absorb much of the surplus before it reaches the productivity statistics. The gap between the micro-evidence and the macro-data is not a contradiction. It is the J-Curve in action.

The hypothesis for this era: By 2035, AI will drive productivity growth primarily through augmentation rather than replacement, with AI-complementary skills commanding significant wage premiums and new AI-native job categories absorbing displaced workers. PwC's 2025 Global AI Jobs Barometer found that jobs requiring AI skills already command a 56% wage premium, a figure that doubled in a single year. And David Autor documented in his 2024 Quarterly Journal of Economics paper "New Frontiers" that 60% of employment in 2018 existed in job titles that did not exist in 1940.

Autor's finding reframes the question. The conversation assumes a binary: augment or automate. His historical evidence points to a third outcome that matters more: transformative technologies create entirely new categories of work that nobody anticipated. Nobody in 1990 predicted that "social media manager" or "data scientist" would be major job categories by 2020. The most significant employment effects of AI may be in roles that do not yet have names.

The biggest threat to this timeline is general-purpose robotics. If physical AI systems match the flexibility of large language models in manipulating the material world, full automation becomes cheaper than augmentation across most industries and the ten-year window collapses. Regulatory resistance in medicine and law could also steepen the J-Curve. And geographic concentration, with AI activity clustering in a handful of innovation hubs, could produce divergence worse than anything seen with previous general-purpose technologies. The January 2026 White House Council of Economic Advisers report, "Artificial Intelligence and the Great Divergence," draws explicit parallels to the Industrial Revolution's divide between industrializing and non-industrializing nations.


II. The Restructuring Era (2035–2045): The Jevons Explosion

Assume the augmentation story holds and organizations have spent a decade learning to use AI properly. What happens when cognitive work gets cheap? Not 20% cheaper. Ninety percent cheaper.

Jevons answered this in 1865. Cheap coal didn't mean less coal. It meant coal-powered everything. When a service becomes dramatically cheaper, existing users consume more and entirely new use cases become viable that were previously prohibitive. The combination can increase total spending despite radical price declines.
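The mechanism can be made concrete with a toy model. Under constant-elasticity demand, quantity scales as Q = k·P^(−ε), so total spending P·Q scales as P^(1−ε): when demand is elastic (ε > 1), a price collapse raises total spending. A back-of-envelope sketch, with illustrative elasticities that are assumptions rather than estimates:

```python
# Toy constant-elasticity demand model: Q = k * P**(-eps).
# Total spending S = P * Q therefore scales as P**(1 - eps).
# eps > 1 (elastic): a price drop RAISES total spending (the Jevons case).
# eps < 1 (inelastic): a price drop LOWERS total spending (the agriculture case).

def spending_multiplier(price_ratio: float, elasticity: float) -> float:
    """Change in total spending when price falls to `price_ratio` of its old level."""
    return price_ratio ** (1 - elasticity)

price_ratio = 0.10  # cognitive work gets 90% cheaper

for label, eps in [("software-like, elastic", 1.6),
                   ("agriculture-like, inelastic", 0.3)]:
    print(f"{label}: eps={eps} -> spending x{spending_multiplier(price_ratio, eps):.2f}")

# software-like, elastic: eps=1.6 -> spending x3.98
# agriculture-like, inelastic: eps=0.3 -> spending x0.20
```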

The pattern is already visible in AI infrastructure. Hyperscaler capital expenditure hit $602 billion for 2026, a 36% year-over-year increase, even as the cost per token of AI inference fell by roughly two orders of magnitude over the prior three years. The more efficient AI gets, the more of it companies buy.

But the original Jevons Paradox was about a depletable physical resource. Coal is rival and excludable: burning it in one engine means it cannot be burned in another. Cognitive services produced by AI are non-rival and have near-zero marginal cost of reproduction. An AI-generated legal brief can be copied infinitely at zero cost. That makes AI output more like software than like coal, and the economics of information goods (Shapiro and Varian, 1999) applies better than extractive-resource economics. The Jevons analogy requires a condition the essay must be honest about: sufficiently elastic demand.

Demand elasticity determines which cognitive sectors see Jevons explosions and which just get cheaper.

Consider the radiology example, which appears in nearly every optimistic AI forecast. If AI reduces interpretation costs by 90%, could MRI scans become standard for annual physicals and consumer wellness monitoring? The demand-expansion math is seductive: scans increase twentyfold, cost per scan drops tenfold, total spending doubles. But interpretation is only one component of scan cost. The MRI machine, technician time, and facility overhead (the dominant cost components) are unaffected by AI interpretation. A 90% reduction in interpretation cost does not produce a 90% reduction in total scan cost. Jevons effects require that the efficiency gains hit the binding cost constraint. In radiology, they may not.
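The general rule is Amdahl-style arithmetic: an efficiency gain propagates to the total only in proportion to the cost share it touches. A quick sketch, assuming (purely for illustration) that interpretation is 15% of a scan's total cost:

```python
# Savings on TOTAL cost when one cost component gets cheaper.
# A 90% cut to a component that is 15% of total cost saves only ~13.5% overall.

def total_savings(component_share: float, component_cut: float) -> float:
    """Fraction of total cost saved when one component falls by `component_cut`."""
    return component_share * component_cut

interpretation_share = 0.15  # assumed share of scan cost; machine, technician,
cut = 0.90                   # and facility overhead make up the rest

print(f"total scan cost falls by {total_savings(interpretation_share, cut):.1%}")
# -> total scan cost falls by 13.5%
```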

Amodei's "marginal returns to intelligence" framework arrives at the same conclusion from the supply side: even unlimited AI intelligence hits diminishing returns wherever progress depends on physical experiments, regulatory cycles, or intrinsic complexity rather than cognitive effort. Radiology is constrained by machines and facilities. Software is constrained by thinking. The Jevons mechanism operates in the second category, not the first.

Contrast this with software development, where the binding constraint is cognitive labor: specification, coding, testing, debugging. Or content creation, where production cost is almost entirely human time. In these domains, demand elasticity is plausibly high enough for genuine Jevons effects. In domains where cognitive labor is a small share of total cost (manufacturing, logistics, construction), the effects will be muted. The question for each cognitive sector is whether it looks more like software (elastic, Jevons-compatible) or more like agriculture (inelastic, displacement-prone). When productivity in a sector with bounded demand improves dramatically, employment collapses rather than expands. Agriculture lost 95% of its workforce over a century despite enormous productivity gains. Treating "cognitive work" as a single category with uniform elasticity is the most common error in popular applications of the Jevons framework.

A second condition: sufficient competition to pass efficiency gains to consumers as price reductions. If AI providers capture efficiency gains as oligopoly profit rather than passing them through as lower prices, the demand-stimulating mechanism stalls. Given the current market structure, with a handful of foundation model providers and the Magnificent Seven controlling $19.6 trillion in market capitalization while absorbing 75% of S&P 500 earnings growth in 2024, this is not a hypothetical concern.

Open-source AI complicates the oligopoly story in ways that cut both for and against the Jevons thesis. DeepSeek's R1 model, released in January 2025, matched proprietary frontier performance at a fraction of the cost and forced immediate price cuts across the industry. Meta's Llama series and Mistral's open-weight models have a similar effect: they set a price floor of approximately zero for inference on commodity hardware, which makes it harder for any single provider to capture efficiency gains as profit. If open-source models continue to approach frontier capability — and the trend since 2023 suggests they will, with roughly a six-to-twelve-month lag — then the competition condition for Jevons effects strengthens. Efficiency gains pass through to consumers, prices fall, demand expands. But open-source also accelerates the displacement side. When a startup can run a capable model on its own servers for the cost of electricity, the barrier to automating cognitive tasks drops from "can we afford the API?" to "can we write the prompt?" The Jevons mechanism and the displacement mechanism both intensify. The distribution question becomes more urgent, not less.

The hypothesis for this era: By 2045, global spending on high-elasticity cognitive services (software, content, analytics, design) will be 3–5x higher in real terms than 2035 levels, despite AI being orders of magnitude cheaper. New industries will emerge that are only viable with near-zero cognitive costs. But low-elasticity sectors will see efficiency captured as cost savings, not demand expansion.
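Read against the toy demand model above, the 3–5x figure implies a specific elasticity. Inverting S = P^(1−ε), with prices falling roughly 100x (an assumed round number standing in for "orders of magnitude"), gives ε between about 1.24 and 1.35:

```python
import math

# Invert S = P**(1 - eps): the demand elasticity implied by spending rising
# 3-5x while prices fall ~100x (price_ratio = 0.01, an assumed round number).
def implied_elasticity(price_ratio: float, spending_mult: float) -> float:
    return 1 - math.log(spending_mult) / math.log(price_ratio)

for s in (3, 4, 5):
    print(f"spending x{s} at 100x cheaper -> eps = {implied_elasticity(0.01, s):.2f}")
# x3 -> eps = 1.24, x4 -> eps = 1.30, x5 -> eps = 1.35
```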


III. The Energy Wall: Physical Limits on a Digital Paradox

The Jevons Paradox predicts demand expansion. But demand can only expand as fast as the physical infrastructure allows. And right now, the infrastructure is losing the race.

The International Energy Agency estimates global data center electricity consumption at 415 TWh in 2024, about 1.5% of global electricity. By 2030, the IEA projects this will more than double to 945 TWh, equivalent to Japan's entire current electricity demand. Goldman Sachs forecasts a 165% increase in data center power demand over the same period, with AI's share rising from roughly 14% of data center load to 35–50%.

These projections collide with a physical bottleneck: grid interconnection. As of late 2024, approximately 2,600 GW of proposed generation and storage were waiting in the U.S. interconnection queue, more than twice the country's installed capacity. The median time from interconnection request to commercial operation has doubled from under two years in the early 2000s to over four years today. Only 13% of capacity that submitted interconnection requests from 2000 to 2019 had reached commercial operation by end of 2024. The rest was withdrawn or stuck.

AI demand is scaling on a one-to-two-year cycle. Grid expansion takes five to ten. The timelines are fundamentally mismatched.
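The arithmetic behind the mismatch: the IEA projection implies demand compounding at roughly 15% a year, while the median interconnection request now takes over four years to reach commercial operation. A quick sketch of the implied demand path (the smooth exponential is an assumption; real demand will be lumpier):

```python
# Implied compound annual growth rate of data center electricity demand,
# from the IEA's 415 TWh (2024) -> 945 TWh (2030) projection.
start_twh, end_twh, years = 415, 945, 6
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # -> 14.7%

# Demand path under smooth exponential growth (an assumption). A plant entering
# the interconnection queue in 2024 arrives, at the median, after the 2028 row,
# by which point demand has already grown by more than half.
for year in range(2024, 2031):
    print(year, f"{start_twh * (1 + cagr) ** (year - 2024):.0f} TWh")
```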

The consequences are already visible at the local level. In Virginia, the world's largest data center market, data centers account for more than a quarter of Dominion Energy's electricity sales, according to the Virginia Joint Legislative Audit and Review Commission. In Ireland, EirGrid has imposed a de facto moratorium on new data center connections in the Dublin region through 2028 because of grid capacity concerns. These are not projections. They are current constraints on where new capacity can be sited.

This mismatch has triggered a nuclear renaissance. Microsoft signed a 20-year power purchase agreement to restart Three Mile Island Unit 1 (835 MW). Meta announced "Prometheus," a 6.6 GW nuclear procurement program spanning deals with Oklo, Vistra, and TerraPower. Google, Amazon, and OpenAI have all signed nuclear agreements of their own. OpenAI's flagship Stargate facility in Abilene, Texas, includes an on-site natural gas plant because the grid cannot deliver enough power. The pattern is clear: AI companies are becoming energy companies out of necessity.

The irony is recursive. Nvidia claims a roughly 100,000x improvement in AI energy efficiency since 2016. Yet the company shipped 3.76 million data center GPUs in 2023 alone, over a million more than the prior year, and hyperscaler capex hit $602 billion for 2026. More efficient chips do not reduce energy consumption. They make it economical to run more chips. Khowaja et al. formalized this in "From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate" (ACM FAccT 2025), arguing that "efficiency gains may paradoxically spur increased consumption."
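The rebound logic reduces to one line of arithmetic: energy use is workload divided by efficiency, so consumption rises whenever workload grows faster than efficiency. A stylized sketch (the growth rates are illustrative assumptions, not Nvidia's figures):

```python
# Rebound effect, stylized: energy = workload / efficiency.
# Efficiency gains are swamped when induced workload grows faster.

def energy(workload: float, efficiency: float) -> float:
    return workload / efficiency

baseline = energy(workload=1.0, efficiency=1.0)

# Illustrative assumption: chips get 10x more efficient over some period,
# but cheaper compute induces 30x more deployed workload.
rebound = energy(workload=30.0, efficiency=10.0)

print(f"energy consumption: x{rebound / baseline:.1f}")  # -> x3.0
```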

A double Jevons: cheaper AI drives more demand for AI services, and more efficient chips drive more demand for chips. Both are constrained by the same physical infrastructure: grid capacity, cooling water, and chip packaging.

Does the energy wall break the Jevons thesis? No, but it bends it. The correct analogy is the original coal story. Jevons was right that efficient steam engines increased coal demand. But coal supply constraints (mining capacity, transportation, labor) created price floors that moderated growth rates. The demand explosion was real but not infinite. AI will likely follow the same pattern: Jevons effects are real, but infrastructure constraints create a natural speed governor, keeping AI costs higher than pure algorithmic efficiency would allow and moderating the demand explosion to the pace at which physical supply can expand.

Epoch AI's comprehensive analysis concluded that electrical power is "the constraint likely to bind first" among all AI scaling bottlenecks. Jensen Huang called energy "the bottleneck" and placed it at the base of a "five-layer cake" for the AI industry. If the bottom layer constrains, everything above it constrains too.


IV. The Distribution Question: Who Captures the Surplus?

Even if the Jevons explosion materializes, it matters enormously who benefits. The efficiency surplus from AI has to go somewhere: to workers as higher wages, to consumers as lower prices, to capital owners as profits, or to specific geographies as concentrated growth. The early evidence suggests the flows are highly uneven.

The wage premium is real but concentrated. PwC's 2025 Global AI Jobs Barometer, which analyzed close to a billion job ads across six continents, found that jobs requiring AI skills command a 56% wage premium over comparable roles. That premium doubled in a single year. But Brookings and GovAI analysis shows the gains peak around $90,000 in annual income and stay high for six-figure earners. Workers earning less have less access to AI tools in their existing workflows. The people best positioned to gain from AI are the people who were already doing well.

The deeper shift is from labor to capital. The IMF's April 2025 working paper found something counterintuitive: AI may actually narrow wage inequality by displacing expensive high-wage cognitive tasks, compressing the wage distribution. But it simultaneously widens wealth inequality, because the surplus from automating expensive labor flows to capital owners through corporate profits and stock appreciation. The wealthiest 10% of U.S. households hold roughly 87% of all corporate equities, according to the Federal Reserve's Distributional Financial Accounts. When AI drives corporate earnings, the gains concentrate among people who own shares, not people who earn wages. As the IMF put it: "AI is likely to substantially increase wealth inequality" even in scenarios where wage gaps shrink.

The geographic concentration is stark. Brookings found that 30 U.S. metro areas capture 67% of all AI job postings. The Bay Area alone absorbs 82% of global generative AI venture capital. Just 15 cities account for two-thirds of AI assets and capabilities in the United States. This concentration is more extreme than in the IT era. The compute infrastructure and specialized talent required for frontier AI development are orders of magnitude more expensive than what the internet boom demanded.

The international picture is worse. The EU has produced three foundation models to America's forty and holds 5% of global high-end AI compute versus America's 74%. India's $227 billion BPO industry, which employs five million people and contributes 7.4% of GDP, faces direct disruption, with an estimated 1.65 million voice support and data processing workers at risk. Africa's entire AI market is projected at $4.5 billion in 2025, roughly 1–1.5% of global AI spending. DeepSeek's open-source models and China's Digital Silk Road offer an alternative pathway for the Global South (20 digital infrastructure projects announced with Africa at the 2024 FOCAC Summit), but at the cost of technological dependency on a different superpower.

China offers a natural experiment. China's AI ecosystem — the world's second largest, with DeepSeek, Qwen, and dozens of foundation models — operates under fundamentally different institutional conditions: state-directed industrial policy, a managed labor market, and a different relationship between the state and technology companies. When displacement hits Chinese manufacturing or services, the response involves centrally coordinated retraining programs and directed investment in new industries, not market-mediated adjustment. If China's approach produces lower displacement costs and faster redeployment of workers than the Western model, the implication is that the Jevons thesis is not just about technology and elasticity — it is about governance. The essay's framework is implicitly Western, and China's trajectory will test whether that framing holds or whether institutional design is more determinative than market structure.

The institutional variable matters most. In Nordic countries with strong trade unions and active labor-market policies, AI adoption has been associated with narrower wage gaps. In the United States and United Kingdom, the same technology coincides with sharper polarization. The OECD's November 2024 study found that AI reduced inequality within the most AI-exposed occupations during 2014–2018, the opposite of what robots did. The mechanism: AI automates cognitive tasks that complement rather than replace physical work, lifting the relative wages of lower-skilled workers. But whether this equalizing effect persists depends entirely on whether workers or capital owners capture the productivity gains. The same technology, different institutions, opposite distributional outcomes.

The SAG-AFTRA and WGA strikes of 2023 produced the first successful labor negotiations over AI rights: consent requirements, compensation protections, and transparency rules for AI use of human work product. These serve as a template, but Hollywood writers have more bargaining power than call center workers in Manila. The broader policy toolkit has not yet produced a credible answer to the distribution question. Robot taxes were rejected by the EU. Altman's UBI pilot ($1,000/month to 1,000 low-income individuals) helped cover essentials but did not improve employment quality or health. Brookings found that workers from high AI-exposed jobs earn 25% less after retraining than workers from low-exposure jobs. The mechanisms for distributing the AI surplus remain, for now, theoretical.

Even the most optimistic projections acknowledge the contingency. Amodei's "Dream scenario" for developing economies — 20% annual GDP growth through AI-driven technology diffusion — carries an explicit caveat: it requires "strong efforts on our part." Benefits do not distribute themselves. He also identifies an "opt-out problem": populations that resist AI-enhanced services fall progressively further behind, creating feedback loops that compound existing inequality. The parallel to vaccine hesitancy is uncomfortable but instructive. When a technology's benefits are large and its adoption is uneven, the gap between adopters and holdouts widens faster than any redistribution mechanism can close.


V. The Institutional Era (2045–2055): The Perez Transition

Suppose the Jevons explosion happens in elastic sectors and the energy constraints bend without breaking. Productivity gains still need institutions to catch up, and that takes decades.

Carlota Perez's techno-economic paradigm theory (2002) documents this pattern: steam power required limited liability corporations and railway systems; electricity required scientific management and mass production; information technology required lean production and agile methodologies. Each general-purpose technology required new institutions, not just new tools. The technology always arrived decades before the institutions caught up, and the productivity gains came from the match between them.

The institutional adaptation is already underway, though in fragments. The EU AI Act, which entered force in stages beginning August 2024, created the first comprehensive regulatory framework for AI systems, classifying them by risk level and imposing transparency and audit requirements on high-risk applications. California's SB 1047, debated through 2024 and vetoed by Governor Newsom, would have required safety testing for frontier models above a compute threshold — the first attempt to regulate AI at the capability level rather than the application level. The SAG-AFTRA and WGA agreements of 2023 established consent requirements and compensation protections for AI use of human creative work, a template that other industries have yet to replicate. Singapore's Model AI Governance Framework takes a different approach entirely: voluntary, industry-led, emphasizing organizational accountability over prescriptive rules.

None of these frameworks addresses the deeper structural questions. Who owns the output of an AI system trained on copyrighted work? How should educational institutions prepare workers for roles that do not yet exist? What tax and transfer mechanisms can redistribute the surplus from AI-driven productivity gains without suppressing the investment that produces them? The copyright question is working its way through U.S. courts — the New York Times v. OpenAI and Thomson Reuters v. Ross Intelligence cases will establish precedent — but the broader institutional redesign has barely begun.

The hypothesis for this era: By 2055, the organizational structures that dominate the economy will bear little resemblance to 2025 corporations, just as a 2025 tech company bears little resemblance to a 1960s industrial conglomerate. The question is not whether institutions will adapt but how much damage accumulates during the lag between technological capability and institutional readiness.

Perez's historical analysis suggests these transitions are rarely smooth. They involve financial crises, political upheaval, and generational conflict before a new institutional framework stabilizes. The transition from the Victorian economy to the Progressive Era took decades of labor unrest, antitrust action, and regulatory invention. The AI transition will not be gentler.


VI. The Bear Case: What If the Pessimists Are Right?

The scenarios above assume AI will follow the historical pattern of general-purpose technologies. That assumption may be wrong. Three critiques are stronger than the optimistic case usually admits.

Robert Gordon's secular stagnation. Gordon's argument in The Rise and Fall of American Growth (2016) is not that innovation has slowed. It is that the "one-time-only" inventions of 1870–1970 (clean water, electricity, internal combustion, telecommunications) transformed material existence in ways that are, by definition, unrepeatable. Indoor plumbing can only be invented once. Gordon also identifies structural headwinds (rising inequality, educational stagnation, aging demographics, mounting government debt) that suppress growth regardless of technological progress. AI enthusiasts must explain not just why AI is transformative but why it overcomes headwinds that have been dragging on productivity for fifty years.

Daron Acemoglu's direction of innovation. Acemoglu's critique is more specific and more quantitatively damaging than most popular accounts suggest. In "The Simple Macroeconomics of AI" (NBER Working Paper 32487, 2024; published in Economic Policy, January 2025), he estimates that AI will increase total factor productivity by at most 0.53–0.66% over a decade, with cumulative GDP gains of only 1.1–1.6%. These numbers are an order of magnitude below the optimistic projections.

The mechanism: Acemoglu's task-based framework (developed with Pascual Restrepo) identifies displacement, productivity, and new task creation as the three forces shaping automation's labor market impact. The key insight: automation can produce productivity gains and still harm workers if displacement exceeds new task creation. His concept of "so-so technologies," automation that displaces workers without generating large productivity gains (like self-checkout machines that are not dramatically more efficient than cashiers but do eliminate jobs), applies uncomfortably well to many current AI deployments. The January 2026 HBR analysis found that 60% of organizations surveyed had already reduced headcount in anticipation of AI's potential, while only 2% reported large layoffs tied to actual AI implementation. Companies are cutting workers for what AI might do, not what it has done.

The measurement problem. GDP does not capture quality improvements from AI. Better recommendations, personalized content, faster service, free AI tools: none of this appears in productivity statistics. But the costs (displacement, retraining, energy consumption) do. William Nordhaus tested empirically whether economic data support accelerating growth consistent with transformative AI scenarios (American Economic Journal: Macroeconomics, 2021). His conclusion: they do not.

The three critiques point in different directions. Gordon says the big gains are behind us. Acemoglu says the gains are smaller than projected and going to the wrong places. Nordhaus says the data doesn't show acceleration. None of them are obviously wrong. But they also have to explain the 2.7% productivity jump in 2025 and $602 billion in hyperscaler capex. Something is happening. The debate is about magnitude, direction, and distribution.


VII. Tracking the Trajectory: How to Know Which Scenario Is Winning

These scenarios generate testable predictions.

By 2030, four indicators will distinguish the paths. The first is employment in software development and content creation — the domains where cognitive labor is the binding cost constraint. If both sectors have grown despite AI tools that already cut production time by a third, demand is outrunning displacement. If both have contracted, the agricultural model holds. The second is the AI wage premium. PwC measured it at 56% in 2025; if it persists above 20%, the augmentation era is proceeding. If it collapses to near zero, AI fluency has commoditized too fast for augmentation to drive a sustained transition. The third is new job category emergence — whether the BLS begins coding AI-native occupations that do not exist today, as Autor's framework predicts. The fourth is data center energy share. If U.S. data centers consume more than 12% of national electricity, the energy wall is binding. Below 8%, the physical constraints are bending.
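Those thresholds can be written down as an explicit scoring rule, which is a useful discipline: it forces the predictions to be falsifiable. A sketch that encodes the four 2030 indicators as stated (the field names and the simple vote tally are illustrative choices, not an official methodology):

```python
# Encode the essay's four 2030 indicators as an explicit scoring rule.
# Each indicator votes for "jevons" (demand outrunning displacement) or
# "displacement"; the middle band of the energy indicator abstains.

def score_2030(sw_and_content_jobs_grew: bool,       # indicator 1
               ai_wage_premium: float,               # indicator 2 (0.56 in 2025)
               new_bls_ai_occupations: bool,         # indicator 3
               dc_share_of_us_power: float) -> str:  # indicator 4
    votes = 0
    votes += 1 if sw_and_content_jobs_grew else -1
    votes += 1 if ai_wage_premium > 0.20 else -1
    votes += 1 if new_bls_ai_occupations else -1
    if dc_share_of_us_power < 0.08:    # energy wall bending
        votes += 1
    elif dc_share_of_us_power > 0.12:  # energy wall binding
        votes -= 1
    if votes > 0:
        return "jevons"
    return "displacement" if votes < 0 else "mixed"

# Illustrative reading, not a prediction:
print(score_2030(True, 0.35, True, 0.10))  # -> jevons
```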

By 2035, the macro picture clarifies. Sustained total factor productivity growth above 2% would validate the J-Curve inflection. Cognitive service spending growth of 3x or more in software, analytics, and content (inflation-adjusted) would confirm Jevons effects in high-elasticity domains. The ratio of AI venture capital flowing to the top 10 metros versus the rest of the country — currently around 70/30 — will show whether geographic concentration is intensifying or dispersing. And the labor share of income, which fell from 53% to 52.4% between 2014 and 2024, will indicate whether workers or capital owners are capturing the surplus.


Conclusion: The Binding Constraint

Everything in this essay turns on where cognitive labor sits in the cost structure.

In software development, it is the cost structure. Specification, coding, testing, debugging — these are not secondary line items subordinate to machines and facilities. They are the product. Cut those costs in half and a company that could not justify a five-person engineering team ships with two. Cut them by ninety percent and businesses that never had software budgets start building custom tools. Content creation follows the same logic: writing, design, video production are pure cognitive labor. When production costs collapse in domains where labor is the binding constraint, cheaper does not mean less. It means more. That is the Jevons mechanism, and software and content are where it faces its cleanest test.

The numbers will be unambiguous. If the United States employs more software developers and content creators in 2030 than it does today, despite tools that already cut production time by a third, then demand is outrunning displacement. If both sectors have shrunk, the precedent is not coal. It is agriculture — a century of productivity gains, ninety-five percent of the workforce eliminated, because demand for food has a ceiling. What matters is whether cheaper cognitive labor opens new markets or merely lowers costs in existing ones.

Acemoglu's projection — 0.53 to 0.66 percent TFP growth over a decade — is not a measure of what AI can do. It is a measure of what AI will do if current incentives hold. If companies automate for headcount reduction rather than market expansion. If efficiency gains pool in a handful of firms instead of reaching consumers as lower prices. If the grid cannot scale fast enough to power the demand explosion that cheaper cognition would otherwise produce. Each of those conditions is a choice, not a fate.

Jevons was right about coal because the market worked: cheaper steam opened industries that burned more coal than the old ones ever had. Whether he is right about intelligence depends on whether the gains from cheaper cognition reach a broader economy or settle in the accounts of the companies that already dominate it.


Sources:

  • Amodei, Dario, "Machines of Loving Grace" (darioamodei.com, October 2024)
  • Amodei, Dario, "The Adolescence of Technology" (darioamodei.com, January 2025)
  • Acemoglu, Daron, "The Simple Macroeconomics of AI" (NBER Working Paper 32487, 2024; Economic Policy, January 2025)
  • Acemoglu, Daron & Restrepo, Pascual, "Automation and New Tasks: How Technology Displaces and Reinstates Labor" (Journal of Economic Perspectives, 2019)
  • Autor, David, Chin, Caroline, Salomons, Anna & Seegmiller, Bryan, "New Frontiers: The Origins and Content of New Work, 1940–2018" (Quarterly Journal of Economics, Vol. 139, Issue 3, 2024)
  • Brookings Institution & GovAI, "AI and the Labor Market" (2025)
  • Brynjolfsson, Erik, Li, Danielle & Raymond, Lindsey, "Generative AI at Work" (NBER Working Paper 31161, 2023)
  • Brynjolfsson, Erik, Rock, Daniel & Syverson, Chad, "The Productivity J-Curve: How Intangibles Complement General Purpose Technologies" (American Economic Journal: Macroeconomics, 2021)
  • Challenger, Gray & Christmas, "January 2026 Job Cuts Report" (February 2026)
  • Council of Economic Advisers, "Artificial Intelligence and the Great Divergence" (White House, January 2026)
  • David, Paul A., "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox" (American Economic Review, 1990)
  • Dell'Acqua, Fabrizio et al., "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality" (Harvard Business School Working Paper 24-013, 2023)
  • Epoch AI, "Can AI Scaling Continue Through 2030?" (2025)
  • Federal Reserve, Distributional Financial Accounts (2024)
  • Goldman Sachs, "AI, Data Centers and the Coming US Power Demand Surge" (2024)
  • Gordon, Robert J., The Rise and Fall of American Growth (Princeton University Press, 2016)
  • International Energy Agency, "Energy and AI" (2025)
  • IMF Working Paper WP/25/68, "AI Adoption and Inequality" (April 2025)
  • Jevons, William Stanley, The Coal Question (1865)
  • Khowaja, Sunder Ali et al., "From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate" (ACM FAccT, 2025)
  • Marguerit, David, "Augmenting or Automating Labor?" (arXiv:2503.19159, 2025)
  • Nordhaus, William, "Are We Approaching an Economic Singularity?" (American Economic Journal: Macroeconomics, Vol. 13, No. 1, 2021)
  • Noy, Shakked & Zhang, Whitney, "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence" (Science, Vol. 381, 2023)
  • OECD, "What Impact Has AI Had on Wage Inequality?" (November 2024)
  • Perez, Carlota, Technological Revolutions and Financial Capital (Edward Elgar, 2002)
  • PwC, "2025 Global AI Jobs Barometer" (2025)
  • Rogers, Everett M., Diffusion of Innovations (1962; 5th Edition, 2003)
  • Shapiro, Carl & Varian, Hal, Information Rules (Harvard Business School Press, 1999)
  • Solow, Robert, "We'd Better Watch Out" (New York Times Book Review, July 12, 1987)
  • TechInsights, "Nvidia Data Center GPU Shipments" (2024)
  • Virginia Joint Legislative Audit and Review Commission, "Data Centers in Virginia" (2024)