Lessons from "The Global Intelligence Crisis of 2028": How Near-Zero Execution Costs in the AI Era Are Shifting the Scarcity of Decision-Making Power

Updated: 2026-02-25 07:49

In February 2026, a research report titled "The 2028 Global Intelligence Crisis" attracted widespread attention across financial markets. Released by Citrini Research, this macro scenario simulation imagines the economic trajectory from now through 2028: US unemployment surges past 10%, the S&P 500 falls 38% from its peak, and a structural crisis driven by AI quietly unfolds.

The report paints a sobering picture: as AI agents perform complex white-collar tasks at near-zero marginal cost, business models in software services, financial intermediation, and professional consulting are systematically dismantled. Companies use the funds saved from layoffs to purchase more AI computing power, leading to further layoffs—a "feedback loop with no natural brake." While economic output continues to grow, it no longer flows to human consumption sectors—the report calls this phenomenon "Ghost GDP."

What makes this report so impactful isn’t the accuracy of its predictions, but its focus on a fundamental question: as machine intelligence gradually replaces human intelligence—once the scarcest production factor—do existing economic theories still hold? As co-author Arup Shah emphasized in an interview, "This isn’t a prediction, but a stress test based on long-term models—if AI really keeps getting stronger as everyone expects, which business logics will break down first?"

Using this as a starting point, this article constructs a "Challenge–Trend–Impact–Response" analytical framework, focusing on the "intergenerational transition period" from 2025 to 2075. We explore how the structure of production factor scarcity shifts as the cost of "execution" approaches zero, and how wealth distribution and the social contract may evolve.

Revaluing Scarcity: The Shift in Production Factor Scarcity

The Zeroing Trend in Execution Costs

"Execution" refers to repetitive intellectual and manual tasks that can be algorithmized and routinized: basic programming, financial accounting, content generation, and so on. AI is driving the marginal cost of such execution toward zero. Consider the scenario depicted in "The 2028 Global Intelligence Crisis": India’s IT services industry, with over $200 billion in annual export revenue, faces disruption as global clients turn to AI coding agents whose costs are essentially just electricity. As the report notes, "The entire model is built on one value proposition—Indian developers cost a fraction of their US counterparts. But the marginal cost of AI coding agents has collapsed to, essentially, the price of electricity."

This trend is already visible in the data: US IT sector employment, a sector at the forefront of AI penetration, fell 8% from its 2022 peak to early 2026. Shah points out, "The easier it is for an industry to hand tasks over to AI, the more obvious the job losses. And the jobs most easily replaced are white-collar." Information processing, data analysis, workflow approvals—tasks that once required highly educated, highly paid workers—can now be done by AI at minimal cost.

From an economics perspective, this is fundamentally a structural adjustment in the relative scarcity of production factors. In a paper for Financial Review, Zhang Xiaojing and Li Jingjing argue that AI is driving a "scarcity shift"—a change in the relative scarcity structure of dominant resources amid technological transformation. This means intangible capital (data, algorithms, computing power, etc.) is gaining weight, while the scarcity status of some labor factors is being eroded.

The Rising Value of Scarcity in Decision-Making

As execution costs fall, the value of "decision-making power" rises. Decision-making includes: bearing risk under incomplete information, allocating resources, setting goals, handling ethical dilemmas, and critically evaluating and making final judgments on AI outputs.

Economic theories of entrepreneurship have long shown that decision-making and risk-taking are the fundamental sources of profit. When the supply of execution is unlimited, its price (wages) trends toward zero, making "decision-making" the bottleneck factor, and its value (rents/profits) inevitably rises. This is the other side of the "scarcity shift"—AI systems automate complex cognitive tasks, reducing the scarcity of human labor in information processing, but simultaneously creating new sources of scarcity.

At the organizational level, AI is reshaping decision-making mechanisms. Decisions that are rule-based, data-rich, and repetitive are most easily replaced by AI. For higher-risk, accountable decisions, AI acts more as a "thinking partner." A Monte Carlo simulation study suggests that in complex scenarios, human-AI collaboration yields the highest economic utility, but only if true "augmentation" is achieved; without synergy, human-AI collaboration can perform worse than either pure machine or pure human strategies.
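The cited study's actual model isn't reproduced here; the sketch below is a minimal illustration of the claim using invented accuracy and cost parameters. It shows how, under such assumptions, "augmented" collaboration can dominate while collaboration without synergy scores below both pure strategies:

```python
import random

random.seed(42)

# Illustrative (invented) parameters: decision accuracy and per-decision
# cost for pure-AI, pure-human, and two human-AI collaboration modes.
STRATEGIES = {
    "ai_only":    {"accuracy": 0.80, "cost": 1.0},
    "human_only": {"accuracy": 0.85, "cost": 10.0},
    # True augmentation: the human catches AI errors, the AI speeds
    # the human up, so accuracy rises while cost stays moderate.
    "augmented":  {"accuracy": 0.93, "cost": 4.0},
    # No synergy: the human rubber-stamps AI output, adding review
    # cost without improving accuracy.
    "no_synergy": {"accuracy": 0.80, "cost": 6.0},
}

def expected_utility(accuracy, cost, payoff=100.0, trials=100_000):
    """Monte Carlo estimate of utility per decision: a fixed payoff
    when the decision is correct, minus the per-decision cost."""
    total = 0.0
    for _ in range(trials):
        correct = random.random() < accuracy
        total += (payoff if correct else 0.0) - cost
    return total / trials

for name, params in STRATEGIES.items():
    print(f"{name:10s} utility ≈ {expected_utility(**params):6.1f}")
```

With these made-up numbers, the "no_synergy" mode lands below both pure-AI and pure-human strategies, while "augmented" collaboration comes out on top, mirroring the study's qualitative finding.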

Structural Phase Change: The Evolution of Income Distribution

From Labor Income to Capital and Decision-Making Income

AI is transforming the foundational structure of income distribution. Analysis by IPPR shows that jobs in the UK with automation potential account for £290 billion in wages—about a third of the total wage bill. If automation leads to lower average wages or reduced working hours, a significant share of national income will shift from labor to capital.

"The 2028 Global Intelligence Crisis" report projects the extreme outcome of this trend: labor’s share of GDP plunges from 56% in 2024 to 46% in the 2028 scenario. Wealth becomes increasingly concentrated in the hands of "owners of computing power and capital," while labor income continues to shrink. This isn’t just another round of technological unemployment—it’s a decoupling of value creation and value distribution. "Machines don’t need to spend money on consumption." When output growth no longer translates into purchasing power, the foundation of the economic cycle begins to falter.

The polarization of distribution in the AI era is rooted in the scarcity of new intangible capital, whose increasing marginal value and concentrated ownership are rewiring the logic of modern economic factor allocation. When capital ownership is highly unequal, rising capital income shares inevitably exacerbate inequality—"whoever owns the robots will own an ever-larger share of national wealth."

This impact is spreading from specific industries to the broader economy. Shah notes that the top 20% of income earners account for about 65% of US consumer spending; if white-collar incomes falter, cash flow across the entire consumer chain comes under pressure. The report models a scenario in which a 5% rise in white-collar unemployment triggers a drop in consumption far greater than 5%: a product manager earning $150,000 a year who loses their job and turns to gig work could see their income fall by over 70%.
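The amplification logic can be sketched with rough arithmetic. The income figures and the 65% spending share come from the article; the savings and spending splits are assumptions:

```python
# Back-of-envelope version of the report's amplification logic.
# From the article: a product manager earning $150,000 who shifts to
# gig work loses over 70% of income, and top earners supply ~65% of
# US consumer spending. The savings/spending splits are assumed.

salary = 150_000
gig_income = salary * (1 - 0.70)      # income falls 70% -> ~$45,000

# Assumed: they spend 80% of a $150k salary but ~95% of gig income
# (little room left to save), so spending falls slightly less than
# income -- yet still vastly more than 5%.
spending_before = salary * 0.80       # $120,000
spending_after = gig_income * 0.95    # ~$42,750
spending_drop = 1 - spending_after / spending_before

print(f"individual income drop:   {1 - gig_income / salary:.0%}")  # 70%
print(f"individual spending drop: {spending_drop:.0%}")            # 64%

# First-round aggregate effect if 5% of the top-earning cohort takes
# a similar hit; the report's far larger consumption drop would come
# from second-round multiplier effects layered on top of this.
first_round = 0.65 * 0.05 * spending_drop
print(f"first-round aggregate hit: {first_round:.1%}")             # 2.1%
```

Even this static first-round estimate shows why concentrated white-collar job losses matter disproportionately: the cohort being hit is the one that carries most of consumer spending.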

Policy Debates on Socializing AI Gains

As AI becomes the core productive force in society, should its vast gains be redistributed through some mechanism? This question is gaining traction. Experts at Baker Tilly point out, "For an AI-driven economy to thrive, society must ensure consumers maintain purchasing power. Some form of universal basic income or its variants can provide this safety net."

Tech giants have floated similar proposals. OpenAI CEO Sam Altman proposed the "American Equity Fund," which would tax large corporations and private land at 2.5% to pay annual dividends to every American adult. Mustafa Suleyman, head of Microsoft’s consumer AI business, advocates for "Universal Basic Services," framing access to powerful AI systems as a basic right.
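For a sense of scale, the fund's dividend arithmetic can be sketched in a few lines. The 2.5% rate is from the proposal as described above; the asset values and adult population are illustrative assumptions, not official figures:

```python
# Rough sketch of the dividend arithmetic behind the "American Equity
# Fund" proposal: 2.5% of large-company market value plus 2.5% of
# private land value, distributed annually per adult. All asset and
# population numbers below are illustrative assumptions.

corp_market_value = 50e12    # assumed taxable corporate market cap, $
private_land_value = 30e12   # assumed private land value, $
adults = 250e6               # assumed US adult population

fund_inflow = 0.025 * (corp_market_value + private_land_value)
dividend_per_adult = fund_inflow / adults

print(f"annual fund inflow: ${fund_inflow / 1e12:.1f}T")   # $2.0T
print(f"dividend per adult: ${dividend_per_adult:,.0f}")   # $8,000
```

The point of the sketch is that the dividend scales linearly with the taxed asset base, which is exactly why the skepticism below focuses on who owns that base.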

But these proposals face significant skepticism. A closer look reveals that Altman’s plan doesn’t advocate for worker control of OpenAI, nor for public ownership of AI infrastructure—it merely hopes for government to socialize the gains, while the chips, algorithms, and platforms that generate wealth remain tightly held by a handful of super-rich individuals. Japanese media have raised a fundamental question: when so much value has already been converted into equity and inherited wealth, can dividends truly benefit ordinary people?

Moreover, for most countries without leading AI companies, if local jobs are automated away and profits concentrate abroad, who pays income to their citizens? One possible solution is to establish an "International AI Dividend Fund," which would levy moderate taxes on the profits of the largest AI companies to support countries most affected by these shocks.

Adaptive Strategies: Anchoring Value During the Transition

Individual Level: From Skills Competition to Decision-Making Literacy

As knowledge retention and memory become AI’s absolute strengths, education must change. The core competitiveness of individuals and organizations in the future will not be how much they remember, but how quickly they can learn new things and adapt to change.

This means education should shift from "knowledge transmission" to "decision-making literacy"—including critical thinking, systemic risk assessment, ethical dilemma analysis, and the ability to "calibrate" and "veto" AI outputs. Forrester predicts that by 2026, 30% of large enterprises will mandate AI training to boost employee "AIQ" and reduce liability risks.

The "job freeze" phenomenon revealed in "The 2028 Global Intelligence Crisis" is worth noting: companies are adopting subtler approaches—business grows, but all new tasks go to AI, with no new hires. This may seem benign, but it deeply affects the labor market’s ability to regenerate. Shah notes that even companies with healthy finances today are seeing their stock prices fall—for a simple reason: "If every company is using AI to replace humans to protect margins, then three years from now, who will buy their products?"

Societal Level: Exploring a New Social Contract

At the institutional level, the transition period calls for a new social contract. Potential policy directions include: establishing lifelong learning accounts, improving social safety nets, and exploring mechanisms for recording and returning value for "data as labor."

The UNDP points out that the trajectory of AI isn’t determined by the pace of technological progress, but by "who benefits from it." This path isn’t set at the moment of invention, but is shaped by careful choices about how, where, and for whom AI is used. In practice, AI’s spread often happens not through national strategies, but through everyday procurement, platform, and operational decisions.

Macro policy frameworks also need updating. Traditional models assume factor scarcity and rising marginal costs, but with AI driving marginal costs toward zero, inflation measurement becomes unreliable and the job market faces a "skills–rules" mismatch. Some experts suggest incorporating metrics like the "algorithmic substitution rate" and "digital Gini coefficient" into policy tools, shifting from aggregate control to a dynamic balance of governance costs and innovation returns.
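A "digital Gini coefficient" is not yet a standardized statistic; one plausible construction applies the ordinary Gini formula to per-capita access to digital resources such as computing power. A minimal sketch with hypothetical data:

```python
def gini(values):
    """Standard Gini coefficient via the sorted-rank identity:
    0 = perfect equality, values near 1 = maximal concentration."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical per-capita "compute access" for five population groups;
# a digital Gini coefficient would apply the same formula to measured
# data of this shape (compute hours, bandwidth, data ownership, etc.).
equal = [10, 10, 10, 10, 10]
skewed = [1, 1, 2, 6, 90]

print(f"{gini(equal):.3f}")   # 0.000
print(f"{gini(skewed):.3f}")  # 0.732
```

Tracking such a coefficient over time would give policymakers a single number for whether access to the AI economy's key inputs is broadening or concentrating.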

Asset Level: Analysis from an Ownership Perspective

Given the earlier conclusion that "wealth tilts toward capital and decision-making," during the intergenerational transition, the core of personal wealth anchoring may shift from "selling labor for cash" to "owning productive assets." Broadly defined, "productive assets" include not only traditional corporate equity and real estate, but also new AI-economy infrastructure—computing power, data ownership, and governance tokens for platforms.

IPPR proposes expanding capital allocation and diversifying ownership models to democratize "who has a claim on the dividends of the automated economy." Specific strategies include citizen wealth funds, employee ownership trusts, and new profit-sharing models. The core belief: new, diversified ownership models are essential to ensure automation creates shared prosperity.

This analysis is not investment advice, but an objective assessment of macro trends—to help readers understand the economic logic behind changes in asset value. As the UNDP notes, decisions about how data is generated, shared, retained, and reused determine whether organizations can understand how AI systems create impact, intervene when problems arise, and improve performance over time.

Conclusion: Social Choices After the Scarcity Shift

The fundamental transformation of the AI era is the shift in value from "execution" to "decision-making" and "ownership." During the "intergenerational transition" from 2025 to 2075, the challenge is how to smoothly manage this structural transformation.

The authors of "The 2028 Global Intelligence Crisis" emphasized in response to market upheaval: "If you take the most optimistic view of AI’s disruptive impact, what happens next? As a society, we must confront and seriously consider this reality." The value of this report lies not in the accuracy of its predictions, but in forcing us to address questions we might otherwise overlook.

The shape of future society—whether it trends toward more concentrated "algorithmic centralization" or a fairer "ownership society"—won’t be determined by technology alone. The key issue now isn’t "whether to develop AI," but "how to develop AI," and "who benefits from AI." Without effective governance of key resources, forward-looking adjustment of distribution structures, and responsible planning for future generations, even exponential technological progress may have its welfare benefits offset by structural risks. Ultimately, the endpoint of all technology should be human well-being. Adhering to a "human-centered" principle—aimed at creating a society of shared prosperity—must become the central goal of AI development.
