The AI industry also has its own Satoshi Nakamoto—Jensen Huang

PANews

Author: Luo Yihang, Silicon-Based Perspective

The token you once had to believe in to see, you can now see without believing. It is the next unit after the watt, the ampere, and the bit.

In January 2009, an anonymous person invented something called a “token.” You invest computing power and earn tokens, which circulate, get priced, and trade within a consensus network. This gave birth to the entire crypto economy. More than a decade later, people still debate whether these tokens have any value.

In March 2025, a man in a leather jacket redefined another kind of token. You invest computing power and produce tokens, which are consumed the instant they are produced, in AI inference: thinking, reasoning, coding, decision-making. This is accelerating the AI economy. No one debates whether these tokens have value, because you used millions of them just this morning.

Two types of tokens, same name, same underlying structure: Input computing power, output valuable results.


In March 2026, I sat at NVIDIA GTC listening to Jensen Huang deliver a nearly product-free keynote. Yes, he announced Vera Rubin, a CPU-GPU hybrid product. But this time, he didn’t talk about chip specs or process technology; he presented a complete economics of token production, pricing, and consumption—

Which model corresponds to which token speed; which token speed corresponds to which price range; what hardware level supports each price range.

He even provided data center hardware allocation plans for CEOs and decision-makers holding corporate budgets: 25% for free tier, 25% for mid-range, 25% for high-end, 25% for premium.

Yes, he wasn’t selling a specific GPU model this time, unlike two years ago with Blackwell. He was selling something bigger. After two hours, his core message, as I heard it, was: welcome to consume tokens that only Nvidia’s factories can produce.

At that moment, I realized this man, and the anonymous person who mined the first token 17 years ago, are doing the exact same thing structurally.

The Same Conversion Rules

The anonymous “Satoshi Nakamoto” wrote a nine-page white paper in 2008, designing a set of rules: invest computing power, complete a mathematical proof (Proof of Work), and earn crypto tokens as rewards.

The brilliance of this rule is that it requires no trust: if you accept the rules, you automatically participate in the economy. The rule holds precisely because it brings together many parties who would otherwise have to deceive each other.

And Jensen Huang, on the GTC 2026 stage, did something structurally identical.

He showed a diagram illustrating the tension between inference efficiency and token consumption: the Y-axis is throughput (tokens per megawatt), the X-axis is interactivity (token speed as perceived by each user). Below the X-axis, he marked five pricing tiers:

  • Free, with Qwen 3: $0 per million tokens
  • Medium, with Kimi K2.5: $3 per million tokens
  • High, with GPT MoE: $6 per million tokens
  • Premium, with GPT MoE at 400K context: $45 per million tokens
  • Ultra: $150 per million tokens
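As a back-of-envelope sketch of the economics this diagram encodes: revenue per megawatt is throughput times price, and the tiers trade throughput against interactivity. The prices below are the tiers quoted above; the tokens-per-second-per-megawatt figures are invented purely for illustration, not NVIDIA numbers.

```python
# Toy model of the keynote diagram: each tier pairs a price (as quoted)
# with an ASSUMED throughput a megawatt can sustain at that tier's
# interactivity level. Higher interactivity costs throughput.

tiers = [
    # (name, price in USD per million tokens, assumed tokens/sec per MW)
    ("Free / Qwen 3",              0,   6_000_000),
    ("Medium / Kimi K2.5",         3,   2_000_000),
    ("High / GPT MoE",             6,   1_000_000),
    ("Premium / GPT MoE 400K ctx", 45,    150_000),
    ("Ultra",                      150,    30_000),
]

SECONDS_PER_HOUR = 3600

def revenue_per_mw_hour(price_per_million: float, tokens_per_sec: float) -> float:
    """Gross revenue one megawatt can generate per hour at a given tier."""
    tokens_per_hour = tokens_per_sec * SECONDS_PER_HOUR
    return tokens_per_hour / 1_000_000 * price_per_million

for name, price, throughput in tiers:
    print(f"{name:30s} ${revenue_per_mw_hour(price, throughput):>10,.0f} per MW-hour")
```

Under these made-up throughput figures, the point of the exercise is visible immediately: the data-center operator's problem is not picking the highest price, but mixing tiers so each megawatt earns the most, which is exactly what the 25/25/25/25 allocation advice addresses.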

This diagram could almost serve as the cover of Huang’s “Token Economics” white paper.


Satoshi Nakamoto defined “meaningful computation”: finding a SHA-256 hash below a difficulty target is valuable. Jensen Huang defined “meaningful inference”: producing tokens at a specific speed, under given power constraints, for specific scenarios is valuable.
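The proof-of-work side of that comparison fits in a few lines. This is only the shape of the mechanism, with a toy leading-zero-bits difficulty rule; real Bitcoin double-hashes an 80-byte block header and compares against a compact-encoded target.

```python
# Minimal proof-of-work sketch: search for a nonce whose SHA-256 hash
# falls below a difficulty target. The "meaningful computation" is the
# search itself; the nonce is the proof that the work was done.
import hashlib

def proof_of_work(block_data: bytes, difficulty_bits: int) -> int:
    """Return a nonce whose hash has at least `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~2**16 hashes of work on average; verification takes a single hash.
nonce = proof_of_work(b"block header", difficulty_bits=16)
print(nonce)
```

The asymmetry is the point: producing the token costs many hashes, verifying it costs one. Huang's "meaningful inference" has no such verification trick; its proof of value is that someone consumes the tokens.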

Neither Satoshi nor Huang directly produce tokens; they define the rules and mechanisms for token creation and pricing.

A phrase Huang said on stage could almost be directly included in a white paper on token economics—

Tokens are the new commodity, and like all commodities, once it reaches an inflection point, once it matures, it will segment into different parts.

Tokens are the new bulk commodity. Once mature, bulk commodities naturally stratify. Huang isn’t describing the current state; he’s predicting a market structure and precisely aligning his hardware product lines at each layer of this structure.

The production processes of the two tokens even carry a semantic symmetry: crypto tokens are “mined,” AI tokens are “inferred,” and both names describe extracting something valuable out of raw computation.

Mining and inference share the same essence: turning electricity into money. Miners spend electricity to mint crypto tokens, then sell them; AI models and agents spend electricity to generate AI tokens, then sell them by the million. The intermediate steps differ, but both ends are the same: a meter on the left, income on the right.

Two Ways of Expressing Scarcity

The most important design decision Satoshi Nakamoto made wasn’t Proof of Work, but the 21 million Bitcoin cap. He used code to create artificial scarcity—no matter how many miners join, the total Bitcoin supply will never exceed 21 million. This scarcity anchors the entire crypto economy’s value.

Huang, on the other hand, creates natural scarcity through physical laws. He says:

“You still have to build a gigawatt data center. You still have to build a gigawatt factory, and that one gigawatt factory for 15 years amortized… is about $40 billion even when you put nothing on it. It’s $40 billion. You better make sure you put the best computer system on that thing so that you can have the best token cost.”

A 1GW data center will never become 2GW. This isn’t a code limit; it’s a physical law.

Land, electricity, cooling—each has physical limits. The amount of tokens a factory can produce over 15 years depends entirely on the computing architecture you install.
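The amortization arithmetic behind Huang's quote is simple enough to check. A rough sketch, using the $40 billion and 15 years from the keynote; the two architectures and their throughput figures are purely illustrative assumptions:

```python
# The keynote's claim in numbers: a 1 GW factory costs ~$40B amortized
# over 15 years, so the power envelope fixes the cost side, and the
# computing architecture you install fixes how many tokens that cost buys.

FACTORY_COST_USD = 40e9      # per the keynote quote
YEARS = 15
HOURS = YEARS * 365 * 24     # 131,400 hours of amortized life
MW_PER_GIGAWATT = 1000

amortized_per_mw_hour = FACTORY_COST_USD / HOURS / MW_PER_GIGAWATT
print(f"amortized cost: ${amortized_per_mw_hour:,.2f} per MW-hour")

# Same factory, two HYPOTHETICAL architectures with different
# tokens-per-second-per-megawatt. Power is fixed; only throughput moves.
for name, tokens_per_sec_per_mw in [("arch A", 500_000), ("arch B", 2_000_000)]:
    tokens_per_mw_hour = tokens_per_sec_per_mw * 3600
    cost_per_million = amortized_per_mw_hour / tokens_per_mw_hour * 1e6
    print(f"{name}: ${cost_per_million:.4f} per million tokens")
```

With the capital cost pinned by physics and finance, a 4x throughput advantage translates directly into a 4x lower token cost, which is the whole argument for "put the best computer system on that thing."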


Satoshi’s scarcity can be forked. If you dislike the 21 million cap, fork a new chain, change it to 200 million, call it Ether or whatever, and release a white paper. People have indeed done this, enthusiastically.

Huang’s created scarcity cannot be forked. You can’t fork the second law of thermodynamics, the capacity of a city’s power grid, or the physical land area.

But whether it’s Satoshi or Huang, their creation of scarcity leads to the same result: a hardware arms race.

The history of mining hardware: CPU → GPU → FPGA → ASIC. Each generation of dedicated hardware rendered the previous one obsolete. The history of AI training and inference hardware is repeating it: Hopper → Blackwell → Vera Rubin → Groq LPU. General-purpose hardware opens the game; specialized hardware closes it. After acquiring Groq, Huang showcased the Groq LPU at GTC: a deterministic dataflow processor with static compilation, no dynamic scheduling, and 500MB of on-chip SRAM. Architecturally, it is an ASIC for inference: it does one thing, and does it to the extreme.

Interestingly: GPUs played key roles in both waves.

Around 2013, miners discovered GPUs are better than CPUs for crypto mining, and Nvidia GPUs sold out. Ten years later, researchers found GPUs are best for training and inference of AI models, and Nvidia data center cards sold out again. As a processor category, GPUs served two generations of token economies.

The difference: the first time, Nvidia was a passive beneficiary; the second time, as AI compute shifted from pretraining to inference, Nvidia seized the opportunity, designed the entire game, and became the rule-maker.

The Most Profitable “Shovel” in the World

In the gold rush, the most profitable weren’t the miners but the suppliers of shovels and jeans, like Levi Strauss. In the mining boom, it wasn’t the miners but Bitmain and Wu Jihan, who sold the mining rigs. In the AI pretraining and inference waves, it isn’t the foundation models or agents, but Nvidia, selling GPUs.

But honestly, the roles of Bitmain and Nvidia in their respective industries are no longer comparable.

  • Bitmain only sells mining hardware; Nvidia was once one of its suppliers. Once you buy a rig, which coin you mine, which pool you join, and at what price you sell have nothing to do with Bitmain. It is a pure hardware supplier, earning a one-time device margin.
  • Nvidia is different. It doesn’t just sell hardware. Since the AI inference boom of 2025, it has deeply defined what these GPUs mine, how tokens are priced, who buys them, and how data center compute is allocated. All of this is in Huang’s presentations: a market divided into five tiers, each with specific models, context lengths, interaction speeds, and prices. Nvidia has standardized and formatted the coming inference-driven AI market.

Around 2018, global hash power was concentrated in a few large pools (F2Pool, Antpool, BTC.com). They competed with each other for share, but the hardware supply was highly centralized at Bitmain.

Today, Nvidia’s revenue comes mainly from the “hyperscalers” competing with one another (AWS, Azure, GCP, Oracle, CoreWeave), with roughly 40% from more decentralized AI natives, sovereign AI projects, and enterprise clients. The large “mining pools” contribute most of the revenue; the smaller “miners” provide resilience and diversification.

The structure of the two ecosystems is identical. But Bitmain eventually faced competitors (Shenma/MicroBT, Canaan, Innosilicon) eating into its share. Mining rigs are relatively simple ASIC designs, which gave challengers a chance. Shaking Nvidia looks ever harder: 20 years of the CUDA ecosystem, hundreds of millions of GPUs installed, sixth-generation NVLink interconnect, and the disaggregated inference architecture after integrating Groq. Nvidia’s technical complexity and ecosystem moats blunt most competitive weapons.

This may last 20 years.

The Fundamental Fork of the Two Tokens

What makes crypto tokens and AI inference tokens fundamentally different is the motivation and psychology of their users.

Crypto tokens are driven by speculation. No one “needs” Bitcoin to do work. All white papers claiming blockchain tokens solve problems are scams. Holding crypto is because you believe someone will buy it from you at a higher price later. Bitcoin’s value comes from a self-fulfilling prophecy: if enough people believe it’s valuable, it is. This is a faith economy.

AI tokens, on the other hand, are about productivity. Nestlé needs tokens for supply chain decisions: its supply chain data used to refresh every 15 minutes, now refreshes every 3, and costs fell by 83%. The value maps directly to the P&L. Nvidia’s own engineers now need tokens to code instead of writing everything by hand; research teams need tokens for scientific work. You don’t need to believe tokens are valuable; just use them, and their value is self-evident in the using.

This is the core difference: Crypto tokens are produced to be held and traded—their value lies in not using them. AI tokens are produced to be immediately consumed—their value lies in their use at the moment of consumption.

One is digital gold, accumulating value as stored wealth; the other is digital electricity, burned upon production.

This difference means the AI token economy won’t bubble the way crypto did. Bitcoin’s wild swings are driven by speculation and emotion. AI token prices are driven by usage and production costs. So long as AI remains useful, so long as people use Claude Code to code, ChatGPT to write reports, and agents to run business processes, demand for tokens won’t collapse. It rests not on faith, but on necessity.

  • In 2008, the Bitcoin white paper had to argue at length for why a decentralized electronic cash system would be valuable. Seventeen years later, people still debate it.
  • In 2026, token economics has caused no controversy; it has become a consensus that needs no debate. When Huang said at GTC that “tokens are the new commodity,” no one pushed back, because everyone in the audience had burned millions of tokens in Claude Code or ChatGPT that very morning. They don’t need convincing of token value; their credit card bills already prove it.

In this sense, Huang truly is a copy of Satoshi Nakamoto, except one who keeps the mining-hardware monopoly in his own hands, defines the tokens’ use cases and standards, and hosts an annual show at San Jose’s SAP Center to unveil the next generation of AI training and inference “mining machines.”

Satoshi Nakamoto has a reticent, romantic charm: design the rules, hand them to code, then disappear. That is the cyberpunk ideal. Huang, more businessman than scientist, designs the rules, maintains them himself, improves them constantly, and builds his own fortress.

The token you once had to believe in to see, you can now see without believing. It is the next unit after the watt, the ampere, and the bit.
