Google adds another $40 billion investment in Anthropic: $10 billion paid first, $30 billion released on performance, plus 5 GW of TPU compute

ChainNewsAbmedia

According to a Bloomberg exclusive report on 4/24, Alphabet has confirmed an additional investment of up to $40 billion in Anthropic. The commitment will be delivered in two tranches: the first $10 billion will be injected in cash at a $380 billion valuation, matching February's Series G round; the remaining $30 billion will be released in stages as Anthropic meets performance targets. In parallel, Google Cloud will provide TPU compute on the order of 5 GW within five years.

Investment Structure

First tranche: $10 billion in cash at a $380 billion valuation (same level as February's Series G round)
Second tranche: $30 billion, released in stages as Anthropic hits performance milestones
Infrastructure commitment: Google Cloud to provide 5 GW of TPU compute within five years
Total cap: $40 billion (cash + conditional funding)

Background: Anthropic secures two major backers in one week

This Google investment comes right after last week’s announcement that Amazon increased its investment in Anthropic by as much as $25 billion (with $5 billion already injected and the rest tied to performance targets). The two U.S. cloud giants are ramping up their bets in tandem, stabilizing Anthropic’s compute and funding supply chain beyond 2027.

Anthropic's revenue momentum supports the bet: annualized revenue (ARR) jumped from about $9 billion at the end of 2025 to $30 billion in March 2026, a quarter-over-quarter increase of 233%. This week, Anthropic's valuation on the Forge Global secondary market also reached $100 billion, overtaking OpenAI's $88 billion. The primary and secondary markets are sending strong signals at the same time.
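The growth figure above can be sanity-checked with a quick calculation, plugging in the ARR numbers the article reports:

```python
# Sanity-check the reported ARR growth using the article's figures.
arr_end_2025 = 9e9      # ~$9B annualized revenue at end of 2025
arr_mar_2026 = 30e9     # $30B annualized revenue in March 2026

growth_pct = (arr_mar_2026 / arr_end_2025 - 1) * 100
print(f"Growth: {growth_pct:.0f}%")  # → Growth: 233%
```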

Google’s “dual identity as both competitor and partner”

Google has its own Gemini model family and enterprise AI platform, which on paper compete directly with Anthropic. But Google is also one of Anthropic's earliest cloud and compute suppliers. The $40 billion commitment shows Alphabet running a two-track strategy: investing in a competitor both locks in a major compute customer and diversifies its bets across AI leaders. For Google, the more TPUs Anthropic consumes, the more TPU procurement volume grows, the more Google Cloud revenue scales, and the faster TPU specifications evolve alongside a demanding customer.

Competitive pressure on OpenAI and GPT-5.5

After this Google ramp-up, Anthropic's position in the capital markets has largely caught up with OpenAI, which released GPT-5.5 this week backed by a SoftBank-led investor structure. The confrontation between the two camps along three axes (compute supply chain, capital structure, and model roadmap) will intensify further. Earlier this week, Sam Altman publicly accused Anthropic of "fear marketing" in their ongoing feud, and this funding round adds a concrete financial dimension to the conflict.

Next to watch

The specific timeline for Anthropic to unlock the $30 billion performance-based condition

The actual progress of the 5 GW TPU deployment and the pace of year-by-year ramp

Whether Anthropic advances its IPO plans led by Goldman Sachs and JPMorgan in the second half of 2026

The final structure of Google's ownership stake in Anthropic (Google already holds significant equity, which this round will further expand)

This article, on Google's $40 billion investment in Anthropic ($10 billion up front, $30 billion released on performance, paired with 5 GW of TPU compute), appeared first on ChainNews ABMedia.


Related Articles

Xizhi Technology-P IPO Shares Surge Over 360% on Gray Market, Gains Narrow to 320%

Gate News, April 27: Xizhi Technology-P (01879.HK), a Hong Kong-listed AI chip company, saw its shares surge more than 360% in gray-market trading earlier today before gains narrowed to 320%. The stock is trading ahead of its official Hong Kong IPO

GateNews · 11m ago

Should AI boost productivity or lower costs? A tenfold efficiency increase hasn’t turned into a tenfold revenue jump, but in Silicon Valley, nobody dares to call it off

Five Yuan Capital partner Meng Xing recently published a Silicon Valley field report with a judgment that has changed even his own note-taking habits: Silicon Valley is entering a stage where even people who can ride waves are being drowned by them. AI's iteration speed has shifted from monthly to weekly; even Silicon Valley cannot keep up with its own pace. When AI amplifies a team's productivity five times, you can cut 80% of the workforce and maintain the same output, or keep headcount and do five times the work. Meng Xing's observations are essentially a first draft of the answer being given on the ground: when 100x efficiency doesn't translate into 100x revenue, when token budgets edge toward human labor costs, and when the steam engine can't yet outpace the horse carriage but no one dares to stop, Silicon Valley is choosing to push speed up first and figure things out later. Whether this path ultimately leads to "expanding capability" or "compressing costs" has no conclusion yet.

YC has gone from leading indicators to lagging indicators

Meng Xing this year…

ChainNewsAbmedia · 1h ago

YC partners share how to use AI to build a company from scratch; startups should treat AI as an operating system rather than a tool

The impact of AI on startups is no longer limited to helping engineers write code faster, automating customer-service workflows, or adding a Copilot to an existing product. Recently, YC partner Diana pointed out that the real change is that AI is rewriting how a company should be built from scratch in the first place. For early founders, AI should not be merely an efficiency tool the company occasionally uses; it should be designed from day one as the operating system of the entire company.

The productivity perspective is outdated: AI is rewriting a company's design starting point

Diana believes that when people in the market talk about AI today, they still too often stay within a "productivity improvement" framework: engineers can write code faster, teams can automate more processes, and companies can ship more features. But this argument underestimates the structural changes AI brings. She points out that the right people paired with AI…

ChainNewsAbmedia · 1h ago

Cursor AI agent caused an incident! One line of code cleared the company database in 9 seconds—“security checks” turned into empty talk

PocketOS founder Jer Crane said that a Cursor AI agent ran maintenance on its own in a test environment, misused an API token with permissions to add and remove custom domains, and issued a delete command against Railway's GraphQL API. Within 9 seconds, all data and same-region snapshots were destroyed, with the latest recoverable point three months old. The agent admitted to violating rules on irreversible operations, failing to read the technical documentation, and failing to verify environment isolation, among other lapses. The victims were car-rental customers: their bookings and all related data disappeared, and reconciling accounts took a long time. Crane proposed five reforms: manual confirmation, fine-grained API permissions, backups separated from master data, a public SLA, and a mandatory enforcement mechanism at the infrastructure level.
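The first two reforms Crane proposes (manual confirmation and fine-grained API permissions) can be sketched as a guard around destructive operations. This is an illustrative Python pattern only, not PocketOS's or Cursor's actual code; the operation names and permission model are assumptions:

```python
# Illustrative guard: a destructive operation runs only if the caller's
# token carries an explicitly matching scope AND a human confirms it.
DESTRUCTIVE_OPS = {"delete_database", "drop_snapshots", "remove_domain"}

def guarded_call(op: str, token_scopes: set, confirm) -> str:
    """Run `op` only with a matching scope and a human yes."""
    if op in DESTRUCTIVE_OPS:
        if op not in token_scopes:                    # fine-grained permissions
            raise PermissionError(f"token lacks scope for {op!r}")
        if not confirm(f"Really run {op}? [y/N] "):   # manual confirmation
            return "aborted"
    return f"executed {op}"

# A token scoped for read-only work cannot delete, even if the agent insists:
try:
    guarded_call("delete_database", token_scopes={"read_metrics"},
                 confirm=lambda _: True)
except PermissionError as e:
    print(e)  # token lacks scope for 'delete_database'
```

The point of the design is that neither check alone is sufficient: scoped tokens limit what a runaway agent *can* do, and the confirmation step keeps a human in the loop for the operations that cannot be undone.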

ChainNewsAbmedia · 1h ago

DeepSeek V4 Pro with Ollama Cloud: One-click integration with Claude Code

According to an Ollama tweet, DeepSeek V4 Pro was released on 4/24 and has been added to the Ollama catalog in cloud mode; a single command lets tools such as Claude Code, Hermes, OpenClaw, OpenCode, and Codex call it. V4 Pro is a 1.6T-parameter Mixture-of-Experts model with a 1M-token context; cloud inference runs remotely and does not download weights locally. To run it locally, you must obtain the weights yourself and serve them with INT4/GGUF quantization across multiple GPUs. Early speed tests were affected by cloud load: typical throughput is about 30 tok/s, dipping as low as 1.1 tok/s. The recommendation is to prototype on the cloud first and, for production, self-host inference or use a commercial API.
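The multi-GPU requirement for local runs follows from simple arithmetic: at INT4, each parameter takes half a byte, so 1.6T parameters imply roughly 800 GB of weights before activations or KV cache. A back-of-envelope sketch, taking the article's 1.6T figure as given and assuming 80 GB cards purely for illustration:

```python
# Back-of-envelope weight footprint for a 1.6T-parameter model at INT4.
params = 1.6e12          # parameter count reported in the article
bytes_per_param = 0.5    # INT4 = 4 bits = half a byte (weights only)

weights_gb = params * bytes_per_param / 1e9
gpus_needed = weights_gb / 80        # assuming hypothetical 80 GB cards
print(f"~{weights_gb:.0f} GB of weights, at least {gpus_needed:.0f} x 80 GB GPUs")
```

This counts weights only; real deployments need additional memory for activations and the 1M-token context, so the actual card count would be higher.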

ChainNewsAbmedia · 2h ago