NVIDIA stock price falls sharply by 7%! Meta turns to Google chips, the three tech giants go to war.
Meta reportedly plans to deploy Google's Tensor Processing Units (TPUs) in its own data centers by 2027, and may rent related computing power through Google Cloud starting in 2026. The collaboration is seen as a significant breakthrough for Google in the AI chip market; on the news, NVIDIA's stock fell as much as 7% before the loss narrowed to 2.6%.
Meta turns billions of dollars toward Google TPUs
According to The Information, Meta is in talks to deploy Google's Tensor Processing Units (TPUs) in its own data centers by 2027 and may rent related computing power through Google Cloud as early as 2026. The deal is striking in scale, with an estimated procurement value in the billions of dollars, marking a significant breakthrough for Google in the AI chip market. Having already agreed to supply Anthropic with up to a million TPUs, Google has now drawn another heavyweight client, a sign that its competitiveness in AI infrastructure is rising rapidly.
This decision carries several strategic implications for Meta. First, it suggests Meta is unwilling to rely too heavily on market leader NVIDIA, whose high prices and long delivery times are pushing companies to seek alternatives. NVIDIA's H100 and H200 GPUs remain in short supply, with order queues often stretching to several months or even a year, and prices continuing to climb. For Meta, which needs to scale its AI computing power rapidly, this supply bottleneck directly affects the training and inference efficiency of its Llama models.
Second, Google's TPU has become a competitive option thanks to its AI-specific architecture, which allows deeper tuning for large language model training and inference. The TPU is an ASIC (application-specific integrated circuit) designed for AI workloads, and it is tightly integrated with Google DeepMind's models, such as Gemini. Observers see its advantages in efficiency, customization, and cost as the key reasons enterprises are considering a shift away from NVIDIA.
Third, a multi-vendor strategy has become the consensus among tech giants. Facing high costs, supply constraints, and the need to diversify risk, companies are no longer willing to rely solely on NVIDIA, and most cloud and AI companies have begun procuring GPUs, TPUs, and other alternatives in parallel. By using both NVIDIA GPUs and Google TPUs, Meta secures sufficient computing power while gaining leverage in price negotiations.
The market reacted quickly: Alphabet's market value approached $4 trillion, and Taiwan's MediaTek surged 8%, a sign that the spillover effects of Google's TPU are taking shape. As a reported partner in the Google TPU supply chain, MediaTek stands to see direct revenue growth if Meta's large orders materialize.
NVIDIA stock falls 7% before staging a strong comeback
(Source: Google Finance)
After the Meta report surfaced, NVIDIA's stock plunged as much as 7% before the loss narrowed to 2.6%, erasing more than $200 billion in market value in a single day. The company responded on X: "We are delighted by Google's success; they have made great advances in AI, and we continue to supply to Google." The seemingly gracious statement also reminds the market that Google itself is a major customer for NVIDIA GPUs, and that the two companies are not purely adversaries.
NVIDIA pointedly added: "We are still a generation ahead of the industry, the only platform that runs every AI model and does it everywhere computing is done, offering greater performance, versatility, and fungibility than ASICs." The rebuttal targets a core weakness of Google's TPU: although ASICs are more efficient for specific tasks, they lack flexibility. TPUs have historically been optimized for Google's own frameworks and models, and their performance advantage may shrink considerably when running other frameworks (such as PyTorch) or third-party models.
By contrast, NVIDIA's GPUs use a general-purpose computing architecture that supports nearly every mainstream AI framework and model. From OpenAI's GPT series and Anthropic's Claude to Meta's Llama and the open-source community's Stable Diffusion, the vast majority of AI models are trained on NVIDIA GPUs. This ecosystem advantage makes it hard for developers and companies to break away from NVIDIA entirely, even as they increase their use of Google TPUs.
A few weeks ago, Google released the widely acclaimed Gemini 3 model, trained on the company's own TPUs rather than NVIDIA GPUs, underscoring how intense the chip competition has become. The case shows TPUs succeeding within Google's internal ecosystem, but it also reveals a limitation: Gemini 3 runs efficiently on TPUs largely because Google's engineers designed the model architecture around the TPU's characteristics from the start. For businesses using standard frameworks and open-source models, the cost of such deep customization may outweigh the savings on the chips themselves.
Three Main Points of NVIDIA's Counterattack
Ecosystem Advantages: All mainstream AI frameworks and models are optimized for NVIDIA GPUs, with high conversion costs.
Irreplaceable Versatility: GPUs can handle a variety of tasks, including training, inference, and graphics rendering, while ASICs are limited to specific scenarios.
Technical Leadership: The latest H200 and the upcoming B100 still outperform competitors by a generation in terms of performance.
AI chip market pattern shifts from monopoly to multipolar competition
The tripartite dynamics among Google, Meta, and NVIDIA show the AI chip battlefield entering a new stage. As Google's TPU rapidly expands its market presence, reports that Meta may become its next multibillion-dollar client have sent a shockwave through the AI chip supply chain. Against the backdrop of NVIDIA's long-standing dominance of the AI chip market, competition among the three affects not only the computing-power plans of tech giants but also global stock markets, supply chains, and the AI model ecosystem.
This competition will determine the core architecture of the next generation of AI infrastructure. If NVIDIA continues to maintain its technological leadership and sustain ecosystem barriers, its dominant position will remain solid. If Google TPU successfully penetrates more enterprise customers, proving its cost-effectiveness advantage in specific scenarios, the market will enter a multipolar competitive landscape. If tech giants like Meta fully embrace self-developed chips or a multi-supplier strategy, NVIDIA's pricing power and market share may face substantial challenges.
NVIDIA's stock reaction makes clear that the market takes this competitive threat seriously. Although the 7% drop narrowed to 2.6% by the close, an intraday swing of more than $200 billion in market capitalization shows how sensitive investors are to shifts in the AI chip landscape. The volatility also reflects that NVIDIA's extraordinary gains of recent years have priced in a great deal of optimism, so any credible competitive threat can trigger profit-taking.
For the supply chain, this competition also has far-reaching implications. NVIDIA's GPUs are primarily manufactured by TSMC, and Google TPU also relies on TSMC's advanced processes. Regardless of who wins, TSMC will benefit. However, suppliers in downstream sectors such as packaging, testing, memory, and PCB face the risk of reallocation. MediaTek's stock price surged 8%, indicating that the market believes an increase in Google TPU orders will drive new opportunities for Taiwan's semiconductor supply chain.
For AI model developers, chip selection will directly impact model design and optimization strategies. If the market share of Google TPU continues to expand, developers may need to optimize models specifically for TPU, which will increase development costs but may also enhance performance in specific scenarios. If the market maintains a multi-vendor landscape, developers will need to ensure that models can run efficiently on different chips, which raises higher demands for abstraction and standardization at the framework level.
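The framework-level abstraction described above can be sketched in plain Python. This is a hypothetical backend registry for illustration only, not the API of any real framework; production systems such as PyTorch and JAX solve the same problem with compiler backends (e.g. XLA) rather than a dictionary, but the principle is the same: model code calls one standard entry point, and a dispatcher picks whichever vendor implementation is available.

```python
# Hypothetical sketch: a registry that dispatches one standard op
# (matmul) to whichever hardware backend is registered, so model
# code stays vendor-neutral. Names here are illustrative, not a
# real framework's API.

BACKENDS = {}

def register_backend(name):
    """Decorator that registers an implementation under a backend name."""
    def wrap(fn):
        BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("cpu")  # reference implementation, always available
def matmul_cpu(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def matmul(a, b, preferred=("tpu", "cuda", "cpu")):
    # Try backends in preference order, mirroring how frameworks
    # fall back across accelerators when one is not installed.
    for name in preferred:
        if name in BACKENDS:
            return BACKENDS[name](a, b)
    raise RuntimeError("no matmul backend available")

# The same call works whether or not a "tpu" or "cuda" backend
# was ever registered.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Adding TPU support in this scheme would mean registering one more implementation, with no change to the model code that calls `matmul`; that is the kind of standardization burden the multi-vendor landscape pushes onto frameworks.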
The choices of each party may become the key variables in reshuffling the market. If NVIDIA opens a clear technological gap with its next-generation products (such as the B100), it will cement its lead. If Google can prove its TPUs' cost-effectiveness and win more customers, it will pose a genuine threat to NVIDIA's dominance. Meta's final decision will serve as a bellwether, shaping the computing-power procurement strategies of other tech giants.