Alphabet Inc.'s Google is reportedly in talks with Marvell Technology to develop two new chips designed to improve how artificial intelligence models are run.
Summary
- Google is in talks with Marvell to develop two AI-focused chips, including a memory processing unit and a next-generation TPU, to improve model efficiency.
- The push is part of Google’s effort to position its TPUs as an alternative to Nvidia GPUs, while expanding partnerships with Intel and Broadcom.
- The move comes alongside the launch of Gemma 4, as Google aligns its AI models and hardware stack amid intensifying competition in AI computing.
According to a report by The Information, citing people familiar with the matter, one of the proposed chips could be a memory processing unit built to work alongside Google’s tensor processing units, or TPUs. The second chip is expected to be a new TPU tailored specifically for running AI workloads more efficiently.
The move is part of Google’s effort to position its in-house chips as an alternative to Nvidia’s GPUs. TPU adoption has been contributing to Google Cloud revenue growth, as the company looks to show returns on its AI infrastructure spending.
The report added that Google plans to complete the design of the memory-focused chip by next year before moving to test production. At the same time, it has expanded partnerships with chipmakers such as Intel and Broadcom to support growing demand for AI infrastructure.
Rising competition in AI hardware
As Google steps up development of its AI accelerators, it could begin to challenge Nvidia’s long-standing lead in high-performance computing.
Nvidia, meanwhile, is advancing its own lineup of AI inference chips, including designs that incorporate technology from Groq. The entry of another large-scale competitor could intensify the race in AI hardware and reshape how companies source computing power for their models.
Investors are likely to look for further clarity when Google reports its first-quarter results on April 29. The earnings release is expected to offer signals on cloud performance, advertising trends, and how aggressively the company plans to invest in AI and semiconductors in the coming quarters.
AI model advances support hardware push
Google’s latest chip discussions come as it continues to expand its AI model capabilities. Earlier this month, the company introduced Gemma 4, a new open model family built for advanced reasoning and agent-style workflows.
Gemma 4 is available in four sizes and is designed to handle multi-step logic and structured problem-solving more effectively. It has also delivered improved results in benchmarks tied to mathematics and instruction-following tasks.
The models include features such as native function calling, structured JSON outputs, and system-level instructions, allowing developers to build autonomous systems that can connect with APIs and external tools. They can also generate code offline, turning local machines into capable AI coding assistants.
Together, the model upgrades and chip development plans show how Google is aligning its software and hardware stack as competition in the AI space continues to intensify.