Google in Talks with Marvell to Develop Custom AI Chips, Memory Processing Unit Planned for 2027

Gate News report, April 20 — Google is in talks with US chipmaker Marvell Technology to develop two custom chips designed to run AI workloads more efficiently and reduce its reliance on Nvidia's GPUs.

One chip will be a memory processing unit (MPU) designed to work alongside Google’s tensor processing unit (TPU), while the other will be a new TPU built specifically for AI model inference. The companies aim to complete the MPU design as early as 2027 before advancing to test production.

The partnership reflects Google’s broader push to develop proprietary silicon for its cloud AI infrastructure, enabling the company to optimize performance while building alternatives to existing GPU-based solutions.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

China's AI Model Call Volume Drops 23.8% Week-over-Week, U.S. Surpasses for First Time in Two Months

Global AI model call volumes declined to 206 trillion tokens last week. China's calls dropped to 444.1 trillion, while U.S. volumes rose to 490.8 trillion, surpassing China's for the first time in two months. Four of the top nine models are Chinese, with DeepSeek V3.2 second in calls.

GateNews · 19m ago

Axios exclusive: The U.S. NSA bypassed the Pentagon blacklist to use Anthropic Mythos, and Dario Amodei urgently met with White House officials to negotiate

Despite the Pentagon banning Anthropic, the NSA continues to use its powerful model Mythos, sparking disagreement and skepticism among government agencies. The contradiction between the NSA's use and the Pentagon's ban exposes internal inconsistencies in U.S. AI governance. Anthropic's CEO has met with White House officials to discuss usage boundaries and safety concerns, which may lead to changes in government procurement processes and transparency standards.

ChainNews Abmedia · 21m ago

Top AI Models Lag on Routine Enterprise Tasks, Databricks Says Smaller Specialized Models Outperform

David Meyer of Databricks says top AI models, despite their success on complex problems, underperform on routine enterprise tasks. Fundamental differences in data types affect performance, prompting a shift toward smaller, efficient models tailored to specific workflows for better reliability and cost-effectiveness in AI applications.

GateNews · 34m ago

Third-party AI tool breaches Vercel; Orca urgently rotates keys and confirms the protocol is secure

Decentralized exchange Orca announced that it has completed key rotation and confirmed that user funds are safe. The move followed an attack on the cloud platform Vercel, in which attackers entered Vercel's systems through a third-party AI tool's OAuth integration, a supply-chain vulnerability that traditional security measures struggle to detect. Vercel urged users to review their environment variables to strengthen protections, noting that crypto projects' reliance on cloud infrastructure creates a new class of security risk.

MarketWhisper · 46m ago

Claude Haiku 3 officially retires on 4/19: Anthropic forces migration to Haiku 4.5, and developers must change the model ID and parameter settings

The Claude Haiku 3 model was officially retired on April 19, 2026. Developers must update the model ID in their API requests to Haiku 4.5 and account for two breaking changes. Enterprises should strengthen AI model lifecycle management to avoid service interruptions caused by model deprecations; developers are advised to update their code promptly and monitor cost changes.
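The migration described above amounts to swapping the model identifier in each API request. A minimal sketch in Python, assuming hypothetical model IDs for illustration (the exact deprecated and replacement IDs should be taken from Anthropic's documentation):

```python
# Hypothetical model IDs for illustration only; confirm the real
# identifiers in Anthropic's model documentation before migrating.
OLD_MODEL = "claude-3-haiku-20240307"  # assumed retired Haiku 3 ID
NEW_MODEL = "claude-haiku-4-5"         # assumed Haiku 4.5 ID

def migrate_request(payload: dict) -> dict:
    """Return a copy of an API request payload with the retired
    model ID swapped for its replacement; other fields unchanged."""
    updated = dict(payload)
    if updated.get("model") == OLD_MODEL:
        updated["model"] = NEW_MODEL
    return updated

request = {
    "model": OLD_MODEL,
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
}
print(migrate_request(request)["model"])
```

Note this only covers the ID swap; the two breaking changes mentioned in the article would still need to be handled per Anthropic's migration notes.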

ChainNews Abmedia · 47m ago

Claude Opus 4.7's hidden price increase: a new tokenizer makes the same text use 37–47% more tokens, so rates stay the same but bills go up

Anthropic's new Claude Opus 4.7 tokenizer splits the same text into more tokens, raising input and output costs by 37–47%. Although official rates are unchanged, businesses should review contract terms and implement cost monitoring, since a model upgrade of this kind can cause budget overruns. The lack of transparency may also draw regulatory scrutiny.
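The budget effect described above is simple arithmetic: at an unchanged per-token rate, a tokenizer that emits more tokens for the same text raises the bill in proportion. A minimal sketch, using an illustrative rate and token volume rather than Anthropic's actual pricing:

```python
def bill(tokens: int, rate_per_million: float) -> float:
    """Cost in dollars for a token count at a flat per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

RATE = 15.0                    # illustrative $/million tokens, not real pricing
baseline_tokens = 100_000_000  # assumed monthly volume under the old tokenizer

# 37-47% more tokens for the same text, per the reported range
for inflation in (0.37, 0.47):
    inflated_tokens = int(baseline_tokens * (1 + inflation))
    print(f"+{inflation:.0%} tokens: "
          f"${bill(baseline_tokens, RATE):,.2f} -> "
          f"${bill(inflated_tokens, RATE):,.2f}")
```

Because cost scales linearly with token count, the bill rises by exactly the token-inflation factor even though the posted rate never changes.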

ChainNews Abmedia · 50m ago