#CryptoMarketPullback NVIDIA leads a decline in global assets 📉. A little-known company is behind it…
This time, from Sam Altman to Jensen Huang and on to Trump, the Americans have finally cracked: their once-confident AI race with China ran into Huawei, and then, unexpectedly, into another player that had kept just as low a profile. And so,
DeepSeek went viral in the United States overnight
American AI, counterattacked out of nowhere, was left speechless and completely caught off guard!
From OpenAI to Anthropic, from Meta to xAI, from Google to Microsoft, the entire American AI industry never saw this coming. Training a top-tier large model in the United States often requires some 20,000 high-end NVIDIA GPUs; yet DeepSeek R1, trained by the company DeepSeek, reportedly used only 1,024 lower-end NVIDIA GPUs. More striking still, DeepSeek R1 immediately landed heavy blows on America's proudest models: Llama 3.1, GPT-4o, and Claude 3.5 Sonnet!
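To put the gap the article describes into rough numbers: the fleet sizes below are the article's own figures, and everything else is purely illustrative back-of-envelope arithmetic, not reported data.

```python
# Back-of-envelope comparison of the two GPU fleets cited in the article.
# The fleet sizes come from the article's claims; no cost or performance
# figures are assumed here.

TYPICAL_US_FLEET = 20_000   # GPUs the article says a top US training run often uses
DEEPSEEK_R1_FLEET = 1_024   # GPUs the article attributes to DeepSeek R1's training

ratio = TYPICAL_US_FLEET / DEEPSEEK_R1_FLEET
print(f"DeepSeek R1 used roughly 1/{ratio:.0f} of a typical US fleet (~{ratio:.0f}x fewer GPUs)")
```

On these numbers alone the claimed fleet is about twenty times smaller; note that raw GPU count ignores differences in chip generation, interconnect, and training time, so it is only a crude proxy for total compute.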
What does this mean? The American approach to training large models, long imitated and even revered, may have been heading in the wrong direction all along, and the billions of dollars already invested may have been wasted.
(Image: media coverage of domestic Chinese AI challenging OpenAI)
And the story doesn't end there. The question is: how much longer can AI compute spending keep escalating? The situation in the United States is now thoroughly muddled. Keep investing?
Meta announced capital expenditure surging to over 60 billion US dollars
That borders on comic, while DeepSeek says nothing at all. Yet without increased investment, it is hard to sustain the large-model technology route they have set for themselves, and there seem to be no other options.
If the big American AI companies never saw this coming, neither did NVIDIA, the supplier of AI compute, escape the blow. OpenAI, Meta, and xAI had all pledged to buy GPUs frantically and to build data centers and supercomputing clusters just as frantically. But DeepSeek-V3, and especially DeepSeek R1, made Jensen Huang realize overnight that the once-scarce NVIDIA GPUs were no longer so indispensable: training a world-class generative AI model does not take 99,999 H100/H200 GPUs, nor even 9,999, nor 999.
Sure enough, DeepSeek has left Jensen Huang puzzled: GPUs were once so scarce that buyers could not get a single card, and now the question is who NVIDIA will sell to in the future. That is the real mystery.
Fine, NVIDIA may cut production and its market value may slip below one trillion dollars, but what about the U.S. government? Its export restrictions against China, covering everything from NVIDIA GPUs to AI cloud data centers, are already in place, capping AI compute quotas across 140 countries and regions worldwide.
What does this mean? Before China's DeepSeek-V3, and especially DeepSeek R1, attracted widespread attention, the US government kept insisting it would suppress China's AI. But once DeepSeek-V3 and DeepSeek R1 shocked the world, the US had instead dug a hole for itself. The key point is that China's AI suddenly moved from behind the scenes to the global forefront:
In how large models are trained, American models are no longer the benchmark; following the practice of China's DeepSeek is enough. In AI compute, American GPUs and data centers no longer set the direction; in the future, the global AI community may well choose Chinese AI chips and Chinese data centers! So what does America's export-ban backfire amount to? The biggest surprise is that China's AI has leapt ahead of the United States, reshaping the world's AI landscape in one stroke!