China's MiniMax challenges the global generative AI race once again with its 'M2.1' model.

Chinese artificial intelligence startup MiniMax has launched a new AI model, M2.1, renewing its challenge in the global generative AI race. The model significantly broadens its programming-language coverage and its ability to handle real-world work scenarios, showing improved performance not only in code assistance but also in document writing and conversational responses.

The newly released M2.1 upgrades the previous M2 model, markedly improving how accurately it understands and generates code across a wide range of programming languages. Supported languages now include Rust, Java, Go, C++, Kotlin, Objective-C, TypeScript, and JavaScript, and the model has also drawn praise for stronger user-interface design and aesthetics on Web, Android, and iOS platforms.

Notably, M2.1 strengthens not only the correctness of straightforward code execution but also the ability to interpret and follow complex task instructions and detailed guidelines, making the model better suited to real office workflows. MiniMax has also improved its dialogue and document-writing capabilities, and the model performs well across everyday conversation, technical writing, and structured responses.

Scott Breitenother, co-founder and CEO of the open-source AI agent platform Kilo Code, said: "In initial tests, M2.1 delivered outstanding results across the entire development process, including architecture design, code orchestration, review, and deployment," adding that the model offers both cost-effectiveness and high-end performance.

The model was also evaluated with a new benchmark called VIBE (Vision and Interactive Benchmark for Execution). VIBE covers five core areas, Web, Simulation, Android, iOS, and Backend development, and uses agent-based validation to comprehensively assess both the interaction logic and the visual quality of generated results. According to MiniMax, M2.1 averaged 88.6 on the benchmark, with standout scores of 91.5 on Web and 89.7 on Android.

M2.1 has also been benchmarked against mainstream AI models. In evaluations alongside flagship models from major vendors such as Anthropic, Google, OpenAI, and DeepSeek, M2.1 showed strong problem-solving ability on demanding benchmarks such as "Humanity's Last Exam" and "Toolathon." Notably, it scored 22.0 on HLE in the no-tools setting and 88 on the MMLU-Pro comprehensive test spanning the humanities, science, and technology, placing it alongside top-tier AI models.

M2.1 is currently available through MiniMax's own API and for download from Hugging Face, and the company's MiniMax agent service now runs on M2.1 as well. The release underscores the accelerating evolution of multilingual coding support and the AI agent market, and signals that generative AI originating from China continues to expand in both versatility and competitiveness.
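For readers who want to experiment, the sketch below shows one plausible way to load the weights with the Hugging Face transformers library. The repository ID MiniMaxAI/MiniMax-M2.1 is an assumption (MiniMax publishes under the MiniMaxAI organization on Hugging Face), and a model of this class will typically require multiple GPUs; consult the official model card for the exact name and hardware requirements.

```python
# Minimal sketch of loading M2.1 with Hugging Face transformers.
# The repository ID below is an assumption; verify it on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M2.1"  # hypothetical repo name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard weights across available GPUs
    trust_remote_code=True,  # MiniMax models may ship custom modeling code
)

# Build a chat prompt and generate a short completion.
messages = [{"role": "user", "content": "Write a Rust function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For lighter-weight experimentation, the hosted MiniMax API mentioned above avoids the local hardware requirements entirely.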
