Earlier this month I saw news about a new AI venture from miHoYo's founder. The company, named Anuttacon, launched LPM 1.0, a real-time video character generation technology that's quite interesting.



From what I've seen, the model supports full video-and-audio conversation: it can speak, sing, listen, and react simultaneously, and smoothly. What stands out is how realistic the details are; lip movements, facial expressions, and body gestures all look natural.

Technically, LPM 1.0 has 17B parameters and has been adapted into a low-latency streaming version. Another interesting point is that it supports a wide range of character styles without additional customization: realistic humans, 2D animation, 3D game characters, even non-human creatures. The team also released LPM-Bench, a benchmark for evaluating character performance.

Compared with competing models, the old limitation of generating only about 30 seconds of content is gone: LPM 1.0 supports unlimited output duration, which makes it suitable for conversational NPC agents in games and for live virtual broadcasts.

Notably, Anuttacon explicitly frames this launch as an academic exchange, with no plans to offer an API or public product for now. Even so, some in the community see it as a way to attract AI talent.

In fact, this marks another step in AI development for Cai Haoyu, following miHoYo's earlier work on AI tools and chatbots. miHoYo's direction in AI for game engines and intelligent NPCs seems to be entering a new phase.