GTC 2026: NVIDIA OpenUSD (Universal Scene Description) Enables an AI-Driven Entertainment and Film Ecosystem

ChainNews / ABMedia

At GTC 2026, Shangying Group shared how it is leveraging NVIDIA OpenUSD (Universal Scene Description) to enable cross-software collaboration and accelerate film and television production. By feeding AI-generated content (AIGC) into NVIDIA Omniverse application databases, film and TV teams can apply AI from initial script development through virtual filming and post-production, creating an AI-driven entertainment and film ecosystem.

NVIDIA GPU Computing Clusters Provide High Performance

During its digital transformation, the film industry’s demand for computing resources has shifted from straightforward image rendering to complex inference for large language models (LLMs) and diffusion models. Shangying has established an AI Compute Pool centered on high-performance NVIDIA GPU clusters, offering flexible multi-tenant scheduling that lets multiple film teams work simultaneously. To move ultra-large assets such as 4K and 8K footage with minimal latency, the system incorporates InfiniBand and high-speed Ethernet, enabling rapid data transfer between compute nodes. This ensures computing power is precisely embedded into every stage of film creation, delivery, and operation.

OpenUSD Connects Cross-Software Collaboration

In the AI-driven digital transformation of the film industry, OpenUSD (Universal Scene Description) has become the core framework connecting workflows. It transforms scenes, characters, and props from single files into reusable “content assets,” breaking down compatibility barriers between software. Artists can collaborate seamlessly across tools such as Maya, Blender, Unreal Engine, and Houdini, avoiding the data loss that comes from repeated format conversions. With structured storage and version control, digital assets can be accurately managed and retrieved, significantly reducing the cost of re-modeling. Combined with NVIDIA Omniverse, OpenUSD offers an auditable, modular asset-management scheme that maximizes the value of digital assets.
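To illustrate the “reusable content asset” idea, a shot file in OpenUSD’s text format can pull a shared set and prop in by reference rather than embedding a copy; every shot that references the master asset automatically picks up fixes made to it. The sketch below is hypothetical (the asset paths and prim names are invented for illustration):

```
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    # The set is referenced, not copied: edits to the master
    # street_set.usda propagate to every shot that references it.
    def Xform "StreetSet" (
        references = @assets/sets/street_set.usda@
    )
    {
    }

    # A prop is likewise a reusable asset; only this shot's
    # placement (the translate override) lives in this file.
    def Xform "HeroCar" (
        references = @assets/props/hero_car.usda@
    )
    {
        double3 xformOp:translate = (4.0, 0.0, -2.5)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because the shot file stores only references and per-shot overrides, it stays small and diff-friendly, which is what makes the version control and auditability described above practical.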

NVIDIA Omniverse Enables AI Virtual Production

NVIDIA Omniverse achieves 3D Gaussian virtual filming by converting real-world footage into 3D Gaussian splat models using NVIDIA’s 3DGRUT technology. These models are exported as USDZ files and imported into Omniverse for high-quality rendering, saving the cost of physical set construction while reproducing scenes with high realism. Coupled with virtual filming techniques, actors are seamlessly integrated into 3D environments, producing convincing visual composites. This approach also allows scenes and props to be reused, reducing costs and improving production efficiency.

What Makes NVIDIA Omniverse Special?

NVIDIA Omniverse’s uniqueness lies in creating a Single Source of Truth (SSOT) environment that enables multiple applications to synchronize in real-time. This innovation solves the traditional problem where modelers, animators, and lighting artists hold separate copies, leading to version confusion and redundant data transfers.

The core of this solution is the Nucleus server and OpenUSD architecture. All scene data is stored in OpenUSD format within Nucleus, eliminating the need to repeatedly transfer bulky files. Instead, only changes are synchronized, allowing global teams to connect to the same SSOT simultaneously. This not only removes unnecessary copies but also ensures everyone sees the latest, most accurate scene, enabling truly parallel, real-time creative workflows.
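The parallel-editing workflow described above maps onto OpenUSD’s layer composition: each department edits only its own layer, and the composed stage resolves them by strength, so no one overwrites another department’s file. A minimal hypothetical shot file (the layer names are invented for illustration):

```
#usda 1.0
(
    # Each department contributes its own sublayer. Layers earlier
    # in this list are stronger: lighting's opinions override
    # animation's, which override layout's, while every department
    # keeps working in its own file on the shared server.
    subLayers = [
        @layers/lighting.usda@,
        @layers/animation.usda@,
        @layers/layout.usda@
    ]
)
```

Because each artist’s edits are confined to one layer, synchronizing a change means transferring only that layer’s delta rather than the whole scene, which is consistent with the change-only synchronization described above.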

AIGC in Script Development and Virtual Production

Artificial Intelligence Generated Content (AIGC) shortens film development cycles from months to weeks. In early stages, AI analyzes scripts, summarizes key points, and maps character relationships, automatically generating story outlines and character bios. Text-to-Image technology converts descriptions into visual references, improving communication. During filming, NVIDIA RTX real-time ray tracing enables virtual backgrounds and camera linkage for “virtual shooting.” AI tools like the Consistent Character Creator maintain character appearance across different lighting conditions, while virtual light-matching simulations reduce on-set adjustments.

AI is transforming the fundamental logic of film production, shifting from linear workflows to cross-software collaboration supported by NVIDIA OpenUSD and cloud computing. Editing, VFX, and pre-visualization can now occur simultaneously with on-set filming, no longer waiting for post-production to finish.

This article, “GTC 2026: NVIDIA OpenUSD Universal Scene Description Realizes AI Entertainment and Film Ecosystem,” originally appeared on ABMedia.
