0G Labs Reports 107B-Parameter Decentralized AI Breakthrough, Highlighting Cost-Efficient Training and Open-Source Plans
In Brief
0G Labs reported training the 107B-parameter DiLoCoX model—larger than Bittensor’s system—using a cost-efficient decentralized approach, and has begun openly retraining it with full transparency and planned open-source release.
The model, known as DiLoCoX-107B, was trained in July 2025 using technology developed in partnership with China Mobile, the world’s largest mobile network operator. According to peer-reviewed research published on arXiv, the system achieved communication efficiency levels 357 times higher than conventional AllReduce methods when operating over standard 1 Gbps internet connections, suggesting that advanced AI training may be feasible without reliance on high-cost data center infrastructure.
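To see why communication efficiency is the decisive factor here, a back-of-envelope calculation helps. The figures below are illustrative assumptions (fp16 parameters, one full-gradient exchange per synchronization), not details from the paper; only the model size, the 1 Gbps link, and the 357x figure come from the article.

```python
# Back-of-envelope: why naive full-gradient AllReduce is impractical
# over a 1 Gbps link for a 107B-parameter model.
# Assumptions (illustrative, not from the paper): fp16 parameters and
# one full-gradient exchange per synchronization.

PARAMS = 107e9           # 107B parameters
BYTES_PER_PARAM = 2      # fp16
LINK_BPS = 1e9           # 1 Gbps

grad_bytes = PARAMS * BYTES_PER_PARAM            # bytes moved per sync
seconds_per_sync = grad_bytes * 8 / LINK_BPS     # transfer time per sync

print(f"Gradient size per sync: {grad_bytes / 1e9:.0f} GB")
print(f"Naive sync time at 1 Gbps: {seconds_per_sync / 60:.1f} minutes")
# A 357x reduction in communication, the figure 0G Labs reports,
# shrinks that per-sync cost proportionally:
print(f"At 357x efficiency: {seconds_per_sync / 357:.1f} seconds")
```

Even on these rough numbers, a naive synchronization moves over 200 GB and takes close to half an hour per step at 1 Gbps, which is why reducing communication, rather than raw compute, is the bottleneck decentralized training must solve.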
The initial training results indicated that distributed computing architectures could compete with centralized approaches at the highest levels of model development. While companies such as OpenAI, Google, and Meta invest heavily in large-scale GPU clusters, 0G Labs reported that its distributed framework could reduce costs by approximately 95 percent, based on figures cited by Forbes. The system operates across decentralized nodes connected through widely available internet infrastructure.
In comparison, Bittensor’s Covenant-72B model, developed on its Subnet 3 network by a group of contributors, has been described as a notable advancement within the decentralized AI field. However, 0G Labs stated that its earlier work had already demonstrated the feasibility of training models at a larger scale, supported by peer-reviewed validation.
The company further announced that it has initiated a new phase involving the public retraining of DiLoCoX-107B, emphasizing transparency and an open-source release strategy. This effort is intended to establish clearer standards for verifiable AI development practices.
Full-Stack Infrastructure For Verifiable AI
Unlike systems developed primarily for experimental purposes, DiLoCoX-107B is integrated into a broader blockchain-based infrastructure designed for AI agents. This includes a production-ready stack featuring an EVM-compatible layer-one blockchain, decentralized computing resources, distributed storage capabilities, and a high-performance data availability layer positioned as significantly faster and more cost-efficient than comparable data availability solutions in the Ethereum ecosystem.
The company stated that such infrastructure is intended to support not only model training but also verifiable inference, secure storage, and on-chain settlement processes, reflecting broader operational requirements for AI agent ecosystems.
The system incorporates several technical approaches, including pipeline parallelism, dual-optimizer coordination between local and global updates, delayed synchronization to enable continuous training, and adaptive gradient compression to reduce communication overhead while maintaining performance accuracy.
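Two of those techniques, dual-optimizer coordination with delayed synchronization and gradient compression, can be sketched in a toy form. The code below is a minimal NumPy illustration under my own assumptions (toy quadratic loss, top-k sparsification, a DiLoCo-style inner SGD plus outer Nesterov-momentum step); it is not 0G Labs' implementation and omits pipeline parallelism entirely.

```python
# Toy sketch of delayed synchronization + dual optimizers + top-k
# gradient compression. Assumptions mine: quadratic loss, small dims.
import numpy as np

rng = np.random.default_rng(0)
DIM, WORKERS, INNER_STEPS, ROUNDS, K = 1000, 4, 8, 5, 100

def local_grad(w):
    """Toy loss ||w - 1||^2 per worker; noise stands in for data variation."""
    return 2 * (w - 1.0) + 0.01 * rng.standard_normal(w.shape)

def top_k(v, k):
    """Keep only the k largest-magnitude entries (compress before sending)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

global_w = np.zeros(DIM)
outer_momentum = np.zeros(DIM)

for _ in range(ROUNDS):
    deltas = []
    for _ in range(WORKERS):
        # Inner (local) optimizer: plain SGD, no communication for
        # INNER_STEPS steps -- the "delayed synchronization".
        w = global_w.copy()
        for _ in range(INNER_STEPS):
            w -= 0.05 * local_grad(w)
        # Communicate only a compressed pseudo-gradient.
        deltas.append(top_k(global_w - w, K))
    # Outer (global) optimizer: Nesterov-momentum SGD on the averaged
    # pseudo-gradient -- the "dual-optimizer coordination".
    pseudo_grad = np.mean(deltas, axis=0)
    outer_momentum = 0.9 * outer_momentum + pseudo_grad
    global_w -= 0.7 * (pseudo_grad + 0.9 * outer_momentum)

print(f"distance to optimum after {ROUNDS} rounds: "
      f"{np.linalg.norm(global_w - 1.0):.3f}")
```

The point of the sketch is the communication pattern: each worker sends only a sparse summary once per round instead of a full gradient every step, while the outer optimizer keeps the replicas converging toward a shared solution.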
0G Labs indicated that the retraining process is currently in progress and that all relevant data, methodologies, and results will be disclosed throughout its duration. The final model is expected to be released under an open-source license, with full access to training artifacts.