Anthropic has released its new model, Claude Mythos Preview, along with official benchmarks. We have grown accustomed to impressive AI coding capabilities, but Mythos takes them to a level that is both incredible and quite frightening.
In the open-source world, the model discovered a 27-year-old zero-day vulnerability in OpenBSD, an operating system known for its strong security, that had gone unnoticed by the world's top security researchers for nearly three decades. Even in giant ecosystems like the Linux kernel and Windows, the model quickly identified thousands of security flaws.
Interestingly, however, Anthropic has decided not to release this model to the general public for now, because its offensive cybersecurity capabilities are so strong that, if misused, it could cause a major digital catastrophe. The company admits that the model's autonomous reasoning is far sharper than that of any previous model, such as Opus 4.6. It not only finds vulnerabilities, it can also write effective exploit code overnight and use it to take over entire systems.
So, will we be unable to use this powerful AI? For now, the answer is no. Anthropic has launched a special defensive initiative called Project Glasswing, granting access to only 40 trusted organizations, including Google, Microsoft, Amazon, and NVIDIA. The goal is to ensure that, before this technology falls into the hands of malicious hackers, AI can be used to secure critical internet infrastructure, banking systems, and sensitive healthcare codebases.