Anthropic Rolls Out Election Safeguards for Claude AI Ahead of US Midterms
In brief
Anthropic, the artificial intelligence company behind the Claude chatbot, announced Friday a set of new election integrity measures designed to prevent its AI from being weaponized to spread misinformation or manipulate voters ahead of the 2026 U.S. midterm elections and other major contests around the world this year. The San Francisco-based company detailed a multi-pronged approach that includes automated detection systems, stress-testing against influence operations, and a partnership with a nonpartisan voter resource organization—measures that reflect the growing pressure on AI developers to police how their tools are used during election seasons. Anthropic’s usage policies prohibit Claude from being used to run deceptive political campaigns, generate fake digital content intended to sway political discourse, commit voter fraud, interfere with voting infrastructure, or spread misleading information about voting processes.
To enforce those rules, the company said it put its newest models through a battery of tests. Using 600 prompts—300 harmful requests paired with 300 legitimate ones—Anthropic measured how reliably Claude complied with appropriate requests and refused problematic ones. Claude Opus 4.7 and Claude Sonnet 4.6 responded appropriately 100% and 99.8% of the time, respectively. The company also tested its models against more sophisticated manipulation tactics. Using multi-turn simulated conversations designed to mirror the step-by-step methods bad actors might employ, Sonnet 4.6 and Opus 4.7 responded appropriately 90% and 94% of the time when tested against influence operation scenarios. Anthropic also tested whether its models could autonomously carry out influence operations—planning and executing a multi-step campaign end-to-end without human prompting. With safeguards in place, its latest models refused nearly every such task, the company said.
On the question of political neutrality, the company runs evaluations before each model launch to measure how consistently and impartially Claude engages with prompts expressing views from across the political spectrum. Opus 4.7 and Sonnet 4.6 scored 95% and 96%, respectively. For users seeking voting information, Claude will surface an election banner directing them to TurboVote, a nonpartisan resource from Democracy Works that provides reliable, real-time information about voter registration, polling locations, election dates, and ballot details. A similar banner is planned for Brazil’s elections later this year. Anthropic said it plans to continue monitoring its systems and refining its defenses as the election cycle progresses. Decrypt reached out to Anthropic for comment on the findings, but did not immediately receive a response.