Anthropic Launches Auto Mode for Claude Code, AI Autonomously Assesses Operation Safety

Gate News Report, March 25 — Anthropic has launched a new permission mode for its AI programming tool Claude Code, called auto mode, offering a third option between the default item-by-item approval and a complete permission bypass.

At the core of auto mode is a classifier that pre-screens each tool invocation to determine whether the operation could be destructive, such as bulk file deletion, leakage of sensitive data, or execution of malicious code. Operations judged safe are allowed automatically; dangerous ones are blocked, and Claude attempts to complete the task by other means. If Claude repeatedly tries to perform blocked operations, the system eventually prompts the user for confirmation.

Anthropic notes that this mode reduces risk but does not eliminate it, and recommends using it in isolated environments. Auto mode is currently in research preview for Team Plan users, with Enterprise and API users expected to gain access soon. Users can enable it with claude --enable-auto-mode, or switch to it during a session by pressing Shift+Tab.
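The decision flow described above can be sketched in a few lines of Python. This is a hypothetical illustration only: Anthropic's actual classifier is not public, so the keyword check below is a crude stand-in, and the names (`AutoModeGate`, `classify`, `escalate_after`) are invented for this sketch.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"

# Crude keyword stand-in for the real classifier, which is not public
# and presumably far more sophisticated than pattern matching.
DESTRUCTIVE_PATTERNS = ("rm -rf", "drop table", "curl | sh")

def classify(tool_call: str) -> Verdict:
    lowered = tool_call.lower()
    if any(p in lowered for p in DESTRUCTIVE_PATTERNS):
        return Verdict.BLOCK
    return Verdict.ALLOW

class AutoModeGate:
    """Sketch of the described flow: auto-allow safe calls, block
    dangerous ones, and escalate to the user after repeated blocks."""

    def __init__(self, escalate_after: int = 3):
        self.blocked_attempts = 0
        self.escalate_after = escalate_after

    def check(self, tool_call: str) -> str:
        if classify(tool_call) is Verdict.ALLOW:
            return "allowed"
        self.blocked_attempts += 1
        if self.blocked_attempts >= self.escalate_after:
            return "ask_user"   # prompt the user for confirmation
        return "blocked"        # Claude retries via another approach
```

Under this sketch, a safe call like listing files passes straight through, while a third consecutive blocked attempt escalates to the user instead of silently failing again.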
