[Coin World] Anthropic recently ran a rather hardcore test: turning the AI models Claude Opus 4.5, Sonnet 4.5, and GPT-5 loose on real smart contract vulnerabilities.
They built a benchmark called SCONE-bench, which contains 405 contracts that were actually hacked between 2020 and 2025. The results are striking: on the contracts attacked after March 2025 (i.e., after the models' knowledge cutoffs), the three models together identified exploitable vulnerabilities worth roughly $4.6 million.
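The post doesn't explain how SCONE-bench works under the hood, but benchmarks of this kind are usually scored by whether an agent can independently reproduce an exploit against a snapshot of the contract taken before the real attack, with the historical loss used as the dollar value. Purely as an illustration, and with every name below made up rather than taken from Anthropic's implementation, such a scoring loop might look like:

```python
# Hypothetical sketch only: SCONE-bench's internals aren't described in the post,
# so all names and structure here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ExploitCase:
    address: str        # contract that was drained in the real incident
    loss_usd: float     # value stolen in that incident
    post_cutoff: bool   # True if the attack happened after the model's knowledge cutoff

def model_reproduces_exploit(model: str, case: ExploitCase) -> bool:
    """Stub: a real harness would let the model probe a fork of the pre-attack
    chain state and check whether its candidate exploit actually extracts funds."""
    return False  # placeholder so the sketch runs end to end

def score(model: str, cases: list[ExploitCase]) -> float:
    """Total USD value of post-cutoff incidents the model can exploit on its own."""
    return sum(c.loss_usd for c in cases
               if c.post_cutoff and model_reproduces_exploit(model, c))
```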
Even more interesting is the live-fire portion: they ran the models against 2,849 recently deployed smart contracts with no publicly disclosed issues, and Sonnet 4.5 and GPT-5 each uncovered two previously undisclosed zero-day vulnerabilities, which in theory could lead to losses of $3,694.
There's one telling detail: GPT-5's API bill for running this test came to $3,476, which basically cancels out the value of the vulnerabilities it found. That raises the obvious question: is having AI audit smart contracts actually worth it, or is it still just too expensive?
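Taking the post's own numbers at face value, the live-test economics are roughly break-even; a quick back-of-the-envelope check (ignoring the historical-contract results and any human time involved):

```python
# Rough break-even math using only the figures quoted above.
zero_day_value_usd = 3_694   # theoretical losses from the undisclosed bugs found live
gpt5_api_cost_usd  = 3_476   # reported GPT-5 API spend for the run

print(f"net value: ${zero_day_value_usd - gpt5_api_cost_usd}")  # net value: $218
print(f"exploit value per compute dollar: {zero_day_value_usd / gpt5_api_cost_usd:.2f}")  # 1.06
```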
memecoin_therapy
· 20h ago
A vulnerability worth 4.6 million dollars was discovered by AI? Looks like on-chain security has to work overtime now, haha
Claude's speed at finding bugs is genuinely scary, but the problem is that GPT-5's API bill must be frighteningly high
Zero-day vulnerabilities are being discovered in bulk by AI... It's getting harder and harder to maintain a job in the security auditing field
Is Anthropic providing free security consulting for the Web3 community? I don't think so; this is a demonstration of how powerful the technology is
Finding contract vulnerabilities will eventually be monopolized by AI; traditional auditing is bound to have a tough time
MetamaskMechanic
· 20h ago
$4.6 million? This AI is really something... But it still depends on how much real money can actually be extracted in the end.
Claude's model capabilities are indeed strong, but I'm more concerned about whether these vulnerabilities being made public will actually make them targets.
How much does GPT-5 spend just on API fees... The input-output ratio seems a bit off.
Finding 4 zero-day vulnerabilities? Sounds impressive, but it still feels like just the beginning.
So now AI is better at finding vulnerabilities than security auditors? The competition is getting a bit brutal.
Letting AI compete to find vulnerabilities feels like a new direction for security, but it also has a hint of danger.
The quality of this dataset determines everything; will there be bias in samples from 2020 to 2025?
A zero-day worth 3694 dollars feels a bit low... I expected it to be more.
AI finds vulnerabilities much faster than humans, but the trust factor still has to be questioned.
CexIsBad
· 20h ago
4.6 million dollars? AI has started to take away the jobs of security researchers, this is too incredible haha
GPT-5's ability to exploit vulnerabilities really is impressive, but that API cost... the debugging alone must have burned a fortune
Zero-day vulnerabilities are getting exposed by AI in advance; hackers are out of a job now
This test set is a bit small though; are 405 contracts really enough to go on?
Feels like AI is much more reliable than human security auditors... but it depends on how risk control follows up
Sonnet 4.5 did well too; this round of promotion for Claude was pretty smart
A $3,694 vulnerability, why even bring in AI for that?
This indicates that the smart contract audit industry is going to be reshuffled, which is a bit scary
CryptoPunster
· 20h ago
Oh no, AI has started exploiting vulnerabilities to make money, my job is going to be gone!
4.6 million dollars, and GPT-5 nearly went broke on API fees just getting there, haha.
Even zero-day vulnerabilities need AI to find them now; suckers like us should have been laid off long ago.
Interesting, can Sonnet outperform GPT-5? What is Anthropic hinting at?
The value of smart contract vulnerabilities is in the millions, but the flaw in my Wallet is priceless.
Well, now hackers are going to be unemployed thanks to AI, and we will be next.
AI exploited a $4.6 million contract vulnerability, but the API fees for GPT-5 almost caused its own downfall.