There's something almost surreal here: a certain AI company has started letting its models scan smart contracts for bugs.
Sounds like a good thing, right? Automated auditing, vulnerabilities caught before launch, less grunt work for developers, safer users. The problem is that the same tool cuts both ways: it can find flaws, and it can plant them. Today AI helps you spot vulnerabilities; tomorrow someone could use it to set traps at scale. If attackers get the same capability, the whole offense-defense dynamic changes.
The pace of technological iteration is absurd. Just last year everyone was mocking AI-written code as unreliable; now it's handling specialized work like security auditing.
The so-called "second stage of human-machine coexistence" probably won't look like a sci-fi robot rebellion; it will start with seemingly harmless tools like this. The more capable AI becomes, the higher the cost of losing control. And smart contracts, built on the principle that "code is law," are uniquely exposed: a deployed contract holds real funds and usually can't be patched, so it can't absorb large-scale automated attacks the way ordinary software can.
Technology is neutral; the people using it are not.