Being woken up by a phone call in the early morning is never a good thing. This time it was worse—an AI trading module from a certain quantitative team went out of control, clearing 12 staking pools in 10 minutes. Risk control stopped the major losses, but still, 870,000 dollars ran to the other side of the cross-chain bridges.
After hanging up, I stared at the ceiling for a long time. I had just read Kite's technical documentation the week before, and this incident drove home the exact question it raises: how should the authority boundaries of AI agents be defined?
The current situation is actually quite absurd. We grant AI agents permissions as if handing a program the bank vault password, every account private key, and the transfer whitelist all at once. The $3 million theft in the Banana Gun incident essentially stemmed from a flaw in permission design. Over the past six months, cumulative losses from such incidents have exceeded $230 million, which makes clear this is not an isolated case.
The solution approach of Kite is very straightforward: do not expect AI to always be obedient, but rather limit its "capabilities" from an architectural perspective.
They designed a three-tier identity system. An analogy: if you invite a chef to your home to cook, you certainly wouldn't hand him every key to the house. The sensible procedure is to issue a temporary kitchen access card that is only valid during working hours, only opens the refrigerator and stove, and records each entry and exit.
Corresponding to the Kite system: the user identity is you as the homeowner, the agent identity is equivalent to the chef's professional qualification certificate, and the session identity is that temporary card with a time and area limit.
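To make the three tiers concrete, here is a minimal Python sketch of the idea: a root user, a long-lived agent credential that holds no spending power by itself, and a short-lived session that carries the only actual authority. All class and field names here are illustrative assumptions, not Kite's actual API.

```python
# Illustrative sketch of a user -> agent -> session permission hierarchy.
# Names (User, Agent, Session, authorize) are hypothetical, not Kite's API.
from dataclasses import dataclass, field
import time

@dataclass(frozen=True)
class User:
    """The 'homeowner': root authority, never delegated directly."""
    user_id: str

@dataclass(frozen=True)
class Agent:
    """The chef's 'qualification certificate': long-lived identity,
    but it cannot move funds on its own."""
    agent_id: str
    owner: User

@dataclass
class Session:
    """The 'temporary kitchen card': narrow scope, spend cap, expiry,
    and an audit trail of every attempted action."""
    agent: Agent
    allowed_actions: frozenset   # e.g. {"swap", "query_balance"}
    spend_limit: int             # remaining budget this session may move
    expires_at: float            # unix timestamp after which nothing passes
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str, amount: int, now: float = None) -> bool:
        now = time.time() if now is None else now
        ok = (now < self.expires_at
              and action in self.allowed_actions
              and amount <= self.spend_limit)
        self.audit_log.append((now, action, amount, ok))
        if ok:
            # Every approved spend shrinks the remaining budget,
            # so a runaway agent can't drain more than the cap.
            self.spend_limit -= amount
        return ok
```

Under this shape, even a fully compromised agent can only act within one session's action list, spend cap, and time window; draining twelve pools would require twelve fresh grants rather than one standing key.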
The most critical aspect is the design of the "session" layer. Before the AI executes a task, the system generates a short-term session permission —
ReverseFOMOguy
· 12-01 20:50
870,000 just ran away like this and risk control couldn't stop it. This is the price of giving AI the full set of keys.
staking_gramps
· 12-01 20:48
870,000 dollars gone just like that, and that was with interception in place. Pretty wild. Permission design definitely needs to set off alarm bells.
MoneyBurnerSociety
· 12-01 20:43
Woken up in the early morning to hear that 870,000 dollars are gone, how many cups do I need to drink to fall asleep?
This is why I specialize in losses, even AI learns from me.
Permission boundaries, to put it bluntly, means these programs need to be equipped with "poverty limiters."
Kite's three-layer design sounds just like parental controls for troublesome kids; I should have managed my own account like this long ago.
230 million dollars in tuition, Web3 really sells experience by the ton.
That chef's metaphor is brilliant, it's like saying "Don't give my AI the keys, just give me a card reader."
The core issue is still trust; our relationship with AI is somewhat like... my stop loss order.
I like this session permission trick; it feels way more reliable than my current risk control plan of "praying not to get liquidated."
Suddenly understood, the safest contracts in DeFi are those that have no permissions to execute any operations.
870,000 went to cross-chain bridges; this guy is probably like me, a "stable liquidity provider."
SelfCustodyBro
· 12-01 20:41
$870,000 just ran away like that and risk control still has to take the blame, it's hilarious
The AI agent situation really needs guardrails; handing over permissions is as dangerous as handing over keys
Kite's three-layer identification approach is not bad, it's definitely better than the current chaos
Another nightmare call at 2 AM, that's crypto for you.
MindsetExpander
· 12-01 20:41
870,000 didn't stop it, that's really amazing, it's like giving AI a blank check.
OnchainGossiper
· 12-01 20:33
$870,000 just ran away like that, and risk control couldn't stop it. This is the consequence of giving AI full permissions.
The issue of AI agent permissions really needs to be taken seriously, or next time it might be your gold that's gone.
Kite's three-layer identity system is a decent idea, but can it really be implemented? That depends on how it works in practice.
The lesson from $230 million has piled up, and someone needs to take this matter seriously.