These days everyone is arguing intensely about whether the funding rate will flip or keep bubbling up, but I find myself thinking more about AI agents: no matter how well they run, many steps still require human oversight. During authorization, for instance, a single wrong signature is no longer just a "strategy mistake." The same goes for cross-chain bridges, contract upgrades, routing aggregation, on-chain slippage, and failed retries: an agent can automate all of these, but when something anomalous happens, someone has to hold the reins and decide whether execution continues. Then there are the information sources: feed an agent a pile of data that merely "looks like the truth" and it just reaches self-delusion faster. I'm tired but still here, so first get the controllable permissions, limits, and whitelists in order, and then slowly watch which teams on L2 are actually taking security seriously.
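The "permissions, limits, and whitelists" idea can be sketched as a pre-execution guard that every agent transaction must pass before signing. This is a minimal illustrative sketch, not any real wallet or agent framework's API; all names (`TxGuard`, `Tx`, the addresses) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tx:
    to: str        # destination contract address (hypothetical)
    amount: float  # value in the agent's base asset

class TxGuard:
    """Illustrative guard: whitelist + per-tx limit + daily limit.
    Anything it rejects should escalate to a human instead of retrying."""

    def __init__(self, whitelist, per_tx_limit, daily_limit):
        self.whitelist = {a.lower() for a in whitelist}
        self.per_tx_limit = per_tx_limit
        self.daily_limit = daily_limit
        self.spent_today = 0.0

    def check(self, tx: Tx) -> tuple[bool, str]:
        """Return (allowed, reason). Only allowed transactions count
        toward the daily spend."""
        if tx.to.lower() not in self.whitelist:
            return False, "destination not whitelisted -> human review"
        if tx.amount > self.per_tx_limit:
            return False, "per-tx limit exceeded -> human review"
        if self.spent_today + tx.amount > self.daily_limit:
            return False, "daily limit exceeded -> halt and escalate"
        self.spent_today += tx.amount
        return True, "ok"

guard = TxGuard(whitelist=["0xRouter"], per_tx_limit=100, daily_limit=250)
print(guard.check(Tx("0xRouter", 50)))    # within limits: allowed
print(guard.check(Tx("0xUnknown", 10)))   # not whitelisted: blocked
print(guard.check(Tx("0xRouter", 500)))   # over per-tx limit: blocked
```

The point of the design is that the agent keeps automating the happy path, while every rejection is a forced handoff to a person rather than a silent retry.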

This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.