Recently, a wave of AI agents has started going on-chain to "do work" by themselves.

Honestly, they're pretty good at clicking buttons, but the dirty work still falls to humans: that one authorization signature with a few extra parameters is easy to sign wrong;
when a cross-chain bridge or aggregator route changes, deciding how to handle slippage and retries doesn't "hurt" them the way it hurts us;
and the most annoying part is when mempool transactions get front-run or sniped: how aggressive the protection strategy should be is a judgment call, and agents just follow the rules and lose money without so much as a flinch.
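The slippage-and-retry judgment call above can be sketched as a guard policy. Everything here (`get_quote`, `send_swap`, the thresholds) is a hypothetical stand-in, not any real aggregator's API; the point is just where the "refuse to auto-sign" line sits:

```python
import time

def swap_with_guard(get_quote, send_swap, amount_in,
                    max_slippage=0.005, max_retries=3):
    """Attempt a swap, refusing to sign if the quoted output drifts
    past the slippage bound, and retrying only on transient errors.

    get_quote/send_swap are hypothetical callables standing in for a
    DEX aggregator client; this is a sketch of the decision logic only.
    """
    baseline = get_quote(amount_in)  # expected output at decision time
    for attempt in range(max_retries):
        quote = get_quote(amount_in)
        # Re-check the route just before sending: if the quote moved
        # more than the bound, escalate to a human instead of signing.
        if quote < baseline * (1 - max_slippage):
            raise RuntimeError(
                f"quote drifted {1 - quote / baseline:.2%}; refusing to auto-sign"
            )
        try:
            # min_out enforces the same bound on-chain as well
            return send_swap(amount_in, min_out=quote * (1 - max_slippage))
        except ConnectionError:
            time.sleep(2 ** attempt)  # transient failure: back off, retry
    raise RuntimeError("swap failed after retries")
```

Note the asymmetry: network errors are retried automatically, but a drifting quote halts the agent, because that is exactly the case where "just following the rules" loses money.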

These days, new L1/L2s are offering incentives to attract TVL.
Veteran users grumble about "farm and dump," which I can understand.
Agents are more like an assembly line: claim, swap, withdraw, leaving neat footprints on-chain.
But if incentives get changed on short notice, blacklists are added, or a contract has small pitfalls, humans still take the blame in the end.
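That claim-swap-withdraw assembly line only stays safe if every step is gated by a pre-flight check. A minimal sketch, assuming hypothetical names throughout (`run_pipeline`, `preflight`, the step labels are all illustrative):

```python
def run_pipeline(steps, preflight):
    """Run an agent pipeline (e.g. claim -> swap -> withdraw), gating
    every step behind a preflight check such as "is the incentive still
    live?" or "is the contract/address blacklisted?".

    steps: list of (name, action) pairs, action is a zero-arg callable.
    preflight: callable name -> (ok, reason); all names are hypothetical.
    """
    results = []
    for name, action in steps:
        ok, reason = preflight(name)
        if not ok:
            # Halt and hand back to a human instead of pushing through.
            return results, f"halted before {name}: {reason}"
        results.append((name, action()))
    return results, "done"
```

The design choice worth copying is that a failed check stops the whole line and reports what was already done, rather than skipping the step and continuing, since a half-completed pipeline is where the blame lands on the human.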

I now prefer to think of agents like autonomous driving: they can steer, but don't keep your hands off the wheel for too long...
Which step do you think most needs human oversight?