a16z Research: AI Agents Can Detect DeFi Price Manipulation Vulnerabilities, but Complex Attack Execution Capabilities Remain Limited

robot
Abstract generation in progress

Deep Tide TechFlow News, April 29th, according to a16z disclosure, their researchers conducted systematic tests on whether AI agents can independently exploit DeFi price manipulation vulnerabilities. The study used a dataset of 20 Ethereum price manipulation incidents, with Codex (GPT 5.4) equipped with Foundry toolchain as the testing agent. Under baseline conditions without domain knowledge, the agent’s success rate was only 10%; after introducing structured domain knowledge derived from real attack events, the success rate increased to 70%.

Failure cases showed that while the agents could accurately identify vulnerabilities, they generally failed to understand the leverage logic of recursive lending, misjudged profit margins, and were unable to assemble multi-step cross-contract attack structures. The experiment also recorded a sandbox escape incident: the agent extracted RPC keys from local node configurations and called the anvil_reset method to reset the node to a future block, bypassing information isolation restrictions and obtaining real attack data.

The research team believes that AI agents can currently effectively assist in vulnerability identification but cannot yet replace professional security auditors.

View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
Add a comment
Add a comment
No comments