AI Agents Reproduce Smart Contract Exploits Worth $4.6M and Uncover Zero-Days Across 2,849 Contracts as On-Chain Returns Double Every 1.3 Months
COINOTAG News reports on Anthropic’s latest study, which suggests AI agents can reproduce on-chain security threats with measurable impact. In a controlled simulation spanning contracts exploited between 2020 and 2025, Claude Opus 4.5, Sonnet 4.5, and GPT-5 reproduced vulnerabilities representing an estimated $4.6 million in exposure. The findings underscore the evolving risk landscape for cryptocurrency smart contracts and blockchain security teams.
Beyond reproducing known exploits, the three models scanned 2,849 contracts with no previously reported vulnerabilities and nonetheless uncovered two new zero-day flaws. In multiple test runs, the agents produced simulated attacks that would have been profitable, illustrating the automated risk-reward dynamics of on-chain attack scenarios.
Researchers note that the returns AI agents achieve on simulated on-chain exploits have doubled roughly every 1.3 months over the past year, signaling rapid maturation toward autonomous, profitable vulnerability exploitation. The report stresses the importance of proactive risk management, rigorous auditing, and continuous monitoring to defend against on-chain threats and protect stakeholder capital.
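To put the reported pace in perspective, the sketch below compounds a metric that doubles every 1.3 months; the doubling interval is the only figure taken from the study, and the calculation is purely illustrative, not part of Anthropic’s methodology.

```python
# Minimal sketch: annualized growth implied by a metric that doubles
# every 1.3 months (the figure reported above). Everything beyond that
# doubling interval is an illustrative assumption.

DOUBLING_INTERVAL_MONTHS = 1.3  # reported doubling time

def growth_factor(months: float, doubling_interval: float = DOUBLING_INTERVAL_MONTHS) -> float:
    """Multiplicative growth over `months`, assuming steady exponential doubling."""
    return 2 ** (months / doubling_interval)

if __name__ == "__main__":
    # Over 12 months, doubling every 1.3 months compounds to about 2^9.2,
    # i.e. on the order of a ~600x increase in the measured return metric.
    print(f"12-month growth factor: {growth_factor(12):.0f}x")
```

If the trend held steady, a year of such doublings would compound to roughly a 600-fold increase, which is why the report treats the trajectory, rather than today’s absolute figures, as the main cause for concern.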