$292 million lost, 12 days after a warning from a tool with 47 GitHub stars.


April 20, 2026 | 9527TEAM

  1. The Event

On April 7th, an open-source AI auditing tool published a report on GitHub.

The report precisely identified a vulnerability in the 1-of-1 verification node configuration of Kelp DAO’s LayerZero cross-chain bridge. It received 47 stars.

On April 19th, Kelp DAO was hacked. Losses amounted to $292 million.

This is not a technical failure. It is a failure of human nature.


  2. What happened in 12 days

On April 7th, the warning report was issued.

On GitHub, 47 people gave it a star. Probably including some researchers, some DeFi players, and maybe a few developers closely tracking Kelp’s contracts.

And then what?

And then nothing happened.

The Kelp team did not take the report seriously. No emergency response was triggered. It was not widely circulated through any security mailing list. It did not make it into mainstream media coverage.

On April 19th, the hacker used the exact same method to transfer away $292 million.

12 days. Enough time to do many things.

A Telegram group could be formed. A security audit could be completed. The protocol could be paused.

But none of it happened.


  3. Why did no one listen

This is a question even harder to answer than the $292 million.

One possible explanation: those who saw the report lacked the authority or influence to push the Kelp team to act. This is common in the open-source world — you discover a vulnerability, you send it out, but the other side has no obligation to respond.

A second possibility: they saw it but did not grasp the severity of the vulnerability. To a non-professional, the LayerZero 1-of-1 verification node configuration issue might look like a mere “configuration suggestion” rather than an “immediate red alert: halt all cross-chain operations.”
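To see why a 1-of-1 setup is a red alert rather than a suggestion, consider a minimal audit sketch. This is hypothetical code with made-up field names (not LayerZero’s actual API or the tool from the report); it only illustrates the logic: if the number of independent verifiers that must agree is one, a single compromised key can forge cross-chain messages.

```python
from dataclasses import dataclass, field

@dataclass
class DvnConfig:
    """Simplified cross-chain verification config (hypothetical fields,
    loosely modeled on LayerZero-style DVN settings)."""
    required_dvns: list                       # verifiers that must ALL sign off
    optional_dvns: list = field(default_factory=list)
    optional_threshold: int = 0               # how many optional DVNs must also agree

def audit_dvn_config(cfg: DvnConfig) -> list:
    """Return human-readable findings; an empty list means no red flags."""
    findings = []
    # Independent attestations needed to pass a message across the bridge.
    quorum = len(cfg.required_dvns) + cfg.optional_threshold
    if quorum <= 1:
        findings.append(
            "CRITICAL: 1-of-1 verification - compromising a single "
            "verifier key allows forged cross-chain messages"
        )
    if cfg.optional_threshold > len(cfg.optional_dvns):
        findings.append("ERROR: optional threshold exceeds available optional DVNs")
    return findings

# A 1-of-1 setup like the one the report warned about:
risky = DvnConfig(required_dvns=["dvn-a"])
print(audit_dvn_config(risky))   # one CRITICAL finding

# Two required verifiers plus a 1-of-3 optional set passes this check:
safer = DvnConfig(required_dvns=["dvn-a", "dvn-b"],
                  optional_dvns=["x", "y", "z"], optional_threshold=1)
print(audit_dvn_config(safer))   # []
```

The check is trivial on purpose: the warning the report raised did not require deep exploit analysis, only noticing that the bridge’s security collapsed to a single point of failure.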

A third, most chilling possibility: someone saw it, had the power to push for action, but chose not to speak up.

Whatever the case, the conclusion is the same: We have tools, data, early warnings, but $292 million still disappeared.


  4. This is not an isolated case

On the same day, Vercel CEO Guillermo Rauch tweeted:

Vercel employees were compromised through cookies leaked by an AI platform, giving the attackers internal permissions. The hacker group is highly professional. “I strongly suspect this attack was greatly accelerated by AI.”

This is not an AI security incident. It’s a traditional cookie intrusion. But AI made the intrusion faster, cheaper, and harder to trace.

Box CEO Aaron Levie’s perspective might be a more direct answer:

“Engineers using AI are far more productive than those who don’t use it.”

When attackers accelerate with AI, and defenders still rely on manual early warning responses — this battle has never been equal from the start.


  5. When the early warning system fails

We live in an era of explosive early warnings.

Countless security researchers on GitHub. Countless intelligence analysts on Twitter. Countless on-chain monitoring tools. Every day, vulnerabilities are discovered early, disclosed publicly, and widely discussed.

But there is a huge gap between the number of early warnings and the number actually addressed.

This gap is not a technical problem. It’s an incentive problem.

Security researchers find a vulnerability → report it to the project → the project ignores it → the researchers disclose publicly → the project finally responds, but it’s too late.

This is the standard script of Web3 security. It plays out every year.

Kelp is just the latest name.


  6. The words of OpenAI’s Chief Scientist

On the same day, MAD Podcast released an interview with OpenAI’s Chief Scientist.

He said something that made me think over and over:

“Many mental tasks will be automated. This will bring huge governance issues: will AI organizations controlled by a few still be called ‘companies’?”

He was talking about AI companies. But the observation applies equally to DeFi.

When protocols are controlled by a few, when security warnings are ignored by a few, when $292 million can vanish in 12 days — this is not decentralized finance. It’s just shifting the risks of centralization onto anonymous hackers.


  7. What we have learned

First, tools are not enough. The warning tool with 47 stars on GitHub is worlds apart from an effective early warning system.

Second, incentives are key. If security researchers are not rewarded for disclosing vulnerabilities, and if project teams ignore warnings without consequences, then warnings will always exist but never be addressed.

Third, AI is changing the balance of attack and defense. When attackers accelerate with AI, defenders cannot keep relying on manual responses. The $292 million lost in Kelp’s case may be just the beginning of an AI-driven attack era.

Today is April 20, 2026.

The next $292 million may already be lying in some corner of GitHub, waiting for the 48th star.


Sources: PANews · Guillermo Rauch’s tweet · Aaron Levie’s tweet · MAD Podcast Ep84 · Interview with OpenAI’s Chief Scientist
