here we are! the first known zero-day developed with ai just got caught by google.


someone used an llm to spot a flaw in an open-source admin tool. then had the model write a python script that bypasses its 2fa. then pointed it at real targets.
how do we know it was ai? clean docstrings everywhere. a cvss score the model hallucinated outright. nobody writes python this tidy at 3am hunting bugs.
the bug itself was a logic flaw, a trust assumption some dev hard-coded years ago. the kind of mistake llms are good at catching.
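to make the shape concrete, a minimal sketch in python. everything here is invented (the tool, the function names, the magic client id). this is not the actual bug, just the pattern: a 2fa gate with one hard-coded exception.

```python
# a made-up sketch of the *class* of bug, not the real one.
# every name here is invented. the pattern: 2fa is mandatory,
# except for one client id a dev hard-coded "temporarily" years ago.

def check_otp(username: str, otp: str) -> bool:
    """Stand-in OTP verifier; always fails in this sketch."""
    return False

def verify_login(username: str, password_ok: bool,
                 otp: str | None, client_id: str) -> bool:
    """Authenticate a user. 2fa is supposed to be mandatory."""
    if not password_ok:
        return False
    # the trust assumption: an internal monitor was once allowed
    # to skip otp. but client_id is attacker-controlled input.
    if client_id == "internal-monitoring":  # hard-coded years ago
        return True  # 2fa silently skipped
    return otp is not None and check_otp(username, otp)

# the whole "exploit" is one line: send the magic client id.
assert verify_login("admin", password_ok=True, otp=None,
                    client_id="internal-monitoring")
```

nothing in that snippet is syntactically wrong. no fuzzer crashes it. you find it by reading the code and asking "who controls client_id?", which is exactly the kind of question an llm can ask across a whole codebase for pennies.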
for years the question was "could ai help attackers?" lots of hedging. "in theory." "eventually." "with enough scaffolding."
that debate is done.
the new question is how fast the window closes between "vuln exists somewhere in your stack" and "vuln gets exploited at scale." it used to be months. it's days now.