Anthropic announced a model this month called Claude Mythos. They said it was too dangerous to release.
They locked it behind a vetted coalition of AWS, Apple, Microsoft, Google and JPMorgan at 5 times the price of their next best model.
The pitch was that it finds software vulnerabilities no human or AI has found before.
2 weeks later a security researcher took the exact bugs Anthropic paraded as proof and fed them into small, cheap, open models anyone can run.
Every single one found the headline bug. An open-weight model from a Chinese lab also noticed something Anthropic’s announcement didn’t: the bug is wormable.
It spreads from machine to machine on its own.
That’s the difference between a serious flaw and the kind of thing that takes down hospitals across 150 countries in a weekend.