Anthropic has released its new model, Claude Mythos Preview, along with an official benchmark. We have grown used to seeing AI's coding capabilities, but Mythos takes them to a level that is both impressive and genuinely unsettling.




The model reportedly discovered a 27-year-old zero-day vulnerability in OpenBSD, an open-source operating system renowned for its security, a flaw that had gone unnoticed by the world's top security researchers for nearly three decades. Even in giant ecosystems like the Linux kernel and Windows, the model quickly identified thousands of security flaws.

Interestingly, however, Anthropic has decided not to release this model to the general public at this time, because its offensive cybersecurity capabilities are so strong that, if misused, it could cause a major digital catastrophe. The company admits that the model's autonomous reasoning is far sharper than that of any previous model, including Opus 4.6. It not only finds vulnerabilities but can also write effective exploit code overnight to leverage them and take over entire systems.

So, will the rest of us get to use this powerful AI? For now, no. Anthropic has launched a special defensive initiative called Project Glasswing, granting access to only 40 trusted organizations, including Google, Microsoft, Amazon, and NVIDIA. The goal is to use AI to secure critical internet infrastructure, banking, and healthcare codebases before this technology falls into the hands of dangerous hackers.