GPT-5.5-Cyber Limited Release: Penetration-Testing and Red-Team Requests No Longer Rejected, Permissions Managed in Tiers


According to Beating Monitoring, following GPT-5.4-Cyber, OpenAI has launched GPT-5.5-Cyber, available in limited preview to key-infrastructure defense personnel. As with the previous generation, the core change is not increased capability but greater flexibility: verified users can ask the model to generate proof-of-concept (PoC) exploits, perform penetration testing, and conduct red-team exercises, all requests that the safety guards in the standard version of GPT-5.5 block.

Access permissions continue to follow a three-tier system. The default GPT-5.5 uses standard safety guards and may reject security-related requests. GPT-5.5 with TAC (Trusted Access for Cyber, OpenAI's authentication framework launched in February) reduces false positives and covers most defensive workflows, such as code review, vulnerability triage, malware analysis, and writing detection rules. GPT-5.5-Cyber is the most permissive tier, allowing authorized red-team and penetration-testing work, but it still prohibits credential theft, malware deployment, and other real-world attack behaviors.

The TAC program itself is expanding and currently covers thousands of individual defenders and hundreds of security teams. Users of the more permissive models may face additional restrictions in low-visibility scenarios such as zero data retention (ZDR). OpenAI provided an example of how the three tiers respond differently to the same request, generating a vulnerability PoC for a publicly disclosed CVE: the default version either refuses or offers only scanning suggestions; the TAC version generates a complete server-side exploit with scripts and documentation; the Cyber version can even perform actual exploitation against a user-owned target domain and report back system information. Starting June 1, individual users of the highest-permission model must enable advanced anti-phishing account security. Partners include Cisco, Intel, SentinelOne, and Snyk, among others.
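The tier behavior described above can be summarized as a simple access-policy table. The following is a hypothetical sketch, not OpenAI's actual API or policy engine; the tier names, request categories, and `allowed` helper are all illustrative assumptions based solely on the article's description:

```python
# Hypothetical sketch of the three-tier gating policy described in the
# article. This is NOT OpenAI's real implementation; all names here
# (Tier, POLICY, allowed) are invented for illustration.
from enum import Enum

class Tier(Enum):
    STANDARD = "gpt-5.5"         # default safety guards
    TAC = "gpt-5.5-tac"          # Trusted Access for Cyber
    CYBER = "gpt-5.5-cyber"      # limited release, most permissive

# Real attack behaviors stay blocked at every tier, per the article.
ALWAYS_BLOCKED = {"credential_theft", "malware_deployment"}

# Request categories each tier permits, per the article's examples.
POLICY = {
    Tier.STANDARD: {"scanning_advice"},
    Tier.TAC: {"scanning_advice", "code_review", "vuln_triage",
               "malware_analysis", "detection_rules", "poc_exploit"},
    Tier.CYBER: {"scanning_advice", "code_review", "vuln_triage",
                 "malware_analysis", "detection_rules", "poc_exploit",
                 "authorized_exploitation"},
}

def allowed(tier: Tier, request: str) -> bool:
    """Return True if this request category is permitted at this tier."""
    if request in ALWAYS_BLOCKED:
        return False
    return request in POLICY[tier]

# The article's CVE PoC comparison, expressed against this sketch:
print(allowed(Tier.STANDARD, "poc_exploit"))           # False
print(allowed(Tier.TAC, "poc_exploit"))                # True
print(allowed(Tier.CYBER, "authorized_exploitation"))  # True
print(allowed(Tier.CYBER, "malware_deployment"))       # False
```

The key design point the article emphasizes is the last check: greater permissiveness widens the set of allowed defensive and authorized-offensive requests, but the always-blocked set does not shrink at any tier.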
OpenAI also released the Codex Security plugin, which integrates threat modeling, vulnerability discovery, and remediation verification into Codex, and it is providing Codex and API quotas to maintainers of key open-source projects. OpenAI says this layered strategy will guide the deployment of more powerful future models: standard models will be released widely with general safety measures, while security-specific permissive models will always be deployed in a restricted manner. The GPT-5.5 security assessment report rates its cybersecurity capability as High, below Critical (Critical would require the model to autonomously develop zero-day exploits against hardened real-world systems).
