OpenAI launches GPT-5.4-Cyber for defensive security, opens access to thousands
OpenAI's new cybersecurity-tuned model can reverse-engineer binaries and analyze malware. It's restricted to verified defenders through the Trusted Access program.
OpenAI released GPT-5.4-Cyber on April 14, a fine-tuned variant of GPT-5.4 built specifically for defensive cybersecurity work. It is the company’s first model designed with a deliberately lowered safety refusal boundary for legitimate security research.
The timing isn’t subtle. Anthropic announced Claude Mythos Preview and its $100 million Project Glasswing commitment just one week earlier. OpenAI’s response: a less powerful but far more accessible alternative, opening access to thousands of verified security professionals instead of Anthropic’s roughly 40 invitation-only organizations.
What we know
- GPT-5.4-Cyber is a fine-tuned adaptation of the existing GPT-5.4 flagship, not a new model from scratch. It retains the 1 million token context window, enabling reasoning across entire codebases.
- The headline capability is binary reverse engineering. Security analysts can feed compiled executables directly into the model to hunt for malware, vulnerabilities, and persistence techniques without needing source code.
- The model is described as “cyber-permissive,” meaning it handles dual-use queries about attack techniques, exploit chains, and vulnerability classes that standard GPT-5.4 would refuse.
- It identifies memory corruption vulnerabilities, detects complex logic flaws and race conditions, and can analyze malware persistence techniques.
- Access runs through a tiered verification system called Trusted Access for Cyber (TAC), first launched in February 2026 alongside a $10 million cybersecurity grant program. Only the highest verification tier grants access to GPT-5.4-Cyber.
- Top-tier users may need to waive Zero-Data Retention (ZDR), meaning OpenAI retains visibility into how the model is used. Individuals verify identity at chatgpt.com/cyber; enterprises request access through an OpenAI representative.
- OpenAI is scaling the TAC program to thousands of individual defenders and hundreds of security teams.
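To make the binary-analysis workflow concrete: OpenAI has not published GPT-5.4-Cyber’s API details, so the sketch below is a hypothetical triage pipeline, not a documented integration. The model identifier `gpt-5.4-cyber` and the chat-message submission format are assumptions; the string-extraction step itself is a standard first pass analysts run on any untrusted executable before asking a model to reason about it.

```python
# Hypothetical sketch of model-assisted malware triage.
# ASSUMPTIONS: the model name "gpt-5.4-cyber" and the chat-completion
# payload shape are illustrative; OpenAI has not published API details.
import string

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Pull printable ASCII runs from a compiled binary, a common
    first triage step before deeper reverse engineering."""
    printable = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")
    out, run = [], bytearray()
    for b in data:
        if b in printable:
            run.append(b)
        else:
            if len(run) >= min_len:
                out.append(run.decode())
            run = bytearray()
    if len(run) >= min_len:
        out.append(run.decode())
    return out

def build_triage_prompt(binary: bytes) -> dict:
    """Assemble a request payload asking the model to flag signs of
    malware persistence in the extracted strings."""
    strings = extract_strings(binary)
    return {
        "model": "gpt-5.4-cyber",  # assumed identifier, not confirmed
        "messages": [
            {"role": "system",
             "content": "You are assisting a verified defender."},
            {"role": "user",
             "content": "Analyze these strings from an untrusted "
                        "executable for signs of malware persistence:\n"
                        + "\n".join(strings)},
        ],
    }
```

In a real deployment the payload would go to whatever endpoint the TAC tier exposes; the local extraction step matters either way, since sending raw compiled bytes as chat text is rarely useful on its own.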
What we don’t know
- OpenAI hasn’t published GPT-5.4-Cyber-specific benchmark scores. The CTF numbers it has cited (27% with GPT-5, improving to 76% with GPT-5.1-Codex-Max) are for the broader model family.
- No pricing details for GPT-5.4-Cyber or the TAC tier structure.
- Whether binary reverse engineering is available through the ChatGPT UI or API-only.
- How the ZDR waiver affects enterprise adoption, particularly at organizations with strict data handling requirements.
How it stacks up against Mythos
The competitive framing is impossible to ignore, but these are fundamentally different bets.
Claude Mythos is a wholly new frontier model that discovered thousands of zero-days across every major OS and browser during testing, including a 27-year-old OpenBSD flaw and a 16-year-old FFmpeg RCE. The UK’s AI Safety Institute evaluated it at 73% success on expert-level CTF tasks that “no model could complete before April 2025.” Mythos was also the first model to complete a 32-step corporate network attack simulation end-to-end.
GPT-5.4-Cyber doesn’t claim that level of raw vulnerability discovery. TNW noted it’s “less capable than Mythos in raw vulnerability discovery.” But OpenAI is making a different argument: that broad access to good-enough tools matters more than gated access to the best one. “We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” OpenAI told CyberScoop.
Anthropic’s Glasswing founding partners read like a Fortune 500 security committee: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, NVIDIA, Palo Alto Networks. OpenAI is going wider, betting KYC-gated access to thousands is the better path to improving baseline defense.
What this means for you
If you’re on a security team, the practical takeaway is that both OpenAI and Anthropic now offer specialized cybersecurity models, but with very different access models. Check chatgpt.com/cyber to see if your organization qualifies for the TAC program. The binary reverse engineering capability alone could be worth the verification process if you’re doing malware analysis or vulnerability research. And given that Codex Security has already contributed to fixes for 3,000+ critical vulnerabilities across 1,000+ open-source projects, the broader OpenAI security toolchain is maturing fast.