devtake.dev

OpenAI shipped GPT-5.5 seven weeks after 5.4. API tokens now cost twice as much.

OpenAI released GPT-5.5 (codename Spud) on April 23. The API runs at $5/$30 per million tokens, double GPT-5.4, with Pro at $30/$180.

Dieter Morelli · 6 min read · 5 sources
OpenAI's GPT-5.5 model launch with ChatGPT and Codex interfaces
Image: TechCrunch

OpenAI shipped GPT-5.5 on April 23, seven weeks after GPT-5.4 hit the same surface on March 5. API tokens doubled in price. The company called it “a new class of intelligence” and rolled it out to ChatGPT Plus, Pro, Business, and Enterprise, plus Codex, on the same day. The cadence is what matters here, not the name.

What GPT-5.5 actually does

The headline claim in the official launch post is that GPT-5.5 handles five jobs more autonomously than the prior model: data analysis, code writing and debugging, direct software operation, online research, and long-form document and spreadsheet drafting. Less user steering, more follow-through.

OpenAI framed the shift in efficiency terms. 9to5Mac notes the model is marketed as “more token efficient,” delivering better answers with fewer tokens than GPT-5.4 for most use cases. Greg Brockman, OpenAI’s president, told reporters the release is “one step” toward a future class of agentic computing, not the destination. Jakub Pachocki, chief scientist, said the lab sees “pretty significant improvements in the short term, extremely significant improvements in the medium term.” That’s hedged-forward language from a chief scientist, which is its own signal.

Three variants ship in parallel:

  • GPT-5.5 is the default. Plus, Pro, Business, and Enterprise tiers get it in ChatGPT.
  • GPT-5.5 Thinking spends more test-time compute on hard problems. Use it when the task rewards slow, deliberate reasoning.
  • GPT-5.5 Pro is the accuracy-maxed variant for demanding workloads. Pro, Business, and Enterprise only.

On benchmarks, TechCrunch summarizes OpenAI’s framing as “consistently outperforms GPT-5.4, Google’s Gemini 3.1 Pro, and Anthropic’s Claude Opus 4.5.” OpenAI didn’t publish a detailed table of the suite it ran, which is a pattern now: headline claims, limited head-to-head data. Third-party evaluations are the ones that matter, and they won’t land for a week or two.

The pricing is the story

GPT-5.5 costs 2x what GPT-5.4 did on the API: $5 per million input tokens, $30 per million output tokens. Cached input runs at $0.50 per million. The 1M-token context window stays the same.

GPT-5.5 Pro prices at $30 input / $180 output per million. That’s the tier you’ll reach for on complex agentic runs and deep research. The base model is where the math gets interesting: it matches Opus 4.7 on input ($5 vs Anthropic’s $5 per million) and runs slightly above it on output ($30 vs Anthropic’s $25). We covered Opus 4.7’s benchmark regressions on long-context retrieval last week. If GPT-5.5 holds up on 256k-token workloads where Opus 4.7 fell from 91.9% to 59.2% on MRCR v2, the value comparison tips OpenAI’s way for that specific workload despite the output premium.

Batch and Flex tiers run at half the standard rate. Priority processing is 2.5x. Codex Fast mode ships at 1.5x output speed for 2.5x the cost per token.

One caveat for teams planning budgets: 2x input cost against GPT-5.4 is a real increase. If your current ChatGPT Business bill is driven by API usage, your April invoice will show a step-function jump the moment you flip the model dropdown. Enterprise contracts negotiated at GPT-5.4 rates will need a renegotiation window, and smaller shops on usage-based billing should expect to eat the delta until their contract renewal cycle catches up. OpenAI’s pricing page lists cached input at $0.50 per million, which is the one lever you can pull immediately: restructure prompts to maximize cache hits, and the fully cached input rate sits 90% below the standard $5.
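The cache arithmetic is worth running before you panic about the sticker price. A back-of-envelope sketch using the rates quoted above; the traffic volumes are made-up placeholders you'd swap for your own usage numbers:

```python
# Effective GPT-5.5 API cost at a given prompt-cache hit rate.
# Rates are the ones quoted above; traffic figures are hypothetical.

INPUT_RATE = 5.00    # USD per 1M input tokens
CACHED_RATE = 0.50   # USD per 1M cached input tokens
OUTPUT_RATE = 30.00  # USD per 1M output tokens

def monthly_cost(input_mtok: float, output_mtok: float,
                 cache_hit_rate: float = 0.0) -> float:
    """Monthly spend in USD for traffic given in millions of tokens."""
    fresh = input_mtok * (1 - cache_hit_rate)   # uncached input tokens
    cached = input_mtok * cache_hit_rate        # cache-hit input tokens
    return fresh * INPUT_RATE + cached * CACHED_RATE + output_mtok * OUTPUT_RATE

# 100M input / 20M output tokens: no caching vs. an 80% cache hit rate.
print(round(monthly_cost(100, 20), 2))       # 1100.0
print(round(monthly_cost(100, 20, 0.8), 2))  # 740.0
```

At an 80% hit rate the input line drops from $500 to $140, which is why prompt structure is the first thing to touch after the upgrade.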

Codex got most of the headline features

The most concrete new behavior landed in Codex, not in plain ChatGPT. Codex now runs web app workflows end-to-end: it clicks through pages, tests user flows, captures screenshots, and iterates until a task is done. That’s the kind of browser use Anthropic has been rolling out piece-by-piece in Claude Code, and it lands in Codex the same week OpenAI shipped Workspace Agents to ChatGPT Business and Enterprise tenants.

Put the two releases together and the shape is clearer. Workspace Agents give Codex a team-level runtime and Slack integration. GPT-5.5 upgrades the underlying model and Codex’s browser tool. Together they let an ops lead build a shared agent that tests a signup flow on staging every morning at 7 a.m. and reports regressions in Slack. That’s the “superapp” frame Brockman keeps using, not a single product but a coordinated rollout.

Mark Chen, OpenAI’s chief research officer, pointed at scientific and technical research as the area where GPT-5.5 shows “meaningful gains.” That’s worth flagging because OpenAI has been edging into biotech and drug-discovery partnerships for months. Fortune reported those capabilities as a specific emphasis in the briefing. If the next months bring an OpenAI-branded lab-assistant product, GPT-5.5 is the substrate it’s built on.

What this means for you

If you run an API integration on GPT-5.4 today, do the math before flipping the default. 2x input and 2x output is not a rounding error on a production workload. Benchmark GPT-5.5 against GPT-5.4 on your actual traffic: if the token efficiency claim holds and GPT-5.5 solves the same task in 70% of the tokens, the net cost lands closer to 1.4x and becomes defensible. If it doesn’t, stay on 5.4 for cheap paths and route only the hard problems to 5.5.
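That break-even check is simple enough to script. A sketch, assuming GPT-5.4 ran at half the new rates as the article states; the per-task token counts are hypothetical stand-ins for your own traffic logs:

```python
# Break-even check: does GPT-5.5's token efficiency offset its 2x price?
# Rates per 1M tokens (input, output); GPT-5.4 at half the GPT-5.5 rate
# as described in the article. Token counts below are hypothetical.

RATES = {"gpt-5.4": (2.50, 15.00), "gpt-5.5": (5.00, 30.00)}

def task_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one task on the given model."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Same task: GPT-5.4 uses 10k in / 2k out; GPT-5.5 solves it in 70% of the tokens.
old = task_cost("gpt-5.4", 10_000, 2_000)
new = task_cost("gpt-5.5", 7_000, 1_400)
print(f"{new / old:.2f}x")  # 1.40x -- below the sticker 2x
```

If your measured token ratio comes out above ~1.0, the efficiency claim isn't holding for your workload and the routing split described above is the right call.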

If you’re paying for ChatGPT Plus or Business, the upgrade is free and automatic, and the immediate win is Codex browser use. Run a Codex task through a signup funnel or a dashboard workflow you normally hand-test. That’s the fastest way to see whether the new tool actually does what OpenAI says. If it flakes on basic click paths, the benchmarks don’t matter yet.

If you’re deciding between Claude Opus 4.7, Gemini 3.1 Pro, and GPT-5.5 for agentic work, stop waiting for a marketing chart and run three head-to-heads this week. Each lab is on a 6-8 week release cadence now, which means whatever you benchmark today gets re-benched in July anyway. Pick the two tasks that are load-bearing for your team, run them across all three, and rank by cost-per-successful-completion, not by raw latency or token price. That’s the only number that correlates with your actual bill.
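Ranking by cost-per-successful-completion takes a few lines once you have run logs. A minimal sketch; every figure in the `runs` table is invented for illustration and should be replaced with your own benchmark results:

```python
# Rank models by cost per successful completion, per the advice above.
# All run data is made up -- substitute your own benchmark logs.

runs = {
    # model: (total_cost_usd, attempts, successes)
    "gpt-5.5":         (18.40, 50, 44),
    "claude-opus-4.7": (21.10, 50, 46),
    "gemini-3.1-pro":  (12.70, 50, 35),
}

def cost_per_success(cost: float, attempts: int, successes: int) -> float:
    # Failed attempts still burn tokens, so divide total spend by successes.
    return cost / successes if successes else float("inf")

ranked = sorted(runs, key=lambda m: cost_per_success(*runs[m]))
for model in ranked:
    print(model, round(cost_per_success(*runs[model]), 3))
```

Note how the metric can invert a raw-price ranking: a cheaper model with a lower success rate still wins here, but only because the gap in per-token price outweighs the gap in reliability for this particular (fabricated) data.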

Why you’re hearing about this now

The pace is the story. OpenAI shipped GPT-5 in August 2025, GPT-5.4 on March 5, 2026, and GPT-5.5 on April 23. That’s one major model upgrade every seven weeks for two quarters running. Fortune frames this as “how fiercely frontier AI labs are competing for enterprise customers, and how their models are increasingly evolving through continuous, incremental updates.” Anthropic shipped Opus 4.6 and 4.7 on a similar cadence. Google rolled Gemini 3.0 to 3.1. The yearly model-release cycle is gone.

That matters practically because the procurement cycle at most enterprises is longer than seven weeks. By the time your legal team finishes reviewing the enterprise terms for the model you benchmarked, the next one is out. The GitHub Copilot team just moved Claude Opus 4.7 behind a pricier tier and froze Pro signups because the old pricing math stopped working. GPT-5.5’s 2x token cost is the same signal from a different angle: frontier labs can charge more because the underlying capability curve is still steepening.

Watch the next two weeks for a Claude Opus 5 or Gemini 3.2 announcement. The cadence says one is close. And watch the API bill at the end of April: if GPT-5.5 is genuinely 30% more token-efficient on your workload, the sticker shock moderates. If it isn’t, you’ll be testing GPT-5.4 against 5.5 at every commit until the usage line on your dashboard flattens again.


Frequently Asked

Who gets GPT-5.5 today?
ChatGPT Plus, Pro, Business, and Enterprise tiers starting April 23, 2026. Codex adds it the same day. The API is rolling out with the rest of the release.
How much more does GPT-5.5 cost to run?
Twice GPT-5.4 on the API: $5 per million input tokens and $30 per million output. GPT-5.5 Pro is $30 / $180 per million. Cached input runs at $0.50 per million.
What's the context window?
1 million tokens, same as GPT-5.4. No change to the context ceiling in this release.
What are the three GPT-5.5 variants?
GPT-5.5 (default), GPT-5.5 Thinking (longer chain-of-thought reasoning), and GPT-5.5 Pro (higher accuracy on demanding tasks). Pro is available to Pro, Business, and Enterprise tiers only.
Why is it called 'Spud'?
'Spud' is the internal codename, first reported by Axios. OpenAI has cycled through food and vegetable codenames for months. GPT-5.4 was 'Celery.'
