devtake.dev

OpenAI's models are on AWS Bedrock the day after Microsoft lost exclusivity

Amazon shipped Bedrock Managed Agents powered by OpenAI on April 28, plus Codex on Bedrock. Altman tells Stratechery the runtime matters as much as the model.

Dieter Morelli · 4 min read · 3 sources
AWS marketing illustration of an interconnected machine-learning workflow.
Image: AWS

OpenAI’s models showed up on Amazon Bedrock on April 28. Bedrock Managed Agents, a new agent runtime that AWS is positioning as its answer to OpenAI’s Workspace Agents and Anthropic’s Claude products, ships with OpenAI’s frontier lineup behind it. Codex is on Bedrock too. The launch lands the morning after Microsoft and OpenAI rewrote their partnership to end Microsoft’s exclusive cloud access. The timing isn’t a coincidence.

That backdrop is the whole story. As long as Microsoft had exclusivity, Amazon’s frontier-model bench ran one name: Anthropic. Now it runs two.

What’s actually live

The new product is Bedrock Managed Agents, an AWS-hosted agent runtime that bundles OpenAI’s models with Bedrock’s tool-calling, orchestration, and observability layer. The pitch is that you can wire up a customer-service agent or a code-review agent without writing your own agent loop, your own retries, or your own permission boundary. Codex on Bedrock is the second piece: AWS customers who want OpenAI’s coding agent can run it inside their own VPC against their own data without round-tripping through ChatGPT.
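To make the pitch concrete, here is a sketch of the kind of loop Managed Agents would own for you: tool dispatch, a turn budget, and a permission boundary. The model and tool here are stubs, and none of the names are the actual Bedrock API, which isn't public yet.

```python
import json

# Illustrative agent loop: the plumbing Bedrock Managed Agents is pitched
# as replacing. Every name here is a stand-in, not a real AWS API.

ALLOWED_TOOLS = {"lookup_order"}  # permission boundary: tools the agent may call

def lookup_order(order_id: str) -> dict:
    """Stub tool: pretend to hit an order database."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"lookup_order": lookup_order}

def stub_model(messages):
    """Stand-in for a model call: request a tool once, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_order", "args": {"order_id": "A-17"}}
    return {"answer": "Order A-17 has shipped."}

def run_agent(user_msg: str, max_turns: int = 4) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):          # turn budget instead of hand-rolled retries
        reply = stub_model(messages)
        if "answer" in reply:
            return reply["answer"]
        name = reply["tool"]
        if name not in ALLOWED_TOOLS:   # enforce the permission boundary
            raise PermissionError(f"tool {name!r} not allowed")
        result = TOOLS[name](**reply["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent exceeded turn budget")

print(run_agent("Where is my order A-17?"))  # → Order A-17 has shipped.
```

This is the boilerplate that every team building on raw model APIs ends up writing; the value proposition of a managed runtime is that the loop, the budget, and the boundary come hosted and observable instead of hand-rolled.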

Amazon’s own framing in its blog post is restrained: “this is the beginning of a deeper collaboration between AWS and OpenAI.” Andy Jassy’s tweet calling the Microsoft amendment “very interesting” went out Monday; the product launch landed Tuesday. The product was clearly built before either announcement.

The launch post names no specific GPT version numbers. The Stratechery interview with Ben Thompson confirms “frontier models” without naming GPT-5.5 or anything below it.

Altman on why the runtime matters

The most useful thing in the Stratechery transcript is Sam Altman’s argument that the runtime around the model is where the value sits. “Hard to overstate how critical it is,” he says, that the runtime, tools, and model integration work together. That’s a tell. OpenAI is no longer pitching its model as a standalone product. It’s pitching its model plus an agent harness, which is the same place Anthropic’s Claude Code and Google’s Gemini are headed.

It’s also a defensible position when your competitor list is Anthropic. Claude on Bedrock has been the reference benchmark for “frontier model on AWS” since 2023. OpenAI is now showing up to that fight with a slightly different shape: not just the API, but the orchestration product as well.

Altman names the commercial piece directly: “we’re committing to buy a lot of compute, I believe there will be a lot of revenue there.” That’s the part that matters to AWS. OpenAI as a Bedrock customer is one revenue line. OpenAI as a customer of AWS-hosted compute for its own training is the much bigger one.

Where this leaves Anthropic on AWS

Amazon spent the last 18 months establishing Anthropic as its frontier-model anchor. The $5B-plus-$100B compute commitment inked on April 20 is barely a week old. Anthropic now has to share Bedrock’s marquee surface with OpenAI, which is a real strategic shift even if Anthropic still has the deeper integration story today.

The honest read on the near term: Bedrock customers who already standardized on Claude won’t switch overnight. The greenfield deals are where this matters. An enterprise that hasn’t yet picked a frontier model on AWS will, starting this week, evaluate both. Two weeks ago that wasn’t an option.

For Microsoft, the scoreboard is cleaner. Azure still gets first-party access to OpenAI products (the next-phase deal makes that explicit), but loses the “only place to run OpenAI in production” pitch. The day-after AWS launch demonstrates that the loss is real.

What this means for you

If you’re picking a frontier-model API on AWS for an agent product, the calculus shifted Tuesday. You still have to weigh latency, regional availability, and the actual benchmark scores on your task, but the political reason to default to Claude on Bedrock (until this week it was the only frontier option there) is gone. Read the Bedrock Managed Agents docs once they’re public, and look for the data residency and model-version commitments before you pin a contract length.
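“Benchmark scores on your task” is a harness, not a leaderboard. A minimal sketch of that evaluation, with stubbed provider calls standing in for real Bedrock clients (the task set and both stubs are invented for illustration):

```python
import time
from statistics import mean

# Sketch: score each candidate model on your own task set and record
# latency. The "providers" are stubs, not real Bedrock or Azure clients.

TASKS = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("uppercase 'ok'", "OK"),
]

def stub_claude(prompt):  # stand-in for a Claude-on-Bedrock call
    return {"2+2": "4", "capital of France": "Paris", "uppercase 'ok'": "OK"}[prompt]

def stub_openai(prompt):  # stand-in for an OpenAI-on-Bedrock call
    return {"2+2": "4", "capital of France": "paris", "uppercase 'ok'": "OK"}[prompt]

def evaluate(call, tasks):
    latencies, correct = [], 0
    for prompt, expected in tasks:
        t0 = time.perf_counter()
        answer = call(prompt)
        latencies.append(time.perf_counter() - t0)
        correct += answer == expected
    return {"accuracy": correct / len(tasks), "mean_latency_s": mean(latencies)}

for name, call in [("claude", stub_claude), ("openai", stub_openai)]:
    print(name, evaluate(call, TASKS))
```

Swap the stubs for real client calls and the task list for a sample of your production traffic; the point is that the comparison runs against your workload, not a public benchmark.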

If you’re an Azure customer running OpenAI, nothing breaks today, but the cost of a second-cloud strategy just dropped. The next AWS re:Invent in December will tell us how aggressive Amazon plans to get on price. Watch the per-token rates on Bedrock-hosted OpenAI models as they get published. If they undercut Azure by enough to matter, the multi-cloud math gets interesting.
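The multi-cloud math itself is simple enough to sketch now. Every rate below is a made-up placeholder, since Bedrock hasn’t published per-token pricing for these models; substitute real numbers when they exist.

```python
# Back-of-envelope multi-cloud cost comparison. All rates are placeholders,
# not published Azure or Bedrock prices.

def monthly_cost(in_tokens, out_tokens, in_rate, out_rate):
    """Rates are USD per 1M tokens."""
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# Hypothetical workload: 4B input tokens, 1B output tokens per month.
workload = dict(in_tokens=4_000_000_000, out_tokens=1_000_000_000)

azure   = monthly_cost(**workload, in_rate=2.50, out_rate=10.00)
bedrock = monthly_cost(**workload, in_rate=2.25, out_rate=9.00)

savings = azure - bedrock
print(f"azure=${azure:,.0f}  bedrock=${bedrock:,.0f}  savings=${savings:,.0f}/mo")
```

At these invented rates a 10% price cut is worth $2,000 a month on this workload; whether that beats the engineering cost of a second cloud is the actual decision.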

