devtake.dev

Samsung Q1 profit hit 57.2 trillion won. Memory chips for Nvidia drove 93% of it.

Samsung Electronics posted record Q1 2026 results on April 30: 133.9 trillion won revenue and 57.2 trillion won operating profit. Semiconductors did 93% of the work.

Hiro Tanaka · 4 min read · 3 sources
Close-up of integrated-circuit packages on a circuit board, illustrating the memory and SoC products driving Samsung's chip results.
Mister rf / CC BY-SA 4.0 via Wikimedia Commons

Samsung Electronics posted record Q1 numbers on April 30. Operating profit hit 57.2 trillion won (roughly $41 billion), revenue cleared 133.9 trillion won, and the chip division did 93% of the work. HBM4 memory shipping into Nvidia’s Vera Rubin platform turned a one-trillion-won quarter a year ago into a 53.7-trillion-won record.

That’s an eightfold jump in total operating profit year-over-year, beating analyst consensus that already priced in an AI-memory super-cycle. The market read it as confirmation that the HBM bottleneck is real, that pricing power has shifted to memory makers, and that Samsung has caught SK Hynix in the part of the stack Nvidia actually buys.

What we know

Samsung’s official release on April 30 puts the consolidated Q1 numbers at all-time highs across both revenue and operating profit. The company called out memory pricing and AI server demand as the proximate driver, and it’s the first quarter where mobile and consumer hardware look like rounding errors against the chip line.

  • Top line. 133.9 trillion won in revenue, up 43% quarter-on-quarter.
  • Operating profit. 57.2 trillion won, up roughly eightfold year-over-year, per CNBC.
  • Semiconductor (DS) division. 81.7 trillion won revenue and 53.7 trillion won operating profit. Samsung says the memory unit “set an all-time high for quarterly revenue and operating profit” on the back of higher average selling prices and a technology lead in HBM.
  • Mobile (MX + Networks). 38.1 trillion won revenue, 2.8 trillion won operating profit, via SamMobile. Healthy, but barely a twentieth of what chips did.
  • TVs and home appliances. 14.3 trillion won revenue, 0.2 trillion won profit. Effectively break-even.
  • The Nvidia line. Samsung confirmed it began shipping HBM4 and SOCAMM2 to Nvidia for the Vera Rubin AI accelerator platform during the quarter. The company named Amazon, Google, Meta, Microsoft, and OpenAI as the underlying buyers of that capacity.
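
As a sanity check, the segment mix can be recomputed from the release figures above. A minimal sketch (the roughly 1.1-trillion-won Q1 2025 base for the chip division is an assumption implied by the reported multiple, not a figure Samsung restated):

```python
# Q1 2026 segment figures from Samsung's April 30 release,
# in trillions of won: (revenue, operating profit).
segments = {
    "Semiconductor (DS)": (81.7, 53.7),
    "Mobile (MX + Networks)": (38.1, 2.8),
    "TVs and appliances": (14.3, 0.2),
}
total_op_profit = 57.2

# Share of company-wide operating profit contributed by each segment.
for name, (_, op) in segments.items():
    print(f"{name}: {op / total_op_profit:.1%} of operating profit")

# DS year-over-year multiple, against an assumed Q1 2025 base of
# roughly 1.1 trillion won (implied by the widely reported ~49x).
ds_yoy = segments["Semiconductor (DS)"][1] / 1.1
print(f"DS operating profit YoY: {ds_yoy:.0f}x")
```

From these rounded figures the DS share computes to roughly 94 percent of operating profit, consistent with the 93 percent headline, and mobile's 2.8 trillion won works out to about a twentieth of the chip division's haul.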


The 49x jump in DS operating profit (from about 1.1 trillion won in Q1 2025) is the headline that anchored every analyst note within hours of the release. It’s also the number that flips the company’s identity, at least for now: Samsung is a memory company that also sells phones and TVs, not the other way around.

What we don’t know

The release is light on forward guidance. Several open questions matter for whether the 57.2-trillion-won run rate holds through the rest of 2026.

  • HBM4 pricing curve. Samsung didn’t break out HBM average selling prices. SK Hynix’s earlier filings suggest Q1 HBM contract prices rose double-digits sequentially; Samsung’s silence leaves open whether its pricing tracked that curve or lagged it.
  • Yield gap with SK Hynix. SK Hynix locked in Nvidia HBM4 qualification ahead of Samsung. The Q1 Vera Rubin ramp shows Samsung has closed the qualification gap, but the per-die yield gap, which will decide who wins HBM5 in 2027, is still a black box.
  • Tariff and export-control exposure. Samsung’s memory fabs sit in Korea, with its largest NAND site outside the country in Xi’an, China; the Texas operations are foundry. Washington’s export-control posture toward China is material for both memory lines, and the release doesn’t quantify it.
  • Conventional DRAM versus HBM. Wccftech reports that on a per-wafer basis, conventional DRAM was more profitable than HBM in the quarter, because HBM allocation pulled wafer supply out of the broader market and tightened conventional pricing. That’s a flag, not a problem yet.

What this means for you

If you’re shipping anything that needs server-class GPUs in 2026, the Samsung result is the single best data point that the supply curve isn’t softening. HBM is rationed, AI accelerators are HBM-bound, and the names buying capacity (Nvidia, then through them Amazon, Google, Meta, Microsoft, OpenAI) haven’t changed. Plan compute capacity assuming H2 lead times stay long.

If you build software that depends on hosted inference, you’re already paying part of this bill in the form of token-price floors that won’t compress while a single quarter of HBM contracts generates nearly all of Samsung’s operating profit. The OpenAI and Anthropic price moves over the past 60 days, and the GitHub Copilot move to token billing on June 1, are downstream of this.

If you’re a Samsung phone buyer, the news is more mixed. The 2.8-trillion-won mobile operating profit is healthy, but the company’s attention sits on a chip business that’s now a 50-times-larger profit pool. Don’t expect Galaxy roadmap pace to be the priority for Lee Jae-yong’s executive team while memory is printing this hard.


Quick reference

HBM4
Fourth-generation High Bandwidth Memory. Stacks DRAM dies vertically, connects them with through-silicon vias, and places the stack beside a GPU or accelerator on a silicon interposer, feeding it data far faster than a conventional memory bus can.
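
The "faster than a regular memory bus" claim is mostly a width story: the JEDEC HBM4 standard specifies a 2048-bit interface per stack, against 64 bits for a standard DDR5 DIMM, so even comparable per-pin speeds multiply into a huge gap. A rough sketch (the per-pin rates below are illustrative figures from the public specs, not Samsung numbers):

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: interface width x per-pin rate, over 8 bits/byte."""
    return bus_width_bits * pin_rate_gbps / 8

# HBM4 per JEDEC: 2048-bit interface per stack at 8 Gb/s per pin.
hbm4 = peak_bandwidth_gbs(2048, 8.0)

# A DDR5 DIMM for comparison: 64-bit bus at 6.4 Gb/s per pin.
ddr5 = peak_bandwidth_gbs(64, 6.4)

print(f"HBM4 stack: {hbm4:.0f} GB/s, DDR5 DIMM: {ddr5:.1f} GB/s")
```

That works out to roughly 2 TB/s per stack, some 40 times a single DIMM, which is why accelerator throughput is bound by how many HBM stacks fit around the die.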
