Memory Sector Primer

A first-principles deep dive into DRAM, NAND, SRAM, NOR, HDD, SSD, and HBM, the AI memory hierarchy, and the listed players who profit from it. Written 2026-04-26.


TL;DR for the Investor

The 2025 to 2026 memory cycle is unlike anything in the industry’s 35-year history. Three forces collide simultaneously: (1) AI inference demands HBM bandwidth at a magnitude no roadmap anticipated, (2) the NAND industry exits sub-32GB MLC just as KV-cache offloading creates a new structural demand category, and (3) China’s domestic memory build-out (CXMT, YMTC) accelerates while it remains 2 to 3 nodes behind. The result: SK Hynix posts 49% operating margin in FY2025, Micron triples revenue inside 18 months, Macronix re-rates 350% in twelve months on an eMMC monopoly thesis, and Kioxia compounds 24x off its IPO price.

Where to look:

- Tier 1 conviction — SK Hynix (HBM near-monopoly, 62% share), Micron (HBM share gainer, 21 to 24%), Macronix (sub-32GB eMMC monopoly post-2028)
- Tier 2 tactical — Kioxia (NAND pure-play but Bain overhang), Sandisk (post-spin operating leverage), Samsung (HBM3E qualification recovery story)
- Watchlist — Winbond (specialty DRAM scarcity), Montage (DDR5 RCD), Rambus (memory IP)
- Avoid — Western Digital legacy HDD without HAMR catalyst, commodity DDR4 specialists with China substitution risk


PART I: FOUNDATIONAL TECHNOLOGY EDUCATION


1. Why This Matters Now

For thirty years, memory was a commodity cycle stock: build fab, oversupply, lose money, capacity rationalizes, supercycle, repeat. The 2024 to 2026 cycle is breaking that pattern in two ways.

First, HBM (High Bandwidth Memory) is structurally different from commodity DRAM. It is sold under multi-year contracts to NVIDIA, AMD, and the hyperscalers. Capacity is sold out 18 months forward. Pricing is sticky. Margins are 60%+. SK Hynix’s FY2025 operating margin of 49%, an all-time record for any memory maker, was not driven by a commodity DRAM up-cycle — it was driven by HBM3E shipments to NVIDIA Blackwell. This is closer to the TSMC business model than to the historical Samsung Memory model.

Second, NAND is undergoing a consolidation wave that traditional cycle frameworks miss. Samsung, SK Hynix, Micron, and Kioxia are all exiting sub-32GB MLC eMMC by 2028. This leaves Macronix as the only credible global supplier for low-capacity automotive, drone, and IPC eMMC. KGI projects Macronix eMMC revenue scaling from NT$0.86bn (2025) to NT$230bn (2027) — a 265x expansion. Simultaneously, hyperscalers are buying 122TB QLC SSDs to offload LLM KV caches, creating a new structural NAND demand tier that did not exist in 2023.

Add geopolitics: YMTC is on the US Entity List, CXMT is the next sanctions target candidate, and Korea’s Samsung and SK Hynix sit between US export controls and Chinese end-demand. Memory is no longer a commodity; it is a strategic resource governed by industrial policy.

Finally, FundaAI, SemiAnalysis, and Irrational Analysis all converge on a 2027 to 2028 supply tightness thesis: the capex commitments needed to relieve current shortages do not deliver wafer output until late 2027 at the earliest. This is a multi-year up-cycle, not a quarter-to-quarter print.

2. The Problem Memory Solves

Computers compute by moving bits between transistors. Storage is the problem of holding bits between operations: across nanoseconds (registers), milliseconds (cache misses), seconds (disk reads), and decades (archival).

The fundamental tension is that no single physical mechanism can be simultaneously fast, dense, and cheap. A flip-flop made of six transistors holds a bit reliably and reads in 1 nanosecond, but consumes huge silicon area. A capacitor holding charge is dense but leaks, requiring constant refresh. A floating gate holding electrons behind an oxide barrier is non-volatile but degrades with each write. A magnetic disk is cheap but mechanical and slow.

Modern computing solves this by tiering. Each tier exists because removing it would create an unacceptable gap in the speed-cost-capacity space. The result is a hierarchy spanning more than eleven orders of magnitude in latency (0.3 nanoseconds for registers to 100 seconds for tape) and more than six orders of magnitude in cost (on the order of $12,500 per gigabyte for cache SRAM down to fractions of a penny for tape).

The memory industry exists to manufacture each tier of this hierarchy at scale. The investment thesis turns on which tier captures the most value as AI workloads shift the ratio of compute to memory access.

3. The Science Foundation

SRAM — six transistors per bit, sub-nanosecond, expensive

The 6T SRAM cell stores a bit using two cross-coupled inverters that lock the bit in a stable state, plus two access transistors gating reads and writes. No refresh, no destructive read, no wear. Read latency is roughly 1 nanosecond because the signal is strong, the wires are millimeters long on a die, and reading does not disturb the stored value.

The catch is six transistors per bit. At 3nm, a single SRAM cell occupies ~0.04 μm². Multi-gigabyte SRAM arrays would consume prohibitive silicon area and standby leakage power. SRAM is therefore reserved for CPU L1, L2, L3 caches and on-die GPU memory. The NVIDIA H100 has 50 MB of L2 cache plus up to 228 KB of shared memory (SRAM) per streaming multiprocessor across 132 SMs. AMD’s 3D V-Cache stacks additional SRAM via through-silicon vias, reaching 1.15 GB of L3 on Genoa-X variants.

DRAM — one transistor, one capacitor, 60-100 nanoseconds

In 1966, IBM’s Robert Dennard invented the modern DRAM cell: one transistor + one capacitor. The capacitor holds charge to represent a 1 (or no charge for a 0). Cell area dropped to ~6F², roughly 20x denser than SRAM.

Three properties define DRAM:

  1. Refresh requirement: Capacitors leak. Every ~64ms, all rows must be read and rewritten. This consumes ~5-7% of memory bandwidth.
  2. Destructive read: Reading a row depletes the capacitor. The sense amplifier must immediately rewrite the data back.
  3. Off-die latency: DRAM chips sit on DIMMs centimeters from the CPU. At 5 GHz, light travels only 6cm per clock cycle in vacuum. Round-trip latency is ~60 to 100ns, equivalent to 300 to 500 wasted CPU cycles per access.
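The latency arithmetic above can be checked in a few lines (a back-of-envelope sketch; the 5 GHz clock and 60-100 ns figures are the ones quoted in the text):

```python
# Back-of-envelope: how far a signal can travel per clock period,
# and how many CPU cycles one DRAM access wastes.

C_VACUUM_M_PER_S = 3.0e8  # speed of light in vacuum

def cm_per_cycle(clock_hz: float) -> float:
    """Distance light travels in one clock period, in centimeters."""
    return C_VACUUM_M_PER_S / clock_hz * 100

def wasted_cycles(latency_ns: float, clock_hz: float) -> int:
    """CPU cycles spent waiting on one memory access."""
    return round(latency_ns * 1e-9 * clock_hz)

print(cm_per_cycle(5e9))       # ≈ 6 cm per cycle at 5 GHz
print(wasted_cycles(60, 5e9))  # 300 cycles for a 60 ns access
print(wasted_cycles(100, 5e9)) # 500 cycles for a 100 ns access
```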

DRAM variants:

- DDR (server, desktop): DDR5 at 6400 MT/s, 100 GB/s per DIMM, optimized for latency
- LPDDR (mobile): LPDDR5X at 8533 MT/s, <2W idle, optimized for power
- GDDR (graphics): GDDR7 at 32 to 48 Gbps per pin, 192+ GB/s per device, optimized for throughput
- HBM (AI accelerators): see the HBM subsection below — HBM is DRAM stacked vertically with TSVs

DDR5 vs DDR4 step-up: 2x bandwidth (50 to 100 GB/s per DIMM), 8% lower voltage (1.2V to 1.1V), dual 40-bit channels per DIMM (vs single 72-bit), burst length doubled (BL8 to BL16), on-die ECC standard.

NAND flash — floating gate, 1,000x slower than DRAM, non-volatile

A NAND cell is a single transistor with a “floating gate” of polysilicon trapped between two oxide layers. Programming forces electrons through the tunnel oxide via Fowler-Nordheim tunneling, where they remain trapped (no power needed). Erasing reverses the field and pulls them back out the same way. Reading senses whether the gate is charged.

The act of programming and erasing degrades the tunnel oxide. After thousands of cycles, the cell fails. This is why NAND is rated by program-erase cycle endurance:

Cell type  Bits/cell  Voltage states  P/E cycles       Retention     Use case
SLC        1          2               90,000-100,000   10+ years     Enterprise cache
MLC        2          4               3,000-10,000     5-10 years    Industrial, automotive
TLC        3          8               1,000-3,000      1-3 years     Consumer SSDs
QLC        4          16              100-1,000        Weeks-months  Data center cold storage
PLC        5          32              <100 (research)  Days          Future archival tier

Each additional bit per cell halves the voltage margin between states. QLC has 16 voltage levels with ~50mV margin between them, making it sensitive to noise, leakage, and wear-induced drift. Retention drops dramatically with temperature: every 10°C increase halves data retention time.
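The margin-halving claim can be made concrete (a sketch; the ~750 mV usable threshold window is my assumption, chosen so QLC lands at the ~50 mV margin cited above — real windows vary by process and vendor):

```python
# Bits-per-cell tradeoff: each extra bit doubles the number of
# voltage states the cell must distinguish, shrinking the margin.

WINDOW_MV = 750  # assumed usable threshold-voltage window

def voltage_states(bits_per_cell: int) -> int:
    """Distinct charge levels the cell must resolve."""
    return 2 ** bits_per_cell

def margin_mv(bits_per_cell: int) -> float:
    """Approximate voltage margin between adjacent states."""
    return WINDOW_MV / (voltage_states(bits_per_cell) - 1)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(f"{name}: {voltage_states(bits)} states, ~{margin_mv(bits):.0f} mV margin")
```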

3D NAND stacks cells vertically using charge-trap flash (CTF), which traps electrons in discrete sites within a silicon nitride layer rather than a single floating gate. This dramatically improves reliability at scale. Current production layer counts (2026):

Vendor    Generation             Layers                Tech
Samsung   V-NAND Gen 9           286 (2x143 stacked)   TLC/QLC, 3.2 Gbps I/O
SK Hynix  4D NAND                321                   QLC, world’s highest in production
Micron    G9 NAND                232                   TLC and QLC
Kioxia    BiCS 8                 218                   CBA wafer-bonding
Kioxia    BiCS 10 (development)  332                   +9% bit density
YMTC      X4-9070                294 (150+144 bonded)  Xtacking

NOR flash — random access, byte-addressable, dying except where it is not

NAND cells are wired in series (like AND gates), so reads must pull entire pages of 4KB at a time. NOR cells are wired in parallel (like OR gates), giving every byte its own access path. This makes NOR slow to write but fast to read at random addresses.

Critically, NOR supports execute-in-place (XIP): the CPU fetches and runs instructions directly from flash without copying to RAM first. Microcontrollers in cars, IoT devices, BMC controllers in servers, and industrial systems rely on XIP. NAND cannot do this because page reads are too slow and require ECC handling.

Why most makers exited:

- Micron exited NOR ~2017 (low margins, scaling uneconomic below 65nm)
- Spansion merged into Cypress in 2015, then Cypress into Infineon in 2020
- Samsung exited ~2010-2013

Survivors: Winbond (#1 by serial NOR units), Infineon, Macronix, GigaDevice, ISSI (private). Total NOR market is ~$4-5B (Mordor Intelligence, 2024), growing to ~$10.7B by 2034 at ~8.2% CAGR. Automotive ASIL-D qualification creates multi-year switching costs.

HDD — magnetic platters, 8-12ms, $0.02/GB

Hard disks store bits as magnetic domains on spinning aluminum or glass platters. A read/write head floats nanometers above the surface on an air bearing. Random access requires:

  1. Seek time (move head): 4-8ms
  2. Rotational latency (wait for sector): 4.2ms at 7200 RPM
  3. Controller overhead: <0.5ms

Total: 8-12ms typical, roughly a million times slower than DRAM.
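The access-time budget can be sketched directly from those three components (seek and controller figures are the text’s ranges; rotational latency is half a revolution at the given RPM):

```python
# HDD random-access time = seek + rotational latency + controller overhead.

def rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in ms."""
    return 60_000 / rpm / 2

def access_time_ms(seek_ms: float, rpm: int, controller_ms: float = 0.5) -> float:
    return seek_ms + rotational_latency_ms(rpm) + controller_ms

print(rotational_latency_ms(7200))  # ≈ 4.17 ms at 7200 RPM
print(access_time_ms(4, 7200))      # best case  ≈ 8.7 ms
print(access_time_ms(8, 7200))      # worst case ≈ 12.7 ms
```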

Recording technology evolution:

- PMR (perpendicular): standard for ~15 years, plateau at ~1 Tb/in² areal density
- SMR (shingled): overlaps tracks for ~25% density gain, but requires read-modify-write
- HAMR (heat-assisted): laser heats media to 450°C during write, enables smaller stable grains, target 2 Tb/in² near-term and 10 Tb/in² long-term. Seagate began HAMR shipments 2024-2025.
- MAMR (microwave-assisted): WD’s alternative, ~1.3 Tb/in², easier to implement

The crossover that did not happen: through 2025, the SSD/HDD price ratio widened from 6.2x to 16.4x because of the NAND shortage (Tom’s Hardware, Q1 2026). Hybrid SSD+HDD architectures got cheaper relative to all-flash. This is bullish for HDD makers and supports continued HDD spending in cold-tier hyperscale storage.

SSD interfaces — SATA capped, NVMe winning

SATA III is hard-capped at ~560 MB/s by the 6 Gb/s PHY layer. After 8b/10b encoding (10 bits per 8 data bits) and protocol overhead, no SSD can exceed this.
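The ~560 MB/s ceiling falls out of the line-rate arithmetic (a sketch; the ~7% protocol-overhead figure is an illustrative assumption):

```python
# SATA III ceiling: 6 Gb/s line rate, 8b/10b encoding (80% efficient),
# then protocol overhead on top.

def sata3_ceiling_mb_s(line_rate_gbps: float = 6.0,
                       encoding_efficiency: float = 8 / 10,
                       protocol_overhead: float = 0.07) -> float:
    payload_gbps = line_rate_gbps * encoding_efficiency * (1 - protocol_overhead)
    return payload_gbps * 1000 / 8  # Gb/s -> MB/s

print(sata3_ceiling_mb_s())  # ≈ 558 MB/s, matching the observed ~560 cap
```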

NVMe over PCIe runs the storage protocol directly on PCIe lanes, bypassing legacy AHCI. Specifications by generation:

Generation      Per-lane  x4 sequential  Latency (4KB random)
PCIe Gen3 NVMe  8 GT/s    ~3.5 GB/s      ~50 μs
PCIe Gen4 NVMe  16 GT/s   ~7 GB/s        ~20-40 μs
PCIe Gen5 NVMe  32 GT/s   ~14 GB/s       ~15-30 μs

NVMe supports 64K queues with 64K commands each, vs SATA’s single queue. This is what enables modern enterprise SSD throughput of 3M+ IOPS at sub-100 μs latency. Micron 9550 PCIe Gen5 holds the 14.0 GB/s read crown (launched July 2024).

HBM — DRAM stacked vertically with TSVs

Yes, HBM is DRAM. The cells are 1T1C just like DDR5. What is different is the packaging:

  1. Multiple DRAM dies (8, 12, or 16) are stacked vertically
  2. Through-Silicon Vias (TSVs) etched through each die provide vertical signal paths
  3. Microbumps connect adjacent dies
  4. The entire stack sits on a silicon interposer alongside the GPU/ASIC die, packaged via CoWoS (Chip-on-Wafer-on-Substrate, TSMC’s process)

The result is a wide parallel bus (1024 bits per stack) at modest clock speeds, delivering enormous bandwidth at low power.
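The bandwidth-per-stack numbers follow directly from bus width times pin rate (a sketch; the 2048-bit HBM4 bus reflects the JEDEC direction but treat it as an assumption here):

```python
# HBM bandwidth per stack: a very wide bus at modest per-pin rates.

def stack_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """GB/s per stack = pin rate (Gb/s) x bus width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

print(stack_bandwidth_gb_s(6.4))        # HBM3:           ≈ 819 GB/s
print(stack_bandwidth_gb_s(9.6))        # HBM3e top bin:  ≈ 1229 GB/s
print(stack_bandwidth_gb_s(8.0, 2048))  # HBM4 (2048-bit): ≈ 2048 GB/s
```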

Generation  Speed        BW per stack  Capacity  First production  Reference GPU
HBM         1 Gbps       128 GB/s      4-8 GB    2015              AMD Fiji
HBM2        2.0 Gbps     256 GB/s      8-16 GB   2016              NVIDIA P100
HBM2e       3.6 Gbps     460 GB/s      16-32 GB  2020              NVIDIA A100
HBM3        6.4 Gbps     819 GB/s      16-24 GB  2022              NVIDIA H100
HBM3e       8-9.6 Gbps   1.0-1.2 TB/s  24-36 GB  2023+             H200, B200
HBM4        8-11.7 Gbps  ~2.0 TB/s     36-64 GB  2025-26           Vera Rubin

NVIDIA H100: 80 GB HBM3 across five active stacks (six placed on package), 3.35 TB/s aggregate. NVIDIA B200: 192 GB HBM3e across 8 stacks, 8.0 TB/s. Cost: roughly $25-50 per GB for HBM3e vs $3-8 per GB for DDR5.

DDR5 vs NVMe — apples and oranges, both essential

These solve different problems:

Property     DDR5                     NVMe SSD
Volatility   Volatile                 Non-volatile
Latency      60-80 ns                 10-100 μs (1,000x slower)
Granularity  64-bit byte-addressable  4 KB block
Bandwidth    100 GB/s per DIMM        7-15 GB/s per drive
Wear         Effectively unlimited    1,000-3,000 P/E cycles
Cost/GB      $3-8                     $0.07-0.30

NVMe cannot replace DDR5: a GPU op takes ~10ns, NVMe access takes ~10μs (1,000x slower), so the GPU would sit idle waiting. DDR5 cannot replace NVMe: no model weights or training checkpoints would survive a power cycle, and DDR5 capacity is too limited and too expensive.

The interesting tier between them is CXL memory: cache-coherent DRAM expanders attached via PCIe with ~150-500ns latency. Microsoft Azure’s M-series VMs deployed the first commercial CXL in 2025 (Astera Labs controllers), increasing memory capacity by 50%. CXL market exceeded $1B in 2025.

4. The Memory Hierarchy (NVIDIA’s View)

Modern AI training and inference run across this stack:

Tier              Latency      Bandwidth         Capacity         Cost/GB
─────────────────────────────────────────────────────────────────────────
CPU registers     ~0.3 ns      200+ GB/s         128 B/core       embedded
L1 cache (SRAM)   ~1 ns        50+ GB/s/core     32-128 KB/core   $12,500
L2 cache (SRAM)   ~4 ns        10-20 GB/s/core   1-2 MB/core      $12,500
L3 cache (SRAM)   ~10-20 ns    400-900 GB/s      256 MB-1.15 GB   embedded
GPU shared SRAM   <5 ns        5+ TB/s/SM        228 KB/SM (H100) embedded
HBM3e (on GPU)    ~10-30 ns    8 TB/s (B200)     192 GB           $25-50
DDR5 DRAM         ~80 ns       100 GB/s/DIMM     16-256 GB        $3-8
CXL memory        ~150-500 ns  20-64 GB/s        64 GB-2 TB       $1-5
NVMe SSD          ~10-20 μs    7-14 GB/s         960 GB-122 TB    $0.07-0.30
SATA SSD          ~100 μs      560 MB/s          256 GB-30 TB     $0.05-0.10
HDD               ~8-12 ms     200-270 MB/s      1-36 TB          $0.01-0.02
LTO tape          ~45-100 s    400 MB/s          18-40 TB         $0.004-0.007
─────────────────────────────────────────────────────────────────────────

Spread: more than eleven orders of magnitude in latency and more than six orders of magnitude in cost per gigabyte. Each tier exists because removing it would create an unacceptable gap.
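The spread can be recomputed from the table’s own endpoints (a sketch using the register, tape, and SRAM figures above):

```python
# Hierarchy spread from the table's endpoints: registers (~0.3 ns) vs
# tape (~100 s) for latency; cache SRAM (~$12,500/GB) vs tape (~$0.004/GB)
# for cost per gigabyte.

REGISTER_LATENCY_S = 0.3e-9
TAPE_LATENCY_S = 100.0
SRAM_COST_PER_GB = 12_500.0
TAPE_COST_PER_GB = 0.004

latency_spread = TAPE_LATENCY_S / REGISTER_LATENCY_S
cost_spread = SRAM_COST_PER_GB / TAPE_COST_PER_GB

print(f"latency spread: {latency_spread:.1e}")  # ~3.3e+11, eleven-plus orders
print(f"cost spread:    {cost_spread:.1e}")     # ~3.1e+06, six-plus orders
```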

5. KV Cache — The Hidden Profit Pool

In transformer attention, each token must attend to all previous tokens. Without caching, every new token requires recomputing keys (K) and values (V) for the entire context. KV caching stores K and V from the prefill phase, making decode roughly linear instead of quadratic.

The cost is memory. KV cache size:

KV = 2 × layers × kv_heads × head_dim × seq_len × bytes_per_value

For Llama 3 70B at 128K context in float16:

KV = 2 × 80 × 8 × 128 × 131,072 × 2 ≈ 43 GB per user session (Llama 3 uses GQA with 8 KV heads)
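A minimal sizing helper, assuming the commonly published Llama 3 70B config (80 layers, 8 KV heads under GQA, head_dim 128):

```python
# KV-cache sizing per the formula above: keys and values (the leading 2x)
# for every layer, KV head, and token in the context window.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# Assumed Llama 3 70B config at full 128K context in float16.
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=131_072)
print(size / 1e9)  # ≈ 43 GB per user session
```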

A single H100 (80 GB HBM) can serve ~1.9 users at full context, or ~20 users at 8K context. With model weights (140 GB in FP16, 70 GB in INT8) plus activations plus KV cache, total memory exceeds H100 capacity. Engineering responses:

  1. Paged Attention (vLLM): Break KV into 16-token pages, evict cold pages to CPU DRAM or NVMe, reload on demand
  2. KV Offloading (NVIDIA Dynamo, FlexGen, InstInfer): Tier KV across HBM, DRAM, NVMe with intelligent prefetching
  3. KV Quantization: Store K, V in int8 or int4 instead of float16, 2-4x cache reduction
  4. GQA / MQA: Grouped or multi-query attention reduces K, V heads (Llama 3 uses GQA)
  5. High Bandwidth Flash (HBF, Sandisk): Future tier between HBM and NAND for KV-tier workloads

Samsung’s PM1753 SSD KV-offloading test (2025): 1.7x more concurrent users at equivalent latency, 1.5x more output tokens/second under high concurrency, 53% lower total power, ~4% additional system cost.

This is why Sandisk’s BiCS8 QLC “Stargate” drive is qualifying at hyperscalers specifically for KV cache use cases. It is also why Kioxia LC9 (245.76 TB QLC) won Best of Show at FMS 2025 — the use case is hyperscale KV offload.

The investment implication: KV cache is creating a new structural NAND demand tier that did not exist 18 months ago. SemiAnalysis (“Scaling the Memory Wall: HBM Roadmap”) frames HBM as the primary winner, but the second-order winner is enterprise QLC NAND, where Kioxia, Sandisk, Micron, and Samsung compete for hyperscaler qualification.

6. Where Does What Go — The AI Memory Map

Component                         Memory used             Why
GPU register file                 SRAM                    Sub-cycle access for ALU operands
GPU L1 / L2 cache                 SRAM                    Filter 95-99% of memory accesses
GPU shared memory                 SRAM                    Inter-thread communication within SMs
Model weights (during inference)  HBM                     70-700 GB needs 1+ TB/s bandwidth
Active KV cache                   HBM                     Read every decode step
Cold KV cache                     DDR5 → NVMe             Sessions paused, retrievable
Activation tensors                HBM                     Forward/backward pass
Optimizer state (training)        HBM + DDR5 (offloaded)  3-4x model weight size
Embedding tables (large)          DDR5 + NVMe             Cold rows offloaded
Model checkpoint                  NVMe → HDD              Persistent, infrequent access
Training dataset                  NVMe + HDD              Read sequentially during training
Cold archival data                HDD + tape              Cheapest per GB

NVIDIA’s published memory hierarchy maps cleanly onto this. The Rubin Ultra (2026-2027) is expected to feature 1024 GB HBM4e per package, easing KV cache pressure and enabling longer contexts in single-rack inference.


PART II: THE INDUSTRY LANDSCAPE


7. Value Chain Map

[Memory IP & EDA] ───→ [Wafer fab equipment] ───→ [Foundries / IDMs] ───→ [Packaging / Test] ───→ [SSD/Module assembly] ───→ [System integrators] ───→ [End customer]
   Rambus, Synopsys      ASML, Applied,            Samsung, SK Hynix,    TSMC CoWoS, Amkor,      Sandisk, KingPro,           Dell, HPE, Foxconn,         Hyperscalers,
   Cadence               Lam, KLA,                 Micron, Kioxia,       ASE, Nepes              ADATA, Crucial              NVIDIA, AMD, Intel          OEMs, enterprises
                         Tokyo Electron            Macronix, Winbond

Where the value pools sit:

Layer                                     Sample player                                                Gross margin  Concentration
Memory IP & DDR5 RCD chips                Montage (688008.SS), Rambus (RMBS)                           60-75%        3 players in DDR5 RCD: Montage 36.8%, Renesas, Rambus
Wafer fab equipment                       ASML, Applied, Lam, KLA, TEL                                 45-55%        5 players for advanced memory tools
Memory IDMs (DRAM)                        Samsung, SK Hynix, Micron                                    40-50% peak   3 players control >95% global DRAM
Memory IDMs (NAND)                        Samsung, SK Hynix (Solidigm), Kioxia, Sandisk, Micron, YMTC  30-50% peak   6 players, fragmenting
HBM packaging (CoWoS)                     TSMC dominant, Samsung distant 2nd                           55-65% TSMC   TSMC ~85% of HBM CoWoS
Specialty memory (NOR, low-density DRAM)  Winbond, Macronix, Infineon, GigaDevice                      30-45%        4 NOR players post-consolidation
Probe card (memory test)                  JEM (6855), MJC, FormFactor                                  40-50%        3 players, JEM #5 globally
Module assembly                           Kingston (private), Crucial (Micron), ADATA, Patriot         5-15%         Fragmented

The biggest value capture in this cycle has been at the memory IDM layer (HBM premium) and at the packaging layer (TSMC CoWoS scarcity). The IP and RCD layers (Montage, Rambus) earn high margins on small revenue, providing high-quality compounding exposure.

8. Total Addressable Market

Segment                       2025 TAM (USD)  2030 TAM (USD est.)  CAGR
Total memory market           $180B           $310-350B            11-14%
DRAM                          $108B           $190B                12%
  of which HBM                $30B            $130B+               34%
  of which DDR5 (server)      $50B            $90B                 12%
  of which LPDDR (mobile)     $25B            $40B                 10%
NAND                          $65B            $115B                12%
  of which enterprise SSD     $25B            $55B                 17%
  of which client SSD         $20B            $30B                 8%
  of which mobile (UFS/eMMC)  $15B            $20B                 6%
  of which industrial/auto    $5B             $10B                 14%
NOR                           $4-5B           $10-11B              8%
HDD (high-cap NL)             $20B            $25B                 5%

The takeaway: HBM is growing 3x the speed of the broader memory market. By 2030, HBM is projected to be ~$130B versus the entire DRAM market today at ~$108B.

9. Market Cycle Position

The memory cycle has historically been brutally regular: 18-24 month up-cycles followed by 12-18 month down-cycles. The 2024-2026 cycle is structurally different because HBM is decoupling from the broader cycle.

Position as of April 2026:

Metric                  Reading                                     Cycle implication
DRAM contract prices    +58-63% QoQ (Q1 2026)                       Mid up-cycle
NAND contract prices    +70-75% QoQ (Q1 2026)                       Mid up-cycle
HBM bookings            Sold out through CY2026, much of 2027       Sustained
Inventory at customers  Below normal                                Depleted
Capex announcements     +5% in 2026 (Micron $25B, SK Hynix raised)  Disciplined
Wafer capacity          Cut 10-15% from peak                        Tight
Time to new supply      Late 2027 earliest                          18+ months
Sell-side DRAM models   Shortage through 2027                       Multi-year
Sell-side NAND models   Shortage through 2028                       Multi-year

This is the most cycle-favorable setup for memory equities since 2017-2018. The differentiator is who has HBM exposure and who has structural NAND product wins (KV cache QLC, automotive eMMC, NOR specialty).

10. Regulatory & Geopolitical

The geopolitical edge of the trade is that China-domestic memory (CXMT, YMTC) substitutes for low-end DDR4/DDR3 and 2D NAND, but cannot yet substitute for HBM (3+ generations behind), advanced DDR5 (1-2 generations behind), or sub-32GB MLC eMMC (no credible substitute). This is the structural underpinning of the Macronix and SK Hynix theses.


PART III: THE PLAYERS


11. Industry Map

Listed players

Company                 Ticker     HQ      Primary segments                  Market cap ~Apr 2026
Samsung Electronics     005930.KS  Korea   DRAM, NAND, HBM, foundry, mobile  ~$450B
SK Hynix                000660.KS  Korea   DRAM, NAND (via Solidigm), HBM    ~$240B
Micron Technology       MU         US      DRAM, NAND, HBM                   ~$220B
Kioxia Holdings         285A.T     Japan   NAND only (BiCS)                  ~$126B
Sandisk                 SNDK       US      NAND only (post-spin Feb 2025)    ~$30B
Western Digital         WDC        US      HDD only post-spin                ~$25B
Macronix International  2337.TW    Taiwan  NOR, ROM, NAND, eMMC              ~$6.2B
Winbond Electronics     2344.TW    Taiwan  Specialty DRAM, NOR               ~$4B
Nanya Technology        2408.TW    Taiwan  Specialty DRAM                    ~$3B
Powerchip (PSMC)        6770.TW    Taiwan  Foundry + DRAM                    ~$2B
Montage Technology      688008.SS  China   DDR5 RCD, MXC                     ~$15B
Rambus                  RMBS       US      Memory IP, RCD chips              ~$8B

Non-listed / sanctioned

Company                 Country                        Segments                     Status
YMTC (Yangtze Memory)   China                          NAND (X4-9070, 294L)         Entity List, ~13% global NAND share Q3 2024
CXMT (ChangXin Memory)  China                          DRAM (DDR4, early DDR5)      Pre-sanctions, ~6% global DRAM share Q1 2025
ISSI                    US (Chinese-owned since 2015)  SRAM, DRAM, NOR              Tsinghua-adjacent, automotive 40-45% of revenue
Kingston Technology     US                             Module assembly (DRAM, SSD)  World’s largest DRAM module maker, fully private

12. Comparative Deep Dive — Samsung, Micron, Kioxia, Sandisk, Macronix

Revenue mix by memory type (last 3 fiscal years)

The five companies Pink asked to compare structure their businesses very differently. Samsung is integrated DRAM+NAND with mobile and foundry attached. Micron is pure-play DRAM+NAND. Kioxia and Sandisk are pure-play NAND. Macronix is pure-play specialty (NOR + eMMC + ROM).

Samsung Electronics — DS Division (KRW trillion)

FY      DS Revenue  Op Profit  DRAM % (est.)  NAND % (est.)  Foundry % (est.)  Op Margin
FY2023  ~66         -14.9      ~50%           ~20%           ~30%              -23% (loss)
FY2024  ~131        ~15+       ~52%           ~21%           ~27%              ~11%
FY2025  ~130        ~24.9      ~58%           ~19%           ~23%              ~19%

Notes: Samsung does not publicly disclose DRAM/NAND/foundry split. TrendForce estimates DRAM ~70-75% and NAND ~25-30% of memory revenue (excluding foundry). Q4 FY2025 alone delivered KRW 16.4tn DS operating profit, nearly matching the entire FY2024.

The Samsung HBM saga is the defining story of 2024-2025: HBM share collapsed from ~41% (Q2 2024) to ~17% (Q2 2025) after repeated NVIDIA HBM3E qualification failures. Samsung finally cleared NVIDIA’s 12-layer HBM3E qualification in September 2025. Q1 FY2026 preliminary: DS estimated to generate ~95% of Samsung’s KRW 57.2tn operating profit (+755% YoY). HBM revenue is expected to triple YoY in 2026.

Micron Technology (USD billion)

FY (ends Aug)       Revenue  DRAM         NAND        DRAM %  NAND %  GM%    Op Margin
FY2023              15.54    10.98        4.21        71%     27%     1%     -38%
FY2024              25.11    17.60 (est)  7.23 (est)  70%     29%     22.6%  7%
FY2025              37.08    28.58        8.50        77%     23%     40.3%  26%
FY2026 Q2 (actual)  23.86    17.6 (est)   6.0 (est)   74%     25%     ~50%   ~37%

By business unit (Q2 FY2026):

- Cloud Memory BU (HBM lives here): $7.7B (32%)
- Core Data Center BU: $5.7B (24%)
- Mobile & Client BU: $7.7B (32%)
- Auto & Embedded: $2.7B (11%)

Data center = 56% of revenue and accelerating. HBM revenue: >$1B/quarter (Q2 FY2025) → ~$2B/quarter (Q4 FY2025) → CY2026 sold out including HBM4. Micron’s HBM4 36GB 12-high is shipping in volume to NVIDIA Vera Rubin from Q1 CY2026.

Kioxia Holdings (JPY billion)

FY (ends Mar)    Revenue      Op Profit  Net Income  NAND only
FY2022           ~1,520       -171       -243        100% NAND
FY2023           1,076        -254       -244        100% NAND
FY2024           1,706        332        272         100% NAND
FY2025 (guided)  2,180-2,270  ~600       460-520     100% NAND

Kioxia is NAND-only. No DRAM, no HBM, no foundry. Revenue mix (Q3 FY2025):

- SSD & Storage: 55% (~50% data center, ~50% PC)
- Smart Devices (mobile): 34%
- Other (retail, JV): 11%

Q3 FY2025 (Oct-Dec 2025) was an all-time record: ¥543.6B (+21.3% QoQ), net income ¥89.5B (+115% QoQ). Q4 FY2025 guidance: ¥845-935B in a single quarter, against ¥648B consensus — a massive beat. Q3 CY2025 NAND market share: 15.3% (#3 globally, up 2 ppts QoQ).

Sandisk (USD billion, calendar quarters since spin Feb 2025)

Quarter               Revenue  DC SSD        Edge/Client SSD  Consumer Flash  GM%
Q1 FY26 (Jul-Oct 25)  2.31     0.27 (12%)    1.39 (61%)       0.65 (28%)      29.9%
Q2 FY26 (Oct-Jan 26)  3.03     0.44 (14.5%)  1.68 (55.4%)     0.91 (30%)      51.1%
Q3 FY26 (guidance)    4.4-4.8                                                 65-67%

The gross margin trajectory (29.9% → 51.1% → 65-67% guided) is the most extreme operating leverage move in memory history. Stock +295% YTD through April 2026.

50/50 JV with Kioxia for Yokkaichi and Kitakami fabs (extended to December 31, 2034). NAND-only post-spin. WDC retains 19.9% stake but no consolidation.

Macronix International (NT$ billion)

FY                   Revenue  NOR %  ROM %  SLC NAND %  eMMC %        FBG %  EPS (NT$)
FY2023               ~25.0    60%    16%    13%         4%            7%     (1.73)
FY2024               ~26.5    60%    17%    13%         4%            6%     (2.83) (KGI)
FY2025               28.9     61%    17%    13%         3%            7%     (1.79) (KGI)
FY2026 KGI forecast  88.4                               75% by 2027          30.04 (KGI)

Macronix is the cleanest pure-play on the sub-32GB MLC eMMC monopoly thesis. The KGI thesis (March 2026 initiation):

- Samsung, SK Hynix, Micron, Kioxia all exiting MLC by 2026-2028
- Switching cost to TLC is high (firmware, controller redesign), TLC supply also tight
- Macronix is the only credible global supplier post-2028
- eMMC revenue: NT$0.86B (2025) → NT$56.4B (2026) → NT$230B (2027) — 265x in 2 years
- Target NT$300 (10x 2026 EPS), implied upside 200%+ from NT$99 close (Mar 12, 2026)

What Macronix gains in NOR + eMMC as others exit:

Memory type        Exiters                                                                Macronix gain
Sub-32GB MLC eMMC  Samsung (Oct 2025), Micron (2024), Kioxia (2027 LTS), SK Hynix (2027)  Only credible supplier post-2028
Serial NOR         Micron (2017), Samsung (2010-13), Cypress→Infineon merger              ~16% global share, growing in auto ASIL-D
ROM                Samsung, Toshiba (legacy)                                              Stable supply for legacy industrial/auto

Note that the eMMC thesis is an option, not a fact, until Q1-Q2 2026 actuals confirm KGI’s price/volume assumptions. Watch the Q1 2026 results print closely.

Customer exposure summary

Company Hyperscaler exposure Mobile OEM exposure Auto / Industrial Named relationships
Samsung ~60% of memory shipments iPhone 17: 60-70% LPDDR5x Limited (eMMC exiting) NVIDIA (HBM3E post-Q3 25, HBM4 30%+ allocation), Apple, hyperscalers (multi-year contracts)
SK Hynix ~62% HBM share, NVIDIA primary Xiaomi, Samsung Solidigm enterprise SSD NVIDIA HBM3E and HBM4, Apple LPDDR, hyperscaler enterprise SSD
Micron 56% data center Apple LPDDR partial, Samsung ~11% auto/embedded NVIDIA (HBM3E for H200, HBM4 for Vera Rubin), AMD (HBM3E for MI355X), 3rd unnamed HBM customer
Kioxia ~50% data center within SSD/Storage 34% smartphones ~11% other Apple, Microsoft (Bloomberg sourcing)
Sandisk “Five major hyperscalers” Limited mobile Limited Hyperscalers unnamed; PCIe Gen5 TLC qualified at 2 hyperscalers, BiCS8 QLC qualifying for KV cache
Macronix TV, STB, BMC None (UFS exposure low) Auto ASIL-D primary STMicro (STM32N6), automotive Tier 1s, drone/IPC/networking long tail

Margin profile

Company     Trough quarter       Trough GM        Peak GM (latest)          Gross margin range
Samsung DS  Q3 2023 (loss)       -23% op          ~37% (Q4 2025)            60 ppt swing
SK Hynix    Q1 2023 (loss)       -24% op          49% op (FY25), 72% Q1 26  96 ppt swing
Micron      Q1 FY24              -1.5% GM         ~50% GM (Q2 FY26)         51 ppt swing
Kioxia      Q4 FY24              ~-25% op (loss)  ~30% op (Q3 FY25)         55 ppt swing
Sandisk     Q1 FY26 (post-spin)  29.9%            65-67% guided Q3 FY26     36 ppt swing in 6 months
Macronix    Q4 FY24 (loss)       -2% GM           ~55% GM 2027F (KGI)       57 ppt projected

Memory is the highest operating leverage business in semis. Fixed costs are 70-80% of total. Every dollar of incremental ASP drops to gross margin.
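A toy model of that leverage (illustrative numbers only: $8B fixed costs, a 20% variable-cost ratio, sized so $10B of revenue is break-even):

```python
# Operating leverage sketch: with costs mostly fixed, incremental revenue
# (i.e., incremental ASP on the same wafer output) falls straight through
# to operating profit.

def op_margin(revenue: float, fixed_costs: float, variable_ratio: float = 0.2) -> float:
    """Operating margin given fixed costs and a variable-cost ratio."""
    costs = fixed_costs + variable_ratio * revenue
    return (revenue - costs) / revenue

for revenue in (10.0, 15.0, 20.0):  # $B
    print(f"${revenue:.0f}B revenue -> {op_margin(revenue, 8.0):.0%} op margin")
```

Doubling revenue off break-even swings the operating margin from 0% to 40% in this sketch, the same shape as the trough-to-peak swings in the table above.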

13. Earnings Transcript Highlights (Last 8 Quarters)

Micron — the AI memory pivot

Samsung DS — the qualification recovery

SK Hynix — the HBM monopoly

Kioxia — the post-IPO NAND recovery

Sandisk — the post-spin operating leverage

14. Kioxia Deep Dive (Pink’s Specific Ask)

IPO and shareholder structure

Item                            Detail
Listing date                    December 18, 2024 (delayed from October 2024)
TSE ticker                      285A (not 6600.T as commonly mis-cited)
IPO price                       ¥1,455 (midpoint of ¥1,390-1,520)
First-day close                 ~¥1,645 (+13%)
Total raised (incl. greenshoe)  ¥120.4B (~$800M)
Company portion                 ~¥27-31B (~$190-210M)
IPO market cap                  ¥784B (~$5.2B)
Shares outstanding post-IPO     ~546M
Float at IPO                    ~28% (below TSE Prime 35% requirement)
Bookrunners                     Morgan Stanley, Nomura, BofA, Goldman Sachs
Use of proceeds                 Debt reduction (¥900B syndicated loan), Yokkaichi/Kitakami capex

Bain Capital entry — 2018 buyout

Item                              Detail
Total consideration (consortium)  ¥2 trillion (~$18B) — Asia’s largest LBO at the time
Stake purchased                   ~59.8% (Toshiba retained 40.2%)
Implied per-share price (rough)   ~¥3,800 (back-of-envelope; not officially disclosed)
Bain’s actual equity cheque       Not disclosed (LBO debt + co-investors meaningfully reduced Bain’s direct contribution)
Closing date                      June 1, 2018

Tranche 1 (¥960B, ordinary + convertible equity):

- Bain Capital (lead, control)
- SK Hynix (¥266B as LP in Bain fund + ¥129B direct convertible bonds = ¥395B total, ~$3B)
- Hoya, Development Bank of Japan, INCJ (government), Mitsubishi UFJ
- Toshiba retained 40.2% (no payment, kept its stake)

Tranche 2 (¥440B, preferred securities): - Apple, Dell, Kingston, Seagate (no voting control; supply-security motivation)

Pre-IPO shareholders entry price summary

Holder                               Entry vehicle                       Effective entry cost
Bain Capital (BCPE Pangea entities)  LBO 2018, ¥2T consortium            ~¥3,800/share rough proxy (actual lower with LBO leverage)
Toshiba / JIP                        Retained 40.2% from spin (no cash)  ¥0 cost basis (legacy carrying value)
SK Hynix                             LP + convertible bonds 2018         ~¥395B for ~14% post-conversion = ~¥4,500/share
Hoya                                 2018 consortium                     Same vintage as Bain
Apple, Dell, Kingston, Seagate       Preferred 2018                      Not common equity; supply-locked

Lock-up periods and selldown timeline

Date                     Event
Dec 18, 2024             IPO; 180-day lock-up begins for Bain, Toshiba, SK Hynix, Hoya
~Jun 15-16, 2025         180-day lock-up expires
Nov 25, 2025             Bain executes first post-lockup block sale: ~36M shares at ~¥9,000 = ~¥355B (~$2.3B)
Late Feb-early Mar 2026  Bain executes second block sale: ~35-40M shares (est.) at ~¥20,000-25,000 = ~¥700-1,000B (~$3.5B)
2028                     SK Hynix’s 15% voting cap on convertible bonds expires
2030 (target)            Kioxia management goal: hit 35% TSE Prime float requirement

Current shareholder structure (April 2026, estimated)

| Holder | Pre-IPO % | At IPO (Dec 2024) | Post-Sept 2025 IR | April 2026 (est.) |
| --- | --- | --- | --- | --- |
| Bain Capital (combined BCPE) | ~60% | ~52% | 51.06% | ~28-30% (after 2 block sales) |
| Toshiba / JIP | ~40% | ~32% | 27.25% | ~27-30% (no sales) |
| SK Hynix | ~14% | ~14% | ~14% | ~14% (capped voting until 2028) |
| Hoya | ~3% | ~3% | ~3% | ~3% |
| Free float | 0% | ~28% | ~34% | ~28-35% |

Kioxia BCPE breakdown as of September 30, 2025 IR disclosure:
- BCPE Pangea Cayman, L.P.: 22.00%
- BCPE Pangea Cayman2, Ltd.: 14.34%
- BCPE Pangea Cayman 1A, L.P.: 8.98%
- BCPE Pangea Cayman 1B, L.P.: 5.74%
- Combined: 51.06% pre-November sale

Bain Capital realized economics

| Transaction | Shares | Price (¥) | Gross proceeds |
| --- | --- | --- | --- |
| IPO secondary (Dec 2024) | ~12.65M | 1,455 | ~¥18B (~$123M) |
| Block sale 1 (Nov 2025) | ~36M | ~9,000 | ~¥355B (~$2.3B) |
| Block sale 2 (Feb-Mar 2026) | ~35-40M (est.) | ~20,000-25,000 | ~¥700-1,000B (~$4.7-6.7B) |
| Total realized to Apr 2026 | ~84-89M | mixed | ~¥1,073-1,373B (~$7-9B) |

Remaining stake at the current ¥34,580: ~163M shares × ¥34,580 = ~¥5.6T (~$37B).

Total position (realized + unrealized): ~$44-46B against a ¥2T (~$18B) consortium deal. Bain's actual cheque was a fraction of that $18B (LBO debt plus co-investors), so the IRR is materially better than the face-value math suggests.
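As a sanity check, the stake arithmetic fits in a few lines. The ¥150/$ rate is an assumption implied by this note's own market-cap conversion (~¥18.9T ≈ $126B); share counts are the estimates above:

```python
JPY_PER_USD = 150  # assumed rate, consistent with ~¥18.9T ≈ $126B market cap
price_jpy = 34_580  # Apr 24, 2026 close

# Bain's remaining stake at the current price
shares_remaining = 163e6
unrealized = shares_remaining * price_jpy
print(f"Unrealized: ~¥{unrealized / 1e12:.1f}T (~${unrealized / JPY_PER_USD / 1e9:.1f}B)")

# Supply hitting the market if the next block matches the last two (35-40M shares)
for block_shares in (35e6, 40e6):
    value = block_shares * price_jpy
    print(f"{block_shares / 1e6:.0f}M-share block: ~¥{value / 1e12:.2f}T "
          f"(~${value / JPY_PER_USD / 1e9:.1f}B)")
```

One follow-on block at current prices is therefore roughly 6-7% of the market cap, which is why the overhang matters for the multiple.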

Bain motivation to keep selling

  1. TSE float compliance: Kioxia management publicly requested Bain and Toshiba sell down to hit 35% by 2030.
  2. Carry crystallization: Bain Capital fund vintages have IRR clocks running; mark-to-market is meaningless without distributions.
  3. Concentration risk: Bain’s Kioxia position is one of the largest single equity holdings across all Bain funds; diversification pressure mounts.
  4. Hold period: already 8 years, against a typical PE hold of 5-7 years.

There is no price level likely to turn Bain into a long-term holder; the open questions are execution pace and market depth. Expect another block sale or marketed offering within 6-12 months, likely 35-40M shares. At the current price, each such block puts roughly ¥1.2-1.4T (~$8-9B) of stock into the market.

Financial snapshot (April 2026)

| Metric | Value |
| --- | --- |
| Share price (Apr 24, 2026) | ¥34,580 |
| All-time high | ¥36,870 (Apr 14, 2026) |
| Market cap | ~¥18.9T (~$126B) |
| Enterprise value | ~¥19.9T |
| Trailing P/E | ~113x (depressed FY2024 earnings base) |
| Forward P/E (FY2025 guidance) | ~7.8x |
| FY2024 revenue | ¥1,706B (+58.5% YoY) |
| FY2025 revenue guidance | ¥2,180-2,270B (+28-33% YoY) |
| FY2025 net income guidance | ¥460-520B |
| NAND market share (Q3 2025) | 15.3% (#3) |

The 113x trailing vs 7.8x forward gap reflects FY2024's depressed earnings base. Q4 FY2025 single-quarter revenue guidance of ¥845-935B is roughly half of full-year FY2024 revenue. If guidance is met, FY2025 net income lands roughly an order of magnitude above FY2024's.
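The same point in one line: with the price fixed, the earnings step-up the market is implying equals the trailing multiple divided by the forward multiple:

```python
# E_forward / E_trailing = PE_trailing / PE_forward (the price cancels out)
pe_trailing = 113   # on the depressed FY2024 base
pe_forward = 7.8    # on FY2025 guidance
print(f"Implied earnings step-up: ~{pe_trailing / pe_forward:.1f}x")
```

i.e. the market is pricing FY2025 earnings at roughly fourteen times the depressed FY2024 base.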

Risks specific to Kioxia

  1. Bain selldown overhang: Each 35-40M-share block is ~$8-9B of supply at current prices against a ~$126B market cap with only ~30% floating. Block sales have depressed the shares ~10% on average.
  2. NAND-only concentration: No DRAM, no HBM. If the NAND cycle turns first (it usually does; NAND is the more commodity-like, lower-margin end), Kioxia takes the full hit.
  3. Lack of HBM optionality: Unlike Samsung, Hynix, Micron, Kioxia has no HBM revenue stream. Pure-play NAND beta.
  4. WD merger blocked: SK Hynix vetoed the WD/Kioxia merger in October 2023. Future M&A unlikely.
  5. JV economics with Sandisk: 50/50 cost-sharing. Sandisk’s compensation structure ($1.165B 2026-2029) helps but ties Kioxia’s destiny to Sandisk’s pricing discipline.
  6. YMTC encroachment: 13% global NAND share Q3 2024; targeting 15% by late 2026. Direct overlap with Kioxia’s commodity TLC tier.

Kioxia bull case

  1. As the only listed NAND pure play, Kioxia carries the highest NAND beta in listed memory. If Sandisk's CEO is right that the data center becomes the #1 NAND consumer in 2026, Kioxia's revenue compounds at 30%+ for two years.
  2. BiCS 8 (218L) and BiCS 10 (332L) are competitive with SK Hynix and Samsung. CBA wafer-bonding tech has yield advantages.
  3. LC9 245.76TB QLC drive won FMS 2025 Best of Show, qualifying for hyperscale KV cache offload.
  4. Forward P/E of 7.8x is undemanding if guidance is met. Implied 2026 EPS run rate suggests 5-6x forward at current prices.
  5. SK Hynix as a 14% shareholder is a strategic alignment, not a competitive threat (and the convertible bond gives Hynix a structural interest in Kioxia's success).

15. FundaAI Bullish Forecast Assessment — Sandisk, Kioxia, Micron

Note: Pink’s FundaAI Chrome session was logged out when I attempted to pull live Estimate Analysis reports. The assessment below reconstructs the bull case from FundaAI’s published Substack (“DeepKioxia: Not a Flash in the Pan”, already in your inv-q queue) and the broader sell-side consensus + my own evaluation. To get FundaAI’s exact 2026 EPS targets, log into funda.ai in Chrome and re-run the Estimate Analysis tool on each ticker. I will refresh this section when the data is available.

The bull case framework (FundaAI + others)

The extreme bull case for memory in 2026 rests on five planks:

  1. HBM is permanent demand: NVIDIA’s Vera Rubin (2026) ships with HBM4 12-high 36GB stacks. AMD’s MI355X uses HBM3E 12-high. AI capex is locked in for 2026-2028 (Stargate, Microsoft, Meta, Google). HBM bit demand grows 80-100% YoY through 2027.

  2. NAND undersupply is structural: Industry capex is +5% in 2026, focused on node transitions not greenfield wafers. Bit supply growth is 15-17%. Bit demand growth is 20-22%, with KV cache offloading + 122TB QLC drives + nearline HDD substitution adding new demand tiers. Shortage extends through 2028 in sell-side consensus models.

  3. Pricing leverage is non-linear: Memory carries 70-80% fixed cost, so every incremental dollar of ASP falls almost entirely to gross margin. Sandisk's FY2026 GM trajectory (29.9% in Q1 → 51.1% in Q2 → 65-67% Q3 guide) is the leading proof point.

  4. Supply discipline holds: Unlike past cycles, where Samsung flooded the market to hurt rivals, this cycle finds Samsung capex-disciplined (CHIPS Act lock-up + government oversight), SK Hynix rational, and Micron under-utilizing fabs to extract a NAND price recovery.

  5. AI accelerator scarcity persists: NVIDIA, AMD, custom ASICs all need HBM. There is no HBM substitute. CoWoS packaging at TSMC is the binding constraint.
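Plank 3 can be made concrete with a toy model. With 70-80% of costs fixed, per-GB cost barely moves across the cycle, so gross margin is approximately 1 − cost/ASP. The $/GB figures below are hypothetical, chosen only so the output traces the same shape as Sandisk's reported GM path:

```python
# Toy fixed-cost leverage model: per-GB cost held flat while ASP moves.
COST_PER_GB = 0.21  # hypothetical $/GB, roughly constant (fixed-cost dominated)
for asp in (0.30, 0.43, 0.60):
    gm = 1 - COST_PER_GB / asp
    print(f"ASP ${asp:.2f}/GB -> gross margin {gm:.0%}")
```

This prints 30%, 51%, and 65%: a 2x move in ASP more than doubles gross margin, which is why memory P&Ls look binary across the cycle.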

What FundaAI likely projects for 2026 EPS

Without live access, here is the range I’d expect FundaAI to publish vs sell-side consensus:

| Ticker | Sell-side consensus 2026 EPS | FundaAI bull case (estimated) | Premium |
| --- | --- | --- | --- |
| MU | $14-16 (FY2026 ends Aug) | $20-25 | +50-60% |
| SNDK | $12-15 | $20-30 | +60-100% |
| 285A.T (Kioxia) | ¥1,800-2,200 (FY2026 ends Mar) | ¥3,000-4,500 | +60-100% |

The directional thesis (FundaAI bull > sell-side) is consistent with the supply-demand setup. The question is how much premium is justified.

My credibility assessment

Where the bull case is robust:

  1. HBM math holds: NVIDIA Vera Rubin and Blackwell Ultra ship volumes are public. HBM bit content per GPU is rising (96GB → 192GB → 288GB → 576GB+). Hyperscaler capex commitments are underwritten by accounts payable.
  2. Supply discipline data is verifiable: Wafer starts, capex announcements, equipment book-to-bill all support the under-supply thesis. ASML and Lam memory bookings are at multi-year highs but lead times are 18+ months.
  3. Sandisk margin trajectory is observed: Q1 (29.9%) → Q2 (51.1%) → Q3 guide (65-67%) is real, not modeled. The bull case on Sandisk is largely already happening.

Where the bull case is fragile:

  1. HBM4 multi-vendor risk: If Samsung and Micron both pass NVIDIA HBM4 qualification at scale (likely by H2 2026), SK Hynix’s monopoly margin compresses. Sell-side underweights this.
  2. NAND demand elasticity: At $0.30/GB enterprise SSD, hyperscaler buying is rational. At $0.60/GB, they delay and substitute back to HDD. The 16x SSD/HDD price ratio is an alarm bell, not a feature.
  3. CXMT DRAM ramp: Q1 2026 reports suggest CXMT is at ~7% global DRAM share, headed to ~10% by year-end. They serve Chinese mobile and DDR4 industrial — a structural offset to Samsung/Hynix/Micron volume.
  4. Macro recession risk: AI capex is hyperscaler discretionary spending. A 10% capex cut at any of MSFT/GOOGL/META/AMZN drops HBM demand 5%, which drops HBM price 15%+ (memory is non-linear).
  5. Kioxia Bain overhang: Mechanical supply pressure of multi-billion-dollar block sales over the next 12-24 months caps the multiple-expansion path.

My base case: FundaAI's bull case is more defensible for SNDK and Kioxia than for MU, because (a) NAND has the tighter near-term shortage signal, (b) MU's HBM share gain has more downside on Samsung qualification (Samsung is a bigger threat to MU than to Hynix), and (c) risk-reward is more asymmetric for the NAND-only names, where the spread between trough and peak margins is widest.

My deviation from FundaAI: I’d haircut the FY2026 EPS bull cases by 20-30% for HBM4 multi-vendor risk and macro tail risk. The structural thesis is right; the magnitude depends on HBM4 qualification timing.

To validate: re-run FundaAI Estimate Analysis on MU, SNDK, 285A.T after logging in. Compare their explicit revenue assumptions (HBM bit content, NAND ASP/GB, capex utilization) against the sell-side consensus and the management guidance ranges. The credibility test is whether FundaAI’s revenue and margin assumptions are observable in management guidance, not whether the EPS multiple is high.

16. Bottleneck Hunting — Where Is Pricing Power?

| Layer | Smallest pure-play | MC | Concentration | Bypassable? | Market priced-in? |
| --- | --- | --- | --- | --- | --- |
| HBM CoWoS packaging | TSMC ($800B+) | n/a (huge) | TSMC ~85% | No (Samsung distant 2nd, no third) | Priced |
| HBM DRAM stacking | SK Hynix ($240B) | n/a | Hynix ~62%, Samsung ~17%, Micron ~21% | Partial | Priced (HBM3E), underpriced (HBM4) |
| DDR5 RCD chips | Montage 688008.SS | $15B | 3 players, Montage 36.8% | No (qualification cycle 18+ months) | Underpriced |
| Memory probe cards | JEM 6855.T | $1.5B | 3 players, JEM #5 | Partial | Underpriced |
| Sub-32GB MLC eMMC | Macronix 2337.TW | $6B | Post-2028 monopoly | No | Partially priced (KGI initiated Mar 2026) |
| Specialty low-density DRAM | Winbond 2344.TW | $4B | 3 players (Winbond, Nanya, Powerchip) | Partial | Underpriced |
| HBM IP | Rambus RMBS | $8B | 2-3 players | No | Priced |
| KV cache QLC SSD | Sandisk SNDK | $30B | 4-5 players qualifying | Partial | Mostly priced |

Top three bottlenecks ranked by alpha potential:

  1. DDR5 RCD (Montage 688008.SS): 36.8% market share in DDR5 register clock drivers. Every DDR5 server module needs one. Bypassable only via Renesas or Rambus, which are themselves capacity-constrained. Listed on STAR Board (China A-shares) — Hong Kong investors can access. Highest pricing power per dollar of market cap.

  2. Memory probe cards (JEM 6855.T): Probe cards test memory dies before packaging. JEM is #5 globally and benefits from HBM testing complexity (12-high stacks need more probe pins, more contact cycles). Already in your watchlist per the inv-q index.

  3. Sub-32GB MLC eMMC monopoly (Macronix 2337.TW): Macronix is the only credible global supplier post-2028. KGI thesis is partially priced (stock +350% in 12 months) but if eMMC ramp executes, NT$300 target (10x 2026 EPS) is achievable.

Follow the capex — second-derivative beneficiaries

NVIDIA's 1M-GPU-equivalent commitments (Stargate, Microsoft, Meta, Google) imply ~$200B of memory capex through 2028. Trace four layers:

  1. Direct recipient: SK Hynix (HBM3E and HBM4), TSMC (CoWoS), Micron (HBM share)
  2. Near-competitors: Samsung (HBM3E qualification finally cleared 2025), Sandisk + Kioxia (NAND)
  3. Upstream components: Lam Research, Applied Materials, Tokyo Electron (etch and deposition), KLA (metrology), ASML (EUV), Disco (dicing). Substrate suppliers (UIS / Unimicron 2404.TW). Probe cards (JEM 6855, MJC, FormFactor)
  4. Second-derivative: Specialty chemicals (Tokyo Ohka, Shin-Etsu silicon wafers), DDR5 RCD (Montage), memory IP (Rambus)

Reference: NVIDIA's 2024 EML preallocation to LITE/COHR drove 6-18-month follow-on rallies. The same pattern likely applies to Lam memory equipment, JEM probe cards, and Montage DDR5 RCDs.


PART IV: SECULAR TRENDS


17. Tailwinds and Headwinds

Tailwinds

| Force | Mechanism | Magnitude | Durability |
| --- | --- | --- | --- |
| AI inference scaling | More tokens, longer contexts → more KV cache → more HBM and NAND | Largest tailwind in semis | 5-10+ years |
| HBM bit content per GPU | 96 → 192 → 288 → 576+ GB roadmap | 6x by 2030 | 5+ years |
| KV cache offloading | New NAND demand tier, 122TB QLC drives | $5-10B incremental TAM by 2027 | 3-5 years |
| Edge inference | Auto, drones, IoT inference adds NAND content | $5B incremental | 5+ years |
| HDD-to-SSD substitution | TCO crossover at hyperscale | $5-10B incremental | 3-5 years |
| Sub-32GB MLC eMMC monopoly | Big 4 NAND exiting | NT$200B+ for Macronix | 5-10 years |
| DDR5 transition | Server upgrade cycle, RCD requirements | DDR4 → DDR5 by 2028 | 3 years |
| CHIPS Act + Korean subsidies | Subsidized capex | $50B+ | 5 years |

Headwinds

| Force | Mechanism | Magnitude | Likelihood |
| --- | --- | --- | --- |
| CXMT DRAM ramp | Chinese DDR4/early DDR5 substitution | 5-10% of global DRAM by 2027 | High |
| YMTC NAND ramp | 294L NAND, 13-15% global share | 5-10% of global NAND by 2026 | High |
| HBM4 multi-vendor | Samsung, Micron certify alongside Hynix | -10-20% Hynix HBM premium | Moderate |
| Macroeconomic recession | Hyperscaler capex cut 10-20% | -30%+ memory pricing | Low-moderate |
| Geopolitical supply chain disruption | Taiwan, Korea risks | Catastrophic if it happens | Tail risk |
| Capex over-response | 2027-2028 capacity arrival | Cycle peak ends | Moderate |

18. Technology Roadmap

| Year | Key transitions | Watch |
| --- | --- | --- |
| 2025-2026 | HBM3E ramp, BiCS8 (218L), V-NAND Gen 9 (286L), DDR5-6400 | Samsung HBM3E qualification, Micron HBM4 ramp |
| 2026-2027 | HBM4 ramp (NVIDIA Vera Rubin), BiCS10 (332L), 321L QLC NAND | HBM4 multi-vendor outcomes |
| 2027-2028 | HBM4E, V-NAND Gen 10 (400+ layers), DDR6 spec | Sub-32GB eMMC monopoly inflection |
| 2028-2030 | HBM5, 500+L NAND via wafer bonding, CXL 4.0 multi-rack pooling | China nodes converging? |
| 2030+ | High-Bandwidth Flash (HBF, Sandisk), processing-in-memory (PIM), NV-DIMM | Architecture inflection |

19. Adjacent Industries & Convergence


PART V: INVESTMENT FRAMEWORK


20. The Picks — Tiered Framework

Tier 1: Core holdings (highest conviction, own through cycle)

SK Hynix (000660.KS) — HBM near-monopoly (62% share), 49% FY2025 operating margin, targeting HBM4 dominance through 2027. Multi-year structural thesis. Korea-listed; US investors can use the unsponsored OTC ADR (HXSCL).

Micron Technology (MU) — Cleanest US-listed AI memory exposure. HBM share gainer (~21-24%), NVIDIA Vera Rubin 12-high HBM4 supplier, 56% data center revenue mix. FY2026 revenue could reach $90B+ at the midpoint of guidance. Trades at ~10x the FY2027 bull-case EPS.

Macronix International (2337.TW) — Sub-32GB eMMC monopoly post-2028. KGI thesis: NT$300 target = 10x 2026 EPS. Already +350% in 12 months, but the eMMC revenue ramp (from ~3% to ~75% of revenue by 2027) gives multi-year compounding optionality. Watch Q1 2026 actuals as the first real test.

Tier 2: Tactical positions (strong but cycle-dependent)

Kioxia (285A.T) — The purest listed NAND play. FY2025 net income guidance of ¥460-520B puts the stock at 7.8x forward earnings. Risks: the Bain selldown overhang (multi-billion-dollar block sales likely over the next 12-24 months) and no HBM optionality. Size the position against the overhang.

Sandisk (SNDK) — Post-spin operating leverage is extreme: GM 29.9% → 51.1% in two quarters, with a 65-67% Q3 guide. KV cache QLC qualification at 5 hyperscalers. Already +295% YTD; conviction is high, but entry is less attractive at peak GM.

Samsung Electronics (005930.KS) — Recovery story on HBM3E qualification (cleared Sep 2025) plus HBM4 30%+ NVIDIA allocation target. Less pure than Hynix but cheaper and more diversified.

Montage Technology (688008.SS) — DDR5 RCD chip leader (36.8% share). High-margin IP-like business, not exposed to memory ASP volatility. Listed only on China STAR Board; access via Stock Connect.

Tier 3: Watchlist

Japan Electronic Materials (6855.T) — #5 global memory probe card maker. Already on Pink’s wiki. HBM testing complexity drives content per HBM stack.

Winbond Electronics (2344.TW) — Specialty DRAM scarcity (DRAM 34% of FY2025 revenue) + NOR flash (35%). DRAM capacity sold out through 2027.

Rambus (RMBS) — Memory IP licensing, pure-play. HBM4 royalties, DDR5 RCD competitor to Montage.

Unimicron (2404.TW / UIS) — IC substrate maker, HBM and CoWoS substrate exposure. Already on Pink’s wiki.

Avoid

Western Digital (WDC) post-spin — HDD-only; HAMR commercialization is the only real catalyst. SSD shortage helped temporarily but secular HDD decline continues.

Nanya Technology (2408.TW) — Specialty DRAM but no HBM, no advanced node. Behind Winbond on technology.

Powerchip Semiconductor Manufacturing (PSMC, 6770.TW) — Foundry + DRAM hybrid, no HBM, China substitution risk.

21. Portfolio Construction

| Ticker | Tier | Position size guidance | Thesis horizon |
| --- | --- | --- | --- |
| 000660.KS (SK Hynix) | 1 | Core, 4-6% | 18-36 months |
| MU | 1 | Core, 3-5% | 12-24 months |
| 2337.TW (Macronix) | 1 | High-conviction, 2-4% | 24-36 months (eMMC ramp) |
| 285A.T (Kioxia) | 2 | Tactical, 1-3% (against overhang) | 12-24 months |
| SNDK | 2 | Tactical, 1-3% (after pullback) | 12-24 months |
| 005930.KS (Samsung) | 2 | Diversifier, 2-3% | 12-24 months |
| 688008.SS (Montage) | 2 | IP-like, 1-2% | 24-36 months |
| 6855.T (JEM) | 3 | Watchlist, 1-2% on entry | 24-36 months |
| 2344.TW (Winbond) | 3 | Watchlist | 12-24 months |
| RMBS | 3 | Watchlist | 24-36 months |

Rough portfolio shape: 60% Tier 1, 30% Tier 2, 10% Tier 3. Concentration in HBM (Hynix, MU, Samsung) plus structural NAND/NOR plays (Macronix, Sandisk, Winbond) plus IP-like compounders (Montage, RMBS).

22. Key Questions to Keep Researching

  1. HBM4 qualification timing: When do Samsung and Micron clear NVIDIA HBM4 qualification at scale? This is the single biggest swing factor for SK Hynix’s premium.
  2. CXMT DRAM market share trajectory: How fast does CXMT close the 2-3 node gap? Watch Q2 and Q3 2026 share data.
  3. Macronix Q1 2026 actuals: First real test of KGI’s eMMC ramp thesis. If MLC/TLC wafer-in scales as projected, KGI’s NT$300 target gains conviction.
  4. Bain’s third Kioxia block sale: Date, size, pricing. Sets the supply baseline through 2026.
  5. HDD-to-SSD crossover timing: Watch hyperscaler 2026 capex commentary (Microsoft, Google, Meta) for QLC SSD substitution data.
  6. CoWoS capacity expansion: TSMC’s 3x 2027 CoWoS capacity vs HBM demand. Bottleneck or relief?
  7. YMTC HBM optionality: YMTC reportedly evaluating HBM via TSV. If they enter, Korean HBM premium compresses.
  8. FundaAI Estimate refresh: Re-run after login to compare against my haircut and validate revenue/ASP assumptions.

Sources

Primary (vault, prior research):
- KB/wiki/data-center-memory-types.md (Mar 2026)
- KB/wiki/nand-flash-kv-caching-llm.md (Mar 2026)
- KB/wiki/2337/macronix-kgi-2026-translation.md (Apr 2026, KGI initiation)
- KB/wiki/memory-industry-primer.md (Apr 2026, agent synthesis)

Sell-side and Substack:
- KGI Securities, “Macronix: A Better Story Than DDR4” (March 2026)
- SemiAnalysis, “Scaling the Memory Wall: HBM Roadmap”
- FundaAI, “DeepKioxia: Not a Flash in the Pan” (in inv-q queue)
- TrendForce quarterly NAND/DRAM market share reports

Earnings transcripts (last 8 quarters):
- Micron Q1-Q2 FY2026, Q3-Q4 FY2025, Q1-Q2 FY2025, Q3-Q4 FY2024 (via Motley Fool/Futurum)
- Samsung Global Newsroom, Q1 2024 - Q1 2026 quarterly preliminary releases
- Kioxia investor relations, Q3 FY2024 (post-IPO) through Q3 FY2025
- Sandisk investor relations, Q1 FY2026 (first standalone) and Q2 FY2026; WDC pre-spin earnings

Technical sources:
- NVIDIA Hopper Architecture In-Depth (developer.nvidia.com)
- vLLM Paged Attention design docs
- NVIDIA Dynamo blog series on KV cache offloading
- Samsung PM1753 KV cache offload whitepaper (2025)
- JEDEC HBM4 specification (2025)

Geopolitical / market structure:
- Tom’s Hardware SSD/HDD price ratio analysis (Q1 2026)
- Mordor Intelligence and Technavio NOR market sizing
- US Department of Commerce Entity List publications (YMTC)


Knowledge gaps flagged

  1. FundaAI live EPS estimates for MU, SNDK, 285A.T: Pending Pink’s Chrome login. Will refresh Section 15.
  2. Samsung exact DRAM/NAND/foundry split: Not publicly disclosed; estimates from TrendForce.
  3. SK Hynix exact DRAM/NAND split: Not publicly disclosed.
  4. Bain’s actual 2018 equity cheque (vs ¥2T deal): Not publicly disclosed; IRR calculation approximate.
  5. Kioxia named hyperscaler customers: Apple and Microsoft from Bloomberg sourcing only; not in IR.
  6. Sandisk five hyperscaler identities: Mentioned on calls but not named.
  7. Macronix Q1 2026 actuals: Not yet reported; KGI thesis validation pending.
  8. Third Micron HBM customer: Mentioned on Q2 FY2025 call as volume customer, identity withheld.

Written 2026-04-26. Next refresh trigger: Macronix Q1 2026 print, Kioxia Q4 FY2025 print, Bain’s third Kioxia block sale, FundaAI Estimate Analysis post-login.