Three concurrent signals dominate Monday's L1+L2 focus: SK hynix's Q1 2026 disclosure of a 72% operating margin and HBM demand exceeding three years of forward capacity (L1+L7), the April 29 simultaneous earnings prints from Microsoft, Alphabet, and Meta as a $440B+ FY2026 capex visibility test (L1+L2), and the NVIDIA-Groq 3 LPU + Vera Rubin NVL72 integration confirmed for Q3 2026 customer rollout (L1+L2). The Korean memory bloc has secured a multi-year structural rent and become the binding constraint on global AI compute economics.

SK hynix Q1 2026 — 72% Operating Margin, HBM Demand Exceeds 3-Year Capacity

SK hynix reported KRW 52.6 trillion in revenue (+198% YoY) and KRW 37.6 trillion in operating profit (+405% YoY) at a 72% operating margin on April 23. It was the first quarter ever above KRW 50 trillion in revenue, and an all-time-high margin print that overtakes TSMC's 58.1% operating margin for the same quarter.
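The headline margin can be checked directly from the reported figures (a minimal sketch; the only inputs are the revenue and operating-profit numbers above):

```python
# Reported Q1 2026 figures from the April 23 disclosure, in KRW trillion
revenue = 52.6
operating_profit = 37.6

margin = operating_profit / revenue
print(f"Operating margin: {margin:.1%}")  # → 71.5%, reported rounded as 72%
```

The exact quotient is 71.5%, so the 72% headline is a round-up of the reported figures, not a discrepancy.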

On the earnings call, management said that HBM4 is already shipping into customer-aligned production schedules, that cumulative HBM demand inquiries exceed three years of forward capacity, and that HBM4E will use the company's 1c-nm process node, with samples in H2 2026 and mass production targeted for 2027.

KOSPI closed at 6,417.93, the first close above 6,400 in the index's history. Combined SK hynix and Samsung Electronics market cap reached KRW 2,173 trillion.

Big Tech Earnings Week Opens — $690B 2026 Capex Under Test

Microsoft, Alphabet, and Meta report calendar-Q1 earnings in the same after-market session on April 29, with Apple and Amazon following on April 30. Microsoft's FY26 capex guidance stands at $146B, Alphabet's at $175–185B, and Meta's at $115–135B — combined FY2026 hyperscaler capex consensus including Apple and Amazon stands at $690B.
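The guidance figures above can be summed to reconcile the two headline numbers (the $440B+ three-company figure and the $690B five-company consensus). Apple and Amazon guidance is not broken out in the source, so their combined share is derived only as a residual:

```python
# FY2026 capex guidance from the section above, in $B, as (low, high) ranges
guidance = {
    "Microsoft": (146, 146),
    "Alphabet": (175, 185),
    "Meta": (115, 135),
}

low = sum(lo for lo, _ in guidance.values())    # 436
high = sum(hi for _, hi in guidance.values())   # 466
print(f"Three-company range: ${low}B-${high}B")  # consistent with the $440B+ framing

# The $690B consensus includes Apple and Amazon; their implied combined share:
total = 690
print(f"Implied Apple + Amazon: ${total - high}B-${total - low}B")  # $224B-$254B
```

The midpoint of the three-company range ($451B) is what anchors the "$440B+ capex visibility test" framing in the lede.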

Microsoft Cloud gross margin is consensus-modeled at 66.23%, down from 69% in Q3 FY2025 — the explicit signal that the AI infrastructure phase is still margin-compressive. Market focus has shifted from "are they overbuilding?" to "where is the ROI?"
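The scale of the modeled compression is easier to read in basis points (a trivial sketch over the two figures quoted above):

```python
# Microsoft Cloud gross margin: Q3 FY2025 actual vs. consensus model (from the text)
prior = 69.0       # percent, Q3 FY2025
consensus = 66.23  # percent, consensus-modeled

compression_bps = (prior - consensus) * 100
print(f"Modeled compression: {compression_bps:.0f} bps")  # → 277 bps
```

A 277 bps sequential compression on a segment of Microsoft Cloud's size is what keeps the "where is the ROI?" question live into the print.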

NVIDIA-Groq 3 LPU + Vera Rubin — Inference Pricing Moves Inside the NVIDIA Stack

NVIDIA's GTC 2026 (March) announcement of Groq 3 LPU integration into the Vera Rubin NVL72 platform is now confirmed for Q3 2026 customer availability. The chip is manufactured by Samsung on a 4nm process, and the combined LPU + Vera Rubin rack delivers 35× higher throughput per megawatt than Blackwell NVL72 alone for trillion-parameter models, at a target of $45 per million tokens.
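The 35× throughput-per-megawatt claim translates directly into relative energy economics. Absolute tokens-per-megawatt for Blackwell NVL72 is not given in the source, so this sketch works purely in ratios under a constant-power-price assumption:

```python
# Relative energy cost per token implied by the 35x throughput/MW claim
# (illustrative ratio only; no absolute baseline figures are given in the source)
speedup = 35

energy_cost_ratio = 1 / speedup  # energy cost per token vs. Blackwell NVL72 alone
print(f"Energy cost per token falls to {energy_cost_ratio:.1%} of baseline")  # → 2.9%
```

Under that assumption, power-constrained datacenters get roughly a 97% reduction in energy cost per token for the same rack footprint, which is the mechanism behind the $45-per-million-tokens target.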

The December 24, 2025 deal structure is a $20B non-exclusive licensing agreement with NVIDIA hiring Groq's founder and core engineering team. Senators Warren and Blumenthal characterized the transaction as a "reverse acquihire" and urged DOJ / FTC review — that determination remains unpublished as of April 26.

Lock-in Change and the Korea-Side Counterweight

The HBM supply chain crosses from cyclical scarcity into structural lock-in. Accelerator OEMs lose negotiating leverage on memory allocation through the GB300 / Rubin Ultra cycle, and the Korean memory bloc captures a multi-year structural rent that compresses NVIDIA, AMD, and Intel gross margins downstream.

At the same time, NVIDIA-Groq 3 LPU's Samsung 4nm foundry assignment forms a Korea-side structural counterweight to TSMC's CoWoS chokehold — the first time Korea holds simultaneous leverage in both memory (SK hynix + Samsung) and foundry (Samsung 4nm) along the AI compute stack.

6-Month Implications: Korean memory and foundry capacity becomes the binding constraint on global AI compute economics over the next six months. The April 29 earnings outcome cascades into NVIDIA's May 28 print, Korean memory equipment orders for May–June, and 2027 SMR / nuclear PPA pipelines for power delivery. [HIGH on structural rent, MEDIUM-HIGH on the NVIDIA-Groq inference standard pending DOJ / FTC antitrust closure]