Saturday's full-layer scan surfaces three concurrent signals marking an inflection. OpenAI shipped GPT-5.5, co-designed with NVIDIA's GB200 NVL72, posting 82.7% on Terminal-Bench 2.0. Meta and Microsoft announced AI-efficiency-driven workforce reductions totaling more than 20,000 roles in a single week. Microsoft committed A$25 billion (~US$18 billion) over four years to Australian AI infrastructure, with the Australian AI Safety Institute operating its evaluation regime on Azure. Lock-in has migrated from "model interface" to a fused stack of model + compute + governance, and the AI investable universe is reorganizing from four or five model providers into two integrated stacks: OpenAI–NVIDIA–Microsoft vs. Anthropic–Google–Broadcom.
L2+L1+L5 — OpenAI GPT-5.5: Co-Designed With NVIDIA GB200 NVL72
OpenAI shipped GPT-5.5 and GPT-5.5 Pro to the API on Friday, April 24, 2026. The model was co-designed, trained, and served on NVIDIA's GB200 and GB300 NVL72 systems, which OpenAI says deliver 35× lower cost per million tokens and 50× higher token output per megawatt versus the prior generation. Public benchmarks reported by OpenAI: 82.7% on Terminal-Bench 2.0 (vs. 75.1% for GPT-5.4 and narrowly above Anthropic's Claude Mythos Preview), 78.7% on OSWorld-Verified, and 74.0% on 1M-token MRCR v2 (vs. 36.6% for GPT-5.4). Pricing is set at roughly twice that of GPT-5.4.
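Taken together, the pricing and efficiency claims imply a sharp widening of per-token serving margins, which is the economic logic behind coupling a model to a specific compute generation. A back-of-envelope sketch follows; the dollar baselines are illustrative assumptions, and only the 2× pricing and 35× cost-reduction multipliers come from the announcement:

```python
# Back-of-envelope serving-margin sketch for GPT-5.5 vs. GPT-5.4.
# The dollar baselines below are illustrative assumptions, not disclosed figures;
# only the two multipliers are taken from the reported launch claims.

prev_price_per_mtok = 10.00   # assumed GPT-5.4 list price, $ per 1M output tokens
prev_cost_per_mtok  = 4.00    # assumed GPT-5.4 serving cost, $ per 1M tokens

price_multiple = 2.0          # "pricing set at roughly twice GPT-5.4"
cost_reduction = 35.0         # "35x lower cost per million tokens" on GB200/GB300 NVL72

new_price = prev_price_per_mtok * price_multiple
new_cost  = prev_cost_per_mtok / cost_reduction

print(f"GPT-5.4: gross margin ${prev_price_per_mtok - prev_cost_per_mtok:.2f} per 1M tokens")
print(f"GPT-5.5: gross margin ${new_price - new_cost:.2f} per 1M tokens")
# Under these assumptions, per-token gross margin more than triples (about $6 -> $20),
# even before the 50x tokens-per-megawatt gain is factored into capacity planning.
```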
Notably, OpenAI states that GPT-5.5 itself helped develop the load-balancing heuristics that improved its own infrastructure throughput by more than 20% — the first publicly disclosed instance of an AI system optimizing the production stack on which it runs. The release lands in the same week as Anthropic's April 23 Claude Code postmortem and within three weeks of the April 7 Mythos non-deployment, allowing OpenAI to recapture the model-narrative spotlight that Anthropic had owned through Q1.
The co-design of GPT-5.5 with NVIDIA's GB200 NVL72 marks the first explicit instance of model performance being permanently coupled to a specific compute generation: "model-interface lock-in" has expanded into "model + compute lock-in."
L10+L7+L5 — Big Tech Sheds 20,000+ Jobs in One Week: AI Capex Displaces Wage Costs
On April 23, Meta announced it would cut roughly 8,000 employees, about 10% of its 79,000-person workforce, beginning May 20, while leaving an additional 6,000 open roles unfilled. Meta also raised its 2026 operating expense guidance to $162–169 billion and explicitly framed AI infrastructure capex as the primary pressure point. In the same week, Microsoft offered voluntary buyouts to roughly 12,000 employees.
The combined 20,000+ workforce reduction in a single week is the first public-market signal that the AI labor shock — previously concentrated among 22–25-year-old entry-level developers per the Stanford AI Index 2026 — has expanded into mid- and senior-level white-collar bands. U.S. unemployment claims tied to AI-cited layoffs rose roughly 14% in the first week of April. The pattern indicates "AI capex displacing wage costs" is becoming a standard line item in quarterly disclosure rather than a one-off restructuring narrative.
L1+L8+L4 — Microsoft Australia A$25B + AISI Evaluation Operations
On April 23, Microsoft CEO Satya Nadella and Australian Prime Minister Anthony Albanese jointly announced a four-year A$25 billion (~US$18 billion) commitment that includes: (1) a 140% expansion of Microsoft's Azure AI and commercial cloud capacity in Australia by end-2029; (2) a co-developed evaluation regime with the Australian AI Safety Institute; (3) an extension of the Microsoft–ASD Cyber-Shield to additional government agencies; and (4) AI workforce training for three million Australians. The package is roughly five times the size of Microsoft's October 2023 A$5 billion commitment.
As a four-year program, it functions less as a quarterly P&L item and more as an "infrastructure node in the allied AI bloc," providing a reference template for negotiations with the EU, UK, Korea, and Japan. By operating AISI evaluation infrastructure, Microsoft accelerates the de facto convergence of allied governance standards alongside the U.S. and UK Safety Institute models. With Copilot lagging ChatGPT and Gemini in consumer share, Microsoft is leaning into government and enterprise channels to offset OpenAI–NVIDIA's model edge with infrastructure and policy lock-in.
Lock-in Change: Model Interface → Full Stack (Model + Compute + Governance)
Lock-in is migrating from "model interface" to a fused stack of model + compute + governance. GPT-5.5's co-design with NVIDIA GB200 NVL72 marks the first explicit instance of model performance being permanently coupled to a specific compute generation. In parallel, Microsoft's Australia deal binds Azure AI infrastructure to the operations of the Australian AI Safety Institute, so meeting governance obligations now reinforces dependence on a single cloud provider. The result is a "triple lock-in" — model, compute, and governance — accumulating with the OpenAI–NVIDIA–Microsoft axis and difficult for challengers to replicate. The Anthropic–Google–Broadcom axis is building a parallel stack to respond, but suffered a model-momentum setback after the April 23 Claude Code postmortem.
6-Month Implications: The AI investable universe will reorganize around two integrated stacks rather than four to five model providers. OpenAI–NVIDIA–Microsoft now controls the most credible model + compute + governance pipeline; Anthropic–Google–Broadcom is the only realistic counter-stack and will lean harder on its governance narrative (RSP v3.0, Frontier Safety Roadmap) to compensate for the postmortem hit. Expect at least one additional Microsoft-style sovereign AI commitment in the EU or UK by Q3, and at least one U.S. Congressional hearing that cites the Stanford AI Index alongside Meta/Microsoft layoff data as direct evidence for AI Adjustment Assistance legislation. For investors, model performance ceases to be the differentiator between providers; the more useful lens becomes "which provider can ship, withhold, comply, and scale capex simultaneously." [Two-stack reorganization HIGH · EU/UK sovereign AI deal timing MEDIUM]