Displacement and Failure, Quantified Simultaneously
Goldman Sachs economist Elsie Peng published the first granular separation of AI's dual labor effects in an April 6 note. AI substitution eliminates roughly 25,000 US jobs per month. Augmentation adds back about 9,000. Net loss: approximately 16,000 per month. Gen Z bears a disproportionate share of the impact: the cohort is concentrated in data entry, customer service, legal support, and billing roles that are prime targets for automation. Entry-level hiring at the top 15 tech companies has fallen 25% since 2023.
But the same week delivered a counter-signal. Robert Half found that 29% of companies that laid off workers in favor of AI have since rehired them, and a separate analysis shows 55% regret the cuts. AI handles the cases it was designed for but fails on the distribution tail: edge cases, emotionally complex situations, and queries requiring real customer context. Gartner forecasts that 50% of AI-driven customer service headcount cuts will be reversed by 2027.
The structural reading: AI displacement is real at 16K/month, but it follows a zigzag rather than a linear replacement path. Enterprise AI total-cost-of-ownership (TCO) calculations that exclude "replace-then-rehire" costs are systematically overstating ROI.
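As a rough illustration of how the omission skews the math, the sketch below compares a naive automation ROI against one that prices in a probability-weighted rehire cycle. All dollar figures are hypothetical placeholders, not figures from the Goldman or Robert Half reports; the 29% rehire rate is Robert Half's company-level number, used here only as a crude proxy for a per-role probability.

```python
# Hypothetical illustration: how ignoring "replace-then-rehire" inflates automation ROI.
# All dollar amounts are placeholder assumptions, not figures from the cited reports.

def annual_roi(savings: float, costs: float) -> float:
    """Simple ROI: net savings divided by costs."""
    return (savings - costs) / costs

# Assumed inputs for one automated support role (illustrative only).
salary_saved       = 55_000   # fully loaded cost of the displaced role
ai_license_and_ops = 15_000   # AI tooling, integration, monitoring
rehire_probability = 0.29     # Robert Half's company-level rehire rate, used as a proxy
rehire_cycle_cost  = 30_000   # severance, re-recruiting, onboarding, lost productivity

# Naive TCO: assume the replacement sticks.
naive = annual_roi(salary_saved, ai_license_and_ops)

# Adjusted TCO: add the probability-weighted cost of reversing the cut.
adjusted_costs = ai_license_and_ops + rehire_probability * rehire_cycle_cost
adjusted = annual_roi(salary_saved, adjusted_costs)

print(f"Naive ROI:    {naive:.0%}")     # roughly 267% under these assumptions
print(f"Adjusted ROI: {adjusted:.0%}")  # roughly 132% under these assumptions
```

The point is not the specific numbers but the direction: any nonzero rehire probability moves reported ROI down, and TCO models that hard-code it to zero are quietly making the most optimistic assumption available.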
Compute-to-Alignment: Anthropic's AAR Paradigm Shift
Anthropic published on April 14 that nine Claude Opus 4.6-based Automated Alignment Researchers (AARs) achieved 97% Performance Gap Recovered (PGR) on a weak-to-strong supervision problem, versus 23% for human researchers. The AARs worked for 5 days at $18,000 in compute (800 cumulative research hours); the humans took 7 days to reach 23%. The top method generalized to math (PGR 0.94) and coding (PGR 0.47).
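For context, PGR is the standard metric of the weak-to-strong generalization literature; the sketch below states the usual definition, which the Anthropic result presumably follows (the note's exact formula is not reproduced here), with purely illustrative numbers.

```python
def performance_gap_recovered(weak: float, w2s: float, strong_ceiling: float) -> float:
    """Standard PGR from the weak-to-strong generalization literature:
    the fraction of the gap between a weak supervisor's performance and a
    strong model's ceiling that is recovered when the strong model is
    trained under weak supervision.
    PGR = (w2s - weak) / (strong_ceiling - weak); 1.0 means the gap is fully closed."""
    return (w2s - weak) / (strong_ceiling - weak)

# Illustrative only: a weak supervisor at 60% accuracy, a strong ceiling of 90%,
# and a weak-to-strong result of 88% gives PGR ~0.93.
print(performance_gap_recovered(weak=0.60, w2s=0.88, strong_ceiling=0.90))  # 0.933...
```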
The significance is the pathway: "compute converts to alignment progress." With only ~1,100 AI safety researchers worldwide, this addresses a critical bottleneck. But the structural implication cuts both ways. The compute-to-alignment pathway is available only to frontier labs with the capital to run large-scale AAR operations. This may centralize rather than democratize safety research, and the argument that "safety scales with compute" could be repurposed to justify accelerating frontier model development.
Perfect Alignment Is Mathematically Impossible
A PNAS Nexus paper by Zenil et al. used Gödel's incompleteness theorem and Turing's Halting Problem to prove that perfect alignment is mathematically impossible for sufficiently complex LLMs. Any model complex enough to exhibit general intelligence will also be computationally irreducible and produce unpredictable behavior, making forced alignment impossible.
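To give a flavor of why undecidability bites here, the sketch below is a halting-problem-style diagonalization analogy, not a reproduction of the paper's actual proof: assume a hypothetical, total verifier `is_always_aligned` exists, and note that a sufficiently expressive model can consult it and contradict its verdict.

```python
# Illustrative analogy only (not the Zenil et al. argument): why a perfect,
# fully general alignment verifier runs into halting-problem-style diagonalization.
# 'is_always_aligned' is a hypothetical function assumed to decide, for any
# program, whether it ever produces a misaligned output.

def adversarial_model(is_always_aligned, source_of_self: str) -> str:
    """A model that consults the hypothetical verifier on its own source code
    and then does the opposite of whatever the verifier predicts."""
    if is_always_aligned(source_of_self):
        return "misaligned output"   # verifier said 'always aligned' -> contradict it
    return "aligned output"          # verifier said 'can misbehave' -> behave

def naive_verifier(source: str) -> bool:
    """A stand-in 'perfect' verifier that (wrongly) certifies everything as aligned."""
    return True

print(adversarial_model(naive_verifier, source_of_self="<its own source>"))
# -> 'misaligned output': the model contradicts whatever the verifier certifies,
# so no total, perfect verifier can exist for arbitrary programs.
```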
The authors propose "managed misalignment" as an alternative — competing AI agents with different cognitive styles and partially overlapping goals operating in distinct roles to check one another. In experimental debates, open-source models showed a wider spectrum of perspectives than proprietary models, creating a more resilient ecosystem less likely to converge on potentially harmful consensus.
This reframes alignment from a completable goal into a continuous management task, with direct implications for how MCP and agent orchestration systems should be designed.
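One way to read "managed misalignment" as an orchestration pattern is sketched below: heterogeneous reviewer agents with different roles cross-check a proposed action, which only executes on quorum. The agent names, review criteria, and quorum API are assumptions made for illustration; they are not part of MCP or the PNAS paper.

```python
# Hypothetical sketch of "managed misalignment" as an orchestration pattern:
# reviewers with different, partially overlapping criteria check a proposed
# action, and it only executes if enough of them independently approve.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReviewerAgent:
    name: str
    role: str                      # e.g. "safety", "cost", "user-intent"
    review: Callable[[str], bool]  # returns True if the action looks acceptable

def execute_with_quorum(action: str, reviewers: list[ReviewerAgent], quorum: int) -> bool:
    """Run the action only if at least `quorum` independent reviewers approve."""
    approvals = [r.name for r in reviewers if r.review(action)]
    if len(approvals) >= quorum:
        print(f"EXECUTE {action!r} (approved by {approvals})")
        return True
    print(f"BLOCK {action!r} (only {len(approvals)}/{quorum} approvals)")
    return False

# Illustrative reviewers with deliberately different perspectives.
reviewers = [
    ReviewerAgent("safety", "safety",      lambda a: "delete" not in a),
    ReviewerAgent("cost",   "cost",        lambda a: "bulk" not in a),
    ReviewerAgent("intent", "user-intent", lambda a: a.startswith("refund")),
]
execute_with_quorum("refund single order", reviewers, quorum=2)   # executes
execute_with_quorum("bulk delete accounts", reviewers, quorum=2)  # blocked
```

The design choice worth noting is the deliberate heterogeneity: the reviewers are not copies of one policy, so a single blind spot is less likely to pass unanimously.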
MCP's 30 CVEs: Agent Infrastructure Is Already Insecure
In January and February 2026 alone, security researchers filed over 30 CVEs targeting MCP servers, clients, and infrastructure. Of 2,614 MCP implementations surveyed, 82% are vulnerable to path traversal, two-thirds carry code injection risk, and over a third are susceptible to command injection. A CVSS 9.6 remote code execution flaw was found in a package downloaded nearly half a million times.
The CVE breakdown: 43% exec/shell injection, 20% tooling infrastructure flaws, 13% authentication bypass. The root causes were not exotic zero-days; they were missing input validation, absent authentication, and blind trust in tool descriptions. Gartner predicts that by 2028, 25% of enterprise GenAI applications will experience at least five minor security incidents per year, up from 9% in 2025.
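A minimal sketch of what fixing the two most common root causes looks like at the tool-handler level is below. The handler shape, directory, and token check are illustrative assumptions, not the MCP SDK's actual API.

```python
# Hypothetical MCP-style file tool handler hardened against the two most common
# root causes: missing input validation (path traversal) and absent authentication.
# Generic handler shape for illustration; not the actual MCP SDK interface.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/mcp-data").resolve()

def read_file_tool(requested_path: str, auth_token: str, valid_tokens: set[str]) -> str:
    # 1. Authenticate the caller instead of trusting any client that connects.
    if auth_token not in valid_tokens:
        raise PermissionError("missing or invalid token")

    # 2. Resolve the requested path and refuse anything that escapes the allowed
    #    root (blocks '../../etc/passwd'-style traversal and absolute paths).
    target = (ALLOWED_ROOT / requested_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):
        raise ValueError("path escapes the allowed directory")

    return target.read_text()
```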
MCP's design optimizes for interoperability and developer speed, not for security enforcement by default. The "managed misalignment" theory from PNAS offers a design principle (competing agents checking one another), but the current security foundation is insufficient to realize that framework.
The Governance Gap
The four signals above converge on a single structural observation: the speed at which humans are being displaced and the speed at which AI is learning to align itself are both accelerating, but the governance infrastructure connecting the two is not. Goldman's 16K/month gives policymakers a number. Anthropic's AAR gives frontier labs an argument. PNAS gives theorists a ceiling. MCP's CVEs give security teams a crisis.
For the next six months, the frame shifts from "will AI take jobs" to "who governs the gap between displacement speed and safety speed." Enterprise AI strategists should factor "replace-then-rehire" costs and agent-protocol security overhead into TCO, and monitor whether AAR-style automation genuinely distributes safety capacity or further concentrates it.
Confidence: MEDIUM. The Goldman data is robust but based on one methodology; the AAR result is a single benchmark; the PNAS proof is formal but its practical implications for system design are still unfolding.