Two Layers, Two Inflection Points, Same Week
Today's L5 (AI Native Apps) + L6 (Vertical Penetration) scan surfaces two structural transitions that arrived in the same week. First, the question of whether AI-native SaaS can monetize has been closed by actual financial disclosures. Second, physical AI has moved from a pilot phase to a quantified-ROI phase. These converging shifts suggest that the power topology of L5 and L6 is being redrawn in parallel.
Perplexity's $500M ARR — The Agentic Pivot Gets Financially Validated
Perplexity hit $500M in annualized revenue in April 2026, up 335% year-over-year. That is also more than double the $232M it reported in 2025, a gap closed in about two months. The drivers are specific: the February launch of its agent product "Computer" and a parallel usage-based pricing layer.
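The two growth figures compare against different baselines, which is easy to misread. A quick sanity check using only the numbers disclosed above (the implied April 2025 base is an inference from the 335% figure, not a disclosed number):

```python
# Sanity-check Perplexity's disclosed growth figures (all values in $M).
arr_now = 500        # April 2026 annualized revenue
arr_2025 = 232       # figure reported for 2025
yoy_growth = 3.35    # +335% year-over-year

# Implied ARR one year earlier, taking the 335% YoY figure at face value.
# This is an inference, not a disclosed number.
implied_april_2025 = arr_now / (1 + yoy_growth)
print(f"Implied April 2025 ARR: ${implied_april_2025:.0f}M")      # ~$115M

# Multiple versus the separately reported $232M figure for 2025
multiple_vs_2025 = arr_now / arr_2025
print(f"Multiple vs reported 2025 ARR: {multiple_vs_2025:.2f}x")  # ~2.16x
```

The "more than doubling" claim refers to the $232M baseline; the 335% figure implies a much smaller year-ago base of roughly $115M.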
Perplexity began as an AI search company. When it raised $200M at a $20B valuation in September 2025, the narrative was "the answer engine that threatens Google Search." The $22.6B valuation in January 2026, and today's disclosed $500M ARR, reframe that narrative: Perplexity is now an agent execution platform that happens to have started in search.
This signal extends to the AI-native app ecosystem broadly. Products that stop at "search → answer" hit a monetization ceiling; products that extend to "answer → execution" can compound usage-based revenue. Perplexity's 335% YoY is the first large-scale earnings evidence for that hypothesis.
Power Score: +2 Perplexity / -1 Google Search (ad revenue at risk)
Salesforce Agentforce's $800M ARR — Enterprise Upsell at Scale
In the same week, Salesforce disclosed that Agentforce reached $800M ARR in Q4 FY26, up 169% YoY, across 29,000 signed deals — and 60% of those deals came from existing Salesforce customers. That 60% figure is the point.
The classic risk of any enterprise AI agent product is the "new product sold to new customers" problem. Agentforce avoids it by running on top of Data Cloud inside the existing CRM footprint, expanding account ACV without net-new customer acquisition cost. ServiceNow's Now Assist ($600M ACV, Pro Plus tier commanding a 25–40% premium) shows the same pattern; the 5.5% single-day jump in ServiceNow shares on April 1 was the market's immediate reaction to that momentum.
Sigma Computing adds a third data point: $200M ARR (+100% YoY) with Sigma Agents now live. Sigma's twist is that its agents run inside the customer's own data warehouse, not as an external SaaS layer — inheriting the warehouse's security and governance model.
Data Proximity Is the New Lock-in
The common principle across today's three L5 events is simple: the closer an agent sits to the data, the higher the switching cost.
- Agentforce: runs on Salesforce CRM + Data Cloud — requires the customer's data to already live in Salesforce
- Sigma Agents: execute directly inside Snowflake/Databricks warehouses — inherit security and governance by default
- Harvey AI: adopted group-wide by HSBC as its legal platform — once legal research and contract archives concentrate there, migration cost compounds non-linearly
This pattern connects to today's L3 (middleware) activity as well. Lucidworks (April 8) and Domo (April 2) both launched their own MCP servers. Each vendor's MCP server looks like "more openness," but it actually creates MCP-per-data-source dependency — new switching cost that accumulates silently under the appearance of interoperability.
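The per-data-source dependency can be made concrete with a sketch of an agent-client configuration. All server names, commands, and arguments below are invented for illustration; they are not the vendors' actual MCP endpoints:

```python
# Hypothetical agent-client configuration illustrating MCP-per-data-source
# dependency. Names, commands, and args are invented for illustration and
# are NOT the vendors' real endpoints.
mcp_servers = {
    # Each entry wires the agent to one vendor's MCP server. The protocol is
    # "open," but every server speaks for exactly one proprietary data source.
    "lucidworks-search": {"command": "lucidworks-mcp", "args": ["--index", "prod"]},
    "domo-analytics":    {"command": "domo-mcp",       "args": ["--instance", "acme"]},
    "crm-data-cloud":    {"command": "crm-mcp",        "args": ["--org", "acme-corp"]},
}

def switching_surface(config: dict) -> int:
    """Count migration points: every registered server is one more workflow
    dependency that must be re-pointed when a vendor is swapped out."""
    return len(config)

print(switching_surface(mcp_servers))  # 3 servers = 3 migration points
```

Each added source grows the switching surface by one: interoperability at the protocol layer, accumulation at the dependency layer.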
Physical AI: From "Pilot" to "Quantified ROI"
Tesla officially started mass production of the Cybercab in April. Gigafactory Texas is targeting one unit every ten seconds and an annual run rate of two million. A purpose-built robotaxi architecture — no steering wheel, no pedals, no shared-passenger-car platform — is an attempt to rewrite the cost structure rather than optimize within it. Waymo, legacy OEMs, and ride-hailing platforms do not currently have a path to match that cost base.
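The line-rate claims above can be sanity-checked against each other, assuming the ten-second takt runs around the clock (my assumption; actual utilization is not disclosed):

```python
# Back-of-envelope check on the Cybercab line targets quoted above.
SECONDS_PER_UNIT = 10                       # stated takt time
SECONDS_PER_YEAR = 365 * 24 * 3600

theoretical_max = SECONDS_PER_YEAR // SECONDS_PER_UNIT   # units/year at 24/7
target = 2_000_000                                       # stated annual run rate
utilization = target / theoretical_max

print(f"Theoretical 24/7 max: {theoretical_max:,} units/year")    # 3,153,600
print(f"Implied utilization at 2M/yr target: {utilization:.0%}")  # ~63%
```

The two figures are mutually consistent: a ten-second takt leaves roughly a third of theoretical capacity as headroom at the 2M/year target.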
Figure AI disclosed a 90-minute humanoid assembly cycle. The 10-year target is one million units per year. All of 2026 production is already pre-allocated to Hyundai (30,000 units/year starting 2028) and Google DeepMind. In parallel, Figure 02 units contributed to the assembly of 30,000 BMW X3s at BMW Spartanburg over 10 months, moving roughly 90,000 components across ~1.2 million steps. That is the first quantified ROI reference point for a humanoid deployment.
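The Spartanburg disclosure reduces to simple per-vehicle rates (approximate, since the ~1.2 million step count is itself approximate):

```python
# Per-vehicle rates implied by the BMW Spartanburg figures quoted above.
vehicles = 30_000          # BMW X3 units over the deployment
components_moved = 90_000  # roughly, per the disclosure
steps = 1_200_000          # approximate step count
months = 10

print(components_moved / vehicles)   # 3.0 components handled per vehicle
print(steps / components_moved)      # ~13.3 steps per component
print(vehicles / months)             # 3,000 vehicles per month of output
```

The per-vehicle contribution is narrow (three components per car), which is exactly why the result reads as a costed, repeatable task rather than a demo.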
The moment "humanoid ROI is unproven" stops being credible, physical AI procurement shifts from "pilot evaluation" to "production-deployment contract." BMW beginning a further test deployment at Leipzig this month is part of the same transition.
Structural Takeaway
It is not coincidence that L5 and L6 printed an inflection point in the same week. Both layers are crossing from "the technology works" to "the numbers show up." L5's proof is ARR. L6's proof is BMW's 30,000 units and Cybercab's 2M/year target.
Over the next six months, the dominant enterprise AI procurement question is likely to shift from "which model" or "which protocol" to "how close is the agent to our data?" Data-owning platforms (Salesforce, Snowflake, Microsoft, Harvey + Ironclad) are now validated with reported revenue, while standalone AI apps that only call external APIs face compounding pressure from pricing and functional overlap.
Confidence on that hypothesis: MEDIUM. One quarter of disclosed results is validation, not durability.
Tomorrow's Watchlist: Thursday L7+L8
Tomorrow's AI Power Atlas covers L7 (Capital & Market) + L8 (Regulation & Geopolitics). Focus: VC and Tier 1 responses to the AI-native ARR rush (a16z, Sequoia, Sapphire); FDA/EU guidance ahead of the August 2026 EU AI Act high-risk medical device obligations; NHTSA/DMV/EU TSR commentary triggered by the Cybercab production start.