Two updates in the last 24 hours pull AI competition back to fundamentals: infrastructure and product value. Google appears to be turning TPUs from an internal weapon into a rentable compute asset pool—now including Meta as a customer—while Duolingo shows signs of slowing user momentum as free, real-time conversational AI reshapes what “learning a language” even means.

Commentary: Google's TPU rental push, and Meta as a customer
If accurate, this signals Google accelerating from “self-use chip maker” toward “open compute infrastructure provider.” The JV structure looks like compute project financing: external capital helps absorb the heavy capex of datacenter buildouts and TPU deployment, while the investment partner can expand distribution via customer networks and financing muscle—an “asset pool” approach to scaling TPU capacity with less balance-sheet strain.
Meta’s move is the bigger tell. As one of NVIDIA’s largest customers (your note cites 1.3M+ H100s), adding rented TPUs reads like a deliberate de-risking of single-vendor dependence. Treating compute as a liquid asset—shifting workloads to whichever platform offers better availability and economics—is how hyperscalers protect training and inference continuity and keep $/token under control. If TPUs deliver meaningfully better energy efficiency on certain inference workloads (you cite 50%+), the operating cost advantage compounds at scale.
The likely end state is a diversified compute portfolio: “NVIDIA GPUs + Google TPUs + in-house MTIA,” with training, inference, and specialized workloads mapped to the best-fitting stack. Once this model works at scale, it weakens any single ecosystem’s pricing power and pushes suppliers to compete harder on price, supply certainty, and software compatibility. In that sense, if Meta is renting TPUs, the market naturally asks whether NVIDIA’s moat is being eroded by multi-source supply and the financialization of compute.
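The portfolio logic above — treat compute as liquid, route each workload to the cheapest platform with capacity — can be sketched as a simple selector. All platform names, prices, and capacities here are illustrative assumptions, not reported figures.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    cost_per_m_tokens: float    # $/M tokens — illustrative, not real pricing
    available_capacity: float   # spare throughput in some common unit

def route_workload(platforms: list[Platform], required_capacity: float) -> Platform:
    """Pick the cheapest platform that still has enough spare capacity."""
    eligible = [p for p in platforms if p.available_capacity >= required_capacity]
    if not eligible:
        raise RuntimeError("no platform has sufficient capacity")
    return min(eligible, key=lambda p: p.cost_per_m_tokens)

# Hypothetical fleet mirroring the "NVIDIA + TPU + MTIA" mix discussed above.
fleet = [
    Platform("nvidia_gpu", cost_per_m_tokens=0.60, available_capacity=500),
    Platform("google_tpu", cost_per_m_tokens=0.45, available_capacity=300),
    Platform("in_house_mtia", cost_per_m_tokens=0.50, available_capacity=100),
]
print(route_workload(fleet, required_capacity=200).name)  # prints: google_tpu
```

Even this toy version shows the pricing-power effect: the moment a second platform clears the capacity bar, the incumbent's price becomes contestable on every marginal workload.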
Commentary: Duolingo's slowing momentum under AI pressure
This report concentrates three warning signs: weakening growth quality, fading user momentum, and accelerating AI competitive pressure. Revenue can still grow at 35%, but if the engine is pricing and subscription mix optimization rather than user expansion, the compounding story becomes more dependent on increasing penetration and ARPU—harder levers in consumer education.
The operational signals you provided are notable: Q4 2025 DAU growth slowed to ~18%, and MAU reportedly declined sequentially; paid users reached 11.5M, but only ~9% of MAU, implying limited headroom unless conversion improves; $42M net income on $283M revenue yields a net margin of ~15% — respectable, but not yet evidence of the operating leverage a scaled subscription business should show.
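As a sanity check, the ratios follow directly from the stated inputs (the implied MAU base is derived, not reported):

```python
# Figures as stated in the report cited above.
paid_users = 11.5e6
paid_share_of_mau = 0.09   # "only ~9% of MAU"
net_income = 42e6
revenue = 283e6

implied_mau = paid_users / paid_share_of_mau   # derived, not a reported number
net_margin = net_income / revenue

print(f"implied MAU ~ {implied_mau / 1e6:.0f}M")  # prints: implied MAU ~ 128M
print(f"net margin ~ {net_margin:.1%}")           # prints: net margin ~ 14.8%
```

The derived ~128M MAU base is what makes the conversion-headroom argument concrete: moving paid share from 9% to 12% would add roughly 3.8M subscribers without any user growth.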
More importantly, AI changes the user’s goal. For many people, the objective isn’t “learn a language” but “communicate across languages.” Free, real-time, context-aware practice in ChatGPT/Gemini/Claude nudges users from learning toward direct task completion. Meanwhile, Duolingo’s aggressive “AI-first” push—generating 148 courses—reportedly introduced quality issues in smaller languages and more mechanical voice output, which can erode its core advantage: a consistent learning experience and habit-forming retention.
To defend its position, Duolingo has to prove it delivers learning outcomes that generic AI chat can’t: structured pathways, reliable assessment, personalized review scheduling, and measurable progress tied to real scenarios (interviews, work writing, exams, travel). Do you think Duolingo is actually achieving that?
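Of the differentiators listed above, personalized review scheduling is the most mechanically checkable. A minimal Leitner-box sketch shows the kind of structure a generic chat session doesn't maintain — this is a textbook spaced-repetition scheme, not Duolingo's actual algorithm:

```python
from datetime import date, timedelta

# Leitner-style spaced repetition (illustrative; not Duolingo's real system).
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # box number -> days until next review

def next_review(box: int, correct: bool, today: date) -> tuple[int, date]:
    """Promote an item one box on a correct answer; demote to box 1 on a miss."""
    box = min(box + 1, 5) if correct else 1
    return box, today + timedelta(days=INTERVALS[box])

box, due = next_review(2, correct=True, today=date(2026, 1, 1))
print(box, due)  # prints: 3 2026-01-08
```

The defensible asset isn't the scheduler itself — it's the longitudinal per-learner error history that feeds it, which a stateless chat conversation starts without.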
When compute becomes a rentable, financeable asset pool, infrastructure moats get pressured by multi-source supply. And when AI tools make “instant communication” the default, content-first learning apps face a rewritten growth equation. The next winners will be the ones who can lock in supply certainty, unit economics, and product loops—at the same time.