The last 24 hours made one thing obvious: AI competition is shifting from “who has the best model” to “who controls the gateway, the power budget, and the full stack.” Platforms are drawing new boundaries, compute is now planned in gigawatts, and big-tech org charts are converging around model + silicon + systems.

The EU is examining whether Meta is using control over WhatsApp chatbots and the WhatsApp Business API to limit third-party AI providers. In parallel, Meta is developing “Mango,” a next-gen unified image-and-video model.
Commentary:
Meta plans to restrict third-party general-purpose AI assistants (e.g., ChatGPT, Copilot) from delivering core AI functions via the WhatsApp Business API starting January 15, 2026, allowing only "assistive" use cases like order lookups or flight alerts, while aggressively promoting Meta AI. The stated rationale is "stability" and "API intent," but strategically it looks like a deliberate land-grab for the highest-value conversational surface area. Viewed through an antitrust lens, the EU's scrutiny is unsurprising.
The other key question: is Mango purely internal (ads creative, Reels, creator tooling), or will Meta push it outward as a market-facing model? If external, Meta must win on ecosystem and distribution beyond its own apps; if internal, Mango may be the missing piece for productizing AI where Meta has under-delivered so far.

OpenAI's expanded partnership could deliver up to 6 gigawatts of compute capacity built on AMD chips—signaling that frontier AI planning is now constrained by power, not just GPUs.
Commentary:
For OpenAI, diversifying away from a single NVIDIA path is as much about lead times and allocation risk as it is about price. If AMD can ship reliably and offer a usable software stack with real engineering support, OpenAI gains negotiating leverage on cost, schedule, and supply certainty. The real question isn’t “replace NVIDIA,” but “scale a second path for critical workloads.”
At 6 GW, we're talking about deployment at extreme scale—potentially hundreds of thousands of GPUs over time. That's why this is bigger than chip revenue: if OpenAI ports key training/inference workloads to AMD, AMD gets a shot at influencing standards, tooling gravity, and developer pathways.
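A quick back-of-envelope sketch shows why gigawatt-scale power budgets map to accelerator counts at this magnitude. All figures below (per-chip draw, system overhead, PUE) are illustrative assumptions, not vendor or OpenAI specifications:

```python
# Back-of-envelope: accelerators supportable within a facility power budget.
# Every parameter default here is an illustrative assumption, not a spec.

def accelerators_for_budget(facility_gw: float,
                            watts_per_accelerator: float = 1_000.0,
                            system_overhead: float = 0.5,
                            pue: float = 1.3) -> int:
    """Estimate accelerator count for a given facility power budget.

    watts_per_accelerator: assumed chip-level draw (hypothetical)
    system_overhead: extra fraction per accelerator for CPUs, networking, storage
    pue: power usage effectiveness, i.e., cooling/facility overhead multiplier
    """
    all_in_watts = watts_per_accelerator * (1 + system_overhead) * pue
    return int(facility_gw * 1e9 / all_in_watts)

# Under these assumptions, even one gigawatt lands in the hundreds of thousands:
print(accelerators_for_budget(1.0))  # → 512820
print(accelerators_for_budget(6.0))  # → 3076923
```

The exact numbers shift with the assumed per-chip draw and overheads, but the order of magnitude is robust: gigawatt-class budgets imply fleets counted in the hundreds of thousands of accelerators per gigawatt.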
Who's next to seriously target AMD as an alternative compute backbone—clouds, sovereign AI projects, or another hyperscale model player?

NVIDIA is partnering with the U.S. DOE on “Genesis Mission AI,” aiming to strengthen U.S. leadership through AI infrastructure and R&D across nuclear, quantum, biology, and materials science.
Commentary:
The plan reportedly spans five years and exceeds $120B, with roughly 60% allocated to compute buildout and model R&D. NVIDIA is positioned to lead development of at least seven next-gen AI supercomputers. This is more than revenue; it's structural lock-in—NVIDIA embedding itself into the U.S. long-term science and strategic-industry roadmap.
Crucially, this isn't about bigger chatbots. It's about durable advantage in high-barrier domains where AI can compound scientific and industrial leadership. If execution holds, it's a long-horizon tailwind for NVIDIA's moat.

Amazon’s AGI VP Rohit Prasad is leaving at year-end. Reinforcement learning expert Pieter Abbeel will take over, as Amazon reorganizes AGI and merges it with chip R&D and quantum teams.
Commentary:
This looks like Amazon moving AGI from “model lab” toward a systems-and-agents posture—where agentic workflows, infrastructure economics, and integration matter as much as raw model capability. As training/inference costs become constrained by power and supply chains, the winners are those who can iterate chips, racks, networking, and scheduling alongside model strategy.
Quantum remains a long-dated option. Folding it into the same org doesn’t imply near-term AGI breakthroughs, but it does suggest Amazon wants quantum exploration attached to the AI capital engine—preventing it from becoming an isolated island.
Will this reorg produce distinctly different AI products (especially enterprise-grade agent workflows and toolchains), or is it simply a defensive reset under pressure? The product cadence over the next two quarters will tell.