Two developments over the past 24 hours show the AI race evolving in two very different directions.
One comes from the model layer.
Meta’s next-generation model Avocado is reportedly delayed.

The other comes from infrastructure.
Elon Musk says Tesla is preparing to launch its own AI chip manufacturing project called Terafab.
One story is about model capability.
The other is about control over the hardware supply chain.
Both are beginning to shape the future of the AI industry.
Commentary:
Avocado is not performing poorly in absolute terms.
In Meta’s internal evaluations, the model reportedly outperforms previous Meta models on reasoning, coding, and writing tasks. It also surpasses Google’s Gemini 2.5, released in March 2025.
The issue is timing.
Google released Gemini 3.0 in November 2025, and Avocado still appears to trail that model. It also does not clearly surpass the current leading models from OpenAI and Anthropic.
More tellingly, reports suggest that Meta internally discussed a stopgap: renting access to Google’s Gemini system to power some AI products until Avocado is fully ready.
If that discussion actually happened, it would suggest that Meta is facing more pressure in the model race than it publicly acknowledges.
Over the past two years Meta has invested aggressively in AI—recruiting researchers, expanding compute infrastructure, and building a large developer ecosystem around its Llama models.
But influence in open-source ecosystems does not necessarily translate into leadership in frontier models.
As Google, OpenAI, and Anthropic continue to push their flagship systems forward, Meta will eventually need a model that clearly closes the gap.
Tesla CEO Elon Musk announced that Terafab, an AI chip manufacturing initiative, will begin operations next week at Tesla’s Texas Gigafactory. The project aims to build a production chain covering logic chips and advanced packaging, supporting Tesla’s autonomous driving and AI training systems while reducing reliance on suppliers such as TSMC and Samsung.
Commentary:
Terafab appears to be more than just another chip design effort.
Based on the information available, Tesla is attempting to build a vertically integrated AI chip manufacturing capability that includes logic chips, memory, and advanced packaging.
This mirrors a broader trend across the AI industry.
Google, Amazon, Microsoft, and Meta are all investing heavily in custom silicon to reduce reliance on external suppliers and optimize the economics of large-scale AI infrastructure.
Tesla’s case is slightly different.
Its chips are designed primarily for vertical workloads such as autonomous driving, robotics, and AI training clusters rather than general cloud computing markets.
However, the real challenge lies in manufacturing.
Building and operating a cutting-edge semiconductor fab requires massive capital investment, specialized equipment, mature process technology, and a large pool of engineering talent. Historically, very few companies have built such capabilities from scratch.
That leaves a simple question: is Musk once again attempting something almost no one else would try?
The most important AI events from the past 72 hours:
Microsoft locks in $174B of GPU capacity while Adobe pays $150M over subscription practices
Samsung and NVIDIA bet on next-generation NAND while Robotaxis enter the WeChat ecosystem