Meta’s decision to cancel its most advanced in-house AI training chip marks a turning point in how the company builds its artificial intelligence infrastructure. The scrapped chip, developed with TSMC and already through tape-out – the final design stage before a chip is sent to the fab – was intended to handle AI model training at scale, a capability Meta will now source externally.
The cancellation did not happen in isolation. Within weeks, Meta finalized major procurement agreements with both Nvidia and AMD, suggesting the chip’s failure accelerated a shift toward a multi-vendor hardware strategy rather than triggering it.
Key details of Meta’s external chip strategy:
- Nvidia Deal: Meta will deploy Vera Rubin GPUs, Grace CPUs (as standalone chips – a first), Spectrum-X Ethernet switches, and InfiniBand interconnects
- AMD Deal: Up to 6 gigawatts’ worth of compute capacity – a measure of the deployment’s power footprint, not a chip count – built on custom AMD Instinct GPUs using the MI450 architecture, with shipments starting in late 2026
- AMD Warrant: Meta received a performance-based warrant for up to 160 million AMD shares, tying procurement to equity upside
Meta’s existing MTIA chip, first deployed in 2023, continues to run content recommendation systems on Facebook and Instagram. The cancelled chip was a separate, more advanced project targeting training workloads.
The company frames this pivot as a “portfolio-based approach to infrastructure,” but the sequence of events points to a more urgent recalibration driven by internal development setbacks.