Nvidia is moving closer to what could become one of the largest strategic investments in its history, as the chipmaker deepens its relationship with OpenAI to support large-scale AI infrastructure expansion.
The two companies have outlined a long-term partnership focused on deploying at least 10 gigawatts of Nvidia-powered AI systems. This rollout is expected to span multiple phases and rely on millions of GPUs installed across purpose-built data centers.
Key Elements of the Partnership
- Deployment of multi-gigawatt AI infrastructure using Nvidia GPUs
- First 1-gigawatt phase targeted for the second half of 2026
- Use of Nvidia’s upcoming Vera Rubin platform
- Joint optimization of hardware, networking and software stacks
OpenAI will treat Nvidia as a preferred strategic partner while continuing to work with other major infrastructure providers. The arrangement reflects OpenAI’s growing need for reliable, high-density compute as its models and services scale.
Nvidia continues to expand its AI footprint beyond large-scale infrastructure by pushing deeper into model-level innovation. The company recently released an open-source speech AI system with roughly 24-millisecond response latency, demonstrating how optimized software can run efficiently on large GPU deployments. The release reinforces Nvidia’s strategy of pairing advanced AI models with high-density compute, complementing its growing partnership with OpenAI.
Investment Details at a Glance
| Area | Details |
| --- | --- |
| Planned Nvidia investment | Potentially up to $100B over time |
| Initial reports | Earlier plans described as non-binding |
| Current status | Confirmed as progressing and on track |
Nvidia CEO Jensen Huang confirmed participation in OpenAI’s latest fundraising round, calling it potentially Nvidia’s largest-ever investment. He also noted interest in a future OpenAI IPO, should one materialize.
The move aligns with OpenAI’s broader push to raise roughly $100 billion to fund next-generation AI infrastructure, of which Nvidia’s planned investment would form a substantial part.