Nvidia’s $100 billion gamble on OpenAI could redefine the future of artificial intelligence.
In a move that could reshape the artificial intelligence infrastructure landscape, Nvidia has announced it will invest up to $100 billion in OpenAI, pairing enormous capital with cutting-edge compute to fuel the next generation of AI systems. Here is a breakdown of what is known so far, and what it might mean.
What’s in the Deal
The agreement, described as a strategic partnership, includes two intertwined transactions. First, Nvidia will invest cash in OpenAI in exchange for non-controlling equity, according to Reuters. Second, OpenAI will purchase Nvidia’s high-performance chips to power its AI data centers. According to those involved, the first tranche of Nvidia’s investment, roughly $10 billion, will be released once the first gigawatt of infrastructure is deployed.
OpenAI, which AP News reports was recently valued at around $500 billion, gains access to a vast supply of AI hardware, while Nvidia secures a major customer and partner as it scales up AI compute.
Infrastructure: Gigawatts, Chips, Timeframes
A central part of the deal is the build-out of at least 10 gigawatts of compute infrastructure powered by Nvidia’s systems. For context, the 10-gigawatt figure describes the electrical power those data centers would draw; a typical large nuclear reactor generates roughly 1 gigawatt, so the planned capacity is on the order of ten reactors’ worth of output.
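To make that scale concrete, here is a minimal back-of-envelope sketch. The reactor output (~1 GW) and per-accelerator power draw (~1 kW, including cooling and networking overhead) are round assumptions for illustration, not figures disclosed as part of the deal.

```python
# Back-of-envelope scale check; the figures below are illustrative assumptions,
# not numbers disclosed in the Nvidia-OpenAI announcement.
total_power_gw = 10          # planned build-out: at least 10 gigawatts
reactor_output_gw = 1.0      # assumed output of a typical large nuclear reactor
chip_power_kw = 1.0          # assumed draw per accelerator, including overhead

reactors_equivalent = total_power_gw / reactor_output_gw
chips_supported = (total_power_gw * 1_000_000) / chip_power_kw  # 1 GW = 1,000,000 kW

print(f"~{reactors_equivalent:.0f} large reactors' worth of power")
print(f"~{chips_supported:,.0f} accelerator-class chips, very roughly")
```

Under these assumptions, 10 gigawatts works out to roughly ten reactors’ worth of electricity and on the order of ten million accelerator-class chips.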
The first gigawatt is expected to come online in the second half of 2026, running on Vera Rubin, Nvidia’s next-generation chip platform. Subsequent phases will follow as more infrastructure is deployed, according to The Guardian.
Why It Matters
This deal cements a deep supply relationship between Nvidia and OpenAI at a time when demand for AI compute is surging. OpenAI’s growth has been constrained in part by access to compute (GPUs and other accelerator chips), so the agreement helps it lock in both investment and a supply of hardware.
For Nvidia, the win is threefold: financial upside from its investment, recurring revenue from OpenAI’s chip purchases, and a reinforced position as a foundational player in the frontier AI ecosystem. Nvidia also becomes OpenAI’s “preferred” supplier for compute and networking infrastructure.

Image: OpenAI leaders Greg Brockman and Sam Altman with Nvidia’s Jensen Huang, marking the $100 billion deal to expand AI infrastructure.
Risks & Open Questions
While the deal is massive, many details remain to be worked out. Open questions include exactly how the equity stake will scale over time, how deployment costs break down (land, power, cooling, networking, and so on), and how regulatory or geopolitical challenges, particularly antitrust scrutiny or export controls, could affect the build-out.
Another risk is competition from other infrastructure efforts, including OpenAI’s own in-house chip work and rival AI chip makers, as well as how efficient the resulting systems turn out to be in terms of power usage, cost per training run, and so on.
Broader Context: Where This Fits in the AI Arms Race
This comes amid growing competition globally—companies and countries are racing to build both the hardware and the regulatory, ethical, and climate infrastructure to handle widespread deployment of more capable AI systems. The deal reinforces Nvidia’s role as an essential enabler for frontier AI, and is another chapter in OpenAI’s evolution from research lab into a large-scale infrastructure operator.
It also complements other major partnerships and projects (e.g., Stargate and Microsoft’s ongoing involvement) that aim to build out global AI compute capacity.
FAQs
Will Nvidia’s investment give it control over OpenAI?
No. The equity Nvidia will receive is described as non-controlling, meaning Nvidia will not run OpenAI or dictate its research agenda.
Does this affect OpenAI’s other chip suppliers or in-house chip development?
It does not appear to block them. OpenAI is still pursuing other chip suppliers and exploring its own hardware (for example, with Broadcom and TSMC). This deal instead secures Nvidia as a preferred channel and supplier while OpenAI continues to diversify.
Where will the data centers or infrastructure be located?
Most of the reporting suggests much of the build-out will occur in the United States, but exact locations have not yet been made public.
What is “Vera Rubin” and why is it important?
“Vera Rubin” is Nvidia’s next-generation AI hardware platform, which is slated to power the first deployment phase of this deal. Because the first gigawatt of infrastructure will run on it, its performance, energy efficiency, reliability, and cost will heavily influence the whole project’s success.
Conclusion
This is not just another tech-investment headline. Nvidia's pledge of up to $100 billion stakes a claim in what may be the single most important resource for future AI development: reliable, massive compute capability.
By locking in both capital and chip supply, Nvidia and OpenAI are positioning themselves at the core of the next wave of AI breakthroughs—but with that comes real risk: enormous cost, regulatory scrutiny, the engineering challenge of building data centers at scale, and the need to deliver value that matches expectations. If this deal succeeds, it could set the blueprint for how AI infrastructure is financed and built globally. If it falters, it may show just how difficult it is to turn massive ambitions into operational reality.
