OpenAI Teams Up with Broadcom to Build Custom AI Chips

Key Points
- OpenAI partners with Broadcom to create custom AI accelerator chips.
- Goal is to deploy up to 10 gigawatts of custom AI compute capacity.
- Partnership aims to lessen OpenAI's reliance on Nvidia hardware.
- Deployment of Broadcom equipment expected to start in the second half of 2026.
- Full rollout of the 10‑gigawatt capacity targeted for completion by the end of 2029.
- Sam Altman called the deal a critical step for AI infrastructure and societal benefit.
- Previous multi‑gigawatt deals include six gigawatts with AMD and ten gigawatts with Nvidia.
- The collaboration reflects a broader industry shift toward in‑house chip design.

OpenAI announced a partnership with Broadcom to develop its own custom AI accelerator chips, aiming to diversify its compute supply and lessen its reliance on Nvidia. The collaboration targets up to 10 gigawatts of bespoke AI hardware for OpenAI's data centers, with deployment slated to begin in the second half of 2026 and conclude by the end of 2029. The move follows prior multi‑gigawatt deals with AMD and Nvidia and reflects a broader industry push toward in‑house chip design to secure compute capacity for advanced models.
Partnership Overview
OpenAI disclosed a strategic alliance with Broadcom to design and manufacture custom AI accelerator chips intended for use in its data‑center infrastructure. The agreement positions Broadcom as the hardware partner that will enable OpenAI to produce "10 gigawatts of custom AI accelerators," a scale the company describes as comparable to the output of multiple large‑scale power plants.
Strategic Rationale
By developing its own chips, OpenAI seeks to reduce dependence on Nvidia, which currently dominates the AI‑accelerator market. The partnership also aims to embed insights gained from frontier model development directly into hardware, thereby unlocking new levels of capability and efficiency for OpenAI's suite of products.
Technical Ambitions
The custom chips are planned to be integrated into OpenAI's existing and future compute clusters, supporting a range of AI applications from conversational agents to advanced generative tools. The collaboration builds on earlier multi‑gigawatt agreements—six gigawatts with AMD and ten gigawatts with Nvidia—demonstrating OpenAI's broader strategy of diversifying its compute sources.
Timeline and Deployment
Broadcom is expected to begin rolling out rack‑level equipment in the second half of 2026, with the full 10‑gigawatt capacity projected to be in place by the end of 2029. OpenAI co‑founder and CEO Sam Altman called the partnership a "critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses."
Industry Context
The move aligns with a growing trend among major tech firms—such as Meta, Google, and Microsoft—to develop proprietary silicon. This shift is driven by soaring AI demand, supply‑chain considerations, and the desire for greater control over performance characteristics. While custom chip projects have yet to pose a serious threat to Nvidia's market dominance, they have created significant opportunities for alternative chipmakers like Broadcom.