OpenAI and Broadcom: Building Custom AI Chips
Key Vocabulary
accelerator /əkˈsɛləreɪtər/
capacity /kəˈpæsəti/
Ethernet /ˈiːθərnɛt/
vertical integration /ˌvɜːrtɪkəl ˌɪntɪˈɡreɪʃən/
in-house /ˌɪnˈhaʊs/
📖 Article
On October 13, 2025, OpenAI announced a multi-year collaboration with Broadcom under which OpenAI will design its own AI accelerators and Broadcom will develop and deploy the hardware. The plan assigns OpenAI responsibility for chip architecture and system design, and Broadcom responsibility for manufacturing, rack integration and networking. If the companies meet their schedule, racks will begin rolling out in the second half of 2026. The arrangement reflects a broader strategic shift toward vertical integration among AI developers.
The partnership targets a cumulative capacity of 10 gigawatts by the end of 2029, a figure that Reuters noted is roughly equivalent to the electricity needs of more than eight million U.S. homes. Deployment at that scale raises questions about energy use, supply chains and capital intensity. Broadcom will supply Ethernet-based networking and connectivity for the racks, which the firms say supports both scale-up and scale-out architectures. The companies did not disclose financial terms for the deal.
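A rough back-of-the-envelope check, assuming an average U.S. household draws about 1.2 kW of electricity (roughly 10,500 kWh per year): 10 GW ÷ 1.2 kW per home ≈ 8.3 million homes, which is consistent with the comparison Reuters reported.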
While OpenAI has also secured major agreements with Nvidia and AMD for specialized processors, building custom accelerators allows tighter alignment between hardware and model requirements. Nevertheless, some analysts have expressed skepticism about the size of OpenAI's commitments relative to its revenue and warned of investment risk. Even so, Broadcom's stock rose after the announcement, reflecting investor optimism about its role in AI infrastructure. The collaboration follows a broader industry trend in which cloud providers and large developers seek more control over chip design and data center networking.
Competitors and customers will watch the project closely, since its outcomes may influence future data center design. For now, the timeline and capacity targets offer concrete milestones for assessing progress.
❓ Quiz
💬 Discussion
Do you think large investments in AI hardware affect everyday services you use? How?
Have you ever felt surprised by how much energy big technology projects use? What did you learn?
What do you think about companies designing hardware that matches their software? Is it smart?
Would you trust a company more if it made both hardware and software? Why or why not?
How would you explain the idea of "in-house" design to a friend who knows little about tech?