OpenAI and Broadcom: Building Custom AI Chips
Key Vocabulary
accelerator /əkˈsɛləreɪtər/ — a specialized chip that speeds up computing tasks such as AI
deploy /dɪˈplɔɪ/ — to put equipment or software into active use
capacity /kəˈpæsəti/ — the maximum amount a system can produce or handle
Ethernet /ˈiːθərnɛt/ — a common technology for connecting computers in a network
📖 Article
OpenAI has partnered with Broadcom to co-develop custom AI accelerators. Under the agreement, announced on October 13, 2025, OpenAI will design the chips while Broadcom will develop and deploy them. The companies shared a timeline but did not disclose financial terms. The hardware will be integrated into racks that use Broadcom networking technology.
Deployment is scheduled to begin in the second half of 2026 and to be completed by the end of 2029, with a planned capacity of 10 gigawatts. OpenAI has separate chip agreements with Nvidia and AMD, but this move reflects a broader trend toward in-house designs by AI developers. Analysts have questioned the scale and cost of such commitments; even so, Broadcom shares rose after the announcement. The partnership also underscores the choice of Ethernet for scale-up and scale-out networking, which may affect data center design and operating costs. Because OpenAI's services have grown rapidly, the company has sought more tailored infrastructure. The planned 10 gigawatts was described as roughly equal to the power needs of more than eight million U.S. homes, which illustrates the scale.
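The homes comparison can be checked with simple arithmetic. A minimal sketch, assuming an average U.S. household draws about 1.2 kilowatts on a continuous basis (roughly 10,500 kWh per year); this household figure is an assumption, not from the article:

```python
# Rough sanity check of the "more than eight million U.S. homes" claim.
PLANNED_CAPACITY_W = 10e9   # 10 gigawatts, as stated in the article
AVG_HOME_DRAW_W = 1_200     # assumed average household draw (~1.2 kW)

homes = PLANNED_CAPACITY_W / AVG_HOME_DRAW_W
print(f"{homes:,.0f} homes")  # roughly 8.3 million
```

With this assumed household figure, 10 gigawatts works out to a bit over eight million homes, which matches the article's description.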
❓ Quiz
💬 Discussion
Do you think having in-house hardware helps a company move faster? Why?
Have you ever worked with a team that chose a new technology? What changed?
What do you think about companies buying a lot of computing power? Is it risky?
Would you prefer services from a company that builds its own hardware? Why or why not?