Hard | Technology | October 13, 2025

OpenAI and Broadcom: Building Custom AI Chips

Key Vocabulary

accelerator /əkˈsɛləreɪtər/

a specialized processor that speeds up machine-learning computations
Example: Large models run faster when an accelerator is optimized for them.

capacity /kəˈpæsəti/

the total computing power planned or available in a system
Example: The deployment aims to reach a capacity of ten gigawatts.

Ethernet /ˈiːθərnɛt/

a family of wired network technologies used for data center connections
Example: The racks will use Ethernet-based networking for scale-out designs.

vertical integration /ˌvɜːrtɪkəl ˌɪntɪˈɡreɪʃən/

when a company controls more stages of its production or supply chain
Example: Designing its own chips is an example of vertical integration.

in-house /ˌɪnˈhaʊs/

done inside a company rather than by outside suppliers
Example: OpenAI will do much of the design work in-house.

📖 Article

On October 13, 2025, OpenAI announced a multi-year collaboration with Broadcom that will see OpenAI design its own AI accelerators while Broadcom develops and deploys the hardware. The plan assigns OpenAI responsibility for chip architecture and system design, and Broadcom responsibility for manufacturing, rack integration and networking. If the companies meet their schedule, racks will begin rolling out in the second half of 2026. Such an arrangement reflects a strategic shift toward vertical integration by AI developers.

The partnership targets a cumulative capacity of 10 gigawatts by the end of 2029, a figure that Reuters noted is roughly equivalent to the electricity needs of more than eight million U.S. homes. Consequently, the scale of deployment raises questions about energy, supply chains and capital intensity. Broadcom will supply Ethernet-based networking and connectivity for the racks, which the firms say supports scale-up and scale-out architectures. The companies did not disclose financial terms for the deal.

While OpenAI has also secured major agreements with Nvidia and AMD for specialized processors, building custom accelerators allows tighter alignment between hardware and model requirements. Nevertheless, some analysts have expressed skepticism about the magnitude of the commitments relative to OpenAI's revenue and warned of investment risk. Meanwhile, Broadcom's stock rose after the announcement, reflecting investor optimism about its role in AI infrastructure. The collaboration follows a broader industry trend in which cloud providers and large developers seek more control over chip design and data center networking.

The project will be watched closely by competitors and customers, since its outcomes may influence future data center design. For now, the timeline and capacity targets offer concrete milestones to assess progress.

269 words

❓ Quiz

Q1. When was the collaboration announced?
Q2. What cumulative capacity is targeted by the partnership?
Q3. When will racks begin rolling out?

💬 Discussion

1. Do you think large investments in AI hardware affect everyday services you use? How?

2. Have you ever felt surprised by how much energy big technology projects use? What did you learn?

3. What do you think about companies designing hardware that matches their software? Is it smart?

4. Would you trust a company more if it made both hardware and software? Why or why not?

5. How would you explain the idea of "in-house" design to a friend who knows little about tech?