OpenAI is reportedly preparing to develop its first custom artificial intelligence (AI) chipset this year. According to sources, the San Francisco-based company has begun the internal design phase and aims to complete the chip’s blueprint in the coming months. The move appears to be driven by the company’s desire to reduce its dependence on Nvidia while strengthening its bargaining position with other chip manufacturers. A recent trademark application from OpenAI suggests that the firm is also looking to produce a diverse range of hardware, including chipsets.
OpenAI’s Chipset
A report from Reuters indicates that OpenAI is finalizing its in-house chipset design, with the work expected to conclude in the coming months. Sources familiar with the matter said that once the design is complete, OpenAI plans to send the finalized design for fabrication, a step known as taping out, to Taiwan Semiconductor Manufacturing Company (TSMC), which will manufacture the chip.
The proposed chipset will be built on a 3-nanometer process and will feature a systolic array architecture paired with high-bandwidth memory (HBM) and extensive networking capabilities. Nvidia’s AI chips rely on a similar HBM-based memory design.
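OpenAI has not disclosed further architectural details, but systolic arrays themselves are a well-established design: a grid of simple processing elements through which operands flow in lockstep, each element performing one multiply-accumulate per cycle, so the matrix multiplications that dominate AI workloads proceed without repeatedly fetching data from memory. The Python sketch below simulates one common variant (an output-stationary array) purely for illustration; the function name, dimensions, and dataflow choice are assumptions for this example and say nothing about OpenAI’s actual design.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle simulation of an output-stationary systolic array.

    PE (i, j) accumulates output C[i, j]. Row i of A streams in from the
    left edge delayed by i cycles; column j of B streams in from the top
    edge delayed by j cycles, so matching operands meet at each PE.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"

    C = np.zeros((n, m))
    a_reg = np.zeros((n, m))  # operand of A currently held by each PE
    b_reg = np.zeros((n, m))  # operand of B currently held by each PE

    for t in range(n + m + k - 2):  # cycles until the skewed streams drain
        # Operands advance one PE per cycle: A moves right, B moves down.
        # (np.roll wraps one row/column around, but the feed step below
        # overwrites the entire left column and top row each cycle.)
        a_reg = np.roll(a_reg, 1, axis=1)
        b_reg = np.roll(b_reg, 1, axis=0)
        # Feed the array edges with skewed input streams (zeros = bubbles).
        for i in range(n):
            step = t - i
            a_reg[i, 0] = A[i, step] if 0 <= step < k else 0.0
        for j in range(m):
            step = t - j
            b_reg[0, j] = B[step, j] if 0 <= step < k else 0.0
        # Every PE multiply-accumulates in parallel each cycle.
        C += a_reg * b_reg
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In a physical accelerator, HBM stacked close to the die serves the same role as the “feed” step above, streaming operands to the edges of the array fast enough to keep every processing element busy, which is why HBM capacity and bandwidth are central to designs of this kind.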
OpenAI is believed to view developing its own chipsets as a way to strengthen its leverage in future negotiations with chip suppliers, while lessening its reliance on Nvidia’s widely used chips. Future iterations of the chipset are intended to deliver progressively more advanced processors with expanded functionality.
Sources indicate that the chipset’s design is being led by Richard Ho, OpenAI’s head of hardware, and his in-house team. Ho, a specialist in semiconductor engineering, previously worked at both Lightmatter and Google. Under his guidance, the team has reportedly grown in recent months and now comprises 40 members.
The initial deployment of OpenAI’s first chipset is expected to be limited, primarily to running certain AI models. While its early role in the company’s infrastructure may be modest, deeper integration could follow over time. Ultimately, OpenAI aims to use these chips for both AI model training and inference.