OpenAI has clarified that it currently has no plans to use Google’s in-house chips to power its products. The statement follows a report from Reuters and other media outlets suggesting that the AI lab was considering Google’s artificial intelligence chips to meet surging demand.
An OpenAI spokesperson said on Sunday that while the company is running early tests with some of Google’s tensor processing units (TPUs), it has no plans to deploy them at scale for now.
Google declined to comment.
Testing different chips is common practice for AI labs, but deploying new hardware at scale is a lengthy process that requires different architectures and software adaptations. OpenAI currently relies heavily on Nvidia’s graphics processing units (GPUs), along with AMD’s AI chips, to meet its growing compute demands. OpenAI is also developing its own chip, which is expected to reach the “tape-out” milestone this year, the point at which the design is finalized and sent for manufacturing.
Earlier this month, Reuters reported that OpenAI had begun using Google Cloud services to meet its growing need for computing capacity, marking an unexpected collaboration between two prominent competitors in the AI sector. Most of OpenAI’s computing capacity comes from GPU servers operated by CoreWeave, a so-called neocloud company.
In recent years, Google has expanded external access to its in-house AI chips, including TPUs, which it had previously reserved largely for internal use. The shift has attracted customers ranging from major tech firms such as Apple to startups like Anthropic and Safe Superintelligence, two rivals to ChatGPT launched by former OpenAI leaders.
© Thomson Reuters 2025