OpenAI Taps Google’s AI Chips in Strategic Shift Away from Nvidia Dependency

Highlights

OpenAI begins using Google’s AI chips, marking a major shift in its reliance on Nvidia and Microsoft’s infrastructure.

In a significant move within the AI landscape, OpenAI, the Microsoft-backed creator of ChatGPT, has reportedly begun utilizing Google’s artificial intelligence chips. According to a recent report by Reuters, this development points to OpenAI’s efforts to diversify its chip suppliers and reduce its dependency on Nvidia, which currently dominates the AI hardware market.

OpenAI has historically been one of the largest buyers of Nvidia's graphics processing units (GPUs), using them extensively both for training its AI models and for inference — the stage where a trained model processes new inputs to generate outputs. However, as demand for computing power surges, OpenAI is now exploring alternatives.

The Reuters report, citing a source familiar with the matter, says that OpenAI has started using Google's Tensor Processing Units (TPUs), marking a notable shift not only in its hardware strategy but also in its choice of cloud providers. Earlier this month, Reuters had reported that OpenAI was planning to use Google Cloud to help meet its growing computational needs.

What makes this collaboration remarkable is the competitive context. Google and OpenAI are direct rivals in the AI field, both vying for leadership in generative AI and large language model development. Yet, this partnership demonstrates how shared interests in infrastructure efficiency and cost management can bridge even the most competitive divides.

According to The Information, this is OpenAI’s first major deployment of non-Nvidia chips, indicating a deliberate effort to explore alternative computing platforms. By leasing Google’s TPUs through Google Cloud, OpenAI is reportedly looking to reduce inference costs — a crucial factor as AI services like ChatGPT continue to scale.

The move is also part of a broader trend at Google. Historically, the tech giant has reserved its proprietary TPUs mainly for internal projects. However, it appears Google is now actively expanding external access to these chips in a bid to grow its cloud business. This strategy has reportedly attracted several high-profile clients, including Apple and AI startups like Anthropic and Safe Superintelligence — both founded by former OpenAI employees and seen as emerging competitors.

A Google Cloud employee told The Information that OpenAI is not being offered Google’s latest-generation TPUs, suggesting the company is balancing business expansion with competitive caution. Still, the fact that OpenAI is now a customer illustrates Google’s ambition to grow its end-to-end AI ecosystem — from hardware and software to cloud services — even if that means partnering with direct rivals.

Neither Google nor OpenAI has issued official statements confirming the deal. Yet, the development signals an evolving AI infrastructure market where flexibility, cost-efficiency, and compute availability are becoming more strategic than ever.

As the race to power the future of AI intensifies, such cross-competitive collaborations could become more commonplace — redefining how major players navigate both cooperation and competition in the era of intelligent computing.
