The launch of Google’s Gemini 3 has the entire investing world rethinking the artificial intelligence landscape. The new reasoning model not only leapfrogged the latest from ChatGPT juggernaut OpenAI, the still-private company driving so much of the massive AI spending out there, but was also trained entirely on Google’s custom chips, called tensor processing units (TPUs), which were co-designed by Broadcom. In a new report, tech outlet The Information said that Meta Platforms is thinking about using Google’s TPUs in its data centers in 2027. The report fuels the debate about whether custom silicon is going to take a bite out of Nvidia’s graphics processing unit (GPU) business. Club stock Nvidia sank to nearly three-month lows on Tuesday.

Nvidia put out a statement on X, saying, “We’re delighted by Google’s success — they’ve made great advances in AI, and we continue to supply to Google.” But the post continued, “Nvidia is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.”

Jim Cramer, who views the recent Nvidia stock drop as a buying opportunity, said Tuesday that Meta or any other tech company shopping around for AI chips won’t lower the price of Nvidia GPUs, which are considered the gold standard in all-purpose chips for running AI workloads. “The demand is insatiable for Nvidia,” Jim said, pointing to last week’s solid earnings and rosy guidance.

The real winners here are Meta and Broadcom, which are also Club holdings. Jim said the idea of using less expensive TPUs gives Meta a chance to show that it’s not going to just spend like a drunken sailor, which is basically what slammed the stock the day after the company boosted its already massive spending guidance. For Broadcom, Jim said it is another feather in the cap of CEO Hock Tan, who is also on Meta’s board. So, if there is truth to The Information story, that might be the connection. Broadcom and Nvidia have been top performers for the portfolio in 2025, up more than 60% and 30%, respectively. Meta, also a Club stock, has been up and down, and is up only about 7.5% year to date.

[Chart: Broadcom, Nvidia and Meta, year to date]

The advent of Gemini 3 and its reliance on TPUs also raises the question of what the model means for OpenAI’s growth trajectory, not to mention its financial commitments. After all, so much of what’s going on with AI nowadays has the ChatGPT creator right at the center of it all. OpenAI is not yet a public company reporting quarterly earnings, but it’s safe to assume it doesn’t currently make enough money to justify its $500 billion valuation or its announced spending plans. It’s the momentum of user adoption and, more importantly, the sustainability of that momentum that could, if anything, justify OpenAI’s spending intentions. If it were to lose its lead, OpenAI’s perceived growth path would bear greater scrutiny. ChatGPT has been trained on Nvidia chips; Alphabet’s Google designed its TPUs with the help of Broadcom.

Even before Gemini 3 was released last week, Alphabet stock had been soaring. On Monday, it surged another 6%, extending its year-to-date gains to nearly 70%. The stock was up again Tuesday, knocking on the door of a $4 trillion market cap. While some believe Google and Broadcom are now winning at the expense of Nvidia and OpenAI, and that the future is all about custom silicon, we say: not so fast.
First off, it’s way too early to call that the battle of AI reasoning models will play out like the search wars, with the winner taking all. The idea that there will be only one model to rule them all, as Google Search has done for more than two decades, is not where we see this going. Not for the hardware, nor for the software or large language models (LLMs) that run on it. We still think this could all play out in such a way that certain models are better suited to certain tasks. That could mean Gemini for coding and research, Meta AI for more social or creative tasks, Anthropic and Microsoft playing for the enterprise space, and so on. Since we’re still in the early days of AI, the leading model at any given time must fight to stay on top. For example, when OpenAI’s ChatGPT launched in late 2022 and quickly went viral, Google hastily, and disastrously, stood up Bard, the chatbot it later rebranded as Gemini. But here we are three years later, and Gemini 3 has catapulted Google to the top of the heap as far as capabilities go. ChatGPT, however, is enjoying its first-mover advantage: early last month, OpenAI reported more than 800 million weekly active users. Google said last week that Gemini has more than 650 million monthly active users.

Second, just because Gemini doesn’t rely on Nvidia GPUs doesn’t mean that Nvidia hardware is suddenly less relevant. Custom semiconductors are nothing new. While they can bring cost advantages, those savings come with the expense of developing, updating, and manufacturing the chips. Plus, investors must stay mindful that while Gemini may not rely on Nvidia hardware, Google Cloud services do. TPUs are a type of application-specific integrated circuit (ASIC), meaning the chips are suited to a particular type of task or application. That’s all well and good for internal projects, like the advancement of the LLMs that underlie much of Google’s own services, such as Search, YouTube, and Waymo. However, TPUs are less attractive when the aim is to rent compute out to customers, which is what Google does as the world’s third-biggest cloud behind Amazon and Microsoft. For renting cloud compute, Nvidia’s GPUs are the undisputed champions because they work with Nvidia’s CUDA software platform, which AI researchers have been building on for years. GPUs are flexible, widely available, and already broadly adopted and familiar to developers around the world. A customer that developed strictly on TPUs might realize a cost benefit, but doing so would require giving up CUDA for Google’s own software stack, one that doesn’t translate to GPUs or, likely, even to the custom chips other companies might offer. To be sure, for the biggest LLM companies out there, it may make sense to develop a TPU version alongside a GPU one, if the volume of business warrants it.

We’re monitoring The Information report about Meta, but we are a bit skeptical. For starters, we already know that Meta is working with Broadcom on its own custom chips, so the idea of buying Alphabet’s custom silicon, instead of using the silicon it has been optimizing with Broadcom for its own workloads, is a bit odd. Alphabet is also Meta’s main rival in digital advertising, so the idea that Meta would start leaning on Alphabet as a key supplier, be it for hardware or a software stack, seems a bit risky.
Nonetheless, the race to build out accelerated AI infrastructure has spawned plenty of frenemy relationships, so we certainly are not dismissive of the news. Still, developing TPU versions of software alongside GPU-based versions is not going to make sense for most companies. Even if a company’s stated goal were to diversify beyond the Nvidia ecosystem, locking itself into another, even more specific software and hardware stack like Google’s TPU environment isn’t a smart way to go about it. In addition to reworking years of development written in CUDA before realizing any cost benefit from that effort, a company would also be giving up the ability to move to another cloud provider or to bring workloads in-house. Google’s TPUs aren’t available on AWS or Microsoft’s Azure cloud, or on neoclouds like CoreWeave, nor can they currently be purchased outright by a company that opts to build its own infrastructure. The Information report does suggest Google may consider selling chips to third parties for use in their own data centers, but it’s not clear when or to what extent. Will sales be reserved for large buyers, or open to buyers of all sorts in a more direct challenge to Nvidia? Time will tell, and we will continue to monitor for further details.

What Gemini 3 does indicate is that there are other ways to develop a leading LLM, ways that can be run more cheaply than those based on Nvidia hardware. However, it requires years of work and billions of dollars of investment to develop both the hardware and the software necessary to do so. Additionally, what a company like Google develops internally to reduce costs may not be as attractive to customers who don’t want to be locked in. The strategy only works for companies doing so much volume internally that the financial cost reduction is worth the loss of flexibility that Nvidia’s GPUs provide. Only a handful of companies in the world have that scale, and fortunately for Nvidia, most of them make more money renting out GPU-based compute.

In the end, we’re back to where we started, believing that custom silicon makes a lot of sense for the biggest players, which is one key reason we took a position in Broadcom to begin with. But we know that Nvidia’s GPUs have far more reach, thanks to their flexibility across many different types of workloads and a long history that has resulted in broad-based adoption, portability from one cloud or on-premises infrastructure to another, and the largest software library around. Additionally, when we consider sovereign AI spending, nation-state buyers are going to be far more interested in a flexible, open ecosystem like Nvidia’s, which lets them write their own code with more control, than in a more specialized, closed ecosystem that puts them more at the mercy of a U.S. company. Consider that Google isn’t even allowed to operate in China: are Chinese buyers really going to demand Google TPUs, especially if President Donald Trump authorizes Nvidia’s H200 chips for sale into China? Cost savings are important, but from the perspective of a sovereign entity, national security is the priority. The introduction of AI agents may also change some of these dynamics, as it may become easier to switch from one infrastructure to another if agents can be deployed to, say, convert CUDA-based programs into something that will run on a TPU.
However, for the time being, we don’t think the introduction of Gemini 3 is enough to derail the demand Nvidia spoke about, or to put on hold the vast number of deals it has made in recent months. Some may argue that renting out compute (infrastructure-as-a-service, or IaaS) will become less relevant as companies like Alphabet turn instead to selling access to their models through an application programming interface (API), a move toward a model-as-a-service (MaaS) business model. It’s a trend we expect to hear more about in a post-Gemini 3 world. However, we’re not at the point of it altering our investment thesis on Nvidia or the broader AI cohort.

Nonetheless, investors would be remiss not to keep in mind this effort to move away from Nvidia chips in certain instances, as well as Alphabet’s potential move beyond IaaS altogether to a new MaaS business model. Even in that scenario, the world wouldn’t need less compute; end customers may simply become a bit less picky about the hardware their applications run on, since a MaaS model would let the API provider choose the hardware based on cost.

While mindful of the evolving playing field, we see no major change to our view of the AI space. We still think Nvidia is a must-own name and that Broadcom is the way to play the custom silicon space. However, the introduction of Gemini 3 should wake investors up to the changes happening under the surface, and the potential risks they may bring, in different ways, to the juggernauts driving AI innovation.

(Jim Cramer’s Charitable Trust is long NVDA, AVGO, AMZN, META, MSFT. See here for a full list of the stocks.)

As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust’s portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.

THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
