Nvidia (NVDA) is facing a new kind of challenge. Google is working on TorchTPU, a software project designed to run PyTorch directly on Google's own TPU chips. Unlike previous attempts to compete on hardware alone, this effort goes straight after Nvidia's software moat: the thing that has made switching away from Nvidia so painful for developers.
PyTorch is the framework most AI engineers actually use. It has been built around Nvidia's CUDA platform for years, which means if you're training models with PyTorch, you're probably locked into Nvidia GPUs. TorchTPU wants to change that by letting developers run the same PyTorch code on Google's chips without rewriting everything from scratch. If it works, companies could move workloads between platforms without the usual headaches.
Nvidia's stock has crushed the S&P 500 over the past few years, and a big part of that rally came from confidence in CUDA's stickiness. Developers don't just buy Nvidia chips; they build entire systems around them. Breaking that lock-in has been nearly impossible. But if Google succeeds, it opens the door for cloud providers and big AI users to shop around more freely, especially as Nvidia's pricing stays sky-high and chip shortages linger.
This isn't just about Nvidia. Amazon, Microsoft, and Google have all been pouring money into custom AI chips, but software compatibility has always been the blocker. If that barrier falls, competition heats up fast. TorchTPU isn't ready for prime time yet, but the direction is clear: the next battle in AI isn't just about who builds the fastest chip. It's about who controls the software layer underneath.
Eseandre Mordi