Nvidia just dropped some major details about its next-gen Vera Rubin GPU platform, and the numbers are pretty wild. CEO Jensen Huang says Grok 5 will hit 7 trillion parameters, a sign of just how massive these AI models are getting. The company shared the details during a presentation on its official YouTube channel, focusing on what it takes to train these monster models efficiently.
Here's where it gets interesting: for a one-month training window, Rubin needs only a quarter of the systems Blackwell requires to train the same frontier model. That's 75% less physical hardware for the exact same job. Nvidia's calling this "factory throughput," and it's claiming roughly a 10x gain over Blackwell.
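To make that hardware math concrete, here's a quick back-of-the-envelope sketch. The 4x reduction (a quarter of the systems) is Nvidia's claim; the Blackwell fleet size is a made-up illustrative number, not anything from the presentation.

```python
# Back-of-the-envelope: how many systems does a fixed one-month training job need?
# blackwell_systems is a hypothetical fleet size; only the 4x ratio comes from Nvidia.

blackwell_systems = 400        # hypothetical fleet that trains the model in one month
rubin_reduction_factor = 4     # Nvidia's claim: Rubin needs a quarter of the systems

rubin_systems = blackwell_systems / rubin_reduction_factor
hardware_saved = 1 - rubin_systems / blackwell_systems

print(f"Rubin systems needed: {rubin_systems:.0f}")      # 100
print(f"Physical hardware saved: {hardware_saved:.0%}")  # 75%
```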
The improvements stack up fast when you look across generations. Blackwell already delivered roughly 10x better performance than Hopper, so layering Rubin's roughly 10x gain on top puts it at around 100x the throughput per watt of Hopper. Nvidia's pushing this metric hard because modern AI data centers are hitting power limits before they run out of space.
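The "100x" is just the two generational gains multiplied together. A minimal sketch of that arithmetic, treating both 10x figures as the rough, order-of-magnitude numbers Nvidia cites:

```python
# Compounding generational gains in factory throughput per watt.
# Both 10x figures are Nvidia's rough claims, not measured values.

blackwell_over_hopper = 10   # Blackwell vs. Hopper, roughly 10x
rubin_over_blackwell = 10    # Rubin vs. Blackwell, roughly 10x

rubin_over_hopper = blackwell_over_hopper * rubin_over_blackwell
print(f"Rubin vs. Hopper, throughput per watt: ~{rubin_over_hopper}x")  # ~100x
```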
Huang spelled it out pretty clearly: in a massive $50 billion data center running on 1 gigawatt of power, what matters most is how much work you can squeeze out of each watt. More efficiency means more usable compute without adding power capacity. Nvidia's betting that Rubin will be crucial for keeping these enormous AI training runs economically viable as models keep growing.
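Here's a small sketch of why perf-per-watt dominates once the power budget is fixed. The 1 GW figure is from Huang's example; the absolute throughput numbers are hypothetical and only the ratio between generations matters.

```python
# In a power-capped facility, total usable compute = power budget x work per watt.
# Throughput-per-watt values below are hypothetical placeholders.

power_budget_watts = 1e9           # 1 gigawatt facility, fixed
old_throughput_per_watt = 1.0      # baseline platform (arbitrary units of work per watt)
new_throughput_per_watt = 10.0     # ~10x more efficient next-gen platform

old_total_compute = power_budget_watts * old_throughput_per_watt
new_total_compute = power_budget_watts * new_throughput_per_watt

# Same building, same power bill, 10x the usable compute.
print(f"Compute gain at fixed power: {new_total_compute / old_total_compute:.0f}x")
```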
Eseandre Mordi