Tesla just dropped new details on its AI compute roadmap, and the scale it's targeting is genuinely wild. The update lays out how each hardware generation is built to push well past today's limits, starting small and ramping up in clear, structured stages through internally developed AI platforms and Dojo systems.
AI5 and AI6 are the first steps. Designed for tight spaces and early rollouts, they run at relatively low gigawatt-per-year levels and lay the groundwork. Then comes AI7 paired with Dojo 3, which is where things get serious: that combo is built to crack 10+ GW/year, a massive jump in raw compute power.
The roadmap's final stage pairs AI8 with Dojo 3, targeting 100+ gigawatts per year. Here's the thing: Tesla isn't just chasing short-term performance numbers. The whole strategy revolves around building hardware that can run continuously at enormous scale, with custom silicon as the backbone of it all.
Why does this matter? Compute at this scale is becoming the bottleneck for serious AI development: it drives energy demand, shapes infrastructure decisions, and determines who stays competitive long term. By mapping a path to civilization-scale AI compute, Tesla is positioning Dojo and its chip roadmap as a core part of its tech strategy, not just a side project.
Peter Smith