OpenAI has drawn a direct line between compute scale and revenue growth, making it clear that infrastructure capacity has become the bottleneck in the AI economy. The company's chief financial officer explained how expanding its power footprint from around 200 megawatts to 2 gigawatts coincided with annual recurring revenue climbing from roughly $2 billion to more than $20 billion. Even after that massive expansion, OpenAI says it remains compute-constrained: demand is outpacing what the company can actually deliver.
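A quick sanity check of those figures supports the point: at both scale points, the implied revenue per megawatt works out to roughly the same value, which is exactly what a "revenue tracks compute" relationship would predict. The sketch below is a minimal back-of-envelope calculation using the article's approximate numbers; the function and exact values are illustrative, not OpenAI disclosures.

```python
# Back-of-envelope check of the figures above: implied ARR per megawatt
# of deployed power at both scale points. Numbers are the article's
# rough approximations, not exact company disclosures.

def arr_per_mw(arr_usd_billions: float, power_mw: float) -> float:
    """Return implied ARR per megawatt, in millions of USD."""
    return arr_usd_billions * 1_000 / power_mw

before = arr_per_mw(arr_usd_billions=2, power_mw=200)      # ~200 MW era
after = arr_per_mw(arr_usd_billions=20, power_mw=2_000)    # ~2 GW era

print(f"Before scale-up: ${before:.0f}M ARR per MW")
print(f"After scale-up:  ${after:.0f}M ARR per MW")
# Both come out to ~$10M ARR/MW: revenue scaled almost linearly
# with power capacity across a tenfold expansion.
```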
What's emerging here is a new model for AI monetization, one tied to physical infrastructure rather than typical software metrics. OpenAI's numbers show that revenue growth essentially mirrors how much compute the company can bring online and keep running. The fact that it is still hitting limits after a tenfold capacity increase tells you something important: the real ceiling right now isn't finding customers, it's finding enough power and hardware to serve them.
This pattern is playing out across the entire AI sector, where scaling has become a question of securing power, data centers, and specialized chips. Moving from hundreds of megawatts to gigawatt-scale operations shows just how drastically AI workloads are reshaping where capital gets deployed. The focus isn't on squeezing out marginal efficiency gains anymore; it's about building and running bigger compute clusters fast enough to keep up with demand.
The takeaway here is structural. If AI revenue keeps tracking compute availability this closely, growth is going to be governed by physical limits, not software dynamics. OpenAI is calling this a paradigm shift, and rightly so. How quickly the AI sector can expand now depends less on innovation cycles and more on how fast massive compute resources can be built, powered, and integrated into live systems.
Sergey Diakov