The debate around artificial intelligence tends to circle back to the same concerns: reasoning gaps, hallucinations, ethical dilemmas. But a recent observation from Y Combinator co-founder Paul Graham reframes the conversation entirely. He suggests that AI has already crossed a critical threshold, entering a self-sustaining phase of development comparable to the early Industrial Revolution.
"LLMs in their current form may not be able to do everything, but AI now has enough momentum that this won't matter," Graham wrote. "Beam engines couldn't do everything either, but they were enough to set off the Industrial Revolution."
The Historical Parallel That Changes Everything
Graham's comparison to the beam engine is more than a clever analogy. The beam engine, one of the earliest practical steam engines, was clunky, inefficient, and limited in scope. It couldn't power everything, and it certainly wasn't the final form of steam technology. Yet it accomplished something far more important than perfection: it proved that mechanical power could replace human labor at scale. That single breakthrough set in motion a cascade of innovations that transformed manufacturing, transportation, and society itself over the next two centuries.
Today's large language models occupy a strikingly similar position. ChatGPT, Claude, Gemini, and their peers are riddled with flaws when you look closely enough. They stumble over complex reasoning, sometimes generate confident nonsense, and struggle with consistency across longer conversations. But here's what matters more than their limitations: they work well enough to be useful across an astonishing range of applications, from writing code to analyzing data to automating customer service.
Why Momentum Trumps Perfection
The fundamental insight Graham offers is that technological revolutions don't wait for perfect tools. They begin the moment a technology becomes useful enough to justify further investment and development. LLMs crossed that threshold somewhere between 2022 and 2023, and the proof is everywhere. Companies like OpenAI, Anthropic, Google DeepMind, and xAI are pouring billions into scaling these systems, improving their efficiency, and expanding their capabilities. Every major tech company is racing to integrate AI into its products, and startups are building entirely new categories of software around these models.
The systems are already embedded in education, healthcare, software development, legal research, and creative work. Each deployment generates more data, reveals new use cases, and funds the next generation of improvements. This creates a feedback loop that accelerates progress independent of whether any single model is "good enough" by some abstract standard.
A Shift in Perspective That Matters
By drawing the beam engine parallel, Graham does something subtle but important: he moves the conversation away from whether current AI is ready to solve every problem, and toward recognizing that the transformation has already begun. The relevant question isn't whether GPT-5 or Claude 5 will achieve some theoretical milestone. It's whether the momentum already built into the ecosystem is sufficient to drive continuous improvement toward more capable systems.
History suggests the answer is yes. Steam power led to electric motors and industrial automation. Early computers led to the internet and mobile computing. Each generation of technology was imperfect when it arrived, yet sufficient to justify the next wave of development. The pattern holds remarkably well across eras of innovation.