⬤ Peter Steinberger shared a reality check on AI-driven "manager" tools and multi-agent orchestrators. Sure, AI has made building systems far easier and faster, but here's the catch: it doesn't guarantee good ideas. AI can't reliably produce high-quality concepts unless humans are actually steering the ship with solid judgment.
⬤ Steinberger pointed out that many multi-agent setups look super productive on the surface: AI agents planning, delegating, and executing tasks all at once, which creates the illusion of serious momentum. But underneath all that activity, he warns, the quality of what's actually being produced can quietly tank. These orchestrators can burn through massive computational resources while delivering results that just don't hold up.
⬤ To drive the point home, Steinberger joked about a scenario he called "Gastown": mayors, overseers, and agents all talking to each other constantly. Despite all those layers and nonstop activity, the final output still falls apart. It's a perfect example of how piling on more agents or management layers doesn't fix the core problem: lack of judgment. Without taste, direction, and clear standards, AI systems just optimize for churning out tokens rather than delivering anything meaningful. A toy sketch of that failure mode follows below.
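To make the failure mode concrete, here is a minimal Python sketch of a layered orchestrator in the spirit of the "Gastown" joke. Everything here is hypothetical: the `Orchestrator` and `Agent` names, the roles, and the token costs are illustrative stand-ins, not Steinberger's actual tooling or any real framework. The point it demonstrates is that the activity metrics climb with every hop, whether or not anything of value is produced.

```python
import random
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Stand-in for an LLM-backed worker; name and role are hypothetical."""
    name: str
    role: str  # e.g. "planner", "manager", "worker"


@dataclass
class Orchestrator:
    agents: list[Agent]
    tokens_spent: int = 0
    tasks_dispatched: int = 0
    log: list[str] = field(default_factory=list)

    def run(self, goal: str, rounds: int = 3) -> None:
        """Fan the goal out through every layer, round after round.

        Every hop consumes tokens and emits activity, whether or not the
        underlying work is any good: these counters measure motion, not
        quality.
        """
        for r in range(rounds):
            for agent in self.agents:
                # Simulate an LLM call; cost scales with chatter, not value.
                cost = random.randint(500, 2000)
                self.tokens_spent += cost
                self.tasks_dispatched += 1
                self.log.append(
                    f"round {r}: {agent.role} {agent.name} "
                    f"handled '{goal}' ({cost} tokens)"
                )


if __name__ == "__main__":
    orch = Orchestrator(agents=[
        Agent("mayor", "planner"),      # the "Gastown" layers from the post
        Agent("overseer", "manager"),
        Agent("builder", "worker"),
    ])
    orch.run("ship the feature")
    # Busy-looking dashboard numbers, with zero signal about output quality:
    print(f"tasks dispatched: {orch.tasks_dispatched}")
    print(f"tokens spent:     {orch.tokens_spent}")
```

Nothing in the loop ever checks whether the goal was met; the dashboard numbers rise monotonically regardless, which is exactly why multi-agent activity can look like momentum even when nothing meaningful ships.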
⬤ This matters because it exposes a fundamental limitation in how AI tools should be evaluated and used. As more organizations jump on multi-agent frameworks to accelerate development and decision-making, the difference between busy work and real value becomes critical. Steinberger's take is clear: human direction isn't optional; it's essential for determining whether AI actually boosts productivity or just amplifies noise. How well companies manage this balance will shape future adoption strategies, resource allocation, and what we should realistically expect from AI-driven workflows.
Peter Smith