When AWS went down this week, it paralyzed thousands of developers relying on cloud-based AI coding tools. The outage disrupted Cursor, Codex, and Claude Code, leaving coders unable to work. But one developer's tweet captured a growing sentiment: maybe it's time to stop depending on the cloud entirely.
What started as sarcasm became a serious conversation about how developers build software in 2025.
When the Cloud Fails
AI engineer Ahmad Osman posted: "Can't write code because Cursor and Codex are both down thanks to the aws-us-east-1 outage? With one command and a few GPUs, you can route Claude Code to your own local LLM with ZERO downtime. Buy a GPU."
The AWS us-east-1 region is like the internet's central nervous system. This week's outage took out major AI coding tools: Codex went offline, Claude Code stopped responding, and Cursor became unusable. Even GitHub Copilot users reported problems.
The incident exposed an uncomfortable truth: modern AI development is built on infrastructure that can fail without warning, and when it does, there's nothing you can do but wait.
Osman's "Buy a GPU" phrase resonated because it represents a real alternative. Open-source models like Llama 3, Mistral, and Code Llama have made local deployment accessible. Tools like Ollama and LM Studio let you set up an AI coding assistant with a single command.
Running models locally means no downtime, no code leaving your machine for a third-party API, no usage caps, and the freedom to swap or fine-tune models as you see fit. You regain control of your development environment instead of waiting on a cloud provider's status page.
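As for pointing Claude Code itself at a local model, the usual trick is an Anthropic-compatible proxy (LiteLLM is one option) fronting the local server. A hedged sketch, assuming such a proxy is listening on port 4000 (LiteLLM's default) and that your Claude Code version honors the ANTHROPIC_BASE_URL gateway variable; the URL and token below are placeholders for your own setup:

```python
import os
import subprocess

# Point Claude Code at a local Anthropic-compatible proxy instead of
# the hosted API.
env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # local proxy, e.g. LiteLLM
env["ANTHROPIC_AUTH_TOKEN"] = "local-dev"            # placeholder; most local proxies won't check it

# Launch the CLI with the rerouted environment.
subprocess.run(["claude"], env=env)
```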
Why the Frustration
Cloud AI platforms come with strings attached. Usage caps, token limits, and sometimes "nerfed" model versions actively slow down development. Every code snippet sent to cloud AI gets processed on someone else's server—a compliance risk for companies handling sensitive data. Add unpredictable outages and rising costs, and developers are looking for alternatives.
This mirrors the shift from mainframe computing to personal computers—people eventually want ownership over their tools.
"Buy a GPU" isn't realistic for everyone. Running powerful AI models requires high-end hardware and technical knowledge—a significant investment for individual developers or small startups. Cloud platforms also scale instantly and make collaboration seamless.
The future is probably hybrid: local models for everyday coding, cloud services for massive workloads. This gives you resilience without sacrificing flexibility.
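In code, that hybrid is just a fallback path. A minimal sketch, reusing the local Ollama endpoint from earlier; cloud_fallback is a hypothetical stand-in for whatever hosted client you already use:

```python
import requests

LOCAL_URL = "http://localhost:11434/api/generate"

def complete(prompt: str, cloud_fallback) -> str:
    """Prefer the local model; fall back to a hosted one if it fails.

    cloud_fallback is any callable taking a prompt and returning text,
    e.g. a thin wrapper around whichever cloud API you already pay for.
    """
    try:
        resp = requests.post(
            LOCAL_URL,
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["response"]
    except requests.RequestException:
        # The local box is down or busy: rent capacity instead of waiting.
        return cloud_fallback(prompt)
```

Flip the order and you get the opposite policy: cloud first for capability, local as the outage hedge.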
The AWS outage was a reminder that centralized systems have inherent risks. Osman's tweet voiced a desire many developers share: to own their tools rather than rent them, to control their workflow rather than adapt to someone else's limits.
As open-source AI improves and hardware costs decline, local deployment will become more common. The question isn't whether local LLMs will replace cloud services entirely—it's how developers will balance reliability, cost, control, and capability when both options exist. But after this week, a lot more developers are considering buying that GPU.