It's not every day that two of AI's leading thinkers land on the same page about the future—but that's exactly what seems to be happening. Leopold Aschenbrenner, known for his work on AI safety and alignment, predicted in his essay Situational Awareness that AI systems would start automating research work by 2027. Now, Sam Altman is saying similar things. The implication? We're moving toward a world where AI doesn't just answer questions—it actively contributes to building the next generation of AI.
What Aschenbrenner and Altman Are Saying
AI observer Chubby highlights how Leopold Aschenbrenner's 2024 essay Situational Awareness predicted that AI would automate research by 2027—a vision OpenAI's Sam Altman now seems to be echoing.
Both voices converge on a few key ideas about where AI is headed:
- AI will design AI – Aschenbrenner envisions models that architect and test new systems on their own, and Altman's recent comments suggest this could happen sooner than we think
- Research cycles will shrink dramatically – Tasks that take human researchers months could be completed in days or hours as AI takes over model design, training, and fine-tuning
- Safety and alignment become critical – Both emphasize the risk: as AI systems self-improve, keeping them aligned with human values becomes harder—and more urgent
- The role of human researchers will shift – Instead of doing the work, humans may focus more on oversight, strategy, and ensuring AI stays on track
Why This Matters
If Aschenbrenner and Altman are right, we're looking at a step change in how fast AI evolves. Innovation could accelerate dramatically, but so could the risks. The countries and companies that lead in automated AI research stand to gain enormous influence—economically and geopolitically. Meanwhile, the job of an AI researcher might look very different in just a few years.
The bottom line? AI is on the verge of becoming its own best researcher. That's thrilling—and a little unnerving. The challenge now is making sure these self-improving systems stay aligned with what we actually want them to do.
Usman Salis