The race toward Artificial General Intelligence (AGI), a level of AI capable of performing any intellectual task a human can, has sparked heated debates among researchers, tech leaders, and policymakers. One of the latest voices raising concerns is Steven Adler, a former OpenAI safety researcher, who made headlines when he announced his departure from the company in late 2024.
In a candid post on X (formerly Twitter), Adler described the global pursuit of AGI as a "very risky gamble," warning that AI labs are moving too fast without solving critical safety issues. His resignation adds to a growing list of AI safety researchers who have left OpenAI citing concerns over the company's priorities.
But Adler is not alone. Prominent AI researchers, including Stuart Russell from UC Berkeley, have expressed similar concerns, comparing the AGI race to running toward the edge of a cliff. With the recent emergence of DeepSeek, a Chinese AI company that has reportedly developed a model rivaling OpenAI’s technology, the competition for AI dominance is more intense than ever.
This article explores why Adler left OpenAI, the risks of unchecked AGI development, and the broader concerns surrounding AI safety and governance.
Steven Adler's Resignation: A Warning for AI Safety
A Wild Ride at OpenAI
Steven Adler, who worked as an AI safety lead at OpenAI, spent four years researching ways to align AI with human values. During his tenure, he contributed to OpenAI’s AI safety research, product launches, and long-term AI development strategies.
In his farewell post, he described his time at OpenAI as a “wild ride” but admitted that he was increasingly worried about the pace of AI development.
“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” — Steven Adler
His main concern is that AI labs, including OpenAI, are racing to develop AGI without properly addressing safety measures.
The AI Safety Dilemma: The Race Toward AGI
Adler’s resignation highlights a fundamental problem in the AI industry—the pressure to develop more advanced AI models as quickly as possible.
He argues that even if one company wants to develop AGI responsibly, competitors may cut corners to gain an advantage. This, in turn, forces all AI labs to accelerate their research, regardless of safety risks.
This concern is not new. Jan Leike, another former OpenAI safety researcher, left the company in 2024, openly criticizing OpenAI's shift in priorities.
“Over the past years, safety culture and processes have taken a backseat to shiny products.” — Jan Leike
Similarly, Ilya Sutskever, OpenAI’s former Chief Scientist, was reportedly one of the key figures behind Sam Altman’s brief removal as CEO in 2023, largely due to disagreements over AI safety.
With Adler’s resignation, it becomes clear that internal debates over AI safety and ethics continue to plague OpenAI.
The Global AGI Race: OpenAI vs. DeepSeek
China’s AI Breakthrough: DeepSeek’s Rapid Progress
Adler’s concerns are particularly relevant in light of the recent rise of DeepSeek, a Chinese AI company that has reportedly built a model rivaling OpenAI’s GPT-4.
DeepSeek’s breakthrough stunned Silicon Valley and even led to a temporary drop in U.S. tech stock values. The company’s ability to develop a highly capable AI model at a fraction of the cost of U.S. counterparts has raised alarms in both the tech industry and government circles.
Sam Altman’s Response
OpenAI CEO Sam Altman acknowledged DeepSeek’s rapid progress, calling it “invigorating” but also signaling that OpenAI would speed up the release of its upcoming AI models.
This response aligns with Adler’s warning: rather than slowing down and ensuring AI safety, leading AI companies may double down on development, further increasing risks.
The Risks of an Unchecked AGI Race
1. Lack of AI Alignment
AI alignment refers to ensuring that AI systems operate in harmony with human values and goals. However, as Adler pointed out, no AI lab has solved this problem. If AI systems are not properly aligned, they could behave in unintended or even harmful ways.
2. Ethical and Safety Concerns
AGI has the potential to surpass human intelligence, which means it could make decisions beyond human comprehension or control.
Stuart Russell, a renowned AI researcher, issued a dire warning:
“The AGI race is a race towards the edge of a cliff.” — Stuart Russell
3. Geopolitical and National Security Risks
The U.S.-China AI race is becoming a national security issue. If China gains a decisive lead in AGI development, it could reshape global power dynamics. The U.S. government is already investigating DeepSeek to assess potential security threats.
The Future of AI Safety and Governance
Adler and other AI researchers are calling for global cooperation in AI governance. Stronger regulations, ethical frameworks, and transparency are needed to ensure AGI is developed safely.
However, with AI labs rushing toward AGI for commercial and geopolitical reasons, it remains unclear whether safety concerns will take priority over innovation.
Conclusion
Steven Adler’s resignation highlights the growing divide between AI progress and AI safety. As companies like OpenAI and DeepSeek continue to push the boundaries of AGI, the world faces an urgent question: Will safety concerns be addressed before it’s too late?
FAQs
1. Why did Steven Adler leave OpenAI?
Steven Adler resigned over concerns that the industry, including OpenAI, is racing toward AGI without adequately addressing safety.
2. What is AGI?
Artificial General Intelligence (AGI) is AI that can perform any intellectual task that a human can.
3. Why is the AGI race considered dangerous?
Experts warn that rushing to develop AGI without solving alignment and safety issues poses existential risks to humanity.
4. What role does AI alignment play in AGI development?
AI alignment ensures that AI systems act according to human values. Currently, no lab has fully solved AI alignment.
5. Who are some other AI researchers who left OpenAI?
Prominent ex-OpenAI researchers include Jan Leike, Ilya Sutskever, and Daniel Kokotajlo, all of whom expressed AI safety concerns.
6. How is China’s DeepSeek impacting the AI race?
DeepSeek has developed an AI model that reportedly matches or surpasses OpenAI’s models, intensifying the AI competition.
7. What is Sam Altman’s stance on AI safety?
Sam Altman has acknowledged AI risks but continues to push for faster AI development.
8. How does AI regulation factor into AGI development?
Governments worldwide are discussing AI regulations, but clear global standards are lacking.
9. What are the risks of unaligned AGI?
An unaligned AGI could act unpredictably, bypass human control, and cause unintended consequences.
10. What can be done to ensure AI safety?
Experts recommend stronger AI governance, international cooperation, and more research into AI alignment.