Former OpenAI security researcher Steven Adler has resigned, voicing serious concerns over the rapid advancement of artificial intelligence (AI) and its potential existential threats to humanity.
Adler, alongside prominent figures like UC Berkeley professor Stuart Russell, warns that the accelerated pursuit of artificial general intelligence (AGI)—systems capable of matching or surpassing human intelligence across a wide range of tasks—could result in catastrophic outcomes without robust safety measures.
OpenAI has faced mounting criticism for its internal practices and waning emphasis on ethical responsibility, with allegations of a restrictive culture and a shift in priorities away from AI safety.
A growing number of safety-oriented researchers have departed OpenAI, reflecting a troubling trend where advocates for caution and ethical considerations are increasingly marginalized.
The global AI race has also escalated into a geopolitical competition, with governments and corporations prioritizing dominance over precaution, raising fears that critical safeguards are being sacrificed in the rush to innovate.
In a notable departure from one of the world’s leading AI labs, Steven Adler announced his resignation on X (formerly Twitter), reigniting debates about the ethical and safety implications of AGI. Adler expressed deep apprehension over the industry’s direction, warning that competitive pressures are pushing labs to prioritize speed over alignment. “Even if one lab tries to act responsibly, others may cut corners to keep up, creating a dangerous feedback loop,” Adler stated.
A race to the brink

Adler's concerns align with those of Stuart Russell, who has described the AGI race as akin to "a sprint toward the edge of a cliff." Russell and others have cautioned that, without proper controls, AGI could lead to unpredictable or even catastrophic outcomes, posing risks as severe as human extinction. "Even the CEOs leading this charge acknowledge that the winner of this race faces an overwhelming chance of causing existential harm," Russell told the Financial Times.
These risks are not hypothetical. If AGI operates outside the bounds of human values, its behavior could become uncontrollable or even hostile. Adler noted that systemic pressures within the AI industry encourage labs to "cut corners," creating a perilous imbalance in which safety is routinely sidelined.
OpenAI under scrutiny

Adler's resignation is not the first to raise questions about OpenAI's commitment to safety. The tragic death of former researcher Suchir Balaji in late 2024, reportedly by suicide, cast a somber light on the company's culture. Balaji, who had become a whistleblower, criticized OpenAI for its stringent confidentiality agreements and lack of transparency.
Other high-profile departures have further amplified these concerns. In 2024, OpenAI co-founder Ilya Sutskever and Jan Leike, who co-led the superalignment team, also left the organization. Leike publicly criticized OpenAI's focus on flashy products at the expense of safety, remarking, "The commitment to safety has taken a backseat as the race for market dominance intensifies."
These departures underscore a disturbing reality: as competition heats up, ethical voices are increasingly drowned out. Adler’s resignation is a stark reminder that the pursuit of AGI is not just a technological challenge but an ethical one as well.
A geopolitical battleground

The AI race has grown beyond the private sector, becoming a critical issue on the global stage. President Donald Trump has pledged to dismantle policies he claims stifle AI innovation, vowing to align U.S. development with "common sense" and national interests. Meanwhile, OpenAI recently unveiled ChatGPT Gov, a tool designed for government agencies, highlighting the deepening ties between AI and national security.
However, Adler and other experts fear that this rush for geopolitical dominance risks sidelining essential safety measures. Projects like ChatGPT Gov may enhance competitiveness, but the potential costs—if safety concerns are ignored—could be catastrophic.
As Adler steps away, his warning resonates: humanity's future hinges on striking a balance between innovation and precaution before it is too late.