The controversy around OpenAI’s approach to safety began when Ilya Sutskever and Jan Leike, leaders of the so-called “superalignment” team focused on AI safety, left the company. Sutskever had played a key role in the surprise firing of CEO Sam Altman last year, but later reversed course and supported Altman’s return; it seems the CEO still held a grudge against him for that. Leike, for his part, criticized the company for not investing enough in AI safety, saying that tensions with OpenAI’s leadership had “reached a critical point.”
In a video interview with Dwarkesh Patel published on Tuesday, Leopold Aschenbrenner, a former OpenAI safety researcher, joined the growing chorus of former and current OpenAI employees criticizing CEO Sam Altman for prioritizing profits over responsible AI development. Aschenbrenner was fired for voicing his concerns in writing. In the interview, he discussed internal conflicts over priorities, suggesting a shift towards rapid growth and deployment of AI models at the expense of safety.
The Safety Illusion
Aschenbrenner revealed that OpenAI’s actions contradicted its public statements on safety. When he raised safety concerns, the company would respond with, “Safety is our top priority.” However, when it came time to commit serious resources or accept trade-offs to implement basic safeguards, safety was suddenly no longer a priority.
This aligns with claims by Leike, who said the team was “swimming against the tide” and that “the culture of safety and processes took a backseat to shiny products” under Altman’s leadership.
The former employee also voiced particular concern about the development of AGI (AI that matches or exceeds human intelligence in any field), emphasizing the importance of a cautious approach, especially since many fear that China is racing to surpass the United States in AGI research.
Aschenbrenner highlighted the questions he was asked when he was fired. They concerned his views on AI progress, AGI, the appropriate level of safety for AGI, whether the government should be involved in AGI, his and the superalignment team’s loyalty to the company, and what he was doing during OpenAI board meetings.
Just a few weeks ago, it came to light that OpenAI requires its employees to sign non-disclosure agreements (NDAs) that prevent them from discussing the company’s safety practices. Aschenbrenner said he didn’t sign such an NDA, but he was offered about $1 million in stock options. A considerable sum to appease an employee who knows too much.
Suspicious Staff Restructuring
The departure of prominent safety team members, including Sutskever and Leike, drew additional scrutiny. Subsequently, the entire team was disbanded, and a new safety team was announced. Leading this team (not the Terminator, to everyone’s surprise) was none other than CEO Sam Altman himself.
In a blog post, OpenAI announced that the new committee would also be led by Bret Taylor, the company’s board chair, and board member Nicole Seligman:
The first task of the Safety and Security Committee will be to assess and further develop OpenAI’s safety processes and measures over the next 90 days. Upon completion of the 90 days, the Safety and Security Committee will share its recommendations with the full board. After a comprehensive review by the Board of Directors, OpenAI will publicly share updated information on the adopted recommendations in a manner consistent with security requirements.
OpenAI also announced that it has begun training a new AI model to replace the one currently powering ChatGPT. The company stated that the model, which will succeed GPT-4, is another step towards achieving artificial general intelligence.
But who really knows whether the team is being replaced for the officially stated reasons, or whether this is an attempt to preempt another scandal? Perhaps soon we’ll hear claims from former employees that ChatGPT is programmed to give away all your embarrassing queries, like the time you asked it for the best way to hide a mountain of dirty laundry before your in-laws showed up unexpectedly, or how to get out of a Zoom meeting without anyone noticing.