Sam Altman, CEO of OpenAI, has resigned from the company’s Safety and Security Committee. The committee was established in May to oversee critical safety decisions related to OpenAI’s projects and operations.
In a blog post released today, OpenAI announced that the committee would transition into an “independent” board oversight group.
Carnegie Mellon professor Zico Kolter will chair the newly constituted committee, which will include Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all of whom are existing OpenAI board members.
According to OpenAI, the committee, which had been involved in the safety review of OpenAI’s latest AI model, o1, will continue to receive regular updates from the company’s safety and security teams.
It will also retain the authority to delay releases until safety issues are resolved.
“The Safety and Security Committee will continue to receive technical assessments for current and future models and ongoing post-release monitoring reports,” OpenAI stated.
“We are enhancing our model launch processes to establish a comprehensive safety and security framework with clearly defined criteria.”
Altman’s departure from the committee follows scrutiny from U.S. lawmakers. This summer, five U.S. senators wrote to Altman raising concerns about OpenAI’s safety policies.
Additionally, a significant portion of OpenAI’s staff focused on AI’s long-term risks has departed, with former researchers accusing Altman of opposing meaningful AI regulation in favor of policies that benefit OpenAI’s commercial interests.
Amid this criticism, OpenAI has sharply increased its federal lobbying expenditures, budgeting $800,000 for the first half of 2024, up from $260,000 for all of the previous year.
Altman has also joined the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations on AI development and deployment within U.S. critical infrastructure.
Despite Altman’s exit, doubts remain about whether the Safety and Security Committee would be willing to make decisions that seriously constrain OpenAI’s commercial strategy.
OpenAI had previously stated that it would address “valid criticisms” through the committee, though what constitutes “valid” remains subjective.
In an op-ed for The Economist in May, former OpenAI board members Helen Toner and Tasha McCauley expressed doubts about OpenAI’s ability to self-regulate effectively.
“Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote.
OpenAI’s financial ambitions are also growing: the company is reportedly pursuing a $6.5 billion funding round that would value it at over $150 billion.
There is speculation that OpenAI may abandon its hybrid nonprofit structure, which was designed to cap investor returns and keep the company aligned with its mission of developing artificial general intelligence for the benefit of humanity.