Jan Leike, a prominent safety researcher at the company behind ChatGPT, resigned shortly after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI, the company behind ChatGPT, has said it is prioritizing “shiny products” over safety, revealing that he resigned after a disagreement over fundamental goals reached a “breaking point.”
Jan Leike, a key safety researcher at OpenAI who served as its co-head of superalignment, was responsible for ensuring that advanced AI systems remained aligned with human values and objectives. His remarks come ahead of a global artificial intelligence summit in Seoul, where policymakers, experts, and technology executives will discuss the oversight of AI.
Leike resigned shortly after the San Francisco-based company unveiled its latest AI model, GPT-4o. His departure means two senior safety figures have left OpenAI this week, following the resignation of Ilya Sutskever, the company’s co-founder and fellow co-head of superalignment.
In a post on X on Friday, Leike outlined the reasons for his departure, citing a decline in the priority given to safety culture.
“Over the past few years, safety culture and processes have been overshadowed by a focus on shiny products,” he wrote.
OpenAI was established with the objective of ensuring that artificial general intelligence, which it defines as “AI systems that are generally more intelligent than humans,” serves the greater good of humanity. In his X posts, Leike mentioned that he had been in disagreement with OpenAI’s leadership regarding the company’s priorities for some time, and that this disagreement had “finally reached a breaking point.”
Leike suggested that OpenAI, known for developing the Dall-E image generator and the Sora video generator, should allocate more resources to address issues such as safety, social impact, confidentiality, and security in its upcoming models.
“These challenges are extremely difficult to address correctly, and I am worried that we are not heading in the right direction,” he wrote, noting that it was becoming “increasingly challenging” for his team to conduct its research.
“Developing machines smarter than humans is inherently risky. OpenAI bears a tremendous responsibility on behalf of humanity,” Leike stated, emphasizing that OpenAI “must prioritize safety as an AGI company.”
Sam Altman, OpenAI’s CEO, replied to Leike’s thread with a post on X, expressing gratitude to his former colleague for his contributions to the company’s safety culture.
“He is correct that we have much more to accomplish, and we are dedicated to achieving it,” he wrote.
Sutskever, who also served as OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI “will develop AGI that is both safe and beneficial” under its current leadership. Sutskever initially supported Altman’s removal as OpenAI’s leader last November, but later endorsed his reinstatement after a period of internal turmoil at the company.
Leike’s cautionary remarks coincide with a panel of international AI experts releasing their first report on AI safety, which noted a lack of consensus regarding the likelihood of powerful AI systems circumventing human control. The report cautioned that regulators could struggle to keep pace with rapid technological advances, pointing to a “potential mismatch between the speed of technological progress and the pace of regulatory response.”