Reports indicate safety concerns over new model Q*, voiced to the board before CEO Sam Altman’s removal
Before Sam Altman's dismissal, OpenAI was reportedly developing an advanced system, known as Q*, that raised safety concerns among staff. According to Reuters, some OpenAI researchers wrote to the board of directors before Altman's departure, warning that the model could pose a threat to humanity.
The artificial intelligence model, pronounced as “Q-Star,” demonstrated the capability to solve novel basic math problems, as outlined by the tech news site the Information. The swift development pace of the system caused apprehension among some safety researchers. The ability to solve unfamiliar math problems is considered a noteworthy advancement in AI.
Following days of upheaval at San Francisco-based OpenAI, the board dismissed Altman last Friday but reinstated him on Tuesday night after almost all of the company's 750 staff threatened to resign if he was not brought back. Altman also had the backing of Microsoft, the company's biggest investor.
Many experts worry that companies such as OpenAI are moving too fast towards artificial general intelligence (AGI), a system capable of performing a wide range of tasks at or beyond human levels of intelligence, and one that could, in theory, evade human control.
According to Andrew Rogoyski from the Institute for People-Centred AI at the University of Surrey, a notable advancement would be the ability of a model to solve math problems not present in its training set.
“Many generative AI systems recycle or reshape existing knowledge, encompassing text, images, or mathematical solutions already stored in libraries. If an AI can tackle a problem without having encountered the solution in its extensive training sets, it represents a significant breakthrough, even if the math involved is relatively straightforward. The prospect of solving intricate, unseen mathematical problems would be even more thrilling,” Rogoyski stated.
Speaking last Thursday, a day before his unexpected dismissal, Altman hinted at another breakthrough by the company behind ChatGPT. During the Asia-Pacific Economic Cooperation (Apec) summit, he shared, “Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime.”
Initially established as a nonprofit with a board overseeing a commercial subsidiary led by Altman, OpenAI now has Microsoft as its principal investor in the for-profit venture. As part of the preliminary agreement for Altman’s reinstatement, OpenAI will have a new board chaired by Bret Taylor, a former co-CEO of software company Salesforce.
The developer of ChatGPT asserts its establishment with the aim of creating “safe and beneficial artificial general intelligence for the benefit of humanity.” The for-profit entity is declared to be “legally bound to pursue the nonprofit’s mission.”
Amid speculation that Altman was dismissed for jeopardizing the company's core mission of developing AI safely, Emmett Shear, his temporary successor as interim chief executive, said this week that the board had not removed Altman over any specific disagreement on safety.