Yoshua Bengio, a pioneer of modern AI, says independent oversight is needed as the technology advances rapidly
Companies developing powerful artificial intelligence systems should include independent board members who represent the “interests of society,” according to one of the modern pioneers of the technology.

Yoshua Bengio, a co-recipient of the 2018 Turing Award, often dubbed the “Nobel prize of computing,” said AI firms needed oversight from public representatives as the technology advances rapidly.

Speaking in the aftermath of the boardroom upheaval at OpenAI, the developer of ChatGPT, which saw the departure and return of chief executive Sam Altman, Bengio stressed the importance of a “democratic process” for monitoring developments in the field.
“How do we ensure that these advancements occur without posing a threat to the public? How do we prevent their misuse to consolidate power?” the AI pioneer told The Guardian.
“In my view, the solution is evident in principle. We require democratic governance. Organizations must have an inclusive board consisting of individuals who can oversee operations, distinct from regulators. While regulators are necessary, we also need independent individuals within these organizations who represent societal interests.”
Bengio, a professor at the University of Montreal and the founder and scientific director of Mila, the Quebec Artificial Intelligence Institute, expressed concern that, despite the recent upheaval at OpenAI, there would be no deceleration in AI development across the tech industry.
In March, Bengio joined thousands of prominent figures in the tech industry, including Elon Musk, in signing an open letter advocating for a six-month halt in the development of the most potent AI systems.
“I worry that we won’t see a deceleration,” he remarked. “It seems like there will be an acceleration without the necessary safeguards, possibly with a heightened emphasis on competing against other players and winning the game, rather than prioritizing the protection and safety of the public.”
Following the reinstatement of Altman as OpenAI CEO, just days after his removal, reports surfaced indicating that, before his dismissal, the company had been developing an AI model whose capabilities had raised concerns among some of its researchers.
Concerns about AI safety range from the mass dissemination of misinformation to biased outcomes and an accelerated push toward artificial general intelligence (AGI), a system capable of performing tasks at or beyond human levels of intelligence and potentially eluding human control.
“Anything that expedites the path to AGI is something we should be wary of, at least until we establish the appropriate safeguards, which I believe are still lacking,” Bengio remarked.
The newly restructured OpenAI board includes independent members and is led by Bret Taylor, the former chair of Twitter. It retains one member from the previous board that dismissed Altman, tech entrepreneur Adam D’Angelo. However, Helen Toner, an independent board member and AI safety researcher who co-authored a paper expressing concerns about the impact of ChatGPT’s release on the pace of AI development elsewhere, has departed.
Bengio also expressed reservations about a voluntary agreement between governments and AI companies, formed at last month’s UK-hosted global AI safety summit, to collaborate on testing potent AI models both before and after deployment. He noted that the process “favors companies” by placing the responsibility on governments to identify issues with the models rather than requiring companies to demonstrate the safety of their technology.
“A more effective approach, in my opinion, is for companies to bear the responsibility of proving to regulators that their system is trustworthy, similar to how we require pharmaceutical companies to conduct clinical studies on their products,” he stated.
“The government doesn’t conduct the clinical studies; it’s the pharmaceutical industry. They must present scientific evidence, such as a statistical evaluation demonstrating, ‘with a very high probability, the drug is not toxic.’ The government then reviews the report and the process, granting approval for commercialization.”
Bengio, who participated in the Bletchley summit, has been appointed as the chair of the inaugural “state of AI science” report, slated for publication before an upcoming AI summit in Korea in May. He mentioned that the report would have a strong focus on safety and expressed the hope that it would be released every six months.
“Hopefully, there’ll be one every six months because the technology moves pretty fast,” he added.
Bengio outlined a potential timeframe for the emergence of a system capable of eluding human control, estimating it to be between five and 20 years.
“I personally anticipate a loss of control in as little as five years, and possibly up to 20. However, there’s a lot of uncertainty; it could happen more rapidly. If you’re the government, you should safeguard the public against even a 1% chance that something adverse could occur with these systems,” he remarked.
He also expressed approval for Joe Biden’s executive order on AI, released shortly before the Bletchley summit, characterizing it as a “positive development” that steered the industry “towards improved regulation.” However, he stressed the importance of other countries following suit. The White House order includes provisions mandating tech companies to share test results from the most powerful AI systems with the US government.
Bengio received the 2018 Turing Award alongside Geoffrey Hinton and Yann LeCun; the three were recognized for conceptual and engineering breakthroughs that established deep neural networks as a fundamental element of computing.