Experts caution that advancing AI systems without safety checks is “completely reckless”
A group of senior experts, including two AI pioneers, has warned that powerful AI systems threaten social stability, and that AI companies must be held responsible for the harms their products cause. The warning was issued on Tuesday as international politicians, tech companies, academics and civil society figures prepare to convene at Bletchley Park next week for an AI safety summit.
One of the 23 experts who co-authored the policy proposals said that pursuing ever more powerful AI systems before understanding how to make them safe is “completely reckless.”
Stuart Russell, a professor of computer science at the University of California, Berkeley, said it was time to take advanced AI systems seriously, stressing that they are not mere playthings. Advancing their capabilities before understanding how to make them safe, he argued, is profoundly reckless.
He added: “AI companies face fewer regulations than sandwich shops.”
The document urged governments and companies alike to adopt a range of policies, including:
- Allocating one-third of government AI research and development funding, and one-third of company AI R&D resources, to ensuring the safe and ethical use of these systems.
- Granting independent auditors access to AI laboratories.
- Establishing a licensing system for building cutting-edge models.
- Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
- Making tech companies liable for foreseeable and preventable harms caused by their AI systems.
The document’s other co-authors include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” who won the 2018 ACM Turing Award, sometimes described as the Nobel prize of computing, for their work on AI.
Both Hinton and Bengio are among the 100 guests invited to the summit. Hinton resigned from Google this year, voicing concerns about what he called the “existential risk” posed by digital intelligence, while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in March in signing a letter that called for a moratorium on giant AI experiments.
Other co-authors of the recommendations include Yuval Noah Harari, the bestselling author of “Sapiens”; Daniel Kahneman, the Nobel laureate in economics; Sheila McIlraith, a professor of AI at the University of Toronto; and Andy Yao, the acclaimed Chinese computer scientist.
The authors warned that carelessly developed AI systems threaten to amplify social injustice, undermine established professions, erode social stability, enable large-scale criminal or terrorist activity, and weaken the shared understanding of reality that is foundational to society.
They cautioned that current AI systems are already displaying worrying capabilities that point toward the emergence of autonomous systems able to plan, pursue goals and act in the physical world. For instance, they noted that GPT-4, the AI model that powers the ChatGPT tool and was developed by the US firm OpenAI, has been able to design and execute chemistry experiments, browse the web, and use software tools, including other AI models.
The experts also warned that if highly advanced autonomous AI is developed, there is a risk of creating systems that independently pursue undesirable goals, and that it may prove difficult to keep them in check.
Other policy recommendations in the document include:
- Mandatory reporting of incidents involving models displaying concerning behavior.
- Implementation of measures to prevent hazardous models from self-replicating.
- Empowering regulators with the authority to halt the development of AI models demonstrating dangerous behavior.
The forthcoming safety summit will focus on existential risks posed by AI, such as its potential to help develop new bioweapons and to evade human control. The UK government, together with other participants, is working on a statement expected to underscore the scale of the threat from frontier AI, the term for advanced systems. While the summit is expected to set out the risks from AI and measures to mitigate them, it is not anticipated to formally establish a global regulatory body.
Some AI experts argue that fears of an existential threat to humanity are overblown. Yann LeCun, who shared the 2018 Turing award with Bengio and Hinton and is now chief AI scientist at Mark Zuckerberg’s Meta, will also attend the summit; in an interview with the Financial Times, he described the idea of AI exterminating humans as “preposterous.”
Nevertheless, the authors of the policy paper contended that if highly advanced autonomous AI systems emerged now, the world would not know how to make them safe or how to test their safety. They added that even if such knowledge existed, most countries lack the institutions needed to prevent misuse and enforce safe practices.