Physicist Max Tegmark says tech executives could not halt AI development amid “race to the bottom”
The scientist behind a landmark letter calling for a pause in the development of powerful artificial intelligence systems has said technology executives did not stop their work because they were locked in a “race to the bottom”.
Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in the development of giant AI systems. Despite attracting more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, it failed to secure a pause in the development of the most ambitious systems.
Speaking to the Guardian six months on, Tegmark said he had not expected the letter to stop tech companies from working towards AI models even more powerful than GPT-4, the large language model that powers ChatGPT, because competition had become so intense.
“In my discussions with corporate leaders, I sensed that many of them privately wanted a pause, but they were trapped in a race to the bottom against one another. As a result, no single company could afford to pause alone,” he said.
The letter warned of an out-of-control race to create intelligences that no one could understand, predict or reliably manage. It called on governments to intervene if a moratorium on systems more powerful than GPT-4 could not be agreed between leading AI companies such as Google, OpenAI, the developer of ChatGPT, and Microsoft.
The letter posed stark questions: should we press ahead with non-human intelligences that might eventually surpass, outwit, render obsolete and supplant us, and are we willing to risk losing control of our civilisation?
Nonetheless, Tegmark, a professor of physics at the Massachusetts Institute of Technology (MIT), considered the letter a success.
“The impact of the letter has been more significant than I initially anticipated,” he said, pointing to a political awakening on AI that has included US Senate hearings with tech executives and the UK government convening a global summit on AI safety in November.
Tegmark said expressing concern about AI had gone from being taboo to a mainstream view since the letter’s publication. The letter from his thinktank was followed in May by a statement from the Center for AI Safety, endorsed by numerous tech executives and academics, declaring that AI should be regarded as a societal risk on a par with pandemics and nuclear war.
“I sensed there was a significant amount of bottled-up apprehension regarding the rapid advancement of AI, concerns that people worldwide hesitated to voice for fear of being seen as alarmist critics. The letter provided legitimacy to these discussions, rendering them socially acceptable,” Tegmark explained.
Tegmark cautioned against characterizing the emergence of digital “god-like general intelligence” as a distant future threat, noting that some AI experts believe it could materialize in just a few years.
The Swedish-American scientist said he was enthusiastic about the UK AI safety summit, to be held at Bletchley Park in November, describing it as a “remarkable initiative”. His thinktank has set out three goals for the summit: building a shared understanding of the severity of AI-related risks, recognising the need for a coordinated global response, and embracing the urgency of government intervention.
He also stressed the continued need for a pause in AI development until universally agreed safety standards are in place. “Advancing models beyond our current capabilities must be temporarily halted until they can adhere to universally agreed-upon safety criteria,” he said, adding: “Reaching a consensus on these safety standards will inherently lead to the pause.”
Tegmark also called on governments to address open-source AI models that can be accessed and adapted by the public. Mark Zuckerberg’s Meta recently released an open-source large language model called Llama 2, a move one UK expert warned was akin to “providing individuals with a blueprint for constructing a nuclear bomb”.