While aiming to demystify the technology in this year’s Christmas lectures, Prof Michael Wooldridge acknowledges genuine, if less speculative, concerns about artificial intelligence
Artificial intelligence has been described as an extinction-level risk on a par with pandemics, yet Prof Michael Wooldridge, the AI pioneer due to deliver this year’s Royal Institution Christmas lectures, is untroubled by such fears.
Wooldridge, a professor of computer science at the University of Oxford, is more concerned about AI becoming an overbearing boss: monitoring employees’ emails, giving relentless feedback and even deciding who should be fired.
He said tools with such disconcerting capabilities already exist. Through Britain’s celebrated science lectures, Wooldridge aims to demystify AI, especially as 2023 was the year tools such as ChatGPT became widely available. Their allure, he warns, can be deceptive.
“This is the first time we have had AI that resembles the long-promised AI of films, games and literature,” he remarked.
However, he emphasized that there was nothing magical or mystical about tools such as ChatGPT.
“When audiences witness the inner workings of this technology during the [Christmas] lectures, they’ll encounter unexpected revelations,” Wooldridge explained. “This understanding will empower them to embrace a world where AI is merely another tool, akin to a pocket calculator or computer.”
He will not be alone: robots, deepfakes and prominent figures in AI research will also take part in exploring the technology.
Highlights of the lectures will include a Turing test, the famous challenge devised by Alan Turing. In essence, if a person holding a written conversation cannot tell whether the replies are coming from a human or a machine, the machine is deemed to have shown human-like understanding.
While some experts insist the test has yet to be passed, others disagree.
“Some of my colleagues believe that we’ve essentially passed the Turing test,” Wooldridge remarked. “Sometime in the past few years, quietly, the technology advanced to the point where it can generate text indistinguishable from what a human would create.”
However, Wooldridge maintains a contrasting perspective.
“I believe this indicates that while the Turing test holds significance due to its simplicity, elegance, and historical relevance, it doesn’t truly serve as a comprehensive assessment for artificial intelligence,” he stated.
From the professor’s viewpoint, one intriguing aspect of contemporary technology is that it allows questions once confined to the realm of philosophy, such as whether machines could be conscious, to be examined empirically.
“Our understanding of human consciousness remains quite limited,” Wooldridge acknowledged. Yet, he noted, there is a strong argument that experience is crucial to it.
For instance, humans can perceive the aroma and flavor of coffee, whereas large language models such as ChatGPT have no such experiences.
Among the risks Wooldridge does take seriously is AI’s capacity to manipulate people. “It can analyze your social media stream, work out your political leanings, and then feed you fabricated stories in an attempt to influence your behavior, for example to change how you vote,” he explained.
Other concerns include the potential for systems such as ChatGPT to give flawed medical advice, and for AI systems to inadvertently amplify biases present in their training data. Some also worry about unforeseen consequences of AI use, including systems developing preferences misaligned with human values, although Wooldridge contends this is improbable with existing technology.
According to Wooldridge, the answer to tackling the risks that do exist lies in encouraging skepticism, not least because ChatGPT is prone to errors, and in building mechanisms for transparency and accountability.
However, he chose not to sign the statement from the Center for AI Safety, or a similar letter from the Future of Life Institute, both issued this year to warn of the technology’s dangers.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the former statement said.
“The reason I decided not to sign is that the statements muddled immediate concerns with exceedingly speculative long-term ones,” Wooldridge explained. He said there were conceivable “spectacularly unwise actions” involving AI that could pose a threat to humanity, but noted that no one was seriously proposing to hand AI control of a nuclear arsenal, for instance.
“If we’re not handing over control of lethal systems to AI, it becomes much harder to see how it could genuinely pose an existential threat,” he reasoned.
Although Wooldridge welcomes the global summit on AI safety planned for the autumn and the establishment of a UK taskforce to develop safe and reliable large language models, he remains unconvinced by the parallels some have drawn between the concerns of today’s AI researchers and those J Robert Oppenheimer voiced about the invention of the nuclear bomb.