A prominent researcher attending the AI safety summit in London this week cautions about a “genuine threat to public discourse”.
A senior industry figure, Aidan Gomez, co-author of a research paper integral to chatbot technology, argues that fixating on AI doomsday scenarios distracts from pressing issues such as widespread misinformation. Gomez, who is attending this week’s AI safety summit, supports studying long-term risks such as existential threats from AI, but warns that dwelling on them could divert politicians from addressing immediate potential harms. He believes that discussing existential risk in the context of public policy is unproductive and draws attention away from the more tangible and immediate risks that require the public sector’s attention.
As the CEO of Cohere, a North American company specializing in AI tools for businesses, including chatbots, Gomez is participating in the two-day summit commencing this Wednesday. In 2017, at the age of 20, Gomez was part of a team of researchers at Google that created the Transformer, the pivotal technology underpinning AI tools such as chatbots.
Gomez contends that AI, which encompasses computer systems capable of performing tasks typically associated with intelligent beings, is already extensively employed. He suggests that the summit should direct its attention toward these existing applications. Chatbots like ChatGPT and image generators such as Midjourney have astounded the public with their capacity to generate credible text and images based on simple text prompts.
“This technology is already integrated into billion-user products, such as those used by Google and other companies. This introduces a range of new risks that require discussion, none of which are of an existential or doomsday nature,” commented Gomez. “Our primary focus should be on aspects that are poised to impact people imminently or are actively affecting them, rather than engaging in more abstract, academic, or theoretical conversations about the distant future.”
Gomez expressed particular concern about misinformation, the dissemination of misleading or inaccurate information online. “Misinformation is a primary concern for me,” he emphasized. “These AI models have the capability to produce content that is exceedingly convincing, highly persuasive, and nearly indistinguishable from text, images, or media created by humans. Therefore, it is imperative that we urgently address this issue and determine how to empower the public to differentiate between these various forms of media.”
On the summit’s inaugural day, a variety of AI-related topics will be explored, encompassing concerns related to misinformation, such as its potential impact on elections and the erosion of social trust. On the following day, a select assembly of countries, experts, and technology executives, convened by Rishi Sunak, will deliberate on tangible measures to mitigate AI risks. Notably, the U.S. Vice President, Kamala Harris, will be among the attendees.
Gomez, who underscored the summit’s significance, said it is increasingly plausible that a legion of bots, software designed for repetitive tasks such as posting on social media, could disseminate AI-generated misinformation. “If this becomes a reality,” he cautioned, “it poses a genuine threat to democracy and the integrity of public discourse.”
Last week, the government released a series of documents outlining AI-related risks, which encompassed concerns like AI-generated misinformation and labor market disruption. In these documents, the government acknowledged that it couldn’t dismiss the possibility of AI development progressing to a point where it posed a threat to humanity.
One of the risk papers published last week stated, “Given the substantial uncertainty in forecasting AI advancements, there is insufficient evidence to definitively rule out the potential of highly capable Frontier AI systems, if misaligned or inadequately controlled, posing an existential threat.”
The document further noted that while many experts viewed this risk as highly improbable, it would necessitate the occurrence of various specific scenarios, including an advanced AI system gaining control over weapons or financial markets. Concerns about an existential threat from AI primarily revolve around the concept of artificial general intelligence, which refers to an AI system capable of performing diverse tasks at a level of intelligence equivalent to or surpassing human abilities. Such a system could theoretically replicate itself, elude human control, and make decisions contrary to human interests.
These concerns prompted the issuance of an open letter in March, which garnered the signatures of over 30,000 technology professionals and experts, including Elon Musk, advocating for a six-month halt to massive AI experiments.
Subsequently, two out of the three contemporary “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio, endorsed a further statement in May, emphasizing the importance of addressing the risk of AI-driven extinction with the same gravity as the perils posed by pandemics and nuclear warfare. However, Yann LeCun, their fellow “godfather” and co-recipient of the ACM Turing Award, considered the equivalent of the Nobel Prize in computing, has dismissed concerns about AI potentially eradicating humanity as “absurd.”
LeCun, who serves as the Chief AI Scientist at Meta, Facebook’s parent company, told the Financial Times this month that several “conceptual breakthroughs” would be required before AI could attain human-level intelligence, the stage at which it could elude human control. LeCun added, “Intelligence is not synonymous with a desire to dominate, and this isn’t even true for humans.”