The wording of the communiqué for the upcoming AI safety summit next month is being negotiated, but it is doubtful that agreement will be reached on a new organization to oversee scrutiny of the technology
Rishi Sunak’s advisors are working to reach a consensus among global leaders on a statement highlighting the concerns surrounding artificial intelligence as they finalize the agenda for the upcoming AI safety summit next month. Downing Street officials have been engaging with counterparts around the world, including in China, the EU, and the US, to negotiate the language of a communiqué to be issued at the two-day conference. However, despite the UK’s interest in expanding the global influence of the government’s AI taskforce, the prospect of establishing a new international organization to oversee cutting-edge AI remains unlikely.
Sunak’s AI summit is set to produce a statement addressing the risks associated with AI models, offer an update on safety guidelines brokered by the White House, and conclude with discussions among “like-minded” nations about how national security agencies can scrutinize the most hazardous versions of the technology.
International collaboration on advanced AI that poses risks to human safety will also be discussed on the summit’s final day. The summit is scheduled for November 1 and 2 at Bletchley Park, according to a draft agenda obtained by The Guardian.
The draft agenda mentions the establishment of an “AI Safety Institute” aimed at facilitating the scrutiny of cutting-edge AI models with national security implications.
Nonetheless, the prime minister’s summit representative last week played down the idea of creating such an organization, while underlining the crucial role of “collaboration” in effectively addressing the risks associated with advanced AI.
In a recent post on X, formerly Twitter, Matt Clifford stated, “The focus is not on establishing a singular new international institution. Our perspective is that most nations will seek to cultivate their own capacities in this domain, particularly for assessing cutting-edge models.”
The UK has sought to position itself at the forefront of work on frontier AI, having formed a frontier AI taskforce led by tech entrepreneur Ian Hogarth. Deputy Prime Minister Oliver Dowden said last month he hoped the taskforce could develop into a lasting institutional framework, potentially offering international expertise in the realm of AI safety.
Clifford revealed last week that the summit is expected to host approximately 100 attendees, including cabinet ministers from various countries, company CEOs, academics, and representatives of international civil society.
The draft agenda outlines the summit’s schedule for the first day, which consists of a three-track discussion: an examination of the risks linked to frontier AI models, strategies for mitigating those risks, and the exploration of opportunities arising from these models. Following these discussions, a brief communiqué will be drafted for endorsement by country delegations, signifying a shared understanding of the risks and opportunities associated with frontier AI models.
Companies participating in the summit, including entities such as OpenAI, Google, and Microsoft, are expected to disclose their adherence to AI safety commitments established in July in partnership with the White House. These commitments encompass external security testing of AI models prior to their release and ongoing scrutiny of these systems once they are in operation.
As reported by Politico recently, the White House is in the process of revising the voluntary commitments, specifically pertaining to safety, cybersecurity, and the potential utilization of AI systems for national security purposes. An official announcement regarding these updates is expected later this month.
The second day of the summit, as outlined in the draft agenda, will host a smaller group of approximately 20 participants, primarily composed of “like-minded” countries. The discussions will revolve around the trajectory of AI in the next five years and the positive opportunities it presents in alignment with sustainable development goals. This will also encompass conversations regarding the establishment of a safety institute.
In a series of posts on X, Clifford emphasized that the UK maintains a strong interest in collaborating with other nations on the subject of AI safety.
In his message, Clifford stressed the vital role of collaboration in effectively addressing the risks posed by frontier AI, highlighting the importance of working with civil society, academics, technical experts, and other countries.
A government spokesperson said: “We have made it explicitly clear that these deliberations will involve the exploration of potential collaborative efforts in AI safety research, encompassing evaluation and standardization. Ongoing international discussions in this domain are already in progress and showing positive advancements, including dialogues regarding cross-country and cross-firm collaboration, as well as engagement with technical experts for the assessment of advanced AI models. There are various avenues to pursue this, and we eagerly anticipate facilitating these discussions in November during the summit.”