To reduce risks, the Centre for Long-Term Resilience advises the next government to keep AI incident logs
According to a new report, the UK needs a framework for recording instances of AI misuse and malfunction, or ministers risk missing important incidents. The Centre for Long-Term Resilience (CLTR) advised that the next government should consider creating a central hub to collate AI-related incidents from across the country, and should set up a system for reporting AI incidents in public services.
The CLTR, which focuses on government responses to unforeseen crises and extreme risks, said an incident reporting regime similar to that run by the Air Accidents Investigation Branch (AAIB) is essential if AI technology is to be used effectively.
According to the report, news outlets have documented 10,000 AI “safety incidents” since 2014, which the Organisation for Economic Co-operation and Development (OECD) has collected into a database. The OECD defines a harmful AI incident as one that causes physical, financial, reputational or psychological harm.
Examples in the OECD’s AI safety incident monitor include a deepfake of the Labour leader, Keir Starmer, appearing to abuse party staff; Google’s Gemini model depicting second world war German soldiers as people of colour; incidents involving self-driving cars; and a man who was encouraged by a chatbot to plot the assassination of the late Queen.
“In safety-critical industries such as aviation and medicine, incident reporting has played a major role in reducing and managing risks. But it is largely missing from the regulatory framework for AI, which leaves the UK government in the dark and hinders its ability to respond to incidents involving AI,” said Tommy Shaffer Shane, the report’s author and a policy manager at CLTR.
The CLTR proposed that the UK government adopt a “well-functioning incident reporting regime” like those in safety-critical industries such as aviation and medicine. It noted that because no regulator focuses on advanced AI systems such as chatbots and image generators, many AI incidents may go unrecorded by UK watchdogs. Labour has pledged to introduce legally binding regulation for the most advanced AI companies.
Such a system, the thinktank said, would allow the government to anticipate similar incidents before they recur and to quickly identify where AI is going wrong. It also emphasised that incident reporting would help reveal early warnings of potential large-scale harms and coordinate responses to urgent situations where rapid action is needed.
Even if a model passes testing by the UK’s AI Safety Institute, its flaws may only become apparent once it is fully released. Incident reporting would at least allow the government to assess how well the country’s regulatory framework is managing those risks.
According to the CLTR, the Department for Science, Innovation and Technology (DSIT) risked being out of touch with emerging misuse of AI systems, such as disinformation campaigns, attempts to develop bioweapons, bias in AI systems and the misuse of AI in public services. One example is the Netherlands, where tax authorities used AI in an attempt to combat benefit fraud, plunging thousands of families into financial hardship.
The report said DSIT should prioritise ensuring the UK government learns about such novel harms through an established incident reporting process, rather than through the news.
CLTR, which is largely funded by the wealthy Estonian programmer Jaan Tallinn, recommended three immediate steps: creating a government system for reporting AI incidents in public services; asking UK regulators to identify gaps in AI incident reporting; and considering a pilot AI incident database. Existing bodies such as the Information Commissioner’s Office, the Medicines and Healthcare products Regulatory Agency (MHRA) and the AAIB could contribute AI-related incidents to that database.
CLTR suggested that the reporting system for AI use in public services could build on the existing algorithmic transparency reporting standard, which encourages government departments and police forces to disclose how they use AI.
In May, 10 countries including the UK, along with the EU, signed a declaration on AI safety cooperation that emphasised monitoring “AI harms and safety incidents”.
The report added that an incident reporting system would support DSIT’s Central AI Risk Function (CAIRF), which assesses and reports on risks related to AI.