Transparency is crucial now: politicians and tech firms must disclose the origins and use of AI-generated campaign content
In the coming year, democracy will be exercised on a global scale, with elections in India, Mexico and the EU parliament, presidential elections in the US, Venezuela and Taiwan, and a UK general election due by 28 January 2025. But this surge of democratic activity coincides with the arrival of widespread generative AI, which will amplify and accelerate the controversies around “fake news” in ways we can only begin to imagine.
Here’s my prediction: the great disruption of this generation of AI will be a further erosion of public trust in the information people encounter. It seems almost inevitable that we will see scandals in which political candidates are accused of using AI to produce their content, whether or not they actually have. Such incidents sow doubt, corrode trust and drive people to disengage. This tactic is a staple of disinformation campaigns: it not only deceives but creates chaos, leaving people unsure of what to believe. And when everything becomes questionable, people may simply give up trying to separate fact from fiction.
Efforts to address content authenticity are under way, such as the Content Authenticity Initiative, a cross-industry coalition working to establish standards for verifying the sources of digital content. Newsrooms, too, are drawing up guidelines on how generative AI may be used in their work.
To tackle the problem head-on, political parties should take the initiative and demonstrate transparency by publishing accessible policies on their use of generative AI before elections. In the US there has already been controversy over an AI-generated Republican political advertisement, and criticism from organizations such as the Center for Countering Digital Hate over political parties’ failure to commit to addressing the issue directly.
The UK has not yet seen such incidents, but they are highly likely to occur. We may see AI-generated images of candidates performing fictional heroic deeds, or fabricated attack ads. At some point, a political candidate will deliver a speech written with ChatGPT, the AI chatbot, complete with “hallucinated” statistics.
The risks extend beyond the political parties themselves: female politicians, who already face daily online hate, may see manipulated sexualized images of themselves in circulation, including pornographic deepfakes. And outside actors seeking to undermine democracy could set up AI “content farms” to flood social media platforms with false or misleading information about elections.
UK parties should take note, distinguish themselves from malicious online actors, and get ahead of the coming scandals by disclosing how they plan to use AI. At the very least, drawing up such policies will force them to think through the risks in advance, rather than waiting until it is too late.
For solutions that last, we must recognize that the problem goes beyond citizens’ inability to reliably identify accurate information. The processes by which information is generated, distributed and consumed are becoming increasingly opaque to us.
In the past, the hidden machinery in question was the way social media platforms incentivize, curate and distribute information: the mysterious workings of “the algorithm”. We still face those problems, and more, with the potential mass adoption of generative AI technologies whose makers disclose little about how they are built.
Without transparency about their design, the data they are trained on and the fine-tuning methods used, we cannot know what shapes the information these tools produce, and so we cannot judge their reliability. Instead of being empowered to engage critically with these tools, citizens are left to rely on the tech companies’ own assurances about the benefits and risks. The companies hold all the power while evading accountability.
The Competition and Markets Authority in the UK is reviewing the AI market, including the risks of false or misleading information, and the Federal Trade Commission in the US is conducting a similar assessment. Regulators and policymakers are clearly alive to the risks, which is welcome. But we have yet to find an effective approach to digital regulation, and a solution is becoming ever more urgent.
The 2024 elections will be a crucial test of who holds power: citizens and their democratically elected governments, or the big tech companies. We need to get ahead of AI’s impact on politics rather than scramble to respond once the damage is done. UK political parties can do their part by being transparent about how they use these tools.