Experts call on tech giants to use their platforms to safeguard democracy in the 2024 elections
Experts caution that social media firms are ill-prepared to combat election misinformation in the upcoming 2024 global elections due to language barriers.
The Global Coalition for Tech Justice, comprising civil society leaders and tech harm survivors, urges prominent tech companies like Google, TikTok, Meta, and X to guarantee their platforms are ready to safeguard democracy and security during next year’s elections.
In 2024, more than 2 billion people are expected to vote in over 50 elections, including in the US, India, and the EU.
Back in July, the coalition asked Google, Meta, X, and TikTok to develop comprehensive election action plans, both globally and at the national level, aimed at preserving freedoms, rights, and user safety during the 2024 elections. None of the companies acknowledged or responded to the request to disclose their plans.
The coalition had also requested a breakdown of each company’s workforce, including employees and contractors, by language and dialect, to ensure content moderation is informed by national and regional expertise.
Experts caution that without robust election action plans, there’s a potential risk to both election integrity and citizen safety.
Katarzyna Szymielewicz, co-founder and president of the Panoptykon Foundation, a Polish NGO specializing in surveillance technology monitoring, emphasized that major tech companies could take additional measures to ensure effective content moderation across various languages.
She stated, “Moderating content effectively in diverse cultural contexts poses a significant challenge for major platforms. The farther it strays from the English language, the more complex it becomes. Moderating sponsored or organic content to eliminate political violence, misogynistic content, and various forms of abuse isn’t always straightforward solely based on language… Platforms should make greater efforts to address this issue and invest significantly in human moderators for more effective moderation.”
Szymielewicz emphasizes that major tech platforms should proactively ensure their systems are inherently designed to minimize the amplification of disinformation and hate, not just during elections but as a default mode. This entails implementing measures to reduce the algorithmic reach and prominence of content, groups, and accounts that propagate disinformation and hate.
In South Africa, there has been social media-driven violence fueled by xenophobia, targeting migrants, refugees, and asylum seekers.
In India, which has witnessed a resurgence of anti-Muslim violence, the coalition alleged that Meta had violated its own guidelines concerning hate speech associated with the ruling Hindu nationalist Bharatiya Janata Party (BJP).
Ratik Asokan, a member of the diaspora organization India Civil Watch International, a part of the coalition, stated, “Meta has failed to adequately moderate its platforms in India due to a shortage of content moderators and its affiliations with the BJP.”
He continued, “The BJP lacks the motivation to remove hate speech because it aligns with their political agenda, contributing to their electoral success. Those who could intervene are turning a blind eye to this situation. While Meta advocates for human rights, it appears to be aligning itself with the Hindu far right as an ally.”