The technology firm said accounts posing as Americans spread divisive political content ahead of the upcoming election year.
Someone in China created thousands of fake Facebook and Instagram accounts designed to mimic Americans and spread divisive political content, in an apparent effort to sow discord in the US ahead of next year's elections, Meta said on Thursday.
The tech company, owner of Facebook and Instagram, identified and removed the network consisting of nearly 4,800 fraudulent accounts. These accounts utilized fake photos, names, and locations to masquerade as ordinary American Facebook users expressing opinions on political matters.
Rather than spreading false information, as some other networks have done, these accounts focused on reposting content from X, formerly Twitter, originally published by politicians, news outlets, and other sources. The interlinked accounts pulled content from both liberal and conservative channels, suggesting their objective was not to favor one side but to magnify partisan divides and intensify polarization.
The recently revealed network underscores how foreign adversaries leverage US-based tech platforms to foster discord and mistrust. It alludes to the substantial risks posed by online disinformation in the upcoming year, with national elections scheduled in the US, India, Mexico, Ukraine, Pakistan, Taiwan, and other countries.
Ben Nimmo, who leads investigations into inauthentic behavior on Meta’s platforms, cautioned, “While these networks still grapple with audience building, they serve as a warning. Foreign threat actors are endeavoring to reach individuals across the internet in anticipation of next year’s elections, emphasizing the need for vigilance.”
Meta Platforms Inc., headquartered in Menlo Park, California, refrained from publicly connecting the Chinese network to the Chinese government. However, the company established that the network originated in China. The content disseminated by these accounts aligns with broader Chinese government propaganda and disinformation efforts aiming to amplify partisan and ideological rifts within the US.
In an effort to mimic regular Facebook accounts, the network occasionally shared content related to fashion or pets. In a notable shift earlier this year, certain accounts suddenly changed their American-sounding usernames and profile pictures to ones indicating they resided in India. Subsequently, these accounts started disseminating pro-Chinese content related to Tibet and India, illustrating how fraudulent networks can be reoriented to concentrate on different targets.
On Wednesday, Meta also published a report assessing the likelihood that foreign adversaries, including Iran, China, and Russia, will use social media for election interference. The report noted that Russia’s recent disinformation campaigns have shifted their focus away from the US and toward its war in Ukraine, using state media propaganda and misinformation to erode international support for the invaded country.
Nimmo, Meta’s principal investigator, indicated that influencing opinion against Ukraine is likely to be the primary objective if Russia engages in disinformation within the US political discourse leading up to next year’s election.
“This holds significance in the lead-up to 2024,” Nimmo emphasized. “As the conflict persists, we should particularly anticipate Russian endeavors to target election-related discussions and candidates centered on supporting Ukraine.”
While Meta frequently points to its takedowns of fake social media networks as evidence of its commitment to safeguarding election integrity and democracy, critics argue that the platform’s emphasis on fake accounts distracts from its failure to address the misinformation already circulating on its site, which has contributed to polarization and mistrust.
For instance, Meta permits paid advertisements on its platform asserting that the 2020 US election was rigged or stolen, thereby amplifying the unfounded claims of Donald Trump and other Republicans. Those claims of election irregularities have been repeatedly debunked: the former president’s own attorney general, along with federal and state election officials, has said there is no credible evidence that the presidential election, which Trump lost to Joe Biden, was compromised.
In response to inquiries about its advertising policy, the company stated its emphasis is on future elections rather than past ones. It clarified that it would reject advertisements that cast baseless doubt on upcoming contests.
While Meta has introduced a new artificial intelligence policy mandating political ads to include a disclaimer if they feature AI-generated content, the company has permitted other manipulated videos, created through more traditional programs, to remain on its platform. This includes a digitally altered video of Biden falsely alleging he is a pedophile.
Zamaan Qureshi, a policy adviser at the Real Facebook Oversight Board, an organization critical of Meta’s handling of disinformation and hate speech, remarked, “This is a company that cannot be taken seriously and that cannot be trusted. Watch what Meta does, not what they say.”
During a press conference call on Wednesday, Meta executives delved into the network’s operations, a day after the tech giant unveiled its policies for the upcoming election year—many of which were previously implemented for past elections.
However, experts studying the connection between social media and disinformation assert that 2024 presents novel challenges. With numerous large countries conducting national elections and the advent of advanced AI programs, there is an increased ease in generating realistic audio and video content that could potentially mislead voters.
Jennifer Stromer-Galley, a Syracuse University professor who studies online political communication, characterized Meta’s election plans as “modest,” highlighting a clear contrast with the largely unregulated nature of X. Since Elon Musk acquired the platform, formerly known as Twitter, he has dismantled content moderation teams, reinstated users previously banned for hate speech, and used the platform to propagate conspiracy theories.
While both Democrats and Republicans have called for legislation addressing algorithmic recommendations, misinformation, deepfakes, and hate speech, the likelihood of significant regulations being enacted before the 2024 election is minimal. Consequently, the responsibility to self-regulate will largely fall on the platforms.
According to Kyle Morse, deputy executive director of the Tech Oversight Project, a non-profit advocating for new federal regulations for social media, Meta’s current efforts to safeguard the election offer a troubling glimpse of what can be expected in 2024. Morse emphasizes the urgent need for Congress and the administration to take action, ensuring that platforms like Meta, TikTok, Google, X, Rumble, and others do not actively assist foreign and domestic actors openly undermining democracy.
Several of the fake accounts identified by Meta this week had nearly identical profiles on X, where some routinely retweeted Musk’s posts. Those accounts remain active on X; a request for comment sent to the platform went unanswered.