Meta will also apply more prominent labels to digitally altered content that poses a “high risk” of deceiving the public on an important matter
Meta, the parent company of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, ahead of elections that will test its ability to police deceptive content generated by AI technologies.
Starting in May, the social media giant will apply “Made with AI” labels to AI-generated videos, images, and audio shared on Facebook and Instagram, expanding a policy that previously covered only a narrow slice of manipulated videos, Vice President of Content Policy Monika Bickert said in a blog post.
Bickert said Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of significantly deceiving the public on an important issue,” regardless of whether it was created using AI or other tools. A company spokesperson said Meta would begin applying the more prominent “high-risk” labels immediately.
The move marks a shift in how the company handles manipulated content, from removing a limited set of posts to keeping the content up while giving viewers information about how it was made.
Meta had previously announced a plan to detect images produced with other companies’ generative AI tools using invisible markers built into the files, but did not give a start date at the time.
A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram, and Threads. Different rules cover its other services, including the WhatsApp messaging service and Quest virtual-reality headsets.
The changes come months ahead of the US presidential election in November, which tech researchers warn could be influenced by generative AI technologies. Political campaigns have already begun deploying AI tools in places such as Indonesia, pushing the boundaries of guidelines issued by providers like Meta and leading generative AI company OpenAI.
In February, Meta’s oversight board criticized the company’s existing policies on manipulated media as “incoherent” after reviewing a video posted on Facebook last year that used edited footage to falsely suggest inappropriate behavior by US President Joe Biden.
The video was allowed to remain online because Meta’s current policy on “manipulated media” prohibits misleadingly altered videos only if they were created by artificial intelligence or if they depict individuals saying things they did not actually say.
The oversight board said the policy should also apply to non-AI content, which can be “just as misleading” as AI-generated material, as well as to audio-only content and to videos depicting people doing things they never actually did.