Nick Clegg, Meta’s president of global affairs, says users want clarity on where the boundary lies as AI-generated content proliferates
Meta is striving to identify and tag AI-generated images on Facebook, Instagram, and Threads, as part of its effort to expose “individuals and organizations that intentionally seek to mislead.”
Images produced with Meta’s AI imaging tool are currently marked as AI-generated. However, Nick Clegg, Meta’s President of Global Affairs, revealed in a blog post on Tuesday that the company aims to also label AI-generated images created on competing platforms.
Meta’s AI images are already embedded with metadata and invisible watermarks that indicate they were AI-generated. The company is also working on tools to detect these markers when used by other companies, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, in their AI image generators, according to Clegg.
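The metadata approach Clegg describes builds on an emerging industry convention: tagging synthetic media with the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia` inside an image’s embedded XMP metadata. As a rough illustration only (this is not Meta’s actual detection pipeline, which also relies on invisible watermarks and cryptographic provenance standards such as C2PA), a naive check for that marker might look like this:

```python
# Minimal sketch, NOT Meta's implementation: scan a file's raw bytes for the
# IPTC DigitalSourceType value that industry tools embed in XMP metadata to
# flag AI-generated media. Real tooling would parse the XMP packet properly
# and verify signed provenance manifests rather than grep raw bytes.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC vocabulary term for synthetic media


def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC synthetic-media marker.

    A crude heuristic: it misses watermark-only images and is trivially
    defeated by stripping metadata, which is why detection research continues.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data
```

As the crude byte-scan suggests, metadata is easy to strip, which is exactly why Clegg says Meta is also pursuing watermark- and classifier-based detection that survives such removal.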
“As the distinction between human and synthetic content becomes less clear, people are eager to understand where the boundary lies,” Clegg explained. “Many are encountering AI-generated content for the first time, and our users have expressed a desire for transparency regarding this new technology. Therefore, it’s crucial that we help individuals identify when photorealistic content they encounter has been created using AI.”
Clegg mentioned that the feature is currently under development and that the labels will be implemented in all languages in the coming months.
“We’re adopting this strategy over the next year, a period that coincides with several significant elections worldwide,” Clegg stated.
He clarified that this initiative is currently limited to images, and AI tools that produce audio and video do not yet include these markers. However, the company plans to enable users to voluntarily disclose and add labels to such content when they share it online.
He said the company would also apply a more prominent label to “digitally created or altered” images, video, or audio that pose a significant risk of misleading the public on important matters.
The company is also exploring the creation of technology to automatically identify AI-generated content, even in cases where the content lacks invisible markers or where these markers have been erased.
“This effort is particularly critical as this area is expected to become more competitive in the future,” Clegg explained.
“Individuals and groups intent on deceiving others with AI-generated content will seek to circumvent detection measures. In our industry and society at large, we must continually seek ways to remain ahead of such efforts.”
AI deepfakes have already impacted the US presidential election cycle. For instance, there were robocalls featuring what is believed to be an AI-generated deepfake of US President Joe Biden’s voice, discouraging voters from participating in the Democratic primary in New Hampshire.
Nine News in Australia received backlash last week for modifying an image of Victorian Animal Justice party MP Georgie Purcell, revealing her midriff and altering her chest, in a broadcast on the evening news. The network attributed the changes to “automation” in Adobe’s Photoshop software, which incorporates AI image tools.