In a statement, Meta said: “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies. People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.”
Meta’s labeling tools, however, do not yet extend to AI-generated audio and video content.
While embedding such signals in image generators is becoming common, Meta said they are not yet present at a comparable scale in AI tools that generate audio and video.
The social media giant is introducing a feature for users to disclose when sharing AI-generated video or audio content, with penalties for non-compliance. The company acknowledges the technical challenges in identifying all AI-generated content and said it is actively working on developing classifiers and watermarking technologies to enhance detection and traceability.
Recognizing the adversarial nature of the field, Meta said it is encouraging users to scrutinize content for unnatural elements. The company also pointed to AI’s broader role as both a protective and potentially transformative force in enforcing community standards and combating harmful content.
As AI-generated content becomes more prevalent, Meta said it is committed to ongoing collaboration with industry peers and regulatory bodies to adapt its approach and uphold transparency and accountability in AI development.