- Meta is implementing invisible watermarks on all images created by its AI services to combat fake news and misinformation
- The watermarks are embedded within images and allow tracing back to the AI source, but are invisible to the human eye
- Meta will first roll out watermarking on Meta AI images, then expand it across Facebook Messenger, Instagram, and other AI services over time
Fighting AI-Generated Fake News
Meta is implementing a new system to combat AI-generated fake news and misinformation. The social media giant will use invisible watermarks on all images created through its AI services. Unlike traditional visible watermarks, Meta’s technique embeds tracing data within the images themselves.
What Are Invisible Watermarks?
Meta is developing a deep learning model that can embed invisible watermarks into any image generated by Meta's AI systems. While imperceptible to the human eye, these watermarks allow images to be traced back to their AI source. They are also resilient against common image edits such as cropping, color and brightness adjustments, and screenshots.
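Meta has not published the details of its model, but the general idea behind invisible watermarking can be illustrated with a classic spread-spectrum scheme: a key-derived pseudorandom pattern is added to the image at low amplitude, and anyone holding the key can later check for it by correlation. This toy sketch (function names and parameters are illustrative, not Meta's API) ignores the robustness to cropping and re-encoding that a learned model provides:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 20.0) -> np.ndarray:
    """Add a key-derived pseudorandom noise pattern to the image
    (classic spread-spectrum watermarking; illustrative only)."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect_watermark(image: np.ndarray, key: int, threshold: float = 0.1) -> bool:
    """Check for the watermark by correlating the image with the
    keyed pattern; only a holder of the key can run this check."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    score = np.corrcoef(image.ravel(), pattern.ravel())[0, 1]
    return score > threshold
```

Because the pattern is statistically independent of natural image content, correlation with an unwatermarked image stays near zero, while a watermarked image scores well above the threshold. A production system like Meta's replaces the fixed noise pattern with a neural encoder/decoder pair trained to survive edits.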
Rolling Out Across Meta’s AI Services
Initially, Meta will roll out invisible watermarking on all images created through Meta AI, its general-purpose AI assistant. Over time, the company plans to expand the technique across its other AI image generation services, including the new AI-powered reimagine feature on Facebook Messenger and Instagram.
Preventing AI Misuse
Meta aims to curb misuse of its AI services to spread misinformation or scams. Recent examples include AI-generated fake news and deepfake celebrity images used in phishing campaigns. While other AI services such as DALL-E apply visible watermarks that can simply be cropped out, Meta's invisible technique is designed to be far harder to remove. The goal is to increase transparency and traceability for all AI-generated content.