Social media company Snap announced on Tuesday that it will add a new feature to identify AI-generated images on its platform. To indicate that a photo was created using artificial intelligence, a watermark featuring a small ghost with a sparkle icon will appear when the image is saved or exported.
“Removing Snap’s Ghost with sparkles watermark violates our terms,” the company stated on its support page.
It is currently unclear how Snap plans to detect and handle instances of the watermark being removed. We have reached out to the company for further information and will update the story accordingly.
The move mirrors steps taken by other tech giants, such as Microsoft, Meta, and Google, to label and identify images created with AI-powered tools.
Currently, Snap lets users create or edit AI-generated images through its Snap AI feature, which is available to paying subscribers, and through its selfie-focused Dreams feature.
In a recent blog post discussing its safety and transparency measures around AI, Snap explained that AI-powered features, such as Lenses, will be accompanied by visual markers like the sparkle icon.
The company has also introduced context cards for images created with AI tools, such as Dream selfies, to give users more information about what they are seeing.
In February, Snap partnered with HackerOne to launch a bug bounty program aimed at stress-testing the security of its AI image-generation tools. The company also said it has implemented a review process to catch potentially problematic results that could arise during the development of AI-powered Lenses.
“We want all users to have equal access and expectations when utilizing all features within our app, including our AI-powered experiences. With this in mind, we are conducting additional testing to minimize any potentially biased AI results,” Snap stated in its blog post.
Snapchat faced backlash after introducing its "My AI" chatbot last year, when The Washington Post reported that the bot was giving inappropriate responses to users. In response, the company added controls to its Family Center that allow parents and guardians to monitor and limit their teens' interactions with the AI.