Snap to Add Watermarks to AI-Generated Images on Its Platform

Social media company Snap said Tuesday that it plans to add watermarks to AI-generated images on its platform. Other tech giants such as Microsoft, Meta, and Google have taken similar steps to label or identify images created with AI-powered tools. Snap currently lets users create or edit AI-generated images through Snap AI, available to paid users, and through a selfie-focused feature called Dreams. The company has also added context cards to AI-generated images from tools like Dream selfies to better inform users, and in February it partnered with HackerOne to stress-test its AI image-generation tools through a bug bounty program.

Social media company Snap announced on Tuesday that it will implement a new feature to identify AI-generated images on its platform. To indicate that a photo was created using artificial intelligence, a watermark featuring a small ghost with a sparkle icon will appear when the image is saved or exported.

“Removing Snap’s Ghost with sparkles watermark violates our terms,” the company stated on its support page.

It is currently unclear how Snap plans to detect and handle instances of the watermark being removed. We have reached out to the company for further information and will update the story accordingly.

The move mirrors those of other tech giants such as Microsoft, Meta, and Google, which have also taken steps to label and identify images created with the help of AI-powered tools.

As of now, Snap offers users the ability to create or edit AI-generated images through its Snap AI feature, available to paid users, and through its selfie-focused Dreams feature.

In a recent blog post discussing the company’s safety and transparency measures involving AI, Snap explained that AI-powered features, like Lenses, will be accompanied by visual markers, such as the sparkling logo.

Additionally, the company has introduced context cards for images created using AI tools like Dream selfies, with the intention of providing more information to the user.

In February, Snap collaborated with HackerOne to reinforce the security of its AI image-generation tools by launching a bug bounty program. The company also said it has implemented a review process to address problematic issues that may arise during the development of AI-powered Lenses.

“We want all users to have equal access and expectations when utilizing all features within our app, including our AI-powered experiences. With this in mind, we are conducting additional testing to minimize any potentially biased AI results,” Snap stated in its blog post.

Snapchat faced backlash after the introduction of its “My AI” chatbot last year, when The Washington Post reported that the bot was providing inappropriate responses to users. In response, the company introduced controls in its Family Center that allow parents and guardians to monitor and limit their teens’ interactions with AI.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
