Google Play Takes Action Against AI Apps Amid Spread of Deepfake Nude Creation Programs

Google on Thursday issued new guidance for developers building AI apps distributed through Google Play, in hopes of cutting down on inappropriate and otherwise prohibited content. Schools across the U.S. are reporting problems with students circulating AI-generated deepfake nudes of other students (and sometimes teachers) for bullying and harassment, alongside other kinds of inappropriate AI content. Google says its policies will help keep apps featuring AI-generated content that is inappropriate or harmful to users off Google Play. It points developers to its existing AI-Generated Content Policy for the requirements apps must meet to be approved on Google Play. The company is also publishing other resources and best practices, such as its People + AI Guidebook, to support developers building AI apps.

Enhancing Deepfakes Surveillance: Meta’s AI Playbook Introduces Increased Labeling and Reduced Takedowns

Meta has announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. For AI-generated or otherwise manipulated media on Meta platforms such as Facebook and Instagram, the playbook now appears to be: more labels, fewer takedowns. “Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Monika Bickert, Meta’s VP of content policy, noting that the company already applies ‘Imagined with AI’ labels to photorealistic images created using its own Meta AI feature. Meta’s blog post also highlights a network of nearly 100 independent fact-checkers it says it has engaged to help identify risks related to manipulated content. These external reviewers will continue to assess false and misleading AI-generated content, per Meta.