YouTube Enforces Stricter Measures Against AI Videos that Depict Deceased Children or Crime Victims with Realistic Simulations


YouTube is making a major change to its harassment and cyberbullying policies to crack down on disturbing content that “realistically simulates” deceased minors or victims of deadly or violent events by having them narrate their own deaths. The Google-owned platform recently announced that it will begin removing such content on January 16th.

This policy update comes in response to some creators of true crime content using AI technology to recreate the appearance of deceased or missing children. In these disturbing cases, people are using AI to give the child victims of high-profile cases a childlike “voice” to describe the traumatic events surrounding their death.

Over the past few months, content creators have used AI to narrate various well-known cases, including the abduction and death of British toddler James Bulger, as reported by the Washington Post. Similar AI narrations have also been used for the cases of Madeleine McCann, a three-year-old who went missing from a resort in Portugal, and Gabriel Fernández, an eight-year-old boy who was tortured and murdered by his own mother and her boyfriend in California.

The consequences for violating these new policies are severe. YouTube states that any content that violates the new rules will be removed, and the user will receive a strike. A first strike prevents the user from uploading videos, live streams, or Stories for one week; if the user accumulates three strikes, their channel will be permanently removed from YouTube.

This shift in policy comes just two months after YouTube introduced new guidelines for responsible disclosures regarding AI-generated content, along with tools to request the removal of deepfakes. One of the key changes is that users must disclose when they have created altered or synthetic content that appears to be realistic. The company has warned that those who fail to disclose their use of AI may face consequences such as content removal, suspension from the YouTube Partner Program, or other penalties.

Additionally, YouTube clarified that AI content may be removed if it contains “realistic violence,” even if it has been labelled as AI-generated.

In September 2023, the popular social media app TikTok launched a tool that allows creators to label their AI-generated content, following an update to its guidelines that requires disclosure for all synthetic or manipulated media showing realistic scenes. TikTok reserves the authority to take down any AI-generated media that is not properly disclosed.

Dylan Williams

Dylan Williams is a multimedia storyteller with a background in video production and graphic design. He has a knack for finding and sharing unique and visually striking stories from around the world.

