Introducing the New Policy: Disclosure of AI-Generated Realistic Content on YouTube

YouTube is now requiring creators to disclose to viewers when realistic content was made with AI, the company announced on Monday. Creators will have to disclose content that alters footage of real events or places, such as making it seem as though a real building caught fire, as well as realistic scenes of fictional major events, like a tornado moving toward a real town. The new policy doesn’t require disclosure for content that is clearly unrealistic or animated, such as someone riding a unicorn through a fantastical world, nor for content that used generative AI only for production assistance, like generating scripts or automatic captions.

YouTube requiring creators to disclose AI-created realistic content

The platform introduces new tool to prevent deception and confusion


“We want to make sure users aren’t fooled by altered or synthetic media that looks like the real thing.” – YouTube

In a move to combat the rise of deepfakes and AI-generated content, YouTube announced on Monday that it will now require creators to disclose when they use altered or synthetic media that could be mistaken for reality. The new tool, found in Creator Studio, aims to prevent viewers from being misled by videos that appear real but aren’t.

The launch of this disclosure tool comes as concerns grow over the impact of AI and deepfakes on the upcoming U.S. presidential election. Experts warn that these technologies pose a significant risk of spreading misinformation and manipulating public opinion.

This announcement follows YouTube’s promise last November to implement stricter policies regarding the use of AI. With the new tool, the platform is targeting videos that use the likeness of a real person or depict real places and events, including digital manipulation of faces, voices, and footage.

The disclosure policy applies to the following types of AI-generated content:

  • Digital alterations of a person’s face or voice
  • Synthetic narration of videos
  • Manipulated footage of real events or places
  • Realistic scenes of fictional major events

It’s important to note that the policy does not apply to clearly unrealistic or animated content, such as fantasy worlds or creatures. It also exempts the use of AI for production assistance, such as generating scripts or captions.

“We recognize that AI technology is constantly evolving and we want to stay ahead of the potential risks it poses to our users,” a spokesperson for YouTube stated.

Most videos will display a label in the expanded description, indicating the use of AI-generated content. However, for videos discussing sensitive topics like health or news, a more prominent label will be displayed on the video itself.
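For readers who think in code, here is a minimal sketch of how the label rules described above could be expressed. The field names, categories, and function are illustrative assumptions based on this article only, not YouTube’s actual implementation or API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Label(Enum):
    NONE = auto()         # no disclosure required
    DESCRIPTION = auto()  # label shown in the expanded description
    ON_VIDEO = auto()     # more prominent label shown on the video itself


@dataclass
class Video:
    # All fields are hypothetical, used only to illustrate the policy as described.
    uses_synthetic_media: bool        # altered or synthetic content that looks real
    clearly_unrealistic: bool         # fantasy worlds, obvious animation, etc.
    production_assistance_only: bool  # AI used only for scripts, captions, etc.
    sensitive_topic: bool             # e.g., health or news


def disclosure_label(video: Video) -> Label:
    """Pick a disclosure label following the rules described in the article."""
    # No AI-altered realistic content means no disclosure at all.
    if not video.uses_synthetic_media:
        return Label.NONE
    # Exemptions: clearly unrealistic content, or AI used only for production help.
    if video.clearly_unrealistic or video.production_assistance_only:
        return Label.NONE
    # Sensitive topics get the more prominent on-video label.
    if video.sensitive_topic:
        return Label.ON_VIDEO
    # Everything else gets a label in the expanded description.
    return Label.DESCRIPTION


if __name__ == "__main__":
    clip = Video(uses_synthetic_media=True, clearly_unrealistic=False,
                 production_assistance_only=False, sensitive_topic=True)
    print(disclosure_label(clip))  # Label.ON_VIDEO
```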

Viewers can expect to see these labels roll out across all YouTube formats in the coming weeks, starting with the mobile app and eventually appearing on desktop and TV. YouTube also plans to take action against creators who consistently fail to adhere to the policy, including applying labels to their videos on their behalf.

These efforts by YouTube serve as a reminder to viewers to always be aware and critical of the media they consume, especially in the era of advanced AI technology. Stay informed and stay vigilant.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.

