Renowned AI Experts Unite to Advocate for Anti-Deepfake Laws

Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While the letter is unlikely to spur real legislation (despite the House’s new task force), it does act as a bellwether for how experts lean on this controversial issue. It calls for criminal penalties in any case where someone creates or spreads harmful deepfakes. As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying “maybe we should, you know, do something?!” Whether anyone will take notice of this letter is anyone’s guess; no one really paid attention to the infamous one calling for everyone to “pause” AI development, though this letter is a bit more practical.

In recent years, the fast-growing field of artificial intelligence has faced a new challenge: the emergence of deepfakes. These AI-generated impersonations pose a significant threat to society, prompting hundreds in the community to take action.

“Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes,” the letter states.

This call for stricter regulation has garnered support from more than 500 individuals in and adjacent to the AI field. Their open letter emphasizes the urgent need for measures to prevent the spread of harmful deepfakes.

One of the letter’s key demands is the criminalization of deepfake child sexual abuse material, whether it depicts real or fictional figures, with the aim of protecting the most vulnerable in society. Additionally, any individual found creating or sharing harmful deepfakes would face criminal penalties. The letter also calls on developers to build preventative measures into their products, with penalties if those measures prove inadequate.

Notable signatories include Jaron Lanier, Frances Haugen, Stuart Russell, Andrew Yang, Marietje Schaake, Steven Pinker, Gary Marcus, Oren Etzioni, and Yoshua Bengio, along with hundreds of academics from various disciplines and countries. Interestingly, the signatory list is ordered by “notability,” underscoring the range of support the letter has received across the AI community.

This call for stricter regulation is not the first of its kind. The issue has been debated for years in the EU, where formal rules were recently proposed. Perhaps it is the slow progress in addressing the problem, or the lack of protection against this type of abuse, that has prompted these experts to speak out. The threat of AI-generated scam calls swaying elections or defrauding unsuspecting individuals has also been a cause for concern.

The announcement of a new House task force with no specific agenda has further heightened the AI community’s concerns and sense of urgency. This letter serves as a collective voice, urging legislators to take notice of the opinions of the worldwide AI academic and development community. Whether any significant action will follow remains uncertain, especially in the middle of an election year with a divided Congress.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
