Major Platforms Under EU Scrutiny: Examining GenAI Risks before Elections

The European Commission has launched an initiative to gather information from major platforms on their efforts to mitigate risks associated with generative AI. The Commission has sent formal requests for information (RFIs) to Google, Meta, Microsoft, Snap, TikTok, and X, asking about the measures they have in place to address the potential harms of generative AI on their platforms.

Under the Digital Services Act (DSA), the EU's newly revamped ecommerce and online governance rulebook, these platforms are designated as Very Large Online Platforms (VLOPs). This means that, in addition to complying with the bulk of the DSA's rules, they are required to assess and mitigate systemic risks.

The Commission's inquiries specifically target the eight services designated as VLOPs under the DSA: Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X. These platforms have been asked to provide more information on their risk management strategies for generative AI, including the handling of "hallucinations" (instances where AI generates false information), the viral spread of deepfakes, and the automated manipulation of services that could mislead voters.

In a press release on Thursday, the Commission stressed that its questions cover both the creation and the distribution of generative AI content. It has also requested internal documents and risk assessments from the platforms on the impact of generative AI on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being.

The Commission also announced plans to conduct stress tests after Easter to assess the platforms’ preparedness for handling generative AI risks, such as a potential influx of political deepfakes leading up to the upcoming European Parliament elections in June. “We want to push the platforms to tell us about their preparations and be as prepared as possible for any incidents we may detect before the elections,” a senior Commission official stated, speaking anonymously.

Election security is a top priority for the EU, and the Commission is currently working on formal election security guidance for VLOPs. It has given the platforms until April 3 to provide the information related to election security, a request it has flagged as "urgent". The Commission hopes to finalize the guidelines themselves even sooner, by March 27.

The Commission expressed concern that the falling cost of producing synthetic content is raising the risk of misleading deepfakes being disseminated at scale during elections. That is why it is focusing on the larger platforms with the reach to spread such content widely.

The tech industry's accord to combat deceptive use of AI during elections, announced at the Munich Security Conference last month and backed by several of the same platforms, is not sufficient in the EU's view. A Commission official said the forthcoming election security guidelines will go further, drawing on a combination of the DSA's due diligence rules, the Commission's experience working with platforms through the Code of Practice Against Disinformation, and the transparency labeling and AI model marking rules coming under the incoming AI Act.

Additionally, the Commission's RFIs address a broader range of generative AI risks than voter manipulation, including potential harms related to deepfake porn and other types of malicious synthetic content, whether the output is imagery, video, or audio. These risks fall within the focus areas of the EU's DSA enforcement, which also covers illegal content and child protection.

The platforms have been given until April 24 to respond to the questions on these other generative AI risks.

Smaller platforms and AI tool makers are also on the EU's radar for risk mitigation. Although they do not fall under the Commission's direct oversight as VLOPs, the EU's strategy is to apply pressure indirectly, through the larger platforms and through self-regulatory mechanisms such as the Disinformation Code and the upcoming AI Pact.

The AI Pact is expected to launch shortly after the adoption of the AI Act, which is anticipated to happen within the next few months.

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.
