The European Commission has launched an initiative to gather information from major platforms on their efforts to mitigate risks associated with the use of generative AI. The Commission has sent formal requests for information (RFIs) to Google, Meta, Microsoft, Snap, TikTok, and X, inquiring about the measures they have in place to address the potential harms of generative AI on their platforms.
Under the EU's revamped e-commerce and online governance rulebook, the Digital Services Act (DSA), these platforms are designated as Very Large Online Platforms (VLOPs). This means that, beyond complying with the bulk of the DSA's rules, they are also required to assess and mitigate systemic risks.
The Commission’s inquiries specifically target Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, as they are designated as VLOPs under the DSA. These platforms have been asked to provide more information on their risk management strategies related to generative AI, including the handling of “hallucinations” where AI generates false information, the viral spread of deepfakes, and the automated manipulation of services that may mislead voters.
In a press release on Thursday, the Commission stressed that these questions apply to both the distribution and the creation of generative AI content. It has also requested internal documents and risk assessments from the platforms concerning the impact of generative AI on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, child protection, and mental well-being.
The Commission also announced plans to conduct stress tests after Easter to assess the platforms’ preparedness for handling generative AI risks, such as a potential influx of political deepfakes leading up to the upcoming European Parliament elections in June. “We want to push the platforms to tell us about their preparations and be as prepared as possible for any incidents we may detect before the elections,” a senior Commission official stated, speaking anonymously.
Election security is a top priority for the EU, and the Commission is currently working on formal guidance for VLOPs in this area. It has given the platforms until April 3 to provide the information related to election security, which it considers an "urgent" request, and it aims to finalize the election security guidelines even sooner, by March 27.
The Commission expressed concerns over the decreasing cost of producing synthetic content, which increases the risks of widespread dissemination of misleading deepfakes during elections. Therefore, they are focusing on larger platforms with the potential to spread such content widely.
The tech industry’s agreement to combat deceptive use of AI during elections, announced at the Munich Security Conference last month and backed by several platforms, is not sufficient in the EU’s view. A Commission official stated that their forthcoming election security guidelines will go further, utilizing a combination of the DSA’s due diligence rules, their experience working with platforms through the Code of Practice Against Disinformation, and future transparency labeling and AI model marking rules under the upcoming AI Act.
Additionally, the Commission’s RFIs address a broad range of risks associated with generative AI, not just voter manipulation. These include potential harms related to deepfake porn and other malicious synthetic content, in both video and audio formats. These risks fall under the focus areas of the EU’s DSA enforcement, which also include illegal content and child protection.
The platforms have been given until April 24 to respond to these RFIs related to other generative AI risks.
Smaller platforms and AI tool makers are also on the EU’s radar for potential risk mitigation efforts. Although they may not fall under the Commission’s direct oversight as VLOPs, the EU’s strategy is to apply pressure indirectly, through the larger platforms and through self-regulatory mechanisms such as the Disinformation Code and the upcoming AI Pact.
The AI Pact is expected to launch shortly after the adoption of the AI Act, which is anticipated to happen within the next few months.