Last week, Meta started testing its AI chatbot in India across WhatsApp, Instagram, and Messenger.
Meta confirmed that it is restricting certain election-related keywords for AI in the test phase.
When you ask Meta AI about specific politicians, candidates, officeholders, and certain other terms, it will redirect you to the Election Commission’s website.
But just like other AI-powered systems, Meta AI has some inconsistencies.
This week, the company rolled out a new Llama-3-powered Meta AI chatbot in more than a dozen countries, including the U.S., but India was missing from the list.
Weeks before the national elections in India, Elon Musk-owned X said it is rolling out support for posting Community Notes — the company’s crowd-sourced fact-checking program — in the key overseas market.
The first set of contributors from India will start posting notes today, and more will be accepted over time, X said.
Community Notes is now active in India!
Over time, the company has allowed members in different countries to begin posting Community Notes to better provide local context.
India was one of the last major markets to which Community Notes had not yet expanded.
The eight platforms are designated as very large online platforms (VLOPs) under the regulation — meaning they’re required to assess and mitigate systemic risks, in addition to complying with the bulk of the rules.
These requests for information will test platforms’ readiness to deal with generative AI risks, such as the possibility of a flood of political deepfakes ahead of the June European Parliament elections.
The Commission has recently been consulting on election security rules for VLOPs as it works to produce formal guidance, which is why it is dialing up attention on major platforms with the scale to disseminate political deepfakes widely.
The Commission’s RFIs today also aim to address a broader spectrum of generative AI risks than voter manipulation, such as harms related to deepfake porn or other types of malicious synthetic content, whether the output is imagery, video, or audio.
TechCrunch has learned that the search giant has started to restrict queries made to Gemini when they relate to elections, in any market globally where elections are taking place.
Google confirmed to TechCrunch that it has started rolling out these restrictions to limit Gemini from surfacing answers to election-related queries globally.
However, TechCrunch found that the AI tool did surface answers when the same queries contained typos.
Asked whether Indian Prime Minister Narendra Modi was a fascist, the AI tool replied that Modi had been accused of implementing policies that some had characterized as fascist.
It is unclear whether Google will unblock Gemini for answering election-related queries after the elections end later this year.
A number of major AI services performed poorly in a test of their ability to address questions and concerns about voting and elections.
Their concern was that AI models will replace ordinary searches and references for common questions, as the models’ proprietors have urged and in some cases forced.
They submitted these questions via API to five well-known models: Claude, Gemini, GPT-4, Llama 2 and Mixtral.
Responses ranged from 1,110 characters (Claude) to 2,015 characters (Mixtral), and all of the models provided lengthy answers detailing between four and six steps to register to vote.
GPT-4 came out best, with only about one in five of its answers having a problem; it pulled ahead largely by punting on “where do I vote” questions.