India abandons mandatory approval for AI model launches

India has backtracked on a recent AI advisory following backlash from local and global entrepreneurs and investors. The Ministry of Electronics and IT shared an updated advisory with industry stakeholders on Friday that no longer requires them to obtain government approval before launching or deploying an AI model in the South Asian market.

Under the revised guidelines, firms are now advised to label under-tested and unreliable AI models to inform users of their potential fallibility. This change in direction comes after India’s IT ministry faced severe criticism earlier this month from prominent figures, including Martin Casado, a partner at venture firm Andreessen Horowitz, who called India’s move “a travesty.”

This is a reversal from India’s previous hands-off approach to AI regulation. Less than a year ago, the ministry had declined to regulate AI growth, citing its importance to the country’s strategic interests.

The new advisory, like the original released earlier this month, is not available online, but TechCrunch has obtained a copy. The ministry reiterated that while the advisory is not legally binding, it reflects the “future of regulation” and is expected to be followed.

The advisory highlights that AI models should not be used to share unlawful content under Indian law and must not perpetuate bias, discrimination, or threats to the electoral process. Intermediaries must also use “consent popups” or similar mechanisms to explicitly explain the unreliability of AI-generated output to users.

The ministry maintains its emphasis on addressing deepfakes and misinformation, advising intermediaries to label AI-generated content or embed it with unique metadata or identifiers. However, it no longer requires firms to devise a way to identify the "originator" of any particular message.

Kira Kim

Kira Kim is a science journalist with a background in biology and a passion for environmental issues. She is known for her clear and concise writing, as well as her ability to bring complex scientific concepts to life for a general audience.
