There have been several instances of misinformation, impersonation, and online harm stemming from Artificial Intelligence (AI)-generated content such as deepfakes, including high-profile incidents involving Ratan Tata, Amitabh Bachchan, and Rashmika Mandanna. With a view to strengthening the country’s regulatory framework for addressing such harms, the Ministry of Electronics and Information Technology (MeitY) recently took two significant steps:
Recently, the issue of deepfakes and AI-generated misinformation came into the spotlight when Sadhguru and the Isha Foundation sought legal intervention against AI-generated deepfake videos and advertisements falsely showing them endorsing certain products. Subsequently, in Sadhguru Jagadish Vasudev v. Igor Isakov,1 the Delhi High Court directed several intermediaries, including Google LLC, to proactively remove such manipulated content and to adopt technological measures to prevent its recurrence. The order underscored the urgent need for stronger legal mechanisms to deal with deepfake-based misinformation and influenced MeitY’s decision to tighten intermediary obligations.
The notified amendment to Rule 3(1)(d)
Rule 3(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Rules) governs the process by which intermediaries such as social-media platforms, search engines, and messaging services are required to remove or disable access to unlawful content. The latest amendment by MeitY brings in several structural reforms:
Although intermediaries continue to be bound by the 36-hour compliance window, the process now incorporates a higher degree of transparency and accountability. At the same time, the amendment modifies the earlier ‘Good Samaritan’ protection, which allowed intermediaries to voluntarily remove content without losing their safe-harbour immunity under Section 79 of the Information Technology Act, 2000 (Act). The omission or narrowing of this protection could have a chilling effect on proactive moderation, since platforms may fear legal exposure even when acting in good faith.
The proposed amendment to regulate AI-generated content
Alongside the notified amendment, MeitY has proposed to add certain provisions to the Rules that specifically address the dissemination of AI-generated misinformation (Draft Rules). The Draft Rules introduce the concept of ‘Synthetically Generated Information (SGI)’ under Rule 2(1), defining it as any information artificially or algorithmically created or modified using a computer resource in a manner that makes it appear authentic or true. Further, references to ‘information’ across key provisions of the Rules, including those relating to unlawful acts and due diligence obligations, would encompass SGI unless the context suggests otherwise.
The Draft Rules propose a series of additional obligations for intermediaries:
Creating a robust regulatory framework
Even though the notified amendment and the Draft Rules operate in different domains, they are closely connected – while the notified amendment strengthens procedural safeguards and accountability for all takedown requests, the Draft Rules extend the scope of content regulation to the new realm of AI-generated material. These frameworks will intersect whenever synthetic media is alleged to violate the law, for example, where a deepfake video threatens public order or defames an individual. In such cases, takedown orders under amended Rule 3(1)(d) will apply in conjunction with the obligations under the synthetic-content framework set out in the Draft Rules.
As deliberations continue on how best to regulate SGI and related content, the envisaged regulatory framework in India should address several key gaps:
The new framework reflects the government’s growing awareness of the evolving digital landscape and the risks posed by synthetic media. Images and videos produced by generative AI tools such as OpenAI’s models and Google’s Gemini can distort reality, spread misinformation, and damage reputations, in addition to sparking disputes and litigation around intellectual property rights, authorship, ownership, and the misuse of creative works generated through AI. The notified amendment and the Draft Rules are a welcome step toward ensuring content authenticity and user protection, and should help align India’s content governance framework with global standards.
Footnotes:
1 CS(COMM) 578/2025