🗞️ Why in News The Ministry of Electronics and Information Technology (MeitY) notified the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, effective February 20, 2026 — introducing mandatory AI content labelling, a 2-hour takedown window for deepfakes and non-consensual nudity, and strengthened safe harbour conditions for major platforms.
Legal Framework and Background
The Amendment Rules operate under:
- IT Act, 2000 — Section 79: Safe harbour provision — intermediaries (social media platforms, search engines) are not liable for third-party content if they observe due diligence
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: The parent rules that introduced grievance redressal, content monitoring, and OTT self-regulation
- The 2026 Amendment strengthens the 2021 Rules rather than replacing them — it adds a new synthetic media layer
Key Provisions
1. Mandatory Labelling of AI-Generated (Synthetic) Content
Definition of synthetic media under the Rules:
Audio-visual content created or altered algorithmically that appears “indistinguishable from a natural person or real-world event.”
Exemption: Minor automatic smartphone camera touch-ups (portrait mode smoothing, auto-colour correction) are explicitly excluded from this definition.
Labelling obligations:
- Platforms must display “prominent” labelling on AI-generated or AI-altered content
- Platforms must seek user disclosure — if a user created or significantly altered content using AI, they must declare this at the time of upload
- If a user fails to disclose, the platform must either prominently label it as potentially AI-generated or remove it
- This creates a shared responsibility model — platform + user — rather than placing the burden entirely on the platform
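The shared-responsibility flow described above can be sketched as a simple decision function. This is a hypothetical illustration only; the function name, parameters, and return strings are my own shorthand, not anything prescribed by the Rules:

```python
# Hypothetical sketch of the platform-side labelling decision under the
# 2026 Amendment's shared-responsibility model. All names are illustrative.

def moderate_upload(user_disclosed_ai: bool,
                    detected_as_synthetic: bool,
                    can_label: bool = True) -> str:
    """Return the action a platform would take on an uploaded item."""
    if user_disclosed_ai:
        # User declared AI creation/alteration at upload: label prominently.
        return "label: AI-generated"
    if detected_as_synthetic:
        # No disclosure, but the platform suspects synthetic media:
        # it must either label it as potentially AI-generated or remove it.
        return "label: potentially AI-generated" if can_label else "remove"
    # No disclosure and no detection: content is published as-is.
    return "publish"

print(moderate_upload(True, False))    # label: AI-generated
print(moderate_upload(False, True))    # label: potentially AI-generated
print(moderate_upload(False, False))   # publish
```

The key point the sketch captures is that the platform's duty is triggered either by the user's declaration or by the platform's own suspicion, so neither party alone carries the whole burden.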
2. Faster Takedown Windows
The 2021 Rules required platforms to act within 24 hours (for complaints of non-consensual intimate imagery) to 36 hours (for court or government takedown orders). The 2026 Amendment introduces category-specific windows:
| Content Category | Takedown Deadline |
|---|---|
| Court-ordered illegal content | 3 hours |
| Government-ordered illegal content | 3 hours |
| Non-consensual nudity (deepfake or real) | 2 hours |
| Synthetic media violating individual rights | 2 hours |
Rationale: Viral deepfakes can cause irreversible reputational damage within hours — the 24-hour window was criticised as inadequate. The 2-hour window for non-consensual nudity aligns with platforms’ existing commitments under industry self-regulation frameworks.
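For a compliance pipeline, the category-specific deadlines in the table above reduce to a small lookup. The sketch below is illustrative; the category keys are my own shorthand, not official terms from the Rules:

```python
from datetime import datetime, timedelta

# Takedown windows under the 2026 Amendment, keyed by illustrative
# category names (the shorthand keys are assumptions, not official terms).
TAKEDOWN_WINDOWS = {
    "court_ordered": timedelta(hours=3),
    "government_ordered": timedelta(hours=3),
    "non_consensual_nudity": timedelta(hours=2),
    "synthetic_media_rights_violation": timedelta(hours=2),
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Latest compliant takedown time for an order received at received_at."""
    return received_at + TAKEDOWN_WINDOWS[category]

received = datetime(2026, 2, 20, 9, 0)
print(takedown_deadline("non_consensual_nudity", received))  # 2026-02-20 11:00:00
print(takedown_deadline("court_ordered", received))          # 2026-02-20 12:00:00
```

The design point: the deadline is measured from receipt of the order or complaint, so at platform scale the clock management itself becomes an engineering problem, which is part of the operational criticism noted below.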
3. Safe Harbour Implications (Section 79 IT Act)
Section 79 of the IT Act grants intermediaries immunity from liability for third-party content — provided they do not:
- “Initiate the transmission”
- “Select the receiver”
- “Modify the information contained in the transmission”
- And observe “due diligence” as prescribed by rules
Amendment impact: Non-compliance with the new labelling, takedown, and disclosure obligations constitutes failed due diligence. Specifically, platforms that “knowingly permit, promote, or fail to act” on violative synthetic content lose Section 79 safe harbour protection — becoming directly liable.
This is a significant escalation from the 2021 framework, which primarily focused on grievance redressal and content removal.
4. Decentralised Enforcement
States are authorised to designate multiple officers to issue takedown orders, replacing what was previously a more centralised process. This decentralises enforcement but raises concerns about inconsistent application and potential misuse by state authorities.
Constitutional and Rights Dimensions
Article 19(1)(a) — Freedom of Speech and Expression: Mandatory labelling of AI content is arguably a reasonable restriction under Article 19(2) (public order, decency, morality, national security). The Supreme Court has upheld proportionate digital speech regulation — but a blanket “label all AI content” requirement may face challenges:
- Artistic AI content (AI-assisted music, digital art) would require labelling even where no deception is intended
- Documentary filmmakers using AI colour restoration of historical footage face the same requirement
Article 21 — Right to Privacy: Protecting individuals from non-consensual deepfakes falls squarely within the privacy-dignity dimension of Article 21 (as expanded in K.S. Puttaswamy v. Union of India, 2017).
Global comparison:
- EU AI Act (2024): Requires disclosure for AI-generated deepfakes unless for artistic or satirical purposes; prohibits real-time biometric surveillance in public spaces
- USA: No federal AI law; California AB 602 (2019) and AB 730 (2019) address deepfakes in pornographic and political contexts respectively; only voluntary commitments at the federal level
- China: Deepfake Regulations (2022) require real-name registration for deepfake creators; mandatory watermarking
Challenges in Implementation
Detection technology gap: Current AI-generated content detection tools have accuracy rates of 60–85% — insufficient for regulatory enforcement. What appears to be a clear human face may be AI-generated; what appears AI-generated may be human. This makes platform-side labelling enforcement difficult.
Scale: Meta’s platforms alone handle ~3 billion users globally; the volume of AI-generated content is expanding exponentially. The 2-hour window is operationally demanding at this scale.
Creative industries: The Entertainment Software Association, music labels, and film studios have raised concerns about over-labelling requirements stifling creative AI use (AI-composed soundtracks, AI-assisted screenplay development).
UPSC Relevance
Prelims: IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (parent framework), IT Act 2000 Section 79 (safe harbour), synthetic media definition under 2026 Rules, deepfakes, 2-hour and 3-hour takedown windows, MeitY, EU AI Act 2024, Article 19(1)(a) and 19(2), K.S. Puttaswamy case 2017 (Right to Privacy).
Mains GS-2: Government policies on digital governance; regulation of social media and OTT platforms; intermediary liability framework; right to privacy and digital rights; comparison with global AI regulatory frameworks. GS-3: Artificial intelligence regulation; cybersecurity; deepfakes and disinformation; technology and society.
📌 Facts Corner — Knowledgepedia
IT Amendment Rules 2026 — Key Provisions:
- Effective date: February 20, 2026
- Legal basis: Amends IT (Intermediary Guidelines) Rules 2021 under IT Act 2000
- Issuing ministry: MeitY (Ministry of Electronics and Information Technology)
Takedown Timelines:
- Court/govt-ordered illegal content: 3 hours
- Non-consensual nudity + deepfakes: 2 hours
- Previous timeline (2021 Rules): 24–36 hours
Synthetic Media:
- Definition: Audio-visual content created/altered algorithmically appearing “indistinguishable from a natural person or real-world event”
- Exemption: Minor automatic smartphone camera enhancements
- Mandatory: Prominent labelling of all AI-generated content
- User duty: Must disclose AI creation/alteration at upload
- Platform consequence for non-disclosure by user: Label prominently or remove
Safe Harbour (Section 79 IT Act):
- Protects intermediaries from third-party content liability
- Lost if platform “knowingly permits, promotes, or fails to act” on violative synthetic content
Key Constitutional Provisions:
- Article 19(1)(a): Freedom of speech and expression
- Article 19(2): Reasonable restrictions (basis for digital speech regulation)
- Article 21: Right to privacy (K.S. Puttaswamy v. UoI, 2017)
Global Comparisons:
- EU AI Act (2024): Mandatory disclosure + prohibited real-time biometric surveillance
- China Deepfake Regulations (2022): Real-name registration + mandatory watermarking
- USA: No federal law; state-level only (California AB 602, AB 730)
Other Relevant Facts:
- IT Rules 2021: Covered OTT platforms under self-regulatory framework; introduced Grievance Officer requirement; three-tier content moderation
- IT Act, 2000: significantly amended by the Information Technology (Amendment) Act, 2008, which added cybercrime provisions
- CERT-In (Computer Emergency Response Team India): Under MeitY; handles cybersecurity incidents
Sources: Drishti IAS, Next IAS