India Reaffirms 3-Hour Takedown Rule for Big Tech Platforms
The Indian government has stood firm on its new mandate requiring internet platforms to remove unlawful content within three hours of receiving an official order. This decision comes despite concerns raised by major technology companies regarding the practical challenges of such a short deadline.
Government Holds High-Level Meeting with Big Tech
On Thursday, the Ministry of Electronics and Information Technology (MeitY) met with representatives of “Big Tech” firms to discuss the 2026 amendments to the IT Rules. These rules specifically target the spread of deepfakes and other harmful AI-generated content. Notable attendees included:
- Meta
- OpenAI
- The Internet and Mobile Association of India (IAMAI)
During the meeting, the government clarified that compliance with these new rules is a top priority and is not open for negotiation.
Tech Giants Flag Operational Challenges
Industry leaders have expressed concern over the “three-hour window.” Rob Sherman, Meta’s Vice President of Policy, previously described the timeline as “operationally challenging.” He noted that while the goal of stopping viral misinformation is important, such rapid takedowns require a careful assessment that is difficult to complete in just 180 minutes.
Why the Takedown Time Was Reduced
The Ministry acknowledged that three hours is a tight deadline but signaled that no changes would be made. Previously, intermediaries had up to 36 hours to comply with government orders under Section 79 of the IT Act.
The timeline was shortened specifically to combat deepfakes and misleading AI content, which can go viral and cause widespread damage in a very short amount of time.
The Stakes for Platforms: “Safe Harbour” at Risk
The government warned that it cannot protect firms that fail to comply with these rules. Under Indian law, platforms enjoy “Safe Harbour” protection, meaning they are not held legally responsible for what users post. However, if a platform misses the three-hour deadline, it could lose this immunity and face direct legal action.
The government also urged platforms to treat identifying and removing harmful “synthetic content” (AI-generated media) as the primary challenge, rather than simply labeling such content.