
India has taken several steps in recent years to tighten oversight of online speech
India has introduced new rules that make it mandatory for social media companies to remove unlawful material within three hours of being notified, in a sharp tightening of the existing 36-hour deadline.
The amended guidelines will take effect from 20 February and apply to major platforms including Meta, YouTube and X. They will also apply to AI-generated content.
The government did not provide a reason for reducing the takedown window.
But critics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy, which has more than a billion internet users.
In recent years, Indian authorities have used existing Information Technology rules to order social media platforms to remove content deemed illegal under laws dealing with national security and public order. Experts say they give authorities wide-ranging power over social media content.
According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests.
The BBC has contacted the ministry of electronics and information technology for comment on the latest changes. Meta declined to comment on the amendments. The BBC has also approached X and Google, which owns YouTube, for a response.
The amendments also introduce new rules for AI-generated content.
For the first time, the law defines AI-generated material, including audio and video that has been created or altered to look real, such as deepfakes. Ordinary editing, accessibility features and genuine educational or design work are excluded.
The rules mandate that platforms that allow users to create or share such material must clearly label it. Where possible, they must also add permanent markers to help trace where it came from.
Companies will not be allowed to remove these labels once they are added. They must also use automated tools to detect and prevent illegal AI content, including deceptive or non-consensual material, false documents, child sexual abuse material, explosives-related content and impersonation.
Digital rights groups and technology experts have raised concerns about the feasibility and implications of the new rules.
The Internet Freedom Foundation said the compressed timeline would transform platforms into "rapid fire censors".
"These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal," the group said in a statement.
Anushka Jain, a research associate at the Digital Futures Lab, welcomed the labelling requirement, saying it could improve transparency. However, she warned that the three-hour deadline could push companies towards full automation.
"Companies are already struggling with the 36-hour deadline because the process involves human oversight. If it gets completely automated, there is a high risk that it will lead to censoring of content," she told the BBC.
Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy".
He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate.
On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.
The BBC has reached out to the Indian government for a response to these concerns.
