Meta flags challenges with India’s three-hour content takedown rule

NEW DELHI: Meta Platforms has expressed concern over India’s new rule requiring platforms to remove certain harmful content within three hours of receiving a valid order, saying the deadline may be difficult to meet in practice.
“Operationally, three hours (the removal window) is going to be really challenging,” Rob Sherman, Meta’s vice president of policy and chief privacy officer, said at a media roundtable in New Delhi on Tuesday. “Traditionally, the Indian government has been very consultative on these issues. I think this is an example where we’re concerned that if they had come to us and talked to us about this, we would have talked about some of the operational challenges.”
On February 10, the Centre formally notified amendments to the existing Information Technology Rules aimed at combating the misuse of artificial intelligence (AI) through deepfakes and other sensitive “synthetic” content, introducing a stricter compliance regime for social media companies such as X, Facebook, Instagram and Telegram. Companies that fall within the definition of intermediary will have to comply with the rules from February 20.
Under the new rules, deadlines for removing objectionable material have been sharply tightened. Non-consensual sexual images, including deepfakes, must be removed within two hours, down from the earlier 24 hours. Other unlawful content must be removed within three hours of a user report or a government or court order, down from the previous 36 hours.
Sherman said the company uses a wide variety of tools and techniques to detect content that violates terms of service or community standards, but the real challenge under the new rules will be the logistics of properly investigating and verifying requests in such a short time frame.
“When we get a request from the government (to remove content), we’re going to have to review it, investigate it, and verify it ourselves. So that’s something that takes a little bit of time, especially if there’s something we need to review. It’s not usually possible to turn that around within three hours,” Sherman said.
The tighter timelines come as the misuse of AI through deepfakes and non-consensual sexual images increasingly affects users. The government, however, has argued that compliance should not be an issue for the platforms, given their technological capabilities.
Communications and IT Minister Ashwini Vaishnaw on Tuesday said the government is in talks with social media platforms on tackling deepfakes and age-based restrictions to protect society from the harms of AI.
“…We’ve done a lot of work at Meta to create things like teen accounts so that there are parental controls so parents can make the right choices for themselves or based on how their kids use social media,” Sherman said, adding that Australia-style social media bans for teens probably do not serve the goal they are meant to serve.
He added that it might be prudent to classify young people by age, similar to the approach taken in the United Kingdom.
Privacy law increases compliance burden
On timelines for compliance with the Digital Personal Data Protection (DPDP) Act, Sherman noted that most countries provide a transition period of around two years to implement new privacy rules, but the Indian government has significantly shortened that timeline.
The rules, notified in November last year, require companies to comply with the provisions of the Act within 12-18 months, including appointing consent managers and data protection officers, putting systems in place for explicit user consent, and reporting data breaches within 72 hours.
“We’re still in the process of looking at what that will mean in terms of how we comply. We have every confidence that we’ll do the best we can, but we’re still figuring out exactly what that will look like,” Sherman said.
Under the DPDP Rules, 2025, the government has the power to direct the processing and storage of certain categories of personal data only in India.
Sherman said the Indian government’s discussions about localization often focus on “specific types of information that have national security implications.” He added that strict localization requirements for platforms like WhatsApp, Instagram and Facebook would logically be difficult because these platforms are designed for cross-border communication, which inherently requires data to be stored in multiple global locations to work.