Pura Duniya · World · 11 February 2026

Government’s new IT rules make AI content labelling mandatory; give Facebook, Instagram and other platforms 3 hours for takedowns

India has tightened the timeline for removing online content that violates its digital safety rules. Under the new directive, platforms such as YouTube, Meta, X and a range of domestic services must delete flagged material within three hours of receiving a notice. The move marks a sharp reduction from the previous 24‑hour window and signals a more aggressive stance on regulating internet content.

Why the change matters

The three‑hour rule is part of a broader effort by the Indian government to curb hate speech, misinformation, and other harmful material circulating on social media. By compressing the response time, authorities aim to limit the spread of content that could incite violence or threaten public order. The policy also aligns with India’s recent amendments to its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, which place greater responsibility on platforms for user‑generated content.

Background on India’s online rules

India’s digital ecosystem is one of the world’s largest, with hundreds of millions of monthly active users across social media, video‑sharing and messaging apps. In 2021, the government introduced the IT Rules to address concerns ranging from child safety to political manipulation. Those rules required intermediaries to appoint grievance officers, set up a rapid response mechanism and remove illegal content within 24 hours of a lawful request.

Since then, several high‑profile incidents – including the spread of false information during elections and the circulation of communal hate speech – have prompted calls for stricter enforcement. Critics argued that a 24‑hour window gave harmful material enough time to reach a wide audience, while industry groups warned that the deadline could be technically challenging for large platforms that process millions of pieces of content daily.

What the new directive entails

The latest notice, issued by the Ministry of Electronics and Information Technology, specifies that any content deemed to violate the IT Rules must be taken down within 180 minutes of a valid takedown order. The order must be accompanied by a clear description of the offending material and the legal basis for removal. Platforms are required to maintain logs of each request and the corresponding action taken, and to submit periodic compliance reports to the government.

Failure to meet the three‑hour deadline could result in monetary penalties, suspension of services, or even the revocation of a platform’s operating licence in India. The directive also emphasizes that the rule applies to all categories of content, including text, images, audio and video, regardless of the platform’s size or user base.

Major tech companies have expressed concern about the feasibility of the new timeline. A spokesperson for YouTube said the company is reviewing the directive and will work with regulators to ensure compliance while preserving user rights. Meta’s representative highlighted the need for a balanced approach that protects free expression and prevents over‑removal of legitimate content. X, formerly Twitter, noted that its automated moderation systems are already designed for rapid response, but that a three‑hour window leaves little margin for error in complex cases.

Domestic platforms, many of which operate with smaller technical teams, face a steeper challenge. Some have begun investing in additional AI‑driven moderation tools and expanding their legal teams to handle the increased workload. Industry analysts predict that the rule could accelerate the adoption of real‑time content‑filtering technologies across the Indian market.

Implications for free speech and due process

Human‑rights groups have warned that the shortened deadline could pressure platforms into removing content before fully assessing its legality. The fear is that a “better safe than sorry” approach may lead to over‑blocking, especially for material that is borderline or subject to differing legal interpretations. Critics argue that the rule does not provide sufficient safeguards for users to contest takedowns or to seek redress.

The government, however, maintains that the measure is necessary to protect public safety. Officials point to recent incidents where delayed removal of inflammatory posts contributed to communal clashes and public unrest. They stress that the three‑hour limit is intended to act as a deterrent against the rapid spread of harmful content.

India’s decision follows a global trend of governments tightening control over digital platforms. Countries such as Germany, Brazil and South Korea have introduced or are considering stricter notice‑and‑takedown requirements. While each jurisdiction tailors its rules to local concerns, the common thread is a push for faster removal of illegal or harmful material.

The Indian market’s size makes its regulatory choices particularly influential. Platforms that operate worldwide often adapt their compliance frameworks to meet the most demanding standards, meaning that policies enacted in India can ripple across other regions. Observers note that the three‑hour rule could set a new benchmark for rapid content moderation.

Possible future developments

The directive is expected to be reviewed after a six‑month trial period. During that time, regulators will assess compliance rates, the impact on platform operations, and any unintended consequences for user expression. Depending on the findings, the government may either formalize the three‑hour window as a permanent rule or adjust the timeline to address practical challenges.

In parallel, lawmakers are discussing broader reforms to the IT Rules, including clearer definitions of illegal content and stronger oversight mechanisms for grievance redressal. If enacted, these changes could provide a more structured process for both platforms and users, potentially easing some of the tension between rapid takedown demands and due‑process protections.

What users can expect

For everyday internet users in India, the most visible effect will be a quicker disappearance of posts that authorities deem illegal. Content that violates the IT Rules – such as hate speech, extremist propaganda, or child‑sexual‑exploitation material – is likely to be removed almost instantly after a notice is issued. At the same time, users may notice an increase in temporary content blocks while platforms verify the legality of disputed material.

The shift also underscores the growing responsibility placed on digital services to monitor and manage user‑generated content. As platforms invest in faster moderation tools, users may experience more automated decisions, which could affect the perceived fairness of the process.

India’s reduction of the takedown window to three hours marks a significant escalation in the country’s effort to control online content. The policy aims to curb the rapid spread of harmful material, but it also raises questions about the balance between swift enforcement and protection of free expression. How platforms adapt, how courts interpret the new rule, and how civil‑society groups respond will shape the practical impact of this directive in the months ahead. The outcome will likely influence not only India’s digital landscape but also the global conversation on how best to regulate content in an increasingly connected world.