
Safe Harbour and Social Media in India: A Shift Towards Greater Accountability

May 16, 2025

The Indian government is in the midst of a significant overhaul of its digital governance framework, with a particular focus on re-evaluating the “safe harbour” protections provided to online intermediaries such as Facebook, YouTube, and X (formerly Twitter) under the Information Technology Act, 2000. This legal immunity, laid out in Section 79, has historically shielded platforms from liability for content posted by users, provided they comply with certain due diligence obligations. The objective was to nurture innovation, promote free speech, and enable digital platforms to grow without fear of legal reprisals for user actions.

However, the digital landscape of 2025 is far more complex. Challenges such as misinformation, hate speech, deepfakes, cyber fraud, and the weaponization of social media for political or extremist agendas have raised concerns about the adequacy of this immunity. The central government now believes that platforms must be more accountable for the content they host, especially given their outsized influence in shaping public discourse and opinion in India’s digitally connected society.


Safe Harbour and Its Conditional Nature

Section 79 of the IT Act was never intended to be an unconditional shield. The protection is contingent on intermediaries acting upon “actual knowledge” of unlawful content—typically a court order or a formal notification from the government. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 added a further layer of obligations: significant social media intermediaries must appoint a Chief Compliance Officer, a Resident Grievance Officer, and a Nodal Contact Person (all based in India), remove flagged content within 36 hours of receiving a court order or government notification, and publish monthly compliance reports. These rules were meant to usher in greater transparency and faster grievance redressal. However, the government has pointed out that many large platforms have either failed to comply fully or have delayed acting in cases involving hate speech, misinformation, or threats to public safety. This has led to growing frustration within the administration, as delays in moderating dangerous content can have serious real-world consequences, from communal unrest to national security threats.


The 2023 Amendment and Legal Pushback

In April 2023, the government took a controversial step by amending the IT Rules to give the Press Information Bureau’s (PIB) Fact Check Unit the power to label government-related content as “fake news.” If flagged content wasn’t taken down promptly by intermediaries, they would risk losing their safe harbour protection under Section 79. This was seen by the government as a measure to combat the spread of false narratives, particularly those aimed at misleading the public or undermining official institutions.
However, the amendment triggered a sharp backlash from civil society, legal experts, and media groups. Critics argued that giving a government-controlled body the authority to declare content “fake”, without an independent appeals process, opened the door to arbitrary censorship. These fears found judicial support in 2024: the Bombay High Court delivered a split verdict on the challenge in January, and later that year the tie-breaking judge struck down the amendment, holding that it lacked clear legal safeguards and was prone to misuse. Although the government has contested the outcome, the ruling was seen as a landmark moment in the debate over digital rights and censorship.


Current Status and Developments

The most recent and significant development in this ongoing debate occurred in early May 2025, when the Parliamentary Standing Committee on Communications and Information Technology convened to deliberate on national security, misinformation, and digital platform accountability. This high-level meeting followed the Pahalgam terror attack, an incident that triggered a national conversation about how extremist content and fake news can proliferate unchecked online.
During the meeting, the Ministry of Information and Broadcasting (I&B) and the Ministry of Electronics and Information Technology (MeitY) were called upon to present detailed strategies for cracking down on social media platforms and digital influencers suspected of undermining national interests. The prevailing government view was unambiguous: Section 79’s safe harbour protections are outdated and insufficient in the current digital threat environment. In response, the proposed Digital India Act (DIA) – still in its consultation phase – aims to introduce a risk-based regulatory framework. This may involve either revoking safe harbour entirely or significantly tightening the conditions under which it applies, including stricter content moderation norms, differentiated regulation for various intermediary types, and heftier penalties for violations.

While the government maintains it is not rushing the legislation, the policy direction is clear: India is moving towards a framework where platforms must demonstrate active accountability, especially in areas touching on national security, misinformation, and harmful content. This marks a significant evolution in India’s digital governance philosophy, from permissive and innovation-driven to security-focused and responsibility-oriented.


The Digital India Act: A Step Towards Online Accountability

The upcoming Digital India Act (DIA) is set to modernize India’s digital laws, replacing the IT Act of 2000. It aims to address growing concerns such as misinformation, cybercrime, and platform accountability. Recognizing that digital platforms now shape public opinion and control the flow of content, the government plans stricter rules for intermediaries, especially social media platforms. Under the current safe harbour regime, platforms are not held liable for user content so long as they observe due diligence; with increasing misuse, however, the government is pushing for tougher conditions or even partial withdrawal of this immunity. The DIA proposes differentiated regulation based on platform type and risk level. By balancing freedom of speech with responsibility and user safety, the Act seeks to create a more transparent, accountable, and secure online environment in India’s evolving digital landscape.


Quant LegalTech India Pvt. Ltd
8th Floor, SN Towers, 25/2, MG Road, Bangalore - 01, Karnataka


Quant LegalTech Pte. Ltd
1 North Bridge Road, #08-08 High Street Centre Singapore 179094

© 2025. All rights reserved.