Government Proposes Amendments to IT Rules to Mandate Labelling of AI-Generated Content

Pranav B Prem


The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to make clear labelling of artificial intelligence (AI)-generated content mandatory. The move aims to enhance accountability among major social media and online platforms, such as YouTube, Facebook, and X (formerly Twitter), and to curb the proliferation of deepfakes and other forms of deceptive content online.

The draft notification, titled the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, has been issued by the Central Government in exercise of powers under Section 87(1) read with clauses (z) and (zg) of Section 87(2) of the Information Technology Act, 2000. The proposed changes introduce the concept of “synthetically generated information” and impose new labelling and due diligence obligations on intermediaries.

Definition of Synthetically Generated Information

The amendment proposes the insertion of Rule 2(1)(wa), defining “synthetically generated information” as: “Information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.” This definition brings within the ambit of intermediary responsibilities all forms of AI-generated, altered, or manipulated content, including deepfake videos, synthetic audio, and digitally modified images.

A new sub-rule (1A) under Rule 2 further clarifies that wherever the IT Rules refer to “information” in connection with unlawful acts under Rule 3(1)(b), Rule 3(1)(d), or Rule 4, such references shall also include synthetically generated information unless the context otherwise requires.

Labelling and Identification Requirements

The proposed Rule 3(3) introduces stringent due diligence obligations for intermediaries that provide tools or services facilitating the creation or modification of synthetically generated information. Such intermediaries will be required to:

  • Prominently label, or embed with a permanent and unique metadata identifier, every piece of synthetically generated information.

  • Ensure that the label is visibly displayed over at least 10% of the visual surface area, or audibly announced over at least 10% of the audio duration, indicating that the content is synthetically generated.

  • Prevent any modification, suppression, or removal of such labels or identifiers.

These measures are intended to ensure transparency for users interacting with AI-generated content and to make clear distinctions between authentic and artificial material.
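To illustrate, the 10% thresholds in the proposed rule reduce to simple arithmetic that a platform could apply when sizing a label. The sketch below is purely illustrative: the function names and the choice to round the pixel threshold up are assumptions, not terms of the draft rules.

```python
# Illustrative helpers for the draft Rule 3(3) thresholds: a label must
# cover at least 10% of the visual surface area, or be announced over at
# least 10% of the audio duration. All names here are hypothetical.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area (in pixels) for a visual of the given size:
    10% of the total surface area, rounded up."""
    return -(-(width_px * height_px) // 10)  # integer ceiling of 10%

def min_label_duration(audio_seconds: float) -> float:
    """Minimum audible-announcement duration: 10% of the clip length."""
    return audio_seconds * 0.10

def label_meets_threshold(label_w: int, label_h: int,
                          img_w: int, img_h: int) -> bool:
    """Check whether a label of the given size satisfies the 10% rule."""
    return label_w * label_h >= min_label_area(img_w, img_h)
```

For a 1920x1080 video frame, for example, the label would need to cover at least 207,360 pixels, which a full-width banner 108 pixels tall would satisfy.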

New Obligations for Significant Social Media Intermediaries

A new sub-rule (1A) under Rule 4 introduces additional obligations for Significant Social Media Intermediaries (SSMIs)—platforms such as Facebook, X, and YouTube. Before permitting display or upload of any content, such intermediaries must:

  1. Require users to declare whether the content being uploaded is synthetically generated.

  2. Deploy technical measures, including automated tools, to verify the accuracy of user declarations.

  3. Clearly display labels or notices on content identified as synthetically generated, ensuring visibility to users.

Further, intermediaries that knowingly permit, promote, or fail to act upon the publication of deceptive synthetically generated content will be deemed to have failed in their due diligence obligations. An explanatory clause clarifies that these intermediaries are responsible for taking “reasonable and proportionate technical measures” to verify user declarations and ensure that no synthetically generated information is published without appropriate labelling.
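The declare-verify-label sequence described above can be sketched as a simple upload pipeline. This is a hypothetical illustration of the obligations, not an implementation prescribed by the draft rules; the function name, the label text, and the detector interface are all assumptions.

```python
from typing import Callable

def handle_upload(content: str,
                  user_declared_synthetic: bool,
                  detector: Callable[[str], bool]) -> dict:
    """Hypothetical SSMI upload pipeline reflecting draft Rule 4(1A):
    1. collect the user's declaration,
    2. run an automated tool over the content to verify it,
    3. attach a visible label whenever the content is synthetic."""
    detected = detector(content)  # automated verification measure
    is_synthetic = user_declared_synthetic or detected
    return {
        "content": content,
        "synthetic": is_synthetic,
        "label": "AI-generated content" if is_synthetic else None,
    }
```

Note that this sketch treats either signal, the user's declaration or the automated check, as sufficient to trigger labelling; that is one plausible reading of the requirement that undeclared but detected synthetic content must still be labelled.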

The Ministry emphasized that deepfake and deceptive synthetic content pose significant risks by spreading misinformation, harming reputations, influencing elections, and facilitating fraud. By mandating clear labelling, the government seeks to ensure that users are informed when content is artificially created or manipulated. The draft rules are currently open for public consultation, allowing stakeholders and citizens to provide feedback before the final notification.

Suggestions and responses may be sent to itrules.consultation@meity.gov.in by November 6, 2025. Once finalized, these amendments will legally require both content creation tools and social media intermediaries to update their systems and policies to comply with the new transparency standards.
