The Government of India (GoI) has now mandated clearer disclosure, labelling, and traceability for AI-generated (“synthetically generated”) content under the IT Rules Amendment 2026 (effective 20 Feb 2026).
But will implementation be that simple on the ground?
Reality Check — Key Challenges:
1. Detection vs Creation Speed Gap
AI tools can generate deepfakes in seconds, while reliable detection and verification technology still lags behind.
2. Platform Diversity
Large platforms may deploy automated safeguards, but smaller startups, SaaS tools, and open-source ecosystems may struggle with cost, tooling, and compliance maturity.
3. Metadata Can Be Stripped Outside Ecosystem
Even if labels and provenance markers are embedded, content shared across multiple apps, downloads, or screen recordings can lose traceability.
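The stripping problem above can be sketched in a toy model. The helper names and the simplified content object here are hypothetical, purely for illustration; real provenance schemes (such as C2PA-style signed manifests) embed metadata inside the file container, but a screenshot or screen recording defeats them in exactly the same way, because only the rendered pixels are re-captured:

```python
# Toy model of provenance loss (hypothetical helpers, not a real standard).
# Content is modelled as pixels plus a provenance label travelling with them.

def make_labelled_content(pixels):
    """Content as distributed by a compliant platform: pixels plus
    an embedded synthetic-media provenance label."""
    return {
        "pixels": pixels,
        "provenance": {"synthetic": True, "label": "AI-generated"},
    }

def screenshot(content):
    """A screenshot or screen recording captures only what is rendered:
    the pixels survive, the embedded provenance metadata does not."""
    return {"pixels": content["pixels"], "provenance": None}

original = make_labelled_content(pixels=[0, 1, 2, 3])
copy = screenshot(original)

print(original["provenance"])  # {'synthetic': True, 'label': 'AI-generated'}
print(copy["provenance"])      # None: traceability is lost
```

The copy is pixel-identical to the original yet carries no label, which is why enforcement cannot rely on embedded markers alone once content leaves the originating platform.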
4. User Behaviour & Awareness
Compliance assumes honest disclosure by users—bad actors are least likely to self-declare synthetic content.
5. Cross-Border Content Flow
Deepfakes generated outside India but consumed within India create jurisdiction and enforcement complexity.
6. Operational Burden on Intermediaries
Faster takedown timelines and continuous monitoring require significant AI moderation infrastructure and skilled manpower.
Dr. Deepak Kumar Sahu, Founder of FaceOff Technologies, says: “We’ve built a real-time detection engine that flags deepfakes with precision. Our platform is already aligned with the government's vision to combat synthetic media threats.”
Community Takeaway: The rules are a strong governance step, but technology, economics, and user intent will decide their real effectiveness. The next phase will demand “Compliance + Detection + Digital Literacy” working together.
Bottom Line: Regulation has moved ahead of misuse, but implementation will be a marathon, not a sprint.