Cyberbullying and Trolling in the Age of AI: Legal and Social Remedies in India
By Adv. (Dr.) Prashant Mali (Cyber & Privacy Law Practitioner and Policy Thought Leader)
I. Introduction
India’s digital transformation has been rapid and massive. In the last decade hundreds of millions of Indians joined social media, mobile messaging, and app ecosystems; smartphones became near-ubiquitous and affordable data accelerated the spread of online life into the smallest towns. The benefits are obvious: financial inclusion, civic participation, education, health services and cultural expression now flow through digital channels. But this tectonic shift has also produced an unanticipated and dangerous current: the exponential rise of cyberbullying and trolling, now supercharged by artificial intelligence (AI) and automation.
Cyberbullying and trolling are not simply rude behaviour; they are organised, repeatable harms that shred trust, attack dignity, and sometimes inflict irreversible real-world damage: reputational ruin, emotional trauma, forced migration, and, in extreme cases, suicide and loss of life. In India, these harms intersect with deep social faultlines: gendered violence, caste hostility, religious polarisation, and regional and linguistic divisions make certain individuals and communities uniquely vulnerable.
AI is a force multiplier. Deepfakes, voice cloning, automated doxxing, bot armies, and AI-generated personalized harassment make the scale and sophistication of abuse unprecedented. Cheap tools democratize weaponized content production; a teenager with a laptop can generate targeted harassment at scale; a motivated organised group can automate reputation attacks or communal provocation quickly and with chilling effectiveness.
This article surveys this new landscape and proposes a holistic response: legal analysis grounded in India’s evolving statutes (including the Bharatiya Nyaya Sanhita, the Information Technology Act and Rules, the Digital Personal Data Protection Act and Rules, POCSO, Juvenile Justice law, workplace law and school guidelines), social and technological measures, and policy recommendations to keep pace with the technology. The objective is pragmatic: to equip policymakers, lawyers, educators, platforms, law-enforcement, civil society and mental-health professionals with a detailed framework of remedies and prevention strategies that reflect India’s legal traditions and its modern regulatory reforms.
II. Understanding Cyberbullying and Trolling in the Indian Context
Definitions and distinctions. Cyberbullying typically refers to repeated, targeted harassment or aggression using electronic means that is meant to intimidate, humiliate, or coerce a person. Trolling is a broader category: sometimes provocative, sometimes satirical, sometimes malicious. When trolling escalates into targeted, sustained abuse, it enters cyberbullying territory. Important distinctions for the legal response: (a) episodic expression vs systematic harassment; (b) anonymous mass mischief vs direct targeted abuse; (c) satire and legitimate criticism vs defamatory or threatening content. These distinctions matter when balancing free expression with the need to protect victims.
Forms of abuse. In India the common forms include:
Harassment and threats (text, calls, emails).
Doxing — publishing private data (KYC, addresses, phone numbers).
Image morphing / revenge porn and deepfake pornography.
Impersonation (fake accounts, cloned profiles).
Outing, exclusion, and coordinated harassment on group chats (WhatsApp, Telegram).
Takedown circumvention — uploading content across platforms and jurisdictions.
Caste-based and religious insults that are intentionally provocative and sometimes criminally directed.
Vulnerable groups. Women and girls remain disproportionately targeted (especially with sexualized content and image-based abuse), but minors, journalists, activists, Dalits, Muslims and other minorities, LGBTQ+ people, and public interest defenders face acute vulnerabilities. Caste slurs, coordinated mob-trolling during elections, and targeted campaigns of harassment against local journalists are recurring patterns in India’s public life.
Platforms and tools. The landscape of harm is platform-diverse: WhatsApp and Telegram for private group harassment, Facebook/Instagram and X for public mobbing, gaming platforms for adolescent abuse, and niche apps for targeted campaigns. Closed groups on messaging apps are particularly pernicious because they enable large-scale distribution of intimate material and moral policing without public scrutiny.
Psychological and social impacts. The consequences range from anxiety, depression and PTSD, to social ostracism, career loss and suicide. Student suicides attributed (directly or indirectly) to online harassment appear periodically in media accounts and underline the urgency of systemic remedies. Victims face not only the direct assault but secondary victimization through slow or inadequate institutional responses.
Anonymity and mob mentality. Anonymity enables abuse; the herd mentality of online mobs amplifies harm with viral reach and speed. Indian cultural contexts (where honor and modesty often have outsized social value) can make reputational harm particularly devastating.
III. The AI Revolution: New Dimensions of Digital Harm
AI technologies such as large language models (LLMs), generative adversarial networks (GANs), voice synthesis and automated scraping tools have introduced a profound asymmetry: ordinary users now have access to capabilities previously reserved for skilled operators. The result is a widening of both the volume and the quality of harmful content.
Deepfake technology: creation and dissemination. Deepfakes use machine learning to generate photo-realistic images, videos and audio in which a person appears to say or do things they never did. Technically, this involves training models on publicly available images and audio to map and reproduce a subject’s facial expressions and voice characteristics. For non-technical readers: imagine a system that learns the way someone talks and their facial mannerisms, and then maps those patterns onto new content; the result can be terrifyingly realistic.
Impact and examples. Deepfake pornographic imagery of women and public figures has been circulated in India, and courts have already entertained disputes; high-profile examples and resulting civil suits show that Indian courts are beginning to respond to personality-right claims and takedown requests. Law enforcement examples show arrests for sharing politically sensitive deepfakes that threaten public order. Deepfakes blur the line between evidence and manipulation, making forensic authentication and speedy takedown vital. (See reported deepfake incidents and enforcement actions.)
Automated harassment and bot networks. Bot armies scripted to amplify hashtags, mass-reply to posts, or systematically report accounts can swamp moderation systems and create artificial trends. These networks can be rented or orchestrated, making it cheap to manufacture outrage or to drown out dissenting voices.
AI-personalized abuse. AI models can synthesize data from social profiles to generate harassment tailored to an individual’s triggers — exploiting family dynamics, cultural sensitivities, or past trauma to maximize harm. The personalization increases the emotional damage and reduces predictability.
Voice cloning and regional languages. Advances in voice synthesis allow voice clones in Indian regional languages — attackers can produce realistic threatening calls, falsified audio evidence, or simulated confessions. Courts and regulators are wrestling with how to treat such audio as digital evidence.
Adversarial AI and content moderation evasion. Bad actors can use adversarial techniques to make harmful content evade platform moderation (e.g., slightly altering images or wording to avoid keyword filters). This cat-and-mouse dynamic complicates automated enforcement.
Synthetic identities and scale. With AI, creating large numbers of synthetic profiles (fake personas with consistent histories) is faster than ever. These identities seed false credibility and coordinate harassment, making platform detection harder.
The accessibility problem. Crucially, harmful AI is no longer a clandestine capability. Tools and tutorials proliferate, hosted models and services offer “deepfake as a service,” and open-source models can be fine-tuned for abuse. This democratization of capability means regulation must address both supply (platforms and model providers) and demand (end-users and facilitators).
IV. Comprehensive Legal Framework Analysis
The Indian legal architecture for cyberbullying and trolling is evolving rapidly. Below, I analyze each major statutory instrument, interpret critical provisions, and explain practical application to AI-enabled abuse. Key statutory documents include the Bharatiya Nyaya Sanhita (BNS) 2023, the Information Technology Act 2000, the DPDP Act 2023 and Rules 2025, the POCSO Act 2012, the Juvenile Justice Act 2015, and the Sexual Harassment of Women at Workplace Act. For school contexts, NCPCR Guidelines are central.
Note on citations: Where I reference provisions or rule texts I have relied on the official codified texts and recent rule notifications. For example, BNS 2023 as published in the India Code, DPDP Rules notifications and NCPCR guidelines.
A. The Bharatiya Nyaya Sanhita (BNS), 2023
Statutory context: The BNS replaces the Indian Penal Code and modernises criminal law into a revised code; it includes offences for harassment, stalking, defamation-type harms, and offences that address digital misuse. The BNS also introduces sections tailored to online harms and attempts to streamline sentencing and classifications.
Relevant offences (examples and application):
Harassment & stalking provisions (BNS sections on stalking and harassment): These criminalize persistent surveillance, virtual following, and repeated intrusive contact. Under the BNS, conduct that was formerly prosecuted under varied IPC provisions is now consolidated, with clearer definitions covering modern means of communication: messages, posts, and persistent tagging can amount to stalking. Application: a campaign of continuous tagging and threatening DMs over weeks can be framed as stalking.
Offences akin to defamation and reputation harm: BNS preserves criminal remedies for particularly malicious false imputations that harm reputation, while courts continue to rely on civil law (defamation suits) where appropriate. Deepfakes published to defame can be pursued under the relevant BNS provisions.
Digital offences: The BNS contains provisions penalizing unauthorized use of identity, impersonation, and misuse of digital devices; these apply directly to cloned accounts, fake profiles, and deepfakes. The new text contemplates electronic evidence and modern digital modalities by design.
Gaps/Challenges under BNS. While the BNS clarifies several offences, it is not a panacea. Prosecutorial expertise, forensic capacity, and procedural speed remain constraints. Also, the line between free expression and criminality continues to require careful judicial calibration.
B. Information Technology Act, 2000 (IT Act) and IT Rules
Context: The IT Act remains the primary statutory instrument dealing with electronic offences and intermediary liability. Section numbers and historical jurisprudence (e.g., the striking down of Section 66A by the Supreme Court in Shreya Singhal v. UOI) remain critical to constitutional analysis.
Key provisions & implications:
Section 66C/66D (identity theft and cheating by personation): These criminalize identity misuse and electronic impersonation—central for prosecuting fake accounts, cloned profiles and synthetic identity fraud.
Section 66E (violation of privacy): This is directly applicable to non-consensual image capture and distribution, including morphed images and images captured or shared without consent.
Sections 67/67A/67B (obscenity, child pornography): These are invoked for pornographic content including explicit deepfakes and sexually explicit AI-generated imagery. Section 67B is particularly relevant to child sexual abuse material (CSAM) online.
Section 79 (intermediary liability & safe harbour): This is perhaps the most consequential provision for platform accountability. Intermediaries that comply with prescribed due diligence under the Intermediary Guidelines can maintain safe harbour from liability for third-party content. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended) impose obligations: traceability (in certain cases), grievance redressal mechanisms, proactive takedown timelines, and ‘due diligence’ requirements. These obligations intersect with cyberbullying because platforms can be required to remove harmful content quickly and to maintain grievance mechanisms. However, enforcement and the precise mechanics of traceability raise privacy and constitutional concerns.
Recent amendments and rules (2021-2025) have increased obligations on significant social media intermediaries, added emergency takedown powers and introduced rules for large-platform governance. The 2025 IT Amendment Rules further refine removal procedures and procedural safeguards for takedowns.
Practical application: For victims of cyberbullying, intermediaries’ grievance redressal and takedown mechanisms are the frontline: quick takedown of explicit content, blocking channels of distribution, and providing account-level details under lawful process. However, traceability requests (to reveal the first originator) have been controversial; courts weigh privacy interests against investigative needs.
C. Digital Personal Data Protection Act (DPDPA / DPDP), 2023 and DPDP Rules 2025
Context: DPDP establishes a rights-based regime for personal data, imposing obligations on Data Fiduciaries and processors, rights for data principals (access, correction, erasure), and breach notification and penalty regimes.
Relevance for platform accountability and victim remedies:
Data fiduciary obligations require purpose limitation, lawful processing, data minimisation and security safeguards; platforms processing user-generated content must adopt these principles when personal data is involved (e.g., doxing content).
Rights of data principals: victims can request erasure or correction of personal data and can assert claims when platforms retain data that enables harassment. The Act’s enforcement mechanisms and penalties (which have been calibrated into substantial figures under the Rules) create deterrence. Recent Rule notifications expand procedural expectations including enhanced protections for children and breach reporting frameworks.
Breach notification: Data fiduciaries must notify the Data Protection Board and affected data principals. Rapid notification helps preserve evidence and triggers regulator involvement.
Platform obligations for content moderation and grievance redressal under the DPDP Rules align with IT intermediary obligations; platforms have dual duties under both regimes.
Gaps/Challenges: Enforcement maturity (Data Protection Board capacity), cross-jurisdictional data flow issues, and the tension between privacy rights and content-origin traceability remain practical hurdles. Also, AI training data questions (whether platforms may have used victims’ images to train models) raise novel claims under the DPDP Act.
D. Protection of Children from Sexual Offences (POCSO) Act, 2012
Context: POCSO criminalizes sexual offences against children and contains provisions on child sexual abuse material (CSAM), solicitation, and storage/distribution of pornographic material involving minors. Sections 11, 13, 14 and 15 are pivotal: they address sexual harassment of a child, use of a child for pornographic purposes, punishment for using a child for pornographic purposes, and punishment for storage of pornographic material involving a child. The Act also mandates reporting obligations and speedy prosecution for offences involving minors.
Application in an AI era: AI-generated sexual content purporting to depict a minor raises immediate legal and ethical issues. If the material is synthetic (no real child), legal treatment is evolving: many jurisdictions treat sexualized deepfakes involving minors as equivalent to CSAM because the harm and distribution paths are similar and they facilitate grooming and exploitation. Indian law enforcement treats creation and distribution of child-sexual content with severity; POCSO provides a firm criminal framework when minors are depicted or exploited.
Mandatory reporting and institutional responsibility: Schools, online platforms, ISPs and intermediaries must adopt reporting pathways; delays or suppression can attract liability and severely harm victims.
E. Juvenile Justice (Care and Protection of Children) Act, 2015
Context: The Juvenile Justice Act defines a child (under 18) and provides mechanisms for rehabilitation, protection and support. In cyberbullying contexts, children can be both victims and perpetrators. The Act prescribes Child Welfare Committees, rehabilitation, and special procedures for children in need of care and protection.
Application: Schools and authorities must treat children subjected to cyberbullying as children in need of care and protection where necessary, enabling protective orders and rehabilitation. If a juvenile perpetrates cyber abuse, the juvenile justice system’s rehabilitative approach applies, with potential diversionary measures rather than purely punitive responses.
F. Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013
Context: The Act’s ambit extends to digital workplaces, remote interactions, and online conduct that amounts to sexual harassment. Internal Complaints Committees (ICCs) can consider online behaviour as workplace harassment when there is a nexus with employment or a hostile environment.
Practical guidance: Employers should treat online harassment (leaked images, abusive DMs between colleagues, workplace impersonation) as actionable under ICC procedures, ensure policies cover virtual conduct, and train ICC members on digital evidence and privacy.
G. NCPCR Guidelines on Preventing Bullying and Cyberbullying in Schools
Context: NCPCR’s guidelines provide a school-centric blueprint for prevention, reporting protocols, counseling, teacher training and parent engagement. The guidelines emphasize early detection, safe reporting channels, counseling, and a collaborative response with law enforcement when criminal offences are evident. These guidelines are essential for mitigating cyberbullying among children and require adaptation to college/coaching environments too.
Implementation challenges: Resourcing, sensitivity training, and coordination between schools and law enforcement remain barriers. The guidelines are comprehensive but need active enforcement and monitoring.
H. Evidentiary, Constitutional and Procedural Issues
Constitutional balance - Article 19(1)(a) vs Article 21. Courts must balance free speech against dignity and life. The Supreme Court’s jurisprudence (including the Shreya Singhal precedent on Section 66A) underscores that restrictions on speech must be narrowly tailored. At the same time, speech that constitutes harassment, threats, or incitement to violence can be legitimately restricted.
Evidentiary challenges. AI makes attribution and authenticity complex. Courts increasingly rely on forensic attestations (metadata analysis, device logs, hash verification) and expert evidence to authenticate digital content. Preservation orders and prompt forensic collection are critical; delays can destroy crucial metadata.
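To make hash verification concrete for non-technical readers, the minimal Python sketch below illustrates how a SHA-256 digest acts as a tamper-evident fingerprint of an evidence file, and how a timestamped custody entry might be recorded. The file names, handler name and log format are illustrative assumptions, not a prescribed forensic standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody_entry(evidence: Path, handler: str, log: Path) -> dict:
    """Append a timestamped entry (file name, size, hash, handler) to a JSON-lines custody log."""
    entry = {
        "file": evidence.name,
        "size_bytes": evidence.stat().st_size,
        "sha256": sha256_of_file(evidence),
        "handled_by": handler,
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage: hashing a screenshot at the time of complaint lets a court later
# confirm that the exhibit produced at trial is bit-for-bit identical to what was collected.
# record_custody_entry(Path("screenshot.png"), "Investigating Officer", Path("custody_log.jsonl"))
```

If even one pixel or byte of the file is later altered, the recomputed digest will no longer match the logged value, which is why early hashing and preservation orders matter so much.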
Jurisdiction and cross-border issues. Many platforms and content hosts operate from abroad. Extradition, mutual legal assistance, and cooperation with platforms are procedural chokepoints. The DPDP data localisation/transfer rules and the IT Rules’ traceability obligations aim to ease investigations but raise privacy tensions.
V. NCPCR Guidelines: School-Based Interventions
The National Commission for Protection of Child Rights (NCPCR) guidelines on preventing bullying and cyberbullying are a practical blueprint for schools. They emphasise prevention, early identification, reporting, counseling, and capacity building of teachers and parents. Key elements include:
School policy: Schools must create clear anti-bullying policies that include cyberbullying definitions, response protocols, and sanctions.
Reporting and redressal: A confidential reporting mechanism for students, with anonymity options, quick triage, and engagement with parents and counselors.
Counseling and support: Immediate psychological support for victims and rehabilitation pathways for perpetrators (juvenile focus).
Teacher training: Regular sensitization programs on online safety, digital footprints, and student mental health.
Parent engagement: Education for parents on monitoring, privacy settings, and how to create safe home online environments.
Limitations: Guidelines apply primarily to schools; higher education and coaching centres often lack similar mandates. Implementation requires resources and local buy-in.
VI. The Ecosystem of Accountability
Addressing cyberbullying requires multi-actor coordination:
Platforms bear primary operational responsibility for moderation, takedown, and robust grievance systems. Under Indian rules platforms must provide reporting mechanisms, respond within set timelines, and maintain escalation protocols.
Law enforcement must be trained in digital forensics and victim-sensitive investigation; existing capacity gaps and procedural delays hinder effective redress. Specialized cyber units and coordinated training programs are essential.
Judiciary: Fast-track procedures and cyber courts can accelerate redress. Judicial awareness on AI evidence and privacy balancing is increasing but needs systematised frameworks.
Schools and employers: Prevention through policy, training and institutional oversight — ICCs in workplaces must be empowered to handle online harassment.
Civil society and mental-health professionals: Provide victim support hotlines, counseling, and public education campaigns.
Media and public literacy: Media must avoid amplifying harmful content; digital literacy campaigns reduce victimization and improve bystander responses.
VII. Comparative Analysis
International regimes provide reference points:
EU Digital Services Act (DSA) uses risk-based obligations for large platforms and stringent transparency rules.
UK Online Safety Act emphasises duty of care and regulator powers to require platform safeguards.
Australia’s eSafety model combines a strong regulator with rapid takedown processes and targeted victim support.
South Korea has strict cyber defamation laws.
Lessons for India: risk-based platform obligations, specialized regulator powers, transparency and redressal mechanisms, and mandatory support services are valuable models that India can adapt to its constitutional and social context.
VIII. Critical Gaps and Challenges
Legislative gaps specific to AI: Existing laws were drafted before generative AI’s ubiquity. While offences cover many harms, AI’s scale and synthetic nature require explicit statutory recognition (e.g., treating sexualized deepfakes of minors as CSAM even when no real child is involved).
Enforcement deficit: Conviction rates for reported online harassment remain low. Investigations stall due to jurisdictional issues, forensic delays, and low priority for local police units.
Digital literacy and the digital divide: Vulnerable populations often lack knowledge to protect themselves or report abuse.
Platform compliance and accountability: While rules impose obligations, enforcement and transparency about remediation remain uneven.
Cross-border jurisdiction: Platforms hosted abroad complicate evidence preservation and takedown speed.
Victim support infrastructure: Counseling, legal aid, and swift psychological support are thinly spread.
IX. Remedies: A Multi-Stakeholder Framework
Legal Remedies
Criminal prosecution: Use BNS, IT Act and POCSO where relevant. Early FIRs and preservation orders are critical.
Civil relief: Defamation suits, injunctions and claims for damages can offer relief, including takedown orders and compensation.
Interim reliefs: Courts can issue emergency takedown or blocking orders and preserve evidence.
Victim compensation: A statutory compensation mechanism tied to data protection enforcement would be useful.
Technological Remedies
Robust content moderation: Combining AI detection, human review, and rapid takedown pipelines. Use provenance markers and watermarking for authenticity.
Deepfake detection and attribution: Investment in forensic tools and standards for authentication of audio/video.
Privacy-preserving reporting: Allow victims to report anonymously while preserving chain of custody for evidence.
Digital forensics and chain of custody protocols: Standardize procedures for log retention, hash verification, and metadata preservation.
Design interventions: Rate limits, user verification for high-impact actions, and friction for mass forwarding on messaging apps (a simple rate-limiting sketch follows this list).
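As a purely illustrative sketch of the forwarding friction mentioned above, the Python snippet below implements a simple sliding-window rate limiter. The thresholds, class name and user identifiers are assumptions for illustration and do not reflect any particular platform’s actual policy.

```python
import time
from collections import defaultdict, deque

class ForwardRateLimiter:
    """Allow at most `max_forwards` forward actions per user within a sliding time window."""

    def __init__(self, max_forwards: int = 5, window_seconds: int = 60):
        self.max_forwards = max_forwards
        self.window_seconds = window_seconds
        self._events = defaultdict(deque)  # user_id -> timestamps of recent forwards

    def allow_forward(self, user_id: str, now: float | None = None) -> bool:
        """Return True if the forward is allowed; False if the user should be throttled."""
        now = time.monotonic() if now is None else now
        events = self._events[user_id]
        # Discard forward events that have fallen outside the sliding window.
        while events and now - events[0] > self.window_seconds:
            events.popleft()
        if len(events) >= self.max_forwards:
            return False  # Throttle: the platform could insert a confirmation step here.
        events.append(now)
        return True

# Hypothetical usage: the sixth forward attempt within one minute is throttled.
limiter = ForwardRateLimiter(max_forwards=5, window_seconds=60)
print([limiter.allow_forward("user-42", now=float(i)) for i in range(6)])
# Expected output: [True, True, True, True, True, False]
```

Friction of this kind does not block speech; it slows mass redistribution long enough for moderation and grievance mechanisms to act.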
Social Remedies
Education & media literacy: Curriculum inclusion, public campaigns, and teacher training.
Bystander empowerment: Teach intervention strategies for online communities.
Restorative justice: For juvenile perpetrators, emphasis on rehabilitation and education, not simply punishment.
Mental health infrastructure: 24/7 helplines, school counselors, and crisis intervention services.
X. Policy Recommendations on Cyberbullying
1. Statutory clarity on AI-generated content: Legislate that synthetic sexual content depicting minors is treated as CSAM irrespective of whether a real child is used; create clear offences for malicious deepfake creation and distribution.
2. Platform risk-based governance: Enforce a tiered duty of care model where very large platforms face greater transparency, independent audits, and higher procedural safeguards (while protecting legitimate expression).
3. Specialized eSafety Authority: Create a dedicated regulator (or empower an existing body) with powers to order takedowns, mandate transparency reports, and coordinate cross-border requests.
4. Forensic capacity building: Invest in public digital forensics labs, fast evidence preservation protocols, and training for police on AI tools and metadata.
5. School & workplace mandates: Make NCPCR guidelines binding for schools with monitoring; require employers to extend ICC frameworks to digital harassment.
6. Victim compensation fund: Create a fund financed by platform fees and penalties for victim support, counseling and legal aid.
7. Transparency & algorithmic accountability: Require platforms to disclose moderation outcomes, removal rates, and third-party auditor reports.
XI. Conclusion
Synthesis: Cyberbullying in the AI era is a multi-dimensional social harm: technical, legal, psychological and structural. India has modernised its laws: the BNS, the IT Rules, the DPDP Act and sectoral frameworks provide a broad toolkit. Yet law alone will not suffice. We need a synchronized ecosystem: better laws for synthetic harms, capable enforcement, platform accountability, robust victim support, and grassroots education.
Call to action: Every stakeholder (regulator, platform, school, employer, parent and citizen) must accept responsibility. Technology should not be a weapon of humiliation and exclusion; it must remain a tool for dignity and empowerment.
Vision: A safer digital India requires collective will, timely regulation, technological investment, and a cultural shift toward online dignity. If we do not act decisively today, AI will not only multiply our capabilities; it will multiply our harms. Let legal clarity, social support, and technological safeguards converge to preserve human dignity in the digital age.
About the Author: Advocate (Dr.) Prashant Mali conducts cyber awareness sessions in schools for students, parents and teachers to raise awareness about cyberbullying. As a practising lawyer, he has handled many cases of cyberbullying, trolling and online defamation, and has written about them extensively in many media outlets.
Email: prashant.mali@cyberlawconsulting@gmail.com
Selected References & Further Reading
Bharatiya Nyaya Sanhita, 2023 (statutory text).
Information Technology Act, 2000; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended); IT Amendment Rules, 2025.
Digital Personal Data Protection Act, 2023 and DPDP Rules 2025 notifications and press releases.
POCSO Act, 2012 (statutory provisions).
Juvenile Justice (Care and Protection of Children) Act, 2015 (statutory text).
NCPCR Guidelines — Preventing Bullying & Cyberbullying in Schools.
Recent reportage on deepfake and voice-cloning legal disputes.