Cyberbullying and Trolling in the Age of AI: Legal and Social Remedies in India

By Adv. (Dr.) Prashant Mali (Cyber & Privacy Law — Practitioner & Policy Thought Leader)

I. Introduction

India’s digital transformation has been rapid and massive. In the last decade hundreds of millions of Indians joined social media, mobile messaging, and app ecosystems; smartphones became near-ubiquitous and affordable data accelerated the spread of online life into the smallest towns. The benefits are obvious: financial inclusion, civic participation, education, health services and cultural expression now flow through digital channels. But this tectonic shift also produced an unanticipated and dangerous current: the exponential rise of cyberbullying and trolling, now supercharged by artificial intelligence (AI) and automation.

Cyberbullying and trolling are not simply rude behaviour; they are organised, repeatable harms that shred trust, attack dignity, and sometimes inflict irreversible real-world damage: reputational ruin, emotional trauma, forced migration, and, in extreme cases, suicide and loss of life. In India, these harms intersect with social faultlines: gendered violence, caste hostility, religious polarisation, and regional and linguistic divides make certain individuals and communities uniquely vulnerable.

AI is a force multiplier. Deepfakes, voice cloning, automated doxxing, bot armies, and AI-generated personalized harassment make the scale and sophistication of abuse unprecedented. Cheap tools democratize weaponized content production; a teenager with a laptop can generate targeted harassment at scale; a motivated organised group can automate reputation attacks or communal provocation quickly and with chilling effectiveness.

This article surveys this new landscape and proposes a holistic response: legal analysis grounded in India’s evolving statutes (including the Bharatiya Nyaya Sanhita, the Information Technology Act and Rules, the Digital Personal Data Protection Act and Rules, POCSO, Juvenile Justice law, workplace law and school guidelines), social and technological measures, and policy recommendations to keep pace with the technology. The objective is pragmatic: to equip policymakers, lawyers, educators, platforms, law-enforcement, civil society and mental-health professionals with a detailed framework of remedies and prevention strategies that reflect India’s legal traditions and its modern regulatory reforms.

II. Understanding Cyberbullying and Trolling in the Indian Context

Definitions and distinctions. Cyberbullying typically refers to repeated, targeted harassment or aggression using electronic means, intended to intimidate, humiliate, or coerce a person. Trolling is a broader category: sometimes provocative, sometimes satirical, sometimes malicious. When trolling escalates into targeted, sustained abuse, it enters cyberbullying territory. Distinctions that matter for the legal response: (a) episodic expression vs systematic harassment; (b) anonymous mass mischief vs direct targeted abuse; (c) satire and legitimate criticism vs defamatory or threatening content. These distinctions guide the balance between free expression and the protection of victims.

Forms of abuse. In India the common forms include image-based sexual abuse and non-consensual sharing of intimate material, doxxing, impersonation through fake profiles, deepfake imagery and audio, coordinated mob-trolling and hashtag campaigns, threatening and abusive messaging, and moral policing within closed groups.

Vulnerable groups. Women and girls remain disproportionately targeted (especially with sexualized content and image-based abuse), but minors, journalists, activists, Dalits, Muslims and other minorities, LGBTQ+ people, and public interest defenders face acute vulnerabilities. Caste slurs, coordinated mob-trolling during elections, and targeted harassment campaigns against local journalists are recurring patterns in India’s public life.

Platforms and tools. The landscape of harm is platform-diverse: WhatsApp and Telegram for private group harassment, Facebook/Instagram and X for public mobbing, gaming platforms for adolescent abuse, and niche apps for targeted campaigns. Closed groups on messaging apps are particularly pernicious because they enable large-scale distribution of intimate material and moral policing without public scrutiny.

Psychological and social impacts. The consequences range from anxiety, depression and PTSD to social ostracism, career loss and suicide. Student suicides attributed (directly or indirectly) to online harassment appear periodically in media accounts and underline the urgency of systemic remedies. Victims face not only the direct assault but secondary victimization through slow or inadequate institutional responses.

Anonymity and mob mentality. Anonymity enables abuse; the herd mentality of online mobs amplifies harm with viral reach and speed. Indian cultural contexts (where honor and modesty often have outsized social value) can make reputational harm particularly devastating.

III. The AI Revolution: New Dimensions of Digital Harm

AI technologies such as large language models (LLMs), generative adversarial networks (GANs), voice synthesis and automated scraping tools have introduced a profound asymmetry: ordinary users now have access to capabilities previously reserved for skilled operators. The result is a widening of both the volume and the quality of harmful content.

Deepfake technology: creation and dissemination. Deepfakes use machine learning to generate photo-realistic images, videos and audio where a person appears to say or do things they never did. Technically, this involves training models on publicly available images and audio to map and reproduce a subject’s facial expressions and voice characteristics. For non-technical readers: imagine a system that learns the way someone talks and their facial mannerisms, and then mashes those patterns onto new content; the result can be terrifyingly realistic.

Impact and examples. Deepfake pornographic imagery of women and public figures has been circulated in India, and courts have already entertained disputes; high-profile examples and resulting civil suits show that Indian courts are beginning to respond to personality-right claims and takedown requests. Law enforcement examples show arrests for sharing politically sensitive deepfakes that threaten public order. Deepfakes blur the line between evidence and manipulation, making forensic authentication and speedy takedown vital. (See reported deepfake incidents and enforcement actions.) 

Automated harassment and bot networks. Bot armies scripted to amplify hashtags, mass-reply to posts, or systematically report accounts can swamp moderation systems and create artificial trends. These networks can be rented or orchestrated, making it cheap to manufacture outrage or to drown out dissenting voices.
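Platforms counter such networks partly with coordination heuristics. The sketch below is a toy illustration of one such heuristic (flagging near-identical text posted by several accounts within a short window); the data structure, thresholds and account names are hypothetical, not any platform’s actual system.

```python
from collections import defaultdict

# Hypothetical sample data: (account_id, unix_timestamp, post_text).
posts = [
    ("acct_1", 1000, "Everyone report @victim now"),
    ("acct_2", 1004, "Everyone report @victim now"),
    ("acct_3", 1009, "Everyone report @victim now"),
    ("acct_9", 5000, "Lovely weather in Pune today"),
]

WINDOW_SECONDS = 60  # duplicate posts must fall inside this time window
MIN_ACCOUNTS = 3     # distinct accounts needed to raise a flag

def flag_coordinated(posts):
    """Flag texts posted by many distinct accounts within one short window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()  # order occurrences by timestamp
        span = entries[-1][0] - entries[0][0]
        accounts = {acct for _, acct in entries}
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW_SECONDS:
            flagged.append((text, sorted(accounts)))
    return flagged

print(flag_coordinated(posts))
# [('Everyone report @victim now', ['acct_1', 'acct_2', 'acct_3'])]
```

Real systems add many more signals (account age, follower graphs, device fingerprints), but the core idea of clustering by content and time is the same.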

AI-personalized abuse. AI models can synthesize data from social profiles to generate harassment tailored to an individual’s triggers — exploiting family dynamics, cultural sensitivities, or past trauma to maximize harm. The personalization increases the emotional damage and reduces predictability.

Voice cloning and regional languages. Advances in voice synthesis allow voice clones in Indian regional languages — attackers can produce realistic threatening calls, falsified audio evidence, or simulated confessions. Courts and regulators are wrestling with how to treat such audio as digital evidence.

Adversarial AI and content moderation evasion. Bad actors can use adversarial techniques to make harmful content evade platform moderation (e.g., slightly altering images or wording to avoid keyword filters). This cat-and-mouse dynamic complicates automated enforcement.
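To make this dynamic concrete, here is a toy sketch (mine, not any platform’s actual pipeline) of a naive keyword filter defeated by Unicode look-alike characters, and the partial defence of compatibility normalization; the blocklist and sample text are invented for illustration.

```python
import unicodedata

BLOCKLIST = {"harass", "threat"}  # toy blocklist, for illustration only

def naive_flag(text: str) -> bool:
    """Plain substring match: trivially evaded by character substitution."""
    return any(word in text.lower() for word in BLOCKLIST)

def normalized_flag(text: str) -> bool:
    """NFKC folds fullwidth and other compatibility characters to ASCII,
    defeating one simple evasion trick. It is only a partial defence:
    adversaries respond with misspellings, images, or coded language."""
    folded = unicodedata.normalize("NFKC", text).lower()
    return any(word in folded for word in BLOCKLIST)

sample = "I will ｈａｒａｓｓ you"   # fullwidth letters dodge the naive filter
print(naive_flag(sample))        # False -> abuse slips through
print(normalized_flag(sample))   # True  -> normalization recovers the match
```

Each such defence invites a new evasion, which is why moderation cannot be fully automated.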

Synthetic identities and scale. With AI, creating large numbers of synthetic profiles (fake personas with consistent histories) is faster than ever. These identities seed false credibility and coordinate harassment, making platform detection harder.

The accessibility problem. Crucially, harmful AI is no longer a clandestine capability. Tools and tutorials proliferate, model-hosting services offer “deepfake as a service,” and open-source models can be fine-tuned for abuse. Democratization of capability means regulation must address both supply (platforms and model providers) and demand (end-users and facilitators).

IV. Comprehensive Legal Framework Analysis

The Indian legal architecture for cyberbullying and trolling is evolving rapidly. Below, I analyze each major statutory instrument, interpret critical provisions, and explain practical application to AI-enabled abuse. Key statutory documents include the Bharatiya Nyaya Sanhita (BNS) 2023, the Information Technology Act 2000, the DPDP Act 2023 and Rules 2025, the POCSO Act 2012, the Juvenile Justice Act 2015, and the Sexual Harassment of Women at Workplace Act. For school contexts, NCPCR Guidelines are central.

Note on citations: Where I reference provisions or rule texts I have relied on the official codified texts and recent rule notifications. For example, BNS 2023 as published in the India Code, DPDP Rules notifications and NCPCR guidelines. 


A. The Bharatiya Nyaya Sanhita (BNS), 2023

Statutory context: BNS replaces many traditional IPC provisions and modernises criminal law into a revised code; it covers harassment, stalking, defamation-type harms, and digital misuse. The BNS also introduces provisions tailored to online harms and attempts to streamline sentencing and classifications.

Relevant offences (examples and application): stalking, including monitoring a woman’s electronic communication (Section 78); sexual harassment (Section 75); words, gestures or acts intended to insult the modesty of a woman (Section 79); criminal intimidation (Section 351); defamation (Section 356); and promoting enmity between groups (Section 196). Each of these can be attracted by online conduct, from persistent abusive messaging to coordinated trolling and image-based abuse.

Gaps/Challenges under BNS. While BNS clarifies several offences, it is not a panacea. Prosecutorial expertise, forensic capacity, and procedural speed remain constraints, and the line between free expression and criminality continues to require careful judicial calibration.

B. Information Technology Act, 2000 (IT Act) and IT Rules

Context: The IT Act remains the primary statutory instrument dealing with electronic offences and intermediary liability. Section numbers and historical jurisprudence (e.g., the striking down of Section 66A by the Supreme Court in Shreya Singhal v. UOI) remain critical to constitutional analysis.

Key provisions & implications: Section 66C (identity theft) and Section 66D (cheating by personation using a computer resource) cover impersonation and fake profiles; Section 66E penalises violation of privacy through capturing or publishing private images; Sections 67 and 67A penalise publishing or transmitting obscene and sexually explicit material in electronic form; Section 67B addresses child sexual abuse material; Section 69A empowers the government to block public access to content; and Section 79 conditions intermediary safe-harbour on due diligence, including the grievance and takedown obligations under the IT Rules.

Recent amendments and rules (2021-2025) have increased obligations on significant social media intermediaries, added emergency takedown powers and introduced rules for large-platform governance. The 2025 IT Amendment Rules further refine removal procedures and procedural safeguards for takedowns.

Practical application: For victims of cyberbullying, intermediaries’ grievance redressal and takedown mechanisms are the frontline: quick takedown of explicit content, blocking channels of distribution, and providing account-level details under lawful process. However, traceability requests (to reveal the first originator) have been controversial; courts weigh privacy interests against investigative needs.

C. Digital Personal Data Protection Act (DPDPA / DPDP), 2023 and DPDP Rules 2025

Context: DPDP establishes a rights-based regime for personal data, imposing obligations on Data Fiduciaries and processors, rights for data principals (access, correction, erasure), and breach notification and penalty regimes.

Relevance for platform accountability and victim remedies: a victim whose images or personal data are misused can invoke the data principal’s rights of access, correction and erasure against the platform as Data Fiduciary; fiduciaries must maintain reasonable security safeguards and notify the Data Protection Board and affected individuals of personal data breaches; every fiduciary must provide a grievance redressal mechanism; and the Board can impose significant monetary penalties for non-compliance.

Gaps/Challenges: Enforcement maturity (Data Protection Board capacity), cross-jurisdictional data flow issues, and tension between privacy rights and content-origin traceability remain practical hurdles. AI training data raises novel questions too: whether platforms may have used victim images to train models is itself a potential claim under the DPDP regime.

D. Protection of Children from Sexual Offences (POCSO) Act, 2012

Context: POCSO criminalizes sexual offences against children and contains provisions on child sexual abuse material (CSAM), solicitation, and storage/distribution of pornographic material involving minors. Sections 11, 13, 14 and 15 are pivotal: they address sexual harassment of a child (including through electronic, digital or any other means), the use of a child for pornographic purposes, punishment for such use, and punishment for storage or possession of pornographic material involving a child. The Act also mandates reporting obligations and speedy prosecution for offences involving minors.

Application in an AI era: AI-generated sexual content purporting to depict a minor raises immediate legal and ethical issues. If the material is synthetic (no real child), legal treatment is evolving: many jurisdictions treat sexualized deepfakes involving minors as equivalent to CSAM because the harm and distribution paths are similar and they facilitate grooming and exploitation. Indian law enforcement treats creation and distribution of child-sexual content with severity; POCSO provides a firm criminal framework when minors are depicted or exploited.

Mandatory reporting and institutional responsibility: Schools, online platforms, ISPs and intermediaries must adopt reporting pathways; delays or suppression can attract liability and severely harm victims.

E. Juvenile Justice (Care and Protection of Children) Act, 2015

Context: The Juvenile Justice Act defines a child (under 18) and provides mechanisms for rehabilitation, protection and support. In cyberbullying contexts, children can be both victims and perpetrators. The Act prescribes Child Welfare Committees, rehabilitation, and special procedures for children in need of care and protection. 

Application: Schools and authorities must treat children subjected to cyberbullying as children in need of care and protection where necessary, enabling protective orders and rehabilitation. If a juvenile perpetrates cyber abuse, the juvenile justice system’s rehabilitative approach applies, with potential diversionary measures rather than purely punitive responses.

F. Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013

Context: The Act’s ambit extends to digital workplaces, remote interactions, and online conduct that amounts to sexual harassment. Internal Complaints Committees (ICCs) can consider online behaviour as workplace harassment when there is a nexus with employment or a hostile environment. 

Practical guidance: Employers should treat online harassment (leaked images, abusive DMs between colleagues, workplace impersonation) as actionable under ICC procedures, ensure policies cover virtual conduct, and train ICC members on digital evidence and privacy.

G. NCPCR Guidelines on Preventing Bullying and Cyberbullying in Schools

Context: NCPCR’s guidelines provide a school-centric blueprint for prevention, reporting protocols, counseling, teacher training and parent engagement. The guidelines emphasize early detection, safe reporting channels, counseling, and a collaborative response with law enforcement when criminal offences are evident. These guidelines are essential for mitigating cyberbullying among children and require adaptation to college/coaching environments too. 

Implementation challenges: Resourcing, sensitivity training, and coordination between schools and law enforcement remain barriers. The guidelines are comprehensive but need active enforcement and monitoring.

H. Evidentiary, Constitutional and Procedural Issues

Constitutional balance: Article 19(1)(a) vs Article 21. Courts must balance free speech against dignity and life. The Supreme Court’s jurisprudence (including the Shreya Singhal precedent on Section 66A) underscores that restrictions on speech must be narrowly tailored. At the same time, speech that constitutes harassment, threats, or incitement to violence can be legitimately restricted.

Evidentiary challenges. AI makes attribution and authenticity complex. Courts increasingly rely on forensic attestations (metadata analysis, device logs, hash verification) and expert evidence to authenticate digital content. Preservation orders and prompt forensic collection are critical; delays can destroy crucial metadata.
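A small illustration of the hash-verification step mentioned above: recording a cryptographic digest at the moment of seizure lets an examiner later demonstrate that the copy produced in court is bit-identical to the original. This is a minimal sketch; the file names are hypothetical placeholders.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash recorded in the seizure memo, and hash of the copy analysed later:
original_hash = sha256_of_file("seized_video.mp4")
working_hash = sha256_of_file("working_copy.mp4")
print("Integrity preserved:", original_hash == working_hash)
```

Any single changed byte produces a completely different digest, which is why courts treat matching hashes as strong evidence that digital material has not been altered.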

Jurisdiction and cross-border issues. Many platforms and content hosts operate from abroad. Extradition, mutual legal assistance, and cooperation with platforms are procedural chokepoints. The DPDP data localisation/transfer rules and the IT Rules’ traceability obligations aim to ease investigations but raise privacy tensions.


V. NCPCR Guidelines: School-Based Interventions

The National Commission for Protection of Child Rights (NCPCR) guidelines on preventing bullying and cyberbullying are a practical blueprint for schools. They emphasise prevention, early identification, reporting, counseling, and capacity building of teachers and parents. Key elements include: safe and confidential reporting channels within the school; sensitisation and digital-literacy programmes for students; training teachers to detect early warning signs; structured counseling and psychological support for affected children; active parent engagement; and escalation protocols for involving law enforcement where the conduct is criminal.


VI. The Ecosystem of Accountability

Addressing cyberbullying requires multi-actor coordination:

Platforms bear primary operational responsibility for moderation, takedown, and robust grievance systems. Under Indian rules platforms must provide reporting mechanisms, respond within set timelines, and maintain escalation protocols.

Law enforcement must be trained in digital forensics and victim-sensitive investigation; existing capacity gaps and procedural delays hinder effective redress. Specialized cyber units and coordinated training programs are essential.

Judiciary: Fast-track procedures and cyber courts can accelerate redress. Judicial awareness of AI evidence and privacy balancing is increasing but needs systematised frameworks.

Schools and employers: Prevention through policy, training and institutional oversight — ICCs in workplaces must be empowered to handle online harassment.

Civil society and mental-health professionals: Provide victim support hotlines, counseling, and public education campaigns.

Media and public literacy: Media must avoid amplifying harmful content; digital literacy campaigns reduce victimization and improve bystander responses.


VII. Comparative Analysis

International regimes provide reference points: the EU’s Digital Services Act imposes risk-based duties and independent audits on very large platforms; Australia’s Online Safety Act 2021 created an eSafety Commissioner with binding takedown powers for cyber-abuse material; and the UK’s Online Safety Act 2023 imposes statutory duties of care on user-to-user services, backed by Ofcom enforcement.

Lessons for India: risk-based platform obligations, specialized regulator powers, transparency and redressal mechanisms, and mandatory support services are valuable models that India can adapt to its constitutional and social context.


VIII. Critical Gaps and Challenges

Legislative gaps specific to AI: Existing laws were drafted before generative AI’s ubiquity. While offences cover many harms, AI’s scale and synthetic nature require explicit statutory recognition (e.g., treating sexualized deepfakes of minors as CSAM even when no real child is involved).

Enforcement deficit: Reporting-to-conviction ratios for online harassment remain low. Investigations stall due to jurisdictional issues, forensic delays, and low prioritisation by local police units.

Digital literacy and the digital divide: Vulnerable populations often lack knowledge to protect themselves or report abuse.

Platform compliance and accountability: While rules impose obligations, enforcement and transparency about remediation remain uneven.

Cross-border jurisdiction: Platforms hosted abroad complicate evidence preservation and takedown speed.

Victim support infrastructure: Counseling, legal aid, and swift psychological support are thinly spread.


IX. Remedies: A Multi-Stakeholder Framework

Legal Remedies

Victims can pursue criminal complaints under the BNS and the IT Act, civil suits for defamation and injunctive relief, takedown requests through intermediary grievance mechanisms and Section 69A blocking processes, correction and erasure requests under the DPDP Act, ICC complaints for workplace harassment, and mandatory reporting under POCSO where minors are involved. Court-ordered preservation of digital evidence should be sought early, before metadata is lost.

Technological Remedies

Hash-matching to block re-uploads of known abusive material, detection models for deepfakes and coordinated bot activity, provenance and watermarking signals for synthetic media, user-side controls (blocking, muting, restricted modes) and streamlined in-app reporting all reduce exposure and accelerate takedown.

Social Remedies

Counseling and mental-health support, victim helplines, digital-literacy and bystander-intervention programmes in schools and workplaces, responsible media coverage that does not amplify abuse, and community awareness campaigns address the harms that law and technology alone cannot reach.

X. Policy Recommendations on Cyberbullying

1. Statutory clarity on AI-generated content: Legislate that synthetic sexual content depicting minors is treated as CSAM irrespective of whether a real child is used; create clear offences for malicious deepfake creation and distribution.

2. Platform risk-based governance: Enforce a tiered duty of care model where very large platforms face greater transparency, independent audits, and higher procedural safeguards (while protecting legitimate expression).

3. Specialized eSafety Authority: Create a dedicated regulator (or empower an existing body) with powers to order takedowns, mandate transparency reports, and coordinate cross-border requests.

4. Forensic capacity building: Invest in public digital forensics labs, fast evidence preservation protocols, and training for police on AI tools and metadata.

5. School & workplace mandates: Make NCPCR guidelines binding for schools with monitoring; require employers to extend ICC frameworks to digital harassment.

6. Victim compensation fund: Create a fund financed by platform fees and penalties for victim support, counseling and legal aid.

7. Transparency & algorithmic accountability: Require platforms to disclose moderation outcomes, removal rates, and third-party auditor reports.

XI. Conclusion

Synthesis: Cyberbullying in the AI era is a multi-dimensional social harm: technical, legal, psychological and structural. India has modernised its laws; the BNS, IT Rules, DPDP and sectoral frameworks provide a broad toolkit. Yet law alone will not suffice. We need a synchronized ecosystem: better laws for synthetic harms, capable enforcement, platform accountability, robust victim support, and grassroots education.

Call to action: Every stakeholder (regulator, platform, school, employer, parent and citizen) must accept responsibility. Technology should not be a weapon of humiliation and exclusion; it must remain a tool for dignity and empowerment.

Vision: A safer digital India requires collective will, timely regulation, technological investment, and a cultural shift toward online dignity. If we do not act decisively today, AI will not only multiply our capabilities; it will multiply our harms. Let legal clarity, social support, and technological safeguards converge to preserve human dignity in the digital age.


About Author: Advocate (Dr.) Prashant Mali conducts cyber awareness sessions in schools for students, parents and teachers to raise awareness about cyberbullying. As a practicing lawyer he has handled many cases of cyberbullying, trolling and online defamation, and has written extensively about these issues in many media outlets.

Email: prashant.mali@cyberlawconsulting@gmail.com

