NetMission Academy 2026 Session 6 Summary

How do we build digital spaces where everyone can participate safely and equitably?

Session 6 of the NetMission Academy journey, themed “Diversity and Inclusivity in the Digital Age,” examined digital inclusion and technology-facilitated gender-based violence (TFGBV) across the Asia-Pacific region.

The session featured panel insights from Ms. Ihita Gangavarapu, Coordinator of the India Youth Internet Governance Forum and cybersecurity engineer specializing in youth participation and digital inclusion, and Ms. Nirjana Sharma of UNESCO Nepal’s Communication and Information Sector, who focuses on digital transformation, diversity, and human rights in the context of digitalization and artificial intelligence. 

Sub-Group 3 presented on “Decoding Gender-Based Violence in the APAC Region and Leading Interventions,” situating the discussion within structural inequality, platform accountability, AI bias, and inclusive governance. The session framed digital inclusion as extending beyond connectivity to encompass safety, representation and equitable participation in digital ecosystems.

Sub-Group 3 Case Study Presentation

Although participants were connected through Zoom and messaging platforms, unequal participation was evident due to unstable internet, shared living environments and power disruptions. This reinforced a key premise: meaningful inclusion requires not only access to digital infrastructure but the ability to participate safely and consistently.

Country case studies demonstrated the scale and consequences of TFGBV.

In Pakistan, the Digital Rights Foundation’s Cyber Harassment Helpline has received over 20,000 complaints since 2016, including 3,171 in 2024 alone. Women and young adults aged 18–30 are most affected, reporting cyberstalking, non-consensual image sharing and attacks on activists. Weak enforcement and gaps in the Electronic Crimes Act (2016) contribute to continued silencing.

In Bangladesh, UNDP research found that 63.5% of women have experienced online harassment, often leading to trauma, fear, and withdrawal from digital spaces.

In India, the 2021–2022 Sulli Deals and Bulli Bai incidents involved GitHub-hosted applications that symbolically “auctioned” Muslim women. Although arrests followed, the cases exposed reactive platform responses, fragmented legal protections and the disproportionate targeting of visible minority women. Continued circulation of screenshots after takedowns further eroded trust.

Supporting research from Amnesty International (2018) and the Economist Intelligence Unit confirmed that vocal and visible women face disproportionate targeting online. Comparative regulatory approaches illustrated varied global models: the UK centres its framework on individual harm, the European Union’s Digital Services Act addresses platform architecture and algorithmic responsibility, Australia applies a gender-neutral framework with high harm thresholds and Mexico explicitly recognizes digital gender-based violence in law.

The presentation concluded with a survivor-centred intervention from India: a multilingual Digital Safety Guide available in English, Hindi, Malayalam, Marathi and Tamil. Designed in a short, visual and accessible format, the guide clearly defines abuse, provides documentation guidance, explains reporting mechanisms and identifies support channels. This approach complements regulatory reform by equipping individuals with practical tools. The core principle emphasized was that regulation builds accountability, while awareness builds agency.

Speakers’ Sharing – Panel Reflections

The panel discussion expanded the analysis to AI and structural bias. Women represent only 20% of machine learning roles and 12% of AI researchers globally, contributing to systems trained on historically biased datasets. Bias manifests in generative tools that reinforce stereotypes, automated moderation that disproportionately impacts marginalized voices and amplification algorithms that struggle with cultural nuance. Emerging risks include deepfakes, AI-powered impersonation and coordinated harassment. UNESCO’s approach of de-biasing datasets and increasing women’s participation in AI development was discussed as a structural necessity.

Regional perspectives highlighted linguistic and youth dimensions. In Nepal, despite internet penetration exceeding 55% and more than 120 languages spoken, most online content remains limited to English or Nepali, marginalizing indigenous communities. Youth participants stressed the need for co-governance rather than token representation. Indigenous Data Sovereignty was framed as community ownership, collective consent and culturally appropriate governance of digital heritage.

Breakout Group Discussion

Breakout groups converged on three priorities: proportionate platform responsibility with faster removals and vernacular moderation; intersectional AI audits and inclusive governance; and integrating digital safety and gender-sensitivity education into national curricula from early ages. Participants emphasized prevention, accountability and design-stage inclusion.

Wrap-Up

The session concluded that protecting digital spaces is not only about safety, but also about accountability, awareness, youth engagement, protection of diverse participation and safeguarding freedom of expression. Real digital inclusion requires both regulation and empowerment: structural reform alongside individual agency.

Edited by:

Manas Joshi, Shweta