NetMission Academy 2024 Session 3: Human Rights Online – Summary

The NetMission Academy’s third session, on “Human Rights Online,” was conducted on January 18, 2024, and facilitated by Nimrah Perveen and Phyo. The assigned working group presented two case studies: (1) Internet Shutdown in Gaza and (2) Cyberbullying in Pakistan. Invited experts then discussed the following topics on human rights online:

  • Amy Crocker, Head of Child Protection and Technology, ECPAT International – Child Rights & Protection
  • Sara Pacia, Senior Manager for Communications, EngageMedia – Freedom of Expression
  • Gyan Prakash Tripathi, Technology Law and Policy Researcher, India – India’s Online Safety Policy & the Landscape of Online Harassment in South Asia

Case Study 1: Internet Shutdown in Gaza: A Breach of Human Rights

The Gaza internet shutdown exemplifies a breach of fundamental human rights: damage to telecommunications infrastructure has infringed on the right to free expression. The interruptions have raised serious concerns about censorship, particularly on social media platforms such as Instagram, and have hindered journalists’ ability to report. More than 161 Palestinians have reportedly had their digital voices silenced. Reduced access to diverse information sources restricts the right to receive and impart information. Seen as a form of weaponization in the ongoing crisis, these shutdowns have exacerbated the situation and constitute a violation of digital rights. Such incidents highlight the potential for governments to exploit internet shutdowns to suppress dissent and limit free speech, illustrating the delicate balance between security concerns and individual freedoms.

Case Study 2: Cyberbullying in Pakistan: The Case of Abeera

In the Pakistani cyberbullying case, a 14-year-old girl named ‘Abeera’ has been a victim of the non-consensual dissemination of intimate images for the past eight years, causing her immense psychological trauma and daily distress from blackmail and bullying. Her harrowing experience draws attention to the challenges victims face when seeking help. Inadequate responses from online platforms (such as GitHub, where her images and a drive link were shared) and jurisdictional obstacles, among other barriers, underscore the urgent need for global cooperation and legal support frameworks to address the transnational nature of cybercrimes, and emphasize the importance of comprehensive cross-border legal mechanisms for such incidents.

Speaker Summaries

Amy Crocker: Children’s Rights and Protection in the Digital World

Amy Crocker discussed human rights challenges in both online and offline contexts, focusing on children’s rights and online protection. She highlighted the fast pace of the digital world, which has outpaced legal and regulatory frameworks. The internet, initially developed without a strong safety focus, now presents numerous challenges to human rights and well-being. She discussed the complex balancing act in Internet governance, where legislation has historically centered on security and criminal justice, and noted that recent global efforts to regulate the digital space have faced challenges, including the risk of over-censorship. Amy emphasized the distinct set of rights children hold online and offline, referencing key international conventions and frameworks.

In closing, she highlighted the importance of ongoing dialogue with children and youth to inform policy and legislation, ensuring their perspectives are embedded in the design of effective online safety and protection solutions and, ultimately, in robust regulation. Her recommendations, such as apps that seek consent before allowing content sharing and greater caregiver monitoring of children’s online activities, underscore the value of collaborative effort and advocacy for international cooperation; the establishment of legal frameworks emerges as crucial to ensuring a secure yet open digital environment for the next generation.

Sara Pacia: Freedom of Expression Trends in the Asia-Pacific

Sara Pacia opened her presentation with the International Covenant on Civil and Political Rights, focusing on Article 19 and the three-part test for legitimate limits on freedom of expression. She presented insights from the 2023 report on ICT policies and their implications for digital rights in the Asia-Pacific, highlighting vague legal provisions used to harass and penalize those critical of the state. She discussed trends observed during COVID-19 lockdowns, such as increased censorship and content moderation to combat misinformation, and outlined laws in Southeast Asia affecting freedom of expression, pointing to examples from Bangladesh, Cambodia, Indonesia, Malaysia, the Philippines, and Sri Lanka. She emphasized the weaponization of laws to silence criticism and the establishment of national internet gateways.

Sara also touched on the role of big tech platforms and the challenges associated with mechanisms like the Facebook Oversight Board, citing a case study on Meta’s rejection of recommendations to suspend a former Prime Minister of Cambodia from Facebook. The case illustrates the intricate relationship between tech giants and political figures, where content-moderation decisions can have profound implications for freedom of expression, and highlights the need for a nuanced balance between regulating harmful content and preserving the democratic right to express dissenting opinions. Finally, she shared calls to action: be aware of policies, speak out against violations, and participate actively in building regional digital rights movements.

Gyan Prakash Tripathi: Violence Against Women Online

Gyan Prakash Tripathi spoke about online harassment in South Asia, drawing attention to the complexities faced by women, particularly in India and Pakistan. He highlighted cases like “Sulli Deals” and “Bulli Bai,” illustrating the challenges of ensuring online safety for marginalized communities. Gyan discussed India’s legal initiatives, such as amendments to the Indian Penal Code addressing cybercrimes and digital harassment, and mentioned the Digital Personal Data Protection Act and India’s cyber hygiene centers. He emphasized the need for a comprehensive approach, including legal frameworks, content removal mechanisms, and awareness initiatives. Recognizing regional challenges, he called for cross-border cooperation and unified efforts to create a safer online environment in South Asia.

Breakout Group Discussion

This section highlights the points discussed in each breakout group. Below are the questions explored during the session:

  • What ethical guidelines should govern the development and implementation of AI to protect Human Rights? What aspects of AI development and dissemination affect human rights?
  • What role do the government and regional organizations of different countries play in safeguarding human rights online amidst instances of genocide and war crimes?
  • What responsibilities do technology companies have in addressing cyber harassment and discrimination on their platforms, and how can they be held accountable?
  • How can reporting mechanisms for cyber harassment be improved to ensure they are user-friendly, accessible, and responsive?
  • What repercussions does a government face after imposing an internet shutdown?

Breakout Group 1:

The group delved into the impact of AI, particularly generative AI and automation, on human lives. The discussion revolved around the challenge of redirecting competition toward human-versus-human dynamics rather than human-versus-machine ones. Evolving neural networks in AI present opportunities but also the risk of job displacement. Amy emphasized the need for innovation that serves humans rather than replacing them, touching on tools such as ChatGPT, Perplexity, and Gemini, and on various aspects of generative AI. In the context of global policy-making, the group recognized the importance of a multi-stakeholder approach. The discussion underscored the need for policies to consider individuals at the grassroots level without access to devices and internet connectivity, ensuring that the voices of the underprivileged are heard. Collaboration among academia, civil society, the private sector, and technical communities was highlighted as a way to bridge the gap between heard and unheard voices, with organizations like ICANN, the IGF, the RIRs, and NetMission playing pivotal roles.

The group also discussed the balance between human rights and duties, particularly regarding technology companies’ roles in establishing rules and safeguards. Policies on social media platforms such as LinkedIn, Instagram, and Facebook were cited as examples, emphasizing the influence of human behavior on ethical landscapes. The consensus was that responsibilities extend to both the technocratic and end-user levels, requiring efforts to foster responsible behavior and usage.

Breakout Group 2:

In the evolving landscape of AI technology, ethical considerations have taken center stage, particularly concerns about AI intruding on users’ personal data and privacy without explicit consent. The group emphasized the paramount need for transparency in AI development, urging both governments and development teams to communicate clearly about data usage and connectivity. Discussing the roles of governments and regional organizations, participants highlighted the profound impact of war crimes and genocide, exemplified by the situations in Gaza and Israel, and advocated for clear guidelines and bodies to address human rights violations online. They also called for recognizing the mental health impact of online crimes and protecting citizens’ well-being. Examining responsibilities in AI system development, the group focused on holding AI developers accountable for system errors and impacts, advocating a clear delineation of responsibilities between developers and users, and stressing the importance of user awareness about data sharing and AI interference. In conclusion, the group acknowledged the imperative for swift discussion and decisions on ethical AI development, with user awareness, transparency, and accountability as essential pillars in navigating AI’s complex and dynamic terrain.

Breakout Group 3:

In the realm of AI development, the group’s ethical guidelines underscored the imperative of directing artificial intelligence toward aiding individuals in conflicts and crises, ensuring that any such deployment serves positive, protective outcomes. The ethical framework further placed a premium on a steadfast commitment to data privacy, advocating protective measures for the responsible handling of sensitive information. Acknowledging the gravity of these concerns, the group highlighted the pivotal role of advocacy efforts and participation in Internet governance forums in addressing emerging challenges collectively. On the governmental front, participants discussed the vital role governments play in upholding human rights during conflicts, citing the Cambodian genocide (1975–1979) as a case study, and explored regional organizations’ involvement in diplomatic resolutions and international collaboration to address cross-border violations, showcasing the broader, collective responsibility for safeguarding human rights. Turning to technology companies, the group identified their responsibilities for addressing cyber harassment, discrimination, and general misconduct on their platforms through clear policies, reporting mechanisms, user education, and collaboration with external experts. Proposed accountability measures include transparency reports, legal compliance, independent audits, and consideration of user feedback, recognizing the collective effort required to foster a safer and more inclusive online environment.