NetMission Academy 2025 Session 3 Summary

Session 3 of the NetMission Academy 2025, titled “Human Rights Online,” was held on January 23, 2025. Moderated by NetMission alumni Bea Guevarra and Sameer Gahlot, the session explored the critical intersection of technology and fundamental human rights in the digital age with esteemed speakers: Pranav Bhaskar Tiwari (Senior Program Manager, The Dialogue), Emad Karim (UN Women), and Edward Tsoi (Co-founder, AI Safety Asia).

The Internet and emerging technologies have provided unprecedented opportunities for communication, access to information, and social progress, but they also present unique challenges in protecting human rights. This session assumed participants had a foundational understanding of basic human rights principles, without requiring specialized knowledge of artificial intelligence, journalism, or digital technology. It aimed to highlight the complex interplay between these areas, focusing on the impact of AI development and the challenges journalists face in the digital space, while encouraging dialogue on potential solutions and policy recommendations.

Additionally, the session was designed to equip participants with a deeper understanding of how technology is affecting human rights and empower them to advocate for a more just and equitable digital world. The focus during this session was to foster critical thinking and encourage engagement in shaping the future of digital rights. Lastly, emphasis was placed on a multi-stakeholder approach, as no single entity can adequately address these complex issues in isolation.

Summary of the presentation & case studies

The session began with an introduction emphasizing the need to discuss, and to discover solutions to, the challenges posed by the confluence of technology and human rights. The initial framing laid out the core themes of the session, beginning with a deliberation on AI and data scraping, then transitioning to journalists’ rights online, and reaching a consensus that the rights we have offline should be carried over into our online presence.

Case Study 1: Data Scraping: The presentation delved into the ethics of data scraping for developing large language models (LLMs). It outlined how AI companies often scrape personal and copyrighted data to train their models, raising concerns about privacy violations, copyright infringement, and bias in AI systems. The participants presented the data scraping process, noting that it involves extracting data from a variety of online sources, which can then be used for machine learning and developing AI models. The presentation included information on various lawsuits that have arisen, focusing specifically on the case of artists whose work was used to train AI models without their consent. It covered responses to data scraping in jurisdictions including the EU, Japan, Singapore, and China.
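To make the extraction step described above concrete, the following is a minimal sketch in Python using only the standard library. The HTML snippet, the `TextExtractor` class, and the overall shape of the pipeline are illustrative assumptions, not material from the session; real scraping operations crawl the live web at vastly larger scale.

```python
# Minimal sketch of the text-extraction step in a data-scraping pipeline.
# A small in-memory HTML snippet stands in for a fetched web page
# (an illustrative assumption, not the session's own example).
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from HTML, skipping script/style content."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

sample_page = """
<html><head><style>p { color: red }</style></head>
<body><h1>An Artist's Blog</h1>
<p>Original writing that could end up in a training set.</p>
<script>trackVisit();</script></body></html>
"""

parser = TextExtractor()
parser.feed(sample_page)
corpus_entry = " ".join(parser.chunks)
# Text gathered this way, often without the author's consent, is at the
# heart of the lawsuits the case study describes.
print(corpus_entry)
```

Note that the code keeps only human-readable text and discards markup and scripts; it is precisely this kind of harvested prose and artwork metadata that ends up in training corpora, which is why consent and copyright questions arise.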

Case Study 2: Silencing the Truth: The Battle for Journalists’ Rights: The presentation highlighted the challenges journalists face in the digital age, including limitations on their freedom of expression through online harassment, surveillance, censorship, and legal persecution. It outlined that while the Internet provides an unprecedented ability to share information globally, governments and powerful entities are increasingly limiting this expression through various means. It was noted that journalists often serve as the main link between citizens and those in power, holding the powerful accountable, a role that is pivotal for a democracy to function. Several examples were presented, including the situation in Hong Kong, where national security laws have stifled press freedom, and India and Pakistan, where journalists have become targets of government surveillance and online harassment campaigns.

Speaker’s sharing and highlights from breakout-group discussion

Emad Karim highlighted AI’s evolving nature, emphasizing ethical responsibility, transparency, resource costs, and the complexity of issues that defy black-and-white views and require multifaceted action. Pranav Bhaskar Tiwari linked freedom of expression, privacy, and access to information, advocating end-to-end encryption for journalist protection. Edward Tsoi highlighted AI governance, the dangers of misinformation, autocratic risks, and the need for diverse Global South perspectives.

During the breakout-group discussion, participants were encouraged to engage in dialogue on thought-provoking questions aligned with the session’s theme. Breakout Group 1 emphasized education, informed consent, transparency, monitoring AI misuse, and involving affected people in policy creation. Breakout Group 2 stressed the intersection of AI and human rights, ethical online journalism, digital security, and the need for collaboration to ensure accountability. Breakout Group 3 emphasized protecting press freedom, addressing digital repression, ensuring journalists’ presence in policy discussions, and defending human rights.

Written by Rupam Barui, Khursheed Akram, Leena Goyal, Naylie Hashim, and Sameer Gahlot (Edited by Jenna Manhau Fung)