How do we build digital trust when AI can fake identities and our data is constantly at risk?
On January 29, 2026, the fourth session of the NetMission Academy 2026, titled “Cybersecurity, Privacy, and a Safer Internet,” brought together youth fellows and experts from across the Asia-Pacific region to examine emerging challenges at the intersection of digital trust, cybersecurity, and ethical artificial intelligence governance. Moderated by Vinayak Bharadwaz and Jenie Fernando, the session combined participant-led case study analyses with expert reflections, fostering critical discussion on how digital governance frameworks must evolve in response to AI-driven risks and large-scale data use.
The session featured expert insights from Dr. Sonny Zulhuda, Associate Professor of Law at the International Islamic University Malaysia; Raunaq Sharma, Senior Research Associate at The Dialogue; and Mel Migriño, Chief Executive Officer of the Women in Security Alliance Philippines and Country Head of Gogolook. Together, they offered multidisciplinary perspectives on informed consent, platform accountability, cybersecurity ethics, and the societal harms arising from AI-enabled threats.
Case Study Presentation
Fellows from Sub-Group #4, Dahyun Chung and Shreejita Pal, led the case study segment, emphasizing that cybersecurity and privacy have become core pillars of digital trust in an era defined by AI-driven platforms and extensive data collection. The presenters highlighted growing risks such as online fraud, surveillance, data breaches, and AI-generated deception, noting that these threats disproportionately affect young users and digitally marginalized communities. The case studies framed digital literacy, resilience, and accountability as foundational elements of safer Internet governance.
Case Study 1: Deepfake-Manipulated Videos in Hong Kong
The first case study examined the use of AI-generated deepfake videos in sophisticated fraud schemes in Hong Kong, where attackers impersonated senior corporate executives during video calls to instruct employees to transfer large sums of money. These incidents reportedly resulted in financial losses of approximately USD 25 million, underscoring the real-world consequences of synthetic media misuse.
The case highlighted critical challenges associated with the rise of generative AI, particularly the growing tension between authenticity and deception in digital communications. The financial and reputational damage caused by deepfake-enabled fraud has significantly eroded trust in institutions, while the rapid and largely unregulated proliferation of generative AI tools has made detection, verification, and accountability increasingly complex. These risks are especially pronounced in the Asia-Pacific region, which experienced a reported 1,530% increase in deepfake cases between 2022 and 2023, second only to North America. Beyond financial fraud, additional concerns included the misuse of biometric data and the creation of sexually abusive AI-generated images, raising serious questions around privacy, dignity, and human rights.
Policy responses to deepfake-related harms vary across jurisdictions. China prohibits the creation of deepfakes without consent and mandates clear labeling of AI-generated content to enhance transparency. South Korea has criminalized the distribution of harmful deepfakes, while its AI Act, which came into effect in January 2026, places increased responsibility on developers and expands government oversight. Australia encourages technology companies to label and watermark AI-generated media, and countries such as Thailand, the Philippines, Malaysia, and Singapore rely on personal data protection laws to mitigate associated risks. In the United States, legislative measures such as the Take It Down Act and the DEFIANCE Act aim to curb the spread of harmful synthetic media, whereas the European Union, through the Digital Services Act (DSA), emphasizes platform accountability and transparency as central regulatory principles.
Case Study 2: Coupang Data Breach in South Korea
The second case study focused on a major data breach involving Coupang, South Korea’s largest e-commerce platform, highlighting the growing importance of cybersecurity resilience and corporate responsibility. The discussion explored how large-scale data breaches can undermine consumer trust and expose systemic weaknesses in platform security practices.
A comparative policy analysis examined data protection frameworks in India, Singapore, and Australia, with particular attention to breach notification obligations, corporate liability, and enforcement mechanisms. The case emphasized the importance of cross-border reporting, public–private cooperation, and robust data protection governance in strengthening cybersecurity resilience across digital ecosystems.
Guest Speakers’ Insights and Q&A
The guest speakers provided in-depth reflections on digital trust, informed consent, and cybersecurity ethics in the AI era. Dr. Sonny Zulhuda framed digital trust as a central challenge of contemporary Internet governance, linking it closely to informed consent, data misuse, and the social harms caused by AI-driven scams and sexual deepfakes. He emphasized that safeguarding dignity, security, and trust online requires balanced legal and ethical frameworks, shared responsibility across governments, industry, and civil society, and international collaboration. At the same time, he cautioned that overly surveillance-driven or punitive cybercrime laws risk undermining transparency, trust, and responsible security research.
Raunaq Sharma focused on the evolving meaning of informed consent within AI-driven data ecosystems. He argued that consent must be freely given, purpose-specific, and clearly communicated, particularly for digitally marginalized communities. Highlighting concerns around opaque data practices, including the use of publicly available content for AI training, he stressed the need for simplified consent mechanisms, clearer disclosures, and greater user awareness.
Mel Migriño examined the emotional, social, and economic harms caused by AI-driven scams and sexual deepfakes, noting that human vulnerability has increasingly become a primary attack vector. She emphasized the importance of detection and verification tools, content credentialing systems, and sustained public education. She also highlighted the need for coordinated international policy frameworks to address cross-border cyber risks and promote ethical AI deployment.
Highlights from Breakout Group Discussions
During the breakout group discussions, participants engaged deeply with policy questions surrounding AI regulation, deepfake governance, cybersecurity ethics, and accountability. A central debate focused on whether existing legal frameworks are sufficient in the AI era or whether deepfake-specific legislation is required. Participants emphasized the value of risk-based regulatory approaches, platform accountability, metadata protection, watermarking mechanisms, and international cooperation to curb the rapid spread of harmful content.
Discussions also highlighted the importance of hybrid governance models that integrate traditional cybersecurity practices with AI-specific frameworks, such as the NIST AI Risk Management Framework and Singapore’s AI Verify. On accountability, a strong consensus emerged that responsibility for AI-related harms should be shared across the entire AI supply chain—including developers, deployers, distributors, and platforms—with liability aligned to levels of control and risk.
Conclusion
Session 4 underscored that cybersecurity, privacy, and digital trust are deeply interconnected and increasingly central to modern life. While digital connectivity enables innovation, inclusion, and participation, it also introduces complex risks related to AI-generated content, platform governance, and large-scale data collection.
Participants agreed that addressing emerging threats such as deepfakes and data breaches requires proactive governance, ethical AI frameworks, technical safeguards, and sustained multi-stakeholder collaboration. Balancing safety, accessibility, and accountability, rather than prioritizing profit or control alone, was identified as essential to building a safer Internet for all users, not only minors.
The session concluded with a call to action for continued research on AI-driven cybersecurity threats, stronger youth participation in digital policymaking, and enhanced global cooperation to advance ethical AI governance and cybersecurity resilience in the digital age.
Contributors:
Shweta, Waleed Mukhtar, Dahyun Chung
Editors and Reviewers:
Khushbakht, Nawal Munir