On January 30, 2025, the fourth session of the NetMission Academy, titled “Cybersecurity, Privacy, and Safer Internet,” convened participants from across the Asia-Pacific region. The session explored key issues related to government regulation, platform accountability, and digital rights, focusing on age restrictions on social media, AI-driven identity verification, and the balance between security and privacy in digital governance. Guest speakers Gyan P. Tripathi (Technology Law and Policy Researcher), Athena Tong (Researcher at the University of Tokyo), and Jean Linis-Dinco (Digital Rights Senior Advisor, Manushya Foundation) provided expert insights on cybersecurity risks, AI-powered disinformation, and regulatory challenges in digital spaces.
Presentation and Case Study Summary
The SG#4 team presented two case studies examining how governments and platforms regulate digital spaces while striving to balance cybersecurity, privacy, and freedom of expression.
The first case study focused on Australia’s Online Safety Amendment Act 2024, which prohibits individuals under 16 years old from creating accounts on social media platforms such as Facebook, Instagram, TikTok, and Snapchat. Enacted in November 2024, the law requires social media platforms to implement strict age verification systems by 2025, with non-compliance penalties reaching AUD 49.5 million. While the policy is intended to protect minors from online harm, it has raised concerns regarding feasibility, privacy risks, and potential overregulation. Critics argue that a blanket ban may push minors towards unregulated platforms rather than ensuring their safety. Alternative regulatory models, such as the EU’s General Data Protection Regulation (GDPR) and Digital Services Act, focus on platform accountability and data protection rather than access restrictions, offering a balanced approach that upholds digital rights while promoting online safety.
The second case study examined AI-driven electronic Know Your Customer (e-KYC), a process that uses AI to enhance identity verification for financial institutions. While AI-driven KYC improves efficiency and financial inclusion, it also introduces data privacy risks, fraud vulnerabilities, and regulatory inconsistencies. The session highlighted regional disparities in AI adoption, with Japan and Singapore leading AI-driven innovations, while Vietnam and the Pacific Islands struggle due to limited infrastructure and weak regulatory frameworks. Key risks include misuse of personal data, algorithmic bias, and security breaches.
To mitigate these risks, experts proposed multi-factor authentication (MFA), bank-based digital IDs, and AI model validation as safeguards against fraudulent activities. On the policy side, recommendations included strengthening legal frameworks, fostering public-private partnerships, and establishing global AI governance standards to ensure ethical and secure implementation.
Insights from Guest Speakers
Guest speakers provided valuable insights into cybersecurity, AI disinformation, and regulatory challenges. Their perspectives highlighted the evolving digital landscape and the impact of emerging technologies on privacy and security.
- Gyan P. Tripathi discussed deepfake technology and AI regulation, emphasizing the EU AI Act’s ban on Remote Biometric Identification (RBI). He explained that while the law enhances privacy protection, it may limit AI-driven fraud detection capabilities.
- Athena Tong provided a geopolitical perspective, examining China’s AI-driven disinformation campaigns, Russia’s election interference tactics, and Iran’s use of synthetic media to manipulate public opinion. She underscored the growing challenge of distinguishing authentic content from AI-generated misinformation.
- Jean Linis-Dinco focused on cybersecurity risks in ASEAN, warning that governments may justify digital surveillance under the pretext of national security. She raised concerns about Big Tech’s role in mass data collection and AI-driven monitoring, emphasizing the need for stronger regulatory frameworks to protect privacy rights.
Breakout Group Discussion
The breakout discussions examined who holds responsibility for ensuring online safety, the effectiveness of digital regulations, and AI’s role in privacy protection.
- One key debate explored whether social media age restrictions should be determined by governments or left to platforms. Many participants opposed blanket bans, arguing that minors could circumvent restrictions by accessing unregulated platforms. Instead, parental guidance, AI-driven content moderation, and digital literacy initiatives were considered more effective alternatives.
- Another discussion focused on who should be responsible for minors’ online safety. A consensus emerged that both parents and platforms share responsibility: parents must educate children on online risks, while platforms should implement stricter safeguards, such as content filtering and privacy controls. However, concerns were raised that social media companies may prioritize business interests over user safety, reinforcing the need for regulatory oversight and ethical AI deployment.
- Participants also debated how AI-powered verification systems should be regulated globally to prevent misuse and bias. The groups reached a consensus that a balance must be struck in how social media platforms provide a safer space for everyone. An outright ban on minors is not the right solution; greater emphasis should be placed on ensuring that online content is safe and authentic.
- For AI-driven KYC services, the groups shared their thoughts on how technical safeguards can protect the information being gathered and processed. One example is two-factor authentication (2FA), which could enhance security for these services. With the necessary technical guardrails in place, KYC services are better positioned to withstand potential data breaches.
Conclusion
The session emphasized the delicate balance between cybersecurity, privacy, and digital rights. While regulatory frameworks are essential for ensuring online safety and cybersecurity, they must not compromise digital freedoms or restrict youth engagement. Participants agreed that a collaborative approach, integrating government oversight, platform responsibility, and public engagement, is key to fostering a safer and more inclusive digital space.
The session concluded with a call to action for:
- Continued research on emerging cybersecurity threats,
- Youth involvement in digital policymaking, and
- Global cooperation to establish equitable cybersecurity measures and ethical AI governance in the digital age.
Across the breakout groups, a consensus emerged that balancing profit, safety, and accessibility is the way forward. Only then can the private and public sectors build a harmonious relationship in ensuring a safer internet for all, not just minors. With the continuing pace of innovation, especially in the AI space, there is a clear call to action to upskill and connect with peers to collaborate on enhancing cybersecurity for everyone.
Written by Sana Nisar, Harvey Asuncion, and John Gilbert Ora’a (Reviewed and edited by Bea Guevarra, Jenie Fernando, and Jenna Manhau Fung)