The rapid evolution of digital platforms and Artificial Intelligence (AI) has significantly influenced online interactions, shaping regulations and policies aimed at ensuring cybersecurity, privacy, and safer Internet experiences. Governments and private entities worldwide grapple with the challenge of balancing security with user freedoms, data protection, and ethical AI governance. This case study explores two significant policy developments: Australia’s Online Safety Amendment Act 2024 and the increasing adoption of AI-initiated e-KYC systems in the Asia Pacific. Together, these cases highlight key cybersecurity and privacy concerns, the role of regulatory frameworks, and the interplay between technological advancement and governance.
Case Study 1: Australia’s Online Safety Amendment Act 2024
Background & Legislative Overview
In November 2024, the Australian government enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, which restricts individuals under 16 years old from creating accounts on major social media platforms, including Facebook, Instagram, TikTok, and Snapchat. The bill mandates strict age verification mechanisms by 2025, with penalties of up to AUD 49.5 million for non-compliance. The stated objective is to protect minors from cyberbullying, online exploitation, and harmful content, reinforcing Australia’s commitment to online safety.
Key Issues and Concerns
While the law aims to reduce online harm, several challenges and unintended consequences have emerged. One major issue is the lack of a clear age verification system, as the bill does not specify a uniform standard, leaving implementation to individual platforms. This could lead to inconsistencies and significant privacy risks, as companies may collect personal data to verify users’ ages, raising concerns about data breaches, identity theft, and misuse of information. Critics also argue that banning minors from social media could lead to digital exclusion, restricting their access to educational resources, digital literacy, and civic engagement. Some have likened the policy to “banning children from libraries.” Furthermore, minors may attempt to bypass these restrictions by creating fake accounts, using VPNs, or migrating to unregulated platforms, ultimately undermining the law’s intended effectiveness.
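One privacy-preserving pattern sometimes discussed for age checks is attestation: a trusted third party (such as a bank or government identity service) that has already verified a user's documents signs a simple "over-16" claim, and the platform checks only that signature, never handling birthdates or identity documents itself. The sketch below is purely illustrative, not a scheme specified by the Act; the key handling and claim format are assumptions:

```python
import hashlib
import hmac

# Illustrative only: in practice the verifier would use asymmetric signatures
# and proper key management, not a shared symmetric key.
VERIFIER_KEY = b"shared-key-with-trusted-verifier"

def sign_claim(user_id: str, over_16: bool, key: bytes = VERIFIER_KEY) -> str:
    """Run by the trusted verifier after it has checked the user's real documents."""
    claim = f"{user_id}:over16={over_16}"
    return hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()

def platform_accepts(user_id: str, over_16: bool, signature: str,
                     key: bytes = VERIFIER_KEY) -> bool:
    """Run by the platform: accept the attestation without ever seeing
    the user's birthdate or identity documents."""
    expected = sign_claim(user_id, over_16, key)
    return over_16 and hmac.compare_digest(expected, signature)

token = sign_claim("user-123", True)
print(platform_accepts("user-123", True, token))  # True
```

The point of the design is data minimization: the platform learns a single boolean, which narrows the data-breach and identity-theft surface the paragraph above describes.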
Proposed Alternatives & Global Best Practices
International models, such as the European Union’s General Data Protection Regulation (GDPR) and the United Kingdom’s Age-Appropriate Design Code, offer alternative regulatory approaches. GDPR emphasizes data privacy protections by requiring platforms to implement privacy-by-design features without excessive data collection. Meanwhile, the UK’s Age-Appropriate Design Code encourages platform accountability, ensuring that platforms incorporate safeguards for minors without resorting to outright bans. These models highlight the importance of balancing safety with digital rights, providing valuable insights for future online safety regulations.
Instead of blanket bans, European regulators offer an alternative approach to digital regulation. The GDPR ensures personal data is processed securely, with strict governance policies and employee training. The EU’s Digital Services Act regulates platforms to prevent harmful content while maintaining an open and fair Internet. The UK’s Online Safety Act 2023 targets illegal and harmful content, holding platforms accountable without excessive restriction. These policies demonstrate that regulation can protect users while preserving digital freedom.
The rise in teenage mental health concerns should not be blamed solely on social media. Rather than banning platforms, governments should focus on research-driven solutions. Educating parents, caregivers, and young users about healthy social media habits is more effective than restrictive policies. Overregulation may push minors toward secrecy rather than fostering open discussions about safe online behavior.
A balanced approach ensures both safety and digital inclusion. Instead of limiting access, policymakers should promote digital literacy, parental guidance, and platform accountability to create a safer and more empowering online environment for young users.
The Australian government’s approach to online safety through the Online Safety Amendment Act reflects the growing concern over safeguarding vulnerable populations, particularly minors, in an environment where digital interactions are becoming central to daily life. Similarly, the adoption of AI-driven solutions, such as e-KYC, underscores the rapid shift toward automation and AI in regulatory compliance, particularly in the financial sector. Both case studies highlight the critical role of regulation in addressing emerging risks associated with digital interactions, while also acknowledging the need for balance in fostering innovation, protecting user privacy, and ensuring cybersecurity. These policies demonstrate how different sectors are grappling with the dual objectives of technological advancement and the protection of citizens’ rights and freedoms in the digital age.
Now, we turn our attention to Case Study 2, which explores the growing adoption of AI-initiated e-KYC systems in the Asia Pacific region and the implications of this technology for regulatory compliance in the financial sector.
Case Study 2: AI-Initiated e-KYC
Background & Adoption Across Regions
AI-enabled Know Your Customer (KYC) is the application of AI technologies to streamline and enhance the process of verifying customer identities in compliance with regulatory requirements. By leveraging tools such as machine learning, natural language processing, and biometric recognition, AI-enabled KYC offers numerous benefits, revolutionizing traditional customer verification processes. It significantly reduces the time and cost associated with manual KYC by automating tasks like document verification, identity matching, and risk scoring (SWIFT, n.d.).
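The automation described above can be pictured as a risk-scoring step that routes low-risk applicants to automatic approval and everyone else to manual review. The following is a minimal rule-based sketch; the field names, weights, and threshold are illustrative assumptions, not any vendor's actual model (production systems would use trained models rather than hand-set weights):

```python
from dataclasses import dataclass

@dataclass
class KycRecord:
    # Hypothetical features a KYC pipeline might extract from submitted documents
    document_match_score: float   # selfie-to-ID-photo similarity, 0..1
    name_matches_registry: bool   # name found in an official registry?
    country_risk: float           # jurisdiction risk weight, 0..1

def risk_score(record: KycRecord) -> float:
    """Toy risk score: higher means riskier. Weights are illustrative."""
    score = 0.0
    score += (1.0 - record.document_match_score) * 0.5
    score += 0.0 if record.name_matches_registry else 0.3
    score += record.country_risk * 0.2
    return round(score, 3)

def decision(record: KycRecord, threshold: float = 0.4) -> str:
    """Route low-risk applicants to auto-approval, the rest to a human reviewer."""
    return "auto-approve" if risk_score(record) < threshold else "manual-review"

applicant = KycRecord(document_match_score=0.95, name_matches_registry=True,
                      country_risk=0.1)
print(decision(applicant))  # auto-approve
```

Keeping a manual-review path for anything above the threshold is what preserves human oversight while still capturing the time and cost savings the paragraph describes.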
AI-preparedness and Cybersecurity Risks in AI-Initiated e-KYC
AI-preparedness in the Asia-Pacific region is marked by significant discrepancies in technology development and literacy, reflecting the region’s diverse economic, social, and political landscape. On one end, countries like Japan, South Korea, and Singapore are global leaders in AI innovation, boasting cutting-edge research, robust infrastructure, and highly skilled workforces. Conversely, many developing nations in South Asia and the Pacific Islands face limited Internet access, low digital literacy rates, and insufficient resources to invest in AI technologies (International Monetary Fund, n.d.).
The use of artificial intelligence (AI) in electronic Know Your Customer (e-KYC) has transformed digital identity verification. However, this advancement also brings cybersecurity risks, data privacy concerns, and the potential for fraud. This analysis explores these challenges and suggests possible solutions to mitigate risks.
AI-based e-KYC systems rely on extensive data collection, which exposes users to several risks. One major concern is the misuse of personal data. Studies indicate that data collected for KYC purposes may be repurposed without consent, leading to privacy violations, unauthorized surveillance, and even discriminatory practices (Feretzakis & Verykios, 2024). Additionally, large datasets used in AI-powered e-KYC make them attractive targets for cybercriminals. A breach exposes sensitive user information, leading to financial fraud and reputational damage.
Identity fraud is a significant risk associated with AI-driven e-KYC. Fraudsters exploit AI technologies to create fake identities that can bypass verification methods, facilitating crimes such as money laundering and financial fraud (Haque & Shoaib, 2023). Another major risk is account takeovers, where cybercriminals exploit security loopholes to gain unauthorized access to user accounts, leading to financial losses and data manipulation.
Deepfake technology is an emerging threat. AI-generated deepfake videos and images can deceive biometric authentication systems, allowing fraudsters to impersonate legitimate users. This can lead to unauthorized account creation, financial fraud, and reputational risks for financial institutions. Organizations must implement AI model validation, enhanced biometric security, and fraud detection mechanisms to counter these risks.
Suggested Solutions: Strengthening AI-Driven KYC
AI-driven e-KYC systems should incorporate multi-factor authentication (MFA), which requires users to verify their identity through multiple channels to enhance security. Bank-backed or government-issued digital IDs can also ensure authenticity and secure onboarding. Regular AI model validation is essential to detect and prevent deepfake threats (Feretzakis & Verykios, 2024). Organizations can stay ahead of evolving cyber threats by continuously testing and improving AI security frameworks.
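The MFA component mentioned above is commonly built on time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. The sketch below implements that standard with only the Python standard library; the secret and the surrounding flow are illustrative assumptions, not a specific provider's API:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1,
    as used by most authenticator apps)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_second_factor(secret_b32: str, submitted: str) -> bool:
    """Compare the submitted code against the current window in constant time."""
    return hmac.compare_digest(totp(secret_b32), submitted)

# An e-KYC flow would call verify_second_factor() only after the document or
# biometric check has already passed, so both factors must succeed.
secret = base64.b32encode(b"demo-secret-0123").decode()
print(verify_second_factor(secret, totp(secret)))  # True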
Policy Solutions – Governments and regulatory bodies must establish legal frameworks governing AI usage in e-KYC. Clear policies on AI ethics, transparency, and accountability can ensure responsible deployment (Md. Abdul Hannan et al., 2023). Public-private partnerships can further enhance security through collaborative knowledge-sharing and the development of best practices.
Youth Perspective: Advocating for Responsible AI Implementation
As AI-powered KYC presents both immense potential and significant risks, proactive measures—strong technical safeguards, clear regulatory frameworks, and youth advocacy—can create a system that is not just efficient but also secure and ethical. True progress lies in collaboration, where governments, tech leaders, and young changemakers work together to build trust in AI-driven identity verification. By engaging with policymakers, youth can champion transparency and accountability in AI-driven identity verification. International youth-led collaborations can foster knowledge exchange and push for harmonized e-KYC regulations, ensuring security without compromising inclusivity.
Conclusion
Both case studies underscore the ongoing tension between regulation, security, and digital rights. While Australia’s social media age restriction law aims to protect minors, it risks privacy infringement and digital exclusion. Similarly, AI-powered KYC enhances efficiency and security but raises concerns over data privacy, deepfake fraud, and regulatory inconsistencies.
A balanced, multi-stakeholder approach is essential to ensuring cybersecurity, privacy, and Internet safety. Governments, platforms, and civil society must collaborate on adaptive policies that protect users while preserving digital freedoms. As technology advances, policy frameworks must remain flexible, evidence-based, and globally aligned to address evolving cybersecurity threats while ensuring an open and inclusive digital landscape.
Written by Sana Nisar, Harvey Asuncion, Namratha Murugeshan, Nishant Pokhrel and Unaiza Shahid (Reviewed by Bea Guevarra and Jenie Benedetta Fernando)
Reference
- Feretzakis, G., & Verykios, V. S. (2024). Trustworthy AI: Securing sensitive data in large language models. AI, 5(4), 2773–2800. https://doi.org/10.3390/ai5040134.
- Haque, M. A., & Shoaib, M. (2023). e₹—The digital currency in India: Challenges and prospects. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3(1), 100107. https://doi.org/10.1016/j.tbench.2023.100107.
- Hannan, M. A., Shahriar, M. A., Ferdous, M. S., Jabed, M., & Rahman, M. S. (2023). A systematic literature review of blockchain-based e-KYC systems. Computing, 105. https://doi.org/10.1007/s00607-023-01176-8.
- International Monetary Fund. (n.d.). AI Preparedness Index (AIPI). Retrieved from https://www.imf.org/external/datamapper/datasets/AIPI.
- SWIFT. (n.d.). Know your customer (KYC). SWIFT. Retrieved March 7, 2025, from https://www.swift.com/risk-and-compliance/know-your-customer-kyc.