Asia Pacific Policy Observatory November 2025 Report – Asia Pacific’s Digital Governance in the Age of Artificial Intelligence: A Youth-Led Analysis

This report is the sixth edition of the Asia Pacific Policy Observatory and continues our Youth-Led Analysis of Asia Pacific's Digital Governance in the Age of Artificial Intelligence series, following the June 2025 edition.

Driven by youth-led research, this report focuses on four critical digital issues – ethical AI and bias, misinformation and disinformation, labor and automation, and AI governance and accountability – reflecting the diverse challenges young people in the Asia Pacific face today. Our aim is to provide practical recommendations and democratize access to policy analysis, ensuring that the voices of those most affected by AI – today’s youth – are heard in policy discussions. We seek to offer insights that guide the development of inclusive, ethical, and forward-thinking AI policies.

Although published in November 2025, this report has informed APAC youth's contributions to key consultation processes around WSIS+20 throughout the year. This includes shaping a joint letter endorsed by nearly 70 youth leaders (September 2025), informing youth interventions at the informal consultations in July, October, and November, and contributing to the final APAC Youth Statement on WSIS+20, now titled the Declaration on Meaningful Youth Participation.

With generous support from TWNIC, we have been able to mobilize youth talent effectively in producing this report. We look forward to sharing this publication widely and engaging with readers to receive constructive feedback.

Lastly, we’d like to extend special thanks to our research team: Jenna Manhau Fung (Project Manager & Chief Editor), Kenneth Leung (Advisor & Research Director), Aviral Kaintura (Contributor & Editor), Ankita Rathi (Contributor & Editor), and other contributors – Amrita Tiwari, Archit Lohani, Joysa Kaushik, Mahee Buddhika Bandara Kirindigoda, Pham Thu Ngan, Rohan Sachdeva, Sana Nisar, and Songo Nore – for their valuable contributions and support of this report.


Executive Summary

The Asia Pacific Policy Observatory’s November 2025 report, “Asia Pacific’s Digital Governance in the Age of Artificial Intelligence: A Youth-Led Analysis,” examines the intersection of AI and digital governance across the Asia Pacific region. The report focuses on four critical areas: ethical AI and bias, misinformation and disinformation, labor and automation, and AI governance and accountability. It provides a comprehensive analysis of the challenges and opportunities presented by AI for young people in the region.

The first chapter highlights how AI systems are increasingly shaping critical areas of society, from hiring to healthcare, and how, without safeguards, these systems risk reproducing biases and deepening inequalities. The Asia Pacific region exhibits a fragmented regulatory landscape, with approaches ranging from prescriptive, state-led frameworks to soft-law, innovation-driven models. To ensure fairness, transparency, and accountability, stakeholders must mandate independent audits, integrate ethical accountability into development, and deploy tools that monitor bias and amplify affected communities' voices.

The second chapter focuses on how generative AI has expanded access to information but has simultaneously amplified misinformation and disinformation, disproportionately affecting women, youth, minorities, and digitally marginalized populations. The complex social, economic, and political dynamics of AI-driven falsehoods create systemic risks, from harassment to electoral manipulation. Governments, civil society, and platforms must enact adaptive laws, implement technical safeguards, and strengthen media literacy and fact-checking to build resilience against manipulation.

The third chapter examines how AI and platformization are reshaping labor markets across Asia Pacific, offering efficiency and flexibility while increasing precarity, surveillance, and algorithm-driven inequities. Gig and platform workers face unclear employment rights, limited protections, and opaque AI management systems. Addressing these challenges requires mandating transparency, human oversight, and grievance mechanisms, promoting equitable work practices, and empowering labor movements and civil society to advocate for fair and accountable AI in workplaces.

The fourth chapter examines AI governance across the region, highlighting a diverse landscape that ranges from ethics-driven, innovation-focused frameworks to centralized, security-oriented models. Risks emerge throughout the AI lifecycle, including biased datasets, privacy breaches, and unsafe deployment, while structural and institutional barriers often limit meaningful multistakeholder participation. Countries with strong regulatory autonomy – such as India, Japan, Singapore, South Korea, and China – set regional benchmarks, while others rely on externally developed frameworks. Building effective governance requires institutionalizing multistakeholder councils, embedding ethics, safety, and transparency by design, and fostering active engagement from industry, civil society, and the public to ensure trust, equity, and accountability.