Synthetic Lies, Real Consequences: Governing AI in the Face of Deepfakes

Case Study 1: The Escalation of Sexual Deepfake Crimes Among South Korean Youth: Analyzing the 2024 Telegram Scandal and Legislative Responses

The Rise of Deepfake Threats

ChatGPT’s 2022 release rekindled public interest in AI. Statista reports that 87% of respondents fear AI dangers, with deepfakes (69%) a top concern. Deepfakes can falsify military orders, sow political confusion, and erode democratic trust. Emerging around 2017 with fabricated footage of public figures such as Barack Obama, Mark Zuckerberg, and Taylor Swift, deepfakes have since been linked to stock market declines and have expanded into crimes such as intellectual property theft and national security threats as AI tools became more accessible. Sumsub’s analysis of over 2 million checks across 28 industries revealed a surge in detected deepfakes from 2022 to 2023: 1,740% in North America, 1,530% in Asia Pacific, 780% in Europe, 480% in the Middle East and Africa, and 410% in Latin America.

South Korea’s Sexual Deepfake Crisis

South Koreans are the subjects of 53% of sexual deepfakes worldwide, and in August 2024 the crisis escalated. Students created and shared sexual deepfakes of classmates and teachers from over 400 schools on Telegram. Team Datastack compiled these crimes to raise awareness, prompting the Korea Communications Commission to hold meetings. In October 2024, the Chungbuk Metropolitan Office of Education launched the first 24/7 sexual deepfake hotline, with more regions expected to follow. A November 2024 Ministry of Education survey found that 85.5% of respondents cited a lack of relevant education, prompting a two-week special education program in December. Alarmingly, National Police Agency data obtained by South Korean lawmakers revealed that 69% of sexual deepfake offenders over the past three years were teenagers. The issue is global, with economies worldwide grappling to contain the surge in sexual deepfakes.

Legal Reforms and Crackdowns

South Korea has proactively amended its laws to address deepfakes. The Special Act on the Punishment of Sexual Offenses was revised in March 2020 to criminalize the creation of sexual deepfakes. In December 2023, the Public Offices Election Act was amended to ban the use of deepfakes in election campaigning within 90 days of an election. Following the August 2024 surge, the penalty for creating sexual deepfakes, regardless of intent to distribute, was raised in October 2024 to up to seven years in prison and a 50 million won fine, aligning it with the law on covert filming, and viewing sexual deepfakes also became a crime. December 2024 revisions enabled victim compensation and empowered police to request the deletion and blocking of sexual deepfakes.

Lessons and Global Responses

To prevent further victimization, stronger global safeguards are needed, and economies can learn from South Korea’s responses. Singapore, for example, has passed laws empowering its Infocomm Media Development Authority to compel social media platforms to block harmful sexual content or face penalties. Strict legal consequences, such as South Korea’s enhanced penalties for sexual deepfakes, act as deterrents, especially since 38.2% of Korean middle and high school students perceived existing penalties as too lenient. And with 29.7% of these students unsure how to respond if victimized, clear support mechanisms, such as South Korea’s 1899-9003 hotline offering psychological and legal counseling, are crucial. Sustained public awareness ensures that policymakers grasp the impact of sexual deepfakes, allowing laws and preventive strategies to evolve against this growing threat.

Case Study 2: Balancing Innovation and Regulation: Comparison of AI Regulatory Models in Australia and Singapore

AI is transforming daily tasks while raising serious ethical and regulatory concerns. Governments face the challenge of fostering innovation while ensuring that AI operates transparently and ethically. AI governance is essential to achieving trust, efficiency, and compliance in AI applications that affect governments, businesses, and individuals, and it shapes policies with consequences for economic growth and data privacy. Past failures, such as Microsoft’s Tay chatbot incident, highlight the significant ethical and social risks posed by AI deployed without proper oversight.

Australia

By 2030, the Australian economy is expected to gain AU$315 billion from digital innovation, including AI. Fast-paced AI adoption is reshaping Australia’s economy, society, and government. However, Australia’s current regulatory system is not equipped to respond to AI’s distinct risks and benefits. The government is committed to creating a regulatory environment that builds community trust and promotes innovation and adoption while balancing critical social and economic goals. Its interim response takes a risk-based approach: identifying high-risk and unacceptable-risk areas and putting regulatory frameworks and mandatory guardrails in place to ensure transparency, accountability, security, and human oversight. Innovation and research in high-risk areas may take place in a sandbox environment, at a small scale and with appropriate oversight.

AI is also playing a role in making electric vehicles safer. For example, to build trust in electric vehicles, a University of Arizona research team developed an algorithm to predict when and where thermal runaway (a major cause of EV explosions) is likely to start and to design mechanisms to prevent it. Balancing regulation and innovation is thus essential to ensuring the safety of AI systems while reaping the benefits of emerging technologies.
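To make the idea of algorithmic early warning concrete, the sketch below shows how a battery-monitoring routine might flag cells trending toward thermal runaway. It is a minimal illustration only, not the University of Arizona team’s published method: the function name, thresholds, and data fields are assumptions chosen for readability.

    # Hypothetical illustration: a simple rate-of-rise check on battery cell
    # temperatures. The thresholds below are illustrative, not engineering values.
    from dataclasses import dataclass

    @dataclass
    class CellReading:
        cell_id: str
        temp_c: float        # current cell temperature in deg C
        prev_temp_c: float   # temperature at the previous sample
        dt_s: float          # seconds between the two samples

    def flag_runaway_risk(readings, max_temp_c=60.0, max_rise_c_per_s=0.5):
        """Return IDs of cells whose temperature or heating rate exceeds
        the assumed safety thresholds, so preventive action can be taken."""
        at_risk = []
        for r in readings:
            rise_rate = (r.temp_c - r.prev_temp_c) / r.dt_s
            if r.temp_c >= max_temp_c or rise_rate >= max_rise_c_per_s:
                at_risk.append(r.cell_id)
        return at_risk

    if __name__ == "__main__":
        pack = [
            CellReading("cell-01", temp_c=41.0, prev_temp_c=40.5, dt_s=10.0),
            CellReading("cell-02", temp_c=58.0, prev_temp_c=49.0, dt_s=10.0),  # heating fast
        ]
        print(flag_runaway_risk(pack))  # -> ['cell-02']

Even a toy rule like this shows why oversight matters: the choice of thresholds and the response to a flagged cell are safety-critical decisions that regulators may reasonably expect to be transparent and auditable.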

Singapore

Singapore has adopted a forward-thinking approach to AI, balancing innovation with consumer protection. This soft-law approach allows AI developers and businesses to self-regulate, attracting global tech investment. Ranked third in the 2024 Global AI Index for AI investment, innovation, and implementation (Tortoise Media, 2024), Singapore has established the Model AI Governance Framework, which guides organizations in addressing ethical and governance concerns when deploying AI solutions. The framework promotes transparency, fairness, and human-centric use while providing guidance on governance practices, data bias, and stakeholder communication.

In 2023, Singapore launched its National AI Strategy 2.0, which aims to excel internationally in AI, empower businesses, and upgrade infrastructure. Its 15-point action plan emphasizes AI integration across sectors such as research and development, education, health, and public services, with initiatives including AI Verify, the OneService Chatbot, and SELENA+. Tech startups like MooVita, which specializes in autonomous mobility solutions, are exploring the potential of AI in autonomous electric vehicles to reduce carbon emissions and improve efficiency, while ATTAIN*SG is a research project focused on public perception of AI in autonomous vehicles.

Singapore is also addressing key AI challenges through initiatives such as a draft governance framework for generative AI, the Monetary Authority of Singapore’s Veritas toolkit for managing AI risks in the financial sector, Personal Data Protection Commission guidelines on data privacy in AI training, and the SkillsFuture initiative to upskill the workforce for emerging technologies.

Conclusion

Australia’s and Singapore’s contrasting AI governance models underscore a broader divergence in national approaches to regulating emerging technologies. These differences are shaped by each nation’s societal values, political structures, economic priorities, and historical regulatory philosophies. However, such fragmentation may create vulnerabilities: regulatory arbitrage, where businesses exploit jurisdictional inconsistencies to circumvent oversight, and policy misalignment, which can hinder global AI governance efforts.

To address these challenges, nations must strike a delicate balance between fostering innovation and ensuring robust regulatory safeguards. While AI governance should remain adaptable to national contexts, establishing foundational principles at global forums is essential to facilitate knowledge-sharing, capacity-building, and international collaboration. A harmonized yet flexible approach—grounded in transparency, accountability, and ethical AI principles—can help mitigate risks while enabling emerging technologies that align with broader societal interests.

Written by Oh Ji Won, Au Yi Teng, Muhammad Sadeem Hannan and Rohan Sachdeva (Reviewed by Socheata Sokhachan)


References