As 2025 comes to a close, the digital world feels more unstable than ever. Deepfake scams, AI-generated sexual abuse, and online manipulation have become everyday threats for young people across Asia-Pacific. What used to feel like “future tech problems” are now shaping elections, hurting young people’s mental health, and creating new forms of online violence.
This month, we’re diving into one of the biggest issues affecting APAC youth today: the rise of deepfake harm — and the growing need for smarter digital safeguards across the region.
A Region Under Siege: Deepfake Abuse & Scams Spread Across Asia
Across Asia, deepfake misuse exploded in scope and sophistication between 2022 and 2025, with three alarming trends emerging: sexual abuse imagery, identity fraud, and organized crime networks.
Between 2022 and 2023 alone, the Asia-Pacific saw a 1,500% surge in deepfake cases, including sexual abuse imagery and AI-generated identity fraud that targeted young people and vulnerable communities. Deepfake sexual extortion became an urgent issue, with Singapore reporting a rise in cases involving public figures and youth. Singaporean police said this reflected how “normalized” and accessible deepfake tools had become.
Deepfake abuse continued throughout 2025, with Southeast Asian countries struggling against increasingly organized deepfake crime networks. In early 2025, authorities dismantled 87 deepfake scam rings that had used AI-generated voices and faces to impersonate friends, relatives, and celebrities in financial scams across Asia.
Teens as Both Victims and Perpetrators
One of the most alarming trends of 2024–2025 is how deeply deepfake culture has entered youth spaces.
In South Korea, digital sex crimes involving deepfakes surged in 2024, affecting young people in alarming ways: those in their teens or twenties accounted for more than 90% of the victims. At the same time, teens themselves increasingly became perpetrators – some began using AI-generated sexual images for “entertainment” or casual harassment, reflecting how desensitized youth had become to digitally manipulated content.
By late 2024, the Korean National Police reported that teen-perpetrated deepfake crimes had nearly doubled within four years, with hundreds of youth charged annually for producing or spreading manipulated sexual content. This troubling pattern persisted into 2025, with mid-year arrests already surpassing the entire previous year’s total. Teens continued to make up the majority of suspects.
The Systems Are Failing: Why APAC Youth Are Still at High Risk
While authorities focused on arresting perpetrators, victims faced their own set of challenges. Across the region, young women and girls who experienced deepfake abuse often found themselves trapped in multiple ways. Many lacked the digital literacy to recognize the abuse they experienced or understand how to seek help – a problem particularly acute for those with limited technology access. When victims did try to report, they encountered failing response systems: police lacked training and resources, courts took years to resolve cases, and platform reporting tools often didn’t work in local languages or understand regional contexts. According to UNFPA, these systemic failures left many victims even more vulnerable to ongoing tech-facilitated harm.
These victim experiences reflected deeper systemic failures. Reporting mechanisms weren’t designed for the speed and scale of digital abuse. School-level education rarely covered AI manipulation risks, leaving an entire generation unprepared. And most digital platforms still lacked strong enough safeguards to detect deepfake sexual content or prevent scams before they spread.
The cumulative effect created a crisis of trust: young people across the region became increasingly unsure what to believe online. For a generation that grew up digital-native, this erosion of basic trust in images, videos, and voices represented a fundamental shift in how they navigated digital spaces.
What Young People Can Do Now
The deepfake crisis won’t solve itself. You already know what needs to happen: schools teaching AI literacy before it’s too late, platforms detecting abuse before it spreads, governments taking youth safety as seriously as economic growth. The question is whether you’ll demand it loudly enough that those in power can’t ignore you.
Share what you know. Support friends who’ve been targeted. Call out platforms when they fail. Push your schools to update their curricula. Most importantly: refuse to accept that this is just “how things are online now.”
What We’re Reading & Listening to This Month
As we wrap up 2025 and look toward the new year, one piece we recommend exploring is “Ten Technology Trends for 2026”. It offers a sharp look at what might shape our digital world in the coming year — from the rise of AI-driven crime and new concerns about synthetic identities to the growing influence of augmented intelligence.
Written by Jenie Fernando (Reviewed by Sherry Shek)