The number of Internet users has grown at an annual rate of roughly 4%, with approximately 196 million new users coming online each year. In 2020, the average Internet user spent 145 minutes online daily, with American users leading at 24 hours per week. Over the past decade, time spent online grew by 62%, driven largely by smartphones, which now account for 55% of online access. By 2025, an estimated 72.6% of global smartphone users are projected to access the Internet exclusively via their phones. Alongside this growth, particularly in social media, extremism and its dissemination online have also increased.
The Internet has long been valued for enabling information dissemination, conversation, and public debate, but it has also become a tool for promoting extremism and reinforcing prejudice. A study drawing on the Profiles of Individual Radicalization in the United States (PIRUS) database found that between 2011 and 2016, 73.2% of extremists used social media platforms for purposes related to extremism, such as consuming content, engaging in extremist dialogue, spreading propaganda, or communicating with other extremists.
Terrorists Exploit Modern Technology to Promote Extremism
The role of technology, particularly the Internet and social media, has become increasingly pronounced in modern terrorism. These platforms are powerful tools for radicalizing individuals, inciting violence, claiming responsibility for attacks, recruiting, and fundraising. For example, the 2019 Christchurch shooting in New Zealand was live-streamed on Facebook, demonstrating how terrorists can use these technologies to amplify their impact.
In response to efforts by social media platforms and law enforcement agencies to take down terrorist content, terrorists have adapted how they use the Internet and social media, turning to encrypted communication and other innovative evasion methods. In one attempt to avoid detection, for example, a video containing terrorist content was uploaded to Facebook with a 30-second introduction from the France 24 news channel before the actual 49-minute propaganda video began.
Bots and automated accounts play a growing role in spreading harmful content online. They flood social media with extremist views, hateful language, and messages that can incite violence, and these fake accounts manipulate platforms by generating large volumes of divisive and inflammatory content.
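Platforms commonly spot this kind of automated amplification with simple behavioral heuristics. The sketch below is a hypothetical, minimal example, not any platform's real method; the data shape and thresholds are illustrative assumptions. It flags accounts that post at implausibly high rates or repeat near-identical text.

```python
from collections import Counter

def flag_bot_like_accounts(posts, max_posts_per_hour=30, max_duplicate_ratio=0.5):
    """Flag accounts whose behavior resembles automated amplification.

    posts: dict mapping account name -> list of (timestamp_hours, text) pairs.
    Thresholds here are illustrative assumptions, not real platform policy.
    """
    flagged = set()
    for account, items in posts.items():
        if len(items) < 2:
            continue  # too little activity to judge
        times = sorted(t for t, _ in items)
        span_hours = times[-1] - times[0]
        # Posting rate: many posts crammed into a short window is suspicious.
        rate = len(items) / span_hours if span_hours > 0 else float("inf")
        # Duplicate ratio: near-identical posts suggest scripted copy-paste.
        unique_texts = len(Counter(text for _, text in items))
        duplicate_ratio = 1 - unique_texts / len(items)
        if rate > max_posts_per_hour or duplicate_ratio > max_duplicate_ratio:
            flagged.add(account)
    return flagged
```

A real system would combine many more signals (account age, follower-network structure, content similarity across accounts), but the rate-and-duplication intuition is the same.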
Indeed, the rollout of end-to-end encryption on such platforms has prompted considerable concern among policymakers and counter-terrorism practitioners that it enables terrorists to “go dark” and communicate securely, thereby evading detection. Numerous investigations and counter-terrorism operations have documented the use of encryption by both ISIL- and Al-Qaida-affiliated individuals.
More recently, in March 2020, an ISIL supporter circulated a video explaining how facial recognition software could be used on Rocket.Chat, a decentralized social media platform that ISIL has used to spread terrorist content and facilitate online collaboration and coordination.
Role of AI in Amplifying Extremism
Extremist groups are also increasingly interested in exploiting AI technology to their advantage. By analyzing people’s online activity, AI can identify those who might be receptive to extremist ideas, making it easier for these groups to find, target, and recruit vulnerable individuals and grow their numbers.
Generative AI enables extremists to create and spread convincing but false content, such as fake videos, images, or audio, that can inflame emotions and spread misinformation. Terrorist groups are also using AI in more direct ways, such as in the development of autonomous weapons or cyber-attacks. Children and other vulnerable people are easy targets for extremist groups wielding these AI-powered tools.
In 2019, Link11 reported that nearly half of DDoS attacks were carried out using cloud services such as Amazon Web Services, Microsoft Azure, and Google Cloud. Leveraging the computing power these services provide, attackers can create malicious virtual machines, aided by machine learning, which are then used as part of a botnet to launch DDoS attacks.
AI advancements, including NLP technologies such as GPT-3, further amplify these risks by enabling the creation of customized radicalization content and the spread of fake news and conspiracy theories. This includes using AI to generate fake narratives or automated recruitment text, potentially undermining public trust and fueling extremist behavior.
Following the 2015 Paris attacks, the hacktivist group Anonymous launched an online campaign against ISIL, claiming to have removed as many as 25,000 ISIL bot accounts online.
Deepfake technology, which relies on advanced AI algorithms such as Generative Adversarial Networks (GANs), poses a significant disinformation threat. AI-generated videos, audio, and images can spread rapidly before they are verified. While deepfake content often has a limited online lifespan, its impact can be profound, causing panic, confusion, and distrust in public institutions. Terrorist groups may exploit AI-driven deepfakes to manipulate public opinion and create propaganda by fabricating controversial statements from political leaders or other influential figures.
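For readers curious about the mechanism behind such fakes, the original GAN formulation (Goodfellow et al., 2014) pits two networks against each other, a generator $G$ and a discriminator $D$, trained on the minimax objective:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $D$ learns to distinguish real samples $x$ from generated ones $G(z)$, while $G$ learns to fool $D$; as training converges, the generated distribution approaches the real one, which is precisely what makes deepfakes hard to spot.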
Understanding the Broader Impact of Social Media Extremism
Social media and the Internet exemplify the dual-use dilemma: they can be applied for beneficial purposes as well as harmful ones. A notable illustration is the Islamic State of Iraq and al-Sham (ISIS). This extremist armed group has skillfully used social media platforms such as YouTube, Twitter, and Telegram to amplify its ideological discourse and recruit new members. Exploiting various Internet mediums, including message boards, chat rooms, and even dating websites, creates a facade of widespread support and a false online image that can confuse and radicalize people. Such influence harms individuals, divides communities, and can lead to violence, leaving people afraid and distrustful of online platforms.
In an increasingly digital age, national security measures often extend to regulating and monitoring online platforms to prevent the spread of misinformation, extremist content, and propaganda. For example, anti-Pakistan propaganda from certain Afghan extremist social media accounts intensified following Pakistan’s airstrikes in Afghanistan’s Paktika and Khost provinces in March 2024. One widely circulated image gained significant traction, with over 25,000 views, yet it was actually from Marib, Yemen, and dated back to September 2021.
Similarly, a viral video falsely claimed that the Pakistan Army had bombed civilians in North Waziristan during Ramadan, when in reality the incident was a house roof collapse in South Waziristan. These political extremist groups actively seek to sow misunderstanding between the public and the military by spreading misinformation on social media and the wider Internet. In response to this threat, Pakistan’s government decided to ban X (formerly Twitter) as a measure to safeguard national security.
Threats to Religious and Government Institutions
Extremist groups use violence and vandalism to spread their ideas. Religious sites are especially targeted, and government buildings are threatened over political and ideological differences. For example, Khalistani extremists have been involved in vandalizing Hindu temples and Indian government buildings in India, the United States, and elsewhere, using online platforms to spread hateful messages that encourage violence and property damage. The escalation of unrest in Punjab indicates that attacks against Hindu temples and Indian government buildings may increase, necessitating heightened security measures. Organizations such as “Sikhs for Justice” use bot-like activity to amplify their content and incite violence; while Twitter has been banning these accounts, the network adapts by creating new accounts and evading detection.
Such incidents show how online incitement can push society toward mob lynching, intentionally harming or dividing people, spreading hatred, and promoting discrimination in ways that deeply disrupt social harmony and exacerbate tensions between groups.
The rise of online platforms has had profound social impacts, shaping how communities interact and how ideologies, including extremist ones, spread. These platforms can create echo chambers where hateful speech grows, leading to more division and violence. For example, 2channel (now known as 5channel) in Japan highlights this issue, as it has been a significant source of ultranationalist rhetoric, xenophobia, and far-right propaganda.
This led to the proliferation of threads filled with hate speech against foreigners, particularly Koreans and Chinese, as well as the spread of conspiracy theories and nationalist propaganda. The forum’s content has been linked to real-world hate crimes, such as the harassment of foreigners and immigrants. The platform has also played a role in radicalizing individuals by creating echo chambers where extreme views are reinforced.
Fight Against Extremism: Finding Balance Between Security and Freedom
Online extremism poses a serious global threat. Extremist groups exploit the Internet to spread dangerous ideologies, incite violence, and disseminate disinformation, destabilizing nations and endangering societies. Social media platforms, in particular, have become breeding grounds for hate speech, prejudice, and extremism. These platforms are often used to manipulate individuals, create echo chambers, and organize real-world acts of violence.
In these echo chambers, individuals are more likely to embrace extreme beliefs, leading to radicalization. As these radicalized groups grow, they present significant threats to both national and regional security, resulting in heightened violence, social unrest, and difficulties in maintaining peace.
The rise of AI, the Internet, and social media has further amplified extremism and terrorism by accelerating radicalization, spreading misinformation, and inciting violence. Addressing this complex issue requires a comprehensive, collaborative approach involving governments, stakeholders, and the public. Recent actions, such as the bans on X (formerly Twitter) in Pakistan and Instagram in Turkey, underscore the urgency of finding effective solutions. However, such measures often infringe on free speech and fail to address the root causes of extremism. A more balanced, thoughtful strategy is needed—one that combats extremism and terrorism while safeguarding fundamental freedoms.
Written by Hamza Ahmad (Edited and Reviewed by Nawal Munir Ahmad & Qurra Tul Ain Nisar)
References
- World Population Review. (2024). Internet Users by Country 2024. Retrieved from https://worldpopulationreview.com/country-rankings/internet-users-by-country
- “Digital Terrorists: Extremists Targeting Military: ISPR.” The Express Tribune, 2024. Retrieved from https://tribune.com.pk/story/2482079/digital-terrorists-extremists-targeting-military-ispr
- “X Social Media Platform ‘Threat to National Security’: Pakistan Justifies Ban.” Anadolu Agency, 2024. Retrieved from https://www.aa.com.tr/en/asia-pacific/x-social-media-platform-threat-to-national-security-pakistan-justifies-ban/3269351
- Artificial Intelligence and Radicalism: Risks and Opportunities. The George Washington University Center for Extremism Research. Retrieved from https://extremism.gwu.edu/artificial-intelligence-and-radicalism-risks-and-opportunities