The artificial intelligence revolution is rewriting the rules of power, knowledge, and representation on a global scale. Yet as AI systems increasingly shape everything from healthcare decisions to job opportunities, a critical question emerges: whose voices, values, and worldviews are actually encoded into these technologies?
Too often, the answer points to a troubling pattern of digital colonialism, where AI governance frameworks designed in Silicon Valley and European tech hubs are exported worldwide, frequently neglecting the distinct languages, contexts, and needs of the Global South.
AI Power and Its Discontents
According to Carnegie Endowment research, over half of AI governance initiatives originate in Europe and North America, despite these regions representing less than 15% of the global population. These regions disproportionately shape the ethical norms, regulatory structures, and technical standards that govern AI.
This concentration isn’t just about economic inequality; it’s about whose epistemologies, moral frameworks, and cultural assumptions become universalized through technology. When AI models are trained primarily on Western data and designed according to Western ethical frameworks, they export what researchers call “sociotechnical supremacy”: consciously or not, particular cultural values and worldviews are embedded into systems that shape reality for billions of people who had no say in their design.
Invisible Labor Behind the Interface
The invisible labor that powers AI is largely outsourced to the Global South. Investigations have revealed that Kenyan workers were paid under $2/hour to label traumatic content for OpenAI. The invisible workforce behind AI isn’t just in Kenya — it extends across Southeast Asia, where countries like the Philippines have become hubs for what researchers term “digital sweatshops.” Filipino digital workers, many earning significantly less than their counterparts in Silicon Valley, are now organizing for labor protections as AI deployment accelerates across the region.
This digital labor exploitation raises urgent ethical questions about who bears the costs of “intelligent” systems. Today’s data and labor are extracted from the Global South to fuel AI innovations in the Global North.
Language as a Site of Digital Colonialism
The linguistic landscape of AI reveals another dimension of this inequality. Despite over 200 million Swahili speakers and 45 million Yoruba speakers worldwide, these languages remain severely underrepresented in the training data that feeds natural language processing systems. Recent research shows that of over 2,000 African languages, only about 42 are represented to any meaningful degree in current large language models.
This isn’t merely a technical oversight — it reflects deeper power structures about which languages and cultures are deemed worthy of technological investment. When AI systems fail to understand or adequately represent African languages, they effectively exclude millions of people from participating in the digital economy and accessing AI-powered services.
The consequences extend beyond mere inconvenience. As AI systems increasingly mediate access to financial services, healthcare, education, and government benefits, linguistic exclusion becomes a form of systemic discrimination that locks entire populations out of essential services.
Alternative Models from the Global South
Global South actors are not passive recipients of these frameworks. The African Union’s AI Strategy and BRICS cooperation initiatives signal a rising push for inclusive and context-specific governance.
India, as a digital powerhouse and BRICS member, is emerging as a key voice in this movement. The country has called for a multipolar digital order, emphasizing data sovereignty and ethical AI tailored to its development priorities.
ASEAN countries are charting what some call a “Third Way”: an alternative to both Silicon Valley’s market-driven logic and China’s state-centric model. Indonesia, for example, has emphasized locally grounded AI regulations, while Singapore and Thailand explore inclusive standards that reflect regional diversity.
Most significantly for regional readers, ASEAN’s expanded AI Governance Guide, released in early 2025, represents an attempt to chart a distinctly Southeast Asian approach to AI governance. The framework emphasizes cultural sensitivity, development-oriented AI deployment, and regional cooperation — values that implicitly challenge both Western individualistic frameworks and Chinese state-centered models.
However, the effectiveness of this “ASEAN way” remains to be seen. With AI readiness varying dramatically across member states, from Singapore, ranked second globally, to countries lagging far behind, the region faces the challenge of collective governance amid vast disparities in capacity and resources.
Toward Plural Governance
Imagining inclusive AI governance requires moving beyond exported models. It involves:
- Co-designing AI systems with Global South partners.
- Recognizing diverse cultural understandings of privacy, fairness, and harm.
- Ensuring economic justice through fair compensation for data and labor.
- Supporting multilingual AI development.
- Creating international mechanisms where the Global South has real influence.
The risks of reinforcing global inequality through AI are real. But the rise of alternative frameworks — from the African Union to ASEAN — offers hope for a more democratic, representative governance. The time to act is now, before these power structures are permanently hardcoded into our digital future.
Read more…
- Carnegie Endowment: Advancing a More Global Agenda for Trustworthy AI
- African Union Continental AI Strategy
- BRICS AI Governance and Social Inclusion
- TIME Investigation: OpenAI’s Kenyan Workers
- The Guardian: Content Moderators’ Trauma
- As AI giants duel, the Global South builds its own brainpower
Written by Yukako Ban (Reviewed by Kenneth Leung)