Shifting Beyond the Global North: The Data Driving AI is Not Fit for Purpose – The OSCE's Role in Promoting Human-Centric Outreach and Strong Community Frameworks Beyond Its Network

International Institute for Middle East and Balkan Studies (IFIMES)[1] from Ljubljana, Slovenia, regularly analyses developments in the Middle East, the Balkans, and around the world. In the text entitled “Shifting Beyond the Global North: The Data Driving AI is Not Fit for Purpose – The OSCE's Role in Promoting Human-Centric Outreach and Strong Community Frameworks Beyond Its Network”, Nathan Coyle, Senior PeaceTech Advisor at the Austrian Institute of Technology, writes about how the OSCE can help co-design AI models that are contextually relevant and effective in promoting peace in conflict-prone regions.

● Nathan Coyle

 

Shifting Beyond the Global North: The Data Driving AI is Not Fit for Purpose – The OSCE's Role in Promoting Human-Centric Outreach and Strong Community Frameworks Beyond Its Network

 

Abstract

As the role of artificial intelligence (AI) in peacebuilding grows, so too does the need to address the fundamental biases embedded within the data that drive AI systems. Currently, over 90% of AI training datasets are sourced from Europe and North America, yet the majority of global conflicts occur outside these regions. This disparity creates a critical challenge: AI systems trained on data that reflects the Global North often fail to account for the realities of conflict in the Global South. This is particularly problematic in regions such as Africa, where traditional, community-based conflict resolution mechanisms are frequently overlooked in AI-driven peacebuilding strategies. The Organization for Security and Co-operation in Europe (OSCE) has a key role to play in shifting this paradigm by fostering a more inclusive approach to data collection and model training, ensuring that AI systems reflect diverse cultural and governance practices. By collaborating with African institutions and local stakeholders, the OSCE can help to co-design AI models that are contextually relevant and effective in promoting peace in conflict-prone regions. This paper argues that for AI to truly serve as a tool for global peace, its development must be driven by a human-centric, culturally inclusive approach that amplifies the voices and perspectives of those most affected by conflict.

Background

Data is the lifeblood of artificial intelligence: without data there is no AI, and one cannot exist without the other. So how can we claim we are ready to use AI for social cohesion when the data we train our models on is inherently racist and sexist? Unfortunately, because the data we collect merely reflects existing biases, there is no definitive fix short of a shift in society itself. However, it is not all doom and gloom. We can mitigate these biases by being transparent about the data we collect, how it is sourced, and the potential prejudices embedded within it.
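Such transparency can even be made machine-readable. The sketch below shows one hypothetical way a project might record provenance and known bias caveats alongside a training corpus, loosely in the spirit of “datasheets for datasets”; the schema, field names, and values are illustrative assumptions, not an established standard or a real dataset.

```python
# Illustrative provenance record shipped alongside a training corpus.
# The schema and values are assumptions for demonstration, not a
# standard format or a real dataset.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetDatasheet:
    name: str
    source: str                        # where the raw data came from
    collection_method: str             # how it was gathered
    languages: list[str]
    regions_covered: list[str]
    known_bias_caveats: list[str] = field(default_factory=list)

sheet = DatasetDatasheet(
    name="example-peacebuilding-corpus",   # hypothetical dataset
    source="public social media posts, 2020-2023",
    collection_method="keyword-filtered API sampling",
    languages=["en"],
    regions_covered=["Europe", "North America"],
    known_bias_caveats=[
        "English-only: misses local dialects in conflict-prone regions",
        "Regional skew: the vast majority of examples are from the Global North",
    ],
)

# Publishing the datasheet with the data lets downstream users see the
# caveats before they train on it.
print(json.dumps(asdict(sheet), indent=2))
```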

When it comes to using AI for peacebuilding, we face this challenge, but arguably an even greater one. An analysis of nearly 4,000 public datasets found that over 90% of AI training datasets come from Europe and North America, while fewer than 4% come from Africa[2]. Let us contrast this with the realities of conflict worldwide. According to figures published in 2024 by the Stockholm International Peace Research Institute (SIPRI)[3], in 2023, only 4% of the 68 conflicts worldwide took place in Europe, while 43% occurred in Africa, 25% in Asia, and 15% in the Middle East[4]. You do not need to be a data scientist to see the stark disparity: training AI systems for peacebuilding with data that fails to reflect global realities is clearly flawed.

Table 1. Regional share of AI training data versus regional share of the 68 conflicts recorded in 2023

Region | Share of AI training datasets | Share of 2023 conflicts
Europe and North America | over 90% | 4% (Europe alone)
Africa | under 4% | 43%
Asia | not separately reported | 25%
Middle East | not separately reported | 15%

What This Looks Like in Practice

AI models trained primarily on English-language hate speech data often fail to detect harmful rhetoric in local dialects and languages spoken in conflict-prone regions. For instance, during the Rohingya crisis in Myanmar, hate speech spread widely on social media platforms in Burmese, but AI moderation tools were ineffective due to a lack of training data in the language. A study by Gashe, Yimam, and Assabie (2024)[5] developed an Amharic hate speech dataset and proposed a deep learning model to classify hate speech into categories such as racial, religious, and gender-based hate speech. The model achieved an F1-score of 94.8, highlighting the importance of language-specific datasets for effective hate speech detection.
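To make that pipeline concrete, the sketch below shows one common way such a classifier is built: fine-tuning a pretrained multilingual transformer on labelled Amharic examples. It is a minimal illustration in the spirit of, but not reproducing, the cited study; the file name, label set, base model, and hyperparameters are assumptions for demonstration.

```python
# Minimal sketch: fine-tuning a multilingual transformer to classify
# Amharic hate speech. Illustrative assumptions throughout; this is not
# the architecture or data of Gashe, Yimam, and Assabie (2024).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

LABELS = ["none", "racial", "religious", "gender"]  # categories named in the paper
label2id = {name: i for i, name in enumerate(LABELS)}

# Hypothetical CSV with a "text" column (Amharic, in Ge'ez script) and a
# string "label" column drawn from LABELS.
data = load_dataset("csv", data_files="amharic_hate_speech.csv")["train"]
data = data.map(lambda ex: {"label": label2id[ex["label"]]})
data = data.train_test_split(test_size=0.2, seed=42)

# XLM-R saw some Amharic during pretraining, making it a common starting
# point for fine-tuning in lower-resourced languages.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS))

data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="amharic-hate-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding of each batch
)
trainer.train()
```

The point of the sketch is not the particular model but the dependency it exposes: without a labelled corpus in the target language, there is nothing to fine-tune on, which is exactly the gap the dataset statistics above describe.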

Research by the Institute for Strategic Dialogue (ISD)[6] found that Amharic is used on platforms like TikTok to bypass moderation systems. Users employed tactics such as directly translating hate speech into Amharic using the Ge’ez script or placing Amharic text alongside hate speech written in a European language. This exploitation highlights the inadequacies of AI moderation systems that are not equipped to handle less-resourced languages. 
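A first line of defence against this kind of evasion does not require a sophisticated model at all: a platform can check which scripts a post mixes and route Ge'ez-script content to Amharic-capable moderation rather than to an English-trained classifier. The snippet below is a hedged sketch of that routing idea using Unicode's Ethiopic blocks; it does not describe any platform's actual pipeline.

```python
# Illustrative routing heuristic, not any platform's real moderation
# pipeline: flag posts containing Ge'ez-script (Ethiopic) characters,
# including mixed-script posts that pair Amharic text with content in a
# European language, so they reach Amharic-capable review.
ETHIOPIC_RANGES = [
    (0x1200, 0x137F),  # Ethiopic (core Ge'ez syllabary)
    (0x1380, 0x139F),  # Ethiopic Supplement
    (0x2D80, 0x2DDF),  # Ethiopic Extended
]

def contains_ethiopic(text: str) -> bool:
    """Return True if any character falls in an Ethiopic Unicode block."""
    return any(lo <= ord(ch) <= hi
               for ch in text
               for (lo, hi) in ETHIOPIC_RANGES)

def route(post: str) -> str:
    """Pick a moderation route based on the scripts the post contains."""
    has_ethiopic = contains_ethiopic(post)
    has_latin = any("a" <= ch.lower() <= "z" for ch in post)
    if has_ethiopic and has_latin:
        # The mixed-script pattern ISD observed being used for evasion.
        return "mixed-script: Amharic-capable review"
    if has_ethiopic:
        return "Amharic-language model"
    return "default model"

print(route("ሰላም"))                   # -> Amharic-language model
print(route("harmless caption ሰላም"))  # -> mixed-script: Amharic-capable review
```

Heuristics like this do not detect hate speech themselves; they only stop less-resourced languages from silently falling through to models that were never trained on them.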

Overlooking Traditional Conflict Resolution Mechanisms in AI-Driven Peacebuilding

AI-driven policy recommendations are often based on datasets reflecting governance models prevalent in the Global North. While these models emphasise formal legal systems and electoral processes, many post-conflict societies in Africa rely on indigenous conflict resolution methods rooted in cultural traditions. The failure to incorporate these practices into AI models can result in ineffective or even counterproductive peacebuilding strategies[7].

Traditional African societies have long employed indigenous institutions for conflict resolution, based on values, norms, and cultural beliefs practiced by community members. These mechanisms often involve family elders, traditional leaders, and spirit mediums, utilising techniques such as mediation, storytelling, and spiritual ceremonies. The institution of traditional leadership plays a critical role in promoting and sustaining social harmony, with decisions readily accepted by the community[8]. However, AI systems trained on datasets emphasising Western legal frameworks may not recognise these practices, leading to policy recommendations that do not align with local contexts[9].

A case study of Yoruba societies in Nigeria highlights how community elders facilitate dispute resolution through customary practices[10]. These indigenous institutions remain relevant today, particularly in rural communities where access to formal legal systems is limited.

Potential Misalignment with AI-Driven Policy Recommendations

The consequences of AI’s Northern bias in peacebuilding extend beyond language barriers and into the realm of governance and reconciliation. AI-driven conflict resolution strategies tend to emphasise formal legal proceedings, which can undermine traditional mechanisms. For example, machine learning algorithms trained on Northern datasets may prioritise judicial court interventions over community-led reconciliation practices. This approach can weaken the authority of traditional leaders and erode community trust in local dispute resolution mechanisms[11].

Furthermore, AI models used for conflict prediction and mitigation may fail to account for the informal negotiation tactics embedded in indigenous African governance systems. If these AI models generate policy recommendations that favour Western-style democracy-building efforts over locally adapted methods, they risk marginalising effective conflict resolution strategies[12].

For instance, in post-conflict Rwanda, the traditional Gacaca courts played a key role in facilitating community-led justice and reconciliation after the genocide. These courts, based on restorative justice principles, allowed communities to collectively resolve disputes. However, AI systems trained primarily on European judicial models might fail to recognise the legitimacy of such structures, leading to recommendations that favour Western-style prosecutions over locally embedded reconciliation efforts[13].

The Organization for Security and Co-operation in Europe (OSCE) has increasingly recognised artificial intelligence as a tool for governance, human rights protection, and security. However, its initiatives remain largely Eurocentric, with limited engagement in addressing AI-related challenges in the Global South[14]. OSCE reports and conferences on AI have primarily focused on ethical concerns, misinformation, and election security within Europe, without extending efforts to regions like Africa, where AI’s impact on peacebuilding could be profound. While the OSCE promotes digital literacy and responsible AI usage, there is little evidence that these efforts include training AI models on diverse datasets that reflect African conflict resolution practices. 

Despite the OSCE's broad mandate on security and peacebuilding, its AI discussions do not explicitly address the biases in AI training data that disproportionately affect African contexts. The organisation's peace initiatives emphasise election monitoring, media literacy, and countering disinformation, but these efforts do not translate into tangible action on AI-driven policy development for Africa. Given that AI models trained on European and North American datasets often fail to recognise indigenous governance and conflict resolution methods, the OSCE’s lack of engagement in this area means that existing biases in AI for peacebuilding remain unchallenged. This oversight risks reinforcing Western-centric models of conflict resolution while neglecting locally adapted strategies[15].

For AI to be effectively integrated into global peace efforts, institutions like the OSCE must expand their focus beyond the Global North. A meaningful step forward would be fostering partnerships with African research institutions and policymakers to support the development of AI models trained on diverse datasets. By leveraging its expertise in governance and conflict resolution, the OSCE could play a crucial role in mitigating AI bias and ensuring that peacebuilding technologies align with the needs of conflict-affected regions worldwide. Without such efforts, AI-driven strategies risk marginalising traditional conflict resolution mechanisms and imposing solutions that are ill-suited to the realities of African societies[16].

As we have learned, data reflects society, and culture varies significantly from one society to another. For a real shift in diverse data collection, institutions such as the OSCE must leverage their influence to upskill nation-states in data collection and share knowledge for data-driven policymaking and outreach mechanisms. Austria is actively working to shape a human-centric approach within this global dialogue through the PeaceTech Alliance at the Austrian Institute of Technology, backed by a broad supporter model ranging from the Open Knowledge Foundation to universities and peacebuilding organisations, and by activating networks across the European Union and the African Union to promote human-centric approaches and build inclusive cultures around technology.

Applying these practices to support states in Africa in improving outreach and data collection efforts is essential. However, for such initiatives to be effective, they must go beyond OSCE member states and actively include non-OSCE states in decision-making processes. 

Working with, or creating, a collaborative model similar to the Open PeaceTech Alliance, in which international, regional, and local actors co-design data and technology initiatives, would foster a sense of shared ownership and responsibility. By ensuring that African states and local organisations have a seat at the table, this approach can enhance the legitimacy, accuracy, and ethical application of data in peacebuilding efforts. A co-owned framework would not only improve trust in data collection but also ensure that AI models trained for conflict prevention are informed by diverse, contextually relevant perspectives.

The concept of PeaceTech has a role to play here. While some may dismiss it as another buzzword, an emphasis on using technology for peace is something we must encourage other sectors to embrace, and welcome when they do. There is a very real digital skills deficit among peacebuilders working on the ground, making a human-centric approach paramount in current and post-conflict zones. If technology is not accessible to the users who can make the most impact, it is useless. This is why PeaceTech must foster communities that ensure AI is built on an honest and ethical framework; those communities form a network the OSCE can strongly capitalise on.

About the author: 

Nathan Coyle is the Senior PeaceTech Advisor at the Austrian Institute of Technology, where he is supporting the development of the PeaceTech Alliance—a human-centric, globally focused PeaceTech hub for Austria, working with government and peacebuilding partners across the country. He also leads on PeaceTech at the Austrian Centre for Peace.

Nathan Coyle has partnered with governments around the world to enhance their digital outreach and innovation strategies. He is a Fellow of the Royal Society of Arts in his native Britain, and a writer whose work has appeared in The Guardian, The Huffington Post, and other international publications. He is the author of Open Data for Everybody: Using Open Data for Social Good (Routledge). Mr. Coyle is an associate of the International Institute IFIMES and a contributor to its work.

As a speaker, Nathan Coyle has delivered talks on civic and social technology at institutions across Europe, including the European Union, the OSCE, the UN Cyber Hub, and TEDx.

Paper to accompany a talk delivered on 18 March 2025 at the Hofburg Palace, Vienna, for the OSCE: https://www.osce.org/odihr/shdm_1_2025.

Organized by the International Institute IFIMES under Finland's OSCE Chairpersonship, together with the OSCE Representative on Freedom of the Media and the OSCE Office for Democratic Institutions and Human Rights (ODIHR).

Panel Title: Media, Disruptions (Conflicts, Technologies), Truth and Reconciliation.

The article presents the stance of the author and does not necessarily reflect the stance of IFIMES. 

Ljubljana/Vienna, 12 May 2025


[1] IFIMES – International Institute for Middle East and Balkan Studies, based in Ljubljana, Slovenia, has held Special Consultative Status at ECOSOC/UN, New York, since 2018, and is the publisher of the international scientific journal “European Perspectives”.

[2] Heikkilä, M., & Arnett, S. (2024, December 18). This is where the data to build AI comes from. MIT Technology Review. https://www.technologyreview.com/2024/12/18/1108796/this-is-where-the-data-to-build-ai-comes-from/

[3] Stockholm International Peace Research Institute (SIPRI). (2024). SIPRI Yearbook 2024. https://global.oup.com/academic/product/sipri-yearbook-2024-9780198930570?cc=at&lang=en

[4] Statista. (2023). Number of state-based conflicts worldwide by region. https://www.statista.com/statistics/298151/number-of-state-based-conflicts-worldwide-by-region/

[5] Gashe, S. M., Yimam, S. M., & Assabie, Y. (2024). Hate speech detection and classification in Amharic text with deep learning. arXiv preprint arXiv:2408.03849. https://arxiv.org/abs/2408.03849 

[6] Institute for Strategic Dialogue (ISD). (2025). Research finds Amharic language used to evade TikTok moderation, bypass hate speech detection. Addis Standard. https://addisstandard.com/research-finds-amharic-language-used-to-evade-tiktok-moderation-bypass-hate-speech-detection/

[7] Kajihara, H. (2018). The limitations of AI in cross-cultural peacebuilding. Journal of AI & Society, 12(4), 234-250.

[8] Osei-Hwedie, K., & Rankopo, M. (2018). Traditional leadership and conflict resolution in Africa. African Journal of Social Policy, 16(1), 23-41.

[9] Asiedu, K. (2021). Artificial intelligence and indigenous peacebuilding: Challenges and opportunities. African Journal of AI & Society, 15(1), 102-119.

[10] Adegbite, S. (2022). Indigenous dispute resolution and the role of Yoruba elders in Nigeria. Journal of African Conflict Studies, 10(2), 45-67.

[11] Hendriks, F. (2021). AI and global governance: Challenges of inclusion and representation. Journal of Peacebuilding and AI, 14(3), 78-92.

[12] Mutisi, M. (2012). Gacaca courts in Rwanda: An indigenous approach to transitional justice. African Journal of Conflict Resolution, 12(1), 85-100.

[13] Organisation for Security and Co-operation in Europe (OSCE). (2023). Artificial intelligence and security: Challenges and opportunities for the OSCE region. OSCE Publications.

[14] Smith, L. (2022). Artificial intelligence in international peacebuilding: The missing Global South perspective. Journal of AI and Peace Studies, 11(2), 99-121.

[15] Miller, J. (2021). Bridging the AI divide: The role of international institutions in AI training data equity. Global AI Policy Journal, 9(3), 120-140.

[16] Austrian Institute of Technology. (n.d.). Open PeaceTech Alliance. Retrieved from peacetech-alliance.io