Since OpenAI launched ChatGPT in fall 2022, there has been tremendous global attention on the risks and benefits of artificial intelligence (AI). With so many unknowns about the capacity of private companies and governments to harness this technology for peace and security, it is difficult for the public and private sectors to identify a clear path for addressing AI’s challenges. In this evolving environment, peacebuilding organizations can and should play a critical role in engaging with companies, multilateral institutions and governments on AI development and application, advising on and shaping its uses to advance peace and to mitigate societal harms that could contribute to conflict.

President Joe Biden discusses his administration’s commitment to seizing opportunities and managing the risks of artificial intelligence in San Francisco. June 20, 2023. (Doug Mills/The New York Times)

Challenges for AI in Peacebuilding

AI’s growth and diversity of application will exacerbate the already stark digital and technological divide between the haves and have-nots in the international system. Most AI development, and most work on its regulation, is concentrated in a few countries and a handful of private companies. As with climate change and nuclear weapons, the majority of the world will need to confront how to construct a governance framework that limits the adverse effects of a few countries’ and companies’ actions on everyone else.

Generative AI will be front and center in 2024 amid the massive wave of elections taking place throughout the world: an unprecedented 50-plus countries, home to some 2 billion people, will go to the polls. ChatGPT is the best-known generative AI tool, but it is far from the only one. The proliferation of generative AI will contribute to the spread of misinformation and disinformation, particularly around elections. State and non-state actors can use it to create false content that bots and people then spread on social media and messaging platforms.

Generative AI can also drive the development of sophisticated deepfakes, contributing to the proliferation of misleading images and audio. Deepfakes disproportionately target women, adding another tool to the digital abuse and harassment that women encounter across the world and that hinders their engagement in the public sphere. While the gendered aspects of misinformation, disinformation and malinformation (MDM) are receiving increasing attention from governments and multilateral institutions, the insidious dimensions of this form of violence still need greater examination and inclusion in high-level discussions about AI safety and risks at the international and national levels, especially to support Women, Peace and Security efforts.

Perhaps the area receiving the most attention with regard to AI is weapons. This year brought the dangers of AI in nuclear and other advanced weapons systems into stark awareness. The U.S. government recently advanced two efforts on this front: U.S. President Joe Biden and Chinese President Xi Jinping agreed to discuss the risks AI may pose in weapons systems and to resume military-to-military communications, and Under Secretary of State for Arms Control and International Security Bonnie Jenkins is leading efforts toward a multilateral agreement on the military use of AI. Ukraine, in defense of its territory, and Israel have already employed AI in weapons systems for their military operations, and Russia is potentially using autonomous weapons in its war of aggression against Ukraine.

Another area of AI use deserving more international attention is surveillance technology. With the rise of authoritarianism and attacks on democratic processes, AI can enhance the ability of governments to limit the space in which diverse civil society actors advance human rights. From facial recognition software and predictive policing to generative AI potentially revealing details about nonviolent activists, the world is entering a period of largely unrestricted AI development for security purposes. For all the attention on ChatGPT, there is an incredible number of generative AI systems produced by companies, governments and research organizations, drawing on the tremendous volume of data that exists in different languages across the internet. AI-enabled surveillance will continue to allow tracking of citizens’ actions on a large scale. It will substitute for some of the work of security professionals by reviewing and combing through large amounts of data, from images to audio, as well as predicting where crime may take place and informing sentencing decisions. AI in surveillance technology will also enhance the capacity of security and law enforcement forces across the world and transform how policing is executed. Countries with advanced AI surveillance technology are sharing it with other countries — from China with countries in Asia and Africa to Russia with Latin America.

For countries reliant on loans from other governments, private banks and international institutions, AI could exacerbate global inequality. The finance industry has long relied on AI to make lending decisions, weigh institutional risks and enable easier banking for customers. Greater AI involvement in financial services could limit who has access to banking and loans, with automated tools determining which people and regions of the world are deemed too risky to support. The AI systems that determine risk and investment are not, and likely will not be, built in the countries on the receiving end of development loans or investment. Instead, those who already have the means will continue to use AI to define eligibility for investment and to judge future potential.

AI and the International System

From the United Kingdom and the United States to China and Russia, governments across the world are discussing the future of AI and how it should be governed. In early November, the U.K. government held its first AI Safety Summit, featuring representatives of foreign governments, including the United States and China, as well as private technology companies. The summit focused on frontier AI: general-purpose foundation models with the potential to cause intentional or unintentional harm. At its conclusion, countries from the Netherlands and Brazil to Rwanda and the Philippines agreed to continue an inclusive global dialogue on AI safety and risks.

Discussions on AI use and development are also taking place in multilateral institutions. The United Nations’ secretary-general attended the U.K. summit and has launched a High-level Advisory Body on Artificial Intelligence through the Office of the Secretary-General’s Envoy on Technology. The advisory body is opening the most ambitious global governance discussions on AI to date, bringing a diverse group of experts together to consider AI risks, computing capacity and algorithms, among other topics, in the lead-up to the Summit of the Future in September 2024, where member states will consider a Global Digital Compact. The U.N. is uniquely positioned to convene such a large group of experts, and this effort could be the shot in the arm that reinvigorates multilateralism for the future.

In addition to the U.N., the European Union is continuing its technology governance efforts. Since 2018, the EU has evolved its approach to AI, but it has consistently sought to mitigate harm from emerging and advancing technologies and to ensure that they contribute to the public good. The current draft of the EU’s AI Act would limit the use of facial recognition software, require companies creating large language models (LLMs) to disclose the data used to develop their software, and mandate that technology companies conduct risk assessments before their products can be used. The AI Act is in what may be the final stage of negotiations this month. It has the potential to shape global AI use, much as the EU’s laws and regulations on e-commerce, on disinformation and hate speech on social media, and on citizens’ right to privacy and the use of their data by technology companies are shaping international governance discussions.

Opportunities for Peacebuilders

Peacebuilding organizations need to be creative in how they use technology. The United States Institute of Peace is forming an AI Working Group of peacebuilders from across the world to understand the implications of AI for peace and security and to develop actionable guides, studies and other tools for practitioners. We welcome peacebuilders, technology companies, research organizations and nonprofits joining us in advancing AI for peace while mitigating its risks for conflict.

Overall, peacebuilders and peacebuilding organizations need to do more to engage with and use technology where applicable in their work, or they risk falling behind in their ability to address the evolving terrain of conflict. Despite the adverse uses of technology by governments, nonstate actors and companies, now is not the time for despair, or for a belief that the global fascination with and use of AI will die down rather than accelerate.

AI use will grow across sectors, and without stronger national and international governance structures to manage its use and development, its applications currently know few limits. AI does and will continue to touch every segment of the global system, regardless of a country’s technological and digital development. From finance to weapons systems, and from social media to health care, the range of AI applications will expand. More needs to be done to expand AI training and access to the technology, and to mitigate its harms across the world. These topics should be central to international discussions within the U.N., to the EU’s deliberations given the global implications of its regulations on technology and data, and to national debates in countries with the resources to develop frontier models, such as those in North America and the Gulf.

Peacebuilding organizations are uniquely positioned to combat MDM, especially technology-facilitated gender-based violence. Peacebuilders are active in communities across the world, with a deep understanding of the drivers of conflict, and often engage with civil society, governments and the private sector. As governments and multilateral institutions convene experts on AI to understand its evolving risks and challenges, peacebuilders should be included in those discussions to offer their perspective on the technology’s impact across societies and to mitigate harm that could further violence. Additionally, peacebuilders should work with local partners to develop datasets in under-resourced languages that do not undermine individuals’ privacy; such datasets could help social media and technology companies improve AI content moderation and foundational AI models. With data from a variety of countries and languages, we can move toward reducing biases and discrimination in the algorithms powering this technology and communications platforms.
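
To give one illustrative sense of what privacy-preserving dataset work could involve, the minimal sketch below packages community-labeled text for sharing with platform moderation teams after scrubbing identifying details. The file name, label scheme and scrubbing rules are assumptions for illustration only; genuine anonymization requires far more care than a few patterns.

    # A minimal sketch (Python) of packaging community-labeled text in an
    # under-resourced language for sharing with moderation teams. The label
    # values, file name and scrubbing rules are illustrative assumptions,
    # not a vetted anonymization standard.
    import csv
    import re

    def scrub(text: str) -> str:
        # Strip handles, URLs and phone-like numbers so posts cannot be
        # traced back to individual activists.
        text = re.sub(r"@\w+", "@user", text)
        text = re.sub(r"https?://\S+", "<url>", text)
        return re.sub(r"\+?\d[\d\s\-]{7,}\d", "<phone>", text)

    # (text in the local language, label from trained community reviewers)
    examples = [
        ("example post reviewers flagged as incitement", "incendiary"),
        ("example of ordinary political speech", "benign"),
    ]

    with open("moderation_dataset.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "label"])  # simple schema a training pipeline can ingest
        for text, label in examples:
            writer.writerow([scrub(text), label])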

AI could be a tool for peace processes as well. Non-weaponized autonomous drones could play a role in monitoring lines of contact and cease-fire violations, reducing the harm to peacekeepers and units on the ground. Previous uses of drone technology to monitor cease-fires collected tremendous amounts of data; AI, combined with satellite imagery, can help monitoring missions comb through those images for violence or potential violations. The same approach could be useful for observing the disarmament of combatants and for identifying war crimes, combined with on-the-ground data gathering when possible.
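
As a hedged illustration of the kind of triage such a mission might automate, the sketch below compares two co-registered aerial frames of the same sector and flags the pair for human review when enough pixels change. The file names and both thresholds are assumptions; fielded monitoring systems are far more sophisticated, and flagged frames would always go to an analyst, not trigger action on their own.

    # A minimal sketch of change detection between two co-registered frames
    # from a monitoring mission. File names and thresholds are illustrative
    # assumptions, not parameters of any deployed system.
    import numpy as np
    from PIL import Image, ImageChops

    before = Image.open("sector7_week1.png").convert("L")  # grayscale
    after = Image.open("sector7_week2.png").convert("L")

    # Per-pixel absolute brightness difference, scaled to [0, 1].
    diff = np.asarray(ImageChops.difference(before, after), dtype=np.float32) / 255.0

    # Fraction of the frame whose brightness shifted by more than 25%.
    changed = float((diff > 0.25).mean())
    if changed > 0.02:  # more than 2% of the frame changed
        print(f"Flagging sector 7 for analyst review: {changed:.1%} of pixels changed")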

As part of peace process negotiations, community compacts or dialogues, AI can play a role in monitoring social media for hate speech and incendiary language about marginalized groups, women and the peace process itself, language that could undermine a just and inclusive settlement. Taken a step further, that data can feed into the negotiations, compacts or dialogues themselves, helping the parties commit to reducing online attacks and hate speech by pointing to concrete examples of incendiary online language. Peacebuilders can pioneer new agreements that focus on reducing online attacks by combatants and other groups that could spoil sustainable peace and reopen wounds from mass atrocities during and after reconciliation efforts. And the wealth of historical data on peace agreements, community compacts and dialogues could be used to develop AI that assists in discussions by highlighting patterns and trends in conflict dynamics that previously required significant human resources to identify.
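
To make the monitoring idea concrete, the sketch below scores public posts with an off-the-shelf text classifier and tallies flagged posts per week, producing the kind of trend line negotiators could reference at the table. This assumes a suitable classifier exists for the relevant language; the model identifier is a placeholder, and the "hate" label and 0.8 cutoff are model-specific guesses rather than established settings.

    # A minimal sketch of turning classifier output into a weekly hate-speech
    # trend for dialogue participants. The model identifier is a placeholder
    # assumption; a real effort would use a model evaluated on the community's
    # language, and the label set and cutoff vary by model.
    from collections import Counter
    from transformers import pipeline

    classifier = pipeline("text-classification", model="example-org/multilingual-hate-speech")

    posts = [
        {"week": "2024-W01", "text": "a public post collected for monitoring"},
        {"week": "2024-W02", "text": "another public post"},
    ]

    flags = Counter()
    for post in posts:
        result = classifier(post["text"])[0]  # {"label": ..., "score": ...}
        if result["label"] == "hate" and result["score"] > 0.8:
            flags[post["week"]] += 1

    for week, count in sorted(flags.items()):
        print(week, count)  # a trend line the parties can reference in talks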

Of course, technology is not a magic solution to achieve peace, but it does offer options as conflict environments and AI evolve.  

