Those of us who work in peacebuilding are constantly reminded that the challenges we confront are big and the resources we command are small. So there is both a practical and an ethical obligation to use those resources wisely and be certain of their value. Toward that end, a little over four years ago, USIP asked me to become the organization’s first director of learning and evaluation. At its core, my job description was simple: help the Institute use evidence to do more of what works and less of what doesn’t.

Passers-by take in wall art promoting peaceful elections in Afghanistan that was produced with support from USIP.


As my time at USIP wraps up and I move to a new position as the executive director of the Joan B. Kroc Institute for Peace and Justice at the University of San Diego, I wanted to take the opportunity to reflect on what I got right, what I got wrong and the work left to do. Recently, the Ford Foundation posted a job description for a director of strategy, learning and evaluation. Every day I see more job postings like this, illustrating the growth of this field. So I hope these quick reflections from a soon-to-be-former “M&E guy” will help all those moving into these new positions as well as those already working on the challenge I took up four years ago.

Three Things I Got Right

  1. A Focus on Organizational Change: Improving project design, monitoring, evaluation and learning fundamentally requires deep organizational change; it is not just a technical challenge. For instance, one of the first things we did was to develop a template for a monitoring-and-evaluation framework. But very few people used it. We soon realized that providing this type of technical support does not work unless you address the more fundamental issues. For instance, if we provide resources, are there accountability mechanisms in place to ensure they are used? Focusing on deeper organizational issues has meant at times that the progress we’ve made has felt slow, but that progress has been sustained. In contrast, some organizations try to make comprehensive changes to the way they monitor and evaluate programs all at once. These efforts often collapse under their own weight, leaving the organization worse off than when it started.
  2. Engaging Funders: There is a tendency among implementers to blame challenges in improving monitoring and evaluation on donors, saying they are too rigid in their requirements and too intolerant of failure to allow flexible, creative, effective monitoring and evaluation (M&E). From the beginning at USIP, we committed to the strategy of engaging funding partners such as government agencies on M&E issues. Funders have seen this as a positive, and that has proved crucial in creating M&E strategies that meet everyone’s needs while allowing for the kind of honest reporting on programs that leads to true learning.
  3. More M, Less E: From the beginning, we focused more on monitoring than evaluation, for two reasons. First, all evaluation efforts require rigorous project monitoring, which in turn requires a solid, well-thought-out project design. Second, since project teams are responsible for designing and monitoring their programs, they also become responsible for their own M&E. If we had just done evaluations at the end of a project, the originating teams could more easily have dismissed the task as someone else’s responsibility, whether my team’s or the outside evaluation consultants’.

Three Things I Wish I Had Known

  1. Prioritize Early: As the first director of learning and evaluation at USIP, I was acutely aware that I needed to demonstrate my worth to my colleagues in programs by helping them solve their problems. A key success metric my team and I always used was demand for our services. But that metric makes it very difficult to say no. And not saying no leads to being spread too thin, and to always being reactive in the face of requests. I wish I had worked harder from the beginning to establish and clearly communicate a set of strategic priorities to the organization, along the lines of: “These are what we’ve determined to be the most strategic areas of focus to improve M&E at USIP, these are the things we can help you with, and these are the kinds of things we won’t do.”
  2. The Last Mile Problem: We underestimated the importance and difficulty of gathering data at the local level—for USIP, that often means in conflict zones. Again and again, we saw how strong project designs and monitoring strategies were undermined by the challenges of gathering credible, rigorous data in an ongoing, cost-effective way. While we have recently prioritized solving what I now call the “last mile problem,” we should have focused on this earlier as part of all our project monitoring initiatives.
  3. Are We Doing It Well vs. What Should We Do? I always considered it important to be able to answer two kinds of questions. First, are USIP programs having an impact? Second, what kind of programs should we be implementing? At the beginning, I assumed that both of these questions could be answered with similar strategies. It turns out, however, that the “what should we do” question is orders of magnitude harder and requires different strategies to answer. Project-level evaluation is important, but not sufficient to answer the question of whether, for instance, you should invest more in community dialogue programs or security sector reform. The challenge of organizational change is also different for the “what should we do” question. For example, a specialist in community dialogue is almost always willing to work to make their programming better. That specialist responds very differently if you argue that the organization should be doing less community dialogue, an argument that threatens their identity and their livelihood.

Three Things Left to Do

  1. Leveraging Data: I have often said that peacebuilding is a data-scarce endeavor. Collecting data in dangerous, politicized conflict environments is always difficult. But as the result of USIP’s efforts on monitoring and evaluation, there is a lot more data flowing throughout the organization. The next challenge is to do a better job of aggregating, sharing, presenting and leveraging that information throughout the Institute. USIP has recently renewed its commitment to confronting these knowledge-management challenges, but for a large, complex, mature organization like USIP, the task is somewhat daunting and will take time.
  2. Adaptive Programming: There is a groundswell of discussion within the peacebuilding community—and in the development field more broadly—about how to make programming more flexible and more adaptive. The goal is to ensure that programs can learn as they go in order to respond to the complex, rapidly changing environments in which we work. To date, however, there has been more rhetoric about these approaches than actual changes in the way programs are implemented. The next challenge is to build the systems and processes that truly support flexible, adaptive, iterative programming.
  3. A Stronger Theory of Change: M&E can only tell you one piece of the story. An evaluation, for instance, can tell you if trust was built between groups. Only a broader theory of change about how peace can be built—and then testing it—can tell you if that increased trust will have an impact on larger peace and conflict dynamics. That requires combining M&E and applied research to provide a firmer evidence base for that theory of change. This will enable USIP to make stronger claims about larger impact, e.g., we built trust, and trust matters for broader, long-term peace.

One final thought, looking a bit farther back than four years. When I first started working on monitoring and evaluation in the peacebuilding field, perhaps 10 years ago, there was still a significant debate underway regarding whether monitoring and evaluation had any relevance for the peacebuilding field at all. Some people argued that peacebuilding is too complex, too non-linear, too much art rather than science to be rigorously monitored and evaluated.

This argument is now over. The question is no longer “if” we should evaluate, but “how.” How do we evaluate in ways that acknowledge the complexities of peacebuilding while holding ourselves accountable for producing results? How do we conduct assessments in a cost-effective way that provides a positive return on investment? How do we evaluate in ways that create continuous learning and improvement in our programs? How do we hold ourselves accountable not just to funders, but to our partners and the communities in which we work? Although I am leaving USIP, I will continue working on these questions, along with my amazing, soon-to-be-former colleagues and the rest of the peacebuilding community.

Andrew Blum is USIP’s outgoing vice president for planning, learning and evaluation.
