Gauging What Works and What Doesn’t in Peacebuilding
Reflections on Monitoring and Evaluation from a Departing “M&E Guy”
Those of us who work in peacebuilding are constantly reminded that the challenges we confront are big and the resources we command are small. So there is both a practical and an ethical obligation to use those resources wisely and be certain of their value. Toward that end, a little over four years ago, USIP asked me to become the organization’s first director of learning and evaluation. At its core, my job description was simple: help the Institute use evidence to do more of what works and less of what doesn’t.
As my time at USIP wraps up and I move to a new position as the executive director of the Joan B. Kroc Institute for Peace and Justice at the University of San Diego, I wanted to take the opportunity to reflect on what I got right, what I got wrong and the work left to do. Recently, The Ford Foundation posted a job description for a director of strategy, learning and evaluation. Every day I see more job postings like this, illustrating the growth of this field. So I hope these quick reflections from a soon-to-be-former “M&E guy” will help all those moving into these new positions as well as those already working on the challenge I took up four years ago.
Three Things I Got Right
- A Focus on Organizational Change: Improving project design, monitoring, evaluation and learning fundamentally requires deep organizational change; it is not just a technical challenge. For instance, one of the first things we did was to develop a template for a monitoring-and-evaluation framework. But very few people used it. We soon realized that providing this type of technical support does not work unless you address the more fundamental issues. For example, if we provide resources, are there accountability mechanisms in place to ensure they are used? Focusing on deeper organizational issues has meant at times that the progress we’ve made has felt slow, but that progress has been sustained. In contrast, some organizations try to quickly make comprehensive changes in the way they monitor and evaluate programs. These efforts often collapse under their own weight, leaving the organization worse off than when they started.
- Engaging Funders: There is a tendency among implementers to blame challenges in improving monitoring and evaluation on donors, saying they are too rigid in their requirements and too intolerant of failure to allow flexible, creative, effective monitoring and evaluation (M&E). From the beginning at USIP, we committed to the strategy of engaging funding partners such as government agencies on M&E issues. Funders have seen this as a positive, and that has proved crucial in creating M&E strategies that meet everyone’s needs while allowing for the kind of honest reporting on programs that leads to true learning.
- More M, less E: From the beginning, we focused more on monitoring than evaluation, for two reasons. First, all evaluation efforts require rigorous project monitoring, which in turn requires a solid, well-thought-out project design. Second, since project teams are responsible for designing and monitoring their programs, they also become responsible for their own M&E. If we had just done evaluations at the end of a project, the originating teams could more easily have dismissed the task as someone else’s responsibility, meaning either my team or the outside evaluation consultants.
Three Things I Wish I Had Known
- Prioritize Early: As the first director of learning and evaluation at USIP, I was acutely aware that I needed to demonstrate my worth to my colleagues in programs by helping them solve their problems. A key success metric my team and I always used was demand for our services. But this made it very difficult to say no. And not saying no led to being spread too thin, and to always being reactive in the face of requests. I wish I had worked harder from the beginning to prioritize and clearly communicate a set of strategic priorities to the organization. Something along the lines of: “These are what we’ve determined to be the most strategic areas of focus for improving M&E at USIP, these are the things we can help you with, and these are the kinds of things we won’t do.”
- The Last Mile Problem: We underestimated the importance and difficulty of gathering data at the local level—for USIP, that often means in conflict zones. Again and again, we saw how strong project designs and monitoring strategies were undermined by the challenges of gathering credible, rigorous data, in an ongoing, cost-effective way. While we have recently prioritized solving what I now call the “last mile problem,” we should have focused on this earlier as part of all our project monitoring initiatives.
- Are We Doing it Well vs. What Should We Do? I always considered it important to be able to answer two kinds of questions. First, are USIP programs having an impact? Second, what kind of programs should we be implementing? At the beginning, I assumed that both of these questions could be answered with similar strategies. It turns out, however, that the “what should we do” question is orders of magnitude harder and requires different strategies to answer. Project-level evaluation is important, but not sufficient to answer the question of whether, for instance, you should invest more in community dialogue programs or security sector reform. Moreover, the challenge of organizational change is different for the “what should we do” question as well. For example, a specialist in community dialogue is almost always willing to work to make their programming better. That specialist responds very differently if you are making the argument that the organization should be doing less community dialogue, an argument that threatens their identity and their livelihood.
Three Things Left to Do
- Leveraging Data: I have often said that peacebuilding is a data-scarce endeavor. Collecting data in dangerous, politicized conflict environments is always difficult. But as the result of USIP’s efforts on monitoring and evaluation, there is a lot more data flowing throughout the organization. The next challenge is to do a better job of aggregating, sharing, presenting and leveraging that information throughout the Institute. USIP has recently renewed its commitment to confronting these knowledge-management challenges, but for a large, complex, mature organization like USIP, the task is somewhat daunting and will take time.
- Adaptive Programming: There is a groundswell of discussion within the peacebuilding community—and in the development field more broadly—about how to make programming more flexible and more adaptive. The goal is to ensure that programs can learn as they go in order to respond to the complex, rapidly changing environments in which we work. To date, however, there has been more rhetoric about these approaches than actual changes in the way programs are implemented. The next challenge is to build the systems and processes that truly support flexible, adaptive, iterative programming.
- A Stronger Theory of Change: M&E can only tell you one piece of the story. An evaluation, for instance, can tell you if trust was built between groups. Only a broader theory of change about how peace can be built—and then testing it—can tell you if that increased trust will have an impact on larger peace and conflict dynamics. That requires combining M&E and applied research to provide a firmer evidence base for that theory of change. This will enable USIP to make stronger claims about larger impact, e.g., we built trust, and trust matters for broader, long-term peace.
One final thought, looking a bit farther back than four years. When I first started working on monitoring and evaluation in the peacebuilding field, perhaps 10 years ago, there was still a significant debate underway regarding whether monitoring and evaluation had any relevance for the peacebuilding field at all. Some people argued that peacebuilding is too complex, too non-linear, too much art rather than science to be rigorously monitored and evaluated.
This argument is now over. The question is no longer “if” we should evaluate, but “how.” How do we evaluate in ways that acknowledge the complexities of peacebuilding while holding ourselves accountable for producing results? How do we conduct assessments in a cost-effective way that provides a positive return on investment? How do we evaluate in ways that create continuous learning and improvement in our programs? How do we hold ourselves accountable not just to funders, but to our partners and the communities in which we work? Although I am leaving USIP, I will continue working on these questions, along with my amazing, soon-to-be-former colleagues and the rest of the peacebuilding community.
Andrew Blum is USIP’s outgoing vice president for planning, learning and evaluation.