Evaluating the work that peacebuilders do, and the impact it has, can seem a bit wonky. But it's critically, and increasingly, important. Showing where an organization's work has had an effect has always mattered, and it matters even more now, in an age of heightened funding scrutiny. Many in the peacebuilding and aid community are looking for success stories: examples of what works and what doesn't, as well as effective approaches to monitoring and evaluation that could be replicated.
A report, "Proof of Concept – Learning from Nine Examples of Peacebuilding Evaluation," co-authored by USIP's Andy Blum, takes a look. Blum and his co-author, Melanie Kawano-Chiu, write in the introduction:
In a field that specializes in dialogue, consensus building and finding solutions to complex challenges, surprisingly absent is an open exchange on a fundamental element of peacebuilding: the determination of whether or not an intervention has 'worked.' Yet the reality is that the power dynamics inherent in evaluation make a lack of dialogue among funders and implementers understandable.
Moreover, ever-increasing pressures to demonstrate effectiveness, consistently constrained budgets, and shifting and occasionally ill-defined standards of what counts as credible evidence are creating significant challenges for even the most sophisticated organizations in regard to monitoring and evaluating their initiatives.
In the midst of these dynamics, peacebuilding organizations are nonetheless identifying creative ways to address evaluation challenges in conflict-affected contexts. To learn from these efforts, the United States Institute of Peace (USIP) and the Alliance for Peacebuilding (AfP) convened the first Peacebuilding Evaluation: Evidence Summit in December 2011. The idea for the Summit originated from discussions during the first year, 2010-2011, of the Peacebuilding Evaluation Project: A Forum for Donors and Implementers (PEP). With the support of USIP, the PEP set out to address some of the challenges organizations face in evaluating their peacebuilding programs and projects in conflict-affected settings.

In the first twelve months, PEP enabled honest and transparent conversations about the power dynamics inherent in evaluation; triggered new partnerships; increased consensus on potential solutions to challenges in evaluation and impact assessment, particularly in peacebuilding contexts; published a USIP Special Report, "Improving Peacebuilding Evaluation," and an AfP Lessons Report, "Starting on the Same Page"; and sparked new projects to improve the practice of evaluation, including the Evidence Summit and the Women's Empowerment Demonstration Project.
And they conclude:
There is little doubt that the pressure will continue to mount on organizations to demonstrate impact and to learn from and improve their peacebuilding practice. In response, across the peacebuilding field, donors, implementers, researchers, and evaluation specialists are finding innovative, scalable, and conflict-appropriate ways to demonstrate impact and learn from programming successes and failures at the project, program, and country levels. It is unlikely that more advanced evaluation methodologies alone will improve peacebuilding interventions, but as a community we can evolve our practice based on evidence and work to change the peacebuilding evaluation environment in a way that enables that evolution. The nine case studies below are a reflection of these efforts. Combined, these cases bring into sharp relief what is now possible in peacebuilding evaluation as well as the field-wide challenges that must be overcome if we are to continue to make progress.
Check out the whole report, hosted by the Alliance for Peacebuilding; the link includes the nine case studies the two put together.