To an interested observer, much seems to be asked of monitoring and evaluation (M&E) professionals: uphold the highest quality standards; use appropriate and robust methodologies; attribute change while ruling out confounding factors; be cost effective; and capture the theory of change.
This is difficult when programmes produce unanticipated effects, or consequences broader and more diffuse than those identified in the theory of change. It can also be challenging at an organisational level, particularly for smaller organisations whose capacity and resources are limited.
How, then, can organisations think creatively about impact? Here are three possible routes in a complex and resource-limited environment:
- A flexible, iterative approach to measuring impact. Rather than being fixed before the programme starts, evaluation criteria and metrics may be adjusted in response to what is happening on the ground. In a recent TED Talk, Melinda Gates urges NGOs to create a “continuous feedback loop” rather than relying entirely on post hoc evaluation. You can treat your theory of change as a work in progress, informed by an ever-expanding evidence base: any theory of change is a model, a framework built on the best available evidence. Programming is increasingly agile and adaptive, and an iterative approach can enhance learning accordingly.
- Thinking about impact in terms of how and why. Are large-scale quantitative or experimental designs, such as Randomised Controlled Trials (RCTs) or quasi-experimental studies, the only credible causal approaches? A broader range of methodologies may offer insights into causal processes. Qualitative and participatory methods can help us understand how and why programmes achieve or fall short of their stated objectives, going beyond establishing or quantifying impact. They may also reveal broader, perhaps unanticipated, effects. Rather than being inferior or supplementary to experimental data, mixed methods can build a richer causal picture. Integration is key: take a look at Michael Quinn Patton’s blog, where he describes qualitative and quantitative evaluations too often behaving like two-year-olds refusing to play with one another.
- Collating evidence to assess impact. Even if primary data collection is small-scale or otherwise limited, it still has value in the context of a broader evidence base. We must avoid generalisations and tenuous assumptions where interventions and contexts differ. However, the overall weight of evidence, particularly when informed by systematic reviews, provides strong grounds for making claims about impact. As DFID puts it, “individual studies, no matter how rigorous or scientific, are not a sufficient evidence base from which to make informed policy and practice decisions”.
The narrative of monitoring and evaluation is becoming increasingly important, with greater emphasis placed on effectiveness, efficiency and accountability. Looking to the future, data synthesis will become key, pulling together data from a range of sources, including open data and social media. New technologies and digital tools also provide opportunities to streamline and innovate. With greater attention paid to M&E than ever before, an ever-expanding evidence base, and new tools and methodologies on the horizon, we can embrace fresh opportunities to think creatively and constructively about impact.
Madeline Nightingale is an experienced social researcher who has worked across the public, private and non-profit sectors. She holds an MSc in Comparative Social Policy from the University of Oxford and is currently working towards her PhD in social policy at the Department of Social Policy and Intervention, focusing on working poverty in Europe.