
Monitoring, Evaluation and Learning (MEL) for Complex Programs in Complex Contexts: Three Facility Case Studies (Part Two)

June 20, 2019

Lessons from MEL in Facilities
In our previous blog post, we discussed the challenges of performing monitoring, evaluation and learning (MEL) on aggregations of complex, multi-sector aid investments, or ‘Facilities’ as they are known in an Australian context. In this post, we explore the findings of our evaluation of three Department of Foreign Affairs and Trade (DFAT)-funded, Abt-managed facilities. Our research identified seven areas where lessons emerged, or where deviation from more ‘traditional’ MEL frameworks (MELFs) was required:

  1. The Facility’s intent and strategic plan to guide development of a MELF should be clarified as early as possible (i.e., at the design stage) – even if it is only a best guess. This means detailing how change actually happens in the country the Facility is operating in (i.e., creating a Theory of Change), not just describing what the program’s activities will do (i.e., a Theory of Action). For all three Facilities, strategic clarity took time to emerge but, once in place, has been critical in allowing MELFs to be developed.
  2. The demands that multi-sector investments place on MELFs are greater than current donor guidance addresses. In the cases we explored, MELFs were required to serve multiple, often competing functions, including public and performance accountability, public diplomacy and communication, and evaluation and internal learning. KOMPAK overcomes this challenge by using different datasets and processes to provide MEL for different audiences (e.g., reporting on a simple set of indicators for the donor while also running quarterly monitoring, learning and reflection processes for program teams).
  3. Telling a contribution story is hard. Explaining the Facility’s achievements, without simply aggregating results up from one level of the project frame to the next, is challenging. However, there are good examples – such as using independent approaches to evaluation (e.g., the Quality and Technical Assurance Group in Papua New Guinea (PNG)) or setting qualitative or process-level indicators (e.g., for collaboration and responsiveness) coupled with review and reflection sessions.
  4. Defining indicators capable of describing how outputs from specific investments led to Facility outcomes, and how these contributed to the Facility goal, has tested the limits of conventional MELFs. We found that the higher up the hierarchy of the program logic, the harder it was to define, measure and understand change. This is because Facilities do not easily aggregate results from one level of the project frame to the next (e.g., from input to output to outcome to goal). Given the number of factors outside a Facility’s control, it is very hard to attribute a single project’s work to the overall Facility goal. Furthermore, because a Facility combines a range of different projects under one operating platform – often for efficiency – it may be difficult to map some projects’ contributions to higher-order development outcomes. The PNG program is addressing this through an annual review of how governance has changed in PNG (the ‘Governance Update’) and by tracking process-level indicators, which describe how well the Facility model itself is functioning. The Governance Update summarises the state of governance in PNG, against which each project maps its results and assesses its contribution to development change, while the process indicators track how well the Facility model itself is working.
  5. In most donor monitoring systems, setting baselines is considered best practice, but the challenge of creating them—and thus describing how the Facility has changed the development status of the country or sector—has varied according to the availability, accuracy and sophistication of secondary data sources in each country. The Facilities have overcome this by using mixed-methods approaches. Specifically, they set quantitative baselines for particular locations or projects where hard data is available, and rely on qualitative assessments for other parts of the Facility. These segments adopt Adaptive Management and Problem Driven Iterative Adaptation approaches to programming (and thus change their activities, outputs and outcomes frequently, making it very hard to set a single baseline at the start of the program).
  6. ‘Learning by doing’ lies at the heart of the thinking and working politically and adaptive management agendas, yet it was one of the most challenging aspects of each Facility’s MELF. In our paper, we identify a need to a) develop systems that embed learning in the programming process (such as PNG’s use of Strategy Testing), b) overcome incentives to focus on monitoring for output reporting only, and c) integrate design, implementation and monitoring by undertaking these tasks simultaneously.
  7. Finally, there is a need to rethink the way MEL is resourced. In the case studies we explored, identifying staff who can apply MEL to projects that work in adaptive and politically informed ways was challenging. We argue that quarantining budgets for MEL activities and recruiting staff for non-technical traits (e.g., their capacity to promote learning, curiosity and reflection amongst teams) could address this.

Each Facility has had varying levels of success in designing and implementing facility MELFs. The critical lesson is that it is difficult to describe, summarise and plan an (often experimental) portfolio of investments using a linear-change/project framework approach. In the case-study research summarised here, the rigid use of linear-change models was an impediment to applying MEL effectively to Facilities. Although the project framework serves important accountability purposes, it has sometimes been mistaken for a hard performance benchmark, thus working actively against more flexible and adaptive forms of program management.

If the international community is serious about transforming how complex programs and complex change are measured, then we think the place to start is not MEL methods, but the linear-change model that underpins the logic of the project framework itself.

Read more about the challenges of MEL in facilities in part one of this blog.
