Evaluation method for the NAPA Priority Project 1: Promoting Drought/Crop Insurance Programme in Ethiopia

Introduction:
A previous article on Ethiopian climate change adaptation policies showed that the Government followed a new approach called the NAPA (National Adaptation Programme of Action). This approach took into account existing coping strategies at the grassroots level and provided a preparation process for identifying priority activities. Through this process, 11 priority projects were identified, with an estimated total cost of 770 million USD.
Each of these projects was designed to be implemented on its own, the NAPA being a tool for periodic monitoring and evaluation procedures and processes that would feed into further adaptation activities and projects. That is why this report concentrates on one priority project and presents a recommendation on how its results should be evaluated.

The Priority Project:

The NAPA priority project for which evaluation options are to be recommended is the one regarding the promotion of a drought/crop insurance programme in Ethiopia. The rationale for this project is the need to minimize the shocks induced by droughts. Drought is the single most important climate-related natural hazard, affecting the country recurrently. In such a situation, poor farmers face highly uncertain risks and, as a consequence, they do not have access to credit. For this reason, the belief is that insurance, as a risk management tool, will help protect the livelihoods of Ethiopian farmers vulnerable to recurrent drought risk. It will also help demonstrate the feasibility of establishing contingency funding for an effective aid response in drought years.

In support of these objectives, the interventions are designed to achieve one outcome: enhanced coping mechanisms and adaptive capacity to drought impacts. To achieve it, the following outputs are envisaged: (1) drought indices and an insurance design developed; (2) an increased number of farmers insured against drought; (3) capacity building and training of key actors; and (4) studies, research and assessments of various aspects of weather/drought insurance.
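To make output (1) more concrete, the sketch below illustrates, in Python, how a simple rainfall-index insurance payout could be computed. It is only an illustration of the general idea behind index insurance; the threshold values, the indemnity scale and the function name are hypothetical assumptions and do not come from the NAPA project documents.

```python
def index_payout(rainfall_mm, trigger_mm=300.0, exit_mm=100.0, sum_insured=1000.0):
    """Illustrative weather-index payout: nothing above the trigger level,
    the full sum insured at or below the exit level, linear in between.
    All thresholds are hypothetical, not taken from the Ethiopian pilot."""
    if rainfall_mm >= trigger_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return sum_insured
    # Linear interpolation between trigger (no payout) and exit (full payout)
    shortfall = (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)
    return round(shortfall * sum_insured, 2)

# Example: a season with 220 mm of rainfall against a 300 mm trigger
print(index_payout(220.0))  # -> 400.0
```

The appeal of such a design is that payouts depend only on the observed index, not on individual loss assessments, which is what makes index insurance attractive where individual loss adjustment would be too costly.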

That said, it should be remembered that NAPAs are designed to take existing coping strategies into account, and this is the case for this project. Ethiopia's first index-insurance pilot was implemented in 2006 through a partnership between the World Food Programme (WFP) and the Government. Its objective was to insure against the risk of catastrophic drought on the international financial market. Although it was stopped in 2007 due to lack of donor support, it is the basis on which part of the NAPA was developed.

Returning to the project at hand, it should be noted that it is to be co-financed and co-implemented by a GEF (Global Environment Facility) Agency. As a reminder, the GEF was requested by the UNFCCC (United Nations Framework Convention on Climate Change) to operate the funds financing climate change adaptation policies for the Least Developed Countries.

Guidance and limits were provided to the GEF for financing the implementation of NAPA projects. This gave it the opportunity to put in place an evaluation office and to create evaluation standards to be followed. The rationale is to ensure that GEF programmes and projects are monitored and evaluated on a regular basis, and that sufficient flexibility is maintained to respond to changing circumstances and to experience gained from monitoring and evaluation (M&E) activities (GEF, 2006).

Evaluation analysis:

Firstly, it is very important to understand what is meant by evaluation in this context.

“Evaluation is a systematic and impartial assessment of an activity, project, program, strategy, policy, sector, focal area, or other topics. It aims at determining the relevance, impacts, effectiveness, efficiency, and sustainability of the interventions and contributions of the involved partners. An evaluation should provide evidence-based information that is credible, reliable, and useful, enabling the timely incorporation of findings, recommendations, and lessons into the decision-making processes” (DAC, 2007; GEF, 2006).

As the definition indicates, an evaluation is a systematic assessment of what works and what does not. This leads us to the purpose of the approach itself. According to the DAC (Development Assistance Committee), the purpose and intended use of the evaluation must be stated clearly and should address why the evaluation is being undertaken at this particular point in time, and why and for whom it is undertaken.

Because the project is co-financed and co-implemented by the GEF Agency and other donors such as the World Food Programme (WFP), there is a requirement for a feedback system on project performance via monitoring and evaluation (M&E). This evaluation feedback is to be used as a learning tool to improve future aid policy and interventions (Ministry of Foreign Affairs of Finland, 2006). In other words, the specific objectives of the evaluation are to look at the impact and sustainability of results, including the contribution to capacity development.

This report, however, is a first step, recommending how the government should evaluate the results of the project. This answers the second question, regarding why and for whom the evaluation is undertaken. More precisely, this report will be used by the project steering committee, composed of representatives of the stakeholders (government agencies, the GEF Agency, WFP, etc.). After that, the evaluation of the project itself will be carried out by independent technical experts (Abebe Tadege, 2007).

Evaluation methodology:

Now that the purpose and objectives of the evaluation are clearly defined, the next step is to present the right method of evaluation. There is a wide range of approaches available, which is why there is no “one size fits all” evaluation template to apply to the variety of questions. Different types of evaluations are appropriate for answering different kinds of questions (Rist and Zall Kusek, 2004). So, in order to select the right method, one has to begin by asking the right questions. This is supported by the DAC when it states that:

“Evaluation questions are decided early on in the process and inform the development of the methodology” (DAC, 2007).

The questions to be asked, however, must reflect the objectives of the evaluation. These questions are:

– Was the project effective in achieving the desired objectives? If so, are these achievements sustainable over time?

– Has the project contributed to outcomes that enhance coping mechanisms and adaptive capacity to drought impacts?

These questions point towards the corresponding method of evaluation. There are, however, different approaches available, each responding to a specific set of objectives. The most appropriate ones for the objectives cited above are examined hereafter.

– Impact evaluation: also referred to as summative evaluation, it attempts to find out what changes occurred and to what they can be attributed (Rist and Zall Kusek, 2004). In other words, it poses questions such as: what impact, if any, does a policy, programme or some other type of government intervention have in terms of specific outcomes for different groups of people (HM Treasury, 2004)?

This type of evaluation is difficult, especially as it comes after the end of the intervention. If the intervention and the attempt to attribute change are far apart in time, there is a greater risk that the intended outcome has been affected by other factors, such as changes in the economic or social context or in the motivation of the population (Morse and Struyk, 2006; Rist and Zall Kusek, 2004).

The difference between outcome and impact is crucial here. An outcome is the observed status or result, which may or may not be due to the programme; an impact, however, is clearly attributable to the programme. This makes the use of this method very complex (Morse and Struyk, 2006).

This issue of attribution can, however, be addressed by asking what is called “the counterfactual question”: what would have happened if the intervention had not taken place? It is not an easier question, but there are strategies for answering it, using both experimental and quasi-experimental designs. The trick is to identify the impact by contrasting the outcomes of an experimental group, for which the project is conducted, with those of a control group, for which it is not. For these reasons, the advice is to plan for the impact evaluation before the intervention even begins (Rist and Zall Kusek, 2004).
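As a rough illustration of how this counterfactual logic translates into an estimate, the sketch below computes a simple difference-in-differences between an insured group and a comparable uninsured group. The figures, variable names and the choice of indicator (mean household income) are purely hypothetical assumptions for the example, not data from the Ethiopian programme.

```python
# Difference-in-differences sketch: compare the change in an outcome for a
# treatment group (insured farmers) with the change for a control group
# (comparable uninsured farmers). All numbers are invented for illustration.

baseline = {"treatment": 520.0, "control": 515.0}   # mean household income before
endline  = {"treatment": 610.0, "control": 560.0}   # mean household income after

def diff_in_diff(baseline, endline):
    """Impact estimate = (change in treatment group) - (change in control group)."""
    change_treatment = endline["treatment"] - baseline["treatment"]
    change_control = endline["control"] - baseline["control"]
    return change_treatment - change_control

print(diff_in_diff(baseline, endline))  # 90 - 45 = 45.0
```

In a real impact evaluation the comparison would rest on survey data and proper statistical inference; the point here is only that a credible control group must be identified before the intervention starts, which is why the evaluation should be planned at the design stage.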

– Process evaluation: also called implementation evaluation, it focuses on implementation details. Although it is largely a systematic documentation of key aspects of programme performance, indicating whether the programme is functioning as intended or according to some appropriate standard, its added value is that the process is not just documented; unanticipated outcomes are studied as well.

More importantly, it gives programme managers the possibility to understand why the implementation effort is or is not on track, allowing policy changes or programme performance improvements (Rist and Zall Kusek, 2004). This is made possible by examining programme implementation in the domain of service utilization or in the domain of programme organization and operations. Evaluations of service utilization are concerned with the number of persons receiving services, whether those receiving the services are the intended population, and whether the target population is aware of the programme. Evaluations of organization and operations address whether the programme is being appropriately and effectively implemented (Morse and Struyk, 2006).
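The service-utilization side of a process evaluation largely comes down to a few coverage indicators. The short sketch below shows one way such indicators might be computed; the figures and variable names are hypothetical placeholders rather than data from the programme.

```python
# Hypothetical monitoring figures for a drought-insurance programme.
target_population = 50_000      # farmers the programme intends to reach
aware_of_programme = 30_000     # farmers who have heard of the insurance product
insured_total = 12_000          # farmers holding a policy
insured_in_target = 10_500      # policy holders who belong to the target group

coverage_rate = insured_total / target_population          # insured farmers relative to the target population
awareness_rate = aware_of_programme / target_population    # share of the target population aware of the programme
targeting_accuracy = insured_in_target / insured_total     # share of policies reaching the intended population

print(f"coverage: {coverage_rate:.0%}, awareness: {awareness_rate:.0%}, "
      f"targeting accuracy: {targeting_accuracy:.0%}")
# coverage: 24%, awareness: 60%, targeting accuracy: 88%
```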

Pros and cons in relation to the objectives:

As stated earlier, the specific objectives of the evaluation are to look at the impact and sustainability of results, including the contribution to capacity development. It was also noted that different types of evaluations are appropriate for answering different kinds of questions. That is why the pros and cons of the two methods above, in relation to these objectives, are presented hereafter.

– Impact evaluation:

Pros:
– asks the question of impact directly;
– addresses the counterfactual question through experimental and quasi-experimental designs.

Cons:
– it is difficult to attribute observed results to the intervention, i.e. to distinguish impacts from mere outcomes;
– it is difficult to carry out because it takes place after the end of the intervention.

– Process evaluation:

Pros:
– allows policy changes or programme performance improvements while the project is still running;
– evaluates service utilization as well as programme organization and operations;
– studies unanticipated outcomes.

Cons:
– the evaluation is undertaken during the project implementation process, before long-term results can be observed;
– it focuses on implementation details rather than on impacts or sustainability.

Recommendation:

In relation to the objectives of the evaluation, the analysis of these two methods has identified a number of pros and cons regarding their selection. To support my recommendation, I will use the results of the pilot drought insurance project mentioned above.

The results of that project showed that there was increasing awareness among both farmers and financial institutions of the role of index insurance. As a consequence, a number of positive effects occurred, including opening access to loans for smallholders. The pilot also allowed agricultural insurance in Ethiopia to connect to the international financial market. These findings relate to questions of impacts and long-term outcomes (Anderson et al., 2010).

Regarding the questions of sustainability and scalability, it was found that nothing will be achieved unless product development is locally owned and managed. In Ethiopia, a local insurance company, NISCO, has been a pioneer in this domain and found that a key to scaling up and sustaining index insurance is to involve intermediaries and unions in the development of the product. For this reason, the company has initiated discussions with farmers’ associations (Anderson et al., 2010).

These findings correspond to the questions asked by an impact evaluation. This method of evaluation, which is undertaken after the end of a project, is used when questions relating to impacts, long-term outcomes and the sustainability of a project are asked. For this reason, I strongly recommend the use of this method of evaluation for this priority project.

Practical issues:

Two issues arise following this recommendation: the cost of evaluation and the method of data collection.

Regarding the cost of the evaluation, no detailed information is available yet. However, there is an informative indication in the priority project that helps estimate the cost: the requirement that independent technical experts conduct the project evaluation. As a reminder, the most important cost item in an evaluation budget is the consultants’ fee (Molund and Schill, 2004).

To control the costs, it is necessary to set a budget limit. This can be done by estimating the total number of person-weeks suitable for the assignment. This will not seriously undermine cost competition, because bids are submitted in terms of weekly fees and the number of working weeks.

Nevertheless, estimating the time necessary for an evaluation is generally an arbitrary exercise; the advice is to make sure that the budget limit covers all reimbursable costs, such as travel, hotel and other expenses, as well as fees (Molund and Schill, 2004).
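A minimal sketch of this budgeting logic is given below. The number of person-weeks, the weekly fee and the reimbursable amounts are invented assumptions, used only to show how a budget ceiling could be built up from the cost components mentioned by Molund and Schill.

```python
def evaluation_budget(person_weeks, weekly_fee, reimbursables):
    """Budget ceiling = consultant fees plus all reimbursable costs."""
    fees = person_weeks * weekly_fee
    return fees + sum(reimbursables.values())

# Hypothetical figures for two independent technical experts over six weeks each.
budget = evaluation_budget(
    person_weeks=12,                 # 2 experts x 6 weeks
    weekly_fee=2_500,                # USD per person-week
    reimbursables={"travel": 6_000, "hotel": 4_200, "other": 1_500},
)
print(budget)  # 12 * 2500 + 11700 = 41700
```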

Concerning the data collection method, although the availability of the necessary information is a key issue, the method must be correlated with the objectives of the evaluation itself. In this case, the objective is to find the impacts of the priority project, and the corresponding method is a baseline survey, as it provides benchmarks against which impacts can be identified.
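The role of the baseline survey as a benchmark can be pictured as in the following sketch, which simply compares indicator values collected before and after the project. The indicators and values are hypothetical examples, not survey results.

```python
# Baseline (before the project) and endline (after the project) values
# for a few illustrative indicators; all figures are invented.
baseline = {"farmers_insured": 1_200, "loans_to_smallholders": 300, "households_food_secure_pct": 54.0}
endline  = {"farmers_insured": 8_500, "loans_to_smallholders": 1_100, "households_food_secure_pct": 61.0}

for indicator, before in baseline.items():
    after = endline[indicator]
    print(f"{indicator}: {before} -> {after} (change: {after - before:+})")
```

On its own, such a before/after comparison only describes change; attributing that change to the project still requires the counterfactual reasoning discussed above.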

Collecting existing documentation is therefore crucial to the method. It is a starting point and remains preliminary deskwork intended to gather useful information from evaluation reports on related subjects, progress reports from other donors’ related interventions, data from official sources and research papers.

Once this is done, fieldwork can start. This may yield qualitative information from interviews conducted with stakeholders or quantitative information obtained through statistics. Thus, the way field information is obtained will depend on the evaluation questions.

Conclusion:

It appears that a good and successful evaluation depends on clear project objectives. If these are precisely defined, the evaluators will be able to describe the purpose of the evaluation and translate the project objectives into evaluation questions. Once this is done, the most appropriate method of evaluation can be selected according to the questions asked. Inherent in the chosen approach will be some practical issues that must be clarified.

Bibliography:

1)      Abebe Tadege, 2007. “Climate change national adaptation programme of action (NAPA) of Ethiopia” [Online], Addis Ababa, National Meteorological Agency, available from: http://unfccc.int/national_reports/items/1408.php [Accessed 19 June 2010].

2)      African Development Bank, 1996. “Guidelines on methodologies for evaluation” [Online], African Development Bank, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 17 July 2010].

3)      Anderson, J. et al., 2010. “The Potential for Scale and Sustainability in Weather Index Insurance for Agriculture and Rural Livelihoods” [Online], Rome, WFP, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 09 July 2010].

4)      Asian Development Bank, 2006. “Guidelines for the Preparation of Country Assistance Program Evaluation Reports” [Online], Asian Development Bank, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 15 July 2010].

5)      DAC, 2007. “Quality Standards for Development Evaluation” [Online], OECD, available from http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 12 July 2010].

6)      GEF, 2006. “The GEF Monitoring and Evaluation Policy” [Online], Washington, D.C., GEF, available from: http://www.thegef.org/gef/ [Accessed 09 July 2010].

7)      Ministry of Foreign Affairs of Denmark, 2006. “Evaluation Guidelines” [Online], Copenhagen, Ministry of Foreign Affairs of Denmark, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 12 July 2010].

8)      Ministry of Foreign Affairs of Finland, 2006. “Guidelines for Programme Design, Monitoring and Evaluation” [Online], Helsinki, Ministry of Foreign Affairs of Finland, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 12 July 2010].

9)      Ministry of Foreign Affairs of Japan, 2003. “ODA Evaluation Guidelines” [Online], Tokyo, Ministry of Foreign Affairs of Japan, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 16 July 2010].

10)  Molund, Stefan and Schill, Goran, 2004. “Looking Back, Moving Forward. Sida Evaluation Manual” [Online], Stockholm, Sida, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 15 July 2010].

11)  Rist, Ray C. and Zall Kusek, Jody, 2004. “Ten Steps to a Results-Based Monitoring and Evaluation System” [Online], Washington, D.C., The World Bank, available from: http://www.thegef.org/gef/ [Accessed 19 July 2010].

12)  UNDP-GEF, 2006. “Coping with Drought and Climate Change” [Online], Addis Ababa, UNDP-GEF Agency, available from: http://www.thegef.org/gef/ [Accessed 09 July 2010].

13)  UNDP, 2002. “Handbook on Monitoring and Evaluating for Results” [Online], New York, UNDP, available from: http://www.oecd.org/document/11/0,3343,en_35038640_35039563_35126667_1_1_1_1,00.html [Accessed 16 July 2010].

 

 
