

2. Evaluation methods


2.1 What is "success"?
2.2 Evaluation methods used by some agencies
2.3 Methods used in this study

2.1 What is "success"?

Before attempting to evaluate the success of projects, the criteria used to measure success must be defined. There are a number of possibilities:

- Banks work in terms of money. For the World Bank, an economic rate of return (ERR) of up to 25 percent is the basis for setting up projects, and projects are usually considered successful if they achieve more than 10 or 12 percent (a simple rate-of-return check is sketched after this list).

- "The principle determinant of success or failure is the degree to which new practices are associated with visible increases in annual domestic income for the farmers" (Murray 1979). This is a rather narrow view, although the question of farmer benefit is emerging as a major factor in successful projects.

- A rather cynical suggestion is that the success of a project could be judged from the answer given by the project manager to the following question: "If this project were operating as a commercial venture, using funds coming from shareholders, would you invest your own personal money in it?"

- Probably the best yardstick, and the one used in this study, is to measure the extent to which the project achieved its objectives. However, this requires that the project should have defined its objectives and set up criteria for measuring their achievement. It is shown later that this situation is rare.
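To make the rate-of-return yardstick concrete, the sketch below computes an internal (economic) rate of return from a hypothetical stream of project costs and benefits and compares it with the 10 to 12 percent threshold mentioned above. The cash-flow figures and the simple bisection routine are illustrative assumptions only, not the appraisal method of any particular bank.

```python
# Illustrative only: hypothetical cash flows and a simple bisection IRR,
# not a World Bank appraisal procedure.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    """Internal (economic) rate of return found by bisection.
    Assumes the NPV is positive at `low` and negative at `high`."""
    while high - low > tol:
        mid = (low + high) / 2.0
        if npv(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2.0

# Hypothetical project: heavy early costs, benefits building up later.
flows = [-100, -40, 10, 30, 45, 55, 60, 60, 60, 60]  # years 0..9

rate = irr(flows)
print(f"Estimated ERR: {rate:.1%}")
print("Counts as successful by the 10-12% yardstick:", rate > 0.12)
```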

Followers of the British TV series "Yes Minister" or readers of the derived book may remember the occasion when a technical expert comes up with what the Minister thinks is an original and brilliant idea to control expenditure on projects. In that case it was local government expenditure, but the arguments apply equally to aid agencies because they have much in common - they are spending public money, they are concerned about spending it properly and being accountable, but they seldom manage to get it right.

The expert's idea was that all new proposed projects would be required to include a method of measuring the success or failure of the project. The proposal would have to include minimum thresholds in the form "The scheme will be a failure if it takes longer than this time... costs more than a given expenditure... employs more staff than this number... fails to meet the following preset performance standards". The official proposing the project would be responsible for applying these criteria.

The Minister thinks that this is a beautifully simple and effective way of managing expenditure on projects but is out-manoeuvred by his civil servants, one of whom reminds him that the practice of transferring officials every two or three years is done to "stop this personal responsibility nonsense" (Lynn and Jay 1984).

A related real-life anecdote came from a large agency where an official, commenting on the high failure rate of projects in a particular sector, said "Of course in this organisation you get credit for the number of projects you initiate, but before they are evaluated you have been moved to another desk so there is little risk of being associated with failed projects".

Whether evaluation reports provide a true picture has been questioned. Two major reviews suggest that there is likely to be a negative bias, because "evaluation reports naturally tend to focus on what went wrong, because the object is to profit from experience" (ODA 1983). Another comment is "Evaluations tend to focus on projects which have had or are encountering some form of difficulty. Thus the evaluation sample is not entirely representative" (FAO 1987).

2.2 Evaluation methods used by some agencies

During this study, no single agency was found which could produce documentation allowing a thorough study of the planning, operation and evaluation of all its projects. In some cases this was because documents were restricted to in-house use, but most agencies had poor records of what reports and documents had been written and an equally poor ability to retrieve them. One of the Bank Audit Reviews points out that this lack of documentation inhibits the process of learning from experience (World Bank 1986).

On the other hand, every agency without fail recognized weaknesses in its monitoring, reporting and evaluation procedures, and was talking about, or doing something about, improving this aspect.

Generally, the scale of the evaluation operation correlates with the size of the agency. Several agencies have an evaluation unit which is independent of the operational divisions, e.g. World Bank, ODA and FAO. This has the merit of allowing impartial assessment of projects, but it also means that it can only offer advice and suggestions, which are not always accepted by the operational divisions.

Small agencies and NGOs find that maintaining an independent evaluation unit makes additional demands on the technical staff who are usually in short supply. The NGOs with whom this was discussed felt that monitoring and evaluation deserved more attention, but were reluctant to allocate more resources to it.

Agencies have their own sequence of steps in project planning, implementation and evaluation. Most include the four main components:

- identification and proposal;
- appraisal;
- implementation; and
- evaluation.

Studies of the methodology of monitoring and evaluation point out that monitoring during the project, and evaluation both during and after it, should be planned in complete detail from the stage of project proposal (Casley and Kumar 1987 and 1988, Unesco 1984). However, all the major reviews which cover many projects (e.g. World Bank, ODA, FAO) conclude that both monitoring and evaluation are weaknesses in most projects. Monitoring during the project is generally held to be essential so that the feedback can be used to make adjustments and improvements during the life of the project. Evaluation would ideally be a continuous process, starting at appraisal, continuing with mid-term reviews and a review at termination, followed by an ex-post review and a follow-up review 5, 10 or 15 years after the end of the project.

All agencies have some programme for evaluation of at least some of their projects, but no agency was found to have procedures which approached an ideal specification such as that set out by Goodman and Love (1980). The procedures of the World Bank are probably the most comprehensive, but even this agency is unable to do a complete evaluation of every project and since 1982 has followed "selective auditing".

Each agency has its own methodology for evaluation, and there seem to be two approaches to getting the most information with limited staff capacity. One is to carry out a fairly detailed study of a limited number of projects, restricted perhaps by geographical location or by sector. An example is the "Synopsis of reviews of six African Rural Development Projects" by ODA (Morris 1981). The other approach is to use a simple assessment which can be applied easily and quickly to a large number of projects. An example of this approach is the FAO Review of Field Programmes, which covered 715 projects in 81 countries (FAO 1987).

2.3 Methods used in this study

The evaluation procedure used in this study had four components, combining subjective and objective assessments.

i. From the 20 or 30 books and reviews which each analysed many projects, several recurrent themes emerged. Some of these were formulated as hypotheses and kept in mind when studying or assessing individual projects, so that it could be noted whether each project supported or contradicted the idea. Examples are:

- Duration of project is important, and ten-year projects are more likely to be successful than five-year projects.

- Farmer involvement should start at the time of project preparation, not at implementation.

- Projects implemented through Project Management Units (PMU) are less likely to be effective than those integrated into line departments.

This was a subjective and non-quantifiable exercise but very useful in distilling the enormous amount of information in the literature.

ii. The 13-point questionnaire used in the earlier FAO study (see Section 1.3) was expanded into a set of evaluation sheets, shown in Appendix 3. The three main sections ask questions relating to what happened before the project started, during implementation, and after the project was completed. Some questions collect background information and ask for a yes/no/do not know answer. Where possible, a quantitative answer is requested using a five-point scale. A quantitative assessment can be made for each of the before, during and after sheets, and these can be combined into an overall assessment of the project (a simple scoring sketch is given at the end of this item). Another sheet records background information about the project and the documentation available, and there is a separate sheet for assessment if there was a significant training component.

Sets of evaluation sheets were completed by the author and by a number of past and present FAO staff members. Some projects were assessed independently by more than one person to check whether they obtained similar results. (They were encouragingly similar.)

The numerical results of these studies are shown in Appendix 1.
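As an illustration of how the before, during and after sheet scores might be combined, the short sketch below averages the five-point answers on each sheet and then averages the three sheet scores into a single overall figure. The example answers, the treatment of "do not know" responses and the unweighted averaging are assumptions made for illustration; the actual sheets are those in Appendix 3.

```python
# Illustrative only: one way the before/during/after sheet scores might be
# combined. The real sheets are in Appendix 3; the unweighted averaging and
# the example answers below are assumptions.

def sheet_score(answers):
    """Average of the answered five-point (1-5) questions on one sheet.
    'Do not know' answers are passed as None and ignored."""
    scored = [a for a in answers if a is not None]
    return sum(scored) / len(scored) if scored else None

# Hypothetical answers for one project (1 = poor ... 5 = good).
sheets = {
    "before": [4, 3, None, 2, 4],
    "during": [3, 3, 4, 2],
    "after":  [2, None, 3, 2, 1],
}

scores = {name: sheet_score(answers) for name, answers in sheets.items()}
overall = sum(scores.values()) / len(scores)

for name, score in scores.items():
    print(f"{name:>6}: {score:.2f} / 5")
print(f"overall: {overall:.2f} / 5")
```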

iii. The assessment sheets also used the simple six-factor evaluation developed by the FAO Evaluation Service and used in the FAO Review of Field Programmes. The six factors are clarity of objective, project design, borrower support and involvement, and achievement of objectives, the last of which is sub-divided into output, transfer of skills, and follow-up prospects. Each of the six factors is assessed as poor, satisfactory or good. Assigning numerical values of zero, plus one and plus two to these categories gives a single score out of twelve, as sketched below.
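The arithmetic of this score is simple; in the sketch below the ratings given to the example project are invented for illustration.

```python
# Illustrative only: the six-factor score out of twelve, with invented ratings.
RATING_VALUES = {"poor": 0, "satisfactory": 1, "good": 2}

# Hypothetical ratings for one project.
ratings = {
    "clarity of objective": "good",
    "project design": "satisfactory",
    "borrower support and involvement": "satisfactory",
    "achievement: output": "good",
    "achievement: transfer of skills": "poor",
    "achievement: follow-up prospects": "satisfactory",
}

score = sum(RATING_VALUES[r] for r in ratings.values())
print(f"Score: {score} out of {2 * len(ratings)}")  # prints "Score: 7 out of 12"
```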

iv. Many projects were identified as suitable for analysis, but it was found that there was insufficient documentation available to complete the evaluation sheets. All of these projects were studied to some extent and yielded points of interest and subjective assessments, which have been incorporated into this report.

