
II. Evaluation in FAO - Institutional Arrangements, Policies and Methods

Policy background

4. All programmes and activities of FAO, whether financed from the regular budget of the Organization (mandatory assessed contributions) or from voluntarily contributed extra-budgetary resources, are subject to evaluation. The policies for evaluation of these programmes have been set by member countries in the Governing Bodies and by the Director-General. Evaluation is designed to:

    1. provide accountability for results, particularly in terms of evidence of contribution to sustainable impacts for the benefit of member countries; and
    2. assist decision-making by the Governing Bodies and the Organization’s management as part of a results-based approach.

5. The emphasis on providing evidence-based lessons on the technical content of FAO’s work, encompassing its validity, relevance and future improvement, distinguishes the focus of evaluation from that of audit and ensures complementarity between the two.

6. In establishing evaluation policies, the Council takes advice primarily from the Programme Committee. The Director-General is advised by the internal Evaluation Committee, established in 2004 and chaired by the Deputy Director-General. Three documents have been particularly important in codifying evaluation policy:

    1. Director-General’s Bulletin 2001/33 (November 2001): Strengthening the FAO Evaluation System;
    2. Report of the Joint Meeting of the Ninetieth Session of the Programme Committee and the Hundred and Fourth Session of the Finance Committee (September 2003): The Independence and Location of the Evaluation Service – Further analysis of options; and
    3. Director-General’s Bulletin 2004/34 (December 2004): FAO Evaluation Committee (Internal), which included as an annex the decisions taken in line with the advice of the Joint Committees (above), as endorsed by the Council.

7. A further important development was the approval in April 2005 of a set of norms and standards for evaluation in the UN system4 by the United Nations Evaluation Group (which is composed of the heads of evaluation from throughout the UN system). These norms and standards are largely in line with the standards of the OECD-DAC and their purpose is captured in a preambular statement: “Towards a UN system better serving the peoples of the world; overcoming weaknesses and building on strengths from a strong evidence base”. They now provide a baseline against which all organizations and programmes of the UN system can gauge their performance.

8. FAO evaluations currently fall into the following major categories, which are complementary:

    1. Major evaluations for the Governing Bodies: These are decided upon by the Programme Committee and cover evaluations of: individual FAO programmes; work towards strategic objectives, as specified in the FAO Strategic Framework; and cross-cutting institutional issues. There are four to five per biennium. They cover work funded from both the Regular Programme budget and extra-budgetary resources and deal with work at headquarters, regional and country levels (normative and technical cooperation). Comprehensive evaluations of all FAO’s work at country level have now also been introduced;
    2. Evaluations carried out at the request of FAO management, which may also be drawn upon for presentation to the Governing Bodies. These have frequently concerned work at country level, including responses to emergencies;
    3. Evaluations by independent teams of specific extra-budgetary programmes and projects; and
    4. Auto-evaluation by managers of Regular Programme work.

Role of the Programme Committee in Evaluation

9. The Programme Committee is the recipient of major evaluation reports for the Governing Bodies. Its functions with respect to evaluation are to advise the Council on overall policies and procedures for evaluation and to:

    1. decide upon the work programme for major evaluations;
    2. receive and consider the major evaluation reports (which are accompanied by a management response to the findings and recommendations). The Committee presents its conclusions on both the evaluation and the management response to the Council in its report; and
    3. receive progress reports on the implementation of evaluation findings and recommendations and provide its views to the Council.

The Evaluation Service


10. The Evaluation Service is responsible for ensuring the relevance, quality and independence of evaluation in FAO. It is located for administrative purposes in the Office of Programme, Budget and Evaluation, which forms part of the Office of the Director-General5. The Service receives guidance from the Programme Committee and the Evaluation Committee (internal). It is solely responsible for the conduct of major evaluations for the Governing Bodies and other major evaluations, including the selection of evaluators and the evaluation terms of reference. It thus enjoys a high degree of independence within the Organization. In addition to its responsibilities for the conduct of evaluations, the Service also:

    1. facilitates feedback from evaluation in direct follow-up to individual evaluations and in communicating lessons for more general application; and
    2. monitors the extent of implementation of those evaluation recommendations accepted by the Governing Bodies and management.

11. Unlike the evaluation units in some other UN organizations, the Evaluation Service supports auto-evaluation by managers but does not have wider responsibilities in results-based management; this preserves a higher degree of independence in its evaluations. The Service is also not involved in evaluation capacity building in member countries. For staff training, it provides comments on training requirements to the Human Resources Division.

Budget for evaluation in FAO

12. For the current biennium (2004-05), the budget for evaluation was increased by 27 percent in real terms and stands at approximately US$ 4.6 million6 (in total, approximately 0.5 percent of resources available for the Regular Programme of work). Resources for evaluation of extra-budgetary work are currently of the order of US$ 1.3 million per biennium (approximately 0.25 percent of trust fund expenditure). The translation and reproduction of evaluation documents for the Governing Bodies, and certain indirect costs of evaluation such as office space, are covered outside the evaluation budget.

The process and methodology for major evaluations for the Governing Bodies

13. Evaluations for the Governing Bodies normally cover a strategic objective or cross-organizational strategy as defined in the FAO Strategic Framework, a programme, or an organizational unit. In recent years, these evaluations have tended to deal with large blocks of work at the programme or strategy level in order to maximise their usefulness to the Governing Bodies and senior management.

14. Selection of evaluation topics: In proposing subjects for evaluation to the Programme Committee in the rolling evaluation plan, the Evaluation Service takes account of interests expressed by the Governing Bodies and by FAO managers. The intention is that evaluation should focus on those areas where the Governing Bodies and management have the greatest need for evidence-based information on processes, institutional arrangements, outcomes and impacts. In order to achieve balanced and progressive coverage of the Organization’s strategies and programmes, key factors in deciding on the proposals to be made include: a) the coverage of evaluations over the past six years; and b) the coverage of auto-evaluations and other studies. Criteria also include: the size of the programme or area of work; the demand from member countries; and areas of work being considered for expansion because of their perceived relevance and usefulness, or for elimination or downsizing. When management and the Governing Bodies already agree that particular work is not of continued priority, evaluation is unusual, as it can provide accountability but is unlikely to deliver forward-looking lessons. The Programme Committee decides on priorities for evaluation from a list of possible topics and may introduce additional topics it considers to be of importance. It can also, and does, request timely evaluations outside the regular evaluation cycle, and the plan is adjusted to accommodate these.

15. Terms of reference for the evaluation: An approach paper for each evaluation is developed by the Evaluation Service in discussion with the units most closely involved in implementing the strategy or programme. Increasingly, an evaluation team leader is then selected and participates in the finalization of terms of reference.

16. The evaluation team: In the past, most evaluations were led by Evaluation Service staff, but over the last two years evaluations have increasingly been led by, and teams composed of, external consultants, with the Evaluation Service providing technical support and quality assurance. Evaluation consultants7 are selected on the basis of competence, with attention also to regional and gender balance. Evaluation team leaders are consulted where possible on the composition of the remainder of the team. The size of a team is related to both the scale and the complexity of the evaluation, with three to four lead consultants being typical.

17. Evaluation scope and methodology: The methods used are tailored to the individual evaluation, but certain features are common. The ultimate determinant of the value of a programme, strategy or process is the benefit it delivers to FAO member country governments and their peoples. Key issues for evaluations include:

    1. changes in the external environment in which FAO functions;
    2. relevance to the needs and priorities of the member countries and the international community;
    3. functionality and clarity of the objectives, strategy, design and implementation plan to meet those needs and priorities;
    4. efficiency and effectiveness of the processes followed;
    5. institutional strengths and weaknesses, including institutional culture and inclusiveness of process;
    6. quality and quantity of outputs, in relation to resources deployed in undertaking the work;
    7. quality and quantity of the outcomes (effects) resulting from the activities and outputs in relation to resources deployed for the work;
    8. impacts and their sustainability in terms of benefits to present and future generations for food security, nutrition, social and economic well-being, the environment, etc.; and
    9. FAO’s comparative advantage in addressing the priority needs.

18. Evaluations are forward looking. The central concern is thus to identify strengths and weaknesses in FAO’s programmes, approaches and structures with relevance for the future. In examining the effectiveness and impact of programmes, it has generally been found most productive to examine the outcomes and impacts of work completed and ongoing over the last four to six years (over longer periods, both detailed information and the lines of causality for impact become difficult to trace). For many institutional issues, the evaluations are essentially concerned with the efficiency and effectiveness of current, rather than historical, practice, as well as the likely benefits of ongoing reforms.

19. Preliminary desk review and SWOT analysis8 have been found essential in designing evaluations and determining issues for in-depth study. The introduction in 2000 of an enhanced results-based planning model for FAO has made it easier to identify the outcomes and impacts (objectives) towards which programmes are working, but it usually remains essential to clarify the programme logic as an early step in the evaluation process and to define appropriate verification indicators for use in the evaluation.
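
A minimal, hypothetical sketch of such a clarified programme logic is given below (in Python, used here purely as compact notation). The results levels follow the terminology of this report; the descriptions and verification indicators are invented for illustration and do not reflect any actual FAO programme entity.

    # Illustrative programme logic for a hypothetical programme entity:
    # each results level carries a description and the verification
    # indicators an evaluation would use (all names invented).
    programme_logic = {
        "outputs": {
            "description": "national staff trained; planning guidelines published",
            "indicators": ["number of staff trained", "guidelines distributed"],
        },
        "outcomes": {
            "description": "improved national capacity for food-security planning",
            "indicators": ["new planning methods in routine government use"],
        },
        "impacts": {
            "description": "sustained improvement in food security",
            "indicators": ["trends in national food-security statistics"],
        },
    }

    # An evaluation then asks, level by level, whether each indicator can
    # be verified and whether each link in the chain plausibly held.
    for level in ("outputs", "outcomes", "impacts"):
        entry = programme_logic[level]
        print(f"{level}: {entry['description']}")
        for indicator in entry["indicators"]:
            print(f"  verify via: {indicator}")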

20. Evaluations review the work of other institutions comparable to FAO, especially in the multilateral system. This is important for benchmarking on processes, quality of work, etc. As the performance of FAO cannot be judged in isolation from that of its partners and competitors, it is also essential to make judgements on FAO’s areas of comparative strength and weakness.

21. Also with respect to methodology:

    1. evaluations are consultative with stakeholders, through visits to a sample of countries and partners, questionnaires and workshops;
    2. information is gathered through document research, surveys, and focus group and individual interviews using checklists. In addition to open questions, questionnaires use carefully researched closed questions to facilitate statistical analysis;
    3. impact and sustainability have always been major areas for investigation by evaluation teams. In view of the relatively small inputs by FAO to development processes at the national and global levels, key questions concern the extent to which there has been a contribution to a plausible line of causality. Separate sample impact case studies are now being introduced for country evaluations and may form a part of some programme evaluations;
    4. projects and Regular Programme entities are scored by the evaluation team for: relevance; design; implementation; process; outputs; outcomes; and sustainable impacts. This facilitates comparability between the findings of evaluations and macro-analysis of trends and of strengths and weaknesses (a minimal illustration of such aggregation follows this list); and
    5. peer review by experts is used to identify issues for evaluation and to provide additional input for the evaluation findings.
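
As a minimal illustration of point 4 above, the sketch below (in Python) pools standardized scores across several evaluations to surface systemic strengths and weaknesses. The criteria follow the list in point 4; the six-point scale and the sample records are invented for illustration and do not represent actual FAO scoring data.

    # Hypothetical macro-analysis of evaluation scores: average each
    # criterion across scored entities so that consistently weak areas
    # (e.g. sustainable impacts) stand out. All data are invented.
    from statistics import mean

    CRITERIA = ["relevance", "design", "implementation", "process",
                "outputs", "outcomes", "sustainable impacts"]

    # One record per project or Regular Programme entity, scored by an
    # evaluation team on an assumed scale of 1 (poor) to 6 (excellent).
    records = [
        {"relevance": 5, "design": 3, "implementation": 4, "process": 4,
         "outputs": 5, "outcomes": 3, "sustainable impacts": 2},
        {"relevance": 6, "design": 4, "implementation": 4, "process": 3,
         "outputs": 4, "outcomes": 4, "sustainable impacts": 3},
        {"relevance": 4, "design": 3, "implementation": 5, "process": 4,
         "outputs": 5, "outcomes": 3, "sustainable impacts": 3},
    ]

    for criterion in CRITERIA:
        average = mean(record[criterion] for record in records)
        print(f"{criterion:20s} {average:.2f}")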

22. The evaluation report: The methodology requires the evaluation team to consult with stakeholders, including FAO management, but the team is solely responsible for its report, including the findings and recommendations. The role of the Evaluation Service is to assure adherence to the terms of reference and timeliness, and to provide technical support to the evaluation, but the Service has no final say on findings and recommendations. Increasingly, external independent evaluation team leaders are present when the reports are discussed in the Programme Committee.

23. All evaluation reports are public documents, made available in all languages of the Organization and posted on the evaluation website. The report is required to present evaluation recommendations in an operational form and to include recommendations for improvement with no budget increase (evaluation teams had almost invariably proposed both expanded work and an expanded budget in the area under evaluation, which was not always realistic).

24. Follow-up: The Programme Committee requests management to provide a response to each evaluation, indicating which findings and recommendations it accepts, which it rejects, and why. It also requests management, as part of the management response, to provide an operational plan on how it intends to follow up. This is an area where there has been considerable progress in the last few years, and the Programme Committee has emphasised that it would like to see responses in more operational terms. The Programme Committee also requests a follow-up report on the progress made in implementation after two years.

Evaluation of extra-budgetary programmes

25. The policy of the Organization is that all programmes are subject to evaluation, including those funded from extra-budgetary resources, and a generally accepted rule of thumb is that programmes should devote 1-2 percent of total resources to independent evaluation9. The Governing Bodies have also indicated that they do not wish the Regular Programme to subsidise evaluation of extra-budgetary activities. Sixty-one percent of bilaterally funded trust fund field projects over US$ 2 million (excluding emergency projects) that were completed in the years 2000-2004 were evaluated. The corresponding figure for UNDP projects was 27 percent (because UNDP now generally evaluates country programmes rather than individual projects), and none of the 14 unilateral trust fund projects paid for by countries themselves was evaluated. As might be expected, the figures are considerably lower for projects in the budget range of US$ 1-2 million. Of 53 emergency projects over US$ 1 million, only two were separately evaluated.

26. Extra-budgetary projects have been subject to evaluation by tripartite evaluation missions, normally as the project drew towards completion and follow-up action was under consideration. Such missions consisted of three or four independent evaluators nominated respectively by the funding source, the benefiting country or countries, and FAO. However, a number of developments in the last decade have reduced the applicability of this model. There has been a growth in the number of relatively small field projects, for which a large mission of this type would not be a cost-effective use of resources. There has also been a major increase in emergency response and rehabilitation programmes, where resources from various donors are handled as an overall package to respond to the crisis. The growth in non-traditional projects, which support headquarters work or an integrated mix of normative support and field work at country level, has also been important.

27. Country and regional development projects of US$ 2 million or more: The policy now is that field development projects of US$ 2 million or more should continue to be subject to individual project evaluation by an independent team. This modality continues to include a country visit by the evaluation team and evaluation is timed in relation to when it can make the maximum contribution to the work being assisted under the project. The Evaluation Service clears the terms of reference and team composition and also validates that the evaluation report meets essential quality standards.

28. Smaller field development projects: For smaller field development projects, talks are now being initiated with individual donors to explore their willingness to set aside a small amount under each project and place it in an evaluation trust fund specific to each donor. In consultation with the donor, groups of projects would then be evaluated by independent teams, in some cases as part of wider country or thematic evaluations10. Evaluation of FAO’s Technical Cooperation Programme projects is already handled in this way, which facilitates ex-post evaluation as well as the evaluation of ongoing projects.

29. Major emergencies: For major emergencies, FAO needs to evaluate in an integrated way the relevance, efficiency and sustainable benefit from its response to the totality of the emergency. To date, ad hoc funding has been used for this11 and a management decision has now been taken to introduce an evaluation component in project budgets from which resources can be pooled to evaluate FAO’s response both during the provision of assistance and ex-post. Although the first evaluations of major emergencies were not carried out in full consultation with donors or the affected countries, it is intended that, wherever possible, there will be fuller partnership and the evaluations will continue to be managed by the Evaluation Service. The independent multilateral evaluation of the 2003-05 Desert Locust Campaign, which is looking at the response of FAO, national programmes and donors and has a steering committee of all partners to oversee its work, may yield some useful lessons in this respect.

30. For extra-budgetary programme funding which supports areas of the Regular Programme, or a mix of normative and field development work, evaluation mechanisms appropriate to each of the individual programmes are being flexibly developed in discussion with the individual donors.

31. A management response and follow-up on the implementation of recommendations are required for evaluations of extra-budgetary programmes, as for Regular Programme work, but this is generally acknowledged to be an area of weakness. The actual use made of the findings and recommendations depends heavily on the extent to which the various partners to the evaluation become convinced of their validity and thus put them into effect.

32. Evaluation teams: Independent teams are used for all evaluation of extra-budgetary work. They are required to be consultative in their approach, in order to maximise access to information and to facilitate both realism and ownership by partners to the evaluation, but they have full responsibility for the findings and recommendations of their reports. In the past, for evaluations of extra-budgetary funded work, team members were nominated separately by funding sources, beneficiary countries and FAO. The preference now is for all parties to agree on the evaluation team membership without the individuals being their particular representatives.

Auto-evaluation

33. With financial support from UK-DFID, auto-evaluation was introduced in FAO in 2003 as part of results-based management. Auto-evaluations are conducted by programme managers with the use of external consultants, and basic principles and guidelines, quality assurance and technical support are provided to managers by the Evaluation Service12. The manager responsible for the programme entity takes final responsibility for the auto-evaluation report. A formal response to the auto-evaluation is then required from the more senior manager to whom that manager reports, normally a Division Director or Assistant Director-General (although this last stage of the process has not been fully effective). Auto-evaluations are now to be reported in summary form through the Programme Implementation Report, and summaries are available on the FAO evaluation website.

34. From 2005, auto-evaluation has been extended to Priority Areas for Inter-disciplinary Action (PAIAs) and on a pilot basis to the administrative areas. Nineteen auto-evaluations were undertaken in 2004, covering 28 programme entities. For 2005, 16 auto-evaluations have been agreed for the technical programmes. One PAIA and one non-technical programme auto-evaluation are being undertaken as pilot studies.

35. The principle is that all programme entities should be subject to either independent external evaluation or auto-evaluation during the course of their lives, and that programme entities of fixed duration should normally be evaluated towards their completion in order to assist planning for the future. However, it has become clear that, owing to changes in priorities and continuing budget constraints, questions arise about whether to expand certain areas of work and cut others irrespective of the planned lifetime of the programme entities. Considerable emphasis is thus being placed on selecting work for auto-evaluation when major changes in direction are being considered, whether for expansion or for contraction.

36. The Evaluation Service has analysed the experience with the first round of auto-evaluations, including circulating a questionnaire to those involved. Chart 2 summarises the responses on the perceived usefulness of auto-evaluation. It can be seen that Assistant Directors-General all found auto-evaluation helpful. At the Director level, 33 percent of respondents found it very helpful, 58 percent helpful and only 8 percent found no significant benefit. Overall, managers at all levels found the process either helpful or very helpful.
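
Purely as an illustration of the kind of tabulation behind Chart 2 (below), the following sketch (in Python) tallies closed-question responses by manager level and converts the counts to percentage shares. The response categories follow the text above; all of the raw data are invented and do not reproduce the actual survey results.

    # Hypothetical tally of questionnaire responses on the usefulness of
    # auto-evaluation, grouped by manager level (data invented).
    from collections import Counter

    responses = [
        ("Assistant Director-General", "helpful"),
        ("Assistant Director-General", "very helpful"),
        ("Director", "very helpful"),
        ("Director", "helpful"),
        ("Director", "helpful"),
        ("Director", "no significant benefit"),
    ]

    by_level = {}
    for level, answer in responses:
        by_level.setdefault(level, Counter())[answer] += 1

    # Convert counts to percentage shares per level, as shown in Chart 2.
    for level, counts in by_level.items():
        total = sum(counts.values())
        shares = ", ".join(f"{answer}: {100 * count / total:.0f}%"
                           for answer, count in counts.most_common())
        print(f"{level}: {shares}")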

Chart 2: Usefulness of auto-evaluation as perceived by managers


37. As auto-evaluation is the responsibility of the managers directly concerned, there can be only a limited expectation that they will be strongly critical. It was also evident that useful conclusions were internalised by those responsible for programme entities during auto-evaluation, although these were not necessarily reflected in the reports. Auto-evaluation reports do, however, demonstrate whether the entities were able to contribute significantly to outcomes and impacts, which is in itself valuable information for senior managers. Where partners and users were directly consulted in auto-evaluation processes, more criticism, and sometimes verification of benefits, came to light than when consultation processes were largely internal. It was also found that the use of external consultants and/or external peer reviewers strengthened both the objectivity of the process and its critical questioning.

38. It is too early in the process to analyse the impact that auto-evaluation has had on the actual planning of programme entities, and it will always be difficult to disaggregate the extent to which managers have made changes as a result of auto-evaluation from the extent to which the auto-evaluations reflect changes that had been decided upon independently. Many of the managers concerned concluded in their auto-evaluations that programme entity design and execution needed to be improved by clarifying what outcomes were expected for which target beneficiaries. Many found that a greater proportion of resources needed to be devoted to disseminating outputs, particularly publications, rather than just producing them. In addition, there was considerable concern about the need to improve accessibility through FAO’s website.

Auto-evaluation of evaluation

39. The Evaluation Service is now completing the report of its own auto-evaluation of evaluation in FAO, including in particular the performance of the Service itself. In this auto-evaluation, a review was undertaken by peers from other organizations of the UN system and from bilateral development agencies. Structured interviews were held with groups of internal stakeholders and some government representatives. A representative sample of evaluation reports was sent to peers for review against their own criteria. Major findings to emerge from the auto-evaluation process were that:

    1. Evaluation feedback required more attention. In response to this, the FAO evaluation website is being steadily improved, both in ease of use and in access to material. Popular summaries are being developed for the major evaluations presented to the Governing Bodies, and the first of these forms Part IV of the current document. For many years, there have been workshops to discuss findings and recommendations, and this practice is being further emphasised. Evaluation staff are encouraged, when requested by line managers, to assist in the preparation of immediate plans for follow-up, but continued involvement after this could both lead to an eventual conflict of interest and take scarce resources from evaluation work. The link to staff training also has potential to be strengthened.
    2. The FAO evaluation website could be used more as a source of information and lessons by both external and internal users than is the case at present. As mentioned above, efforts are continuing to improve both the ease of access and the range of information on the site, but the Evaluation Service is conscious that users may not actively seek out information and is working to publicise it and package it in an easily utilisable form.
    3. Evaluation methods used were insufficiently documented, and it was thus difficult to judge whether these methods conformed entirely to best practice. The Evaluation Service is concerned with the cost-effectiveness of evaluation, and the most intensive practices for compiling information may not always be appropriate or attainable (evidence may sometimes need to be gained by best approximation and with reasonable plausibility, especially on impacts). Efforts are continuously under way both to learn from others on best practice and to improve methods internally. Documentation of the methods used is being improved, partly through the material on the website and also through this current section of the Programme Evaluation Report.
    4. Evaluation reports: Several of the peer reviewers considered that evaluation reports required improvement in their coverage of the methods used and the inclusion of annexes on such matters as the persons met. This observation seemed to derive from agencies where reports were primarily internal and not produced in a number of languages. The Evaluation Service will be putting forward a paper to the Programme Committee at its next session to gain feedback on the form in which the Governing Bodies would find it most useful to receive evaluation reports, but it is unlikely that the conclusion would be that reports should be longer than they are at present.
    5. Institutional positioning of the Evaluation Service: The peer review panel and the selected government representatives, FAO staff and management expressed views on the Evaluation Service’s administrative position in the Office of Programme, Budget and Evaluation. Senior management considered that the current location provided advantages in feeding evaluation lessons back into the Regular Programme and in ease of administrative oversight, with no loss of independence. The peer review panel and the majority of the other stakeholders consulted felt that the disadvantages of the current location included the possible perception, both inside and outside the Organization, of limited independence of the Service, and issues of ease of access to other units of the Organization with which the Service must work. Their conclusion was that the Service would be best placed as a separate unit within the Office of the Director-General, with oversight provided by the Deputy Director-General in the capacity of Chair of the internal Evaluation Committee13.
    6. Transparency in selection of evaluators: It was observed that there was no competitive process for selection of evaluators and that the criteria for selection were not clear. The Evaluation Service has now started advertising for positions of evaluation team leader and team members on the evaluation website and making selection criteria public. Also, where possible, evaluation team leaders are involved in evaluation team selection.
    7. Overall approach to evaluation: The peer review panel concluded that evaluation was too FAO-centric, serving primarily management and the Governing Bodies, not directly useful to member countries and not adequately involving them. The Evaluation Service did not accept this view as valid. The programme of country evaluations fully involves the country and has as its end product a report intended for use by the country as well as by FAO and other concerned partners. For the programme and strategy evaluations, evaluators from the main groups of beneficiary countries are fully involved, and the reports are directed towards improving FAO performance in the service of its membership.



4 http://www.uneval.org

5 The Service is staffed by a Chief, eight professionals (including one provided from extra-budgetary resources) and three support staff.

6 US$ 4.1 million under the Regular Programme allocation to the Evaluation Service and approximately US$ 0.5 million for evaluation of the Technical Cooperation Programme (TCP).

7 Evaluation consultancies are now advertised on the web.

8 Analysis of Strengths, Weaknesses, Opportunities and Threats.

9 Technical cooperation and analytical work justifies the upper end of this range in view of its relative complexity and high potential for benefit.

10 For which donor specific reports would always be prepared as well as an overall report on FAO’s work.

11 Mozambique floods, Balkans, Afghanistan, southern Africa and Tsunami-affected areas.

12 The Evaluation Service also provides matching funds to support the evaluations.

13 It should be noted in this regard that the organizational location of the Evaluation Service was considered by the Programme and Finance Committees at their joint meeting in September 2003. They “agreed that the independence of the Evaluation Service, within the existing location in PBE, should be further enhanced....”


