

How to evaluate nutrition education programmes1

1 Adapted from a presentation at the FAO Expert Consultation on Nutrition Education for the Public, Rome, 18 to 22 September 1995.



A. Oshaug

Arne Oshaug is Associate Professor at the Nordic School of Nutrition, University of Oslo, Norway. He was elected chairman of the FAO Expert Consultation on Nutrition Education for the Public, 18 to 22 September 1995.

Nutrition education programmes are usually important components of strategies for influencing individual behaviour to solve nutrition problems. Some efforts over the past decades have been successful, while others have failed. These experiences have brought a more realistic perspective on the implementation of nutrition programmes and their impact.

Society has a right to know how resources are used and what final impact nutrition programmes have had. In a world where resources are limited, the need for evaluation of such efforts is increasing. Evaluation should be integrated in the whole nutrition education strategy and must be conducted for all types of interventions.

Useful programmes must be distinguished from ineffective and inefficient ones so that subsequent efforts will be designed and implemented to bring the desired impact. Planners and managers must determine whether the strategy is based on a broad analysis of the nutrition situation, an assessment of needs and aspects of culture and behaviour. They must ask whether the interventions are likely to alleviate the nutrition problems significantly for the appropriate target population. In addition, they should answer questions such as:

Will the various interventions reinforce or counteract each other? Is the intervention being implemented in the ways envisioned and is it effective? How much does it cost? Finally, if the nutrition education programme is one of several interventions, how can its effects be separated from those of the others?

FUNCTIONS OF EVALUATION

In community nutrition, evaluation has been defined as "the systematic collection and delineation and use of information to judge the correctness of the situation analysis, critically assess the resources and strategies selected, to provide feedback on the process of implementation and to measure the effectiveness and the impact of an action programme" (Oshaug, 1994). This broad definition links the evaluation activities to a specific programme or activity.

Evaluation of nutrition education programmes involves the collection of qualitative and quantitative data and their analysis and interpretation. Formative evaluation is used to improve and develop activities of programmes as they are carried out, and is therefore continuous. Summative evaluation measures the outcome of an activity or set of activities.

Evaluation is used to fulfil requirements of programme sponsors for provision of information about coverage, service, impact, efficiency and fiscal and legal accountability (Rossi and Freeman, 1993). Another function of evaluation is to facilitate the administration and supervision of a programme.

Evaluation may have psychological or socio-political functions in raising awareness of educational activities or promoting public relations. Giving feedback or involving people in activities can help make programme beneficiaries aware of the usefulness of evaluation.

DEVELOPING AN EVALUATION SYSTEM

Commonly, evaluation of a programme follows a systematic approach which should be built into all phases of programme planning, implementation and management. It is essential that evaluation begin with a clear definition of a nutrition education programme's goals and objectives. The goals and objectives are based on needs which are identified through assessment of the nutrition situation and the factors that contribute to problems. The problems that can be solved by nutrition education are identified and the various actors and target groups are described. Systems that can support nutrition education activities should be identified.

Using this information, the goals and measurable objectives (including outcomes) can be specified. With the exception of some deficiency problems, many nutritional problems are not easily recognized, and a precise assessment of the empirical situation is usually required before specific, realistic objectives can be formulated and a nutrition education programme for achieving them can be designed.

Specification of goals and objectives is very important for an education programme and for its evaluation; they give the programme direction, expected results and time frames, and they provide criteria for evaluation. Many programmes have suffered from poorly developed objectives which made evaluation difficult (Wholey, 1981; Chapman and Boothroyd, 1988; Oshaug, Benbouzid and Guilbert, 1993).

Goals are generally broad, abstract, idealized statements of desired long-term expectations. For evaluation, goals must lead to operationalization of the desired outcome; that is, the condition to be dealt with must be specified. An objective should be formulated precisely, delineating the expected change or outcome, the conditions under which the expected change is to take place and the criterion (the extent of change expected to satisfy the objective).
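To make such operationalization concrete, the minimal sketch below records an objective's expected change, conditions, criterion and time frame in a small data structure so that the criterion can later be checked against a measured result. The objective, names and figures are hypothetical illustrations, not drawn from the source.

```python
from dataclasses import dataclass


@dataclass
class Objective:
    """One measurable programme objective (hypothetical example)."""
    expected_change: str    # what should change
    conditions: str         # conditions under which the change should occur
    criterion: float        # extent of change that satisfies the objective
    time_frame_months: int  # when the change should be achieved

    def is_met(self, measured_change: float) -> bool:
        """Compare a measured result against the criterion."""
        return measured_change >= self.criterion


# Hypothetical objective: raise the share of infants exclusively breastfed
# to six months from 40 percent to at least 55 percent within 24 months.
objective = Objective(
    expected_change="proportion of infants exclusively breastfed to six months",
    conditions="among mothers attending participating health centres",
    criterion=0.55,
    time_frame_months=24,
)
print(objective.is_met(measured_change=0.58))  # True
```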

The various objectives of an educational programme should have different time perspectives. Short-lived interventions may produce measurable results, but new behaviours are fragile and can rapidly disappear. Evaluation of education projects over time strongly supports the need for a long-term, intensive effort (FAO/WHO, 1992). In management, it is important to define specific objectives ("milestones") to be achieved at certain stages in the programme implementation, because they can be followed and reported on during implementation.

TYPES OF EVALUATION

Context evaluation

Context evaluation focuses on the initial decisions in the nutrition education programme and ensures that past experience is brought into the planning process. Usually, most of the information needed has been collected during the situation analysis or in later baseline studies. If the available information is not sufficient, data from a sample or pilot programme or anecdotal data may be collected to give a better understanding of the problem. Context evaluation is normally carried out to refine objectives and activities and to ensure that they are relevant and realistic.

The analysis may involve contextual factors that have a bearing on implementation, such as the religion, race, ethnic background and sex of the target group in the community. It may also cover general socio-economic and political issues. It is essential for programme staff to understand how different target populations view reality, how they use and perceive symbols and colours and how a nutrition education message would be received, understood and possibly acted upon by the target population. Such an evaluation can focus on factors that may impede a programme, thus preparing staff to cope with them.

Input evaluation

An important aspect of preparation for implementation, input evaluation is a critical look at the adequacy and appropriateness of the resources available to carry out the programme. A programme can have at least four types of input: the programme plan, material resources, human resources and time, particularly that allocated for the initial phase, evaluation, feedback and follow-up. Many evaluations show that planners consistently underestimate the time and effort needed to adopt a new practice (FAO/WHO, 1992).

The evaluators should ask if the activities have been tested for practicability and feasibility and whether the cost per beneficiary has been estimated. It is useful to ask whether the education materials have been tested for relevance; if the target groups have been involved in any stage of programme conceptualization and design; and whether the programme staff have adequate skills and competence. This type of evaluation examines whether the plan includes feedback to the local community, the target group(s), authorities and others. If the answers to such questions are negative, it is important to assess the consequences: will the gaps prevent successful implementation, and should the programme be modified?

Process evaluation

Process evaluation monitors progress while the strategies and activities are implemented. It indicates whether they are likely to generate the expected results and if the work is done on time. If the activities do not meet expectations, they may be changed or even stopped. It is much better to change a programme during implementation than to await a retrospective analysis to find out where it went wrong and who was responsible for the failure. Careful monitoring identifies programme constraints that have been overlooked or underestimated, provides insight into audience characteristics that were misunderstood and suggests important factors that have changed during the course of the programme.

When planning a process evaluation, the choice of indicators depends heavily on the nature and complexity of the programme, the criteria of the objectives, the people involved, the context of implementation, the programme's duration and the target group. Process evaluation may focus on gradual changes in the target group and/or the performance of programme personnel. The complexity of the evaluation depends on the resources available and the expertise of the evaluator. Data collection activities should be as simple and economical as possible; sophisticated monitoring and quantitative analytical procedures are often not necessary. Many sources of data should be considered, such as direct observation by an evaluator, data from programme personnel, programme records, information from programme participants or their associates and data on food use and/or sale of food.
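A process-evaluation tally can indeed be very simple. The sketch below compares planned with delivered activities from programme records; the activity names, targets and threshold are hypothetical and serve only to illustrate how such monitoring data might be summarized.

```python
# Hypothetical process-evaluation tally: planned versus delivered activities.
planned = {"group sessions": 48, "home visits": 120, "radio spots": 24}
delivered = {"group sessions": 41, "home visits": 73, "radio spots": 24}

for activity, target in planned.items():
    done = delivered.get(activity, 0)
    completion = done / target
    # An illustrative 80 percent threshold flags activities that need attention.
    status = "on track" if completion >= 0.8 else "behind schedule"
    print(f"{activity}: {done}/{target} ({completion:.0%}) - {status}")
```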

Process evaluation results have a number of uses, depending on the purpose of the evaluation, the stage of the programme's development and the funding agency. When process evaluation is part of a comprehensive evaluation, one of its important functions is the provision of information about the congruence between programme design and implementation. The results should be given to project managers and staff on a continual basis to permit changes in the programme. A plan should be made for the use and dissemination of the evaluation findings. It is important to present the findings in ways that correspond to the needs and competencies of the stakeholders.

Outcome or impact evaluation

In evaluating the outcome of an intervention, a distinction must be made between gross and net outcome (Rossi and Freeman, 1993). The gross outcome consists of all observed changes in the period in question. The measure of gross outcome might be defined as any change in the diet of the participants compared to the diet before the programme started. Net outcomes are more difficult to measure. Assessment of net outcome may involve an attempt to measure, for example, the dietary changes that are caused by the intervention. In impact assessment the primary concern is with the net outcome.

In assessing impact on diet it is important to have a clear definition of the purpose of the assessment and to select appropriate methods and variables. Often a simple food frequency questionnaire may be a good enough indicator of changes in food use. Assessment of food intake requires personnel with the skills to undertake a 24-hour recall, record a food history, instruct participants on keeping food records and so on.

The dietary and nutritional changes seen in a specific period may be attributed to at least three effects: the effect of the intervention; the effects of exogenous confounding factors (i.e. a mixing of effects between the exposure, the outcome and an extraneous factor); and design effects, which are artefacts of the evaluation process itself.
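The decomposition can be expressed as simple bookkeeping: the observed (gross) change is reduced by whatever change is attributed to confounding factors and to the evaluation design itself, leaving an estimate of the net effect of the intervention. The sketch below illustrates this with invented figures; it is not a substitute for a proper impact design.

```python
# Hypothetical decomposition of an observed dietary change (all figures invented).
gross_change = 12.0        # e.g. percentage-point rise in households using iodized salt
confounding_effects = 4.0  # change attributed to secular trends and interfering events
design_effects = 1.5       # change attributed to the evaluation process itself

net_effect = gross_change - confounding_effects - design_effects
print(f"Estimated net effect of the intervention: {net_effect} percentage points")
```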

Relatively long-term trends in a community, called secular trends, may produce changes in gross outcomes that enhance or mask the net effects of an intervention. For instance, if an economic improvement leads to increased food consumption among the poor, the nutrition programme's effects are enhanced. Alternatively, an effective programme to improve the nutritional situation of poor people may appear to have little or no effect when gross outcomes are influenced by a general downward economic trend leading to decreased food consumption.

Short-term interfering events can enhance or mask changes, yet it is difficult to control for them. Such events could include exposure to other types of education material, disruptions in communications and delivery of food or social or political events that affect community participation (Rossi and Freeman, 1993).

Design effects result from the evaluation process itself and are thus always present and consistently threaten the validity of impact evaluation. The act of evaluation itself is an intervention and thus may have an impact.

An intervention programme will often create an effect regardless of the type of programme. In experiments involving pharmacological treatments this is known as the "placebo effect"; in social programmes like nutrition education for the public, it is called the "Hawthorne effect" (Rossi and Freeman, 1993). This effect is not specific to any particular research or evaluation design; it may be present in any study involving human subjects. In other words, any nutrition education programme can bring about dietary changes, no matter what the message is. It is difficult to estimate the magnitude of the Hawthorne effect, and its importance may be exaggerated (Franke and Kaul, 1978).

QUALITATIVE VERSUS QUANTITATIVE METHODS

Today most evaluators collect qualitative as well as quantitative information. The choice of approach depends on the evaluation question at hand. Qualitative approaches can be critical in programme design and are important means of monitoring programmes (process evaluation). Qualitative evaluators aim to make a programme work better by providing information on the programme to its managers (formative evaluation) (Rossi and Freeman, 1993). In contrast, quantitatively oriented evaluators are primarily concerned with impact or outcome evaluation (summative evaluation). Quantitative approaches are more appropriate for estimates of net impact as well as for assessments of the efficiency of programme efforts.

The use of multiple methods, both qualitative and quantitative, can strengthen the validity of findings if the results produced by different methods are congruent and/or complementary.

MEASURING EFFICIENCY

A central step in evaluating nutrition education programmes is determining which programmes show the best results per unit of cost. For decision-makers the reference programme is often the one that produces the most impact on the most targets for a given level of expenditure. This simple principle is the foundation for cost-benefit and cost-effectiveness analyses which provide systematic approaches to resource allocation.

In cost-benefit analyses the outcomes of nutrition education programmes are expressed in monetary terms. For example, a cost-benefit analysis would focus on the difference between the money spent on the nutrition education programme and the savings from reduced expenditure on treating diet-related diseases (anaemia, goitre, vitamin A-related blindness, etc.), avoided losses of productive capacity, life years gained, quality-adjusted life years saved and so on.

In cost-effectiveness analyses the outcomes of nutrition education programmes are expressed in substantive terms. For example, a cost-effectiveness analysis would focus on estimating the expenditure needed to change the diet of each target.

Efficiency analysis can be done either in the planning and design phases of a programme (ex ante analysis) or, more commonly, after the programme's completion (ex post analysis), often as part of impact evaluation. Ex ante analyses are not based on empirical information and run the risk of either under- or overestimating the benefits or effectiveness. Ex post analysis assesses whether the costs of the intervention can be justified by the magnitude of net outcomes. An important strategy in efficiency analysis is to undertake several different analyses of the same programme, varying the assumptions made so that they are open for review and checking. This is called sensitivity analysis.
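A sensitivity analysis of this kind can be kept very simple. The sketch below recomputes a cost-effectiveness ratio under pessimistic, expected and optimistic assumptions about how many people in the target group actually change their diet; the costs, group size and scenario rates are hypothetical and only illustrate the idea of varying assumptions openly.

```python
# Hypothetical cost-effectiveness sensitivity analysis (all figures invented).
programme_cost = 150_000.0   # total cost of the nutrition education programme
target_group_size = 20_000   # people reached by the programme

# Assumed share of the target group whose diet actually improves.
scenarios = {"pessimistic": 0.05, "expected": 0.12, "optimistic": 0.20}

for name, change_rate in scenarios.items():
    people_changed = target_group_size * change_rate
    cost_per_person_changed = programme_cost / people_changed
    print(f"{name}: {people_changed:.0f} people changed their diet, "
          f"cost per person changed = {cost_per_person_changed:.2f}")
```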

Efficiency analyses may be impractical and unwise for several reasons. The required technical procedures may be beyond the resources of the evaluation programme. Also, political or moral controversies may result from placing economic values on particular input and outcome measures. Cost-effectiveness analysis is often seen as more appropriate than cost-benefit analysis because it requires monetizing only the programme's costs, while the benefits are expressed in outcome units (Rossi and Freeman, 1993).

WHO SHOULD EVALUATE?

In deciding who should perform the evaluation, the first distinction to be made is between internal and external evaluators. An internal evaluator is usually a member of the staff of the programme concerned and reports directly to its managers. The internal evaluator's objectivity and external credibility are (often rightly) said to be lower than those of an external evaluator. Because external evaluators are not directly involved or employed in the programmes they examine, they enjoy more independence (Oshaug, 1994), but they may be less discerning about context.

The second distinction is between professional and amateur evaluators. This distinction reflects differences in training and expertise, and is not a judgement on the quality of an evaluation. Evaluation is the focus of the professional evaluator's training and work. An amateur evaluator usually focuses on other topics; evaluation is only a part of her or his job. The amateur might have a better understanding of a programme's evaluation needs, might be able to develop a better rapport with the staff and will be able to use the information and results of the evaluation faster (often directly), in particular if it is an internal evaluation (Oshaug, 1994).

Skills needed in evaluation

Both nutrition and evaluation are interdisciplinary fields. Evaluators use a range of approaches such as large-scale randomized field experiments, time-series analysis, qualitative field methods, quantitative cross-sectional studies, rapid appraisal methods, focus group discussions and participant observation. The role of an evaluator is therefore difficult to define precisely in general terms (Rossi and Freeman, 1993). Clearly, it is impossible for every person involved in nutrition evaluation to be an expert in every methodological procedure, and it may be necessary to hire consultants who are experts on specific methods.

An evaluator will have an important role in assessing the correctness of problem identification. Skills are therefore needed in diagnostic procedures for defining the nature, size and distribution of the nutrition problem. These procedures may include analysis of existing data to assess or provide the baseline; rapid appraisals; qualitative needs assessment; forecasting of needs; estimation of nutrition parameters; estimation of nutrition/disease-risk behaviours; and assessment of target selection. Furthermore, skills are also needed in using indicators to identify trends; measuring programme coverage; identifying effects and impact; assessing biases and confounding factors; and disseminating evaluation results to various stakeholders.
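Programme coverage, one of the measures listed above, is simply the share of the intended target population actually reached; the figures in the short sketch below are hypothetical.

```python
# Hypothetical coverage calculation: share of the intended target group reached.
target_population = 8_500      # people the programme intends to reach
participants_reached = 5_100   # people who actually took part

coverage = participants_reached / target_population
print(f"Programme coverage: {coverage:.0%}")
```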

CONCLUSION

Evaluation can be simple or complex depending on the evaluator's competence and the aims of the evaluation. Evaluators of nutrition education programmes should look at various options, aiming at the simplest system that works and seeking the best method or set of methods for answering the questions that address the objectives of the evaluation. Having chosen a type of evaluation and the questions and indicators to use, the evaluator will be better able to decide among methodologies.

As noted earlier, evaluation should be integrated in the planning of the nutrition education programme, and the purpose of the evaluation should be clear. An evaluation system should be developed which takes account of all phases of the nutrition education project. Finally, plans for dissemination of the evaluation results should be made. The results should be presented in a way that corresponds to the needs and competencies of the stakeholders.

REFERENCES

Chapman, D.W. & Boothroyd, R.A. 1988. Evaluation dilemmas: conducting evaluation studies in developing countries. Eval. Program Plan., 11: 37-42.

FAO/WHO. 1992. Communicating to improve nutrition behaviour: the challenge of motivating the audience to act. ICN case study. ICN/92/INF/29. Rome.

Franke, R.H. & Kaul, J.D. 1978. The Hawthorne experiments: first statistical interpretation. Am. Sociol. Rev., 43: 623-643.

Oshaug, A. 1994. Planning and managing community nutrition work. Manual for personnel involved in community nutrition. 2nd ed. Oslo, WHO Collaborating Centre, Nordic School of Nutrition, University of Oslo.

Oshaug, A., Benbouzid, D. & Guilbert, J.-J. 1993. Educational handbook for nutrition trainers. A handbook on how educators can increase their skills so as to facilitate learning for the students. Geneva, WHO/Oslo, WHO Collaborating Centre, Nordic School of Nutrition, University of Oslo.

Rossi, P.H. & Freeman, H.E. 1993. Evaluation: a systematic approach. London, Sage Publications.

Wholey, J.S. 1981. Using evaluation to improve program performance. In R.A. Levin, M.A. Solomon, G.-M. Hellstern & H. Wollmann, eds. Evaluation research and practice. Comparative and international perspectives. London, Sage Publications.

SUMMARY

In a world where resources are limited, useful nutrition education programmes must be distinguished from ineffective ones. For programmes to be designed and implemented so that they have the desired impact, evaluation should be integrated into nutrition education strategies.

Evaluation provides information on coverage, service, impact, efficiency and fiscal and legal accountability. It facilitates the administration and supervision of a programme, and it can raise awareness of educational activities or promote public relations.

Evaluation follows a systematic approach that should be built into all phases of programme planning, implementation and management. It is essential that evaluation begin with a clear definition of the objectives of the nutrition education programme, which are themselves identified through assessment of the nutrition situation and of the factors contributing to the problems.

Several types of evaluation are examined. Context evaluation serves to refine objectives and activities and to ensure that they are relevant and realistic. Input evaluation critically examines the adequacy of the resources available to carry out the programme. Process evaluation monitors progress as strategies and activities are implemented and indicates how likely they are to produce the expected results. Outcome evaluation assesses gross and net outcomes: the gross outcome is any change in the diet of the participants, while the net outcome is the dietary change caused by the intervention.

Evaluations may be carried out by internal staff or by external consultants, and evaluators may be professionals or amateurs. Evaluation results should be presented in a way that corresponds to the needs and competencies of the stakeholders.


