
EVALUATION AND MONITORING OF HIGHER FORESTRY EDUCATION

PREPARED BY SIEGFRIED LEWARK
UNIVERSITY OF FREIBURG, GERMANY
COORDINATOR OF IUFRO (i) EDUCATION GROUP


Starting ideas

If you ask a professor about his or her experiences with evaluation, most likely he or she will tell you about the feedback one gets from the students after a lesson or teaching unit. If you ask a dean of a university faculty or school the same question, he or she may inform you about some recent external audit, perhaps after a revision of a curriculum. But I assume that most professors or deans will first have to stop and think if you ask them about monitoring.

So first we have to clarify the concepts of evaluation and monitoring of learning and teaching processes. We have to ask why it is done, what for, for whom, and by whom. What are the preconditions, and who pays for it? And finally, and probably most important: what is evaluated, on which level, and how?

I will try to give some answers to this complex of questions, as I understand them, based on my own experience and on the literature. I will then give some examples of evaluations from higher forestry education in Germany, describe the methods used and comment upon them.

Concepts of evaluation

The general objective of teaching evaluations in universities is gathering information about the success of education, the details of learning and teaching processes, the conditions under which these processes take place, and input and output. This sounds rather complex and has to be structured and broken down. A useful structure has been developed by Schomburg (1997) and is presented in Fig. 1.

Evaluation is the process of gathering information relevant to the questions to be answered and of interpreting and valuing this information. If such an evaluation is repeated regularly, carried out continuously, or done according to needs with the aim of controlling processes, I would call it monitoring. Monitoring is thus not a concept alongside evaluation, but a special type of evaluation.

Single evaluations, but especially regular or continuous evaluations, may be seen as part of the governance or management of education, which takes place from the level of national ministries of education down to single education institutions and schools, and in part even down to individual teachers. If modern management systems are applied, such evaluations may be considered elements of (total) quality management or of quality development or assurance. They may even be initiated, audited and certified by external agencies.

Fig. 1: Structure of evaluation of higher education (Schomburg, 1997)

Evaluation and accreditation

Whereas accreditation traditionally is common practice in the USA, in Germany it is a completely new initiative of the last few years. It has been started in connection with the transformation of the traditional one-degree curricula into Bachelor and Master programmes, following the Sorbonne and Bologna declarations.

A German Accreditation Council (Akkreditierungsrat) was founded by the "Standing Conference of the Ministers of Education and Cultural Affairs of the Laender in the Federal Republic of Germany" (Kultusministerkonferenz - KMK) and the "Association of Universities and Other Higher Education Institutions in Germany" (Hochschulrektorenkonferenz - HRK). Under this umbrella some accreditation agencies have been established, which are accrediting a quickly growing number of, mainly new or revised, curricula (ii). These agencies, as well as private and state evaluation agencies, normally start their work with the formulation and publication of quality standards (iii). Thus a discussion on quality standards of education has started on a scale not known so far, which may lead to a standardisation of criteria, indicators and assessment methods.

Education controlling versus education research

Evaluation of education processes serves education controlling as well as education research, which thus have a lot in common, but differ in other respects.

They have some approaches, objectives and methods in common. But education research is done by researchers with the primary goal of gaining knowledge, which of course may be applied later. Education controlling, on the other hand, aims at governing education processes and relies, at least partly, on findings and methods developed by science.

Every university professor, as a teacher, will be active in monitoring his or her teaching and the learning processes of his or her students and therefore needs knowledge and methodical competence. He or she may be involved in controlling as a member of a faculty, a curriculum commission or the university management. And education research may be part of his or her scientific activities. I see myself in all three roles, as certainly many professors do. But in this paper I will focus on general aspects and on education controlling.

Who evaluates? Internal and external evaluation

In countries and education systems without regulated quality assurance and accreditation, or in the times before them, the education institution itself decided whether to start an evaluation, as well as its scope, its procedure and the handling of the results. The examples given later give evidence from such a situation in Germany.

Of course, evaluations on the level of single teaching units, carried out by the teachers themselves, have a long tradition. They have often been called for by the students and encouraged by the education institution. The results were for immediate use, especially in the preparation of the following teaching unit.

Later, standardised questionnaires may have been offered, even to be evaluated by a central service unit. One crucial question is whether the education institution or even the ministry of education will receive the results, which has been argued against as being opposed to the academic freedom of the teacher.

A modern approach now often consists of a two-step evaluation. First, in an internal evaluation, the faculty "will prepare a self-assessment, reporting on classroom/seminar/laboratory situations, organisation of student assessment and examinations, students' work and achievements, the curriculum and how it relates to examination requirements, staff and staff development, the situation of female students/researchers, the application of resources (library, information technology, equipment), and student support and guidance." (Cf. footnote 2) A second step may be an external assessment by so-called peers or by an agency or a commissioned consultant (OECD, 1996).

Stakeholders

Of course the topmost stakeholder is society as a whole, which is of special importance for the education of foresters. The stakeholders within the education process are the students, later the graduates, with their needs and views, and the teachers. And finally there is the group of potential and actual employers, who in most countries so far participate only by way of exception.

Theoretical frame

According to the stage of development of programmes (the curriculum as a programme; Mutz, 2001; Rossi, Freeman, Hofmann, 1988), we may distinguish between evaluations of innovative educational programmes or curricula, evaluations aiming at the adaptation and adjustment of curricula, and evaluations of established curricula.

As the expert consultation is focused on innovative curricula in forestry education, programme evaluation according to Rossi, Freeman, Hofmann (1988) will be given special consideration in this context. With my examples I will therefore deal with the evaluation of innovative processes, their implementation and first success, rather than with questions of long-term monitoring or quality assurance.

Objectives of evaluation

Depending on the hierarchy and structure of evaluation, there is also a hierarchy of objectives, connected with the levels of evaluation, as outlined before. There is probably little need to stress that objectives should be clear and laid down before the evaluation process starts.

In the past, evaluation activities very often had a limited scope, focusing on certain points: the evaluation of a specific teaching unit or of an innovative step. This type of activity will remain justified and necessary.

But a growing striving towards integrated, comprehensive governance can be observed. This may interfere with the (traditional) independence of educational institutions, which has been strong for instance in Germany, and perhaps even more so in Sweden. In contrast (speaking from my own limited experience), in Eastern Europe or East Asia decisions on education systems and curricula have been taken in a more centralised way. So in the proceedings of the consultation we must decide what to cover: evaluations with limited scope (which my examples represent) or more comprehensive evaluation and monitoring processes.

What is evaluated? (Criteria)

If we focus on the evaluation of innovative educational programmes, as assumed above, we are dealing with programme evaluation. The overall question is that of the suitability and the success of the curriculum.

From that you may deduce the central criteria:

The criteria for a specific evaluation have to be ranked, weighted and selected according to the objectives of the evaluation, to the specific programme and also according to practicability (available resources). Only then will the choice of methods, including indicators, be reasonable.

What is evaluated will depend on the objectives of the evaluation. The decision about objectives and the definition of criteria are crucial for the attainable results of the evaluation and therefore of fundamental interest for the stakeholders. They may be approached in a participatory process, but this will not be dealt with here.

Methods of evaluation

A comprehensive evaluation will include all the variables (criteria) named by Schomburg, 1997 (Fig. 1). In principle their assessment belongs to the field of empirical (applied) social research (Rossi, Freeman, Hofmann, 1988). Accordingly, the classical methods of assessment are the analysis of documents, observation and enquiry. Generalising very much, the analysis of documents will result in statistical data, observation in information about performance, and enquiry in views, opinions or attitudes, thus covering the classical field and methods of the social sciences (Schnell, Hill, Esser, 1993). There is a vast store of methodological literature in this field, and it does not seem necessary to go into any detail here. Some information, though, will come out of the presentation of the examples.

Examples

The general description of structures and processes of evaluation of higher education in forestry will be illustrated by selected examples, arranged according to the structure of Schomburg, 1997 (Fig. 1).

Student input

It is obvious that most evaluation results can only be interpreted properly if basic input data are taken into account. These include information on the age and sex of the students, on their social background, their formal qualifications and their grades in high school exams, but also on their motivation, their professional and career aims and their basic attitudes. Some of this information is included in the regular teaching reports that faculties in Baden-Württemberg have to prepare for the Ministry of Education. In addition, the Freiburg faculty of forest sciences recently gathered some information on the reasons for choosing to study forestry in Freiburg.

For German forestry students there is only one comprehensive study, by Oesten, 1978, with no follow-up, from which Table 1 is composed. This of course belongs to education research. Oesten found that the attitudes of forestry students differ from those of high school students and described these differences in detail.

Table 1: General attitudes of 1st year forestry students (total number=104, Freiburg) and high school students (total number=74) (after Oesten, 1978)

                                                            forestry students      high school students
                                                            mean     st.dev.       mean     st.dev.
My profession must be paid as high as possible              3.3      0.8           2.1      1.1
My profession must be varied and offer many
  interesting experiences                                   1.6      0.5           1.4      0.7
In my profession I want to have much contact
  with other people                                         2.5      0.8           2.0      1.1
My profession must give me chances to serve
  other people                                              2.2      0.9           2.7      1.3
My profession must give me chances to participate
  actively in changes of social conditions                  3.5      1.5           3.1      1.2
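Table 1 reports the group comparison only by means and standard deviations; the statistical procedure Oesten used is not given here. Purely as a hedged illustration of how two such groups of attitude ratings might be compared, the following minimal Python sketch uses hypothetical raw ratings and an assumed 1 (agree) to 5 (disagree) scale:

```python
# Illustrative sketch only: compares attitude ratings of two student groups
# with Welch's t statistic. The raw ratings below are hypothetical; Oesten
# (1978) is summarized here only by group means and standard deviations.
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = mean(a), mean(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical ratings on an assumed 1 (agree) to 5 (disagree) scale
forestry = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]
high_school = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2]

print(f"forestry:    mean={mean(forestry):.2f}, sd={stdev(forestry):.2f}")
print(f"high school: mean={mean(high_school):.2f}, sd={stdev(high_school):.2f}")
print(f"Welch t = {welch_t(forestry, high_school):.2f}")
```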

Resources

The unpublished teaching report mentioned above reveals some basic statistical data on resources (as described above under internal evaluation). In 1999, for instance, there were 10 students per scientist and 27 students per professor. These ratios have grown with the increasing number of students over the last 30 years.

These data have not been followed systematically, and they are not easily accessible. This example shows the limited interest of the past, at least in Baden-Württemberg.

Processes

Curriculum level

Tables 2 and 3 stem from a rather large external evaluation of the teaching situation at the Freiburg Faculty of Forest Sciences five years after the "radical revision" of the curriculum (Lewark, 1996). The assessment included enquiries of the teachers (Table 2) and of the students (Table 3) according to a standardised and frequently used procedure (Webler et al., 2000), thus leading to results which can be compared with those from other curricula. They have been laid down in a comprehensive report, which is made available to the interested public but not formally published. The main purpose was to serve as a basis for the further development of the curriculum.

Table 2: Satisfaction of professors and lecturers (total number=50) with the performance of students in papers and reports after preliminary examinations (Webler et al., 2000) - summarized 4-step scale

                                          satisfaction with all or   dissatisfaction with all or
                                          many students (%)          many students (%)
utilisation of literature                 24                         76
structuring                               46                         54
contents                                  54                         46
presentation                              42                         58
working techniques                        22                         78
diligence                                 20                         80
mastering of language                     40                         60
independent development of subject        11                         89
independent use of material               34                         66

Table 3: Problems at the beginning of study (total number=202 students) (Webler et al., 2000) - summarized 5-step scale

                                              no or few       big or very big
                                              problems (%)    problems (%)
working self-organised                        66              14
social contacts / communication               83               8
co-operation in working groups                68              12
understanding of new study contents           58              16
free speech in discussions                    36              32
understanding of teachers                     34              22
difficulties with self motivation             54              22
orientation in new surrounding                76               7
learning demands                              54              19
gaining general idea on curriculum            44              27
understanding texts in foreign languages      39              22
knowledge of scientific working techniques    47              22
time planning / organisation of studying      50              22
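Tables 2 and 3 are labelled "summarized" scales: the four or five answer categories of each item are collapsed into two groups and reported as percentages of the respondents. A minimal sketch of such a summarization, with hypothetical answers and an assumed coding of 1 (no problems) to 5 (very big problems), might look as follows:

```python
# Minimal sketch: collapse a 5-step problem scale into two summary percentages,
# as reported in Table 3. The answer data are hypothetical;
# 1 = no problems ... 5 = very big problems (assumed coding).
from collections import Counter

def summarize(answers):
    """Return (% answering 1-2, % answering 4-5), ignoring invalid values."""
    valid = [a for a in answers if a in (1, 2, 3, 4, 5)]
    counts = Counter(valid)
    n = len(valid)
    low = 100 * (counts[1] + counts[2]) / n
    high = 100 * (counts[4] + counts[5]) / n
    return round(low), round(high)

answers = [1, 2, 2, 3, 4, 1, 2, 5, 3, 2, 1, 4]   # one hypothetical questionnaire item
print(summarize(answers))                        # -> (58, 25)
```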

Teaching module level

Some years earlier, an internal evaluation had already studied the acceptance and the starting phase of the curriculum in Freiburg (Lewark, Mutz, 1996; Mutz, 2001, gives a broad theoretical background). It included complete assessments on the level of the core teaching modules, for which a standardised and tested method was used (Table 4). The evaluation has also been used as a basis for fundamental discussions in the faculty, which led to a better general agreement on the principles of the revised curriculum and to improvements of the current teaching practices.

Table 4: Evaluation on the level of teaching units - descriptive statistics: 5 criteria selected out of 18, based on 15 indicators out of 40; total number=11 modules evaluated, module values given, total number of questionnaires -; standardised and tested evaluation instrument HILVE-Inventar (Mutz, 2001) - scale: 1: not true; 7: true

                                                              mean    st.dev.   min.    max.
structure (indicators: logical structure; organization)       5.07    0.88      2.77    5.74
overdemanding (indicators: too much material; too fast;
  not understandable; level of demand: average = 4)           4.36    0.72      2.85    5.11
learning success (indicators: much learned;
  important things learned)                                   3.06    0.71      1.90    4.42
teaching competence (indicators: complicated information
  understandable; well prepared; motivating speech)           5.06    0.81      3.15    5.77
general judgement (indicators: expectations fulfilled;
  content; visit worthwhile; motivating)                      4.16    0.89      2.65    5.05
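The table caption states that each criterion rests on several indicators and that module values are given. Purely as an illustration of how such module-level descriptive statistics might be derived (this is not the documented HILVE procedure; indicator names follow Table 4, the rating values are hypothetical), a short sketch:

```python
# Sketch: each criterion is averaged over its indicator ratings per module,
# then mean, standard deviation, minimum and maximum are taken across modules,
# as in Table 4. All rating values below are hypothetical.
from statistics import mean, stdev

CRITERIA = {
    "structure": ["logical structure", "organization"],
    "learning success": ["much learned", "important things learned"],
}

# Hypothetical mean indicator ratings (scale 1: not true ... 7: true) per module
modules = [
    {"logical structure": 5.2, "organization": 4.8, "much learned": 3.1, "important things learned": 3.4},
    {"logical structure": 5.9, "organization": 5.6, "much learned": 2.8, "important things learned": 3.0},
    {"logical structure": 3.0, "organization": 2.5, "much learned": 1.8, "important things learned": 2.0},
]

for criterion, indicators in CRITERIA.items():
    module_values = [mean(m[i] for i in indicators) for m in modules]
    print(f"{criterion}: mean={mean(module_values):.2f} "
          f"sd={stdev(module_values):.2f} "
          f"min={min(module_values):.2f} max={max(module_values):.2f}")
```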

Figure 2: Assessment form for evaluation on the level of a teaching unit, based on a mind map (developed by Dr. Artur Hornung)

For comparison, Figure 2 presents a one-page assessment instrument for use in a single teaching unit by the teacher. The setup as a mind map has been motivating for the students. The evaluation is done by frequency distributions and by listing the freely formulated comments and suggestions, as sketched below. Of course this method of assessment, as well as all other enquiries, must be followed by feedback to the population enquired.
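A minimal sketch of such a single-unit evaluation, tallying frequency distributions for the closed questions and listing the freely formulated comments, could look as follows; the question labels and answers are hypothetical and do not reproduce the actual form:

```python
# Sketch, assuming a small set of responses to one teaching unit's form:
# frequency distributions per closed question plus a plain list of comments.
from collections import Counter

responses = [
    {"pace": "too fast", "materials": "good", "comment": "more field examples, please"},
    {"pace": "just right", "materials": "good", "comment": ""},
    {"pace": "too fast", "materials": "fair", "comment": "handouts came too late"},
]

for question in ("pace", "materials"):
    freq = Counter(r[question] for r in responses)   # frequency distribution
    print(question, dict(freq))

print("Comments:")
for r in responses:
    if r["comment"]:
        print("-", r["comment"])
```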

Outcomes

Some of the outcomes of the education process are already included in the teaching reports in the form of statistical data. Other very important information is gathered by graduate surveys. Figure 3 gives a typical result from the only graduate survey carried out so far covering graduates from all four German faculties of forest sciences (Gerecke, 1997).

Figure 3: Employment situation, 1996, of female and male graduates (Diplomforstwirtinnen/Diplomforstwirte) who graduated from 1991 to 1994 at the four German faculties of forest sciences, total number=821 (Elsbeth Gerecke, 1997)

A list of graduate surveys in Europe is given in Figure 4. The limited information listed already shows a considerable variety of approaches.

Concluding remarks

The expert consultation is focused on innovative curricula in forestry education. In this context the first aim of evaluations will be programme evaluation according to Mutz, 2001, and Rossi, Freeman, Hofmann, 1988: an evaluation of innovative processes, of their implementation and first success, rather than of questions of quality assurance or long-term monitoring.

Figure 4: Some characteristics of graduate surveys in Europe (Lewark, 1998)

Just as much as (or even more than) asking the participants about success in a single lesson, the same question has to be asked about learning and teaching progress at the level of the curriculum. This is needed for quality development, for external justification and documentation, and for securing future funding, perhaps even the continued existence of education institutions.

Evaluation and monitoring should be done in a systematic and methodically sound way, which leads to comparable results and, if questionnaires are provided with personal codes, to the revelation of developments over time. The overall goal is of course not to make professors and students transparent, but to develop and assure the quality of learning and teaching, thus serving personal and social progress.
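As a hedged sketch of the idea of personal codes, assuming each respondent uses the same self-chosen pseudonymous code in every survey round (the actual coding scheme is not described here), responses from repeated questionnaires can be linked and individual developments made visible without storing names:

```python
# Sketch: link two survey rounds by self-generated personal codes (hypothetical
# codes and ratings) so that change over time becomes visible per respondent.
round_1 = {"A7K": 3, "X2Q": 5, "M9Z": 2}   # code -> rating of one item, first survey
round_2 = {"A7K": 4, "X2Q": 4, "B1T": 3}   # same item, later survey

for code in round_1.keys() & round_2.keys():      # only codes present in both rounds
    change = round_2[code] - round_1[code]
    print(f"{code}: {round_1[code]} -> {round_2[code]} (change {change:+d})")
```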

REFERENCES

OECD. 1996. Evaluating and Reforming Education Systems. Paris: OECD Publication 91 96 04 1, 83 pp.

Gerecke, Elsbeth. 1997. Berufsaussichten für Diplom- Forstwirte/innen. Schluchsee: Study by Deutscher Forstverein, Final report, 81 pp.

Lewark, Siegfried. 1996. The radical revision of the forestry curriculum at the Freiburg forestry faculty. Paper at Eighteenth Session of the FAO Advisory Committee on Forestry Education, Santiago, Chile

Lewark, Siegfried. 1998. What do we want to know about our graduates? Joensuu: Paper at Silvanet Meeting July 1998.

Lewark, Siegfried; Mutz, Rüdiger. 1996. Evaluation des Kernstudiums im Sommersemester 1996 - Vorläufiger Bericht an den Fakultätsrat der Forstwissenschaftlichen Fakultät. Freiburg: Bericht, 77 pp.

Lewark, Siegfried; Pettenella, Davide; Saastamoinen, Olli. 1998. Labour markets for university educated foresters: recent developments and new perspectives. Wageningen: Proc. Workshop New Requirements for University Forestry Education. 30.7-3.8. 1997. DEMETER Series 1, 69-88.

Mutz, Rüdiger. 2001. Studienreform als Programm. Programmevaluation zur Akzeptanz des reformierten Studiengangs "Forstwissenschaft" bei Lehrenden und Studierenden in Freiburg. Landau: Verlag Empirische (Psychologie, Bd. 38), 311 pp.

Oesten, G. 1978. Untersuchungen zur Sozialisation von Nachwuchsmitgliedern für die Forstverwaltung. Freiburg, Univ., Forstl. Dissertation.

Rossi, P.H.; Freeman, H.E.; Hofmann, G. 1988. Programm-Evaluation. Einführung in die Methoden angewandter Sozialforschung. Stuttgart: Enke.

Schnell, Rainer; Hill, Paul B.; Esser, Elke. 1993. Methoden der empirischen Sozialforschung. 4. überarb. Aufl. München, Oldenbourg, 504 pp.

Schomburg, Harald. 1997. Standard Instrument for Graduate and Employer Studies. Kassel & Eschborn: Wiss. Zentr. Berufs- Hochschulforsch. Univ. Gesamthochsch. Kassel & GTZ.

Webler, Wolff-Dietrich; Scharlau, Ingrid; Schiebel, Bernd. 2000. Evaluationsbericht zur Situation von Lehre und Studium der Forstwissenschaftlichen Fakultät der Albert-Ludwigs-Universität Freiburg. Bielefeld: Kooperationsprojekt zwischen der Forstwissenschaftlichen Fakultät der Albert-Ludwigs-Universität und der Projektgruppe Hochschulevaluation des IZHD Bielefeld. 124 pp. + annex.


(i) International Union of Forest Research Organizations (IUFRO)

(ii) http://www.accreditation-council.de - http://www.fibaa.de - http://sun.vdi-online.de

(iii) For example Evaluationsagentur Baden-Württemberg: http://www.evalag.de
