Publication: Research › Article in proceedings – Annual report year: 2005
A general evaluation of any education programme should be based on the programme's ability to meet defined goals and objectives. If such an evaluation is performed continuously, it can be viewed as a relative quality measure. A main task when evaluating the whole programme is the evaluation of the individual courses. This may comprise several parts: a lecturer evaluation/report, a student evaluation/report of the course and the teacher(s), the number of students who passed the course, the grade average, and the distribution of the grades.

At DTU (the Technical University of Denmark), students have been evaluating the courses they attend for more than 10 years. During the last 5 years, this evaluation has been electronic, as an integral part of our CampusNet computing and course administration system. At the MEK department, a committee is in charge of our educational activities. One of its obligations is, twice a year, to go through in detail the evaluations for each of our 100+ courses and each of our 50+ teachers: a rather tedious job, where the outcome seldom justifies the resources spent. The electronic version has opened up for further analysis of the evaluation data and extraction of important information; this will be the main focus of the paper.

In the evaluation of courses, the students are given seven different questions, and for each question they can select between 5 different answers. Each answer is given a certain weight, and by summing the weights for the selected answers and averaging over all students, each course obtains a utility value. A similar set of questions and answers exists for all course lecturers. Sorting the utilities for all courses, and also for all lecturers, reveals some interesting cumulative curves. One obvious result of the analysis is the possibility of identifying both good and bad courses and lecturers.
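The utility computation described above (summed answer weights, averaged over students, then sorted across courses) can be sketched as follows. This is a minimal illustration only: the actual weights and answer encoding are not given in the abstract, so the values below are assumptions.

```python
# Sketch of the course-utility computation described in the abstract.
# Each course poses 7 questions; each question has 5 answer options.
# ASSUMED weights for the 5 options (best -> worst); the real weights
# used at DTU are not stated in the abstract.
WEIGHTS = [1.0, 0.75, 0.5, 0.25, 0.0]

def course_utility(responses):
    """Average summed answer weights over all responding students.

    responses: list of per-student answer lists, each containing 7
    entries, where each entry is the index (0..4) of the chosen option.
    """
    totals = [sum(WEIGHTS[answer] for answer in student) for student in responses]
    return sum(totals) / len(totals)

# Hypothetical example: two students answering the seven questions.
students = [
    [0, 1, 0, 2, 1, 0, 1],
    [1, 1, 2, 2, 0, 1, 3],
]
utility = course_utility(students)  # mean of 5.75 and 4.5 -> 5.125

# Sorting utilities across many courses yields the cumulative curve
# the abstract refers to; low-end entries flag candidates for scrutiny.
course_utilities = {"course_a": 5.125, "course_b": 6.4, "course_c": 3.9}
ranked = sorted(course_utilities.items(), key=lambda kv: kv[1])
```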
Although in most cases the analysis only quantifies what is already well known, it is important to search carefully for the reasons behind low utility values. There are many possible reasons for poor performance, and we have seen excellent lecturers obtain poor evaluations due to circumstances beyond their control. On the other hand, with the analysis at hand, it is much easier to focus on specific needs for improvement. The effect of 'chasing the bad performance' over the past few years will be shown.
|Title||International CDIO Conference and Collaborators' Meeting: Queen's University, Kingston, Ontario, Canada|
|Conference||The First Annual CDIO Conference, Kingston, Canada.|
|Period||01/01/05 → …|