
Course evaluations in BA International Studies: some general observations and reflections on the past academic year

Who is responsible for course evaluations, who (and what) are they for, and what happens to student course evaluations after a course has finished? In this post, I will explain how the OLC deals with course evaluations, describe some general trends we have observed over the past years, and address some of the areas we focused on during the academic year 2018/2019.

Course evaluations in International Studies
The OLC is responsible for organizing the course evaluations, analysing the results and, based on those, formulating recommendations to the programme board. This is one of its central tasks. To give you an idea of the magnitude of the work involved: every academic year there are ca. 130 different courses in the BA International Studies that are evaluated (core courses, area courses, language courses, electives and thesis seminars). In addition, since several courses are taught by multiple instructors, we request evaluations per instructor. So, overall, we are looking at more than 100 course evaluation documents per semester.

There are 10 OLC members (5 staff members, 5 student members), and each OLC member is responsible for a set of courses and writes a short summary of the evaluation results at the end of every semester, pointing out the issues spotted in the evaluations. However, the OLC does not rely solely on course evaluations to see what works (or mainly what does not work) in courses, but also seeks input from instructors. Course evaluations are sent to instructors with a request to provide their views, e.g. on how they perceived the way the course ran, the performance of students, whether they recognize the issues mentioned in the course evaluations, etc. Usually, around 10%-15% of the instructors reply, providing additional comments that help us put the course evaluations into context.

Once all the information is in, the OLC tries to distil what the "real" issues are (those identified both by students in course evaluations and by instructors). In some cases, it helps to also look back at course evaluations from previous years, to see whether issues have persisted over a period of time. Where necessary, the OLC seeks further contact with the instructor(s) in question about specific issues, or asks them what proposals they have to address the issues in the coming year.

So far, this has happened in a very small number of cases. Finally, in the very rare cases where issues persist, the OLC writes a recommendation to the programme board, proposing either to consider a change in the structure of the course or to take other steps. In short, to get the full picture of the pressing issues, course evaluations need to be complemented with other sources. In fact, given the methodological drawbacks of our standard questionnaires (an issue I will address in an upcoming blog post), input from instructors and from students beyond the course evaluations is at least as important - but it also takes more time and effort for all involved to provide and gather this information and make sense of it.

What did we focus on this past academic year?
This past academic year the programme board decided to leave out mid-term exams in a number of courses (e.g. in Cultural Studies) - a test case to see whether the number of midterms could be reduced and more varied forms of assessment developed in the programme. Analysing the course evaluations of the relevant courses in this pilot, the OLC was particularly interested to see whether this would affect the (perceived) difficulty of the exam, the study load, etc. Likewise, we scanned the written comments to see whether they mentioned issues not picked up by the standard questionnaire. Finally, we looked at the average exam score and pass rate for the relevant courses and compared them to the previous year. Without going into too much detail, it seems that abolishing the midterm exam did not affect scores in the evaluations significantly, nor was it a recurring theme in the written comments. Likewise, while the average exam grade and pass rate went down somewhat, none of this seemed to be significant. Whether the OLC will ultimately agree to abolishing more midterms remains to be seen (obviously opinions are divided on this). Course evaluations can play one part in forming the opinion of the OLC on subjects concerning changes to the programme structure, but they are by far not the only source of information to consider.

Throughout 2018/2019, we also focused on two core courses where past evaluations indicated issues, highlighted by a large number of respondents in the written comments. Based on the comments and survey results of this year, both courses improved on a number of issues, notably instructions for assignments and feedback from tutors on in-class assignments. It goes to show that courses can improve over time and that more than one data point is required before drawing any conclusions.

A third area we focused on was the newly introduced seminar series Research Methods/Thematic Seminars. Here course evaluations showed a mixed picture, with lots of variation across the different thematic seminars offered. More importantly, what struck us in the evaluations was that instructors received overwhelmingly positive evaluations for their clarity of explanations, feedback, etc. (as we observed in electives in previous years), but the courses themselves (on the general score) received relatively low scores.

One of the main issues, it seemed, was an expectation gap on the side of students: from multiple comments across virtually all thematic seminars, we saw that students expected to take an elective course in which the substantive topics would be the central focus, whereas this seminar series actually focused on research methods. Although the clue was in the name of the seminar series (as well as in its description in the e-prospectus), it seems this information was either glossed over, or students in the second year simply did not know what to expect from a course on research methods. This seminar series ran for the first time this past academic year, so, again, it remains to be seen what the real issues are. But it also shows that information within teaching evaluations - in particular the general score (the very first score we see on the evaluation report) - can be ambiguous, and that issues raised might not necessarily relate to teaching quality, but rather reflect students' preferences for given subjects.

Some general trends (and some reflections) on course evaluations in the BA International Studies
This past year, the vast majority of courses in International Studies received positive evaluations: instructors' explanations are clear, courses are generally well structured, and students find the content interesting. Of course, there are issues in individual courses or seminar series, but on average the picture is this: most courses receive a general score well above 6 (on a 10-point scale), and items that measure the clarity of instructors' explanations, the way courses are organized, etc. very rarely fall below 3.0 (on a 5-point scale).

Still, undeniably, there is variation across courses (both in the scores on the survey part of the evaluations and in the written comments), and it seems that the differences are not necessarily due to the quality of teaching or the didactic approaches of different instructors. Looking at course evaluations for the same course over many years, the following structural factors seem to account for some of the differences. Courses that are taught by the same instructor over several years tend to get more positive evaluations across all survey items. We have seen this again this past semester with two courses that are relatively new to the programme (introduced in their current form in 2017/2018). This past semester the evaluations of both courses improved compared to last year. One (tentative) conclusion could be that courses can be improved if the same instructor stays on and gets the chance to address the issues raised.

We also observed that coordination between the different components of a course matters a great deal. In particular, courses taught by more than one instructor show (on average) lower scores on the survey item "coherence of the course", and the same issues are picked up in the written comments, too. This was a recurring issue in the course evaluations for Cultural Interaction. Ultimately the OLC discussed the issue with the instructors of that course, and consequently the course was split into two courses. Coordination across different components remains challenging in other courses as well - in particular in courses that address different disciplinary perspectives. However, multidisciplinarity is challenging by nature, and splitting courses might not always be desirable. In fact, it might run counter to one of the fundamental ideas behind the International Studies programme, namely being introduced to multiple disciplinary perspectives on a given subject or range of subjects. Splitting courses along disciplinary lines might not serve this goal, even if it means potentially better course evaluations (and, as Goodhart's law goes: "When a measure becomes a target, it ceases to be a good measure.").

In short, addressing issues (and effecting change) takes time, and we need to take the long view when identifying issues in courses. Course evaluations that provide an overview over several years can offer a starting point. However, rather than taking the scores from standardized questionnaires at face value, the information in them needs to be complemented with written comments from students and input from those who actually designed and teach the courses.
