How can course evaluation instruments be improved?

What do ratings tell me about my teaching? That’s the question SBE alumnus Twan Huybers asked himself a few years ago.   

Student evaluation and ratings

In Australian universities, like pretty much everywhere else in the higher education world, students are asked to provide feedback on their courses.  Student evaluation commonly happens via a questionnaire listing any number of statements for which students provide ratings (on a scale from “strongly disagree” to “strongly agree”).  Academic staff can use evaluation results formatively to reflect on their strengths and weaknesses as perceived by their students.  Evaluation outcomes are also eagerly assessed by university management, for instance to inform decisions affecting staff careers (hiring, career development and layoff).

University management is often not interested in most of the items evaluated, focusing instead on the one question of “overall satisfaction”.  Yet, there is a more complicated and interesting story behind such headline evaluation scores.  Where can I improve my teaching?  Should I re-assess aspects of my courses?  This is where the difficulties start.  There is ample evidence of the shortcomings of rating scales.  The fact that different people use rating scales differently presents a problem when aggregating responses.  Response biases are well documented, such as the tendency to agree with everything (yea-saying) or to use only the mid-points of the scale.  As a result, rating scores often show little discrimination among the various underlying items.

That was the problem with my course evaluations: there was virtually no difference in the scores for the items evaluated.  Surely, I must have been better at some things than at others?  Responses to the open-ended questions, and informal feedback from some students, seemed to suggest it.  More fundamentally, I did not have much faith in the average numbers presented in the duly compiled evaluation reports.  This frustration was a major motivation for my research into an alternative evaluation method.

A new way

Because of my interest in choice experiments, I became aware of a technique called Best-Worst Scaling (BWS), developed by one of my research collaborators, Jordan Louviere.  Instead of eliciting ratings on an item-by-item basis, a BWS question asks a respondent to choose the best and the worst item in a sub-set of all items.  This is done repeatedly, in a sequence of sub-sets systematically determined by an experimental design.  Hey presto, I thought.  If I ask students to trade off the individual items, rather than giving them the opportunity to tell me my course is good (or bad) in all respects, I should gain better insight into the relative differences.  Also, BWS provides scores on a scale that is common to all students, because their responses are choices rather than ratings on a person-dependent scale.  Moreover, the analysis of BWS data can be quite straightforward, making it broadly accessible to evaluation practitioners and researchers.
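As a concrete illustration, here is a minimal sketch of the simplest BWS analysis, best-minus-worst counting: tally how often each item is chosen as best and how often as worst, and take the difference.  The item names and responses below are made up purely for illustration; a real study would draw its sub-sets from a proper experimental design, and more sophisticated choice-model analyses are also possible.

```python
# A minimal sketch of best-minus-worst "counting" scoring for BWS data.
# The evaluation items and responses below are hypothetical examples.

from collections import Counter

# Each response records which item a student picked as best and as worst
# within one sub-set of evaluation items shown to them.
responses = [
    {"best": "clear explanations", "worst": "feedback on assignments"},
    {"best": "course organisation", "worst": "feedback on assignments"},
    {"best": "clear explanations", "worst": "pace of lectures"},
]

best_counts = Counter(r["best"] for r in responses)
worst_counts = Counter(r["worst"] for r in responses)

items = set(best_counts) | set(worst_counts)

# Best-minus-worst score: items chosen as "best" more often than as
# "worst" get positive scores, yielding a ranking on a common scale.
scores = {item: best_counts[item] - worst_counts[item] for item in items}

for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:+d}")
```

Because every response is a forced choice between items, the resulting scores separate the items by construction, rather than allowing a respondent to rate everything “agree”.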

I have used BWS in teaching evaluation studies over the last few years, and the results are promising: the method elicits much better discrimination across the items of interest.  Incidentally, while my research is quantitative in nature, I do feel that numerical scores need to be supplemented with qualitative feedback (from students and peers) to provide a richer picture.

During my time as a visiting researcher in the Department of Educational Research and Development, I am working on a few projects.  One of them is the application of BWS to evaluate SBE graduates’ experience of their degree programme.  Apart from being a new BWS application, the study will also allow me to investigate some currently unresolved BWS issues.  After all, no method is perfect or undisputed.

All recent SBE graduates will soon receive an email invitation to participate in this research.  If what I said above has resonated with you, please don’t delete that invitation but use the opportunity to contribute to this research project.

With that shameless bit of promotion, I rest my keyboard.

 

By Twan Huybers

Twan Huybers is a 1993 SBE alumnus.  As part of the International Management/Economics programme, he worked on a research project in Australia for six months in 1993.  After a brief return to the Netherlands for his graduation, he moved to Australia permanently and has worked in the higher education sector ever since.  He obtained his PhD in 2001 and is currently Associate Professor in the School of Business at the Canberra campus of the University of New South Wales.  He is visiting the Department of Educational Research and Development at SBE between August and December 2013.
