What can rankings teach us about a school’s value?

“Over the years rankings have developed into a very powerful marketing and benchmarking tool that cannot be ignored,” says Susanne Raeder, Ranking Officer and Quality Assurance Officer for the School of Business and Economics (SBE).

But do rankings always provide an accurate assessment of a school’s value? We asked Raeder to tell us how to best read and use academic ranking figures.

Q: Ms Raeder, what are your responsibilities as Ranking Officer for SBE?

I am mostly concerned with bachelor’s and master’s education rankings, and to a lesser extent with postgraduate education and research rankings. SBE participates in, among others, the national Elsevier and Keuzegids rankings, as well as the international CHE ranking (a German ranking) and the Financial Times rankings. As the contact person for the various ranking agencies, I coordinate on-time data delivery, and I analyze and communicate ranking results once they are published.

Q: Rankings have gained increased importance in recent years. What is their function?

Their function is twofold. First, rankings can be used to benchmark universities, schools or programmes against each other and to serve as a progress indicator. Second, as prospective students have become increasingly selective in their choice of school or programme, rankings can facilitate their decision-making process by giving them an overview of the ‘best’ schools or programmes – or rather the schools or programmes that best match their wishes and expectations.

In times of information overload, people seem to feel the need for some pre-structured and pre-selected information. Rankings can provide that. The increasing competition among universities for resources is reflected in the benchmarking aspect of rankings.

Q: What are the most important criteria used in education rankings?

Among the common criteria are study programme content and organization, staff and student composition, facilities, learning assessment, study success, contact with the business world and career success of graduates. The data for these categories are gathered through student and graduate surveys, expert opinions and data delivered by the schools or programmes surveyed.

Q: What is the impact of the different stakeholders on rankings results?

All stakeholders are important to us – whether they are students, alumni, staff or business partners – and we encourage them to share their opinions. They can do this either directly through the various channels provided in our network, or indirectly, for example by taking part in the ranking surveys mentioned earlier. It is important to us that our stakeholders take these surveys seriously. This also pays off for them: the higher our school or programme is ranked, the better it is for their résumés.

Q: Rankings have also been criticized. Can you explain this in more detail?

There are many rankings worldwide and quite a few are well respected. But rankings do indeed attract considerable criticism, mainly in relation to inadequate or incomplete methodology. The criteria are perceived as too general or too biased to give an accurate and realistic picture of universities. The very diverse environments in which schools or programmes operate make it difficult to compare them through rankings. This problem is less visible in national rankings, where academic environments and structures are at least more comparable. International rankings, however, can be quite biased in the way criteria are set up, the way the data to be delivered are defined, or the way survey questions are formulated.

Q: Do you have an example of this?

Take the example of career success, which is often measured through graduates’ annual salaries. When I am asked to quote my salary, the first thing that comes to mind is the figure I see on my bank account at the end of the month, not my annual salary. Moreover, when calculating my annual salary, I should also take into account the allowances I receive for Christmas and other holidays, but these are often forgotten. In other parts of the world, people immediately know the correct figure for their annual salary without having to calculate it. A cultural bias, you might say.
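To give a feel for the size of this bias, here is a minimal sketch of the arithmetic, assuming a hypothetical monthly gross salary and the Dutch statutory 8% holiday allowance; the figures are purely illustrative and not taken from the interview or any ranking:

```python
# Illustrative sketch: how a survey answer can understate annual salary.
# Assumptions (not from the interview): a hypothetical monthly gross of
# EUR 3,000 and the Dutch statutory holiday allowance of 8% of annual gross.

MONTHLY_GROSS = 3_000.00       # hypothetical monthly gross salary
HOLIDAY_ALLOWANCE_RATE = 0.08  # Dutch statutory minimum "vakantiegeld"

# What a respondent who simply multiplies the monthly figure reports:
reported = 12 * MONTHLY_GROSS

# The actual annual gross, including the holiday allowance:
actual = 12 * MONTHLY_GROSS * (1 + HOLIDAY_ALLOWANCE_RATE)

understatement = (actual - reported) / actual
print(f"Reported: EUR {reported:,.2f}")        # EUR 36,000.00
print(f"Actual:   EUR {actual:,.2f}")          # EUR 38,880.00
print(f"Understated by {understatement:.1%}")  # 7.4%
```

A systematic gap of this size, repeated across a whole cohort of respondents, could plausibly move a programme up or down in a salary-based ranking.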

The definition of career services can also lead to biased results, because it differs per country. Here in Maastricht, we offer career services to support students, but the actual contact with companies takes place in a more indirect manner, through study association activities such as company visits or career days, for instance. The question is then whether students or graduates who fill out surveys classify these activities under the label of career services. Probably not, I would say. Many students also do internships during their studies and are quite successful in using them as career accelerators, but here too, do they report them as such? These seemingly minor issues can make a huge difference in the ranking results.

Q: Regardless of the general pros and cons, what does SBE learn from or do with rankings?

As explained earlier, rankings are, despite all criticism, a powerful tool that can influence a school’s reputation quite a bit. That’s why we dedicate a lot of time to learning from ranking results. Our commitment goes mainly into internal quality assurance. In fact, rankings are part of our internal quality assurance system, which is also reflected in my function: besides being Ranking Officer, I am primarily Quality Assurance Officer. My role is to read ranking results as signals of best practices or of possible areas for improvement. We usually combine these signals with signals derived from other surveys or evaluations to get a better overall picture of our school. Once, for example, after identifying a signal that didn’t quite meet our expectations, we established a small expert task force to take a deeper look at the issue.

Q: What is your final advice about rankings?

Rankings are often quite subjective, but if the results are used properly they can be valuable, especially for drawing lessons in terms of quality assurance.

 
