

Do rankings measure the quality of teaching?

Josep Anton Ferré Vidal - Director of AQU Catalunya
Sometimes when I'm in class I use a magic wand. Not to do the things they do in Harry Potter, but for virtual experiments that help me explain things, such as Archimedes' principle. Let's leave the discoveries of the Greek scientist for another day, however, and I'll tell you what happened to me today on the high-speed train back from Madrid, when I used my wand to improve the position of Catalan universities in the world rankings.

I did four experiments. First, with a wave of my magic wand, no student would fail any exams that academic year, i.e., they would pass all of their enrolled credits. Surprisingly, this didn't improve the position of any Catalan university in the rankings! Next, I conjured a zero drop-out rate across all courses. The effect was the same: no improvement in the rankings! My third attempt was to see the effect of an entire cohort of students being awarded their degrees without having to repeat a single course... and that had no effect either. Maybe I was asking for too much, and it would be enough to summon the powers of nature to help students finish their degrees within just one extra year of study. Still no effect!

I was getting concerned. The four indicators I was dabbling with (achievement rate, drop-out rate, efficiency rate and graduation rate) are the ones set out in Spanish legislation (Royal Decree 1393/2007). They are interpreted and explained in more detail in the document drawn up by the QA agencies in the REACU network (AQU Catalunya, ANECA, ACSUG, AGAE, ACSUCYL, etc.), which was handed today to representatives of the Ministry of Education, the Autonomous Community authorities and the universities at the meeting to set up the Technical Committee for the follow-up and accreditation of recognised degrees. If these are the minimum goals that have been set in common for the follow-up of degrees, how is it that, even if we achieve the highest values, our position in the rankings doesn't improve at all?

Have we made a mistake somewhere? No, I don't think so, but that's not all. To do things well, from now on we need to measure not just the outputs (outcomes) of the educational process, but the inputs as well. For example, the class hours scheduled for each course (lectures and practicals) and the number of students per group in each activity. And the teachers who are giving them, are they part-time staff, post-doctoral fellows or tenure-track lecturers struggling to secure a permanent post? And what about laboratories and libraries, what are they like? Ah, and timetables, are they dense or spread out? What assessment methods are used? And Moodle... are teachers putting enough materials on Moodle?

If these things aren't monitored, then when drop-out rates, graduation rates and so on are not what we expect them to be, we'll never know whether we are making all the necessary effort, or whether even more funding and resources, perhaps currently used for other things, need to be set aside for this. Measuring the different outputs of the process, keeping them above a threshold value and being able to control certain inputs are, in fact, the objectives of the internal quality assurance system that each programme develops under the AUDIT programme.

So how are we doing in the rankings? I've gone over things once again. Most rankings fundamentally measure a certain type of research. Unfortunately, the poor students aren't important enough for rankings. Of course, they're not the only ones left out; there's no ranking that measures anything regarding the third mission, or the universities' capability to transform teaching into a social and economic value for the local community. Both the first and third missions are more difficult to measure and, in fact, not very much importance is attached to them in a teacher's curriculum.

Counting the number of students, articles in indexed journals and Nobel prizes is easier. If we put all the Catalan universities together, we'd have more students, more articles, and we'd climb the Shanghai ranking, but there would still be no Nobel prize. Maybe all this would make us happier, though we'd be fooling ourselves to think that by being bigger we'd be any better. Rankings are neither good nor bad. They're like those soft drinks that make you put on weight and are unhealthy: one should really read the label first to see what they're made of. Likewise, one needs to read carefully about the criteria and indicators that rankings are based on before paying any real attention to them. If in doubt, drink water, or maybe even a glass of wine, which always goes down well!

Generalitat de Catalunya

Via Laietana, 28, 5a planta 08003 Barcelona. Spain. Tel.: +34 93 268 89 50

© 2010 AQU Catalunya - Legal number B-21.910-2008