Are university rankings reliable?
The parameters used do not necessarily reflect quality. Large institutions benefit from what is essentially a marketing exercise.
University rankings are a standard feature in most countries. The rankings resemble a football league table and are almost always read like one. Rank seems to be the only thing that counts, with relative position in the table attracting more attention than the processes by which the rankings are produced.
The declared goal of every ranking agency is to assess quality using a set of indicators that may overlap with one another. Most ranking systems follow a three-part process: first, data are collected on the indicators; second, the data for each indicator are scored; and third, the scores from the indicators are weighted and aggregated. Rankings, therefore, are an aggregation of indicators into a single total score. This weight-and-sum approach passes the common-sense test easily and thereby makes university ranking highly marketable.
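As a concrete illustration of this three-step process, the following Python sketch aggregates scores for two hypothetical institutions; the indicator names, scores and weights are invented for illustration and are not those of any actual ranking agency.

```python
# Illustrative weight-and-sum aggregation. All names, scores and weights
# are hypothetical, not those of any actual ranking agency.

# Step 1: data collected on each indicator (already scored on a 0-100 scale here).
universities = {
    "University A": {"research": 82, "reputation": 75, "international": 60},
    "University B": {"research": 70, "reputation": 88, "international": 72},
}

# Steps 2 and 3: each indicator score is weighted and the weighted scores are summed.
weights = {"research": 0.5, "reputation": 0.3, "international": 0.2}

def total_score(scores, weights):
    return sum(weights[k] * scores[k] for k in weights)

ranking = sorted(universities,
                 key=lambda u: total_score(universities[u], weights),
                 reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(total_score(universities[name], weights), 1))
```

Changing either the set of indicators or the weights in this toy example changes the ordering, which is precisely the sensitivity discussed below.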
Chief criteria
Rankers typically choose indicators relating to learning inputs, research, outcomes and reputation. Each indicator is seen as a reasonable proxy for quality and, suitably weighted and aggregated, the indicators together constitute a plausible, holistic “definition” of quality. By selecting a particular set of indicators and assigning each a given weight, the authors of these rankings often impose a specific definition of quality on the institutions being ranked. Intriguingly, there is hardly any agreement among the authors of these rankings as to what indicates quality, even though the choice of indicators and the weight given to each make a considerable difference to the final output.
Additionally, rankings are based on whatever data are convenient to collect. The result is often that ‘teaching quality’, a particularly relevant indicator, gets excluded because obtaining independent, objective measures of it is difficult, expensive and time-consuming. ‘Measured institutional quality’ is therefore not immutable: an institution’s ranking is largely a function of what the ranking body chooses to measure. No wonder rankings have been met with a mixture of public enthusiasm and institutional unease. Very few league tables do a good job of normalising their figures for institutional size or of using a “value-added” approach to measuring institutions. As a result, they tend to be biased towards larger institutions and institutions with good “inputs”.
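A hedged illustration of this size bias, with made-up numbers: on a raw count the larger institution wins, while a simple per-faculty normalisation reverses the picture.

```python
# Hypothetical illustration of size bias: raw citation counts favour the
# larger institution, while a per-faculty (size-normalised) figure does not.
institutions = {
    "Large U": {"citations": 50_000, "faculty": 2_000},
    "Small U": {"citations": 12_000, "faculty": 300},
}

for name, d in institutions.items():
    per_faculty = d["citations"] / d["faculty"]
    print(f"{name}: raw citations = {d['citations']}, "
          f"citations per faculty = {per_faculty:.1f}")
# Large U leads on the raw count (50,000 vs 12,000); Small U leads once
# size is accounted for (40.0 vs 25.0 citations per faculty member).
```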
Big impact
Underlying the weight-and-sum methodology is the first assumption that all the indicators are mutually supporting and that they all contribute, though not necessarily in equal proportion, to the measurement of academic excellence. In other words, the relationships between the indicators are assumed to be additive. Related to this is the second assumption that the indicators compensate for one another, so that a weakness in one indicator is made good by strength in another; for instance, having more international students can compensate for a poor showing on citations. Third, it is assumed that summing raw scores drawn from distributions with different standard deviations does not distort the overall result; in practice, the indicators with the widest spreads dominate the total.
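The following sketch, using invented numbers, shows how that third assumption can fail: when raw scores from a wide-spread indicator and a narrow-spread indicator are simply summed, the ordering is driven almost entirely by the wide-spread indicator; standardising each indicator first (here with z-scores) removes that hidden weighting.

```python
import statistics

# Hypothetical raw scores for two indicators across five universities.
# Indicator X has a much larger spread than indicator Y, so summing raw
# scores lets X dominate the total even with nominally equal weights.
indicator_x = [10, 30, 50, 70, 90]    # wide spread
indicator_y = [52, 51, 50, 49, 48]    # narrow spread, opposite ordering

def z_scores(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

raw_totals = [x + y for x, y in zip(indicator_x, indicator_y)]
std_totals = [x + y for x, y in zip(z_scores(indicator_x), z_scores(indicator_y))]

# Raw totals follow indicator X alone: [62, 81, 100, 119, 138].
print("raw totals:", raw_totals)
# After standardisation the two equally weighted indicators pull with equal
# force (here cancelling out, because they run in opposite directions).
print("standardised totals:", [round(t, 2) for t in std_totals])
```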
Despite these shortcomings, rankings have had an impact far beyond what their arbitrary design would warrant. In today’s highly competitive culture, people try hard to outdo one another in almost everything, and universities are not spared this questionable approach. The increasing marketisation of higher education, coupled with greater mobility of students, has created a mindset in which perceived status and reputation are seen as important marketing tools.
These concerns should not be dismissed lightly, because consumers of rankings have no way of knowing that what they get is often not what they have been promised. University ranking has to be raised to the level of rigorous scientific research. Ranking agencies must (i) clearly spell out what constitutes quality; (ii) empirically identify minimally overlapping indicators to measure it, one possible check for which is sketched below; (iii) assign weights in proportion to the relative importance of each indicator; and (iv) figure out ways to actualise the given weights without prejudice or bias.
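One possible way to act on point (ii), sketched here with hypothetical data, is to compute pairwise correlations between candidate indicators and treat highly correlated pairs as overlapping; this is only one of several approaches a ranking agency might take.

```python
import statistics

# Hypothetical indicator scores for six institutions. A high pairwise
# correlation suggests two indicators largely measure the same thing and
# are candidates for merging or dropping (point ii above).
indicators = {
    "reputation":    [90, 80, 70, 60, 50, 40],
    "citations":     [88, 79, 72, 58, 52, 41],   # tracks reputation closely
    "student_staff": [15, 22, 18, 30, 25, 28],
}

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

names = list(indicators)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(indicators[names[i]], indicators[names[j]])
        print(f"{names[i]} vs {names[j]}: r = {r:.2f}")
```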
The writer was dean and director-in-charge, IIM-Lucknow, and director, Jaipuria Institute of Management.
Source | Business Line | 26 July 2017
Regards!
Librarian, Rizvi Institute of Management