1. Introduction
World rankings are conducted by various organizations, media outlets, and academic bodies. They rank Higher Education Institutions (HEIs) by assessing faculty, research, graduates, income, and reputation using different methodologies and indicators. Surveys indicate that ranking lists play an important role in students' choice of HEIs [1], and several studies have shown that rankings matter for the global comparison of HEIs [2] [3] and influence the decision making of stakeholders and funding agencies [4]. However, concern about ranking accuracy centers on the use of easily quantifiable rather than genuinely important indicators [5] [6], and some reservations relate to securing ranking positions through dubious practices and data manipulation [6] [7] [8]. Such doubts reflect the lack of consensus about rankings among academics. A comprehensive review of the ranking literature published during the past fifteen years has been conducted. Published data were studied to present a detailed description and comparative view of ranking systems' methodologies, indicators, and data sources, and a thorough analysis was carried out to establish sources of controversy, inherent flaws, and apparent pitfalls. This treatise offers academics and stakeholders insight into ranking importance, a comparative view of methodologies, indicators, and data sources, and an analysis of the controversy through a depiction of ranking flaws and pitfalls.
2. Ranking Importance
Publicity has empowered rankings to dominate the Higher Education (HE) arena and shape opinion on the reputation of HEIs. Rankings matter to HEIs and stakeholders in several ways. They feed into HEI planning, and their indicators are used to gauge institutional success [9]. A ranking position above competitors helps an HEI build a robust global image that improves recruitment and funding [10]. Policy makers within HEIs consider ranking indicators a driving force of institutional progress and use them to set and pursue target benchmarks and to enforce academic change [11] [12] [13]. Governments also consider ranking results when comparing national HE to international benchmarks [14]. Ranking research results are used by HEIs as evidence of quality and cost-effectiveness in the pursuit of funding [15] [16] [17]. In addition, HEIs consider ranks before establishing academic cooperation, and the positive images created by rankings help enhance partnerships and collaborations [18] [19] [20]. These factors have collectively created a demand for high-quality research data, which often lack credibility and validation [21] [22] [23] [24]. Problems of research quality assessment pertain to transparency, and studies indicate that accuracy in reporting research output is essential for meaningful assessment [24] [25].
3. Ranking Systems
World rankings include, among others, the Leiden Ranking, the Nature Index, and the Reuters Ranking, which publish annual lists using methodologies and indicators that assess research quality. The Leiden Ranking (Leiden University, Netherlands) ranks 1000 HEIs on annual science papers indexed in Web of Science [26] [27]. The Nature Index (Springer Nature Publishers) ranks 100 HEIs on annual science papers indexed in the Science Citation Index [28]. The Reuters Ranking (Thomson Reuters, USA) ranks 100 HEIs on science papers and patents to reflect the commercialization of innovation [29].
In a different approach, Webometrics (National Research Council, Spain) compiles data on internet structure, number of hyperlinks, web usage, and web-page versatility for 22,000 HEIs. Unlike other rankings, it assesses performance based on the application of information technology. Half of the total score is assigned to the number of hyperlinks, number of users, documents located by search engines on HEI websites, and publications [30].
In the Arab World, major challenges face HE due to socioeconomic and political issues. Nevertheless, HE is on an upward path, with many HEIs appearing on world rankings [31] [32], although many others still require substantial improvement in quality and relevance. The Center for World University Rankings (CWUR) in the United Arab Emirates is the Arab ranking body, with an annual list of 2000 HEIs assessed on education quality, student training, and the number of science articles verified by Clarivate Analytics [33]. It is worth underlining CWUR's attention to education compared to other rankings, which mainly focus on research and assess education through teaching commitment [33] [34]. However, despite the popularity of these rankings, HEIs focus on the influential Academic Ranking of World Universities (ARWU), Quacquarelli Symonds (QS), and Times Higher Education (THE) rankings.
The ARWU ranking (Jiao Tong University, China) ranks 1000 HEIs in general and subject-specific lists [35]. It assesses research quality by Nobel Laureates, Fields Medalists, research citations, and publications in Science and Nature, with data from Thomson Reuters (Table 1). It considers publications indexed in the Science Citation Index (SCI) and Social Science Citation Index (SSCI). The top institution is given a score of 100, and every other HEI is scored as a percentage of the top score [36]. ARWU is criticized for assigning 60% of the score to research quality and only 10% to education quality, and for biased indicators such as Nobel Laureates and Fields Medalists [37].
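The arithmetic of this normalization is simple; the following minimal sketch (with made-up values, not real ARWU data) illustrates how indicator scores scale against the top performer:

```python
def top_normalized_scores(raw):
    """Scale raw indicator values so the best HEI scores 100 and
    every other HEI is a percentage of that top value, as ARWU's
    per-indicator scoring is described [36]."""
    top = max(raw.values())
    return {hei: round(100 * value / top, 1) for hei, value in raw.items()}

# Hypothetical raw publication counts for three institutions:
print(top_normalized_scores({"HEI-A": 480, "HEI-B": 240, "HEI-C": 96}))
# {'HEI-A': 100.0, 'HEI-B': 50.0, 'HEI-C': 20.0}
```

One consequence of this design is that scores are relative: an HEI's score can fall simply because the top performer improved, even if its own raw output is unchanged.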
The QS ranking (Quacquarelli Symonds, UK) has general and subject-specific lists approved by the World Ranking Observatory [38]. It ranks HEIs on mission, research, teaching, and graduate employability using peer review, Faculty:Student Ratio, citations, employer reputation, and globalization (Table 2). Criticism of QS pertains to reputation assessment by subjective survey methodology [39]. Teaching commitment assessed by Faculty:Student Ratio is also inadequate for assessing education quality, as it does not reflect facilities, resources, and student support [39]. The citations-per-faculty indicator, obtained via the Thomson Reuters and Scopus databases, is another matter of concern: Scopus includes more non-English-language journals than Thomson Reuters, and mixing data from these two sources can yield different citation values [15]. QS assigns 10% of the score to international student and international faculty ratios to reflect institutional globalization, an indicator thought to be inadequate because these ratios are liable to temporal variation [40] [41].
Table 1. ARWU ranking indicators.

The THE ranking (Times Higher Education magazine, UK) publishes a list of 200 HEIs using data shared by HEIs and Thomson Reuters, excluding HEIs with no undergraduate programs and those with an annual research output of fewer than 1000 articles [42] [43] [44]. Indicators cover teaching and research quality, citations, reputation, and income. The reputation indicator is assessed by a Thomson Reuters survey, and the citations indicator is calculated as the five-year mean number of citations per paper in Web of Science indexed journals (Table 3). Criticism of THE pertains to reputation assessment by subjective survey methodology [43]. The heavily weighted citations indicator disadvantages HEIs using languages other than English, since papers in other languages are difficult for search engines to trace, and its bias towards natural science disadvantages HEIs focusing on social science [45] [46] [47].
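As described, the citations indicator boils down to a simple mean. A minimal sketch, assuming a flat list of per-paper citation counts for one institution's five-year publication window (the field normalization and other adjustments used by the real indicator are omitted here):

```python
def mean_citations_per_paper(citation_counts):
    """Mean citations per paper over a publication window, the
    basic arithmetic behind a citations indicator as described
    above; returns 0.0 for an empty list."""
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

# Hypothetical citation counts for five indexed papers:
print(mean_citations_per_paper([12, 0, 7, 30, 1]))  # 10.0
```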
Although ARWU, QS, and THE concur in some features, they differ in several aspects. They agree on Thomson Reuters, SCI, and SSCI as data sources, but they differ in sponsors, partners, and methodology: ARWU has an academic sponsor and partner, while QS and THE have non-academic ones (Table 4). The numbers of HEIs initially assessed each year are 1200, 3400, and 2600 for ARWU, QS, and THE, respectively. Final lists include 1000 HEIs for ARWU and QS, and only 100 for THE (Table 5). The three rankings also define and value indicators differently. While graduate quality is assessed by Nobel Laureates and Fields Medalists in ARWU, it is assessed by graduate employability in QS and THE (Table 5). Faculty quality is based on awards and publications in ARWU, and on publications only in QS and THE. Education quality is assessed as teaching commitment by faculty awards, per capita performance, and Faculty:Student Ratio in ARWU, QS, and THE, respectively. Globalization is embedded in ARWU's peer review, while it is assessed by international student and faculty ratios in QS and THE (Table 5).
Table 3. Times Higher Education ranking indicators.
Table 4. Comparison of ARWU, QS, and THE ranking features.
Table 5. Comparison of ARWU, QS, and THE indicators.
However, it is important to reiterate that these ratios are liable to temporal variation, which makes them an inadequate measure [40] [41]. Finally, while graduate quality, faculty, education, and research are assigned different weights, all three rankings allocate 40% - 60% of the total score to research quality.
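Whatever the exact weights, each ranking's overall score is a weighted sum of its indicator scores. The sketch below uses hypothetical weights that echo the 40% - 60% research emphasis noted above; they are not the official weights of any ranking:

```python
# Hypothetical weights, not those of ARWU, QS, or THE.
WEIGHTS = {"research": 0.60, "education": 0.10,
           "graduates": 0.20, "faculty": 0.10}

def composite_score(indicator_scores):
    """Weighted sum of per-indicator scores (each on a 0-100
    scale), the basic arithmetic behind an overall ranking
    score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * indicator_scores[k] for k, w in WEIGHTS.items())

print(composite_score({"research": 90, "education": 60,
                       "graduates": 70, "faculty": 80}))  # 82.0
```

With research weighted this heavily, a modest change in the research score moves the overall score far more than a comparable change in education quality, which is at the root of the criticism of ARWU noted above.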
4. Ranking Controversy
Rankings dominate the HE arena and have been commercialized through the entanglement of ranking organizations with world media [6] [48] [49]. The ranking debate also occupies a large body of HE literature [6] [37] [50] [51] [52]. This has led many HEIs to assign large numbers of staff to ranking-related activities and to sign consultancy contracts with specialized firms in pursuit of ranking [48] [53]. The argument in favour of rankings asserts that, in the absence of more appropriate tools, rankings are a good way to compare HEIs, and that rankings have led HEIs to improve management, recruitment, partnerships, and funding [12] [14]. Although reasonable, this argument ignores the fact that rankings are based on indicators that do not cover all performance aspects [5] [36]. Controversy pertains to cases where ranking moves to the forefront of planning and policies, turning it into a threat, with HEIs more interested in achieving rank than in improving quality [49] [54] [55] [56] [57] [58]. Controversy also emanates from transparency issues, possible data manipulation, and the misuse of rankings within HEIs for matters such as faculty promotion [16] [59]. Further, as HE authorities become more interested in rankings, significant resources are directed to certain HEIs while limited support is given to others. The basic focus of rankings on research quality also diminishes the role of HEIs in community service and downgrades those with emphasis in this field [15] [16]. Some rankings do not correct for institutional size, leading large HEIs to rank higher than small ones with similar research quality [60]. Rankings also assess HEI reputation by subjective surveys in which respondents tend to favour certain HEIs due to personal experience or acclaim [37]. This reflects negatively on HEIs with less recognition but meaningful contributions to stakeholders and society. Similarly, assessing HEIs by alumni stature is inappropriate since it does not reflect job satisfaction, academic freedom, equal opportunity, or governance [37]. Despite this controversy, it would be unwise to assume that rankings will lose their importance in the foreseeable future. Rankings are here to stay, and HEIs and stakeholders should be aware of their limitations.
5. Ranking Flaws
Publication of ranking lists is met with anticipation by HEIs and stakeholders alike: by students comparing HEIs, by HEIs seeking to enhance recruitment and funding, and by funding agencies aiming to direct funds properly. Despite this anticipation, rankings have their flaws. Annual ranking lists are unsatisfying for HEIs unable to promote programs due to limited resources, and for students who may find satisfaction in HEIs that excel in aspects of their interest but have less international outlook. In addition, different indicators and methodologies produce different ranking positions for the same institution on different rankings. Such differences in rank for the same HEIs within the same country in the same year make it difficult for stakeholders to determine the true standing of a particular institution (Table 6).
Moreover, ranking lists do not always reveal true differences between HEIs. This is illustrated by comparing the ranking positions and indicator scores of the top ten HEIs on the 2019 THE list (Table 7). First, statistically significant differences are not apparent in overall score, teaching and research quality scores, or citations among the top five HEIs (Table 7). Second, the teaching quality score of the institution ranked fifth well exceeds that of the preceding four HEIs (Table 7). Third, the income-from-industry score does not conform well to ranking: two HEIs with high income from industry occupy the fourth and fifth positions, while a low-income institution occupies second position (Table 7). Finally, the institution in ninth position has the highest international outlook (Table 7). Therefore, differences between HEIs are not clear-cut, and it is up to stakeholders to decide which HEIs suit their needs by considering scores for aspects of interest rather than just the general ranking.
Table 6. Ranking positions of five British HEIs in 2019.
Table 7. Ranking positions and indicator scores of top ten HEIs on THE 2019 list.
If ranking lists are viewed without consideration of the underlying scores, stakeholders may base opinions on impression rather than perception. Further, since no ranking considers all HE aspects, some rankings may be more appropriate for certain stakeholders than others; based on the aspects included, stakeholders should consider the rankings that best represent their needs. Finally, subtle HE aspects are difficult to assess. Education is not only about reputation for students or facilities for researchers. An important element is selecting excellent HEIs with costs that students can afford and facilities that researchers can use. Education is also about an amicable environment that encourages students towards lifelong learning and researchers towards exploration and discovery.
6. Ranking Pitfalls
Many HEIs have developed a sense of urgency to prove excellence, allocating resources to planning in pursuit of ranking. However, despite their importance, rankings have inherent pitfalls that should be acknowledged by ranking organizations, HEIs, and stakeholders [37] [61].
Adopting a generic approach to assessment, in which one indicator stands in for a group of aspects, is one of the pitfalls of rankings. Examples include assessing teaching quality by Faculty:Student Ratio, which does not reflect education facilities and student support [37]. Mixing citation data from different sources can likewise produce inconsistent citation values [15]. This generic approach should be avoided by diversifying indicators and unifying data sources. Rankings should also correct for institutional size, since size-dependent indicators are useful for assessing large HEIs with ample resources, while size-independent indicators are suitable for those achieving success with limited resources [61] [62] [63], as the sketch below illustrates.
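A minimal illustration of this size-correction point, with made-up figures: absolute output (size-dependent) favours the larger HEI, while per-faculty output (size-independent) treats two HEIs of similar research quality alike.

```python
# Hypothetical figures for two HEIs of similar research quality.
heis = {
    "LargeU": {"papers": 5000, "faculty": 2500},
    "SmallU": {"papers": 1000, "faculty": 500},
}

for name, data in heis.items():
    absolute = data["papers"]                        # size-dependent
    per_faculty = data["papers"] / data["faculty"]   # size-independent
    print(f"{name}: {absolute} papers, {per_faculty:.1f} per faculty")
# LargeU: 5000 papers, 2.0 per faculty
# SmallU: 1000 papers, 2.0 per faculty
```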
In addition, databases and surveys are two methodologies that produce different assessment results. For example, databases clearly define university hospitals and medical schools, while survey participants find it difficult to differentiate them due to the diverse public perception of such medical facilities. Accuracy in dealing with assessment results produced by different methodologies is essential for useful comparisons.
Stakeholders should also acknowledge that the rank of an institution can differ across lists due to different indicators and data sources, and such differences should not be mistaken for a decline in performance [51] [57] [64]. Additionally, aspects not covered by indicators should not be overlooked by stakeholders, since rankings generally focus on aspects that are relatively easy to quantify [36]. Some rankings have a relatively narrow focus on specific aspects while others have a broader scope, and stakeholders should be aware that no ranking covers all performance aspects. On mutual terms, the relationship between rankings and HEIs should be based on transparency and an understanding of ranking aims and purposes: rankings should clarify methodology and data sources, and HEIs should make authentic data accessible. The more transparent the relationship, the more useful the results are for stakeholders. Similarly, rankings and stakeholders should be aware that HEIs are unique entities with missions and strategies drafted to match their focus and context. Considering HEI focus and context should be addressed by rankings through the use of subject-specific indicators, and by stakeholders through the use of rankings that apply such indicators [2] [3].
7. Conclusions
Review of the published literature indicates that ARWU, QS, and THE are prominent among global rankings. Rankings assess HEIs using different performance aspects, methodologies, indicators, and data sources, with a general focus on research output and quality. Ranking importance relates to improved planning and quality within HEIs, which can reflect positively on recruitment and funding. It can be emphasized that ranking research results are used by HEIs for building positive images, as evidence of research quality, and for establishing academic partnerships. HE authorities also use rankings to align national education with international benchmarks.
Despite their importance, rankings have proved to have inherent flaws and pitfalls that cause controversy and concern. Concern pertains to ranking becoming the sole driving force for HEIs, making them more interested in achieving rank than in improving quality. Concern also emanates from transparency issues, data manipulation, and the misuse of rankings within HEIs. The results of this study also indicate that rankings use disputable subjective methodologies and mix data from different sources, causing discrepancies and making ranking results less useful for stakeholders. This study also revealed that ranking flaws pertain to differences in methodology and indicators that result in different ranks for the same HEIs on different rankings, which makes it difficult for stakeholders to determine the true standing of a particular institution. Flaws also pertain to the lack of clear differences between HEIs on ranking lists, which requires stakeholders to consult score tables for appropriate interpretation. Stakeholders should also acknowledge that subtle HE aspects of educational environment and inspiration are difficult for rankings to assess. Finally, despite their importance, the inherent pitfalls of rankings should be acknowledged by ranking systems, HEIs, and stakeholders.