University and College Rankings and the Prestige Spiral in Higher Education
The US News rankings came out yesterday, and the Times Higher Education (THE) rankings were released two weeks ago. These university rankings are not just for amusement: they dictate the fortunes of institutions and key strategic planning decisions at many universities. As documented by the Chronicle of Higher Education, a survey of strategic plans from 100 public universities found that about 25% of the plans “explicitly affirm the importance of rising in the national rankings.” University and college rankings, along with selectivity, enrollment, six-year graduation rates, and support levels from alumni and other funding sources, are perhaps the primary information that provides feedback on the success of a particular college or university. These indicators are very sparse and provide only the most rudimentary information to students and their families in deciding which institution to choose.
Considering that US higher education is an industry that includes over 4,300 accredited public and private institutions, enrolls over 18 million students, employs 3.6 million people, and garners over $410 billion in revenues and donations, a richer and more rational set of quantitative and qualitative data is needed. Universities and colleges often launch internal surveys and self-studies, and conduct extensive reviews of their curricula and their effectiveness with an accreditation team. This information is indeed rich and qualitative, but aside from basic recommendations and a summary of accreditation results, it is kept internal. To the outside world, including students and their families, very little information is provided for selecting an institution on the basis of its curriculum and teaching mission, aside from glossy brochures and institutional websites.
It is important to stress that college and university rankings not only measure higher education institutions but shape them too. By dividing institutions into categories, the US News rankings lump institutions into a few basic groups, and then begin the process of comparison within groups, bringing institutions into greater similarity through these comparisons. These groups include National Universities, National Liberal Arts Colleges, Regional Universities and Colleges, and several other categories that are based primarily on institution size and region. The US News rankings at least separate liberal arts institutions from research universities, which helps identify a group of institutions whose mission is primarily undergraduate education rather than research productivity. The other major rankings, THE and QS, simply rank universities on a mix of their research impact and place some weighting on teaching, based primarily on “reputational surveys.”
In all of these rankings, the data used are quite sparse and give very little detailed consideration to teaching quality and student outcomes. In the Times Higher Education rankings, for example, the largest factor for assessing teaching is the “reputation survey” (15%), followed by statistics such as the academic staff-to-student ratio (4.5%) and measures of doctorates awarded and institutional income (8.25%). The remaining 72% of the ranking is based on research, citations, “international outlook,” and “industry income,” which are mostly irrelevant to the quality of education for undergraduates. The US News rankings employ an algorithm that includes graduation and retention rates (22%), social mobility (5%), graduation rate performance (8%), and undergraduate academic reputation based on a peer assessment survey (20%), which gives a larger weighting to student outcomes than the THE or QS rankings. In response to criticism that the US News rankings simply locked in wealthy schools at the top, US News adjusted its algorithm in 2019 to include “social mobility measures” that track graduation rates and performance for Pell-eligible students. These adjustments are most welcome, but ranking bodies and universities themselves can do better to publicly measure and rate their success in achieving their institutional missions and advancing undergraduate student learning on their campuses.
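To make the weighting scheme concrete, a composite ranking score is essentially a weighted sum of normalized indicators. The sketch below is purely illustrative, not US News's actual formula: the weights are the partial US News weights quoted above (the remaining 45% of the algorithm is omitted), and the indicator values and their 0–100 scaling are invented for the example.

```python
# Illustrative sketch of a composite ranking score as a weighted sum.
# Weights are the partial US News weights quoted in the text; the
# remaining 45% of the algorithm is omitted here.
weights = {
    "graduation_retention": 0.22,       # graduation and retention rates
    "social_mobility": 0.05,            # Pell-eligible student outcomes
    "graduation_rate_performance": 0.08,
    "peer_assessment": 0.20,            # academic reputation survey
}

def composite_score(indicators):
    """Weighted sum of indicator values, each assumed pre-scaled to 0-100."""
    return sum(weights[k] * indicators[k] for k in weights)

# Hypothetical institution with invented indicator values:
example = {
    "graduation_retention": 90.0,
    "social_mobility": 60.0,
    "graduation_rate_performance": 70.0,
    "peer_assessment": 80.0,
}
print(round(composite_score(example), 2))
```

Note how the structure itself encodes the article's critique: whatever is left out of the weight dictionary, such as direct measures of teaching quality, simply cannot move an institution's score.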
A top ranking is a powerful signal to students, who then compete for admission to the top-ranked institutions. This signal triggers higher selectivity and higher donation rates, which in turn raise the rankings and the competitive pressure for admission, creating what might be called a “prestige spiral.” These competitive pressures on universities and colleges are only intensifying, bending institutional planning toward rising in the rankings and leaving many lower-ranked schools out of the limelight, with reduced enrollments and donations. The US News rankings, which play such a crucial role in higher education, are paradoxically determined by a small group of writers and researchers from an organization named after a now-defunct newspaper. These rankings shape the destinies of centuries-old institutions and strongly influence the decisions of millions of students and their families as they decide where to make what for many families is the largest investment they will ever make.
Since the top-ranked universities are ranked primarily on their research “impact” and funding levels, most families are making their decisions on information that is not primarily shaped by how the institution advances student learning and achievement. To gain in the rankings, an institution can only move forward by advancing its research impact, which favors a competitive race for better laboratories, more faculty publications, and more research grants. Naturally, undergraduate students, curriculum design, and other aspects of undergraduate education are left behind. The rankings game also pressures universities into conformity – creating what is sometimes termed “isomorphism” – as they all try to replicate the department structures, curricula, and practices of the top-ranked universities to gain in the rankings and gather more institutional prestige.
Perhaps one way to improve rankings and remove the conformity pressures of isomorphism is to rank institutions on how well they succeed in defining and achieving a unique and differentiated mission that shapes their approach to educating students. This process would also make the mission of an institution less a pro forma exercise and more a vital force in shaping academic programs and in providing a transparent vision of the kind of education the institution aspires to give its students. John Sexton, the former President of NYU, in his book Standing for Reason, has suggested we consider giving institutions something like a LEED rating on their educational program. Just as buildings are rated LEED Gold for their energy efficiency and sustainability, we could rate universities and colleges as Platinum or Gold based on their ability to articulate and accomplish their academic teaching missions. The rating would require institutions to articulate their unique mission and “value proposition” to the world, and then be assessed on the basis of this mission. Sexton also suggests this process would be linked to accreditation so that “each school would have to state its essential philosophy and purpose—its ratio studiorum—and how it aligns its various programs in service of that goal.” By requiring the university and college mission to have assessable and measurable components, this approach would place the educational program on the same footing as the research program of a university, with clear, transparent, and measurable outputs. It could help universities evolve in directions that are true to their own missions, just as they hope to help students grow and learn in their own unique and differentiated ways.
Ferrall, V. E., 2011, Liberal Arts at the Brink, Harvard University Press.
Sexton, John, 2019, Standing for Reason, Yale University Press, p. 142.