U.S. News & World Report released its 2024 “Best Colleges” ranking in September. Marketed as a guide for students in their college-selection process, the list is, in reality, a reputational ranking that rewards rich and selective institutions while saying little about the educational product they offer. Unsurprisingly, then, not much has changed in this year’s rankings: the new list looks more like the reshuffling of a feudal hierarchy than an actual competition among institutions working to be the best.
Because of the cost and time commitments involved in getting a degree from a higher-ed institution, the decision to attend college can be one of the most important of a young person’s life. The many rankings products ostensibly try to help with this decision. In addition to rankings, raw data about institutions are readily available from third-party sources like the College Board and the Department of Education. But none of these provide much help for students trying to figure out the quality of the educational product: how well students learn, how happy they are with their educational experience, and, perhaps most importantly, how well an institution aids its students in finding meaningful and financially rewarding employment after graduation.
Some rankings are starting to take these factors into account and change their methodologies in ways that help students select institutions on the basis of student outcomes. Forbes, for example, releases a yearly list of the top 650 undergraduate institutions based on a “consumer-centric approach,” according to Caroline Howard, director of editorial operations. The methodology for Forbes’s rankings skews towards the students themselves, heavily weighting such factors as alumni salary, student debt, and the student experience on campus.
The Foundation for Research on Equal Opportunity (FREOPP) has been an innovator in developing outcome-based rankings. Its 2021 analysis of both colleges and individual programs attempts to estimate a return on investment (ROI) for each course of study. FREOPP’s study helpfully points out that “28 percent of programs have negative returns on investment, meaning that students will be financially worse off for having participated in those programs.” The methodology is impressive, but it is limited to strictly financial outcomes.
The most prominent of U.S. News’s rankings competitors, the Wall Street Journal, dramatically changed its rankings methodology this year in a way that focuses on student experience and student outcomes. According to the ranking’s author, the newspaper uses “income data from the U.S. Department of Education’s College Scorecard, specifically the median salaries for graduates that received federal financial aid 10 years after enrollment.” Furthermore, the paper tries “to avoid the most common pitfall of comparing colleges to each other, perceiving colleges that input excellent students and output excellent students as somehow superior to colleges that input mediocre students and output good students.” Accordingly, the WSJ rankings emphasize student outcomes such as graduation rate, years to pay off net price, and salary impact versus similar colleges. The result is a heterodox and controversial ranking that includes some of the usual suspects in the top 20 but also such small private schools as Babson and Rose-Hulman Institute of Technology. Some Ivies rank as low as 67th.
One drawback of income-based student-outcome rankings is that they tend to reward colleges that focus on pre-professional studies like law and medicine and technology programs like computer science and engineering. It’s not clear that a student in a mediocre engineering program is getting a better education than a student in an excellent humanities program, whatever their salaries 10 years after graduation. Another drawback is that income-based outcomes may be confounded by individual student attributes. Mightn’t it be the case that students in the third quartile of SAT scores, from middle-class families, simply have more ambition and grit than other students, and therefore their college choices make those institutions seem better based on graduates’ incomes later in life?
Although outcome-based rankings are a vast improvement over the traditional reputational ones, there might be a better way to guide students to the right higher-education match, and to make the new outcome-based rankings more indicative of an institution’s educational performance. One addition to the formula that could help would be results from exit exams. Already used by some institutions and state systems, exit exams have great promise: a measure that showed how much students actually learned would be a significant enhancement to outcome-based rankings.
The first hurdle is designing a test that measures knowledge acquired rather than, like the SAT, general cognitive ability. That is the criticism leveled at both the Collegiate Learning Assessment and the California Critical Thinking Skills Test. Professor Richard Vedder has proposed a comprehensive assessment test, which he likens to a licensing or board-certification exam. Such a test would have to be mandated by accreditation agencies or the Department of Education so that it could be used across all colleges. Getting the test right would be complicated. Like all the measures discussed here, this one would be imperfect, but, taken in aggregate, a large number of outcome-based measures can give students some indication of an institution’s performance.
Even the Department of Education is interested in multi-factor outcomes to gauge college performance. As it correctly points out, “Since students have diverse goals … an institution at the top of one prospective student’s list may be at the bottom of another’s.” To that end, the department has created the “College Scorecard,” which serves as a data source for some of the rankings mentioned above. Yet the Scorecard itself identifies the problem with rankings that boil multiple factors down to a single number:
Separate metrics of institutional performance along each measured dimension of access, affordability, and student outcomes provide potential students and their families with a greater amount of information to form their own assessments of which college or university is the best choice given their goals. On the other hand, reviewing each metric along a broad range of information separately puts the burden of synthesizing that information on potential students, who may have difficulty making these tradeoffs without proper support. [Emphasis added.]
The Department of Education should be commended for compiling the data in the College Scorecard, which can be a resource for third parties to develop their rankings and performance indicators. While no one ranking will be perfect for every student, more and better data and new measures like exit-exam results can give students and parents a wider choice of ways to evaluate college performance.
Chris Corrigan was Chief Financial Officer at Andrew College (1998-2005), Emory College (2005-2008), and Armstrong State University (2015-2017).