Heritage Foundation’s College Ratings Should Be Welcomed to the Field

The information they provide will be of value to conservative families.

In September, the Heritage Foundation launched an interactive, web-based guide to help students “Choose College with Confidence.” It currently grants nearly 300 colleges and universities one of three designations: “great option,” “worth considering,” or “not recommended.” I spoke with a contributor to the project, Heritage fellow Jonathan Butcher, who emphasized the difference between rating and ranking. Ordinal rankings, such as the influential U.S. News & World Report list, “emphasize selectivity,” Butcher said. They largely reflect “where students go, not what they do when they get there.”

Heritage’s rating methodology, by contrast, is qualitative and subjective. Butcher called it “a package,” noting that criteria are “not weighted” in any algorithmic formula. Inputs include student responses to certain questions on the Foundation for Individual Rights and Expression’s college free-speech survey, letter grades given by the American Council of Trustees and Alumni’s “What Will They Learn” curriculum-evaluation project, ACTA’s dataset on administrative costs per student, and median post-graduation earnings published in the federal College Scorecard. Other measures use institutions’ own published information, such as the comparative number of politically leftist and conservative clubs on campus, the number of Diversity, Equity, and Inclusion (DEI) personnel, whether schools require “diversity statements” in hiring, and whether they have “bias response” teams or gender and ethnic studies departments.

Unsurprisingly, left-leaning commentators have criticized this ratings system. A recent Inside Higher Ed article quotes Akil Bello, a director at the anti-standardized-testing advocacy group FairTest, who noted that “all rankings are … a subjective assessment of quality.” Bello went on to call the Heritage Foundation “aggressively right-wing” and complain that “if the factors you evaluate can only be verified by like-minded individuals, it is not an objective ranking.” This seems rather Orwellian. All rankings are born subjective, but some are born more subjective than others. Similarly, Isaac Kamola of the American Association of University Professors and Trinity College, Connecticut, called Heritage “market fundamentalists” and the rating system “ideologically driven.” Fundamentalism, like subjectivity, may be in the eye of the beholder.

Butcher noted that press coverage of the ratings thus far wrongly “emphasizes ‘best schools’ in the sense meant by existing rankings.” Inside Higher Ed informs readers that “Harvard University got a red light. New College of Florida, Auburn University and West Virginia’s Appalachian Bible College … received greens.” Sardonic contempt is barely a subtext here: New College good, Harvard bad—LOL! Appalachian Bible—yuck!

These critics rather miss the point. Heritage’s ratings do no more than coordinate several data sources to address a few questions of interest to some prospective students as they shop for colleges. (For example, why not let high schoolers know that North Carolina’s Belmont Abbey accepts the Classic Learning Test and is noted for its honors college?) As the web page’s “methodology” subsection explains, the ratings began with recommendations from affiliated state-level think tanks “about which colleges … are good options for conservatives.” Butcher believes that many parents are now less interested in prestige value and more concerned with “factors such as cost, value, and proximity.” To some extent, value is always subjective, so the worth of third-party evaluations also varies. Some may view a consumer guide for conservative families with inputs from right-leaning sources as “market fundamentalism,” but it is better described as an indicator of marketplace diversity.

At least one left-liberal commentator has recently cited consumer and product diversity and the subjectivity of value against the putatively objective college-rankings industry. Colin Diver, a former dean of the University of Pennsylvania Law School and president emeritus of Reed College in Oregon, traces the distortive influence of the U.S. News & World Report “rankocracy” in his 2022 book Breaking Ranks. Diver draws on public databases, extensive scholarship, and anecdotal evidence from his administrative career to show how consumer rankings influence applicant decision-making and reshape institutional priorities. In 2021, a Chronicle of Higher Education reporter reviewed the strategic plans of the nation’s 100 largest public universities; roughly one in four expressly cited improved standing in consumer rankings as a governing institutional goal. This should surprise no one. As Diver demonstrates from longitudinal data such as UCLA’s Cooperative Institutional Research Program annual student survey, “experience [shows that] rankings get applicants’ attention and affect their behavior.” To attract applicants, institutions massage outputs with tactics such as enrolling academically weaker students in the spring to keep them out of the completion-rate data reported for fall cohorts. Often, administrators simply falsify reported data, as numerous well-publicized scandals at otherwise reputable institutions demonstrate.

Of greater concern, Diver argues, are the industry-shaping practices of even the most prestigious colleges and universities in pursuit of an algorithmically generated ordinal number. Because U.S. News & World Report’s formula gives a 22.5-percent weighting to graduation and retention rates—mostly calculated from raw numbers rather than from figures adjusted for students’ socioeconomic status—institutions favor wealthier applicants less likely to drop out. The five-percent weighting for alumni giving further incentivizes that pattern. A 10-percent weighting for financial resources per student encourages institutions to admit smaller classes and increase spending, heedless of commensurate tuition inflation. The 12.5-percent weighting for selectivity encourages them to recruit applicants they intend to reject and to award tuition aid to high test-scorers on the basis of merit rather than to low-income applicants on the basis of need. And so on.
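To make the incentive mechanics concrete, here is a minimal sketch of how a weighted composite of this kind behaves. The component scores, the 0-to-100 scaling, and the catch-all "other" bucket are hypothetical illustrations, not U.S. News's actual inputs or normalization; only the four weights come from the paragraph above. Because the score is a plain weighted sum, raising any single component (here, spending per student) lifts the composite regardless of what that change costs students.

```python
# Illustrative sketch only: a weighted composite in the spirit of the ranking
# formula described above. Component scores (0-100) and the "other" bucket are
# hypothetical; U.S. News normalizes and combines many more inputs than this.

WEIGHTS = {
    "graduation_retention": 0.225,  # graduation and retention rates
    "selectivity": 0.125,           # admissions selectivity
    "financial_resources": 0.10,    # spending per student
    "alumni_giving": 0.05,          # alumni giving rate
    "other": 0.50,                  # everything else, lumped together here
}


def composite(scores: dict) -> float:
    """Return the weighted sum of component scores, each on a 0-100 scale."""
    return sum(weight * scores.get(name, 0.0) for name, weight in WEIGHTS.items())


# Two hypothetical schools, identical except for spending per student.
school_a = {"graduation_retention": 85, "selectivity": 70,
            "financial_resources": 60, "alumni_giving": 40, "other": 75}
school_b = dict(school_a, financial_resources=80)

print(composite(school_a))  # 73.375
print(composite(school_b))  # 75.375 -- more spending per student, higher score
```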

Diver’s advice to trustees and administrators in response to the “rankocracy” is to stop completing peer-reputation surveys and not to publicize “illegitimate” rankings whenever their institutions receive favorable scores. Instead, they should highlight “assessments … that dovetail with [their] school’s distinctive mission” and allow equal public access to relevant institutional data for all third-party evaluators—not just the industry’s “800-pound gorilla,” U.S. News & World Report. Such policies would honor “the rich diversity of needs and expectations of those who seek a postsecondary education, and the wide range of institutions that [exist] to satisfy those desires.” Hear, hear! A market fundamentalist could not say it better.

Americans have a cultural penchant for unitary rankings. The urge to argue whether Tom Brady really is the GOAT, or whether Dan Marino might have won as many championships under Bill Belichick, is harmless fun. But a moment’s sober reflection should suffice to dismiss as farce any attempt to quantify and substantiate the claim that 44th-placed University of Rochester is a better choice for every applicant than 45th-placed Lehigh. Instead of massaging data outputs and setting institutional priorities in pursuit of such arbitrary rankings, colleges and universities should differentiate themselves by their distinctive missions, noteworthy programs, or other peculiar advantages. The University of Lynchburg’s Aaron Basko has written in the Chronicle of Higher Education of the need for “large umbrella narratives … under which all the substories of [a] college or university can thrive and find shelter.”

But it is precisely the paucity of such missional clarity that subjects administrators to the rankocrats’ tyranny in the first place. I am reminded of a conversation some years ago with a student who was then a high-school senior. When I asked how her college search was going, she responded with mild exasperation: “I receive admissions mailers every day, but how am I supposed to choose between them when they all say the same thing?” She was right. Reading college mission statements rapidly descends into an exercise in tedium. To prove the point, I asked ChatGPT to give me a one-paragraph mission statement for a new institution I plan to open, Cedar State College, a residential state liberal-arts college focused on undergraduate education. Here is the result:

Cedar State College is dedicated to providing a transformative undergraduate experience that combines rigorous academics, personalized attention, and vibrant residential life. Our mission is to foster intellectual curiosity, critical thinking, and social responsibility, empowering students to become engaged citizens and leaders in their fields. With a commitment to diversity, inclusion, and community, we offer a supportive environment where students from all backgrounds can thrive. At Cedar State, education goes beyond the classroom, preparing students for meaningful careers and lifelong learning in an ever-evolving world. Join us in shaping the future—one student at a time.

This is perfect, painfully so. I could have paid marketing consultants five figures to conduct a rigorous peer study with focus groups before drafting this statement, but a bot produced it in two seconds for free. Presumably, institutions do not adopt more substantively meaningful mission statements because they have nothing to say. It is easier to divert alumni giving toward merit aid for high scorers on the SAT in order to jump a few rivals in the rankings.

Heritage’s rating system currently covers only a modest sample of schools, though Butcher said the database will grow to nearly 1,000 institutions within two years, with plans eventually “to capture the entire [higher-education] landscape.” The value of this growing database is underscored by its emphasis on yield rate rather than selectivity—that is, on the share of admitted students who choose to matriculate rather than on the share of applicants admitted. In other words, the ratings reward institutions whose missions align with prospective students’ goals and priorities. They provide one information source among many that will be of value to some families as they navigate a complex and expensive choice. Call me a market fundamentalist, but that sounds like a good thing to me.

Samuel Negus is director of program review and accreditation at Hillsdale College.