Today there is a growing emphasis on accountability in higher education. This trend has taken its toll on U.S. News & World Report’s lists of the “best” colleges. The lists are still wildly popular, but some higher-education critics view them with skepticism because they are based on “reputation” and “input” measures rather than on indications of educational quality.
In this new era of accountability, Education Secretary Margaret Spellings is pressuring universities to come up with ways to measure the actual education that they provide – ways known as “student outcome” measures. Universities are working on this, but the process is slow. Some measures exist, such as the Collegiate Assessment of Academic Proficiency, but many schools are reluctant to reveal their scores. That reluctance seems to be a major reason why “College Portrait,” the online version of the large universities’ Voluntary System of Accountability, is not yet available.
One educator doesn’t think outcome indicators are all that difficult to find, however. Richard Vedder, an economist and former member of the Spellings Commission on the Future of Higher Education, has proposed an ingenious alternative to U.S. News’ rankings based on readily available information.
He and his “whiz kids” (his students at Ohio University) tapped data from Who’s Who in America, evaluations on the Web site RateMyProfessors.com, records of awards such as Fulbright grants and Rhodes scholarships, and graduation rates. With these measures, they created a list of more than 200 top schools, published in the May 19, 2008, issue of Forbes magazine.
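How might such disparate measures be folded into a single ranking? Here is a minimal sketch in Python, assuming a simple equal-weight average of ranks across the four measures; the field names, weights, and toy data are illustrative assumptions, not Vedder’s actual formula, which the Forbes article does not spell out.

```python
# Illustrative sketch only: shows how several per-student measures
# might be combined into one ranking. Not Vedder's actual method.

from dataclasses import dataclass

@dataclass
class School:
    name: str
    whos_who_entries: int    # alumni appearing in the Who's Who sample
    rmp_avg_rating: float    # average RateMyProfessors score (1-5 scale)
    major_awards: int        # Rhodes scholarships, Fulbright grants, etc.
    graduation_rate: float   # fraction of entering students who graduate
    enrollment: int          # enrollment, used for per-capita adjustment

def rank_schools(schools: list[School]) -> list[tuple[str, float]]:
    """Rank schools by averaging their ranks on four measures.

    Raw counts are divided by enrollment so large schools get no free
    advantage (the adjustment Vedder makes but, as noted below,
    Lovejoy's 1940 list did not).
    """
    measures = [
        lambda s: s.whos_who_entries / s.enrollment,
        lambda s: s.rmp_avg_rating,
        lambda s: s.major_awards / s.enrollment,
        lambda s: s.graduation_rate,
    ]
    avg_rank: dict[str, float] = {s.name: 0.0 for s in schools}
    for measure in measures:
        # Sort descending: rank 1 is best on this measure.
        ordered = sorted(schools, key=measure, reverse=True)
        for rank, s in enumerate(ordered, start=1):
            avg_rank[s.name] += rank / len(measures)
    # Lower average rank = better overall standing.
    return sorted(avg_rank.items(), key=lambda item: item[1])

# Toy data, entirely made up, just to exercise the function.
schools = [
    School("Alpha College", 12, 4.1, 8, 0.92, 5_000),
    School("Beta University", 40, 3.6, 15, 0.85, 30_000),
    School("Gamma Tech", 9, 3.9, 6, 0.88, 8_000),
]
for name, score in rank_schools(schools):
    print(f"{name}: average rank {score:.2f}")
```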
RateMyProfessors.com features students’ ratings of their professors at schools throughout the country. Such evaluations are not strictly a student outcome, but, like the highly touted National Survey of Student Engagement, they collectively reveal what students think of their college experience. The site has stirred controversy, but in the past year two studies [here and here] have found that the online comments correlate fairly well with the student evaluations that universities use as part of their faculty reviews.
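The basic idea behind such a comparison can be pictured simply: pair each course’s average RateMyProfessors score with its average score on the university’s internal evaluation, then compute the correlation between the two lists. A minimal sketch with invented numbers follows; the actual studies’ data and methods are, of course, more involved.

```python
# Hypothetical per-course averages: RateMyProfessors score vs. the
# university's internal evaluation score. All numbers are invented.
rmp_scores      = [4.2, 3.1, 4.8, 2.5, 3.9, 4.4]
internal_scores = [4.0, 3.3, 4.6, 2.8, 3.7, 4.5]

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A value near 1 would indicate the two sets of ratings move together.
print(f"r = {pearson_r(rmp_scores, internal_scores):.2f}")
```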
The most disputed element in Vedder’s list is undoubtedly the 5,220 randomly selected entries from Marquis Who’s Who in America. Many people view Who’s Who as a snobbish or even “vanity” publication. Undoubtedly, some people who receive invitations toss them away; others who accept may not be as notable as the listing implies. The publication does not disclose its specific criteria for selection, saying only that its goal is “current reference value” and that it considers “factors such as position, noteworthy accomplishments, visibility, and prominence in a field.”
At the same time, Marquis Who’s Who in America (the broadest in scope of the Marquis publications and therefore the most selective) is a respectable publication that attempts to identify influential people: it is widely used in libraries, it does not accept payment for inclusion, and it has been around since 1899. Even if the Who’s Who selections are erratic or arbitrary, they represent a pool of successful people whose selection is not biased by the schools that they attended.
Vedder calls Who’s Who “imperfect” but points out that it is a “comprehensive listing of professional achievement,” and perhaps the only one that includes undergraduate schools in its entries. A better measure, he says, would be the Social Security earnings history of recent college graduates, which would reveal salaries over time, but he couldn’t get those.
So what does he find? Some will be disappointed to learn that at the very top levels, Vedder’s rankings are about the same as those of U.S. News!
Harvard, Yale, Princeton, and Chicago head the list of “national universities,” for example. There are some divergences from U.S. News, but the comparisons don’t really get interesting until after the top 10.
Those divergences can be sharp. To illustrate: among national universities, Baylor ranks 34th on Vedder’s list but 75th on U.S. News’s. Brigham Young ranks 40th on Vedder’s list, 79th on U.S. News’s. Among liberal arts colleges, Vedder ranks Colgate only 51st, while U.S. News places it 17th. The entire list, which is worth perusing, is available here.
The consistency at the top may reflect the fact that Who’s Who has some correlation with the reputation and student selectivity measures used by U.S. News. In large part, Who’s Who must measure brains and talent, not the value added by one’s alma mater. Given that Harvard, Yale, and Princeton have developed reputations over centuries, they necessarily draw top talent, regardless of what they add in educational value.
This stability of the colleges at the top, in fact, reflects a deeper problem with universities and colleges. Vedder observes that, unlike corporations, schools at the height of the reputation ladder stay there for a very long time. The corporate giants of yesterday, like U.S. Steel and Sears, Roebuck, have been bypassed on the list of the biggest U.S. companies by upstarts such as Microsoft and Apple. But universities are not subject to forces that demand innovation and high quality, as businesses are. They can maintain their place atop the academic world on past reputation alone.
For example, in 1940 Charles Lovejoy produced a school ranking also based on Who’s Who, using the entries for 1937–1940. Even though his ranking measured the absolute number of entries (without the adjustment for enrollment size that Vedder makes), nine of the schools he ranked in the top ten appear in Vedder’s top ten today.
The absence of turnover at the top – the lack of technological change and innovative upstarts – suggests stagnation in higher education as a whole. But that’s a topic to delve into another time.
Another red flag urging caution about Vedder’s list is its tendency to downgrade technical schools. Some highly regarded engineering schools, like Rensselaer Polytechnic and Stevens Institute of Technology, drop 50 positions or more, and even MIT falls from 7 to 17. Perhaps research scientists and engineers are underrepresented in Who’s Who, and maybe in RateMyProfessors as well.
For now, however, the important point is that Richard Vedder has brought a breath of fresh air into the debate over rankings. He is implying that U.S. News could broaden its measurements. And he is showing colleges, universities, and the Department of Education that student outcome measures may be easier to find than they claim. We at the Pope Center will champion further attempts to help people find the colleges that best match their needs.