There’s a new kid on the college rankings block. The Center for College Affordability and Productivity has devised a new college ranking system, and it has received a lot of attention in the current issue of Forbes magazine.
For many years, the college ranking business has been dominated by U.S. News and World Report, but the numerous shortcomings of its ranking system, which concentrates mainly on inputs and has little to do with educational results, have led many people to conclude that we need something better. (In 2004, the Pope Center put out this paper that explains the drawbacks of the U.S. News system.)
Before looking at the CCAP ranking system, let’s pause to ask if it makes much sense to have college rankings at all.
If we could go back in time to the days before there were any college rankings (to pick a nice round-numbered year, say 1950), were students and parents distraught over the difficulty of choosing schools to apply to? No. They could find out which ones had noteworthy programs in particular fields, such as Johns Hopkins in medicine. The idea that some colleges were objectively better than others – not just in some respects, but overall – would have been scoffed at.
It’s still an idea that ought to be scoffed at. Whether Harvard, Yale, Princeton, or some other school is “Number One” according to some set of criteria means about as much as a restaurant critic’s ranking of New York’s restaurants according to his own tastes. You might enjoy a meal at Applebee’s much more than one at his top eatery.
So there – I’m an infidel when it comes to college rankings. The amount of attention that students and parents should give to rankings when they’re trying to decide which schools to apply to is precisely none. Search instead for evidence of good teaching and serious intellectual purpose – if that’s what you want. (This recent Pope Center piece contrasting a student’s experiences at North Carolina State University and Meredith College underscores the importance of evaluating colleges at the “micro” rather than the “macro” level.)
Now what about the new CCAP ranking system? Its rankings are based on five criteria: success of alumni at being included in Who’s Who in America (25%), student evaluations of professors on Ratemyprofessors.com (25%), four-year graduation rate (16.67%), number of students and faculty receiving nationally competitive awards (16.67%) and average student debt of those who need to borrow (16.67%).
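For readers who want to see how such a weighted composite works in practice, here is a minimal sketch in Python. The weights come from the criteria listed above; everything else – the sub-score names, the sample numbers, and the assumption that each criterion is already normalized to a 0–100 scale – is my own illustration, not CCAP’s actual methodology.

```python
# Illustrative sketch of a weighted composite ranking score.
# The weights mirror the CCAP criteria described above; the
# sub-score names, sample values, and 0-100 normalization
# are hypothetical assumptions for illustration only.

WEIGHTS = {
    "whos_who_listings": 0.25,        # alumni in Who's Who in America
    "ratemyprofessors_evals": 0.25,   # student evaluations of professors
    "four_year_grad_rate": 1 / 6,     # four-year graduation rate
    "competitive_awards": 1 / 6,      # nationally competitive awards
    "avg_student_debt": 1 / 6,        # lower debt assumed to score higher
}

def composite_score(subscores: dict) -> float:
    """Weighted average of sub-scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[key] * subscores[key] for key in WEIGHTS)

# Made-up sub-scores for one hypothetical school:
example = {
    "whos_who_listings": 62.0,
    "ratemyprofessors_evals": 71.5,
    "four_year_grad_rate": 88.0,
    "competitive_awards": 54.0,
    "avg_student_debt": 76.0,
}

print(round(composite_score(example), 1))
```

Note that the weights sum to 1.0, so the composite stays on the same 0–100 scale as the (assumed) inputs; whatever CCAP actually does under the hood, any ranking built this way is only as meaningful as the sub-scores fed into it.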
Those criteria strike me as somewhat more sensible than the U.S. News criteria, but not enough to overcome my skepticism about the whole college ranking enterprise.
First, Who’s Who in America listings are very hit or miss. Many successful people don’t bother with it. (I guess that I’m sufficiently successful to be included if I felt like writing up a biographical sketch, but I don’t see enough benefit to justify even a small allocation of time.) Buying the publication costs a lot ($710), and it isn’t searchable online. As an indicator of educational success for colleges, the percentage of alums included in Who’s Who is pretty thin. It has some probative value, but not much.
Second, student evaluations of professors are questionable evidence about the competence of even the professors who draw comments, and they don’t tell us anything about the overall quality of the faculty. If Ratemyprofessors.com had existed back during my own teaching career in the 1980s, I am sure that the negative comments from students unhappy over low grades they had earned would have far outnumbered positive ones from students who thought I had taught them something useful.
Third, graduation rates tell us more about the quality of the students admitted to a school than they do about its educational quality. It is quite possible for colleges to aim for high student retention (and therefore graduation) by pressuring faculty members to water down courses and inflate grades, as in this case I wrote about recently.
Fourth, my comment regarding graduation rates also applies to the “nationally competitive awards” criterion. Schools that attract top students will naturally look good in this regard, but we don’t know whether student success is because of anything done by the institution. Also, it’s possible for professors to receive prestigious awards and yet do a mediocre to poor job in the classroom.
Fifth, there isn’t any necessary connection between the extent to which a student needs to borrow and the quality of his education.
I’ll concede that the CCAP system is more sensible than U.S. News, but to me that’s like comparing different versions of national health care. I just don’t want it at all, so the respective plusses and minuses don’t matter.
Instead of trying to rate colleges and universities with the objective of saying which ones are best, I think it would be much more useful to look through the other end of the telescope. Why not try to assess the educational “lemons”? With lots of good consumer products available on the market, I’m most interested in information that tells me which ones to avoid.
An educational lemon list would alert students to schools where a high percentage of the courses are easy fluff or politicized blather. One good criterion might be the percentage of professors who still have the nerve to give a student a failing grade. Another might be the percentage who still assign and critically grade essays and papers. The lower those numbers, the more lemony the school looks.
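To make the idea concrete, a back-of-the-envelope “lemon score” might look like the following sketch. The two inputs are the percentages suggested in the previous paragraph; the simple averaging, the inversion, and the 0–100 scale are entirely my own hypothetical way of turning those criteria into a number.

```python
# Hypothetical "lemon score": higher means more academic fluff.
# The two inputs are the percentages suggested above; the simple
# averaging and the 0-100 scale are illustrative assumptions.

def lemon_score(pct_profs_who_fail_students: float,
                pct_profs_who_grade_essays: float) -> float:
    """Average the two 'rigor' percentages, then invert so that
    lower rigor yields a higher (more lemony) score."""
    rigor = (pct_profs_who_fail_students + pct_profs_who_grade_essays) / 2
    return 100.0 - rigor

print(lemon_score(35.0, 40.0))  # -> 62.5: a fairly lemony school
```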
Anyone want to take on that project?