According to a new report by Jonathan Rothwell and Siddharth Kulkarni of the Brookings Institution, yes.
“Drawing on a variety of government and private data sources, this report presents a provisional analysis of college value-added with respect to the economic success of the college’s graduates, measured by the incomes graduates earn, the occupations in which they work, and their loan repayment rates,” the authors write.
They argue that their system doesn’t just tell us how financially successful a school’s graduates are, but also “measures the difference between actual alumni outcomes (like salaries) and predicted outcomes for institutions with similar characteristics and students.”
Thus, if the system Rothwell and Kulkarni have devised is at least somewhat accurate in its evaluations, then we have something important: a measure of how well or poorly postsecondary institutions do in increasing the capabilities of their students, that is, how they compare in improving students’ human capital.
The ability to make such comparisons is very useful. Suppose that Colleges A and B both enroll a high percentage of students who have rather poor academic profiles. Students, parents, and officials would like to know which of them does a better job of getting those students engaged with learning and on a course for success. Let us say that College A on the whole has high scores on the system’s three measures (alumni mid-career salaries, federal student loan repayment rates, and occupational earnings potential) while College B has low scores.
While that information does not prove that College A is a superior school or the better choice for all students (the authors are very careful not to claim too much for their system), it is probative.
Students and parents who are examining colleges invariably find bright, happy, upbeat literature and websites that make it sound as though enrolling at the school will almost guarantee success in life. The problem, of course, is that at many of them, the academic standards are generally low, much of the faculty only weakly motivated, and many students derive scant intellectual benefit from sitting through the classes.
The Brookings system appears to help students and parents figure out which schools merely talk a good game about student gains and which ones really enable students to improve their knowledge and skills. A little bit of sleuthing with it yields results that seem to ring true.
In an article published in Academic Questions back in 2001, University of Wisconsin-Parkside history professor Thomas C. Reeves discussed the nature of the students enrolled at his school and how difficult it was to get them to make more than a minimal effort at academic work. Reeves wrote about them, “They can talk about several things, including their jobs, television, sports, and Rock, but they are often baffled and sometimes irritated to hear from their professor that there is more to life. If that ‘more’ requires reading, they aren’t interested.”
So, how does the Brookings system evaluate Parkside? The report includes evaluations of a very large number of colleges and universities, and with respect to mid-career earnings of its graduates and the rate of federal loan repayment, Parkside is very low (12th and 11th percentile). The Brookings numbers appear consistent with Professor Reeves’ dim view of the school as a place where most of the students coast along without putting forth much effort.
In short, we know that there are lots of “no pain, no gain” colleges (public and private) in the U.S. and the Brookings system might allow us to spot them.
But people should be careful, as the authors acknowledge. Just because a school has low numbers does not necessarily mean that it is a bad institution that merely processes students through to collect tuition, loan, and grant money.
Consider a small, religious school in North Carolina, Belmont Abbey College. It has a very strong core curriculum and a faculty that is quite devoted to working with the students, as we read in this piece we published in 2011. The school fares poorly in the Brookings system, but there is no reason to doubt its commitment to educational quality. A better explanation is probably that relatively few Belmont Abbey students pursue the high-paying careers that make a school shine.
For decades, Americans have been infatuated with the U.S. News college rankings, which is foolish because those rankings have almost nothing to do with educational results and everything to do with input measures and subjective estimates of “reputation.” (For the Pope Center’s assessment, see this paper Michael Lowrey and I wrote in 2004.) Quite a few rival college rankings have appeared on the scene, but so far not one has put much of a dent in the dominance of U.S. News. The Brookings system is vastly superior, and it might finally do so.
One reason for liking it is that the data help to crush some mistaken notions about higher education. We often hear, for instance, that students should always prefer the most prestigious school they can possibly get into because prestigious schools supposedly do so much more to enhance a student’s chances for success.
But that’s not necessarily so. Near the top of the four-year schools, we find Harvard and Drake University (Des Moines, Iowa) with almost identical numbers. Apparently, Drake is doing quite a lot for most of its students. Good students who can’t get into Harvard or other elite universities shouldn’t despair because it’s possible to get a good, useful education at colleges few people have heard of. That, incidentally, is the point of Frank Bruni’s recent book Where You Go Is Not Who You’ll Be, which I reviewed here. The Brookings results support his argument.
Many questions about and quibbles with the Brookings system will no doubt be raised. Rothwell and Kulkarni themselves raise one that I think is crucial when they write, “more accurate results may be achievable if the data used here were replaced with student-level data containing more precise measure of both economic outcomes and characteristics at the time of admission.”
Right. Evaluating colleges as a whole is very problematic because schools are not monoliths. Almost every one has strong academic departments and others hardly worthy of the description. A student at a college with strong Brookings numbers could navigate his way to a degree by taking one of its cushy majors and searching for the easy professors; a student at a college with poor numbers might do the opposite.
Furthermore, if we’re serious about evaluating educational quality, we need individual baselines for students when they enter college and an assessment of their gains by the time they graduate. The Collegiate Learning Assessment (or some other test like it) would allow us to see how much each individual gains in thinking ability while in college. Relying on CLA data, Richard Arum and Josipa Roksa concluded in Academically Adrift that many American students don’t get much benefit at all from college.
If and when it becomes possible to get before and after pictures of individual students’ mental abilities, we’ll be able to say with much greater certainty whether a college really helps its students gain in human capital, or just collects money from them while they cruise along with their high school capabilities.