You can pretty easily tell how well a car, a computer, or a coffee maker works because performance can be measured and compared. With educational programs, however, it is much more difficult to determine whether their performance is excellent or pitiful.
That’s because we seldom have any benchmarks. College students enter school with zero course credits and when they graduate, they have accumulated many credits – but how much have they actually learned? We don’t know, although some students will admit that they learned very little.
Passing one or more English courses doesn’t necessarily mean that a student has learned how to write well. (In fact, as this Pope Center paper notes, college writing classes are often poor or even counterproductive.) Passing one or more college history classes doesn’t necessarily mean that a student has learned much about our history, as the Intercollegiate Studies Institute has shown in a series of reports.
Students spend years and a great deal of money in college, yet we have to take it on faith that the pursuit of the degree is sensible.
That was all right for most people in bygone days when college didn’t cost so much and academic standards were solid enough to create a strong presumption that a student had gained in knowledge and skills from having earned a degree. But the cost of college has skyrocketed and academic standards have been plunging. Those facts have many people wondering if higher education is worth it.
One school, Colorado State University, is taking an initial step toward answering that question. According to this Denver Post story, before classes begin this fall, one hundred CSU freshmen will take a test to measure their writing and reasoning abilities.
“People want evidence that their tax dollars and tuition money are being well-used,” says CSU vice provost Alan Lamborn. That’s putting it mildly. With many college graduates ending up competing for jobs with low educational requirements and low pay – a point I have written about here – proof that studying at a college or university adds educational value for students could be the equivalent of the Underwriters Laboratories (UL) seal. It would give students and parents some assurance that a school’s degrees are worth more than the paper they’re printed on.
Colorado State’s efforts hardly warrant a full-throated cheer, however. Incoming freshmen don’t have to take the test – the Collegiate Learning Assessment – and the inducement for them to do so is rather weak: they get to move into their dorms a day early and receive a $10 voucher good in the university cafeteria. If you test only a small number of students, and probably the least test-averse ones at that, you won’t learn much about educational value added. Still, it’s a step in the right direction.
The Collegiate Learning Assessment is an interesting development. Designed in 2002 by a high-powered research group, the three-hour test is not your typical multiple-choice or fill-in-the-blank exam. Instead, it requires students to write three essays that are meant to probe their thinking ability. Several dozen schools, including Harvard and Duke, are now using the CLA, but only for internal evaluation. So far, no one is using it the way hotels and restaurants use Zagat ratings – to attract more business.
Perhaps the leading reason why colleges aren’t racing to show how well they succeed in educating their students is that the faculty, which has huge clout at most schools, isn’t enthusiastic about the idea of testing to make comparisons. The Denver Post article quotes a professor at the University of Colorado: “But many faculty will be perturbed at the concept of a standardized test that could be used to blame them for inadequate teaching. As if that’s the sole factor when a student doesn’t succeed.”
That defensiveness is revealing. A lot of professors know that their courses are short on content and give students credit for very little learning. They like things the way they are. If the CLA or some other test were used to reveal a lack of student improvement, the hunt would be on to identify the weak links in the school. That is a frightening prospect for those with a vested interest in avoiding measurement of their efforts.
Higher education is ready for an entrepreneurial move by some college president who will get serious about the need to show that his institution adds to its students’ foundation of skills and knowledge. Furthermore, such testing ought to serve not only as an internal tool for showing where the school is more or less effective, but also as a metric by which individual students can demonstrate their accomplishments.
Suppose that a major state university adopted the Collegiate Learning Assessment or some other test that it devised to show how much each student improved in fundamental knowledge and skills (and also specific mastery in his major field) over the course of his studies. Its graduates would then be able to say to prospective employers or graduate schools, “I didn’t just get a degree there; as you can see from my scores, I learned a lot.”
Knowing that one university was testing in this manner would put pressure on others to follow suit. If UNC were testing its students, it probably wouldn’t be long before cries were resounding in Virginia that its public universities should do the same. The pressure would come both from taxpayers, who would want to see whether they were getting as much educational value (or perhaps more) for their dollars as the folks down in North Carolina, and from parents and students, who would see that, in the national labor market, UNC graduates had the advantage of evidence of their college learning, not just a degree.
Because these standardized tests would provide solid evidence of how well a school educates its students compared with others, college ranking systems based on highly subjective information, such as the one published annually by U.S. News & World Report, would matter far less. Real results would replace perceived prestige as the main determinant of a school’s value.
Furthermore, such a testing program would create competition among students. When the degree itself is the school’s educational product, many students will put forth only the minimum effort necessary to get one. (There’s abundant anecdotal evidence from both professors and students that this is the case.) If, however, students received not only a degree but also a set of reliable test scores showing what skills and knowledge they have, the temptation to coast along by taking the easiest courses and seeking out the most lenient graders would be reduced.
In short, a good testing program would unleash one of the most potent forces we know – competition.