Universities have been assessing students by grading their work since the Middle Ages. Sometimes students complained that the professor wasn’t fair, but nobody thought the system was fundamentally flawed.
Then, about three decades ago, a new idea arose in American universities—that campus bureaucrats needed to assess student learning outcomes. This occurred as part of a legitimate effort to distinguish between the effects of admitting only top-quality students (the selection effect) and of actually teaching students, perhaps of indifferent quality, new skills and knowledge (the training effect). Mid- and lower-tier colleges, eager to show that they were adding value for their graduates (even if they did not carry the prestige of the top schools), were quick to embrace assessment.
The “assessment” craze did little harm until the 1990s, when its advocates began to argue that while grading measured student performance, it did not measure student learning. That has led to the growth of a whole industry that purports to measure student learning.
When I started teaching in the mid-1990s, I had my history students write weekly essays and take a couple of essay-based exams. From that I could determine whether they understood the content I was teaching and I could see their writing develop as the semester progressed. That gave me the information I needed to award grades but also to see if there were any consistent problems that I needed to address. I still do this and any substantive changes I make to my courses come from this type of feedback.
In the assessment era, however, this is considered insufficient. For about a decade now, university assessment directors, who are often ex officio members of curriculum committees (even though few of them are faculty members), have been instructing professors to develop program- and course-level statements called Student Learning Outcomes (SLOs). Typically, professors have three or more of these per program or course.
These statements have to be written using verbs from an approved list. SLOs have to be concrete and measurable. You shouldn’t use words like “understand” in formulating an SLO because understanding is not something a student does and it’s hard to measure. Likewise, one ought to avoid the term “ability” because that suggests something inherent to the student rather than a demonstration of what he has learned. Worse, it suggests that some students might have more ability than others.
Above all “critical thinking” is not a desirable outcome because it’s too vague and mushy.
So we end up with goals (sorry, that’s a forbidden term too, I should say “outcomes”) for our courses like: “Describe the origins of the institution of slavery in the US.” “Compose an essay that distinguishes between the nature of European Imperialism before and after 1850.” “Distinguish between primary and secondary source material.”
Those are worthy skills for a student to have, but they don’t capture the more important, if vaguer, abilities that most professors would consider the real goals of their courses: becoming a critical reader and writer and making valid inferences from historical material.
Then you need to develop a mechanism for finding out whether your students have met the SLO. You will be forgiven for thinking that the students’ grades in the course might answer this question. Not so. You need a separate process, usually involving the insertion into your course of some easily graded test that yields a quantifiable result, plus reports to several committees and meetings to determine whether the SLOs have been met.
Most universities don’t require that material used for assessment be graded, but students usually don’t put much effort into work that does not affect their grades, so in practice assessment material becomes part of the curriculum.
That’s not the end. Next you have to identify some aspect of your course that is not going as well as you would like and change something about your course or program to remedy this “problem.” This is called “closing the loop” and it has to happen every year.
So, each year we find that although our students have met the SLOs (and they always do), there is nonetheless some contrived problem that we respond to with an equally contrived but heavily documented and carefully reported change. Loop closed.
Virtually every university in the country now has an assessment office devoted to overseeing and directing this byzantine process. Those offices have steadily been gaining staff, power, authority, and resources.
Amazingly, there is no evidence that learning outcomes assessment has improved student learning or led to any improvement in what universities do.
On a practical level, no one even pretends that what distinguishes good schools from bad schools is their commitment to or execution of learning outcomes assessment.
Nor does anyone seriously propose that people who were educated before the age of assessment received inferior educations because bureaucrats did not assess their learning.
A couple of years ago I wrote a piece in the Chronicle of Higher Education pointing out the lack of evidence that learning outcomes assessment improves student learning. In researching the article I looked high and low for a study that showed assessment has improved student learning or that a robust assessment program might make one college better than another. I found nothing. No one has bothered to assess the effects of assessment.
The response to my piece from the assessment world was anecdotes and panel discussions about how to deal with assessment doubters, but no evidence to support their claims that students learn more when we follow their formula.
And because assessment employs none of the basic principles of research design, there is no reason to believe that further investment in assessment (as opposed to actual scholarly research on student learning) will yield meaningful results.
For example, assessors never use the control groups that are a standard feature of real research. So even if you can show that the students in a particular course do 15 percent better at something at the end of the course than they did at the beginning, there is no way to say that it was something intrinsic to the course that caused the change.
Worse, the entire assessment process is designed and executed by people with a strong interest in the outcome of the process.
For those and other reasons, I concluded in an Inside Higher Ed essay last November that assessment is just an “empty ritual” that wastes time and money.
So Why Do Universities Continue to Invest Money in Assessment Offices?
The proximate answer is that they do so because the accreditors tell them they have to. That is, accrediting standards require schools to set learning outcomes for courses and to quantify the results.
If it wants federal student aid money, a college must remain accredited. A big component of the accreditation process, therefore, is heaping quantities of data on the accreditors to show that students are meeting the learning outcome objectives.
Ask an assessment bureaucrat whether assessment works or not and inevitably the response will be, “Do you want to lose X million in federal money?” and never, “Here is concrete evidence that assessment works.”
It’s the quality control equivalent of the TSA’s security theater. Just as taking off your shoes and not carrying liquids provides little more than the appearance of security, demanding that universities provide “evidence” that they are doing a good job of teaching their students provides the appearance of accountability.
The consequences of this are far-reaching. Universities have to pay directly to support a new class of assessment bureaucrats, but they also pay indirectly because faculty have to deal with assessment both in their classes and in the many committees that have grown up around the assessment imperative. All of that time and effort represents resources that are not spent on other, more productive things.
If This Is So Useless and Costly, Why Don’t Universities Fight Back?
I contend that universities don’t resist assessment because assessment serves a purpose. Although it has nothing to do with student learning, assessment strengthens the parts of the university that administrators can most easily control—the staff and the most staff-like members of the faculty—and weakens the more independent-minded elements of the faculty by forcing them to comply with this empty ritual.
Submitting to the assessment agenda is actually attractive to some faculty members. For academics who are uninterested in or unable to thrive in the traditional faculty roles of teaching and scholarship, assessment offers a place where the ability to master the ever-changing jargon of curriculum maps, to police the baroque language of Student Learning Outcomes, and to devise rubrics creates a type of expertise.
Active scholars avoid assessment committees like the plague, so people who are not busy in their labs or the archives are the ones who end up on these committees.
On one level, this might seem like a good allocation of human resources; it lets the researchers do their research and shifts the burden of bureaucratic busywork to the less scholarly. Unfortunately, these committees have real power and are taking control of the curriculum in an increasingly centralized, top-down process.
The result is that American public universities, once the envy of the world, are increasingly being run in the centralized and bureaucratized manner of our secondary schools, which are global laggards.