For over twenty years, the Massachusetts Board of Higher Education has been under the legal obligation to develop “a system of student assessment” to gauge the academic effectiveness of the state’s public higher education institutions.
Massachusetts General Laws Chapter 15A, Section 32—passed by overwhelming majorities in the House and Senate in 1991—requires the Board “to measure student improvement, between the first and fourth years of attendance at public higher education institutions, on various tasks, including, but not limited to, ability to reason, communication and language skills, and other factors” it “deems appropriate…in order to assess the general performance of higher education institutions in fostering learning and academic growth.”
The results, moreover, are to be published so they will become a matter of public record for the guidance of the Board itself as well as for the information of students and the public generally.
None of this has been accomplished—or even, it seems, attempted—by the Board.
Until two years ago, the Board had essentially ignored Section 32, but with the assessment component of its new Vision Project, it has made a feint toward compliance. But just a feint. Instead of measuring “student improvement” in several carefully specified areas, the Board proposes something vague that gets it off the hook: state-by-state comparisons rather than measurements of student learning focused on specific criteria.
According to the Vision Project website, the Board’s aim is “to identify ways to compare and then publicly report the level of learning achieved by public college and university students with the level of learning achieved by peer institutions in other states.” That sounds interesting enough, but it doesn’t do what the law calls for.
Apart from the obvious failure to comply with Section 32, the Board’s approach is objectionable on two further grounds. First, it makes the dubious assumption that academic performance in other states should become the standard for Massachusetts. Second, it embraces the notion that peer-based comparisons can produce information of practical value.
Comparing the “level of learning” at “peer” institutions—which, by definition, are bound to resemble one another—would simply tether Massachusetts to prevailing norms. Such an approach could not reveal anything about students’ “improvement” over their undergraduate years. It would, however, foster the assumption that all is essentially well in higher education, an assumption that recent scholarship has shown to be incorrect.
Not so long ago, anyone with a bachelor’s degree could be expected to possess certain fundamental intellectual skills. A substantial body of research has now established that this is no longer true. In their 2011 book Academically Adrift: Limited Learning on College Campuses, sociologists Richard Arum and Josipa Roksa report, “An astounding proportion of students are progressing through higher education today without measurable gains in general skills….”
In their study of 2,322 students at nationally representative four-year institutions, Arum and Roksa found that by the end of their sophomore year “at least 45 percent” showed no significant improvement in “critical thinking, complex reasoning, and writing skills.”
If the situation is anywhere near as dismal as Academically Adrift and other studies have shown (and as employers for years have been complaining), then what is the point of comparing Massachusetts students with their “peers” in other states where similar conditions are likely to prevail?
And even if Massachusetts should turn out to be superior, what is the value of that when the norm itself is so debased?
What the law requires is not peer-based comparison but a criterion-referenced approach—one capable of yielding diagnostic information that can be acted upon.
This was clearly the purpose of the statute’s enactment. Assessment results, which could be based on a small but representative cohort of student volunteers (perhaps responding to an instrument modeled on the highly regarded and widely used Collegiate Learning Assessment), would indicate which campuses, in each of the system’s three segments, are most effective in the areas designated for assessment.
By then studying the educational approaches of high-scoring and low-scoring campuses, the Board could identify models for emulation. On a solid empirical basis, rather than on graduation rates (themselves based on inflated grade point averages) or largely irrelevant state-by-state comparisons, a clear picture of undergraduate “learning and academic growth” would emerge, along with some indication of the steps needed for improvement.
While Section 32 authorizes the Board to “determine the means of assessment,” that language does not override the clear requirement to evaluate students’ academic growth, including their ability to reason and their communication and language skills. To suppose, as the Vision Project appears to have done, that this phrase entitles the Board to ignore the very things that the section specifies for assessment would be illogical. Method, after all, is not substance.
The Board should not be allowed to escape the plain requirements of the law by coming up with a labyrinthine process and cloaking it in the language of accountability. The obligation remains to comply with the criteria as stated, including the value-added element of improvement over four years.
Because the statute is clear, the Board’s failure to comply with it must have some other source, perhaps the tendency in higher education for concerns about institutional reputation to trump much else.
Responding to Louis Freeh’s report on the terrible cover-up at Penn State, Anne Neal, president of the American Council of Trustees and Alumni, has caught the essence of the problem. Writing in the July 13, 2012, Wall Street Journal, she stated, “What happened at Penn State is emblematic of a pervasive culture on college campuses where reputation is more important than academic quality, transparency, ethics and accountability.”
By behaving for twenty years as though it believes itself to be above the law, the Board has thwarted a major initiative toward the improvement of public higher education in Massachusetts—and, not incidentally, has denied students and the general public information to which they are legally entitled.
Whatever one may think of the recent performance of governing boards in American higher education, one ought at least to be able to assume that they will obey the law.