Universities love high rankings.
Whether it’s the Princeton Review, which grades schools based on students’ evaluations, or U.S. News and World Report, whose rankings rely heavily on graduation and retention rates, schools celebrate a high position as an indication of a quality education.
But these rankings tell a prospective student almost nothing about the most important aspect of a higher education: Will he or she actually learn anything?
That’s because universities have dragged their feet for nearly a decade to avoid standardized assessments of college learning. The University of North Carolina system is no different.
The Pope Center has followed the sluggish attempts to institute student learning measures for years. The process started in 2007-2008, when the UNC General Administration funded each university’s participation in the Voluntary System of Accountability pilot program. Four years later, some schools reported results, but the program appears to have been pushed aside to make room for new pilot programs spurred by the General Administration’s new strategic plan, which called for strengthening the use of student learning outcome data to improve instructional effectiveness.
What started as a promising step toward a coordinated system of assessment at all 16 UNC campuses now appears to be another lackluster attempt to appease all stakeholders while avoiding concrete data that could spur serious and necessary reform at the campus level.
The latest report by the UNC General Education Council, which is composed of faculty and administrators and is tasked with evaluating assessment pilot programs, shows again that no consensus has been reached on a universal assessment tool. This latest development came after an evaluation of several multi-year pilot programs that used highly regarded assessment tools, which apparently failed to woo council members.
The UNC report (linked below) highlighted three pilot programs tested at different campuses: e-Portfolios, the Collegiate Learning Assessment (CLA), and the newly developed ETS HEIghten test. Each program attempts to test both written communication and critical thinking skills. The council found strengths and weaknesses in all three testing methods but ultimately failed to recommend a unified method of testing.
E-Portfolios are a relatively new qualitative approach to student learning assessment that uses student writing to evaluate communication and critical thinking skills. While the report found e-Portfolios useful in “identifying and correcting gaps in the curriculum,” the council was concerned about the reliability of evaluation rubrics and the substantial costs associated with such a program.
The primary issue with e-Portfolios and other qualitative measures is that, judging by their perceived benefits, they provide students with little more than introspective reflection. Although students and faculty who participated in the pilot programs were “enthusiastic” about the opportunity for self-reflection, this method doesn’t actually measure learning outcomes in a way that is comparable across institutions.
The second pilot, the CLA test, uses essay-based and multiple-choice questions to evaluate both entering and graduating students on core competencies. This method is often cited as the most trusted assessment tool currently in use nationwide, but the council found weaknesses in the reliability of test scores and in the influence of student motivation on the results.
Unlike the e-Portfolios, the CLA test results are quantifiable, and they do not flatter the piloted schools. As shown in Table 1, UNC-Asheville ranked high among the schools that piloted the program, placing in the 96th percentile among all institutions that currently use the CLA test, with 93 percent of its seniors scoring proficient or advanced. Fayetteville State University scored near the bottom, in the 5th percentile among all institutions that use the CLA test, with only 20 percent of seniors achieving proficiency.
Furthermore, two of the universities, East Carolina University and Fayetteville State University, ranked at a basic mastery level. This means the majority of students were only able to “demonstrate that they at least read the documents, made a reasonable attempt at an analysis of the details, and are able to communicate in a manner that is understandable to the reader.” No university achieved advanced mastery status.
The CLA scores reported here shouldn’t shock anyone: every university tested performed statistically close to what was expected.
Instead of using the results of the CLA test to improve general education, faculty and administrators often criticize the test as an ineffective measure of student learning. However, a report from the Council for Aid to Education (CAE), which designed the CLA, offers compelling evidence that the test is a reliable assessment of learning outcomes at the institutional level, demonstrating both face validity and test-retest reliability.
The third pilot, the ETS HEIghten test, is a new assessment piloted at all 16 UNC universities in spring 2015, though the test will not be operational until after the spring 2016 semester. The UNC report did not include initial results. When the findings do become available, they are likely to be met with the same criticism as the CLA test, since HEIghten likewise attempts to quantify learning outcomes.
We don’t lack an adequate assessment system because the tools don’t exist; we lack one because the data most often point to a failure by universities to ensure students learn the most basic skills. Colleges might deny that fact, but the truth is obvious. As a recent Pope Center article by writing professor John Maguire argued, college graduates lack basic writing skills, and employers are catching on.
The UNC system is in a unique position to recommit to meaningful assessment measures under the leadership of new President Margaret Spellings. The Commission on the Future of Higher Education, more commonly known as the Spellings Commission, carried out the first national push for assessment in higher education. At the time, Spellings vigorously supported increased accountability at colleges and universities by endorsing standardized assessments of student learning. While it’s not clear she will be as outspoken in her new role, she certainly could push for the institutionalization of system-wide assessment measures.
The UNC General Education Council is likely to continue its languid support for a system-wide assessment method, but after years of stalling, it’s time to apply more pressure. A transparent assessment system would allow students to make more informed choices about their education. It would also offer faculty and administrators an unparalleled opportunity to improve curricula and close the gaps between expectations and outcomes.
It’s not unreasonable to expect universities to provide data that shows students learn the things universities claim to teach.