Joseph Soares is an associate professor of sociology at Wake Forest University. This is his response to an article by the Pope Center’s Jay Schalin, “Class Warfare Comes to College Admissions,” about Wake Forest’s SAT-optional policy.
Wake Forest has chosen to go SAT-optional in our continuing pursuit of academic meritocracy. It takes some imagination to see this step as portending, as Mr. Schalin fears, “class warfare.” Yet whenever universities improve their admissions practices, as Oxford and Yale both did in the 1960s, and as we are doing now, critics have accused reformers of abandoning standards and traditions.
At Oxford the accusation once was, “How dare you lower our standards by making Latin optional for science undergraduates!” And at Yale, some complained, “How dare you let women take places traditionally reserved for male leaders!” Oxford dared in 1960, and Yale, too, in 1969; and so do we in 2008. Right now we are out in front among highly ranked national universities, but we are confident others will soon join us. We already enjoy the company of top SAT-optional liberal-arts colleges, such as Bates, Bowdoin, Hamilton, Holy Cross, Middlebury, Mount Holyoke and Smith. And I rather doubt that any of those campuses has suffered from class warfare.
The Pope Center is widely viewed as being conservative, but even Edmund Burke, the father of British conservatism, knew that living institutions must improve in order to conserve. We need to update policies and practices to remain true to Wake Forest’s Pro Humanitate mission. By removing a biased barrier to many potential applicants, we will make the competition for admission at Wake Forest fairer, and our student body stronger and more diverse than ever before.
In the space allotted I cannot thoughtfully reply to every mistake in Mr. Schalin’s article; instead I will focus on the SAT. Mr. Schalin suffers from a misperception of the SAT as a fair test that usefully predicts college grades and facilitates an objective selection of America’s meritocracy. In fact, the SAT is none of the above. He correctly points out that I am not opposed to all standardized tests, just biased and ineffective ones; but he is mistaken about what current research on the validity of the SAT shows. Schalin and I both reference Saul Geiser’s work in California, but Schalin’s reading is wrong. Fortuitously, Geiser released another report on the SAT the same day as Schalin’s article. In “Back to the Basics,” Geiser tells us, “The SAT is a relatively poor predictor of student performance.… As an admissions criterion, the SAT has more adverse impact on poor and minority applicants than high-school grades, class rank and other measures of academic achievement.… Irrespective of the quality or type of school attended, high-school GPA proved the best predictor not only of freshman grades…but also long-term college outcomes such as cumulative grade-point average and four-year graduation.” Geiser favors admissions practices that emphasize HSGPA (high school grade point average), class rank, and subject “achievement” tests, and that discount or drop SAT scores, as do I.
When Geiser says that the SAT is a poor predictor, he bases that judgment on a statistician’s knowledge of what the SAT adds to a multiple regression model predicting college outcomes. Everyone who has done statistical work on admissions knows that HSGPA is the best single predictor of college grades. My book documents that the College Board (CB) knew that as early as 1932, and that fact was reaffirmed by the CB’s most recent “validity” study (CB 2008-5).
If HSGPA works best, why does anyone request SAT scores? The CB’s case for the SAT rests on what the test adds to the statistical power of regression models predicting freshman grades. But just how much does it add? I will give three examples. When California, working with the old SAT, added SAT scores to HSGPA in a regression model, it found that the percentage of variance explained in freshman grades went up from 20 to 25 percent; a University of Georgia study (Cornwell, Mustard, and Van Parys 2008) on the new SAT found a tiny increase, from 32 to 33 percent; and the College Board’s validity study on the new SAT shows an improvement from 13 to 21 percent. The CB’s study displays the largest increase, but I remain skeptical, since the Board does not grant unrestricted access to its data and has a billion dollars riding on the results. By contrast, the California and Georgia research shows improvements of only 1 to 5 percentage points. Is that enough to justify the human and fiscal expense, and the glaring disparities in test scores between men and women, between blacks and whites, and between students from high-income families and those of modest or low incomes?
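For readers curious about what such a comparison involves, here is a minimal sketch in Python, using scikit-learn and synthetic data invented purely for illustration (not the California, Georgia, or College Board datasets): fit one regression on HSGPA alone, a second on HSGPA plus SAT scores, and compare the share of variance in freshman grades (R-squared) that each explains.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data, for illustration only; not drawn from any of the
# studies cited above.
rng = np.random.default_rng(0)
n = 5000
hsgpa = rng.normal(3.0, 0.5, n)                             # high-school GPA
sat = 400 * hsgpa + rng.normal(0, 150, n)                   # SAT, correlated with HSGPA
fgpa = 0.6 * hsgpa + 0.0004 * sat + rng.normal(0, 0.5, n)   # freshman GPA

# Model 1: HSGPA alone.
X1 = hsgpa.reshape(-1, 1)
r2_hsgpa = LinearRegression().fit(X1, fgpa).score(X1, fgpa)

# Model 2: HSGPA plus SAT.
X2 = np.column_stack([hsgpa, sat])
r2_both = LinearRegression().fit(X2, fgpa).score(X2, fgpa)

print(f"Variance explained by HSGPA alone: {r2_hsgpa:.1%}")
print(f"Variance explained by HSGPA + SAT: {r2_both:.1%}")
print(f"SAT's incremental contribution:    {r2_both - r2_hsgpa:.1%}")
```

The number that matters in these debates is the last line: how many percentage points of variance the SAT adds once HSGPA is already in the model.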
Schalin may not be worried about the discriminatory effects of one test, but I am. Women consistently do less well than men on the SAT, yet outperform men in college. Blacks usually score lower than whites, but our own research found no racial gap in college GPA at Wake Forest. Lower socioeconomic status (SES) youths of all races do less well than higher-SES youths. On SES, California found the SAT to be especially contaminated, but the same was not true of subject tests or HSGPA. Geiser and Santelices note, “SAT scores exhibit a strong, positive relationship with measures of socioeconomic status … whereas HSGPA is only weakly associated with such measures” (2007: 2). When one statistically controls for SES, the already weak contribution of the SAT in a regression model drops to near zero, while the statistical power of HSGPA and subject-achievement tests goes up. The social cost of adding the SAT to the model is to stack the odds against underprivileged youths.
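A companion sketch, again with synthetic and purely illustrative data, shows what “statistically controlling for SES” means in practice. The data here are constructed so that SAT scores are driven largely by SES; the specific coefficients are invented, not taken from any study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic illustration of "controlling for SES." SAT scores here are
# built largely from SES, so once SES enters the model the SAT has
# little left to contribute.
rng = np.random.default_rng(1)
n = 5000
ses = rng.normal(0, 1, n)                        # standardized SES index
hsgpa = 3.0 + 0.1 * ses + rng.normal(0, 0.5, n)  # weakly tied to SES
sat = 1000 + 120 * ses + rng.normal(0, 80, n)    # strongly tied to SES
fgpa = 0.6 * hsgpa + 0.15 * ses + rng.normal(0, 0.5, n)

def r2(*predictors):
    """Variance in freshman GPA explained by the given predictors."""
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, fgpa).score(X, fgpa)

print(f"SAT's increment over HSGPA alone:     {r2(hsgpa, sat) - r2(hsgpa):.1%}")
print(f"SAT's increment with SES controlled:  {r2(hsgpa, ses, sat) - r2(hsgpa, ses):.1%}")
```

In data built this way, the SAT’s increment looks respectable until SES enters the model, at which point it collapses toward zero, because by construction the test was largely measuring SES all along.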
We do not have a test-score meritocracy in America. Yale and Wake Forest never used the SAT so much to select students as to set limits on the applicant pool.
While convenient as a way to reduce the admissions office’s workload, using the SAT to exclude gave us statistical models that explained only 30 percent of the variance in college grades. Seventy percent remained unknown. Admissions has always been more a “subjective” art than an “objective” science.
This is also not news. William Bowen, the former president of Princeton, drew attention to this in 1998, as did Geiser and Santelices in 2007 and Steve Farmer, director of admissions at UNC Chapel Hill, in 2008. My book (The Power of Privilege) documents how Yale decided in 1971 that predicting first-year grades was a useless exercise. It embarked on a project to identify the personal qualities, distinct from academic record, that best correlate with a successful college experience, with success defined in terms much broader than GPA. The unsurprising truth is that private colleges select, from many academically qualified applicants, those with the most personal promise as suggested by other factors, including extracurricular records and special talents.
We are making more work for ourselves by removing a bad, but bureaucratically convenient, way to reject applicants. Our admissions staff will evaluate the whole file rather than toss it based on a spurious number. We must trust their judgment to find the most qualified and interesting class possible, as they have done in the past, but this time freed from the distortions of the SAT.