One of the big controversies in higher education is the proper role of standardized tests—the ACT and especially the more famous SAT—in evaluating students applying for admission. Initially, the SAT was created so that good students who didn’t have prestigious high school diplomas could demonstrate their academic capabilities, but in recent years, the SAT has frequently been denounced as unfair and misleading.
For that reason, a large number of colleges and universities have made it optional for applicants to report their SAT scores. (One group that opposes such testing in principle, FairTest, keeps a list of those schools.)
Proponents of the “drop the SAT” policy contend that a student’s scores are not reliable indicators of college success and that a better, more “holistic” evaluation can be made by looking at high school records, achievement tests, essays, and personal interviews. Admitting students without SAT scores, they argue, won’t diminish the quality of the student body and may actually improve it.
Not so fast, says Howard Wainer. Wainer is a professor of statistics at the Wharton School and a distinguished research scientist with the National Board of Medical Examiners. In his recent book Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, Wainer uses his considerable talent for statistical analysis to examine a number of education policies (some pertinent to higher education and others pertinent to K-12) where the guesses of policymakers seem to be wrong.
Wainer’s opening chapter is about the advisability of dropping the requirement for standardized test scores. He focuses on data from Bowdoin College in Maine, a selective liberal arts college that made SAT scores optional back in 1969. Over the years, significant percentages of applicants (between 16 and 30 percent) have decided not to report their SAT scores even though nearly all of them actually took the test.
Looking at the entering class of 1999, when 106 applicants chose not to report their scores, Wainer contemplates two plausible explanations: “(1) They knew the SAT wasn’t required, so they decided ‘Why bother?’ (2) They believed they would compare unfavorably with other applicants…and decided that submitting their scores would not help their cause.”
Fortunately, Wainer was able to obtain the identities and SAT scores of those 106 students. Their mean SAT score was 1201, compared with a mean of 1323 for those who did submit, a substantial 122-point difference.
If the theory that SAT scores are of little or no value in assessing student capabilities were correct, we would expect the non-reporting students to do just as well as the reporting students. But Wainer was also able to get the grade records for the 1999 entering class and, knowing which students were in which group, found that the non-submitters’ first-year grades were lower by 0.2 points.
That led him to conclude, “(W)hatever other variables are taken into account in the admission of applicants without SAT scores, they do not, on average, compensate for the lower performance on the SAT….”
To see whether the Bowdoin results squared with the experience of similar schools where SAT scores were optional, Wainer looked at Northwestern, Carnegie Mellon, Barnard, Georgia Tech, and Colby College. He found a consistent pattern: lower first-year GPAs among students who did not submit SAT scores. “Making the SAT optional,” he writes, “seems to guarantee that it will be the lower-scoring students who withhold scores.”
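Wainer’s point is easy to see in a toy simulation. The sketch below (in Python, with entirely hypothetical parameters, not Wainer’s data) assumes only that SAT scores and first-year grades are both noisy readings of underlying ability and that weaker scorers tend to withhold; the familiar pattern of lower scores and lower grades among non-submitters falls out of self-selection alone.

```python
import random

# Toy model of self-selection under an SAT-optional policy.
# All parameters are hypothetical; this is not Wainer's analysis.
random.seed(1)

students = []
for _ in range(10_000):
    ability = random.gauss(0, 1)                      # latent academic ability
    sat = 1300 + 150 * ability + random.gauss(0, 60)  # noisy reading of ability
    gpa = 3.0 + 0.4 * ability + random.gauss(0, 0.3)  # another noisy reading
    submits = sat > 1250                              # weaker scorers withhold
    students.append((sat, gpa, submits))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

submitters = [s for s in students if s[2]]
withholders = [s for s in students if not s[2]]
print("submitters:  mean SAT %4.0f, mean GPA %.2f"
      % (mean(s[0] for s in submitters), mean(s[1] for s in submitters)))
print("withholders: mean SAT %4.0f, mean GPA %.2f"
      % (mean(s[0] for s in withholders), mean(s[1] for s in withholders)))
```

Nothing in the sketch says the SAT causes good grades; it shows only that when the weaker scorers are the ones who withhold, the withholding group will lag in grades too, which is exactly the pattern Wainer found.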
My suspicion is that Wainer’s conclusion understates the magnitude of the difference, for two reasons. First, grade inflation has rather dramatically bunched college grades in the A-to-B range, so weaker students are propped up by the grading floor. Second, since students can select their courses to some extent, even in the first year, weaker students probably gravitate toward less demanding ones, where grade inflation is most pronounced. The difference in academic ability between students who report and those who don’t is therefore greater than the GPA discrepancy suggests.
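The ceiling effect is easy to demonstrate. The sketch below (hypothetical numbers again) grades the same two groups of students on an uninflated scale and on an inflated scale where most grades land between B and A; the 4.0 cap clips the stronger group’s advantage, so the observed GPA gap shrinks even though the underlying ability gap is unchanged.

```python
import random

# Hypothetical illustration of grade compression: the same ability gap
# shows up as a smaller GPA gap when grades bunch near the ceiling.
random.seed(2)

def capped(raw):
    return max(0.0, min(4.0, raw))

def uninflated_gpa(ability):
    return capped(2.0 + 0.8 * ability + random.gauss(0, 0.3))

def inflated_gpa(ability):
    # Most grades land in the B-to-A range, so the 4.0 ceiling bites.
    return capped(3.4 + 0.8 * ability + random.gauss(0, 0.3))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

stronger = [random.gauss(0.5, 0.5) for _ in range(10_000)]
weaker = [random.gauss(-0.5, 0.5) for _ in range(10_000)]

for label, grade in [("uninflated", uninflated_gpa), ("inflated", inflated_gpa)]:
    gap = mean(grade(a) for a in stronger) - mean(grade(a) for a in weaker)
    print("%-10s scale: observed GPA gap %.2f" % (label, gap))
```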
The finding that adopting the SAT optional policy tends to reduce the academic capability of the student body runs headlong into stiff opposition from SAT opponents. One of them is Wake Forest University sociology professor Joseph Soares, who wrote a reply to a Pope Center article about his school’s decision to drop the SAT requirement, stating, “Everyone who has done statistical work on admissions knows that HSGPA (high school grade point average) is the best single predictor of college grades.”
I think there is probably some hyperbole in that statement, but rather than digging into competing experts and their research, I would like to pose this question: Is a school more likely to get a false positive from high school grades or from the student’s SAT score? That is, which is more likely: that a student who scored well on the SAT turns out to be fairly weak, or that a student with a good high school GPA does?
Whatever the remaining flaws in the SAT (and the College Board says that over the years it has worked to eliminate questions with hidden cultural biases), it is a uniform test. The same cannot be said of America’s high schools, where expectations and rigor vary enormously. I was reminded of that fact by a new City Journal article in which Myron Magnet comments on the high spending on public schools mandated by New Jersey’s courts.
Magnet writes, “What are New Jersey taxpayers accomplishing with the $22,000 to $27,000 they spend per pupil each year in the big inner-city districts? On test scores and graduation rates in Newark, the needle has scarcely flickered. As the E3 education reform group’s report Money for Nothing notes, high schools in the state’s biggest city can’t produce substantial numbers of juniors and seniors who can pass tests of eighth-grade knowledge and skills….”
So the top students graduating from Newark high schools probably have significantly weaker academic capabilities than lower-ranking students from other public and private schools, or than home-schooled students. Of course, SAT-optional colleges look at other factors besides high school grades, but essays are notorious for being coached, if not entirely ghost-written, and personal interviews allow subjective and emotional elements to play a large role, possibly for the good, but possibly misleading for the evaluator.
For that reason, I believe it is more likely that a student who looks good on the basis of high school grades, essays, and an interview will be falsely identified as a strong admit than that a student who scored well on the SAT will be.
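That intuition about false positives can be made concrete. The toy model below (hypothetical parameters throughout) treats both signals as noisy readings of ability, with the wrinkle just described: high school GPA also carries a school-level grading offset, while the SAT is the same test everywhere. “Admitting” the top quarter on each signal and counting admits whose true ability is below the median gives a rough false-positive rate for each.

```python
import random

# Toy comparison of false-positive rates (hypothetical parameters):
# HSGPA = ability + school grading offset + noise; SAT = ability + noise.
random.seed(3)

N = 20_000
rows = []
for _ in range(N):
    ability = random.gauss(0, 1)
    school_offset = random.gauss(0, 0.8)   # grading standards vary by school
    hsgpa_signal = ability + school_offset + random.gauss(0, 0.4)
    sat_signal = ability + random.gauss(0, 0.4)
    rows.append((ability, hsgpa_signal, sat_signal))

median_ability = sorted(r[0] for r in rows)[N // 2]

def false_positive_rate(idx):
    # Admit the top quarter on the chosen signal; a false positive is an
    # admitted student whose true ability is below the overall median.
    admitted = sorted(rows, key=lambda r: r[idx], reverse=True)[: N // 4]
    return sum(r[0] < median_ability for r in admitted) / len(admitted)

print("false positives among HSGPA-only admits: %.1f%%" % (100 * false_positive_rate(1)))
print("false positives among SAT-only admits:   %.1f%%" % (100 * false_positive_rate(2)))
```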
Suppose that a college selects its students entirely on HSGPA plus those questionable factors of essays and interviews. That approach ensures that the student body will have a far wider range of the abilities that count in college (reading, basic math, writing) than if the school admitted the applicants with the strongest SAT scores. That matters because when professors face classes of students with widely differing abilities (and interest levels), they have a harder time teaching a rigorous course.
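The same toy model speaks to the spread point: selecting on the noisier, school-dependent signal not only admits more weak students, it also yields a wider range of ability among those admitted. A minimal extension, with hypothetical numbers as before:

```python
import random
from statistics import pstdev

# Extend the toy model: compare the spread of true ability among
# students admitted on each signal (hypothetical parameters).
random.seed(4)

N = 20_000
rows = []
for _ in range(N):
    ability = random.gauss(0, 1)
    hsgpa = ability + random.gauss(0, 0.8) + random.gauss(0, 0.4)  # school offset + noise
    sat = ability + random.gauss(0, 0.4)
    rows.append((ability, hsgpa, sat))

for label, idx in [("HSGPA-only", 1), ("SAT-only", 2)]:
    admitted = sorted(rows, key=lambda r: r[idx], reverse=True)[: N // 4]
    spread = pstdev(r[0] for r in admitted)
    print("%-10s admits: ability spread (std dev) %.2f" % (label, spread))
```

A wider spread of ability among admits is exactly what makes a uniformly rigorous course harder to teach.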
Those who want to dump the SAT say that without it, they can bring together a more “diverse and interesting” student body. No doubt they think that’s good, but from the professor’s point of view, it is better to have students who are relatively homogeneous in ability and interest. That’s a hidden but important cost of opening the Pandora’s box of seeking a “diverse and interesting” group of students.
Why shouldn’t college admissions committees work with more information rather than less? If Howard Wainer is correct, abandoning SAT scores is a mistake because it only adds to the guesswork in deciding which students to admit.