Much ink has been spilled considering the problem of “artificial intelligence” in the college classroom. Yet less has been said about its use in the admissions office, where the fortunes of the next generation are decided each year. As Inside Higher Ed observed last fall, 50 percent of higher-education admissions offices “are currently using AI in their review process[es].” Given this sea change in how applications are read, elected officials and watchdog organizations should be preparing to exercise oversight.
The job of an admissions officer is one I do not envy. Reading student-written puff pieces, staring at application data, and putting it all together does not sound terribly exciting to me. I can empathize with the desire to make things a bit easier. The problem is that AI, like every other computer technology, must be told what to do by people. As such, it is subject to the same “progressive” distortions that have lately roiled university admissions of the human kind.
While AI has thus far been used primarily to read and filter applications and transcripts, efforts are underway to use it as part of a “holistic” admissions process. Researchers at CU-Boulder and UPenn, for example, have developed an AI model that can read admissions essays and flag phrases or characteristics thought to signal particular qualities in a candidate. While the model in question is imperfect, its creators report that it has been largely successful.
This should be a worrying development. On the other side of SFFA v. Harvard and UNC, we know that universities public and private are attempting to use “holistic” admissions systems to promote their diversity agenda on the sly. It would be far too easy to program an AI “reader” to give preference to applications containing social-justice buzz-concepts that serve as a proxy for race.
As previously mentioned, AI necessarily requires some kind of outside influence in the form of programming. At the end of the day, AI software can do only what it is told to do. We can be sure, given contemporary staffing realities, that admissions officers will program AI systems in ways that reflect their progressive priors. An AI model that recommended a racially homogeneous class, for example, would be swiftly shut down.
But it will rarely come to that. What will occur instead is that admissions officers will use AI models to get around the Supreme Court’s 2023 affirmative-action ruling, calling those models “objective,” “holistic” admissions systems. Instead of a person reading an applicant’s poorly written sob story, a supposedly fair AI will do so. Yet, if these AI models are not open-source, there will be no way of knowing whether they are truly objective, no way of seeing what programming is going on behind the scenes. The exact same issues that arise in a human-run “holistic” system will show up in one run by a human-made AI.
Exactly where in the admissions process AI systems are deployed is of less concern than whether those systems are transparent. At present, many schools using AI in their admissions offices are not using open-source models, so there can be little to no oversight of their processes. Any school using an AI model should make that model’s programming publicly available; requiring as much should be the policy pursued by governing bodies, including federal ones.
As a student, I am rightfully scrutinized if I use ChatGPT on an essay, and I understand why that form of cheating is treated so harshly. What I cannot understand is why our university staff are not held to the same standard. AI models have their uses, but those uses should not extend to ignoring Supreme Court precedent in order to pursue racist policies. Let us have a look under the hoods of these machines. Then we’ll decide.
Stephen Halley is a rising senior at the University of North Carolina at Chapel Hill, double majoring in English and religious studies with a concentration in British and American literature. He is a summer 2024 Martin Center intern.