A recent study from Tyton Partners reveals that what students believe about AI use is quite at odds with their actual AI use. The study, which surveyed over 1,500 students, more than 1,500 instructors, and over 300 administrators, finds that student and faculty attitudes toward AI in academia have both taken a downturn since 2024. Neither students nor educators widely believe that turning to artificial intelligence for academic assistance improves educational quality. Student preference for AI as a primary source of academic help has dropped 13 percentage points since 2024. Faculty belief in the necessity of AI training for future jobs, meanwhile, has fallen seven percentage points since last year’s equivalent study.
What these attitudes reflect is a desire for normalcy, as well as the persistence of old-fashioned ideals about academic excellence, even if those ideals too often go unrealized. For example, while optimism about AI has fallen, 65 percent of students still report regularly using standalone generative AI tools such as ChatGPT. Nonetheless, it is encouraging that true learning remains valued, at least in theory. The growing preference for in-person over online courses, with faculty preference for the former up 16 percent and student preference up 32 percent since 2023, is another indicator of a residual desire for quality over convenience.
What is needed now is for professors to help students live up to what they say they believe. Instructors cite ineffective studying, a lack of appropriate prerequisites, and a lack of motivation as major hindrances to student success. Forty-five percent of instructors cite cheating prevention as their primary classroom concern. The difficulties that institutions encounter in crafting universal, appropriate, and effective AI policies are real, but they may be occupying too much space in the conversation. When faculty attention goes largely to keeping students away from ChatGPT, the lede is buried. More meaningful academic measures are needed.
The problem with artificial intelligence in the vast majority of college classes is that, as far as students are concerned, it works. Most students are not using artificial intelligence to answer multiple-choice questions on a biology exam. Most students are using it to pad their philosophy papers. That students succeed often enough in these efforts to pass their classes reveals a profound failure on the part of educators. After all, AI is of no use to the student taking an end-of-year, handwritten, blue-book exam. Extensive memorization, timed essays, and handwritten assignments have become higher-education antiques at precisely the time when they are most called for.
The inherent limitations of artificial intelligence in originality, personality, and substance ought to render it irrelevant to serious students. Indeed, it is irrelevant to many serious students. The job of professors is to demand and help create more serious students.
In all educational settings, there is a naturally occurring gap between high- and low-capacity learners. Eliminating this gap is impossible. Far better to work toward a system in which the gap is consistently reflected in actual grades. Educators need to recognize that their academic standards have slipped, such that ChatGPT is now capable of writing “A-level” responses to tired essay prompts. Moreover, instructors’ lack of attention to individual students often allows dishonesty to go undetected. Professors should demand more of students, even if this means letting some of them fall behind. The alternative is simply to write and rewrite ineffective AI-policing policies that end up buried deep in unopened student handbooks.
Gabriella DiPrima is a pre-law philosophy and economics student at Wofford College in South Carolina.