In a recent Digital Education Council survey, 86 percent of college students said they "'regularly' used programs such as ChatGPT in their studies," and over half said they use such tools at least weekly. Despite these high rates of AI usage among undergraduates, new surveys have found that many of them are concerned about the ubiquity of AI in higher education. Although this advanced technology can greatly aid learning and research if used properly, some students worry about the ways in which misuse of this new tool might negatively affect their college education. At the same time, many worry that they haven't received sufficient training in the new technology. Universities, it seems, have done little to allay these concerns.
One of the most common student concerns about AI in higher education is academic dishonesty. Cheating was the primary issue for undergraduates who acknowledged a "strong dislike" of AI in a recent Wiley report. And for good reason: AI makes cheating on assignments quicker and easier than ever before. A student, crunched for time or unenthusiastic about a particular assignment, needs only to open his laptop, type his questions into ChatGPT, and slightly rephrase the results before submitting his "work" to the professor, possibly earning a higher grade than a peer who completed the assignment on her own.
This scenario is not uncommon: Turnitin, a plagiarism and AI-writing detector, recently found “‘at least’ 20 percent AI-drafted content” in “11 percent of 200 million submissions it received,” according to Inside Higher Ed. Such high percentages indicate that students’ fears about AI cheating are not unfounded. A student who uses AI to cheat hinders his own learning while placing his peers at an unfair disadvantage in the gradebook.
Many students also take issue with the lack of clear communication concerning AI use at their universities. Some 86 percent of undergraduates surveyed by the Digital Education Council said that they were "not fully aware of the AI guidelines at their university." Danny Bielik, president of the Council, believes that "conflicting" AI policies from different professors at the same university may exacerbate the problem. He explains, "Students might be in one class being told one thing about what they are and aren't allowed to use generative AI for, and then they go into another class and they're told something completely different." Rather than risk an academic-integrity violation, some students deal with the confusion by avoiding AI altogether: 37 percent of students in the Wiley report said they refrain from using AI for fear that their professors would "think they were cheating if they used AI." While this approach may prevent integrity issues, it also keeps students from gaining valuable experience. In a world where nearly every field is adopting this new technology, such hesitance poses another problem for career-oriented undergraduates.
A lack of AI experience is a prominent concern for college students looking ahead to their careers. According to a recent Cengage survey, 73 percent of employers "use GenAI," and 58 percent are "more likely to interview and hire those with AI experience." Both numbers will likely grow over time. Yet fewer than half of current college students feel they possess "sufficient AI knowledge and skills." The Digital Education Council survey found that undergraduates "increasingly expect training to be incorporated into their studies"; most universities, however, have yet to introduce AI courses into their curricula.
The implementation of clearly defined, university-wide AI policies could help to diminish these students' concerns. It is virtually impossible to eradicate AI cheating, but a clear explanation of what AI use is permitted could encourage some students to do more of their own work rather than leave it all to AI. Further, clearly outlining what constitutes a violation of academic integrity, as well as the resulting consequences, may dissuade AI cheating in some cases. It would also allow more conscientious students to use AI properly rather than avoid it altogether for fear of cheating accusations. Lastly, a clearly defined AI policy is the first step toward implementing AI training for professors and students.
The Digital Education Council explains that "[although] students do not want to become over-reliant on AI … most students want to incorporate AI into their education." By encouraging the proper use of AI within their institutions, universities can help bring about the best outcomes for students and professors alike.
Sophia Damian is a student at Wake Forest University and a 2024 Martin Center intern.