Universities are continuing to navigate the challenges and opportunities posed by artificial-intelligence (AI) tools such as ChatGPT. While many are wary of these tools’ power and their capacity to enable student cheating, others point out that AI has legitimate research uses. Students (and faculty), however, need to be taught how to use it well. There are times in the research process when consulting AI is appropriate and times when it is not, and students need to understand the difference. This is what three experts at UNC-Chapel Hill are working to accomplish through the creation of online information modules.
An effort of Carolina AI Literacy, the modules were created by Daniel Anderson, director of the Carolina Digital Humanities and of the Digital Innovation Lab; Dayna Durbin, undergraduate teaching and learning librarian; and Amanda Henley, head of Digital Research Services. The project was funded through one of six seed grants awarded by UNC-Chapel Hill’s School of Data Science and Society, a grant program designed to encourage interdisciplinary research in data science.
The Martin Center spoke with Anderson and Durbin to discuss their work on these modules in greater detail. The following transcript has been edited for clarity and length.
Martin Center: AI is a powerful tool, but it can be used poorly or incorrectly. What are you hoping students will get out of these tutorials?
Dayna Durbin: One of the things that we were hearing from students was that they were intrigued by generative AI tools. But they weren’t quite sure if they were staying within the bounds of the academic honor code, or if they would perhaps be accused of cheating or plagiarism if they used some of these tools. We wanted to boost their confidence in using AI tools and [for them to] feel like they had the skills to make those calls. So we created three modules to start. The first one focuses on how to prompt AI, how to craft the prompts that you put into these AI tools. The module that I worked on focused on fact-checking and bias, making sure that you’re avoiding misinformation when you use these tools, because they can “hallucinate,” or create information that’s not factually correct. [We wanted to make] sure that students understand that piece.
And then the third module focuses on avoiding plagiarism. Many of the same skills that you would use with other types of tools and resources also apply when using these generative AI tools. We just wanted to help students better understand how the tools work and how they can use them as aids in their thinking and writing processes, rather than as replacements for the thinking and writing that they do on their own.
Daniel Anderson: Building on what Dayna said, we did want students to have a sense of being in charge of the decisions they make with AI: getting enough background understanding and enough practice so that they don’t feel like they’re just consumers in this AI space but can produce their own knowledge by making smart choices.
Martin Center: Yes, one of the modules addresses the importance of students’ background knowledge on a given topic and emphasizes active thinking, and how both are directly related to an AI’s effectiveness. Can you give us some examples of how students can use their knowledge of subject matter to guide the AI to give them more specific and useful information?
Daniel Anderson: In the first-year writing course that I taught, we often focus on different genres, and one genre is the literature review, where you take a research topic and review and summarize different perspectives, different bits and pieces of information. So we were experimenting with ChatGPT and came up with a sample topic: noise pollution from server farms. If you’re in a rural community and there’s a server farm, it turns out it creates a whole bunch of noise pollution. This is what Dayna and my other librarian colleagues might call a nice narrow topic. It’s not just something like “happiness”; it’s a very focused topic.
So we asked ChatGPT to give us a literature review on this topic, and it came up with this wonderful set of sources. There was an article about noise pollution in rural North Carolina from the Journal of Audio Aesthetics, or something like that. And we thought, “Wonderful, this saved us all this labor.” As soon as we went to the library and started to look for it, it turned out that not only was there not an article with that name, but there wasn’t even a Journal of Audio Aesthetics. That had all been made up. What the students were able to do at that moment was decide that it made more sense to go straight to the library databases and use what they knew about the topics they were interested in (find articles, survey them, do traditional research) than to untangle what was legitimate from what was not in the AI output.
Then, in other instances, if they found a legitimate article and realized that parts of it were relevant but some weren’t, they could ask AI to summarize that piece for them quickly. That turned out to be an appropriate use of the technology. What they ended up doing was almost intuitively saying, “This isn’t going to be that helpful for me in one instance,” but in another instance recognizing, “This is how I want to use it.” That’s the kind of literacy that we’re hoping for: a kind of situational awareness that students are able to develop.
Martin Center: I would like you to elaborate on the problem of misinformation a little bit more. If a student wanted to find a reliable scholarly source, should they go the traditional route of a database like Google Scholar and use AI only to summarize it afterward?
Dayna Durbin: That’s kind of what we have been recommending in the library. We’re finding in particular that the free tools, like the earlier versions of ChatGPT, work on a prediction model, so if you ask them a question and they can’t find the information, the tool will just create some information out of thin air. And it can sound very convincing. As Dan mentioned, [AI] can come up with article titles and journal titles that sound very realistic. So what I’ve been coaching students to do is use the AI tools to help them fill in their background information, or even to narrow in on a research topic. Maybe they’re interested in something very broad. They can talk with a tool like ChatGPT and say, “Here’s my wider research interest; can you give me some narrower topics that I might investigate?” and that can help them focus on what they want to research.
Another great use that I’ve coached students on is coming up with keywords or search terms that they can use in a tool like Google Scholar or a library database. Sometimes just coming up with the right words to find the articles you want can be a difficult process, especially if you’re new to a research topic and haven’t quite gotten a handle on the language that experts use to describe it. ChatGPT is really helpful for that. Those are some ways that I’ve used it and coached students to use it: [Use AI in] the beginning stages of the research process, and then take those keywords and search terms to Google Scholar or another library database and use them to find legitimate sources that actually exist, rather than sources the AI tool has cobbled together based on its training data.
Daniel Anderson: I think that makes a lot of sense. Dayna’s describing a research and composition ecosystem: there are library databases, there’s Wikipedia, there’s AI that can help you generate keywords. There’s a whole bunch of different options that you can use to explore. It’s useful to think about the stage that you’re in [and] what’s going to be the most useful at any given moment, and then, in terms of misinformation, to know which stage is going to be more or less prone to provid[ing] helpful guidance for you. [We are] establishing a baseline for students, which is this “trust but verify” mode.
You might be asking [the AI] to summarize some of the major battles of the Civil War, to provide some background information before you dive into your specific topic. [The AI is] going to be pretty good at that; it has reasonably accurate historical information. You can save yourself some time by doing that. Then double-check: if there’s some battle that doesn’t ring true, you can track that down. It’s this kind of ecosystem model of “How does intellectual work happen?” and “When can you fit in some helpful tools?” It can save you some labor if you have all these Civil War battles [to learn about]. Then you say, “I’d also like to know the date and the location; please add those to the list.” It’ll do that for you very quickly. And you realize, “I want to be able to sort that; can you please format that [as] a spreadsheet?” It makes perfect sense to do that, rather than copy[ing] and past[ing] every one of those into a spreadsheet yourself. The skill is knowing that that’s a little labor-saving move at a certain moment but not appropriate at a different moment.
Martin Center: AI is such a new tool available to students. How do you properly cite an AI source?
Dayna Durbin: That’s a great question. As you said, these are very new tools. I’ve only been using ChatGPT [for] a year or so. They change so rapidly that it’s hard to keep up with the influx of tools that are coming out. It’s interesting advising students on how to cite their sources or how to cite their use of AI tools. What we are recommending at a campus level is [to] double-check with each instructor [to make sure] that they’re okay with your using these AI tools. We do have a spectrum: Some faculty members are asking students to hold off on using these tools in their particular class or in their discipline. They feel like it’s not a good fit for their discipline or their particular course, and so they have a note in their syllabus that says “AI tools are not appropriate for this course; don’t use them.” And then we also have professors who are designing assignments that require students to use the AI tools. There’s a really wide spectrum.
When I advise students, I always say go to the syllabus and double-check with your instructor. Just make sure that you’re meeting their expectations and requirements for the course. But if the professor is okay with using AI tools, what we’re advising students to do in that case is put a note in their references or works cited, or, if they have a methods section in a research paper, [to note the] ways that they used these AI tools. As an example, maybe I used the AI tool to help me brainstorm my research topic, or to help me develop keywords that I then used in certain databases. [It’s] almost a disclosure statement. In terms of citing the output, that’s kind of a gray area right now.
Each citation style (APA, MLA, Chicago, and so on) has advice on how it recommends citing the tools, but that advice hasn’t yet been incorporated into the official style guides. It’s still a little bit messy right now, and each style guide’s advice is slightly different. But for me, the most important thing is just acknowledging that you did use the tool and being transparent in your use. It’s interesting, too, that at the research level we’re even seeing individual journal publishers advise researchers: “Here’s how, if you use AI, we want you to disclose it.” It’s been interesting watching the guidance roll out and try to adapt to the tools as quickly as they change.
Daniel Anderson: You do need to be agile, and it’s difficult to make blanket recommendations or proclamations at this point. Underneath a lot of this, I think the big concern that educators have, particularly people focused on writing, is AI replacing the thinking that students are supposed to do. Writing instructors are concerned about the link between thinking and writing. They see writing as an activity of exploring and generating knowledge. If that piece goes away, that’s a big concern. That’s where these declarative statements [come in]: “This is how I used AI in this project: it helped me in the invention stage; when it came to drafting, I did all of that myself; for editing, I had to change all of the styles, so I asked the machine to do that.” [It’s important to keep] track of how the intellectual work of the project was shaped by AI and [make] sure that you’re aware of that, both to document it and share it with instructors and to make sure that you get to participate in the actual process of working through the ideas of a project.
Martin Center: I had a question about professors’ concerns over the use of AI. I’m sure that students, whether or not they’re transparent about it, are often going to be using AI. In the long run, what advice do you have for professors who are going to have to navigate this new environment, with both the pluses and minuses of this tool? Do you recommend that they not ban AI altogether but rather encourage students to add a disclaimer or declarative statement in their assignments?
Daniel Anderson: I think there’s good news and bad news here. The bad news first: it’s gotten more difficult. With generative AI, tools like ChatGPT can produce prose in ways that a keyword search on Google never could. So there are more possibilities for confusion and for prose to be produced that students haven’t created. That’s a challenge. The good news is that the kinds of instructional paradigms that have been successful in the past will actually be the most helpful in pulling through this moment. If you’ve always asked students to gravitate toward topics that are interesting to them, meaningful, and original, then that’s going to be a big help. If you build a healthy intellectual and writing process into your assignments, where there’s scaffolding and development in bits and pieces, then that’s going to help a lot, too. Good instructional approaches will probably ameliorate a lot of these concerns. Students will be engaged and interested in their work, and if you challenge them, they’ll rise to the challenge and do a nice job. If you simply say, “Produce X amount of words for me tomorrow,” that assignment is going to be more prone to those problems. Good instruction is the path.
Martin Center: Is there anything you would like to highlight about the information you share in the AI modules?
Daniel Anderson: One of the things that’s been important to me is to contextualize and historicize this moment. November 2022 is when [ChatGPT] came out, and there’s this “dog years” phenomenon for technology: [it] seems like 20 years ago, [but] this just came out. There’s a long history of humans doing things to augment their intelligence and using that in communication situations, using it to build out culture and build out knowledge. I like to cite Socrates, who voiced all of these concerns about the invention of writing. One concern [of his] was that you’re going to mediate the exchanges that people have. If I’m looking at you face to face and there’s some confusion, I can notice it, and you can ask me a question; we have a back and forth. The other piece is memory. If we are able to use writing, then I don’t have to remember my speech.
And Socrates was worried that if we don’t have to remember anything, it’s going to rewire the brain, as psychologists are fond of saying. We’re going to lose this ability to have a good memory. That was a very early instance of taking human thinking and augmenting it. Memory didn’t go away; it just moved. Now memory is being put on this piece of paper that I have next to me, and I don’t have to remember every item. And that’s just one long-ago instance. The invention of the telegraph changed our thinking about speed and space in the world. The invention of the word processor changed our thinking about the permanence of language when we write it down. There are all kinds of shifts that have taken place. I think if you put that historical perspective in place, it helps you better navigate the moment that we’re in right now.
Dayna Durbin: I try to stress [to] students that a lot of the same skills you use in the research and writing process, the critical thinking and the hard intellectual work, don’t go away. Generative AI is a tool that can help you along that path, but it shouldn’t be replacing any of those steps. In a similar way, when the internet became widely available as a tool, it changed the way that we research and write, but we’re still doing that same intellectual work. I think we’ll see a similar process as these tools become more widespread and more widely used.
Shannon Watkins is the research associate at the James G. Martin Center for Academic Renewal.