The integration of artificial intelligence (AI) into everyday life occurred in what seemed like a blink of an eye. At every turn, some form of an AI “assistant” now offers to correct grammar, help compose emails, take notes of video calls, or distill large amounts of information into easily digestible summaries. Yet, even though AI is only an arm’s length away, many of its uses remain unexplored by academics and non-academics alike.
In academia, some simply haven’t taken the time to learn how to use AI, while others are skeptical, even suspicious, about its influence on research and learning. A second camp holds that, for better or worse, AI is here to stay. Rather than ignore it, the best strategy is to learn how to harness its abilities for work efficiency, research, and teaching.
The University of North Carolina at Chapel Hill falls in this second camp. For several years, the university has been working to increase AI literacy on campus through courses and information modules.
The university recently took another step to develop its AI strategy. In November, Jeffrey Bardzell was appointed vice-provost for artificial intelligence and chief AI officer. Bardzell is also a professor at, and former dean of, the School of Information and Library Science. In this new capacity, Bardzell will be leading the university’s thinking about and approach to AI.
The Martin Center recently spoke with Bardzell about what he hopes to accomplish. The following transcript has been edited for clarity and length.
Martin Center: How can the university best prepare students for the workforce in the rapidly changing environment caused by AI?
Bardzell: That’s a central question to what I’m doing. It’s really important to start with what we’re trying to do, because we’re a public university, and our mission is to develop people. That includes preparing them for the workforce, but also preparing them to be free-thinking citizens in a democracy, contributing members of our community, and contributors to its material and spiritual prosperity. So we think about AI. It’s definitely changing how a lot of work gets done, and it’s heavily influencing how we think about workforce development. But other aspects of human development may be less influenced by AI. There’s the great tradition of liberal-arts education that stretches back to Greece and runs straight through to our times. That tradition, of course, teaches critical thinking: how to write, how to think, how to work in teams, how to lead, and how to deal with ambiguity. But students also need real-world skills and professionalism, so we’re seeking a balance.
Insofar as we’re preparing students for the workforce, I would say a couple of things. One is that AI is ubiquitous. Every profession is not only likely to be shaped by AI, but actually quite a few professions are likely to disappear because of AI, and other ones are going to be created. I think we’re headed towards a very volatile time, and I think it’s really important that everybody, as a result, not only has skill with AI but also has the agility to deal with the fact that they’re likely going to go into professions [that will] face a considerable amount of disruption. I think that AI preparation is an all-hands issue. I think the whole university needs to take this on, whether you’re going to be a nurse or an entrepreneur or a public-school teacher, a government-service worker, a historian, whatever you may get into—all of this is going to be relevant to what you do, and so we need to set people up for that.
Martin Center: Will there be new AI-focused course requirements, perhaps as part of the general-education curriculum?
Bardzell: There are hundreds of classes at UNC today that treat AI as an object of inquiry and/or use AI as a tool to get work done. What I think we can improve is how coordinated that is, and that really gets at the question you’re asking. There are two strategies we might pursue, and I think we’re pursuing both of them. One is to create one or more courses focused on some kind of AI literacy, to make sure there’s some baseline experience and exposure to the concepts, to [teach] how to use the tools but also to be accountable [for] the results.
AI sometimes doesn’t tell the truth, or AI sometimes shows bias or sometimes produces an inappropriate or even ridiculous response. You have to know when it’s doing those things, because you’re going to be on the hook if you’re in an organization and you present something, [so] you’ve got to be able to recognize that. So, being accountable [for] AI. We also talk about the ethics of AI, like privacy, bias, and these kinds of issues. But if we do a course like that, it runs the risk of being a one-and-done situation, and then it becomes easy to get compartmentalized. I’m sure you remember courses when you were an undergrad where you took the course, and it went to a little corner of your head and never got back out. That’s not going to serve us well. We also need to get AI integrated throughout the curriculum. And, as I said, that’s happening, but I think it can happen in a more intentional and designerly way than we’re currently doing it.
The other piece is AI-use guidelines. One thing that’s challenging is that, in some courses, you’re allowed to use AI to do A and B, but not C or D. In others, you can’t use it at all. And in other courses, you can use it for A, B, C, or D. We don’t have a great vocabulary to articulate that, and I think some students are experiencing confusion. That’s something we can improve. I expect to have that result in time for the fall, and the other stuff is ongoing.
Martin Center: When it comes to AI use, how much discretion will faculty members have within their classrooms?
Bardzell: When I first became a dean of a school that was very interdisciplinary (I was an associate dean at the time), students would come to me, and they would complain, “So-and-so professor gave me a grade that I don’t like.” What do I do about that? I learned some humility in that case, because I’m an expert in human-computer interaction, but I’m not an expert in cybersecurity, and we had a cybersecurity group in that same school. What you have to do is share in the trust. The faculty do need a lot of discretion about how they implement these things. At the same time, I think it’s reasonable for administration to set some expectations and some norms and also to provide resources and support to help people do that. There’s a bit of a balance that we’re trying to strike: respecting their expertise, [as] I don’t have the expertise that they all have, while recognizing that there are certain things we’re trying to achieve, and I very much want to communicate those and then support people.
Martin Center: Are there limits on the extent to which AI can be woven into curricula and processes?
Bardzell: The main limits that I can point to have to do with technology and infrastructure and constrained resources. We’re in a constrained-resource environment. We can’t afford everything. Some of these things are very expensive, and sometimes they’re not available on-demand in the way that we would like them to be. So there are, in that sense, limits. But if you mean organizational limits, no, I do expect this to be across campus. I actually said this in a recent leadership meeting: I expect every person on campus, whether they’re a faculty member or a staff member, to know within their own domain of expertise how AI is being understood, how people are using it, and what the cutting edge is. And they need to have a considered response to that. And if you’re a faculty member, then that considered response should be reflected in your teaching. And if you’re a staff member, it should be reflected in your practice. I think everybody should know. If you’re an accountant, you should know what’s happening with AI and accounting. If you’re a professor of history, you should know how AI is influencing how the work of historians gets done, both in professions and in your discipline.
Martin Center: Different disciplines will have different uses for AI, as you just said. The extent to which each discipline uses AI will vary, correct?
Bardzell: Correct. And that’s actually very healthy. There are certain disciplines, for example, that are like my own discipline. I’m from a computing discipline, but it’s a human-centered computing discipline, and so my training really focuses on usability, what’s good for humans, desirability, those kinds of issues. On the other hand, you have computer scientists who are very technical, and they should be innovating on technology, so you want the different voices. There are some disciplines that may express some skepticism of AI, but that skepticism can wind up leading to better AI, because it leads to better governance of AI. It leads to better awareness of what can go wrong, and it can help surface some of the ethical issues and make people more sophisticated users of AI. I think lively debate is totally fine. What’s not okay is people just disengaging. That’s the one thing that’s not okay.
Martin Center: A couple of Harvard professors did an experiment showing that physics students learned better from a purpose-built AI that functioned as a tutor than in a typical, large lecture classroom. What are your thoughts about the findings? Will UNC run similar experiments?
Bardzell: I have a few different thoughts on this. One is, I think there’s been considerable research in education and in pedagogy for a couple of decades [that’s] really been questioning some of the sage-on-the-stage-type pedagogies, and innovations like the flipped classroom or labs or studio-based education, service-based learning, and problem-based learning have been attempts to counter some of the limitations of that. In that sense, I wasn’t super surprised that that one pedagogy, especially in a large-style classroom, could be improved upon. I think learning agents have really high potential. I think the idea that a student might have an AI agent to support them in an individual way [where], over time, AI starts to learn what that student is good at, or where they consistently make mistakes, or where they consistently struggle to focus, at a minimum, can help the students with their own metacognition.
At the same time, that same kind of work can be used in learning analytics to actually predict students who will get in trouble long before they’re actually irredeemably in trouble, so that we can intervene sooner. I think there’s really high potential to improve there, so I’m really excited. But one thing I’m going to say is, I think a lot of pedagogical disruption is on the way, and we need to get out ahead of it and not wait for it to happen. I think this study just gives us another piece of evidence why.
Martin Center: Which administrative functions do you anticipate can be the most improved by AI, and do you expect it to perhaps replace some of these functions?
Bardzell: I think all of our operations can be improved with AI, and I think all of our operations eventually will be transformed. I think it’s going to take time. I think, increasingly, we have the technological infrastructure, but we’re going to need the culture change. We’re going to need the upskilling, the systematic upskilling of our staff. And [those things are] happening, and as they continue to happen, I think these kinds of transformations are going to be organic. One story I’ve heard over and over, when I talk to people in industry, they say, in the past, we had this job to do, and we had a team of nine people, or six people, or 25 people who worked on that task, and now we’re doing that task with two people and AI. Then I say, “Well, what happened to the other seven people?” And they’re like, “We didn’t lay off the other seven people. We’ve got the other seven people also in two- and three-person teams.” The point of this is, what counts as a team is going to change, and it’s hard for us to understand what exactly that’s going to look like. But, as the teams change, I think the nature of the work is going to change, and I think it’s going to become much more efficient.
Martin Center: Do you think AI programs are, or ever will be, sentient?
Bardzell: There are a narrow set of specialists who are focusing on a question like this and should be focusing on a question like this, and they include philosophers, cognitive scientists, and computer scientists. I think, for most of the rest of us, [this question can] distract us from serving students and from solving the problems we have been talking about up until this point: getting AI embedded in the curriculum, thinking about AI in relationship to changing expertise, thinking about technological infrastructure, thinking about optimal pedagogies. So I, a little bit, want to resist the question, just because I think sometimes people talk about these questions, and they wind up not talking about what they should talk about.
But, just so you have an answer: based on what I’ve read from the experts I just mentioned, and their explanations of why they hold the positions they do, I do not believe it’s sentient, and I do not believe it’s imminently going to be sentient, just based on how it works. I would really rather focus on serving our students. And I hope most of my colleagues, except the ones who should be asking questions like that, are really focusing on students.
Martin Center: I do want to give you the opportunity to address anything else you’re working on at UNC right now that we haven’t covered, or if you would like to go into more depth on a topic.
Bardzell: Thank you. I appreciate that, and the opportunity to talk. I’ve only been in the position a couple of months. What I’m really trying to do is build up a partnership and find the places where we’re all in agreement, whether it’s the chancellor, the people who are trying to hire our alums, the faculty, the students themselves, or our professional staff. I think there’s much more agreement than people realize, and so a lot of what I’m trying to do is to articulate what that is. There is some hesitation around AI, and I think there are understandable reasons for that. A big part of what I’m trying to do is to help address that, to take it seriously and respectfully, but also to encourage people to lean into their expertise and work through it. As a university, we shouldn’t be buying into Silicon Valley hype and just repeating what they’re saying, but if you are an expert in religious studies or in geography, you really need to know what geographers and religious-studies scholars are doing with this and to show that leadership. That has been a big piece.
I think we’re trying to find ways to incentivize faculty, students, and staff alike, all three groups, to use AI, and each of them has different motivations and different resource constraints, so it’s a matter of learning those and trying to overcome them. In terms of a tangible outcome, I really would like to see AI literacy solved in a clear, explicit way. And the AI-use guidelines, I’d really like to get those addressed by the end of the semester, so that, heading into the fall, we’ve got that in place. That, I think, creates a better foundation to then pursue the more systematic work of embedding AI intentionally throughout the curriculum.
Shannon Watkins is the research and policy fellow at the James G. Martin Center for Academic Renewal.