Ban or embrace.
That is the dilemma universities everywhere find themselves in as ChatGPT and other generative AI tools have become – suddenly – mainstream. In one analysis from March 2024 of more than 200 million university papers, 11% were found to have been written in part or wholly by AI.
“The education sector is the first that has had to grapple with generative AI, because the students have simply brought it into the classroom,” says Ida Schrøder, a postdoc at Aarhus University in Copenhagen and contributor to the Algorithms, Data and Democracy research project. “So the universities have found themselves in over their heads and unsure how to respond, and that is why I have been interested in studying what happens to our universities when we add this new unpredictable ingredient of generative AI,” she explains.
Calculator or chameleon
Ida Schrøder has mapped the universities’ reactions, spoken to faculty staff from all the Danish universities and followed the actions undertaken at a Danish university during the spring and summer semesters of 2024, both to understand how generative AI is used and to see how a university tries to regulate something it does not fundamentally understand.
“What stands out clearly is that it is not possible to limit this to a discussion of what the students are doing, because AI is not just like a calculator, as some have said. The calculator changed how we teach mathematics, but it was limited in what it could do. ChatGPT and Copilot and these other systems on the other hand change constantly.”
“So the way I like to think about generative AI systems is like a chameleon that changes depending on its context. AI changes the education rulebook, but it also has an impact on staff wellbeing, inequality between students, climate impact and techno stress. It is about which strategy the university wants to bet on for its future,” she says.
And the mental models we use to think about AI are important, Ida Schrøder goes on to say:
“If universities look at ChatGPT as a tool that students can use to cheat, as a form of fraud, which is how it was formulated at one point in Aarhus University’s guidelines, then it follows that you have to handle that as a problem, almost like a crime, right?”
“But when ChatGPT is used as a conversation partner, or to give feedback, which is how many students use it, then it is a question of didactics or pedagogy. Then you have to find new ways of teaching that can accommodate that.”
“We have to rethink how we teach at universities, but that does not mean we have to reinvent the wheel. Perhaps we can focus more on old virtues like group exams or exams based on active participation,” Ida Schrøder says.
After the tsunami
ChatGPT was released to the public in November 2022, and washed over the universities like a tsunami. Many universities’ first response was to try to prevent students from using it out of worry that it would make grading impossible. Since then, some universities like Aarhus University have reversed their position – allowing students to use AI in acknowledgement that it is near impossible to prevent – while others have delegated the decision to individual faculties.
One of the respondents Ida Schrøder spoke to compared it to the Spider-Man meme: All the universities are trying to understand what the others are doing, in the process creating policies that look very similar – and delegating the responsibility downwards in the organisation.
Who knows how to regulate generative AI? It can be easier to point fingers than find answers.
“One of the insights from my research is that the responsibility lands on a very individual basis,” Ida Schrøder says. “Either on the individual faculty, student or lecturer, who are tasked with figuring out how to implement AI responsibly, because the universities do not know how.”
“How can you ensure that the students are still fulfilling the learning criteria? Which criteria need to be changed, and how? It is a major problem because most lecturers do not have the skill or familiarity to assess this. But no matter how we handle this, it is key that we involve the students much more,” she says.
This is a process that will be difficult for many professors, lecturers, researchers and other staff at universities, Ida Schrøder predicts.
“For those of us who work in higher education I think it is a very sensitive topic, and many are feeling overwhelmed. That is also why you will hear terms like ‘professional grief’ or ‘techno stress’ because educators feel like they are drowning in the process.”
Similarly, not all students are rushing to embrace AI:
“It is a spectrum from those who say they use it all the time to those who do not want to use it at all. The most extreme are those who refuse to use it because of the climate impact, and who object to the university asking them to use these tools at all,” Ida Schrøder says, and likens generative AI to a hungry robot: “Because AI uses so much more energy than a Google search, universities also have to think about it in terms of their climate accounting and responsibility.”
Who is in the room when AI policies are made?
How generative AI will change universities is near impossible to predict. But one thing seems certain: It is here to stay. And that is why it is important to focus on who is taking advantage of this technology and who is not, Ida Schrøder says.
“One surprising aspect that my research helped me see is the gender imbalance. In the universities it is generally younger men who are early in their careers who have taken the lead on becoming experts in generative AI, who participate in the different fora that weigh in on how we can best adapt the curriculum and so on.”
“But if it is only faculty with extensive technological expertise, mostly male, who are in the room when decisions are made about how generative AI can be assessed in exams or used in day-to-day education, then that will have a major impact on our educational landscape,” Ida Schrøder says.
These insights made Ida Schrøder rethink her own position as an expert at Aarhus University. It has been important to her to counterbalance that gender imbalance: “I have felt that it was important for me to advise our faculty and our students on how they can best make sense of this, including some of the female students who have been more apprehensive about using generative AI, but several of whom I have seen use it after a gentle prod and flourish.”
Ida Schrøder has a PhD in management accounting and organising from Copenhagen Business School. Her new research into generative AI is undertaken in collaboration with Helene Friis Ratner, academic co-lead of the Algorithms, Data and Democracy project. The research is expected to be finished by summer 2025.
Click here to read some of Ida Schrøder’s previous publications.