ADD Partner Conference: Where will GenAI take us in 10 years?

At this year’s conference for partners in Algorithms, Data, and Democracy, the theme was pitfalls and opportunities in generative artificial intelligence.

There was a strong turnout when Copenhagen Business School hosted the annual ADD project partner conference on November 1.

This year’s highly relevant theme was where GenAI—artificial intelligence like ChatGPT and other programs that can create content based on user prompts—will take us in 10 years.

To shed light on this question, the day’s speakers were asked to focus either on the benefits or concerns regarding AI—although most revealed that they had a foot in both camps.

According to an internal Microsoft survey of developers, generative AI has already increased their coding speed by 55%, said Mette Louise Kaagaard.

Microsoft: Digital Assistants Could Give Us an Extra Hour Each Day

The first speaker of the day was Mette Louise Kaagaard, CEO of Microsoft in Denmark and Iceland, who expects AI to be ubiquitous soon:

“We’re all going to have AI in everything, and it will become a completely integrated part of our everyday lives, even in situations where we won’t notice it,” said Mette Louise Kaagaard, referring to new statistics showing an expected 40% annual growth in the AI market globally through 2032. According to her, many companies are already experimenting extensively:

“One of the biggest trends right now is that companies are using generative AI to help their employees become more efficient and to relieve them of some administrative tasks. When people use their own digital assistants, we see that they can save about an hour of work time each day,” she said.

Boston Consulting Group: AI Can Democratize Knowledge and Skills

Boston Consulting Group has also encouraged its employees to use AI. Its experience shows that the lowest-performing employees benefit most from AI. Thus, GenAI can help “democratize skills,” said the second speaker, Marianne Dahl, managing partner at BCG:

“We often talk about technology creating an A-team and a B-team, but this technology can actually elevate those who may not have access to substantial knowledge.” Although Marianne Dahl was cast as an AI optimist for the day, she highlighted a few concerns, ranging from climate impact to gender inequality in the use of new AI tools and the geopolitical struggle unfolding between the U.S., China, and Europe caught in between.

Marianne Dahl pointed to other studies showing that a ChatGPT tool can even be better at expressing empathy than humans because, unlike people, the algorithm never gets tired, hungry, or frustrated. This can be particularly useful for friendlier customer service, said Marianne Dahl: “So with these tools, you can actually make the world more humane.”

Thomas Ploug: Four Criteria Shaping AI’s Future

Thomas Ploug, a professor at Aalborg University’s Department of Communication and Psychology, describes himself as a “cautious optimist” about AI’s impact.

In his view, we should focus less on positive examples and more on the overarching structures when predicting the future. He outlines four particularly important criteria: regulation, ethics, politics, and technology.

“There are areas where AI performs well, and areas where AI performs very poorly. When considering AI’s future, one must look at the entire landscape of criteria that will influence its direction,” he said. Based on this, Thomas Ploug expects highly specialized AI to break through in the healthcare sector, which he researches, because algorithms are already able to diagnose certain diseases better than highly specialized doctors:

“So, the claim is that we will see ‘narrow’ AI in areas where there are truly significant gains to be made. We’ll see chatbots, avatars, and, of course, administrative implementations of AI over the next 10 years.”

Artificial intelligence will affect all of our fundamental human rights, said Louise Holck.

Danish Institute for Human Rights: AI Needs Clear Frameworks—or We’ll Be Overrun

The first speaker to zoom in on concerns was Louise Holck, director of the Danish Institute for Human Rights. And there’s plenty to address, she asserted: “When you look around the world, human rights, the rule of law, and civil society organizations are under pressure,” she said. “Therefore, it is essential that technology is developed in a way that considers rights and legal security, which hasn’t always been the case in the past.”

“There are many opportunities in AI for all of us, but as a society, we must set clear frameworks so that we, as citizens, don’t have our fundamental rights steamrolled.”

“How we will use AI is still an open question,” said Sine Nørholm Just.

Sine Nørholm Just: Who Will Clean Up After AI Misinformation?

The head of the ADD project and professor at Roskilde University, Sine Nørholm Just, then took the stage, arguing that as a society, we still have the ability to shape AI development.

“If we think of generative AI as a social technology, what kind of society does this technology invite us to create?” she asked. “If we look at the current direction, it may be worrying. But my main message is that it doesn’t have to be.”

To illustrate AI’s challenges, Sine Nørholm Just offered an example anyone can try for themselves: Googling ‘Baby Peacock’. The search primarily yields AI-generated images.

“You can probably guess,” said Sine Nørholm Just, pointing to the projector screen. “The dullest one is the real one. The problem is that once these images are circulated, the internet becomes flooded with misinformation—fake news about baby peacocks.”

This leaves us with entirely new challenges and questions to address, she said. “You can compare it to an oil spill. Who is responsible for cleaning it up, who should pay, and will all the fake peacocks ever be removed—or does it even matter?”

Jan Damsgaard: When We Invent the Airplane, We Also Invent the Plane Crash

Author of the new book AI Between Reason and Emotion, Jan Damsgaard, gave the concluding keynote, urging greater proportionality in the AI debate.

“I believe these technologies solve more problems than they create, but there are negative consequences. When we invent the airplane, we also invent the plane crash,” said Jan Damsgaard, who, in addition to being an author, is a professor at CBS.

Jan Damsgaard believes that in Europe, we’ve chosen the most restrictive way to view AI, which could have major consequences. “In public administration, severe errors occur daily that could have been avoided if we used AI,” he said, highlighting healthcare as another area that could benefit from AI, as it could ease doctors’ administrative burdens and give them more time for patients.

Thus, he argues that we risk doing ourselves a disservice if we don’t seize AI’s opportunities.

“The best way to secure European norms and values is to have an industry that can keep pace in this AI race. My concern is that fear of AI will lead to us no longer being able to decide our own future,” Jan Damsgaard concluded.

Join the Next ADD Event on AI in Municipalities

Throughout the fall, the ADD project offers several exciting events. On November 20, you can attend a major conference on AI in municipalities. Read more and sign up here.

There was also time between presentations for networking and “speed dating” between researchers and partners in the ADD project.