
ADD Blogpost: Can AI work for democracy? 

The ADD project turns five years old. To mark the occasion, Professor Sine Nørholm Just provides insight into her research.

The ADD blog provides insight into the ADD project’s research across six university partners. Meet our researchers from Aalborg University, Aarhus University, Copenhagen Business School, Roskilde University, the University of Copenhagen, and the University of Southern Denmark. Read about their projects, activities, ideas, and thoughts – and gain a new perspective on the controversies and dilemmas we face in the digital age, along with ideas on how to strengthen digital democracy.

By Professor Sine Nørholm Just, Roskilde University

When thinking through the trajectories of ongoing developments, what are the chances that AI will work for democracy? In the five years that we have been working on Algorithms, Data & Democracy (the ADD project), this question has become increasingly pressing. As the mission of the project is to present solutions as well as diagnoses, our task is not only to identify and assess current developments, but also to suggest which potential sociotechnical AI-democracy configurations might work better than others. This statement is, perhaps, unduly cryptic; why not just rephrase our mission statement, shifting the focus from digitalization to AI developments, and say that we now aim to advance AI democracy?  

The ADD mission statement  

Well, if there is one thing that we have learned in the first five years of the ADD project, it is that AI is, in fact, not one thing (Suchman, 2023) – it is a moving part of many different and dynamic sociotechnical relationships, and it must be understood in and as its multitude. Similarly, the quality of democracy should not be measured in terms of stability, although stable societies may, indeed, be one important outcome of well-functioning democracies. Instead, we should assess the state of democracy on the basis of its capacity for disagreement, its ability to ‘contain multitudes’. Good democracy, we might say, enables controversial encounters in which recognition can be obtained without reconciliation (Just, 2024): you and I may not agree, but we support each other’s right to hold our respective opinions. 

As such, singular definitions and uses of AI technologies will be democratically detrimental, and societal developments that seek to depoliticize AI, to make it uncontroversial, are equally problematic. This returns us to the question of better AI-democracy configurations, which can now be rearticulated as those interrelations of AI and democracy in which technical and societal elements each support each other’s plurality, inviting the formation of what might, with a nod to Karl Popper, be termed ‘the open AI society’. Ironically, the company named OpenAI does not present itself as a model for such a society; while this is not a main point of this text, I will use it to illustrate certain aspects of the currently dominant AI-democracy configuration before getting back to the question of possible better configurations. In what follows, I will take stock of the state of AI controversy, as we have studied it in the first five years of the ADD project, and then indicate how we plan to continue these investigations in the five years to come. 

Controversial algorithms and algorithmic controversies 

In 2021, when we kicked off the project, algorithms and data had entered public discourse, and their democratic implications were already a key topic of concern. In fact, that was the basic premise of the project; it was what we set out to study, as we asked how algorithms and data appear in public controversies, how controversies are shaped by the technologies, and, finally, how algorithms and data can be organized in such a way as to create better opportunities for the articulation of controversies. 

Even with this starting point, however, we had not anticipated the explosion of AI controversy that we have seen over the past five years. Looking back from the vantage point of 2026 to the public debate about AI in 2021, its size and heat are like those of a candle in comparison to the all-consuming bonfire of the intervening years. It has been interesting, to put it mildly, to be able to follow this shift in public attention and concern as it was unfolding. 

In our studies, we have mapped the ways in which algorithms and data have become topics of public concern, how they are articulated as controversial. Working with a large data set of the Danish media coverage of ‘all things’ algorithms and data from 2011 to 2021, we were able to see the multifariousness of this topic as well as the ways in which issues of digital technologies have attached themselves to – or been articulated within – a wide range of other topics of concern, from soccer and pop music through international politics and Danish business, before increasingly being discussed as technologies (and their makers) in their own right. 

The map was curated by researchers at TantLab, Aalborg University, who also hosted data sprints for all participants in the ADD project, enabling us to get a sense of how controversies in which we were particularly interested appeared in the Danish media coverage. For instance, my colleagues at Roskilde University and I studied how issues of algorithms and data appear in relation to women’s health, finding two dominant articulations: First, individual responsibilization through access to data, e.g., with the help of period trackers. And second, collective rationalization through the implementation of algorithmic technologies in the Danish public health sector, most notably, in the case of mammography screenings (Dahlman et al., 2023).  

Controversy map of algorithms in Danish news coverage, 2011-2021  

Beyond articulations of controversial algorithms and data, we have been concerned with the extent to which and the ways that controversies have become algorithmic and datafied. What does it mean for democratic society when the organization of its publics is, to a large extent, outsourced to digital technologies and, not least, the companies behind them? We find one entry point for addressing this matter in the Danish digital democracy index, commissioned by the Centre for Social Media, Tech, and Democracy under the Ministry of Digitalization and conducted in a collaboration between the Centre for Digital Citizenship at Roskilde University and the Digital Democracy Centre at the University of Southern Denmark (Filstrup et al., 2025). 

The index shows that: 1) access to the arenas of digital democracy is consistently high for all citizens; 2) participation varies according to citizens’ different identity positions, with majority groups feeling fully represented and able to speak freely and minority groups experiencing constraints in the form of a lack of representation as well as virulent backlash against attempts at partaking; and 3) opinions are divided across different societal dimensions, with citizens placing the least trust in tech companies while also showing hesitancy towards the established institutions of democracy. While the overall index score is well above average, with the ubiquitous digitalization of Danish society driving up the numbers, citizens’ unequal participation and the widespread experience of disillusionment with decision-makers indicate that the current organization of digital publics does not serve democracy well – or at least, that digital publics could be better organized. 

The deluge of generative AI 

In November 2022, the articulation of AI controversies and the digital organization of that articulation were completely reconfigured by the sudden and momentous advent of generative AI. Or rather, that is how it felt at the time: from one moment to the next, AI content creation was everywhere and everyone had an opinion about it. What is more, those opinions were often either hopeful in the extreme or extremely pessimistic; in other words, the debate became both intensified and polarized. 

Remarkably, the big industrial players who were involved in developing and launching the new technologies were also involved in hyping the debate about them – and they were just as likely to be pushing the pessimistic narratives in which AI technologies might pose ‘existential risks’ to people and planet as they were to be articulating hopeful visions of ever-better AI futures. This is where OpenAI re-enters the account as one significant representative of the mutually reinforcing ‘boom-through-doom’ articulations (Roose, 2023). Beyond hyping AI technologies, what the optimistic and pessimistic accounts have in common is a sense of determinism. Regardless of whether you espouse a utopian or a dystopian view of what the AI future will bring, each vision comes with an in-built sense of inevitability: AI technologies will become dominant, whether you like it or not. 

From the perspective of the ADD project, this deterministic turn is particularly worrying, as it not only limits people’s ability to imagine alternatives and restricts the possibility of discussing those alternatives, but at an even more foundational level diminishes the sense that there even is a choice. Somewhat paradoxically, then, the hyperpoliticization of AI has depoliticizing consequences – turning the advent of AI societies into an assumed fact rather than a collective project. 

Make AI democracy controversial again 

Challenging the assumption that technological developments are inevitable, for better or worse, will be a key priority of the ADD project in the coming five years. To do so, we focus on three interrelated themes: the infrastructuring, sustainability, and meaning formation of AI democracies. 

The first theme directly tackles the question of how inevitability is constructed, aiming to show that the role of AI in society is a continuous work in progress: a contingent and contested process rather than a necessary or, indeed, ‘automatic’ outcome. As such, this research will aim at ‘saying the quiet parts out loud’, denaturalizing and, hence, repoliticizing the assumptions about AI that are baked into the process of making it integral to society, the process of its infrastructuring. This involves addressing issues of data that OpenAI and similar companies have yet to acknowledge and resolve: issues of ownership and potential compensation, issues of quality and scarcity – and of the eventual self-referentiality of AI, as more and more of the available data will be AI-generated. This raises the question of how society will figure in increasingly AI-on-AI relationships. 

The second theme speaks to one of the ways in which AI, in its current sociotechnical configuration, is directly harmful to society; namely, via the strain it puts on the planet. This involves confronting articulations of AI as a sustainability solution with the current unsustainability of AI technologies; notably, the energy and water consumption involved in the training and deployment of ever-larger models. And it involves experimenting with smaller, more specialized models that may improve efficiency while preserving effectiveness. Here, we see the contours of a shift away from the dogma that the larger model is the better model, and it becomes possible to challenge the idea that AI will inevitably become a ‘general purpose’ technology. Rather, we can think of AI in the plural; not as an all-encompassing system that ‘changes everything’, but as different tools that may help solve different challenges. One thing is clear: as long as AI is not environmentally sustainable, it cannot support healthy democracies. Hence, addressing the tensions involved in presenting AI as a solution to grand societal challenges whilst the technologies remain part of the problem is central to the reconfiguration of sustainable AI democracies. 

For the third theme, we reverse this figure, suggesting that AI technologies may, in some respects, offer solutions to problems that arise from using these technologies. One such problem is that of meaning formation, of how people gain knowledge and articulate their opinions once the production of texts, images, code, and other socially significant signs can be automated. This problem has been frequently raised since the introduction of ChatGPT and other generative AI systems, with many people expressing concern about the effects of AI on individual learning and collective sense-making. These concerns are legitimate; students cheating on exams and electorates manipulated by deepfakes, to name but two key examples, may contribute to the erosion of democracy. But the problem, here, may not be inherent to the AI technologies, and rather than denouncing AI-generated meaning formation in its entirety, we could think about whether and how AI tools can be used to enhance learning, creativity, participation, and deliberation. In this spirit, we will experiment with AI democracy rather than seek to deny its existence. 

Only by identifying the contingencies of AI society, by reinvigorating the discussion of what we think it ought to be, and by engaging with the many different things it can become do we have a chance of configuring open AI societies. 

References 

Dahlman, S., Just, S. N., Pedersen, L. M., Lantz, P. M. V. & Kristiansen, N. W. (2023): Datafied female health: Sociotechnical imaginaries of femtech in Danish public discourse. MedieKultur, 74: 105-126. 

Filstrup, S. H., Kristensen, J. B., Lorenzen, M. S., Andersen, K., de Vreese, C., Just, S. N. & Mayerhöffer, E. (2025): Indeks for det danske digitale demokrati 2024. Available at: https://www.ft.dk/samling/20241/almdel/diu/bilag/73/2994645.pdf

Just, S. N. (2024): Controversial Encounters in the Age of Algorithms. Bristol: Bristol University Press. 

Roose, K. (2023): AI poses ‘risk of extinction,’ industry leaders warn. New York Times. Available at: https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

Suchman, L. (2023): The uncontroversial ‘thingness’ of AI. Big Data & Society. https://doi.org/10.1177/20539517231206794