The ADD blog provides insight into the ADD project’s research across six university partners. Meet our researchers from Aalborg University, Aarhus University, Copenhagen Business School, Roskilde University, the University of Copenhagen, and the University of Southern Denmark. Read about their projects, activities, ideas, and thoughts – and gain a new perspective on the controversies and dilemmas we face in the digital age, along with ideas on how to strengthen digital democracy.
By Assistant Professor Aysel Küçüksu, ADD researcher, University of Copenhagen
Denmark’s ambition to be the world leader in public sector AI deployment may be regularly broadcast to the wider world, but it sits uneasily with the reality at hand. Research shows that, more often than not, AI projects developed for the Danish public sector end up in the so-called ‘digital graveyard’. This is problematic from both an economic and a human rights perspective, not least because having a publicly available record of the successes and failures of these systems is not a political priority.
Economically, this omission means that we are at risk of continuously channelling Danish public funds into repeating the failures of the past without any guarantees that such repetitions would differ, even in the slightest, from what has already been tried before. From the vantage point of human rights, off-the-record failures risk normalising experimental data practices that occur without sufficient parliamentary scrutiny, judicial oversight, or citizen awareness.
Transparency itself is a minimum human rights safeguard, and the current opacity raises democratic concerns. Not only that, but the absence of systematic documentation of ongoing automation efforts further erodes trust in public authorities by reinforcing a relationship of inverse proportionality, whereby increased digitalisation enhances the transparency of citizens vis-à-vis the state while simultaneously diminishing the transparency of the state vis-à-vis its citizens.[1]
In an effort to counterbalance these consequences, I co-founded the Offentlig-AI database, which maps Danish public sector AI projects (dead or alive). Data collection has been challenging, but rewarding. With more than 140 entries and counting, the database is a valuable resource for diagnosing some of the causes behind the size of the digital graveyard in Denmark. What becomes clear when reading through documentation and talking to public authorities is that Denmark has reached a structural limit: we can develop AI systems, but we cannot lawfully deploy them, for lack of an overarching framework that would enable this in a coherent and responsible manner.
This, at its core, is what I term a ‘legality checkpoint’: a recurring pattern in which AI projects advance through technically and financially demanding development phases, yet do not proceed to deployment. The reason is simple: Denmark has no comprehensive legal framework that enables the deployment of AI systems in the public administration. The result is a digital public administration that is technologically capable, but legally restrained, albeit with good reason: the legality checkpoint is protecting citizens against any unintended and illegal uses of AI until the appropriate legislative framework is in place. When legal safeguards fail to keep pace with technological ambition, withholding deployment is not a bureaucratic nuisance, but a necessary rights-preserving measure.
To understand why this happens, we need to look beyond individual cases and examine the broader architecture of Danish law. Today, Denmark relies on an ad-hoc legislative patchwork to enable AI systems’ deployment in the public sector: one-off legal provisions get drafted to rescue individual AI projects from legal uncertainty.[2]
This is the default response whenever a public authority wants to use AI in a specific administrative task. Even the recent announcement of the three large-scale projects that will be rolled out in the public sector was accompanied by a list of sector-specific laws that would need to be amended for the governmental commitment to come to fruition. While this approach can enable singular initiatives or solve isolated issues, it is fundamentally unsustainable. It consumes political and administrative resources, creates unequal opportunities across sectors, and leaves public authorities guessing whether they have sufficient legal authority to deploy AI-supported systems.
Not only that, but many public-sector AI systems operate across multiple sectors or draw on data streams from several administrative domains. Sector-specific statutory bases therefore risk creating asymmetries, where an AI system is lawful in one sector but unlawful in another. Perhaps most importantly, ad-hoc authorisations sacrifice legal certainty and ultimately risk jeopardising citizens’ fundamental rights by circumventing the kind of high-level democratic deliberation required by a serious and comprehensive engagement with the question of what types of AI systems we would like to (dis)allow in our public administration.
For these reasons, the only sustainable solution to the current conundrum is to create explicit, coherent, and future-proof legislation that governs the if, the when, and the strict conditions under which AI may be deployed in the public sector.
This should take the form of a general legal framework, an AI Law, which conditions the lawful deployment of AI across the Danish public administration, defines the relationship between automated processes and human discretion, and establishes the necessary elements for citizens to understand, challenge, and trust AI-assisted decision-making. Such a framework would not be a green light for unconstrained technological expansion; it would be a legislative tool designed to provide a legal basis for AI deployment, supplementing the existing regulation in the GDPR and the AI Act by articulating a set of rules and principles that clarify the general conditions for AI deployment.
We would not be the first in such an endeavour. Norway and Finland have taken decisive steps to update their administrative laws to allow for a more consistent application of AI in their public sectors. Norway has taken a very liberal approach: it has revised its administrative act so as to explicitly authorise both automated decision-making and automated decision support, so long as existing legal safeguards are respected, administrative discretion remains meaningful, and the law does not otherwise prohibit the practice.[3]
Arguably, this approach does not recognise the new types of challenges posed by AI, in the face of which existing legal safeguards and principles might be rendered ineffective and thus require more AI-specific fortification. One example is the phenomenon of ‘emergent discrimination’, whereby algorithmic decision-making results in discrimination that is not based on, and does not neatly map onto, existing protected grounds. Finland has a more cautious approach, establishing statutory rules that prohibit machine-learning-based automated decisions, whilst allowing rule-based automated decisions in ‘easy’ cases that do not involve discretion.[4] It has additionally legislated to ensure robust human oversight, documentation duties, and procedural safeguards in a model that seeks to balance innovation with accountability.[5]
Norway’s approach is undergirded by the belief that existing administrative law and EU law provisions can accommodate the responsible rollout of AI, but it also raises questions about whether legal safeguards are sufficiently robust when automation is broadly authorised.
Finland’s more restrictive stance, by contrast, reflects an explicit decision to shield citizens from opaque, probabilistic systems in domains where human judgment and contestability are essential and in the face of changes whose consequences we are incapable of fully grasping.
Even though Denmark does not have a tradition of amending its administrative law provisions, the Norwegian and Finnish examples can serve as inspiration for the distinctions and principles that might inform an eventual, higher-level engagement with the rules around AI deployment in the public administration.
To conclude, it is important to acknowledge that public authorities wield significant power over individuals’ lives; power that must remain reviewable, contestable, and accountable. Even with clearer legislation, technological possibility should never be mistaken for inevitability. A future Danish AI legal framework must acknowledge that not all public-sector tasks are suitable for automation.
This includes recognising the environmental consequences of training and operating large-scale AI systems whose development consumes public resources and contributes to climate burdens that will disproportionately affect vulnerable communities and future generations. A responsible regulatory framework must therefore create space not only for innovation, but also for the informed decision to refrain from using AI when it is not the right instrument for the task.
[1] Alston, P. (2019). Report of the Special Rapporteur on Extreme Poverty and Human Rights. United Nations, p. 3. See also OHCHR, ‘World Stumbling Zombie-like into a Digital Welfare Dystopia, Warns UN Human Rights Expert’ (17 October 2019), https://www.ohchr.org/en/press-releases/2019/10/world-stumbling-zombie-digital-welfare-dystopia-warns-un-human-rights-expert, accessed 14 January 2026.
[2] See, for example, Forslag til Lov om ændring af SU-loven (L152, Folketingstidende Tillæg A, Folketinget 2024-2025, Uddannelses- og Forskningsmin., j.nr. 2025-460), which seeks to establish a legal basis for the use of artificial intelligence in decisions by the Danish Agency for Higher Education and Science (SU-styrelsen) concerning the award of supplementary grants on the grounds of permanent physical or mental disability; and Beskæftigelsesministeriets bekendtgørelse om Arbejdstilsynets anvendelse af kunstig intelligens til brug for tilsyns- og vejledningsopgaver (BEK nr. 868 af 23/06/2025). In the latter case, an already existing statutory authorisation in Lov om arbejdsmiljø (§ 72, which empowers the Minister for Employment to lay down “more detailed rules on the Danish Working Environment Authority’s (Arbejdstilsynet) collection, processing, and disclosure of information …”) is relied upon as the legal basis for permitting the Authority to use artificial intelligence in its case handling.
[3] Prop. 79 L (2024–2025), https://www.regjeringen.no/contentassets/ef9495b7fc144db7a73242f90976ff45/no/pdfs/prp202420250079000dddpdfs.pdf.
[4] Suksi, M. (2024) ‘Creating Legal Basis for Automated Decision-making: A Review of Section 53 e of Finland’s Administrative Procedure Act and Beyond’, European Review of Digital Administration & Law 5(2).
[5] Similarly to Denmark, however, Finland has not yet legally regulated automated decision-support practices.
