A qualitative evaluation of algorithmification in a Scandinavian NGO
Impact lead: Ida Schrøder, Aarhus University

Subproject: Public administration and prediction

We only develop digital solutions to support the children – we have no motive at all to develop just to develop.

The above quotation is from the manager of the Children’s helpline in the Danish Children’s Rights organisation, and it encapsulates the ambiguity often present in AI development projects. While AI-infused systems are frequently introduced with a clear purpose of either solving a problem or improving existing practices, they often entail far-reaching transformations of work processes, qualities, and the environments in which they are embedded. This kind of ‘algorithmification’ is a general trend in Danish welfare society, where the goal is for future welfare services to be supported by personalized data analysis (Government 2022), and where organisations and professions pursue this goal to the best of their ability.26 In the case of AI for decision support in social services, the main stakeholders are local authorities and private firms that support and advise the public sector on how to develop and use AI solutions in their work.27 A third group of main stakeholders are the non-governmental, civil society organisations (NGOs), who aim to supplement public social services and improve the lives of targeted groups of people.

This case study investigates how voluntary counsellors use a specific AI solution, the Counselling Assistant (AICA), and explores what the algorithmification of the Danish Children’s Rights organisation entails for their voluntary counsellors and the organisation at large. The focus on evaluation, combined with the absence of a clear set of evaluation criteria, fostered rather open research questions to guide the data collection:

  • What significance does the AICA have in the work of the volunteers?
  • How does the significance of the AICA change over time and in different situations?
  • Where, when, and by whom is the algorithm made professional?

To answer the above research questions, the research team applied a situational ethnographic approach, aiming to collect data from as many different situations involving the AI solutions as possible. The research team combined data from volunteers, employees, developers, managers and lawyers, focusing on work, expectations, and attitudes, as well as from documents and decision-making meetings where the AI solutions were described and assessed. The theoretical foundation of the research relies on Science and Technology Studies (STS).

A central point in STS is that technologies are never stable.28 They are rather relational, meaning they are integrated differently and have different significances in different situations.29 ‘Algorithm’ is, in this case, understood as an unbounded model, as its reach and consequences encompass far more than what is hidden in the computer’s code language.30 This theoretical approach is particularly suited to uncovering and noticing what we do not take for granted and therefore do not expect to see.31 

The Danish Children’s Rights organisation, as the main stakeholder in this subproject, is a well-known NGO in Denmark that offers free counselling to children and young people up to the age of 25. Their mission is to “stop neglect of children”. This refers to neglect caused by children’s parents and by society, e.g., local governments, schools, day care institutions, etc. Correspondingly, the Danish Children’s Rights organisation both helps children and represents their voices in Danish society, often with a critical stance against national policies and public sector organisations.  

To fulfil their mission, they raise funding from private investors, receive donations, and provide direct help services with volunteer staff. To continuously develop, target and improve their services and means of representation, they collaborate with public/private organisations and researchers. In the specific case of developing AI solutions, they raised funding from a philanthropic foundation and collaborated with a local tech firm. These stakeholders – children, public/private partners, funders and Danish citizens – all contribute to the Danish Children’s Rights organisation for moral reasons, because they want to do good. However, there is always the possibility that a partner or funder aims to exploit the brand of the Danish Children’s Rights organisation to boost their own social responsibility. To mitigate this kind of exploitation, the Danish Children’s Rights organisation insists on a very high ethical standard, continuously evaluating their collaborations, services, ideas, and impact against the consequences for individual children. This applied to our research as well. And, as we shall see in the following, this duality between following an agenda – for instance, the implementation of a new AI support application – and doing good for the individual was very much present in the development and use of their AI solutions.

The subproject’s pathway to impact was built through close collaboration with the Danish Children’s Rights organisation and continuous engagement with its staff and volunteers throughout the research process.  

The collaboration between Ida Schrøder and the Danish Children’s Rights organisation started with an informal conversation, which evolved into a formalised collaboration between Ida, as the researcher, and the Danish Children’s Rights organisation, as a case for studying the development and use of an AI solution to support their voluntary staff. This kind of co-creation of the field and scope of the research constitutes a partnership collaboration.

To make the research tangible, Ida invited Mathilde Høybye-Mortensen, professor at VIA University College, and Marie Leth Meilvang, associate professor at University College Lillebælt, to join her research team. This transdisciplinary team structure, with one member based close to the Danish Children’s Rights organisation’s branch in Aarhus, enabled a more comprehensive and locally embedded research process, enhancing both data access and interdisciplinary analysis.

Furthermore, to sustain engagement from the representatives of the Danish Children’s Rights organisation, the research team held two reflection workshops together with the organisation. The aim was to qualify data and validate ethical aspects of the research. Representatives of the Danish Children’s Rights organisation expressed the usefulness of having the opportunity to discuss why they are doing what they are doing, and noted that the discussions in the workshops helped them create a language for talking about their AI initiatives, not least their ethical implications.

Early in the research, ethical concerns around access to sensitive child data were identified as a potential barrier. Rather than compromising the integrity of the study or risking ethical breaches, the research team and the Danish Children’s Rights organisation agreed to revise the scope of the project to include only one AI solution, the AICA, leaving out other, more tentative AI solutions.

As part of balancing the ethical concerns and the research uptake, the research team and the Danish Children’s Rights organisation held regular meetings throughout the research process, negotiating in a friendly and respectful manner how the research team could maintain their research code of conduct while also offering the Danish Children’s Rights organisation results of value to their work.

The project culminated in a report outlining seven key points of attention for both the Danish Children’s Rights organisation and other NGOs working with AI to support human services.

While grounded in the context of the Danish Children’s Rights organisation, these points of attention have broader applicability across nonprofit and public sectors. They offer a practical and ethical framework for those planning, developing, or implementing AI in socially sensitive contexts.   

At the time of finalising the description of this ADD impact case in March 2026, the Danish Children’s Rights organisation has gone through several iterations of their AI solutions. The latest the research team has heard from them is that the AICA is now assisting the voluntary counsellors with their administrative task of filing their conversations, while the AICA is no longer supporting the volunteers in their conversations. This mirrors a shift towards data quality and away from the complex task of juggling dual ethical responsibilities for model fairness and situated fairness. While the Danish Children’s Rights organisation may have considered such changes independently, they have acknowledged that the research helped them discuss how best to develop the AICA. This alignment between the Danish Children’s Rights organisation’s adjustments and the report’s focal points suggests that the research has impacted their strategic direction on AI use.

Beyond the Danish Children’s Rights organisation, the research has sparked broader public and professional dialogue. Researcher Ida Schrøder has been repeatedly invited to speak at events and institutions, using the case of the Danish Children’s Rights organisation to explore the ethical and practical challenges of AI in professional settings. Invitations have included the Municipality of Copenhagen, the Alumni Network for Master Students at Adult Learning and Organisational Change, the National Agency for IT and Learning’s yearly conference on online supervision, and the ADD symposium for researchers, managers, and public sector practitioners.   

These engagements demonstrate that the research resonates beyond academia, offering relevant insights into the ethical governance of AI in human-facing professions. 

This project exemplifies how research built on co-creation, ethical reflection, and sustained stakeholder engagement can lead to tangible organisational changes, inform broader public debate, and contribute to the development of more ethically responsible AI practices.   

Moreover, it shows the importance of working with key stakeholders and engaging them in the research process to guide the research toward useful insights and recommendations. To get the full picture of the project’s impact on the Danish Children’s Rights organisation’s implementation of a new language model, it would be interesting to reach out to them to get feedback and evidence of the impact. It would also be interesting to investigate the Danish Children’s Rights organisation’s partners’ reactions to the new AI solutions to grasp the reach of the project’s impact.

To better grasp the reach of the impact, it would be interesting to learn how Ida’s talks for other public sector organisations have impacted their work with algorithmic solutions and ethics.

Høybye-Mortensen, M. (2021). Science Technology Studies (STS) – de ikke-menneskelige aktører i socialt arbejde. In M. Christensen, R. E. Jørgensen, N. H. Lysen, & C. Rosenberg (Eds.), Videnskabsteori og socialt arbejde (pp. 217-234). Samfundslitteratur.

Laage-Thomsen, J., & Ratner, H. F. (2024). Kunstig intelligens i den offentlige forvaltning: sammenhænge mellem algoritmisk regulering og automatisering af beslutninger i de danske AI ”signaturprojekter”. Politica. Advance online publication. https://tidsskrift.dk/politica/article/view/153262/195915

Laage-Thomsen, J., Ratner, H. F., & Schrøder, I. (2025). The beginning of AI-driven welfare? An inquiry into how public sector AI experiments shape the Danish welfare state. In Digitalization, Data and Welfare: Sociotechnical Approaches to Service Delivery (pp. 38-56). Edward Elgar Publishing. https://doi.org/10.4337/9781035338153.00010

Meilvang, M. L. (2023). Working the Boundaries of Social Work: Artificial Intelligence and the Profession of Social Work. Professions and Professionalism, 13(1). https://doi.org/10.7577/pp.5108

Ratner, H. F., & Schrøder, I. (2022). The emerging ethical plateau of predictive algorithms in the public administration of Danish child protective services. 1-25. https://events.ruc.dk/democracyanddigitalcitizenship/conference

Ratner, H. F., & Schrøder, I. (2023). Ethical Plateaus in Danish Protection Services: The Rise and Demise of Algorithmic Models. Science & Technology Studies, 44-61. https://doi.org/10.23987/sts.126011

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1-12. https://doi.org/10.1177/2053951717738104

Star, S. L., & Strauss, A. (1999). Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work. Computer Supported Cooperative Work (CSCW), 8, 9–30. https://doi.org/10.1023/A:1008651105359

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794