
Impact leads: Sine Nørholm Just and Marcus Lantz, Roskilde University
Subproject: Digital organisation of datafied health
Context
Studies of the implementation of AI technologies in organisational contexts consistently find that AI solutions serve strategic goals only when humans collaborate with algorithmic agents, establishing sociotechnical relationships of collegiality rather than control or competition. However, achieving such collegial relationships has also proven to be a main challenge; it is simply difficult for humans to get to know their algorithmic colleagues, leading to technical domination or social rejection rather than sociotechnical collaboration. Thus, across the various use cases, one should not only consider potential technological gains in terms of increased efficiency, accuracy, etc., but also take into account what might be termed ‘human pains’ – i.e., reasons why different stakeholder groups might fail to understand that technological innovation is needed or could even object to this claim. In other words, one must begin by assessing the situation, asking: What is the problem that AI might solve? How could AI solve the problem? Why is it a good solution? And what are the drawbacks of and reservations to this solution? Ultimately, when is the implementation of AI technologies a good idea? And how can it also become a good process?
Broadly speaking, the healthcare sector is in many ways well-suited to the implementation of AI systems, not only because of alignment between algorithmic and diagnostic rationalities but also because of broader societal preconditions like changing demographics and limited resources. In the Danish public healthcare sector, these preconditions increase political pressure for reform and for accelerated use of innovative solutions, not least because there is significant public support for and trust in the implementation of AI in healthcare in Denmark. The promises of innovation include saving resources, developing treatments, improving staff well-being, and freeing up human resources. In sum, the development of AI technologies for clinical purposes is raising hopes for better and more efficient diagnoses and treatments.
The field of radiology is particularly affected by a shortage of human resources, and technological advances, spearheaded by AI image recognition tools, promise to solve the challenge of delivering sound diagnoses to more patients with fewer human radiologists at hand. The Danish mammography screening program, for which all women above the age of 50 are eligible, is a key example. Given changing demographics and limited human resources, the program is struggling to remain successful. Hence, it is not surprising that stories of imminent technological advances have reached the Danish public, raising hopes and expectations among medical professionals, patients, and politicians, while also causing some experts to call attention to the nuances that risk being lost in the heat of anticipation. The point, then, is not to let the sense of urgency overshadow the need for analysis.
The study
The case is part of the ADD subproject on digital health, based at Roskilde University and revolving around the digital organisation of personal and public health. Questions of decision-making that involve datafied health, including ethical concerns about data protection and privacy as well as human autonomy, are at the core of the subproject. Specifically, this case was part of Prins Marcus Valiant Lantz’s postdoc project on automated decision-making in healthcare.
Seeking to contribute to an understanding of how human-AI collaboration may be organised to achieve societal goals, the research focused on the strategic decision to develop and implement clinical AI within the Danish mammography screening program. The case is critical, as demographic changes and limited access to human experts suggest that the program will only continue to be successful if it is technologically enhanced. The purpose of the case was to offer best practices for implementing diagnostic AI by exploring principles of organising explainable AI diagnostics. The case highlights three such principles:
- Begin from nuanced versions of the debate on AI.
- Create solutions with all relevant stakeholders: Medical doctors, AI developers, patients, and their relatives.
- Make use of emerging knowledge from ongoing AI projects.
Collaboration with the Centre for Clinical Artificial Intelligence (CAI-X)
During his postdoc, Marcus established a close collaborative relationship with the Centre for Clinical Artificial Intelligence (CAI-X).
CAI-X is a joint centre between Odense University Hospital (OUH) and two faculties at the University of Southern Denmark (SDU): The Faculty of Engineering and the Faculty of Health Sciences. The purpose of the centre is to bridge technology and healthcare and “to ensure full utilisation of artificial intelligence to benefit patients and staff of the hospital”.
Given the collaborative nature and research design of the project, methods include not only ethnographic approaches such as observations and interviews but also forms of action research, most notably the organisation of workshops with different stakeholder groups.
Pathways to impact
The foundation for impact pathways was built in the close collaboration with CAI-X and the continuous engagement with a broad range of stakeholders.
Building relationships and creating context-sensitive insights
Through close collaboration with CAI-X, Marcus explored how diagnostic AI was being developed and implemented differently across Denmark. While the Capital Region adopted a commercially developed AI tool due to pressing clinical demands, such as long waiting lists and radiologist shortages, the Region of Southern Denmark followed a more locally driven, research-informed path. This comparison illuminated the importance of adapting AI strategies to local healthcare contexts – a key insight stemming directly from the collaboration.
Crucially, insights were not developed in isolation but co-produced with clinical researchers, computer scientists at CAI-X, public health officials, patients, and members of the public. Engagements such as workshops, conferences, and panel debates ensured that a wide range of voices informed the research. This co-creation approach enhanced the legitimacy and applicability of the findings.
Enhancing capacity through stakeholder workshops
By bringing the different stakeholders together, workshops served as a core method for generating real-time impact. They enabled participants to share perspectives, learn from one another, and reshape their understanding of diagnostic AI implementation. These workshops were more than knowledge-sharing sessions; they became active arenas for capacity-building and reflection, laying the groundwork for key policy recommendations and insights that participants could immediately apply in their respective domains.
For example, participants came to recognise the need for a nuanced, context-specific approach to AI implementation rather than adopting one-size-fits-all solutions based on past implementations elsewhere. They also developed a deeper appreciation of the socio-technical dynamics involved in introducing diagnostic AI, including the professional and ethical relationship between humans and technology.
Policy recommendations
The project culminated in a set of co-created policy recommendations for implementing diagnostic AI in the Danish public healthcare system. These are outlined in the policy brief Healthy scepticism and emerging optimism – A human perspective on artificial intelligence in healthcare, jointly produced by the research team and CAI-X.
The key recommendations include:
- Begin from nuanced versions of the debate on AI.
- Create solutions with all relevant stakeholders: Medical doctors, AI developers, patients, and their relatives.
- Make use of emerging knowledge from ongoing AI projects.
These recommendations represent a paradigm shift from top-down technology implementation to participatory, practice-informed innovation. Patients’ perspectives, such as the comment “It’s a really good tool, if there’s a human involved all along” [translated from Danish] support the importance of human oversight and relational trust in the deployment of AI systems.
The theme of human-technology relation is further explored in the book chapter The ethos of automation, which theorises the replacement of interpersonal trust by sociotechnical reliability when automation enters the healthcare space.
The recommendations are elaborated in the policy brief, co-created by the research team and CAI-X. Together, they lay the foundation for changing the implementation of diagnostic AI in mammography screening – and in healthcare more broadly.
Broadening influence through public engagement
To expand the reach and impact of the research, the team engaged in multiple public and professional outreach formats. The research team and CAI-X joined forces to discuss the research findings and their relevance for national health policy in a webinar hosted by the think tank Monday Morning. Key concerns raised in the discussion included data governance, clinician-AI collaboration, and the central role of patients in technology development, highlighting the following aspects:
- Uncertainty about how and where data will be stored.
- The collaboration between professionals and technology has two dimensions: the direct interaction between professionals and technology, and the professionals’ mediating role between patients and technology.
- The importance of including patients and clinicians in the AI development team, as it is not meaningful to leave out the context of implementation in the development phase.
Furthermore, an op-ed was published in the Danish Medical Journal, a magazine tailored to medical professionals, translating the research insights into a format accessible to clinicians and reinforcing the practical importance of inclusive, context-aware AI implementation.
By disseminating the research across different platforms and for diverse audiences, the project significantly broadened its influence and contributed to shaping a more ethically grounded, practically feasible, and socially informed pathway for diagnostic AI in healthcare.
Conclusion
The collaboration between Roskilde University and CAI-X has been defined by a dialogical and co-creative partnership, where workshops and joint inquiry formed the core of both the research process and the development of policy recommendations. Thus, the project has demonstrated how impact can be achieved through collaboration, and it emphasises the importance of considering context sensitivity and sustained stakeholder engagement.
Rather than treating stakeholders as passive consultees, the project embraced the principle of “nothing about us without us”, embedding active stakeholder participation at most stages of the research process. This model of inclusive, participatory knowledge production has proven critical for addressing the complex, context-specific challenges of implementing diagnostic AI in mammography screening.
Most importantly, the project has contributed to reframing diagnostic AI implementation as a socio-technical challenge, where success depends not only on technical efficiency but on trust, human oversight, and meaningful inclusion of those most affected by the implementation.
Looking ahead, the collaboration opens several avenues for further impact. One is to revisit the implementation landscape to assess whether and how the recommendations are being applied in real-world mammography screening programmes. This would offer insights into how evolving public debate and policy developments around AI are affecting practice. Another promising direction is to investigate how trust can be fostered not only between patients and professionals, but also between humans and diagnostic technologies.
References
Bader, V. & Kaiser, S. (2017): Autonomy and control? How heterogeneous sociomaterial assemblages explain paradoxical rationalities in the digital workplace. Management Revue, 28(3): 338-358.
Faraj, S., Pachidi, S. & Sayegh, K. (2018): Working and organizing in the age of the learning algorithm. Information and Organization, 28(1): 62-70.
Gulbrandsen, I. T. & Just, S. N. (2024): Artificial intelligence in organizational communication: Challenges, opportunities, and implications. In: Ndlela, M. N. (ed.), Organizational Communication in the Digital Era. Examining the Impact of AI, Chatbots, and Covid-19 (pp. 51-77). Cham: Palgrave Macmillan.
Lantz, P. M. V. & Just, S. N. (2025): The ethos of automation: Strategy-as-rhetoric in the development of trustworthy clinical AI. In: Hess, A. & Kjeldsen, J. E. (eds.), Ethos, Technology, and AI in Contemporary Society. The Character in the Machine (pp. 278-298). New York: Routledge.
Meijer, A., Lorenz, L. & Wessels, M. (2021): Algorithmization of bureaucratic organizations: Using a practice lens to study how context shapes predictive policing systems. Public Administration Review, 81(5): 837-846.
Olsen, T. L., Martensen, M. M., Yding, H. & Futtrup, L. R. (2023): Minister vil bruge kunstig intelligens og rollebytte til at løse mangel på radiologer og radiografer [Minister wants to use artificial intelligence and role swaps to solve shortage of radiologists and radiographers]. DR. https://www.dr.dk/nyheder/indland/minister-vil-bruge-kunstig-intelligens-og-rollebytte-til-loese-mangel-paa-radiologer.
Ploug, T., Sundby, A., Moeslund, T. B. & Holm, S. (2021): Population preferences for performance and explainability of artificial intelligence in health care: Choice-based conjoint survey. Journal of Medical Internet Research, 23(12): e2611.
Raftopoulos, M. & Hamari, J. (2023): Human-AI collaboration in organisations: A literature review on enabling value creation. ECIS 2023 Research Papers, 381. https://aisel.aisnet.org/ecis2023_rp/381.
| Read more about the subproject |
|---|
| MAGIC – AI for Breast Cancer Diagnostics (CAI-X) |
| ADD webinar on AI in the health care system |
| Lantz, P. M. V., Sørensen, U. D. & Nielsen, P. B. (2024). Policy brief: Healthy scepticism and emerging optimism – A human perspective on artificial intelligence in healthcare |
| Agency for Digital Government (n.d.). Optimisation of mammography screening through artificial intelligence in the Capital Region of Denmark |
