
ADD Blogpost: Organizations are saying no to AI – Ole Willers dives into one specific barrier

Ole Willers turns his attention to the issue of legitimacy, suggesting that the barrier may not be as difficult to overcome as it seems.

The ADD blog provides insight into the ADD project’s research across six university partners. Meet our researchers from Aalborg University, Aarhus University, Copenhagen Business School, Roskilde University, the University of Copenhagen, and the University of Southern Denmark. Read about their projects, activities, ideas, and thoughts—and gain a new perspective on the controversies and dilemmas we face in the digital age, along with ideas on how to strengthen digital democracy.

By Ole Willers, Postdoc at the Department of Organization, Copenhagen Business School (jow.ioa@cbs.dk).

The debate around artificial intelligence (AI) today is largely dominated by a disruption narrative. According to this narrative, the combination of big data and machine learning will fundamentally change the way we work, learn, and solve problems—whether in the public or private sector.

This expectation of imminent transformation puts pressure on organizations to ‘get on board’ – lest they fall behind. Sociologists Marion Fourcade and Kieran Healy describe this pressure as the data imperative: a self-reinforcing dynamic where data collection and analysis become a necessity rather than a choice (Fourcade & Healy, 2017).

However, reality does not live up to the hype. On the contrary, several analyses show that the proportion of organizations actually using AI is far below expectations. According to the European Commission, only 13.5% of European companies had adopted AI technology by 2024 (European Commission, 2025, p. 13). This stands in stark contrast to the EU’s goal of having three out of four companies using AI by 2030.

AI usage is also unevenly distributed across sectors. Eurostat data indicate that it is primarily the IT and consulting industries driving the development, while manufacturing and service sectors are lagging behind. At CBS, we refer to this phenomenon as uneven algorithmization (Gamerdinger & Willers, 2025).

But why are so many organizations choosing not to adopt AI?

AI Barriers: An Overlooked Research Field

Innovation research shows that implementing new technologies depends on a complex interplay of organizational and social factors (Kim & Chung, 2017). In the case of AI, high costs, legal uncertainty, and internal resistance are often cited as key barriers. As both research and public debate have been overly focused on AI’s transformative potential, we still know relatively little about how and why organizations actively reject the technology—and that’s problematic.

If we fail to understand why AI is rejected, we risk missing out on innovations that could help solve major societal challenges. We also risk concentrating the benefits among a few actors, thus exacerbating inequality.

In this blog post, I focus on one particular barrier: the legitimacy problem.

The Legitimacy Problem

Organizations depend on their actions being perceived as legitimate—a classical point in organizational theory (Meyer & Rowan, 1977). In our research within the ADD project, we repeatedly see how legitimacy emerges as a central barrier to AI adoption.

For example, Helene Friis Ratner and Ida Schrøder (2024) showed how predictive algorithms in child welfare services sparked intense ethical and legal controversies—leading ultimately to a more cautious approach to AI in the public sector.

But the problem extends beyond individual cases. A 2020 survey found that 62% of 9,640 European businesses saw low public trust as a barrier to AI adoption. One in four identified social acceptance as a core challenge (European Enterprise Survey, 2020).

When speaking with Danish business leaders, we often hear a particular phrase: “the tabloid test.” The question is: would our AI solution end up as a critical front-page story if it became publicly known?

This concern is not unfounded. Public skepticism across Europe is widespread (Ehret, 2022), and warnings about AI’s potential harms have dominated bestseller lists in recent years. Such concerns serve as a crucial counterbalance to the ‘data imperative’ described by Fourcade and Healy. I call this counterforce the legitimacy imperative—which might prove decisive in shaping AI development grounded in shared societal values and norms.

However, the legitimacy imperative also leads to conflicts over what is considered acceptable or not—what Germans call Deutungskämpfe (battles over interpretation). The drafting of ethical guidelines for AI has become a central arena for these battles.

What is Ethical AI, and Who Decides?

Ethical AI has become a first-order management issue (Berente et al., 2021). Although expert committees and ethical guidelines have been established, many of these guidelines are too abstract to be operationalized in practice. The EU’s guidelines for “trustworthy AI” are a good example: ambitious on paper but difficult to implement concretely.

Our research in the insurance sector illustrates how attempts to tailor these guidelines to specific use-cases quickly encounter conflicting interests (Gamerdinger & Willers, 2025). Rather than clarifying ethical dilemmas, these tailored guidelines often present contradictory conceptions of what constitutes ethical AI usage.

As a result, organizations often end up feeling more confused than assisted.

A New Direction: Preventing Controversies

One suggestion is to seriously engage with the legitimacy problem from the start: engage with the public, listen to concerns, and tell a compelling narrative.

The insurance industry’s recent experience with AI-based damage prevention serves as an example. This sector previously championed data optimism. A 2017 report predicted that the next 15–20 years would bring more change than the previous 300 years (F&P, 2017), focusing on new forms of risk assessment and product innovation.

Yet, this digital revolution never fully materialized. None of Denmark’s 17 largest life insurance companies have implemented AI for pricing. Regulatory uncertainty and concerns over surveillance dampened their initial enthusiasm.

Instead, a new direction has gained traction: using AI to prevent harm. Three out of four major Danish pension companies now have strategies for AI-based prevention, such as identifying risks for long-term illness. The technology is the same, but the context and purpose have shifted—and it works.

In March 2025, PFA announced a collaboration with the Socialist People’s Party—historically one of the strongest critics of private health insurance—to use AI for preventing long-term illness. The narrative here isn’t about surveillance, but about mutual benefits: a healthier life, a stronger welfare state, and a more sustainable business.

From Legitimacy Demands to Shared Value?

Management scholar Michael Porter, along with Mark Kramer, introduced the concept of shared value in 2011—the idea of linking financial gain with social responsibility (Porter & Kramer, 2011). Could AI’s legitimacy problem serve as a catalyst for such a development?

Perhaps.

Few sectors are as dependent on trust and shaped by moral dilemmas as insurance (Horan, 2021), but ethical issues also lie at the heart of many other industries. Our experiences within the ADD project suggest that an explicit focus on ethical dilemmas can foster productive dialogue—internally and externally.

An example is the dilemma-based approach to cybersecurity developed by Laura Kocksch and Torben Elgaard Jensen (2024). In collaboration with F&P, we at CBS have developed a similar framework for mapping ethical trade-offs (Gamerdinger & Holm, 2024).

Dare We Slow Down – Just a Little?

The legitimacy problem is certainly just one reason among many why companies hesitate to adopt AI. Uncertainty about existing and future regulations, as well as internal organizational dynamics, also play significant roles. My intention was not to argue exclusively for focusing on legitimacy, but rather to provoke reflections on how actively engaging with legitimacy challenges could pave the way for more socially beneficial AI usage.

The data imperative pushes for rapid action. Engaging actively with legitimacy takes time.

Can we afford to be slow? Probably not.

Can we afford to be a little less fast? I believe we can.

Just like the hare in Aesop’s fable: It’s not always the fastest who wins.

Reference list:

Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3).

Ehret, S. (2022). Public preferences for governing AI technology: Comparative evidence. Journal of European Public Policy, 29(11), 1779–1798. https://doi.org/10.1080/13501763.2022.2094988

European Commission (2025). ‘AI Continent Action Plan’ COM(2025) 165 final.

European Enterprise Survey (2020). European enterprise survey on the use of technologies based on artificial intelligence: Final report. https://doi.org/10.2759/759368

F&P (2017). Radikal Digitalisering i Forsikringsbranchen. Prepared by Brian Due, Mads Hennelund, and Jesper Højberg Christensen for Forsikring & Pension, September 2017.

Fourcade, M., & Healy, K. (2017). Seeing like a market. Socio-Economic Review, 15(1), 9–29. https://doi.org/10.1093/ser/mww033

Gamerdinger, A., & Holm, J. (2024). Ethical AI in Life and Non-Life Insurance: A Framework for Mapping Ethical Trade-Offs in AI Use. Copenhagen: Forsikring & Pension.

Gamerdinger, A., & Willers, J. O. (2025). Solving AI ethics? Hybrid expertise and professional power in EU ethics governance. Journal of European Public Policy, 1–28. https://doi.org/10.1080/13501763.2025.2499113

Horan, C. (2021). Insurance era: Risk, governance, and the privatization of security in postwar America. University of Chicago Press.

Kim, J. S., & Chung, G. H. (2017). Implementing innovations within organizations: A systematic review and research agenda. Innovation, 19(3), 372–399. https://doi.org/10.1080/14479338.2017.1335943

Kocksch, L., & Jensen, T. E. (2024). The mundane art of cybersecurity: Living with insecure IT in Danish small- and medium-sized enterprises. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), Article 354. https://doi.org/10.1145/3686893

Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83(2), 340–363.

Porter, M. E., & Kramer, M. R. (2011). Creating shared value: How to reinvent capitalism and unleash a wave of innovation and growth. Harvard Business Review, January–February 2011.

Ratner, H. F., & Schrøder, I. (2024). Ethical plateaus in Danish child protection services: The rise and demise of algorithmic models. Science & Technology Studies, 37(3), 44–61. https://doi.org/10.23987/sts.126011