
ADD Blogpost: Researcher criticizes oversimplified AI debate and suggests ways to improve it

Torben Elgaard Jensen calls for a more nuanced debate about AI and here presents three suggestions for achieving this.

The ADD blog provides insight into the ADD project’s research across six university partners. Meet our researchers from Aalborg University, Aarhus University, Copenhagen Business School, Roskilde University, the University of Copenhagen, and the University of Southern Denmark. Read about their projects, activities, ideas, and thoughts—and gain a new perspective on the controversies and dilemmas we face in the digital age, along with ideas on how to strengthen digital democracy.

Torben Elgaard Jensen, TANTlab, Aalborg University

In this blog, I will attempt an exercise that is almost impossible. Almost all current discussions about AI are firmly focused on the future. What will the next technological breakthrough be? What will a future society with AI look like? Will we become smarter or dumber? Will AI systems gain agency, consciousness, or emotions? Will AI replace tens of thousands of public-sector employees? Will Europe be sidelined? Will democracy survive? Will the climate survive? Will humanity survive? These have been the hot topics in recent years, especially since the launch of ChatGPT in 2022, which made AI tangible and accessible to a global mass audience.

But there are other perspectives on AI that we could call AI afterthoughts. Now that we have experienced and lived through several years of AI debate, we can – at least for a brief moment – look back and reflect on how the debate has unfolded. That is what I will do in this blog. As will become evident, I draw on a broad selection of social science researchers who have produced both case studies and reflections on the interplay between technological and societal development.

For the impatient reader, I can hint at the conclusion in advance. My claim is that the drama surrounding AI has revived some old bad habits. The debates of recent years have breathed new life into some oversimplified ideas about technology and technological development. I am convinced that we can do better in the coming years.

Afterthought 1: Can We Escape the ‘Epochal Tyranny’?

My first afterthought concerns time, or more precisely, periodization. Do we imagine that we are standing on ‘the threshold of a new era’ defined by AI, or do we imagine that AI will integrate into countless other complex dynamics of development? There is no doubt that the discourse about a completely new AI era has captured much of the attention in recent years. OpenAI has skillfully staged the narrative of a breakthrough with world-changing consequences. Consulting firms have chimed in with predictions of an A-team of companies that will rise to unprecedented heights on the wave of AI, and a B-team that will sink to the bottom. High-profile debaters have written books promoting new spectacular periodizations. For example, Kissinger et al. (2021) claim that throughout history, humans have been ‘alone’ in understanding nature—until now, when we have suddenly gained a formidable partner.

All these claims merit closer scrutiny. Perhaps we should think of AI as an open controversy and an ongoing process rather than a definite ‘thing’ (Suchman 2023). Perhaps OpenAI’s technology is just a small step in a longer evolution of technologies (Pi 2024). Perhaps the AI hype is beginning to fade, and perhaps we will have to look much harder for the spectacular business cases (Naughton 2023). Perhaps the history of science has always been a history of developing instruments, journals, and research communities, meaning that scientists have never understood nature ‘alone’ but always with the aid of constructed socio-technical ‘machines’ (Latour 1990).

However, my concern here is not to engage in current debates. My concern, my afterthought, and my worry revolve around what happens to a discussion when someone claims that we are on the brink of a new epoch. More than 20 years ago, the sociologist Paul du Gay identified a particular form of rhetoric and discourse that he calls the tyranny of the epochal. He describes this tyranny as “a logic of over-dramatic dichotomization that establishes the available terms of debate and critique in advance, in highly simplified terms, either for or against, and offers no escape from its own categorical imperatives” (Du Gay 2003:664). When epochal discourse dominates, Du Gay argues, one ends up in an all-or-nothing situation. Either one bows to the claim that the future can be defined and understood as a radically new era with completely different rules, or one is placed in the somewhat undesirable position of the stubborn skeptic who refuses to face ‘reality.’

Time and again, the tyranny of the epochal has turned up in discussions of new technologies and societal developments. In the 2000s, when there was a boom in dot-com companies and many spoke of ‘the new economy,’ I conducted a close study of epochal discourse in practice (Jensen 2008). In Copenhagen, a shared office space had just opened for startup companies. The manager of the office space was, of course, interested in selling office spaces, but he also enthusiastically sold a vision of ‘the office of the future’ to a steady stream of visiting journalists.

Again and again, he argued that people in ‘the old economy’ did not collaborate effectively because everyone guarded their own knowledge, whereas people in the ‘new economy’—here he gestured over his open office landscape—sat together, ‘networked,’ and shared knowledge. Looking back, it is astonishing that anyone believed that people did not collaborate or share knowledge in ‘the old economy.’ But at the time, when epochal rhetoric about a new dot-com era was at its peak, there was a great willingness to dismiss everything ‘old’ as soon-to-be irrelevant and a great willingness to see even quite small physical manifestations (10-15 people in an open office space) as a sign that a new world order was emerging.

In my view, there is no doubt that the AI discussion has been heavily influenced by the tyranny of the epochal. Again and again, both critics and enthusiasts claim that AI is rushing in and will change everything. I believe we are ready to recognize this rhetorical figure for what it is: not a particularly objective or precise description of the world, but a rhetorical device that fixes the premises of the debate. Let us discuss what the future holds, but not on the terms of the tyranny of the epochal.

Afterthought 2: Can We Move Beyond the Idea of the Purely Technical Versus the Purely Human?

A common statement in recent years’ debates is that computers – now equipped with neural network architectures and machine learning strategies – can match or surpass human intelligence. The premise of this statement is that on one side, we have ‘pure’ human intelligence, and on the other, the machine competitor. This premise is, in my opinion, misleading, if not outright incorrect.

Let’s start by looking at the human side of the equation. What is human intelligence really? When cognitive psychologists measure the so-called intelligence quotient as an expression of a person’s generalized intelligence, they do so by administering a series of very different subtests. The test subject might be asked to remember and repeat a long sequence of numbers or to arrange a handful of images in a sequence that tells a coherent story. If we call these very different subtests apples and oranges, many might be surprised by the psychologists’ next step: they sum up the results of the subtests into a single number.

Their argument for this operation is that empirical observations show a weak positive correlation between performance on different subtests. Based on this, cognitive psychologists have constructed a theory that there is a single underlying factor contributing to all subtests, which they have named generalized intelligence. Notice that no one has ever seen or measured ‘generalized intelligence’ in any way other than through the subtests. It is, therefore, more accurate to say that the subtests are the cause of generalized intelligence rather than the other way around! Also, note that intelligence tests and the intelligence quotient have created a strong cultural notion that intelligence is something very concrete, even though it is a theory based on a weak positive correlation.
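For readers who want the logic spelled out, here is a minimal sketch in Python. The numbers are simulated and invented for illustration (not real test data), but it shows the construction: subtests that share only a weak common component still correlate positively, and the single ‘general’ score is simply an aggregate built from them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 1000, 6

# Simulate subtest scores that share only a weak common component:
# each score is mostly test-specific variation plus a small shared part.
shared = rng.normal(size=(n_people, 1))
specific = rng.normal(size=(n_people, n_subtests))
scores = 0.4 * shared + specific   # yields weak positive correlations (~0.14)

# The empirical observation: a weak positive correlation between subtests.
corr = np.corrcoef(scores, rowvar=False)
off_diagonal = corr[~np.eye(n_subtests, dtype=bool)]
print(f"Mean correlation between different subtests: {off_diagonal.mean():.2f}")

# The construction: the 'generalized intelligence' score is simply the
# standardized sum of the subtests; it is never observed independently of them.
g = scores.sum(axis=1)
g = (g - g.mean()) / g.std()
print("Correlation of each subtest with the summed score:",
      [round(float(np.corrcoef(scores[:, j], g)[0, 1]), 2) for j in range(n_subtests)])
```

The sketch does not settle any psychological debate; it only makes visible that the single number is a statistical construction derived from the subtests and their modest correlations.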

But if one can be skeptical about the idea that the vast and varied arsenal of human competencies is driven by a single underlying factor, what then explains all the things we can do? The answer is more straightforward than one might think. If we look at how we solve tasks in practice – frying an egg, for example – it is obvious that we are always situated in specific circumstances with which our actions and reflections are intertwined. My ‘intelligence’ does not allow me to fry an egg while riding a bicycle, but if I go into my kitchen, I can connect with decades of experience and development (electricity, refrigerator, stove, pan, plastic spatula, egg cartons). I can interact with these elements during the process, allowing me to continuously see and smell how my small project is progressing.

Afterward, I might say that ‘I’ fried an egg. But that is, in reality, just boasting. Human agency is never purely human in practice. Everything we do is deeply interconnected with the places where we are located and the extensive and ever-changing network of materials and technologies we draw upon (Hutchins 1995). This means that when we say AI can match human intelligence, one side of the equation – ‘human intelligence’ – is a rather unknown entity.

Now let’s look at the other side: AI systems. There is no doubt that the performance of these systems has improved dramatically in just a few years. But again, we can rightly ask whether we are dealing with a ‘pure’ machine intelligence that can be directly compared to human intelligence. When discussing large language models, it is well documented that they are based on the largest ‘appropriation’ of human-produced material in history. It is also well known that AI model training often relies on enormous amounts of work from frequently underpaid individuals who gradually guide the AI model in the right direction (Crawford 2021).

Despite all this, one might still get the impression that AI systems will eventually function ‘automatically.’ However, the sense of ‘automation’ is quickly challenged when looking at the practical use of AI. Good examples are Hawk-Eye, VAR, and similar systems used to track ball movement in sports tournaments. Many expected that these smart technologies would make referees and assistant referees redundant, but the opposite has happened. Soccer matches now have up to five referees, and there is also a large team of people in a control center who continuously replay and verify video sequences. The number of employed sports officials has never been higher (Runciman 2023:227).

The overall conclusion is that comparisons between human and artificial intelligence largely confuse the debate. In practice, we are always comparing different socio-technical systems. The most important comparative question is therefore not whether ‘the machine’ has surpassed, or will surpass, ‘the human.’ We should instead ask whether one socio-technical system has more positive (or negative) practical, social, political, economic, or environmental consequences than another (Bender et al. 2021). If we can keep this in mind, I believe the AI discussion in the coming years can move forward in a meaningful way.

Afterthought 3: Can We Get a More Realistic Picture of Innovation?

The story of AI is the story of a few, primarily American, companies that have managed to amass exceptional amounts of data, technical AI experts, and capital. With these resources at their disposal, they have developed some astonishing new technologies. This story is true, but it is not the whole truth. First and foremost, because this narrative gives too much credit to tech giants and too little to a range of other actors.

Let’s start by looking at the role of states. The economist Mariana Mazzucato has sought to debunk the myth that technological innovation is primarily driven by the private sector, while the public sector merely regulates and bureaucratizes (Mazzucato 2013). In an analysis of Apple’s iPhone, Mazzucato points out that there is not a single major technology in the iPhone that has not been state-funded. Furthermore, the ‘smartness’ of the phone builds on an additional array of state-funded technologies such as the internet, GPS, and touchscreen technology. Finally, she notes that Apple received substantial financial support from the U.S. government in its startup phase.

The image of tech firms’ innovation can also be nuanced by examining the role of users. Users can be defined in several ways. First, there are ‘end users,’ such as a high school student experimenting alone with how AI can be used to improve a text or a piece of code. Users can also be user communities, such as open-source developers making a coordinated effort to invent and test new variants and functionalities.

Finally, we can think of ‘users’ as more inadvertent participants. This could be people experiencing how AI products proliferate online or how organizations or authorities begin using AI. In some cases, these users will engage in public debates about what we can and should use AI for. In doing so, they help shape our cultural norms and perceptions of AI. Technical developers might not consider public debates part of AI innovation, but if we look at what practically determines what AI can be used for, it makes sense to count these activities as well.

Ultimately, I believe it is important to consider the full spectrum of AI innovation. Both because new developments constantly reshape the dynamics surrounding tech companies – new (de)regulations, new state investments, new whistleblowers, and new surprising open-source competitors – and because there may be interesting opportunities to gather and coordinate the dispersed AI innovation already occurring among many different actors. In this context, we can perhaps learn something from the history of technology: One of the decisive moments in the development of the Danish wind turbine industry was the government’s decision to establish a wind turbine test center. This government organization became a meeting place and a hub for a broad field of Danish wind turbine enthusiasts, who were given the opportunity to compare and further develop their inventions (Garud & Karnøe 2003).

Perhaps we should think along similar lines in the AI area: Can we invent new creative coordination initiatives or infrastructures that can gather the many scattered AI innovation efforts?

Back to the Future

As I have outlined above, there may be good reasons to revisit the AI debate of recent years and reflect on it. We can benefit from a critical look at thinking based on simplified divisions between the human and the technical, on notions of radically divided eras, or on the assumption that innovation only comes from the big tech companies.

With that as a starting point, I believe we will be better equipped to understand our tumultuous present and discuss what we will fight for in the future. The coming years of discussion about AI will certainly not be boring. It is often said that a week is a long time in politics, and this is now also true for AI. I look forward to following the development together with my colleagues, and I am pleased that the Villum/Velux foundations have funded Algorithms, Data and Democracy all the way up to 2031.

References:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Crawford, K. (2021). Atlas of AI. Yale University Press.

Du Gay, P. (2003). The tyranny of the epochal: Change, epochalism and organizational reform. Organization, 10(4), 663-684.

Garud, R., & Karnøe, P. (2003). Bricolage versus breakthrough: Distributed and embedded agency in technology entrepreneurship. Research Policy, 32(2), 277-300.

Hutchins, E. (1995). Cognition in the wild. MIT Press.

Jensen, T. E. (2008). Future and Furniture: A Study of a New Economy Firm’s Powers of Persuasion. Science, Technology, & Human Values, 33(1), 28-52.

Kissinger, H., Schmidt, E., & Huttenlocher, D. (2021). The Age of AI. John Murray Publishers.

Latour, B. (1990). Drawing Things Together. In M. Lynch & S. Woolgar (Eds.), Representation in Scientific Practice (pp. 19-68). Cambridge, MA: MIT Press.

Mazzucato, M. (2013). The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London: Anthem Press.

Naughton, J. (2023). For all the hype in 2023, we still don’t know what AI’s long-term impact will be. The Guardian.

Pi, W. (2024). Brief Introduction to the History of Large Language Models (LLMs). Medium, May 7, 2024.

Runciman, D. (2023). The Handover: How We Gave Control of Our Lives to Corporations, States and AIs. Liveright Publishing.

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2).