Research project

Digital solutions to crises of public trust

The ever-increasing use of digital algorithms for the collection and interpretation of big data fundamentally changes the way in which we form opinions and make decisions – as individuals and in society. Until recently, this transformation happened largely outside of public attention, but it has now become a pressing issue of concern. Data and algorithms are not only the pervasive technological infrastructure of public debate, but also a frequent topic of such debate.

Digitalization has, in other words, become controversial and now contributes to society’s crises of trust, as technological developments accelerate tendencies like populism, polarization, conspiracy theories and conflicts. Thus, environmental, social, political and economic crises are interwoven with technological concerns, increasing general mistrust in the traditional institutions of democratic societies.

The ADD-project explores the reasons for this development and the possibilities for turning it around: how can data and algorithms be used to enlighten and engage citizens and to strengthen democracy? We offer explanations of and solutions to societal problems by studying controversies about digitalization as these play out on digital platforms and are shaped by digital technologies.

How can data and algorithms be used for the greater good of society?

There is no doubt that technological developments are to the advantage of individual citizens and society as such. To pose just two questions that illustrate the ubiquity of digital technologies: how did we find our way before we had route planning at hand? And how were citizens’ health data handled before they were digitalized?

However, current developments also cause insecurity and concern, not only about the collection and handling of people’s data, but also about the automation of individual and collective decisions. What happens when algorithmic operations are used as the foundation for predicting individual outcomes, e.g. of the trajectories of children at risk or citizens in debt? Should such predictions form the basis of public administration and political decisions? And what might be the consequences of such algorithmic decision-making?

These questions indicate that the implementation of new technological solutions demands high levels of trust – and they imply that the very nature of trust is in transition. What does it mean to trust an algorithm? We can be sure of its precision, but not of its ethics. Similarly, the result of an algorithmic calculation will always be unambiguous, but hardly anybody understands how it is produced. We need more transparency, but also an entirely new set of ethical guidelines – for the use of people’s data as well as for algorithmic decision-making.

The ADD-project will explore existing practices within thematic areas – health, finance and public administration – and in relation to cross-cutting concerns and possibilities – cybersecurity and innovation. To answer the question of how to organize data and algorithms for the greater public good, the project brings together an interdisciplinary team of researchers spanning the humanities, social sciences and computer science. The project is organized as a series of work packages, covering theory, methodology and empirical subprojects. Further, research results are directly linked to outreach activities and practical recommendations, seeking to improve current conditions by means of actions and policies.

Theory development

The theoretical work package aims to integrate theory development with empirical casework and is led by Torben Elgaard Jensen and his team of techno-anthropologists at Aalborg University. We have designed a common process for all empirical subprojects, thereby enabling their integration. The work begins with a general mapping of public controversies involving data and algorithms. That is, we collect a large set of relevant data from traditional news media, social media, parliamentary debates and research publications. Beginning from this data set, the project unfolds through four phases, which toggle between empirical specification and theoretical integration.

First, each subproject conducts data sprints, which provide quantitative starting points for further studies of the issues involved in each case and establish an overview of existing controversies about data and algorithms – within and across the selected fields. How are data and algorithms articulated in relation to health, finance and public administration? What questions of cybersecurity and privacy arise? And what are the visions of technological innovation? In each data sprint, we involve relevant experts and other stakeholders in order to detail the ways in which data and algorithms are used and understood as well as the controversies involved within and across different publics.     

On this basis, phase two of the common theory development consists of a series of internal workshops, aimed at deepening our understanding of data and algorithms as technical and social phenomena. In the third phase we expand the dialogue to international colleagues to test and strengthen our developing framework. Finally, phase four consists of synthesis and presentation of our collective results. 

Methodological contributions

The methodological work package is led by Christina Lioma and the members of the Machine Learning Section at the Department of Computer Science, University of Copenhagen. The work package has two main purposes: First, to support the common research process in terms of collection and analysis of data as well as understanding of the technical aspects of controversial algorithms. Second, to contribute to state-of-the-art research in computer science. The first purpose is integrated with our common process of theoretical development. The second is developed in three subprojects.

Algorithmic evaluation

Arguably, the current design of algorithms unintentionally spreads false information, as may be illustrated with the example of search engines: any new algorithmic component must be evaluated, typically by its output, and this requires testing it in situations where the correct output, called the ground truth, is known. When developing search engines, programmers test them by giving them queries as input and comparing their output to the ground truth. A major problem is that, on the one hand, it is extremely costly to collect ground truth data in terms of time, human effort and money; on the other hand, very large amounts of ground truth data are needed for training.

Therefore, approximations of what users find useful, based on what they click on, tend to replace ground truths as drivers of algorithmic development. Here, the question of why users click a hyperlink is not considered. This is problematic because users click not only on the correct answer, but also out of curiosity, because they are provoked by a specific heading (i.e., clickbait), or simply because they assume that the first answer is the correct one (trusting the authority of the search engine), to name but a few potential reasons. Hence, the more clicks a piece of content receives, the more likely it becomes to be retrieved again.
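To make the contrast concrete, the following minimal Python sketch (using entirely hypothetical documents, labels and function names) scores the same ranked result list first against editorial ground truth and then against click-derived pseudo-labels. Because the clicked set includes the top-ranked clickbait result, the click-based score overstates the quality of the ranking.

def precision_at_k(ranked_docs, relevant_docs, k):
    """Fraction of the top-k ranked documents counted as relevant."""
    return sum(doc in relevant_docs for doc in ranked_docs[:k]) / k

# One query with a ranked output from some search algorithm (hypothetical).
ranked = ["doc_clickbait", "doc_good_1", "doc_off_topic", "doc_good_2", "doc_good_3"]

# Costly editorial ground truth: documents judged relevant by human assessors.
ground_truth = {"doc_good_1", "doc_good_2", "doc_good_3"}

# Cheap click-based pseudo-labels: documents users clicked on, including the
# top-ranked clickbait result (curiosity, or trust in the first position).
clicked = {"doc_clickbait", "doc_good_1"}

print("P@3 against ground truth :", precision_at_k(ranked, ground_truth, 3))  # 0.33
print("P@3 against click labels :", precision_at_k(ranked, clicked, 3))       # 0.67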

This subproject aims to improve state-of-the-art click-based evaluation in the direction of (a) alternative ground truth approximations to user clicks, and (b) factoring the context of clicks into their collection and use as ground truth. In doing so, the subproject supports the quantitative data collection of all empirical subprojects by providing insights into the limitations of available data sets.

Sampling bias

Algorithms are trained on samples of data that are assumed to be representative of the full universe of data that they will eventually process. Thus, there is a distinction between the data sample that an algorithm is calibrated on and the unknown data that the algorithm will process when it has been calibrated. The better the sample of data, the better the algorithm. If the sample contains bias, the algorithm will learn to apply this bias. The problem, however, lies in the very definition of bias. Thus, defining and identifying unbiased data remains an open question.

No matter how carefully we select a data sample in terms of its coverage and representativeness, there will almost certainly be bias with respect to some dimension. We will not obtain balance with respect to, for instance, gender and nationality and skin colour and income and educational level and political orientation and sexual orientation and… However, if we intervene in the composition of the data in the sample and construct a perfectly unbiased data set with respect to all possible dimensions, then this sample will be so artificial, so far from real-world data, that it will be practically unusable for algorithmic development; any algorithm we train on it, would perform erratically when applied to real data.
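As a simple illustration of the first horn of this dilemma, the sketch below uses synthetic, hypothetical data in which the outcome is in fact independent of group membership; because the sampling procedure over-represents positive outcomes for one group and under-represents them for the other, even a trivial per-group classifier trained on the sample learns to treat the two groups differently.

import random
from collections import defaultdict

random.seed(0)

def make_population(n):
    """Population in which the outcome is independent of group membership."""
    return [(random.choice(["A", "B"]), random.random() < 0.5) for _ in range(n)]

def biased_sample(population, n):
    """Sample that over-represents positive outcomes for group A and under-represents them for group B."""
    keep = {("A", True): 0.9, ("A", False): 0.3, ("B", True): 0.1, ("B", False): 0.3}
    sample = []
    while len(sample) < n:
        group, outcome = random.choice(population)
        if random.random() < keep[(group, outcome)]:
            sample.append((group, outcome))
    return sample

def train(sample):
    """'Train' a trivial per-group majority-vote classifier."""
    counts = defaultdict(lambda: [0, 0])        # group -> [negatives, positives]
    for group, outcome in sample:
        counts[group][int(outcome)] += 1
    return {group: c[1] > c[0] for group, c in counts.items()}

population = make_population(10_000)
model = train(biased_sample(population, 1_000))
print(model)   # e.g. {'A': True, 'B': False}: the sample's bias, not the population, decides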

The above dilemma forms the basis of this subproject’s research. The starting point will be a further characterisation of prescriptive versus descriptive bias in the context of data sampling. On this basis, the subproject enhances our technical understanding of the algorithms that are involved in the controversies forming the nexus of the empirical investigations: what are their biases? And how might they be mitigated?

Out-of-sample generalisability

The state-of-the-art paradigm in algorithmic development applies deep learning technology. This is a paradigm within machine learning that, simply put, can calibrate itself; i.e., the algorithm can learn directly from the input data which parameters it should have and what values these should take. The higher the number of parameters in the learning model, the more powerful it will be.

Theoretically, deep learning assumes that the higher the number of parameters in a model and the lower our tolerance of error, the more data it should be trained on. These three quantities (number of parameters, size of data and tolerance of error) are thoroughly understood and precisely related to each other mathematically via Hoeffding’s Inequality. This means that it is possible to pinpoint mathematically what each of these quantities should be. In practice, however, this assumption is almost always ignored and the concomitant calculations are not performed, let alone followed. This is highly problematic, as it bars out-of-sample generalisability. The algorithm does not learn anything, but merely memorises what is in the sample of data.
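For reference, one standard textbook form of this relation, as used in basic learning theory and stated here as an illustration rather than as the project’s own formulation: if a model is effectively chosen from M candidate hypotheses and trained on N independently drawn examples, Hoeffding’s Inequality combined with a union bound gives, for any tolerance ε > 0,

P\bigl(\lvert E_{\mathrm{in}}(h) - E_{\mathrm{out}}(h) \rvert > \varepsilon\bigr) \;\le\; 2M\,e^{-2\varepsilon^{2} N}

where E_in is the error on the training sample and E_out the error on unseen data. Read this way, a richer model (larger M) and a stricter tolerance (smaller ε) both require more data N to keep the bound meaningful, which is precisely the calculation the subproject finds routinely skipped in practice.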

This subproject will investigate the implications of the practical breach of theory and suggest ways to correct it. In so doing, the subproject evaluates currently used algorithms and suggests how they might be improved. Establishing the technical basis for understanding limitations of algorithmic opinion formation and decision-making, the subproject is closely aligned with the project’s ultimate aim of empowerment. It connects our combined methodological findings with the common theoretical position and provides the basis for reorganising controversial data and algorithms.

Empirical investigations

The project’s theory development is supported by the methodological contributions and both are developed in close coordination with the empirical investigations. Helene Friis Ratner at the Danish School of Education, Aarhus University, and Leonard Seabrooke at the Department of Organization, Copenhagen Business School, coordinate this work package, securing the integration of the empirical subprojects, each of which is anchored at a partner university.

Cybersecurity and privacy

Large-scale IT infrastructures such as SmitteStop, NemID, e-boks, Sundhedsplatformen, Rejsekortet and Mobile Pay are construction sites, and sometimes battlegrounds, for establishing public trust in digital services and solutions. Hence, such infrastructures are also fertile grounds for studying the intended and unintended uses of data and algorithms as well as the opportunities for building more trustworthy practices.

This subproject investigates issues of cybersecurity and privacy that have emerged around major Danish IT infrastructures. We begin by identifying moments of crisis where public debates erupt around these issues. With these public controversies as entry points, we investigate how the infrastructures, understood as sociotechnical arrangements, configure and (re-)define privacy and security in particular ways, and how they negotiate issues of system efficiency with the inherent risks of data sharing.

We conduct participant observations to examine the actual use of the infrastructures, and we interview different actor groups – system designers, politicians, end-users – to uncover their understandings of privacy and security. To further enhance the depth and relevance of the case studies, we analyse and explore the technical configuration of key algorithms.

The case studies included in this subproject are informed by the conceptual work and methodological strategies that Torben Elgaard Jensen and his team at Aalborg University have developed in their recent work, which is broadly inspired by (digital) science and technology studies, studies of infrastructure and social studies of algorithms.

Public administration and prediction

Predicting which children will become subject to parental neglect. Predicting which students will drop out of their education. Predicting where crime will take place and who will commit it. Predicting which citizens will become long-term unemployed and predicting who will not pay their taxes. These are all examples of public sector predictive analytics, either in use, under development or under political consideration.

The potentials and promises of these AI techniques are obvious: they help public administration actors tailor interventions to and target the citizens most at risk, potentially preventing the risky behaviour through early intervention. However, as the numerous public and political debates around predictive analytics have shown, public administration’s use of these technologies is fraught with data ethical dilemmas. The dilemmas revolve around issues of privacy, register merging and the biases inherent in predictive algorithms, as well as fundamental questions about our vision of the relationship between the state and its citizens.

Using a socio-technical approach that mixes digital methods with multi-sited ethnography and textual analysis, the subproject examines the potentials and data ethical concerns that materialise with the Danish public administration’s increased interest in utilising algorithms for prediction.

In its initial phase, the subproject maps the chronology and the different actors involved in specific cases of the public administration’s (potential) use of predictive analytics: which hopes, sociotechnical imaginaries and data ethical concerns are mobilised? Which publics emerge and become engaged?

The second phase investigates select turning points of each case. Building on Helene Friis Ratner’s existing work on public administration’s usage of citizen data, the analytical strategy of ‘infrastructural inversion’ is applied. This analytical strategy enables exploration of how data ethical concerns appear at the intersection of infrastructural development and public attention. Ultimately, we will explain how data ethical principles of public administration develop in conjunction with technical innovations, such as predictive analytics, and the publics they generate.

Population health and control

Technological developments fundamentally change the conditions of possibility for public health regulation and communication. On the one hand, big data can be used by health authorities to monitor population developments, e.g. epidemic outbreaks. On the other hand, citizens increasingly expect direct, immediate and personalised interaction, just as they seek out alternative sources of information and advice. The subproject on health issues will explore the interrelations of personal and public datafication, focusing on how algorithms enable personal empowerment and enforce collective control.

Nowhere have these interrelations become more pressing than in the public response to the COVID-19 pandemic. Here, data is used to monitor and contain the spread of the disease, as exemplified by the SmitteStop app, just as social media are central to the official communication strategies of public authorities. The issue of how data and algorithms may help safeguard public health while protecting individual freedoms will form one nexus of the subproject.

Another nexus is the increasing generation and application of data at the level of the individual. The movement known as ‘the quantified self’ brings together people devoted to ‘personal science’ to explore the many different ways in which individuals may enhance themselves through data. Such self-tracking feeds into a population health paradigm that prefers prevention to treatment, which potentially raises the bar as to what information and behaviour can be expected of people seeking medical advice.

These public and personal uses of data are brought together to explore the interrelations of personal and collective decision-making in questions of population health. As a whole, the subproject is inspired by process methodology, using both qualitative and quantitative methods of data collection to explore how individual and collective decisions evolve and interact over time. This research combines digital process-tracking of public debates with in-depth qualitative inquiry, using ethnographic methods of observation and participation with a sample of key informants to understand the thoughts, experiences and feelings that underpin their attitudes and actions.

Finance and transparency

The proliferation of algorithms in finance has changed the relationship between regulators, creditors and citizens. While there is a great deal of work on the technology-politics interplay of algorithm-based high finance, we turn to how algorithmic forms of calculation are changing the governance of everyday financial concerns. This issue has boomed in the past years, as public authorities, banks and other financial institutions seek to locate the patterns of actual and likely behaviour of citizens and clients.

The opportunities and constraints from new calculative practices are many. The biggest benefit is the provision of certainty to citizens and clients on their socio-economic situation. A further benefit is the lowering of public administration costs in monitoring wrongdoing (intentional or incidental). The constraints, however, are equally considerable. More invasive forms of personal data collection and monitoring drive controversy over what oversight is appropriate and whether such oversight could become purely algorithmic or should continue to rely on professional judgement. A further constraint is that algorithm-based predictions enable even greater liberalisation of and access to short-term credit, despite its many socio-economic problems.

We will investigate a series of cases on finance, illustrative examples of which include i) the development of machine learning by the Danish Financial Supervisory Authority’s FinTech Lab to build a ‘neural network’ for the valuation of owner-occupied properties in Denmark; ii) the Danish Tax Authority’s expansion of machine learning systems to trace citizens’ tax behaviour; and iii) the blossoming market for ‘payday’ instant loans from non-banks.

Combining social network analysis to identify the key actors in these cases with interviews with key professionals and stakeholders and with textual analysis, this subproject will identify new calculative practices among economic agents, goods and exchanges. In the first phase, we will establish the relevant professional and organisational networks. In the second phase, we will trace transparency concerns in cases of professional and public controversies over the use of data and algorithms in finance.
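As an indication of the kind of analysis involved in the first phase, the sketch below (with entirely hypothetical actors and ties) uses the networkx library to rank the actors in a small professional network by degree and betweenness centrality, two common indicators of key actors.

import networkx as nx

# Hypothetical ties between actors in a finance controversy.
ties = [
    ("Tax Authority", "FinTech Lab"),
    ("Tax Authority", "Consultancy X"),
    ("FinTech Lab", "Bank A"),
    ("Consultancy X", "Bank A"),
    ("Bank A", "Payday Lender Y"),
]

G = nx.Graph(ties)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# List actors from most to least connected.
for actor in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{actor:16s} degree={degree[actor]:.2f} betweenness={betweenness[actor]:.2f}")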

Innovation and democracy

While innovation is recognised as a critical driver of socio-economic development, its universality is increasingly being subjected to criticism. We will analyse and problematise the manner in which innovation discourses are deployed and utilised as democratic societies face crises and stress, including how digital innovation is presented as a universal solution to societal problems, such as the controversies surrounding democracy.

Even though innovation is a necessary component of societal development, it is often characterised by uneven diffusion as well as opacity in how it aligns with democracy. As critical choices are increasingly shielded behind algorithmic complexity, and the ownership of data becomes an ever more complex issue, the governance of innovation can become hidden in ‘dark data’. However, choices concerning the distribution of innovation resources are becoming ever more critical on a societal level (for instance, due to the necessity of addressing wicked problems such as the ecological crisis or societal inequality). Therefore, the manner in which innovation trajectories become ‘black boxed’ and hidden from the populace must be revealed and counteracted.

As a consequence, this subproject engages with the other subprojects, and the technologies studied therein, to make clear the manner in which innovation can either support open, democratic opinion formation in society or enhance closed, non-democratic processes of policy-making. Challenging the ways in which data, on the one hand, and democratic society, on the other, affect practices and discourses of innovation, this research highlights the roles that democracy can and should play in an era of algorithm-driven innovation logics.

The subproject will utilise theory from the digital humanities, science and technology studies and critical innovation studies in order to develop a more robust conceptualisation of the way in which innovation affects and is affected by democratic decision-making. This investigation strengthens the integration of our empirical studies in scrutinizing the cases of the other subprojects from the perspective of innovation. Further, it establishes a bridge between the research and outreach components of the ADD-project, as it concerns itself with the issue of how technological innovations may serve democratic purposes.