Information Security and Governance of Automated Security

ADD Lecture and Workshop on Information Security and Governance of Automated Security, September 12th, 2023, @TANTlab

Aalborg University Copenhagen, A.C. Meyers Vænge 15, TANTlab (room 2.3.003)

We have the great pleasure of welcoming Prof. Lizzie Coles-Kemp, Professor of Information Security at Royal Holloway, University of London, to TANTlab on September 12th. Lizzie will give a lecture and organize a workshop for the ADD research group.


September 12th, 9-14 hrs

Workshop “Designing for Inclusive Digital Security: A Creative Engagement Workshop”

There is a growing canon of both academic and practice literature that calls for a pivot away from designing security technologies for an idealised user archetype towards a vision of security technology and practice that offers protection to all, regardless of an individual’s resources and capabilities. Such work, however, often struggles to recognise the intersections between digital insecurities and other forms of insecurity such as housing, economic, health, and food insecurities. Yet the wider pressure and vulnerability landscape of an individual or group shapes both their need for security technologies and their ability to use them.

In this workshop we look at some of the common barriers to safely accessing digitally delivered essential services such as housing, finance, welfare, education, and health services. Participants will have the opportunity to try out so-called creative engagement methods, which enable service designers, policy makers, and technologists to use story-based techniques to better understand the pressure and vulnerability landscapes that people experience and the ways in which these landscapes constrain digital service access. Using a pre-prepared example of access to one such service, participants will be invited to develop storyboards depicting access for people experiencing different types of insecurity and to create vignettes in materials such as LEGO through which to examine alternative user access pathways.

Participants will then apply a simple service design framework that encompasses both social and digital accessibility to evaluate the inclusivity of each pathway. The workshop will conclude with a discussion of what this exercise tells us about the inclusive design of digital security technologies, and how such methods might be used more generally across digital service design.

September 12th, 15-17 hrs 

ADD Lecture “Mind the Gap: Digital Responsibilities and the Governance of Automated Security”

The concept of a responsibility gap in the context of intelligent automation is much discussed in the field of human-computer interaction. Responsibility gaps occur where automation absorbs some of the human processes for surfacing, establishing, actioning, and checking responsibilities related to the design and use of a technology, and yet does not readily offer any means of tracing what happens to those digital responsibilities.

The work related to this area is often interdisciplinary, bringing together ideas from computational law, applied philosophy, and digital anthropology as well as human-computer interaction and computer science. Responsibility gaps are particularly challenging for information and cyber security, which has traditionally had a human or organisational responsibility framework wrapped around the use of mechanisms for the protection of data and technology. The implications of responsibility gaps emerging from the increased automation of security technology have received little scholarly attention. However, roles and responsibilities are a means to ensure that security technologies work with and for people, and so the impact of intelligent automation does require discussion.

In this talk, I set out the potential for responsibility gaps to emerge in the growing automation of security controls and then illustrate the implications of such gaps using the example of an emerging technology that automates granular data access control. I close with a discussion of how these findings might inform how we work with responsibility gaps more generally in intelligent automation.