

Sessions in: Digitalisation


B.1 The Automated Public

Cassy Johanna Maria Elisabeth Juhasz, Robert Gianni

Maastricht University, The Netherlands




The debate surrounding AI and AI-powered systems focuses largely on actors in the big tech sector. Accordingly, there is a proliferation of appeals and attempts to regulate the private actors in this sector through institutional guidelines. In both the academic and the public debate, however, less attention seems to be paid to the use of AI systems by public institutions. Yet public institutions are increasingly adopting AI systems to operationalize policies in public administration, such as welfare allocation. The inclusion of artificial intelligence within public institutional systems necessarily raises ethical questions: questions concerning fairness and equality, paternalism, transparency about the political agenda, and democratic adequacy in general.

When it comes to public institutions, the role of ethics is even more substantial than in the debate regarding private actors, as the government is not just in charge of regulating private-sector AI use but also stands in a relation of trust with society and its citizens. Yet recent scandals and exposés have revealed unclear and even unethical uses of AI by public institutions. The Dutch childcare benefits scandal and the UK Department for Work and Pensions' Integrated Risk and Intelligence Service are just two examples of unethical and opaque ways of employing artificial intelligence for public services. These cases have highlighted a widespread opacity concerning the design and implementation of AI systems for public services, their training data, and their algorithms, which makes it more complicated to analyse and assess AI used in the public domain.

Existing checkpoints and regulatory measures adopted by public institutions, such as human checks, have proven insufficient to prevent discrimination and the marginalization of vulnerable groups. The municipality of Rotterdam retired its welfare fraud algorithm in 2021 after finding that, owing to biased training data, it unfairly targeted single mothers and migrants for fraud investigations, despite the ultimate decision being delegated to human caseworkers. Similar AI systems are still in use by several municipalities in the Netherlands, such as the system for the 'detection of misuse and abuse of WMO and Jeugdwet benefits' used by the city of The Hague, with concerning opacity as to how they operate.

This panel focuses on the use of AI in public institutions and on examples of controversial uses of AI in public policy. Its aim is to gather knowledge about actual practices and the potential risks entailed by the non-transparent implementation of AI systems by public institutions. It pays specific attention to the ethical challenges that arise with the use of AI and to methods for addressing these challenges in the development and deployment of AI for public services.


The panel welcomes contributions to this round table from researchers and practitioners in fields including, but not limited to:



  • AI ethics

  • Democratic use of AI

  • Transparent AI

  • Explainable AI

  • Trustworthy AI

  • Public administration

  • AI governance





B.2 “Code is Law” Revisited. STS Perspectives on the Digitization of Law and the Legal Sector

Nikolaus Poechhacker1, Lukas Daniel Klausner2, Elisabeth Paar3

1: University of Klagenfurt; 2: St. Pölten University of Applied Sciences; 3: University of Vienna






As early as 1999, Lessig proclaimed that “code is law”, reflecting on how software controls everyday life and on the relationship between software and regulation. While Lessig focused mainly on the internet and its normative dimensions, we are now experiencing an ever-growing diffusion of digital technologies that regulate everyday life, blurring the boundary between social and technological norms. The law and its institutions are not exempt from this development. Digital technologies are increasingly used within the legal system and its institutions, raising questions about how algorithms, models, and data infrastructures are becoming part of the institutions of law. As Hildebrandt phrased it, we are experiencing the rise of “computational law”: advanced digital technologies are becoming an important medium of legal practice, one that has the potential – like all media transformations within the legal system (Vismann) – to shift both legal culture and our understanding of the law (Vesting).


We therefore argue that STS can and should contribute to the discussions on the digitization of the legal system. STS has a long-standing engagement with the law, especially in terms of expertise within court proceedings (Jasanoff), the social construction of scientific evidence for legal cases (Cole, Lynch), the digitization of policing and its practices (Egbert & Leese), and, more generally, the relation between law and innovation (Eisenberger). In this panel, we want to further the discussion by taking a closer look at current developments and discussing what perspectives STS has to offer on them. We understand “legal technologies”, as they are often called, not merely as technologies that impact the legal system, but as integral parts of the socio-technical structures which enact the law and the legal system (see also Latour’s inquiries into the making of law). Further, we ask not only how the legal sector and potentially even the nature of law itself are changing, but also what challenges this specific domain in turn poses for the implementation of IT applications. To this end, we want to revisit and complicate the notion of “code is law” and invite contributions that reflect on the entanglement of digital technology and law from an STS perspective, including theoretical reflections, empirical cases, and practical or theoretical interventions (including moments of resistance or theoretical subversion).







B.3 (Responsible) Standardisation of Disruptive Digital Technologies

Andrea Fried1, Kai Jakobs2, Olia Kanevskaia3, Ray Walshe4, Paul-Moritz Wiegmann5

1: Linköping University; 2: RWTH Aachen University; 3: Utrecht University; 4: Dublin City University; 5: TU Eindhoven




Today, technical standards for the digital domain are developed mostly by engineers and computer scientists, typically employed by large manufacturers. As a result, technical expertise and economic interests guide standardisation and thus technical development. Societal issues are mostly considered outside technical working groups (WGs; e.g. in ETSI’s Smart City Task Force), if at all. ANEC, the European consumer voice in standardisation, can be active in only a handful of relevant WGs, despite funding from the EU. The same holds for other such groups, e.g. the EU’s ‘Annex III organisations’. Moreover, they all represent their respective ‘constituencies’ (consumers, workers, the environment, SMEs) rather than society at large.


This is an untenable situation in general, but even more so in the case of technologies that have the potential to change society – for better or worse. According to MIT, examples of such disruptive technologies include Artificial Intelligence and Machine Learning (aspects of which are under standardisation by e.g. ISO, IEC, IEEE and ETSI), the Internet of Things and cybersecurity, all of which are components of, or utilised by, smart systems (which are themselves under standardisation in many different bodies).


Some things just don’t change: “The shaping process [of a technology] begins with the earliest stages of research and development” (this is, of course, a bidirectional process – earlier experiences with technology also shape expectations and requirements) [1]. Standardisation represents such an early stage; it is also typically the first stage to which societal stakeholders may contribute (as opposed to e.g. corporate Research and Innovation; at least in theory). This suggests exploiting the standards-setting process to contribute broader, non-technical (e.g. societal, environmental, legal and ethical) expertise to the development of disruptive technologies. This, in turn, requires active contributions from a correspondingly broad range of stakeholders, including citizens, NGOs, unions and (local) administrations, as well as e.g. lawyers, sociologists and philosophers.


This session solicits contributions that discuss aspects of such a ‘Responsible Standardisation’ from both a practical and a theoretical perspective. Potential topics include but are by no means limited to:



  • The roles and representation of societal stakeholders in standardisation.

  • Contributions of societal stakeholders to standards development.

  • Ways to enable participation of societal stakeholders on an equal footing.

  • Legitimacy and influence of the different stakeholders in standards development.

  • Societal norms and their impact on standardisation.

  • Potential ethical and legal issues.


[1] Williams, R.; Edge, D. (1996). The Social Shaping of Technology. Research Policy, 25(6), 865–899. DOI: 10.1016/0048-7333(96)00885-2.




B.4 Rethinking and re-shaping digital work(places) with practice theory sensibilities

Katja Schönian1, Stefan Laube2

1: Friedrich-Alexander University Erlangen-Nuremberg, Germany; 2: Johannes Kepler University Linz, Austria


This session has been cancelled.