Universiteit Leiden


Research project

Investigating Institutional Diversity and Innovation: AI adoption and implementation in Taiwan and The Netherlands

(1) What are the institutional factors that influence AI adoption and implementation? and (2) How does AI reshape the exercise of administrative discretion within public organisations, and how do adoption and implementation choices moderate these effects?

Duration
2024 - 2029
Contact
Matt Young
Funding
FGGA Starter Grant

Summary

While the literature on public sector AI implementation is broad and growing, there are nevertheless critical gaps. One is a lack of evidence of the material effects of in-operation AI on public organisations, their staff, and the public. There is also a strong tendency in the existing literature to treat public sector AI implementation as a wholly new event, decoupled from any technical or institutional preconditions within the adopting organisation or its environment (though for an exception, see Giest and Klievink 2022). A more vexing problem still is the dearth of theoretical and empirical studies on AI grounded in the theoretical frameworks of public administration. Finally, there is an acknowledged lack of comparative research, with corresponding limits to external validity (Wirtz, Langer, and Fenner 2021; Zuiderwijk, Chen, and Salem 2021).

This project addresses these gaps by providing systematic, generalisable empirical analysis of both the antecedents and consequences of public sector AI implementation. The project will test and advance theories of institutional economics and public sector innovation using a novel combination of cases and methods, generating richly contextual and comparative findings, and advancing international public administration research. The project is motivated by a desire to advance our understanding of the connections between AI adoption, the exercise of administrative discretion, and outcomes for both public administrators and the public served. It does so using two linked research questions, discussed below.

Research Question 1: What are the institutional factors that influence AI adoption and implementation?

The first research question addresses AI implementation through theories from institutional economics and public sector innovation. We will look for evidence of variance based on:

  • Path dependency: conditioned by prior digital and ‘big data’ system implementation;
  • Slack resources: both management and fiscal resources (Young 2020);
  • Organisational attitudes towards data-driven decision making and automation (Young 2020);
  • Risk preferences of senior management (Bannister and Connolly 2014; Brown and Osborne 2013; Bullock, Greer, and O’Toole 2019);
  • Institutional isomorphism, both coercive and mimetic pressures (DiMaggio and Powell 1983; Jun and Weare 2011);
  • Propensity to contract out for technology products and services (Young 2020); and
  • Fit of tool to task: the match between AI capabilities, task context, and degree of discretion required (Young, Bullock, and Lecy 2019).


Research Question 2: How does AI reshape the exercise of administrative discretion within public organisations, and how do adoption and implementation choices moderate these effects?

Answering the project’s second research question on administrative discretion speaks directly to foundational topics in public administration and management. The scope and complexity of tasks necessary to solve collective action problems require delegation of work to agents, who in turn must exercise discretion in deciding how to perform complex and contingent tasks (Arrow 1984; Bertelli 2012; Huber and Shipan 2002). The observed effect of public sector ICT implementation on discretion has, generally speaking, condensed into two competing perspectives. One views technology as a force that curtails administrative discretion; another argues that technology produces ‘contingent affordances’ that enable discretion (Buffat 2015; de Boer and Raaphorst 2021; Garson 1989).

Drawing on the arguments and propositions developed in Young et al (2019, 2021) and empirical work by Huang et al (2021; 2021), we hypothesise that use cases in which the introduction of AI systems appears to enable new administrative discretion are in fact strongly – perhaps completely – attenuated by system characteristics that curtail discretion in practice. For example, user interfaces of human-AI hybrid systems are known to induce a cognitive bias, known as automation bias, causing users to subconsciously privilege AI-generated decisions over their own expertise and discretion (Cummings 2004; Davis et al. 2020; Parasuraman and Manzey 2010).

The project will employ a mixed-methods approach, including qualitative analysis of primary (interview) and secondary (document) data, quantitative analysis of primary (survey) and secondary (administrative) data, and experimental designs (surveys and/or vignettes). All proposed methods for this project have been applied with demonstrable success both by the project team and within their respective fields in general. For comparative purposes, cases will be drawn from The Netherlands and Taiwan. These countries provide significant institutional, demographic, and cultural variation, are known to use AI proactively, and allow us to leverage pre-existing research networks. The sampling strategy will ensure variation among sampled agencies (George and Bennett 2005).

The primary set of data collection tasks for the project is rooted in the case study analysis. Data from these case studies will be collected in multiple ways. The first of these is through the collection and analysis of secondary data on each of the case study organisations. The second is primary data collected in the form of expert interviews with case organisation employees. These interviews will take a semi-structured approach. Coding for all interviews will be evaluated using measures of intercoder reliability to improve the systematicity and trustworthiness of subsequent analysis (O’Connor and Joffe 2020). All surveys and experiments will be piloted, validated, and translated as necessary. The experiment protocols and hypotheses will also be pre-registered with a registry for randomised controlled trials, such as the American Economic Association’s RCT Registry.

The project rationale is to fill a gap in academic knowledge that is, at the same time, of strategic importance for Leiden University (cf. SAILS). The project generates scientific knowledge on both the applied and normative implications of machine learning-based artificial intelligence systems in the public sector. Both the work process and the outputs of the project will serve to jump-start long-overdue advances in research on public sector AI use.
