Robots at the School of Law?
At the end of November, eLaw - Center for Law and Digital Technologies at Leiden University - welcomed leading international scholars with interdisciplinary backgrounds who address how humans interact with robots and AI-driven technologies. The seminar, entitled “Interacting with Robots and AI”, built a bridge between technical and social science disciplines and created room for discussion on the consequences of the use and development of such technologies.
Robots were back at the School of Law because, given the novelty of the practices and impacts involved, the development of robot and AI technologies can give rise to unclear rules and areas of legal ambiguity. Robots process vast amounts of data, can learn from experience, and can self-improve their performance, challenging the applicability of existing regulations that were not designed for progressive, adaptive, and autonomous behavior. Moreover, these systems increasingly interact with children, older adults, and persons with disabilities in private, professional, and public settings, yet it is not always clear what safeguards are needed to ensure a safe interaction. Robots are still not a common sight at the School of Law, but bringing them there is essential to ground, frame, and steer these discussions in the appropriate direction.
If you want to know more about the seminar or contact the speakers, you can find more information on Laiden.org, the dedicated website for Law and Artificial Intelligence at Leiden University. Nonetheless, here is a brief summary of each speaker’s talk:
- Cristina Zaga is a researcher and lecturer in the Human-Centred Design Group at the DesignLab of the University of Twente. Cristina argued that responsible human-centered design should play a more prominent role in the human-robot interaction design cycle in order to understand how children perceive a robot’s agency and explain a robot’s behavior. In this vein, Cristina presented PeerPlay, a co-design technique based on perspective-taking and role-play theories that enables children to play with and reflect on the agency of robotic objects and intelligent playthings.
- Dr Joost Broekens is an Assistant Professor of Affective Computing at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University. After an introduction to what Artificial Intelligence is, he explained how artificial beings with emotions and rational explanations of their behavior could be engineered.
- Dr Roger A. Søraa is an Assistant Professor at the Department of Interdisciplinary Studies of Culture at NTNU, the Norwegian University of Science and Technology. He talked about how isolation and loneliness can be mitigated through the emotional care provided by social robots, referring to the Tessa robot from Tinybots as an example.
- Dr Maria Luce Lupetti is a postdoctoral researcher at the AiTech initiative at TU Delft. She addressed how embodied manifestos, a type of critical design artifact, represent deliberate and tangible manifestations of design ideas that can be used to invite public audiences to reflect and act on ethical principles related to the coexistence of humans and autonomous systems.
- Dr Heike Felzmann is a lecturer in Philosophy/Ethics in the discipline of Philosophy at the School of Humanities, NUI Galway. Heike talked about the process and experiences that nursing staff go through when robots are introduced into a care facility.
- Prof. Tobias Mahler is a Professor at the Norwegian Research Center for Computers and Law (NRCCL) at the University of Oslo. Tobias spoke about impact assessment methodologies for addressing the legal challenges arising from human-robot interaction.
- Dr Evgeni Aizenberg is a postdoctoral researcher at the AiTech initiative at TU Delft. Evgeni talked about designing for human rights in AI. Approaching the topic from the Design for Values framework, he introduced a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements for Artificial Intelligence through a structured, inclusive, and transparent process.