Universiteit Leiden


Lecture

SAILS Lunch Time Seminar: Melanie Fink

Date
Monday 10 February 2025
Location
Online only

Do we need a right to a human explanation?

Modern machine learning algorithms are notoriously bad at producing explanations for their outputs that make sense to humans. It is widely recognised that this is a problem, especially in public sector decision-making. Say, an algorithm suggests to the sentencing judge that I might be dangerous, tells the border guard I might be untrustworthy, or flags to the tax administration that I might have cheated: I want to know why. Most obviously, this is because knowing why will allow me to effectively challenge that characterisation by bringing forward evidence against it. But reason-giving is also important because being subjected to unreasoned state power is dehumanising.
The obvious answer is to require explanations, as legal systems do for public-sector decision-making specifically or, more broadly, for decisions involving AI. But who is to give those explanations? Is an ‘explainer algorithm’ sufficient, or do we need a human to give them? The answer depends on what purpose we want an explanation to fulfil. If the challengeability of a decision is the problem, a human explanation might not be needed. However, if the problem is a lack of empathy and dignity, ‘robot explanations’ are not the solution. In this presentation, I explore how we might go about answering these fundamental questions, which play a central role in addressing the risk of dehumanisation posed by the automated state.
Join us!

The SAILS Lunch Time Seminar is an online event, but it is not publicly accessible in real time. Please click the link below to register for our mailing list and receive participation links for our Lunch Time Seminars.

Sign up