
Do we need a right to a human explanation? Melanie Fink at Leiden University’s SAILS Lunchtime Seminars

On 10 February 2025, Melanie Fink presented her recently awarded NWO Veni project at Leiden University’s SAILS Lunchtime Seminars.

Modern machine learning algorithms are notoriously bad at producing explanations for their outputs that make sense to humans. It is widely recognised that this is a problem, especially in public-sector decision-making. Say an algorithm suggests to a sentencing judge that I might be dangerous, tells a border guard that I might be untrustworthy, or flags to the tax administration that I might have cheated: I want to know why. Most obviously, this is because knowing why allows me to challenge that characterisation effectively by bringing forward evidence against it. But reason-giving is also important because subjection to unreasoned state power is dehumanising.

The obvious answer is to require explanations, as legal systems do for public-sector decision-making specifically or, more broadly, for decisions involving AI. But who is to give those explanations? Is an ‘explainer algorithm’ sufficient, or do we need a human to give them? The answer depends on what purpose we want an explanation to fulfil. If the challengeability of a decision is the problem, a human explanation might not be needed. However, if the problem is the absence of empathy and dignity, ‘robot explanations’ are not the solution. In this presentation, Melanie Fink explored how we might go about answering these fundamental questions, which play a central role in addressing the risk of dehumanisation in the automated state.
