
Word from the Chair

There’s a lot of talk these days about artificial intelligence (AI) and how it is going to reshape our life-worlds in the coming decade and beyond. And justifiably so: so much of what we do in our social activity, ‘choices’ of entertainment, and business interactions is already shaped by algorithms.

By logging our behaviour and drawing logical conclusions, algorithms build a framework of our tastes, interests, and opinions, in many ways creating a machine-built version of our own ‘self’, which then gets reflected back to us in a kind of endless cycle of contentment. AI is of course much bigger than simply determining your track listings on Spotify, but the link between machine learning and entertainment goes to the heart of current debates on human-computer relations.

Games are fundamentally a human activity; they set us apart from the other animals. The Leiden historian Johan Huizinga developed a whole theory in his book Homo Ludens (1938) that play is a primary ingredient in the generation of human culture. Little wonder, then, that games have marked major thresholds in determining the limits of human mental superiority over computers. Take chess, for instance. In 1996-1997 world chess champion Garry Kasparov played the IBM supercomputer Deep Blue in two six-game matches. Kasparov won the first 4-2, but his 3.5-2.5 defeat in the second triggered widespread reflection on the limits of human superiority. Although Kasparov was later deemed to have played badly, Deep Blue’s capacity to process millions of possible moves was seen as a major step towards out-calculating the human mind.

The next and more significant threshold was passed 19 years later. In 2016 Google’s AlphaGo squared off against the South Korean champion Lee Sedol at Go, a complex board game that poses a far greater level of difficulty for AI systems than chess. Against most expectations, AlphaGo won their encounter 4-1. Whereas Deep Blue crunched through possible moves using evaluation rules supplied by its programmers, AlphaGo worked out its own pathways using self-learning neural networks. The victory led many observers to ask what it means if AI possesses and applies ‘common sense’. Lee Sedol, for his part, reached for another way to maintain human uniqueness: “robots will never understand the beauty of the game the same way that we humans do.”

This represents the scope of the challenge for the humanities as a field of enquiry – and the opportunity. In International Studies we examine the creativity, production, belief systems and behaviours of humans across the realms of politics, economics, history, and culture. If AI can outwit us in something as fundamentally human as a complex game, have we reached the point where the humanities have lost their usefulness? Unsurprisingly, the answer is no, but not for the reasons often given. One of the main concerns around machine learning and AI is ethical and moral decision-making. Humans operate according to codes of behaviour that are culturally specific, and interact according to social norms built up over long periods of time. The headlong rush into AI as if it will produce ‘answers’ to human-related problems often misses this ‘human’ layer of thinking and processing. Programmers aim only for systems that function ‘better’, but what does ‘better’ actually mean? Delivering more results in less time? Is that all that is at stake?

The humanities and AI can have a mutually beneficial relationship, but only if each side recognizes the distinct value of the other. For the humanities, we can draw on a vastly increased availability of information for our work, brought to us by search engines that didn’t exist 25 years ago. But humanities research also requires looking ‘outside the box’ and noticing details that ‘big data’ and algorithms can easily pass over. For AI, the standard line is that it needs the insights of those who study history, culture, languages, philosophy – the whole range of insights into what makes humans ‘tick’ – to prevent systems that operate according to in-built bias. But applying humanities knowledge to machine learning should also involve reflecting on what ‘better’ means. Culturally differentiated algorithms remain algorithms.

It could be tempting here to launch into some kind of Retro Manifesto, lauding the LP and praising petrolheads, but I’ll avoid it because that’s not the point. Instead we need to stake out a clear space for the humanities on a higher level. The danger of being overwhelmed by the wave of tech-inspired ‘solutions’ to everything from crime to climate change, with all other forms of enquiry declared redundant, is very real. But science should be shaped by ethical awareness, not ethical awareness by science.

The thing about AI is that, while on the one hand its potential is so great, on the other it is also deeply divisive. The best and brightest are piling into new ventures (often privately funded, often by someone in Silicon Valley) that seek to map out just what AI can do to make our world a better place. It is regarded by some as the key to a tech-led holy grail – a world freed of degradation and destruction by code that tells us where we are failing, and perhaps thereafter compels us to respond and remove the causes. The BBC website recently ran an article entitled ‘How AI could unlock world peace’, which looked at the many projects aiming to predict where, how, and with what intensity conflicts will break out, or to what extent crops will fail and famine will spread.
