
Robots that empathise with humans

If we want to build robots and computer systems that are not only smarter but also more socially skilled, we first need to find out more about how humans interpret information. Max van Duijn and Tessa Verhoef conduct research at the intersection of cognitive science and AI.

Looking for the Holy Grail of AI

It’s one of the big challenges in AI: to provide AI systems with a theory of mind, a way of understanding things from our perspective. Max van Duijn, Assistant Professor of Cognitive Science & AI: ‘When an AI system such as Siri, Apple’s personal assistant, communicates with us, it processes our spoken language. It’s already fairly good at that, up to a point, but the processing differs fundamentally from human language processing. We humans put the words of someone else in that person’s perspective: what does he or she mean? It’s the Holy Grail of AI to furnish an artificial system with this capability.’ 

Van Duijn is convinced that we don’t sufficiently understand how people acquire their theory of mind. That is why he will study this process in children aged four to 11. He believes that language acquisition is crucial here, especially the ability to tell stories with characters and changes of perspective. Even very simple stories have this feature: ‘Mummy gave me a strawberry ice cream yesterday, but I dropped it and Mummy got very angry.’ Van Duijn: ‘I want to understand better how children learn to create characters that are models of real people. In the long run, I hope this will help us implement theory of mind in AI systems.’ A pilot project ended in 2019.

Max van Duijn is researching the development of 'theory of mind' in young children

Even very simple robots can teach each other a rudimentary language

Approaching the problem from the other end, you can construct multi-agent systems: a group of agents – virtual robots in a computer – that interact with each other. These can be very simple robots – the interaction no more than an exchange of letter strings – governed by a few simple communication rules. After each interaction the agents adapt their language rules, so that agreement among them gradually grows. Young agents learn their language from previous generations and build on it. In this way the language becomes ever more successful and easier to learn, in a process comparable to natural evolution.
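The details of these research systems are not given in the article, but the mechanism it describes – pairwise exchanges of letter strings plus a simple update rule that drives agreement – can be illustrated with a minimal ‘naming game’, a standard toy model from language-evolution research. Everything below (agent design, alphabet, update rule, parameters) is an illustrative assumption, not a description of the researchers’ actual models.

```python
import random

random.seed(42)  # fixed seed so the illustrative run is reproducible


class Agent:
    """A minimal agent: stores candidate 'words' (letter strings) for one concept."""

    def __init__(self):
        self.words = []

    def speak(self):
        # An agent with no word yet invents a random three-letter string.
        if not self.words:
            self.words.append("".join(random.choice("abcd") for _ in range(3)))
        return random.choice(self.words)

    def hear(self, word):
        # Success if the hearer already knows the word; it then keeps only that word.
        if word in self.words:
            self.words = [word]
            return True
        # Failure: the hearer adds the new word to its inventory.
        self.words.append(word)
        return False


def interact(agents, rounds):
    """Repeated pairwise interactions; alignment on success drives consensus."""
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        word = speaker.speak()
        if hearer.hear(word):
            speaker.words = [word]  # the speaker also aligns on success


population = [Agent() for _ in range(10)]
interact(population, 20000)

# The set of all words still in circulation across the population.
shared = {w for agent in population for w in agent.words}
print(shared)  # typically a single shared word once the population has converged
```

Under these update rules the population self-organises towards one shared word, with no central coordination. The generational aspect the article mentions can be modelled on top of this by periodically replacing agents with ‘naive’ ones that learn from the existing population, which is the iterated-learning setup Verhoef’s experiments build on.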

This is the research field of Tessa Verhoef, also Assistant Professor of Cognitive Science & AI: ‘A limited number of generations already generates a stable language. This has been demonstrated with computer agents, but also with human participants, for instance people who learned a language consisting of sounds produced by a slide whistle.’ These simulations clearly show how language evolves as agents interact and transmit the language to the next generation. It quickly becomes a self-organising system.

Verhoef: ‘At the time, this was an exciting new discovery. Until then, linguists had assumed that people have a special section in their brains that learns and generates language.’ She now wants to make these simple models more advanced and combine them with the latest findings from machine learning and natural language processing. In future, this could even form the basis of natural interaction between humans and machines.

In Tessa Verhoef's experiments, subjects were asked to learn a 'language' consisting of simple melodies played on a slide whistle. They had to reproduce these after hearing and memorising them, after which other subjects had to learn from them. She found that consecutive generations soon developed a stable language that was easy to reproduce.

Robots should be learning from humans

According to Verhoef and Van Duijn, too much effort goes into building ‘human-ness’ into one single robot, whereas people learn their social skills only by interacting with other humans. In that respect, robots are still comparable to the legendary orphans raised by wolves. Verhoef: ‘A robot should also learn its behaviour from interaction with humans.’

Just as language is transferred from one generation to the next, stories are told again and again, in a slightly different version each time. By studying how stories change in this process, which is similar to what happens with language evolution, Van Duijn hopes to understand how the human brain represents storylines and relationships between characters. This might explain why some plots linger longer than others or how complex storylines arise. Verhoef: ‘Computer models are already firmly entrenched in the study of the evolution of language, but we also want to apply them to the field of storytelling and theory of mind.’
