Elise Dusseldorp: ‘Algorithms can see whom a treatment will work for’
Imagine how much time, money and discomfort it would save: a personalised treatment for each individual patient. Precision medicine like this is coming ever closer, thanks in part to Elise Dusseldorp’s algorithms. They retrieve a wealth of information from research data.
One group of patients with depression is given the regular treatment, whereas the other is also given mindfulness therapy. On average, the second group fares somewhat better after the treatment. But there are marked differences within the group: some patients have made an excellent recovery whereas others have been more or less unaffected. So an average result is not much use.
An algorithm by Dusseldorp has shown that mindfulness therapy primarily works if the depression begins before your 30th birthday and if you are something of a worrier. Dusseldorp: ‘The algorithm looked for subgroups on the basis of 15 characteristics of the test respondents: whether they had a partner, for example, or a job, and whether this was their first bout of depression. Level of education and medication use can also make a difference.’
Algorithm creates a tree
Dusseldorp’s algorithm begins by looking for the best ‘first split’: the characteristic, and the cut-off point, at which the treatment’s effectiveness differs most between the two resulting subgroups. In this example it was the age at which the depression began, with the cut-off at 30. Within these two ‘branches’, older or younger than 30, the algorithm then looks for the next characteristic whose split makes the most difference, and so on. Dusseldorp: ‘The algorithm creates a tree.’
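The search for that first split can be sketched in a few lines of code. The Python below is only an illustration of the idea, not Dusseldorp’s actual algorithm: for every characteristic and candidate cut-off it estimates the treatment effect (mean outcome of treated patients minus mean outcome of controls) in the two resulting subgroups and keeps the split where those effects differ most. All names and the data layout are assumptions made for the example.

```python
import numpy as np

def best_first_split(X, treated, outcome, feature_names, min_size=10):
    """Toy version of the 'first split': try every characteristic and
    cut-off, estimate the treatment effect (mean outcome of treated minus
    mean outcome of controls) in both resulting subgroups, and keep the
    split where the two effects differ most."""
    best = None
    for j, name in enumerate(feature_names):
        for cut in np.unique(X[:, j])[:-1]:          # candidate cut-offs
            left = X[:, j] <= cut
            effects = []
            for mask in (left, ~left):
                if mask.sum() < min_size:
                    break                            # subgroup too small
                t = outcome[mask & treated]          # treated patients in subgroup
                c = outcome[mask & ~treated]         # control patients in subgroup
                if len(t) == 0 or len(c) == 0:
                    break
                effects.append(t.mean() - c.mean())
            if len(effects) == 2:
                gap = abs(effects[0] - effects[1])
                if best is None or gap > best[0]:
                    best = (gap, name, cut)
    return best  # (difference in effect, characteristic, cut-off)
```

Applying the same search again inside each branch, older or younger than 30 in the mindfulness example, grows the tree the article describes.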
Three challenges
Perfect! Researchers will now be able to take their old datasets and find out whom drugs or treatments will work for. Not quite, unfortunately: there are three obstacles to widespread application. The first is of a technical nature. ‘The algorithm sometimes searches too well. Then it points to characteristics that only played a chance role.’
Combine...
This isn’t a problem with a very large sample, because you can set aside part of the sample group for validation and thus rule out chance findings. ‘For instance, in a study by the employment agency into predicting whether 50,000 benefit recipients would return to work. But if you can only test a treatment on a small group, things can go wrong.’ To solve this, researchers can combine data from different, comparable studies.
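As a hedged illustration of that validation step (not taken from the studies mentioned): split the sample in two, search for the subgroup on the first half only, and check whether the difference in treatment effect reappears in the held-out second half. If it does not, the finding was probably chance. The function and variable names below are invented for the sketch, and it assumes every subgroup contains both treated and control patients.

```python
import numpy as np

def effect_in_both_halves(X, treated, outcome, feature_index, cut, seed=0):
    """Sketch of holdout validation: estimate the subgroup's treatment
    effect separately in a discovery half and a validation half. A split
    found by the search is only trusted if the validation half shows a
    similar difference between subgroup and rest."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(outcome))
    halves = (order[: len(order) // 2], order[len(order) // 2:])
    report = {}
    for name, idx in zip(("discovery", "validation"), halves):
        in_group = X[idx, feature_index] <= cut
        for label, mask in (("subgroup", in_group), ("rest", ~in_group)):
            t = outcome[idx][mask & treated[idx]]    # treated patients
            c = outcome[idx][mask & ~treated[idx]]   # control patients
            report[(name, label)] = t.mean() - c.mean()
    return report
```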
Repeat...
The second problem lies in the research method itself. Dusseldorp: ‘You use this method for an exploratory search, without a hypothesis. You would then have to repeat the mindfulness therapy study with the hypothesis that it helps if the depression started before people turned 30 and their worrying score is high. Only then would the results be valid in practice.’
...And interpret
The last problem is an ethical one: ‘I’ve been tinkering away since 1996, and I’m now faced with the dilemma of how far we should still be able to interpret an algorithm. More and more algorithms spit out an answer, for instance that treatment X is best for person Y, without us knowing how that answer came about. So we really need a new algorithm to help interpret these algorithms. That’s my latest challenge!’
Text: Rianne Lindhout
Photo: Patricia Nauta