Algorithmic discrimination in European Union law: the case of clinical decision support systems based on machine learning


Machine learning is a subset of the class of technologies commonly referred to as “artificial intelligence” (AI).

Mathias Hauglid (Photo: UiT)

Machine learning techniques typically entail the use of software-implemented learning procedures (algorithms) to analyse large historical datasets and thereby learn patterns and correlations. For example, an algorithm might learn that a given combination of words in an electronic health record is linked with an increased risk of disease, or that patients with a certain set of characteristics are more likely than others to benefit from a given treatment. Based on the learned patterns, a model can be constructed for predicting unknown information or future events. Machine learning is currently being used to develop clinical decision support systems, which may assist with diagnostic and therapeutic assessments.
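To make the idea concrete, here is a minimal, purely illustrative sketch (not from the project) of how a pattern "learned" from historical health records can be turned into a risk prediction for a new patient. All records, keywords, and risk values are hypothetical toy data; real clinical systems use far more sophisticated models, and the point here is only the two-step structure: learn from historical data, then predict.

```python
from collections import defaultdict

# Hypothetical historical records: (keywords found in the health record, disease observed)
records = [
    ({"chest pain", "smoker"}, True),
    ({"chest pain", "smoker"}, True),
    ({"chest pain"}, False),
    ({"smoker"}, True),
    ({"headache"}, False),
    ({"headache", "smoker"}, False),
]

def learn_risk(records):
    """Learning step: estimate the observed disease frequency per keyword."""
    counts = defaultdict(lambda: [0, 0])  # keyword -> [disease cases, total occurrences]
    for keywords, disease in records:
        for kw in keywords:
            counts[kw][1] += 1
            if disease:
                counts[kw][0] += 1
    return {kw: cases / total for kw, (cases, total) in counts.items()}

def predict_risk(model, keywords):
    """Prediction step: average the learned risks of the keywords in a new record."""
    known = [model[kw] for kw in keywords if kw in model]
    return sum(known) / len(known) if known else 0.0

model = learn_risk(records)
high = predict_risk(model, {"chest pain", "smoker"})  # keywords that co-occurred with disease
low = predict_risk(model, {"headache"})               # keyword that never did
```

Note that such a model simply reflects whatever correlations exist in the historical data; if those data encode unequal treatment of certain patient groups, the predictions will reproduce it, which is precisely the discrimination risk the project addresses.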

Machine learning can improve the quality of healthcare, but an expanding body of literature has emphasised the risk that machine learning can also lead to inequality and unforeseen discrimination – “algorithmic discrimination”. The topic of algorithmic discrimination has been addressed in contexts such as employment (hiring decisions), criminal justice (e.g. predictive policing and recidivism risk assessments) and advertising (personalised ads). Against this backdrop, the project explores what the non-discrimination principle means in the context of clinical decision-making based on machine learning, and how its effectiveness in this context can be ensured through current and prospective mechanisms in (mainly) European Union law.