June 11, 2020
Imagine that you are a medical doctor and a COVID-19 patient comes to you with severe symptoms. You think it is best to send him to intensive care; however, places are limited, and your decision support system – based on machine learning – suggests otherwise. The system normally works very well and gives accurate predictions, but now you feel unsure. Why did it decide this way? What explanation could you give the patient? Should you trust the system, or should you go with your own judgement?
Explainable artificial intelligence (XAI) is here to help! It is an emerging field of artificial intelligence (AI) that aims to shed light on the reasons behind the decisions of AI models. Recognising the importance of this aspect for the responsible development and application of AI, research on the topic has skyrocketed in recent years, as Figure 1 below shows.
Figure 1: The number of publications per year whose title, abstract and/or keywords refer to the field of XAI, based on a query of the Scopus® database on December 10th, 2019.
There are two main approaches to achieving model interpretability. The first is to use only prediction algorithms that are intuitive and simple enough for humans to understand, such as linear regressions and decision trees. The second is to use post hoc methods to analyse, approximate, or in some other way make sense of a complex model and its predictions. Popular examples of this second group are partial dependence plots and the LIME technique; see Figure 2 for an illustration of how the latter works.
Figure 2: An illustration of how the LIME technique explains an individual prediction in a post hoc manner. By highlighting the main factors for and against the prediction, it enables the doctor to decide whether or not to trust the model in this instance. (Figure source: the LIME paper.)
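The core idea behind LIME-style explanations can be sketched in a few lines: perturb the instance we want to explain, query the black-box model on the perturbed samples, weight each sample by its proximity to the original instance, and fit a simple weighted linear model to those outputs. The coefficients of that local surrogate are the "factors for and against" the prediction. The sketch below is a toy illustration of this idea, not the real LIME library: the `black_box` model, the feature names (age, oxygen saturation), and all numbers are invented for the example.

```python
import math
import random

def black_box(x):
    # Hypothetical opaque classifier: ICU-risk flag from two features
    # (age in years, oxygen saturation in %). Purely illustrative.
    age, spo2 = x
    return 1.0 if 0.03 * age - 0.05 * spo2 + 2.5 > 0 else 0.0

def explain_locally(f, x, n_samples=500, sigma=2.0, seed=0):
    """Toy LIME-style explanation: perturb the instance x, weight the
    samples by proximity to x, and fit a weighted linear surrogate to
    the black-box outputs. Returns [intercept, coef_1, coef_2, ...]."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, sigma) for xi in x]
        d2 = sum((a - b) ** 2 for a, b in zip(z, x))
        X.append([1.0] + z)                         # intercept column
        y.append(f(z))                              # query the black box
        w.append(math.exp(-d2 / (2 * sigma ** 2)))  # proximity kernel
    k = len(x) + 1
    # Weighted normal equations: (X^T W X) beta = X^T W y
    A = [[sum(wi * Xi[r] * Xi[c] for wi, Xi in zip(w, X)) for c in range(k)]
         for r in range(k)]
    b = [sum(wi * Xi[r] * yi for wi, Xi, yi in zip(w, X, y)) for r in range(k)]
    # Solve the k x k system by Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            m = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][c] * beta[c]
                              for c in range(i + 1, k))) / A[i][i]
    return beta

patient = [70.0, 90.0]   # an instance near the model's decision boundary
intercept, w_age, w_spo2 = explain_locally(black_box, patient)
# For this patient, higher age pushes the local prediction towards ICU
# (positive weight) while higher oxygen saturation pushes against it
# (negative weight) - exactly the kind of "for and against" factors
# a LIME explanation would highlight.
```

The real LIME library adds much more on top of this (handling of categorical features, sparse interpretable representations, feature selection), but the fitted surrogate weights above capture the essence of what Figure 2 visualises.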
Understanding the reasons behind the decisions of an AI system in a healthcare setting helps in four main areas: ensuring transparency, identifying problems with the algorithm, discovering new knowledge, and building trust with the user.
In general, complex models have higher accuracy but lower comprehensibility than simpler ones. When simple models are used to ensure interpretability, a trade-off therefore has to be made between model performance and understandability. On the other hand, using post hoc methods to explain complex models is less trustworthy, as these explanations usually cannot be fully faithful to the original model. Another difficulty is that explanations are subjective and hard to compare, so finding the best ones for different users can be tricky, although insights from the social sciences can help guide us.
Although the quest to make AI explainable has its challenges, it is a necessity for ensuring transparency, and it can also help to identify problems with the algorithm, discover new knowledge, and build trust with the user, all of which are of great importance in a healthcare setting.
Emese Thamo (XAI Researcher @perfeXia) is a Cambridge maths graduate, pursuing a PhD at the University of Liverpool, where her research topic concerns the safety and interpretability of deep learning. She is passionate about innovative technologies and has work experience in the personalised healthcare and aerospace industries.
Contact her at: firstname.lastname@example.org